Integers and fractions
Sen, Diptiman (1998) Integers and fractions. In: Current Science, 75 (10). pp. 985-987.
The 1998 Nobel Prize in Physics has been awarded jointly to Robert B. Laughlin, Stanford University, USA, Horst L. Störmer, Columbia University and Bell Labs, USA, and Daniel C. Tsui, Princeton
University, USA, for ‘their discovery of a new form of quantum fluid with fractionally charged excitations’ appearing in the fractional quantum Hall effect. The quantum Hall effect has two versions,
the integer (which won an earlier Nobel Prize) and the fractional. These two versions share the remarkable feature that an experimentally measured quantity, characterizing one aspect of a complicated
many-particle system, stays perfectly fixed at some simple and universal values even though many different parameters of the system vary from sample to sample (in fact, some parameters like disorder
vary quite randomly). In addition, the fractional quantum Hall effect has the astounding property that the low-energy excitations of the system carry a charge which is a simple fraction of the charge
of an electron, even though the system is composed entirely of particles like atoms, ions and electrons, all of whose charges are integer multiples of the electronic charge.
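For reference (a standard fact, not stated in the notice above): the experimentally measured quantity that stays fixed is the Hall conductance, which in both versions of the effect locks onto the universal values

$$\sigma_{xy} = \nu\,\frac{e^{2}}{h},$$

where $e$ is the electronic charge, $h$ is Planck's constant, and $\nu$ is an integer in the integer quantum Hall effect and a simple fraction (such as 1/3) in the fractional effect.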
Classic maths books reset with LaTeX on Project Gutenberg
I was going to save this for an Aperiodical Round Up but it’s such a good thing I thought I’d post it straight away. Project Gutenberg has moved on from offering just plain-text transcriptions of
books: volunteers have been outstandingly generous with their time and produced LaTeX versions of many maths books, which are considerably more readable and resemble the original editions much more closely.
Not all the books in that list have been converted to LaTeX yet. Of those that have, GH Hardy’s A Course of Pure Mathematics leaps out as a good place to start. Compare it with this book still in
HTML format to see the difference.
(via reddit)
Paul Topping
It is a step in the right direction but if they really wanted to use modern technology, they could offer HTML with equations displayed directly from LaTeX via MathJax (www.mathjax.org). Readers could
then copy them to the clipboard as MathML to be pasted into such apps as Mathematica, MathType, and hundreds more. Such equations would also be accessible to readers with disabilities.
• Christian Perfect
Yes, that would be nice. They don’t seem to be using any non-standard packages in Hardy’s book (and I assume the rest are the same), so I wonder if something like Pandoc could be used to convert
the LaTeX to HTML+MathJax straightforwardly.
Perhaps you at Design Science could help?
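(For what it's worth, here is a minimal sketch of the kind of conversion being suggested, assuming pandoc is installed and using made-up file names; it just drives pandoc from Python.)

```python
# Sketch: convert a LaTeX source into standalone HTML whose equations are
# rendered by MathJax. Assumes pandoc is on the PATH; file names are placeholders.
import subprocess

subprocess.run(
    [
        "pandoc", "hardy_pure_mathematics.tex",   # hypothetical input file
        "--from", "latex", "--to", "html5",
        "--standalone", "--mathjax",
        "-o", "hardy_pure_mathematics.html",
    ],
    check=True,
)
```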
Fibre bundles and differential geometry (Tata Institute of Fundamental Research, 1999)
"... These notes contain a survey of some aspects of the theory of graded differential algebras and of noncommutative differential calculi as well as of some applications connected with physics. They
also give a description of several new developments. ..."
Cited by 22 (3 self)
These notes contain a survey of some aspects of the theory of graded differential algebras and of noncommutative differential calculi as well as of some applications connected with physics. They also
give a description of several new developments.
"... We discuss in some generality aspects of noncommutative differential geometry associated with reality conditions and with differential calculi. We then describe the differential calculus based
on derivations as generalization of vector fields, and we show its relations with quantum mechanics. Finall ..."
Cited by 13 (2 self)
We discuss in some generality aspects of noncommutative differential geometry associated with reality conditions and with differential calculi. We then describe the differential calculus based on
derivations as a generalization of vector fields, and we show its relations with quantum mechanics. Finally we formulate a general theory of connections in this framework.
, 1996
"... . In commutative differential geometry the Frolicher-Nijenhuis bracket computes all kinds of curvatures and obstructions to integrability. In [1] the FrolicherNijenhuis bracket was developed for
universal differential forms of non-commutative algebras, and several applications were given. In this pa ..."
Cited by 6 (3 self)
In commutative differential geometry the Frolicher-Nijenhuis bracket computes all kinds of curvatures and obstructions to integrability. In [1] the Frolicher-Nijenhuis bracket was developed for universal differential forms of non-commutative algebras, and several applications were given. In this paper this bracket and the Frolicher-Nijenhuis calculus will be developed for several kinds of differential graded algebras based on derivations, which were introduced by [6]. Table of contents: 1. Introduction; 2. Convenient vector spaces; 3. Preliminaries: graded differential algebras, derivations, and operations of Lie algebras; 4. Derivations on universal differential forms; 5. The Frolicher-Nijenhuis calculus on Chevalley type cochains; 6. Description of all derivations in the Chevalley
, 1994
"... In commutative differential geometry the Frölicher-Nijenhuis bracket computes all kinds of curvatures and obstructions to integrability. In [3] the Frölicher-Nijenhuis bracket was developped for
universal differential forms of non-commutative algebras, and several applications were given. In this p ..."
Cited by 2 (2 self)
In commutative differential geometry the Frölicher-Nijenhuis bracket computes all kinds of curvatures and obstructions to integrability. In [3] the Frölicher-Nijenhuis bracket was developed for universal differential forms of non-commutative algebras, and several applications were given. In this paper this bracket and the Frölicher-Nijenhuis calculus will be developed for several kinds of differential graded algebras based on derivations, which were introduced by [6].
, 2006
"... Abstract. Motivated from some results in classical differential geometry, we give a constructive procedure for building up a connection over a (twisted) tensor product of two algebras, starting
from connections defined on the factors. The curvature for the product connection is explicitly calculated ..."
Abstract. Motivated from some results in classical differential geometry, we give a constructive procedure for building up a connection over a (twisted) tensor product of two algebras, starting from
connections defined on the factors. The curvature for the product connection is explicitly calculated, and shown to be independent of the choice of the twisting map and the module twisting map used
to define the product connection. As a consequence, we obtain that a product of two flat connections is again a flat connection. We show that our constructions also behave well with respect to bimodule structures, namely that the product of two bimodule connections is again a bimodule connection. As an application of our theory, all the product connections on the quantum plane are computed.
Force of wheel perpendicular to road over speed hump
I'm currently trying to work out the upward force on the wheel of a car as it passes over a hump in the road. I have drawn a very simple diagram to illustrate this. If anyone could help me calculate this force, that would be most helpful.
I understand that as the wheel travels along a flat road, its normal reaction force (Nr) upwards is equal to the downward force of the car (Mg). However, when the wheel travels over the speed hump, the normal reaction force increases, since the wheel gains an extra upward force (Ma).
If the car were travelling over the speed hump at 20 metres per second, what would the upward force Ma on the wheel be?
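A sketch that is not from the original post (the radius and load below are made-up numbers): one common simplification is to treat the hump as part of a circle of radius $R$, so that the net vertical force on the wheel must supply the centripetal acceleration $v^2/R$ while the wheel follows the curve. This gives

$$N_{\text{entering the hump}} = M\left(g + \frac{v^{2}}{R}\right), \qquad N_{\text{at the crest}} = M\left(g - \frac{v^{2}}{R}\right).$$

With, say, $M = 300$ kg carried by the wheel, $v = 20$ m/s and $R = 5$ m, we get $v^{2}/R = 80\ \text{m/s}^2$, so the wheel presses on the road with roughly $300(9.8 + 80) \approx 2.7 \times 10^{4}$ N where it first meets the rising part of the hump. At the crest the second expression goes negative, which simply means that at that speed the wheel would momentarily leave the road rather than push on it.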
El Sobrante SAT Math Tutor
Find an El Sobrante SAT Math Tutor
...In addition, I can also help students understand the basic concepts of Physics, such as motion, pressure, force, waves, energy and light. I helped one of my friends improve her grade in her Introduction to Physics class from a D to a B. I have a brother who is in grade 6, and I always help him with Math and check his work. Besides that, I tutor two girls who are in grades 5 and 6.
18 Subjects: including SAT math, calculus, trigonometry, statistics
Hello, my name is Starfire, did you know that the elements in your body come from exploding stars? I'm full of curiosity and love sharing the facts, methods and philosophy of science. I have ten
years of experience teaching high school math and physics.
12 Subjects: including SAT math, chemistry, physics, calculus
What's up everyone! I'm Andrey and this blurb is here to convince you that I'm available for any and all of your mathematical needs. Right now I'm finishing my bachelors in Pure Math from UC
19 Subjects: including SAT math, calculus, writing, physics
...I can help with revision and editing of essays, manuscripts and all types of written documents. Grammar, problem solving, critical reading and vocabulary. I can help with everything from
ancient Greece to Post-Modern thought.
38 Subjects: including SAT math, English, reading, writing
Hi there! First of all, thank you for your interest in Chinese! The Chinese language is indeed beautiful, but its culture is the most attractive part!
3 Subjects: including SAT math, Chinese, TOEFL
Why Digitize Data?
☆ Interpolation and extrapolation
Suppose you have obtained a set of digital data and plotted a working curve. If you make a fresh measurement on a fresh sample, can you use the working curve to measure the behavior of that
sample? Usually, the answer is: if the new data fall within the range of the working curve so that one interpolates, one can trust the new data, but if the new data fall outside the
previously validated range, one can not trust the use of the curve. Certainly distrusting extrapolation is appropriate for digitized as well as continuous (analog) data. But is interpolation
within the range of a working curve always justified? With sufficient signal averaging so that there are sufficient significant figures, the answer is yes, but for sufficiently low noise
systems, the granularity of digitization poses a problem. Since all digitization is inherently an integer process, there will be jumps in the measurement output as the digitized value
increments by one count at a time. This is stairstepping.
☆ Stair-stepping
As we vary a continuous variable, typically electrical potential, and digitize the value, we get a response that, in the absence of noise, looks like this:
The horizontal red and blue lines show two integer values that can represent potential. If the potential is actually that indicated by the vertical green line, then what? Either the blue or
red number will appear, giving no indication that
the actual number should be half-way in between! Only if there is noise, so that the instantaneous value jumps back and forth between the red and blue levels can we signal average to get a
fractional number that would allow interpolation!
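A small numerical sketch of this point (the signal level, noise level and sample count below are arbitrary choices, not values from the page): quantising a steady input that sits between two codes gives the same integer every time, but adding a little noise and then averaging recovers the fractional value.

```python
# Sketch: sub-step resolution through noise plus signal averaging ("dithering").
import random

true_level = 37.4   # input expressed in units of one ADC step (between codes 37 and 38)
noise_sd = 0.5      # RMS noise, also in ADC steps

# Noise-free case: every conversion returns the same integer code,
# so no amount of averaging reveals the fractional part.
quiet = [round(true_level) for _ in range(10_000)]
print(sum(quiet) / len(quiet))   # always 37.0

# Noisy case: readings jump between neighbouring codes, and their
# average converges toward the true fractional level.
noisy = [round(true_level + random.gauss(0.0, noise_sd)) for _ in range(10_000)]
print(sum(noisy) / len(noisy))   # close to 37.4
```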
Average Task Time Calculator
Features & Benefits
• How long does it take users to complete a task? Reports the best average task time based on your data and adjusts for non-normal task time data.
• Confidence Intervals: Compute accurate confidence intervals to show you the likely range of the true average task time from any size sample of users.
• Are users taking too long? Test whether your data exceeds a benchmark or goal (1-Sample t-test), for example, would all users be able to complete a task in less than 60 seconds?
• Graph of the average task time and confidence intervals.
• Corrects Non-Normal Task Time Data: Time on task data is positively skewed making it incompatible with many statistical tests. The calculator automatically corrects for the skewed data and
generates accurate results.
Who Should Buy
Professionals who want an easy way to find the best average task time, compare it to a benchmark and generate accurate confidence intervals around data. If you can use excel, you can use the Task
Time Calculator.
Select Sample Screen Shots
Average Task Time Calculator Page
Find the best average task time, generate confidence intervals around your average and determine whether your sample data provides evidence that the average time is less than a benchmark.
Graph and Statistical Tests
A graph with the confidence intervals around your time (adjusted for non-normal data) is automatically updated. When comparing your sample data to a benchmark, a 1-sample t-test provides the
appropriate p-values and the power of your test.
Reporting the Results
Results are provided in simple language to understand and communicate.
The best Average Task Time
The calculator will find the best average task time based on your sample data. For small samples (n < 25) the geometric mean is used as the best estimate of the middle value. Otherwise, the median is used.
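A rough sketch of the kind of calculation described here (this is not the product's own code, and the task times are made up): take logs of the times, build a t-based confidence interval on the log scale, and exponentiate back, which also yields the geometric mean.

```python
# Sketch: geometric mean and 95% confidence interval for skewed task times,
# via a log transformation. The sample values are illustrative only.
import math
from statistics import mean, stdev
from scipy.stats import t

times = [35.2, 41.7, 52.0, 38.5, 60.3, 44.1, 95.8, 39.9]   # seconds

logs = [math.log(x) for x in times]
n = len(logs)
m, s = mean(logs), stdev(logs)
half_width = t.ppf(0.975, n - 1) * s / math.sqrt(n)

geo_mean = math.exp(m)
low, high = math.exp(m - half_width), math.exp(m + half_width)
print(f"geometric mean ~ {geo_mean:.1f} s, 95% CI ~ {low:.1f} to {high:.1f} s")
```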
Statistical Tests
• 1-Sample t-test
• t-Confidence Intervals
• Logarithmic transformations
What Customers are Saying:
The calculator is quite well done! In addition to using it myself, I use it to teach statistics to graduate students in HCI. It's just much easier for students to learn than having to deal with the
complexity of SPSS.
Gavin Lew
Managing Partner, User-Centric
The calculator make it easy to see statistical significance for both large and small sample sizes.
Amber DeRosa
PhillyCHI Local chapter of ACM SIG-CHI
On 8/2/2011 2:40 PM, John Smith wrote:
> Alan Weiss <aweiss@mathworks.com> wrote in message
> <j19f7m$js4$1@newscl01ah.mathworks.com>...
>> On 8/1/2011 3:52 PM, John Smith wrote:
>> *SNIP*
>> > So what could I be doing wrong such that lsqlin supposedly produces a
>> > solution, but when I test the solution (as I did with A*ans) it does
>> not
>> > produce the correct result (in this case "b").
>> >
>> > This is rather mystifying so any help would be appreciated. I can post
>> > another example of how it seems to fail if requested.
>> lsqlin produces a least-squares solution within the bounds you specify
>> (or other constraints such as linear equalities or inequalities).
>> Earlier in the thread you posted an example where you restricted the
>> range of possible x values by bounds. lsqlin produced the value (a
>> vector of 940s, if I recall correctly) that was the solution, meaning
>> the vector that gives the lowest value of your objective function
>> among all vectors within your bounds.
>> If you don't give any bounds or other constraints, lsqlin and
>> backslash are identical, as you discovered.
>> If you give constraints, the residual might not be zero, but lsqlin
>> gives the least-squares solution.
>> Alan Weiss
>> MATLAB mathematical toolbox documentation
> Aha, that's beginning to make sense. Is there a way to specify the max
> value of the residual so that if it goes beyond that number it returns
> the warning that a solution is unfeasible?
Take a look at the documentation:
Look at the second output of lsqlin: resnorm. I mean, when you call
lsqlin, call it this way:
[x,resnorm] = lsqlin(...)
resnorm is the squared 2-norm of the residual, meaning norm(C*x-d)^2.
I believe this gives the information you want.
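(Not MATLAB, but for anyone reading this without the Optimization Toolbox: the same "compare the residual to a ceiling" idea can be sketched with SciPy's bounded least-squares routine. The matrices, bounds and tolerance below are placeholders, not values from this thread.)

```python
# Sketch: bounded linear least squares plus a residual-norm check,
# analogous to testing lsqlin's resnorm output against a tolerance.
import numpy as np
from scipy.optimize import lsq_linear

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([7.0, 8.0, 9.0])

res = lsq_linear(A, b, bounds=(0.0, 1.0))      # constrained fit
resnorm = np.linalg.norm(A @ res.x - b) ** 2   # squared 2-norm of the residual

max_resnorm = 1e-6                             # user-chosen ceiling
if resnorm > max_resnorm:
    print(f"warning: residual norm {resnorm:.3g} exceeds the tolerance; "
          "no acceptable solution within the bounds")
```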
Alan Weiss
MATLAB mathematical toolbox documentation
Topic: can't get InputField to work inside a While command
can't get InputField to work inside a While command
Posted: May 14, 2013 3:14 AM
I need to be able to input parameters for an a-priori indeterminate
number of cases. The way I've been trying to do this is by using a
While statement containing InputFields, one of which asks if there are
to be more cases to deal with. If not, the previously True logical
'test' for While is reset to False.
But apparently InputField is not even recognized as part of the 'body'
inside a While. By itself InputField works as expected, but not in this
reduced example:
cntr = 0;
While[cntr<3, cntr++; InputField[xx]]
which only produces
What am I missing? Or am I going about this the wrong way?
- Dushan
[ reverse the middle word of address to reply ]
Uniform convergence vs pointwise convergence
Hi guys,
Can someone please explain to me the advantages of uniform convergence over pointwise convergence? I was told that pointwise convergence does not preserve continuity while uniform convergence
does. Also, pointwise convergence depends on x while uniform convergence doesn't. But whats the big deal about it? I am not getting the whole picture of it. We are learning Fourier Series by the
For example, if f_n : [ a , b ] -> IR is a sequence of continuous functions such that f_n -> f uniformly on [a , b ] then
Int_a^b ( f_n ) -> Int_a^b ( f )
as n -> + infty
Perhaps I didn't understand the question, but...
The difference between uniform convergence and pointwise convergence is easiest to see with a concrete sequence of functions $(f_n(x))_{n\in\mathbb{N}}$ defined on $\mathbb{R}$, for example $f_n(x) = \frac{x}{n}$.
These functions are lines through the origin whose angle with the x-axis gets smaller and smaller as $n$ grows. If you draw a vertical line at, say, $x = 1$, its intersections with the graphs of the $f_n$ give you an ordinary numerical sequence, and
$\lim_{n \to \infty} \frac{1}{n} = 0$ (for $x = 1$)
$\lim_{n \to \infty} \frac{2}{n} = 0$ (for $x = 2$)
and, more generally, if you choose $x$ to be any fixed number $M$ (it does not matter how big, as long as it is fixed), then
$\lim_{n \to \infty} \frac{M}{n} = 0$ (for $x = M$).
This kind of convergence is called pointwise convergence, and formally it says:
$(\forall x \in A)(\forall \varepsilon > 0)(\exists n_0 \in \mathbb{N})(\forall n \in \mathbb{N})\ (n \ge n_0 \Rightarrow |f_n(x) - f(x)| < \varepsilon)$
Notice that here $n_0$ may depend on both $x$ and $\varepsilon$, and the dependence on $x$ is the crucial point.
Uniform convergence asks for more: a single $n_0$ that works for every $x$ in the set at once. It must be:
$(\forall \varepsilon > 0)(\exists n_0 \in \mathbb{N})(\forall x \in A)(\forall n \in \mathbb{N})\ (n \ge n_0 \Rightarrow |f_n(x) - f(x)| < \varepsilon)$
From this you see that if a sequence converges uniformly, then it also converges pointwise. And when a sequence is uniformly convergent on a set $A \subseteq \mathbb{R}$, then at every point of that set it converges to the same limit function,
$\lim_{n\to \infty} f_n(x) = f(x)$.
Re: Uniform convergence vs pointwise convergence
can somebody help me with this problem please??
Show that the sequence {f_n}, where f_n(x) = nxe^(-nx^2) for n = 1, 2, 3, ..., converges pointwise on [0,1]. Is the convergence uniform? Justify.
Re: Uniform convergence vs pointwise convergence
Prove that $f(x)=\lim_{n\to +\infty}f_n(x)=0$ for all $x\in [0,1]$ and $\int_0^1 f(x)\,dx \neq \lim_{n\to +\infty}\int_0^1 f_n(x)\,dx$.
P.S. It is better a new thread for every new problem.
Last edited by FernandoRevilla; October 16th 2012 at 04:26 AM.
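A sketch of where the hint leads:

$$\int_0^1 nxe^{-nx^2}\,dx = \tfrac{1}{2}\left(1 - e^{-n}\right) \longrightarrow \tfrac{1}{2},$$

while for every fixed $x \in [0,1]$ we have $f_n(x) = nxe^{-nx^2} \to 0$ (immediately at $x = 0$, and for $x > 0$ because the exponential factor beats the linear one), so the pointwise limit is $f = 0$ and $\int_0^1 f = 0 \neq \tfrac{1}{2}$. Since uniform convergence on $[0,1]$ would force $\int_0^1 f_n \to \int_0^1 f$ (the fact quoted in the first reply), the convergence cannot be uniform. Equivalently, $\sup_{[0,1]} f_n = f_n\!\left(1/\sqrt{2n}\right) = \sqrt{n/2}\,e^{-1/2} \to \infty$.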
Re: [vox-tech] ohms law
On Fri 02 Feb 07, 12:34 PM, Jimbo <evesautomotive@charter.net> said:
> Greetings:
> This is a technical question that concerns dc voltage so hopefully someone
> that has knowledge in this area can help me with this.
> I know that computers use low dc voltage of 12, 5 and 3.3 volts so
> hopefully this will fit the mail list criteria.
> I am a mechanic by trade. I am good at diagnosing electrical and
> drivability. I have seen a few times that high resistance in the negative
> leg of a circuit can take out components like computers, modules and even
> not-so-complicated devices like bulbs and switches. What I don't
> understand is why. Ohm's law states that E=IXR. If this is the case then
> if resistance is high it will decrease amperage. I would tend to think
> that just the opposite would happen...component would just lose power and
> not fry.
> Please enlighten me,
> Jimbo
Hi Jimbo,
This is the teacher in me. Ohm's Law doesn't state E=I*R. That's the way
to get into big trouble when you're trying to calculate things.
Ohm's Law states that (for a linear device) the *change* in potential is
equal to the current times resistance:
\Delta V = I R
Your voltmeter doesn't actually read potential (measured in volts). It
actually reads a potential difference (also measured in volts) between two
points in a circuit.
There's actually no such thing as "a" potential at "a" place. The potential
itself can be anything you like. For example, you can declare to the world
that your battery is 99999999999.5 volts, and you'd be correct. The only
provisio is that the potential difference between the two terminals is 1.5
volts, so your negative terminal would be at 99999999998.0 volts.
Consider a battery hooked up to a home stereo and speakers:
R=4 Ohms R_s=2 Ohms
E = 12v -----------------****---------------****--------------0
Here, *** represents a resistor, 0 is ground. R is the internal resistance
of your home stereo system (which we'll pretend runs on DC from a battery)
and R_s is the resistance of your speaker. Suppose your speaker is rated
at 8 Watts, but will blow if it receives 9 Watts or more.
We'll apply Ohms law between the battery and ground to obtain the total
current through the system:
            Delta V
       I = ---------
            R_total
where "Delta V" is the change of potential between the battery and ground
(which we'll take to be at 0 volts) and R_total is the total resistance of
the circuit. So:
            Delta V        E         12 v
       I = --------- = --------- = -------- = 2 Amps
            R_total      R + R_s    6 Ohms
Now, the question is, how much power is your speaker getting? Let's first
calculate the potential drop across your speaker:
\Delta V_s = I * R_s = 2 Amps * 2 Ohms = 4 volts
and now the power delivered to your speaker (note the Delta V_s means "the
potential drop across only the speaker):
P_s = I * Delta V_s = 2 Amps * 4 volts = 8 Watts
Imagine that your speaker's internal resistance increases for some reason.
Perhaps the wiring gets frayed. Perhaps the contacts get dirty (dirt and
corrosion increase resistance. I've had to replace the cables to the
alternator on my motorcycle because dirt in the contacts between the
alternator and regulator raised the resistance between the tangs of the
contact, increasing the heat, and melting the contact. I have replaced those
cables about every 3 years). Even heat increases resistance (usually). For
whatever reason, your speaker is now 4 Ohms.
R=4 Ohms R_s=4 Ohms
E = 12v -----------------****---------------****--------------0
In this scenario, the battery is actually under slightly less load, since it now supplies less current, so in real life its terminal voltage would if anything sit a shade closer to 12 volts than before. Either way, for these numbers and a typical battery, E is still essentially 12 volts.
Let's redo the calculation. Here's the amount of current flowing through
your stereo and speaker:
            Delta V        E         12 v
       I = --------- = --------- = -------- = 1.5 Amps
            R_total      R + R_s    8 Ohms
Sure enough, the current decreased, as you said it would: from 2 Amps down to 1.5 Amps.
But look at the potential drop across your speaker:
\Delta V_s = I * R_s = 1.5 Amps * 4 Ohms = 6 volts
This is the heart of the matter. The current might have dropped, but your
speaker now has a larger potential difference across its terminals (its
"voltage increased"). By quite a bit -- by a factor of 150%!
And take a look at the power delivered to your speaker:
P_s = I * Delta V_s = 1.5 Amps * 6 volts = 9 Watts
Oops! The speaker has now blown. I hope it had a fuse! :-)
The main issue here is not so much current as it is power delivery. It is
very possible that the current goes down and the power delivered goes up (as
it did here). You can even find out the resistance that would maximize the
power delivery to your speaker, but this uses calculus, so I'll end here.
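(A quick numerical sketch of that closing remark, not something from the original mail; the values just mirror the example above.)

```python
# Sketch: power delivered to the speaker as its resistance varies,
# with the same 12 V source and 4 ohm stereo resistance as in the example.
E, R = 12.0, 4.0   # volts, ohms

def speaker_power(Rs):
    I = E / (R + Rs)    # series current from Ohm's law
    return I * I * Rs   # power dissipated in the speaker

for Rs in [2.0, 3.0, 4.0, 6.0, 8.0]:
    print(f"Rs = {Rs:4.1f} ohm -> {speaker_power(Rs):.2f} W")

# The printed powers peak at Rs = R = 4 ohms: the maximum power transfer
# condition that the calculus mentioned above would give.
```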
Hope there are no math mistakes in here for the world to see in the
archives. ;-)
How VBA rounds a number depends on the number's internal representation.
You cannot always predict how it will round when the rounding digit is 5.
If you want a rounding function that rounds according to predictable rules,
you should write your own.
-- MSDN, on Microsoft VBA's "stochastic" rounding function
Peter Jay Salzman, email: p@dirac.org web: http://www.dirac.org/p
PGP Fingerprint: B9F1 6CF3 47C4 7CD8 D33E 70A9 A3B9 1945 67EA 951D
A spring has a natural length of 20 cm. If required to keep it
A spring has a natural length of 20 cm. If a 25-N force is required to keep it stretched to a length of 30 cm, how much work is required to stretch it from 20 cm to 25 cm?
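A sketch of the standard Hooke's-law approach: holding the spring at 30 cm means an extension of $x = 0.10$ m under a force of 25 N, so the spring constant is $k = F/x = 25/0.10 = 250$ N/m. Stretching from 20 cm to 25 cm takes the extension from 0 to 0.05 m, so

$$W = \int_0^{0.05} kx\,dx = \tfrac{1}{2}(250)(0.05)^2 = 0.3125\ \text{J} \approx 0.31\ \text{J}.$$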
Non-isomorphic finite simple groups
The smallest integer $n$ such that there exist two non-isomorphic simple groups of order $n$ is $n=20160$ (namely for the groups $\mathrm{PSL}_3(\mathbb F _4)$ and $\mathrm{PSL}_4(\mathbb F _2)$).
I read that there are infinitely many integers $n$ such that there exist two non-isomorphic simple groups of order $n$. I have two questions:
1. Do you have a reference (if possible self contained, but that's probably too much to ask)?
2. I suspect that it is "rare" to find such an integer. For instance if we denote by $a_k$ the orders of non-cyclic simple groups ($a_1=60$, $a_2=168$, $a_3=360$,....) and $b_k$ the integers such
that there exist two non-isomorphic simple groups of order $b_k$, then I guess that $\displaystyle \lim\frac{b_k}{a_k}=+\infty$. Do you know if this is the case?
gr.group-theory finite-groups
7 $\mathrm{P}\Omega(2\ell+1,q)$ and $\mathrm{PSp}(2\ell,q^2)$ have the same order and are nonisomorphic if $\ell\gt 2$. – Arturo Magidin Sep 19 '12 at 21:47
1 Note also that $20160$ is the order of the alternating group $A_{8}.$ The fact that the simple groups $A_{8}$ and ${\rm GL}(4,2)$ are isomorphic may be considered as rather exceptional, and I
would call it non-obvious. – Geoff Robinson Sep 20 '12 at 0:24
4 Copy-paste from oeis.org/A109379: The first proof that there exist two nonisomorphic simple groups of order 20160 was given by the American mathematician Ida May Schottenfels (1869-1942): Ida May
Schottenfels, Two Non-Isomorphic Simple Groups of the Same Order 20,160, Annals of Math., 2nd Ser., Vol. 1, No. 1/4 (1899), pp. 147-152. $${}$$ The orders for which there is more than one simple
group are tabulated, with references, at oeis.org/A119648. – Gerry Myerson Sep 20 '12 at 0:44
@Arturo, Isn't it $\mathrm{PSp}(2\ell,q)$ instead of $\mathrm{PSp}(2\ell,q^2)$? – Portland Sep 20 '12 at 2:54
@Geoff: Here's a "geometric" proof. For $H = \{\sum x_i = 0\}$ in affine 8-space over $\mathbf{F}_2$, and $q = \sum_{i<j} x_i x_j$, $q|_H$ has defect line $L = \{x_1=\dots=x_8\}$. The quadratic
space $(H/L,q)$ identifies $S_8$ with ${\rm{O}}_6(\mathbf{F}_2)$ through the $S_8$-action on affine 8-space preserving $H$, $L$, and $q$, so $A_8 = {\rm{SO}}_6(\mathbf{F}_2)$ as the unique index-2
subgroups. The isogeny ${\rm{SL}}_4 \simeq {\rm{Spin}}_6 \rightarrow {\rm{SO}}_6$ induces an isomorphism on $\mathbf{F}_2$-points, and ${\rm{SL}}_4(\mathbf{F}_2)={\rm{GL}}(4,2)$, so ${\rm{GL}}
(4,2)=A_8$ – grp Sep 20 '12 at 4:39
1 Answer
Just to summarise the comments: the only nonisomorphic finite simple groups with the same orders are
1. $A_8 \cong {\rm PSL}_4(2)$ and ${\rm PSL}_3(4)$ of order 20160.
2. The groups ${\rm P \Omega}_{2n+1}(q)$ and ${\rm PSp}_{2n}(q)$ for all odd prime powers $q$ and $n \ge 3$. These have order
$$(q^{n^2} \prod_{i=1}^n (q^{2i}-1))/2$$
For references, see Gerry Myerson's comment.
#2 is a "shadow" of the purely inseparable isogeny ${\rm{Spin}}_{2n+1} \rightarrow {\rm{Sp}}_{2n}$ in char. 2 that induces an isomorphism on $\mathbf{F}_{2^m}$-points for all $m>0$.
1 Indeed, by Steinberg (or cohomological arguments over Spec($\mathbf{Z}$)), for a simply connected Chevalley group $G$ and a finite field $k$, $\#G(k)$ is a polynomial in $|k|$
depending only on the "type" of $G$ and not on char($k$), so equality for different types and all $q$ follows from equality as $q$ varies through powers of one prime (such as 2). In
this sense, #1 seems more mysterious. – grp Sep 21 '12 at 10:08
Shameless Mathieu plug: Even more mysterious about #1: the smallest group containing subgroups isomorphic to both $A_{8}$ and $PSL_{3}(4)$ is the Mathieu group $M_{23}$. $PSL_{3}(4)$
is also beastly on its own because its outer automorphism group is large, its Schur multiplier is very large (order 48), and its Schur multiplier is related to the notoriously
elusive Schur multiplier of $M_{22}$, in which $PSL_{3}(4)$ is a subgroup of index 22. To see $A_{8} \cong GL_{4}(2)$, one can consider how the stabilizer of an octad in $Aut S
(5,8,24) = M_{24}$ acts on the 24 points. – DavidLHarden Sep 24 '12 at 2:41
It's the Effect Size, Stupid
What effect size is and why it is important
Robert Coe
School of Education, University of Durham, email r.j.coe@dur.ac.uk
Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, 12-14 September 2002
Effect size is a simple way of quantifying the difference between two groups that has many advantages over the use of tests of statistical significance alone. Effect size emphasises the size of the
difference rather than confounding this with sample size. However, primary reports rarely mention effect sizes and few textbooks, research methods courses or computer packages address the concept.
This paper provides an explication of what an effect size is, how it is calculated and how it can be interpreted. The relationship between effect size and statistical significance is discussed and
the use of confidence intervals for the latter outlined. Some advantages and dangers of using effect sizes in meta-analysis are discussed and other problems with the use of effect sizes are raised. A
number of alternative measures of effect size are described. Finally, advice on the use of effect sizes is summarised.
During 1992 Bill Clinton and George Bush Snr. were fighting for the presidency of the United States. Clinton was barely holding on to his place in the opinion polls. Bush was pushing ahead, drawing on his stature as an experienced world leader. James Carville, one of Clinton's top advisers, decided that their push for the presidency needed focusing. Drawing on the research he had conducted, he came up with a simple focus for their campaign. Every opportunity he had, Carville wrote four words - 'It's the economy, stupid' - on a whiteboard for Bill Clinton to see every time he went out to speak.
'Effect size' is simply a way of quantifying the size of the difference between two groups. It is easy to calculate, readily understood and can be applied to any measured outcome in Education or
Social Science. It is particularly valuable for quantifying the effectiveness of a particular intervention, relative to some comparison. It allows us to move beyond the simplistic, 'Does it work or
not?' to the far more sophisticated, 'How well does it work in a range of contexts?' Moreover, by placing the emphasis on the most important aspect of an intervention - the size of the effect -
rather than its statistical significance (which conflates effect size and sample size), it promotes a more scientific approach to the accumulation of knowledge. For these reasons, effect size is an
important tool in reporting and interpreting effectiveness.
The routine use of effect sizes, however, has generally been limited to meta-analysis - for combining and comparing estimates from different studies - and is all too rare in original reports of
educational research (Keselman et al., 1998). This is despite the fact that measures of effect size have been available for at least 60 years (Huberty, 2002), and the American Psychological
Association has been officially encouraging authors to report effect sizes since 1994 - but with limited success (Wilkinson et al., 1999). Formulae for the calculation of effect sizes do not appear
in most statistics text books (other than those devoted to meta-analysis), are not featured in many statistics computer packages and are seldom taught in standard research methods courses. For these
reasons, even the researcher who is convinced by the wisdom of using measures of effect size, and is not afraid to confront the orthodoxy of conventional practice, may find that it is quite hard to
know exactly how to do so.
The following guide is written for non-statisticians, though inevitably some equations and technical language have been used. It describes what effect size is, what it means, how it can be used and
some potential problems associated with using it.
1. Why do we need 'effect size'?
Consider an experiment conducted by Dowson (2000) to investigate time of day effects on learning: do children learn better in the morning or afternoon? A group of 38 children were included in the
experiment. Half were randomly allocated to listen to a story and answer questions about it (on tape) at 9am, the other half to hear exactly the same story and answer the same questions at 3pm. Their
comprehension was measured by the number of questions answered correctly out of 20.
The average score was 15.2 for the morning group, 17.9 for the afternoon group: a difference of 2.7. But how big a difference is this? If the outcome were measured on a familiar scale, such as GCSE
grades, interpreting the difference would not be a problem. If the average difference were, say, half a grade, most people would have a fair idea of the educational significance of the effect of
reading a story at different times of day. However, in many experiments there is no familiar scale available on which to record the outcomes. The experimenter often has to invent a scale or to use
(or adapt) an already existing one - but generally not one whose interpretation will be familiar to most people.
[Figure 1: score distributions for the two groups, shown in two cases - in (a) the difference between the groups is large relative to the overlap; in (b) the spread of scores is large and the overlap dwarfs the difference.]
One way to get over this problem is to use the amount of variation in scores to contextualise the difference. If there were no overlap at all and every single person in the afternoon group had done
better on the test than everyone in the morning group, then this would seem like a very substantial difference. On the other hand, if the spread of scores were large and the overlap much bigger than
the difference between the groups, then the effect might seem less significant. Because we have an idea of the amount of variation found within a group, we can use this as a yardstick against which
to compare the difference. This idea is quantified in the calculation of the effect size. The concept is illustrated in Figure 1, which shows two possible ways the difference might vary in relation
to the overlap. If the difference were as in graph (a) it would be very significant; in graph (b), on the other hand, the difference might hardly be noticeable.
2. How is it calculated?
The effect size is just the standardised mean difference between the two groups. In other words:
Effect size = (Mean of experimental group - Mean of control group) / Standard Deviation
Equation 1
If it is not obvious which of two groups is the 'experimental' (i.e. the one which was given the 'new' treatment being tested) and which the 'control' (the one given the 'standard' treatment - or no
treatment - for comparison), the difference can still be calculated. In this case, the 'effect size' simply measures the difference between them, so it is important in quoting the effect size to say
which way round the calculation was done.
The 'standard deviation' is a measure of the spread of a set of values. Here it refers to the standard deviation of the population from which the different treatment groups were taken. In practice,
however, this is almost never known, so it must be estimated either from the standard deviation of the control group, or from a 'pooled' value from both groups (see question 7, below, for more
discussion of this).
In Dowson's time-of-day effects experiment, the standard deviation (SD) = 3.3, so the effect size was (17.9 - 15.2)/3.3 = 0.8.
3. How can effect sizes be interpreted?
One feature of an effect size is that it can be directly converted into statements about the overlap between the two samples in terms of a comparison of percentiles.
An effect size is exactly equivalent to a 'Z-score' of a standard Normal distribution. For example, an effect size of 0.8 means that the score of the average person in the experimental group is 0.8
standard deviations above the average person in the control group, and hence exceeds the scores of 79% of the control group. With the two groups of 19 in the time-of-day effects experiment, the
average person in the 'afternoon' group (i.e. the one who would have been ranked 10^th in the group) would have scored about the same as the 4^th highest person in the 'morning' group. Visualising
these two individuals can give quite a graphic interpretation of the difference between the two effects.
Table I shows conversions of effect sizes (column 1) to percentiles (column 2) and the equivalent change in rank order for a group of 25 (column 3). For example, for an effect-size of 0.6, the value
of 73% indicates that the average person in the experimental group would score higher than 73% of a control group that was initially equivalent. If the group consisted of 25 people, this is the same
as saying that the average person (i.e. ranked 13^th in the group) would now be on a par with the person ranked 7^th in the control group. Notice that an effect-size of 1.6 would raise the average
person to be level with the top ranked individual in the control group, so effect sizes larger than this are illustrated in terms of the top person in a larger group. For example, an effect size of
3.0 would bring the average person in a group of 740 level with the previously top person in the group.
Table I: Interpretations of effect sizes
│ │ Percentage of control group who │ Rank of person in a control group of 25 │ Probability that you could guess │ Equivalent correlation, r │ Probability that person from experimental │
│ Effect │ would be below average person │ who would be equivalent to the average │ which group a person was in from │ (=Difference in percentage │ group will be higher than person from │
│ Size │ in experimental group │ person in experimental group │ knowledge of their 'score'. │ 'successful' in each of │ control, if both chosen at random (=CLES) │
│ │ │ │ │ the two groups, BESD) │ │
│ 0.0 │ 50% │ 13^th │ 0.50 │ 0.00 │ 0.50 │
│ 0.1 │ 54% │ 12^th │ 0.52 │ 0.05 │ 0.53 │
│ 0.2 │ 58% │ 11^th │ 0.54 │ 0.10 │ 0.56 │
│ 0.3 │ 62% │ 10^th │ 0.56 │ 0.15 │ 0.58 │
│ 0.4 │ 66% │ 9^th │ 0.58 │ 0.20 │ 0.61 │
│ 0.5 │ 69% │ 8^th │ 0.60 │ 0.24 │ 0.64 │
│ 0.6 │ 73% │ 7^th │ 0.62 │ 0.29 │ 0.66 │
│ 0.7 │ 76% │ 6^th │ 0.64 │ 0.33 │ 0.69 │
│ 0.8 │ 79% │ 6^th │ 0.66 │ 0.37 │ 0.71 │
│ 0.9 │ 82% │ 5^th │ 0.67 │ 0.41 │ 0.74 │
│ 1.0 │ 84% │ 4^th │ 0.69 │ 0.45 │ 0.76 │
│ 1.2 │ 88% │ 3^rd │ 0.73 │ 0.51 │ 0.80 │
│ 1.4 │ 92% │ 2^nd │ 0.76 │ 0.57 │ 0.84 │
│ 1.6 │ 95% │ 1^st │ 0.79 │ 0.62 │ 0.87 │
│ 1.8 │ 96% │ 1^st │ 0.82 │ 0.67 │ 0.90 │
│ 2.0 │ 98% │ 1^st (or 1^st out of 44) │ 0.84 │ 0.71 │ 0.92 │
│ 2.5 │ 99% │ 1^st (or 1^st out of 160) │ 0.89 │ 0.78 │ 0.96 │
│ 3.0 │ 99.9% │ 1^st (or 1^st out of 740) │ 0.93 │ 0.83 │ 0.98 │
Another way to conceptualise the overlap is in terms of the probability that one could guess which group a person came from, based only on their test score - or whatever value was being compared. If
the effect size were 0 (i.e. the two groups were the same) then the probability of a correct guess would be exactly a half - or 0.50. With a difference between the two groups equivalent to an effect
size of 0.3, there is still plenty of overlap, and the probability of correctly identifying the groups rises only slightly to 0.56. With an effect size of 1, the probability is now 0.69, just over a
two-thirds chance. These probabilities are shown in the fourth column of Table I. It is clear that the overlap between experimental and control groups is substantial (and therefore the probability is
still close to 0.5), even when the effect-size is quite large.
A slightly different way to interpret effect sizes makes use of an equivalence between the standardised mean difference (d) and the correlation coefficient, r. If group membership is coded with a
dummy variable (e.g. denoting the control group by 0 and the experimental group by 1) and the correlation between this variable and the outcome measure calculated, a value of r can be derived. By
making some additional assumptions, one can readily convert d into r in general, using the equation r^2 = d^2 / (4+d^2) (see Cohen, 1969, pp20-22 for other formulae and conversion table). Rosenthal
and Rubin (1982) take advantage of an interesting property of r to suggest a further interpretation, which they call the binomial effect size display (BESD). If the outcome measure is reduced to a
simple dichotomy (for example, whether a score is above or below a particular value such as the median, which could be thought of as 'success' or 'failure'), r can be interpreted as the difference in
the proportions in each category. For example, an effect size of 0.2 indicates a difference of 0.10 in these proportions, as would be the case if 45% of the control group and 55% of the treatment
group had reached some threshold of 'success'. Note, however, that if the overall proportion 'successful' is not close to 50%, this interpretation can be somewhat misleading (Strahan, 1991; McGraw,
1991). The values for the BESD are shown in column 5.
Finally, McGraw and Wong (1992) have suggested a 'Common Language Effect Size' (CLES) statistic, which they argue is readily understood by non-statisticians (shown in column 6 of Table I). This is
the probability that a score sampled at random from one distribution will be greater than a score sampled from another. They give the example of the heights of young adult males and females, which
differ by an effect size of about 2, and translate this difference to a CLES of 0.92. In other words 'in 92 out of 100 blind dates among young adults, the male will be taller than the female' (p361).
It should be noted that the values in Table I depend on the assumption of a Normal distribution. The interpretation of effect sizes in terms of percentiles is very sensitive to violations of this
assumption (see question 7, below).
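Assuming Normal distributions, these conversions can be reproduced directly from the standard Normal cumulative distribution function. The short sketch below is not from the paper (only the relation r^2 = d^2/(4+d^2) is given explicitly there), but the usual textbook conversions recover the entries of Table I.

```python
# Sketch: reproduce the Table I conversions for a given effect size d,
# under the Normality assumption noted above.
from math import sqrt
from scipy.stats import norm

def table_row(d):
    percent_below = norm.cdf(d)         # column 2: controls scoring below the treated mean
    p_correct_guess = norm.cdf(d / 2)   # column 4: chance of guessing a person's group correctly
    r = d / sqrt(d * d + 4)             # column 5: equivalent correlation r
    cles = norm.cdf(d / sqrt(2))        # column 6: common language effect size
    return percent_below, p_correct_guess, r, cles

print(table_row(0.8))   # roughly (0.79, 0.66, 0.37, 0.71), matching the 0.8 row
```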
Another way to interpret effect sizes is to compare them to the effect sizes of differences that are familiar. For example, Cohen (1969, p23) describes an effect size of 0.2 as 'small' and gives to
illustrate it the example that the difference between the heights of 15 year old and 16 year old girls in the US corresponds to an effect of this size. An effect size of 0.5 is described as 'medium'
and is 'large enough to be visible to the naked eye'. A 0.5 effect size corresponds to the difference between the heights of 14 year old and 18 year old girls. Cohen describes an effect size of 0.8
as 'grossly perceptible and therefore large' and equates it to the difference between the heights of 13 year old and 18 year old girls. As a further example he states that the difference in IQ
between holders of the Ph.D. degree and 'typical college freshmen' is comparable to an effect size of 0.8.
Cohen does acknowledge the danger of using terms like 'small', 'medium' and 'large' out of context. Glass et al. (1981, p104) are particularly critical of this approach, arguing that the
effectiveness of a particular intervention can only be interpreted in relation to other interventions that seek to produce the same effect. They also point out that the practical importance of an
effect depends entirely on its relative costs and benefits. In education, if it could be shown that making a small and inexpensive change would raise academic achievement by an effect size of even as
little as 0.1, then this could be a very significant improvement, particularly if the improvement applied uniformly to all students, and even more so if the effect were cumulative over time.
Table II: Examples of average effect sizes from research
│ Intervention │ Outcome │ Effect Size │ Source │
│ Students' test performance in reading │ 0.30 │ │ │
│ Students' test performance in maths │ 0.32 │ │ │
│ Attitudes of students │ 0.47 │ │ │
│ Attitudes of teachers │ 1.03 │ │ │
│ Student achievement (overall) │ 0.00 │ │ │
│ Student achievement (for high-achievers) │ 0.08 │ │ │
│ Student achievement (for low-achievers) │ -0.06 │ │ │
│ Student achievement │ -0.06 │ │ │
│ Student attitudes to school │ 0.17 │ │ │
│ Mainstreaming vs special education (for primary age, disabled students) │ Achievement │ 0.44 │ Wang and Baker (1986) │
│ Practice test taking │ Test scores │ 0.32 │ Kulik, Bangert and Kulik (1984) │
│ Inquiry-based vs traditional science curriculum │ Achievement │ 0.30 │ Shymansky, Hedges and Woodworth (1990) │
│ Therapy for test-anxiety (for anxious students) │ Test performance │ 0.42 │ Hembree (1988) │
│ Feedback to teachers about student performance (students with IEPs) │ Student achievement │ 0.70 │ Fuchs and Fuchs (1986) │
│ Achievement of tutees │ 0.40 │ │ │
│ Achievement of tutors │ 0.33 │ │ │
│ Individualised instruction │ Achievement │ 0.10 │ Bangert, Kulik and Kulik (1983) │
│ Achievement (all studies) │ 0.24 │ │ │
│ Achievement (in well controlled studies) │ 0.02 │ │ │
│ Additive-free diet │ Children's hyperactivity │ 0.02 │ Kavale and Forness (1983) │
│ Relaxation training │ Medical symptoms │ 0.52 │ Hyman et al. (1989) │
│ Targeted interventions for at-risk students │ Achievement │ 0.63 │ Slavin and Madden (1989) │
│ School-based substance abuse education │ Substance use │ 0.12 │ Bangert-Drowns (1988) │
│ Treatment programmes for juvenile delinquents │ Delinquency │ 0.17 │ Lipsey (1992) │
Glass et al. (1981, p102) give the example that an effect size of 1 corresponds to the difference of about a year of schooling on the performance in achievement tests of pupils in elementary (i.e.
primary) schools. However, an analysis of a standard spelling test used in Britain (Vincent and Crumpler, 1997) suggests that the increase in a spelling age from 11 to 12 corresponds to an effect
size of about 0.3, but seems to vary according to the particular test used.
In England, the distribution of GCSE grades in compulsory subjects (i.e. Maths and English) have standard deviations of between 1.5 - 1.8 grades, so an improvement of one GCSE grade represents an
effect size of 0.5 - 0.7. In the context of secondary schools therefore, introducing a change in practice whose effect size was known to be 0.6 would result in an improvement of about a GCSE grade
for each pupil in each subject. For a school in which 50% of pupils were previously gaining five or more A* - C grades, this percentage (other things being equal, and assuming that the effect applied
equally across the whole curriculum) would rise to 73%. Even Cohen's 'small' effect of 0.2 would produce an increase from 50% to 58% - a difference that most schools would probably categorise as
quite substantial. Olejnik and Algina (2000) give a similar example based on the Iowa Test of Basic Skills
Finally, the interpretation of effect sizes can be greatly helped by a few examples from existing research. Table II lists a selection of these, many of which are taken from Lipsey and Wilson (1993).
The examples cited are given for illustration of the use of effect size measures; they are not intended to be the definitive judgement on the relative efficacy of different interventions. In
interpreting them, therefore, one should bear in mind that most of the meta-analyses from which they are derived can be (and often have been) criticised for a variety of weaknesses, that the range of
circumstances in which the effects have been found may be limited, and that the effect size quoted is an average which is often based on quite widely differing values.
It seems to be a feature of educational interventions that very few of them have effects that would be described in Cohen's classification as anything other than 'small'. This appears particularly so
for effects on student achievement. No doubt this is partly a result of the wide variation found in the population as a whole, against which the measure of effect size is calculated. One might also
speculate that achievement is harder to influence than other outcomes, perhaps because most schools are already using optimal strategies, or because different strategies are likely to be effective in
different situations - a complexity that is not well captured by a single average effect size.
4. What is the relationship between 'effect size' and 'significance'?
Effect size quantifies the size of the difference between two groups, and may therefore be said to be a true measure of the significance of the difference. If, for example, the results of Dowson's
'time of day effects' experiment were found to apply generally, we might ask the question: 'How much difference would it make to children's learning if they were taught a particular topic in the
afternoon instead of the morning?' The best answer we could give to this would be in terms of the effect size.
However, in statistics the word 'significance' is often used to mean 'statistical significance', which is the likelihood that the difference between the two groups could just be an accident of
sampling. If you take two samples from the same population there will always be a difference between them. The statistical significance is usually calculated as a 'p-value', the probability that a
difference of at least the same size would have arisen by chance, even if there really were no difference between the two populations. For differences between the means of two groups, this p-value
would normally be calculated from a 't-test'. By convention, if p < 0.05 (i.e. below 5%), the difference is taken to be large enough to be 'significant'; if not, then it is 'not significant'.
There are a number of problems with using 'significance tests' in this way (see, for example Cohen, 1994; Harlow et al., 1997; Thompson, 1999). The main one is that the p-value depends essentially on
two things: the size of the effect and the size of the sample. One would get a 'significant' result either if the effect were very big (despite having only a small sample) or if the sample were very
big (even if the actual effect size were tiny). It is important to know the statistical significance of a result, since without it there is a danger of drawing firm conclusions from studies where the
sample is too small to justify such confidence. However, statistical significance does not tell you the most important thing: the size of the effect. One way to overcome this confusion is to report
the effect size, together with an estimate of its likely 'margin for error' or 'confidence interval'.
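The dependence of p on these two things is easy to demonstrate numerically. The short sketch below (plain Python, assuming the SciPy package is available; the numbers are invented purely for illustration) holds the effect size fixed at 0.3 and changes only the group size:

from scipy import stats

for n in (20, 200):
    t, p = stats.ttest_ind_from_stats(mean1=0.3, std1=1.0, nobs1=n,
                                      mean2=0.0, std2=1.0, nobs2=n)
    print(n, round(p, 3))   # p is about 0.35 when n = 20, but about 0.003 when n = 200

The effect is exactly the same size in both runs; only the confidence with which it can be distinguished from zero changes.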
5. What is the margin for error in estimating effect sizes?
Clearly, if an effect size is calculated from a very large sample it is likely to be more accurate than one calculated from a small sample. This 'margin for error' can be quantified using the idea of
a 'confidence interval', which provides the same information as is usually contained in a significance test: using a '95% confidence interval' is equivalent to taking a '5% significance level'. To
calculate a 95% confidence interval, you assume that the value you got (e.g. the effect size estimate of 0.8) is the 'true' value, but calculate the amount of variation in this estimate you would get
if you repeatedly took new samples of the same size (i.e. different samples of 38 children). For every 100 of these hypothetical new samples, by definition, 95 would give estimates of the effect size
within the '95% confidence interval'. If this confidence interval includes zero, then that is the same as saying that the result is not statistically significant. If, on the other hand, zero is
outside the range, then it is 'statistically significant at the 5% level'. Using a confidence interval is a better way of conveying this information since it keeps the emphasis on the effect size -
which is the important information - rather than the p-value.
A formula for calculating the confidence interval for an effect size is given by Hedges and Olkin (1985, p86). If the effect size estimate from the sample is d, then it is Normally distributed, with
standard deviation:
s[d] = √[ (N[E] + N[C]) / (N[E] × N[C]) + d² / (2 × (N[E] + N[C])) ]
Equation 2
(Where N[E] and N[C] are the numbers in the experimental and control groups, respectively.)
Hence a 95% confidence interval for d would be from
d − 1.96 × s[d]  to  d + 1.96 × s[d]
Equation 3
To use the figures from the time-of-day experiment again, N[E] = N[C] = 19 and d = 0.8, so s[d] = √(0.105 + 0.008) = 0.34. Hence the 95% confidence interval is [0.14, 1.46]. This would normally be
interpreted (despite the fact that such an interpretation is not strictly justified - see Oakes, 1986 for an enlightening discussion of this) as meaning that the 'true' effect of time-of-day is very
likely to be between 0.14 and 1.46. In other words, it is almost certainly positive (i.e. afternoon is better than morning) and the difference may well be quite large.
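For readers who want to verify the arithmetic, the whole calculation can be reproduced in a few lines of Python (the variable names are ours, chosen for the illustration, and are not part of the original study):

import math

n_e = n_c = 19                      # group sizes in the time-of-day experiment
d = 0.8                             # effect size estimate
sd_d = math.sqrt((n_e + n_c) / (n_e * n_c) + d ** 2 / (2 * (n_e + n_c)))   # Equation 2
ci = (d - 1.96 * sd_d, d + 1.96 * sd_d)                                    # Equation 3
print(round(sd_d, 2), [round(x, 2) for x in ci])                           # 0.34 and [0.14, 1.46]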
6. How can knowledge about effect sizes be combined?
One of the main advantages of using effect size is that when a particular experiment has been replicated, the different effect size estimates from each study can easily be combined to give an overall
best estimate of the size of the effect. This process of synthesising experimental results into a single effect size estimate is known as 'meta-analysis'. It was developed in its current form by an
educational statistician, Gene Glass (see Glass et al., 1981), though the roots of meta-analysis can be traced a good deal further back (see Lepper et al., 1999), and is now widely used, not only in
education, but in medicine and throughout the social sciences. A brief and accessible introduction to the idea of meta-analysis can be found in Fitz-Gibbon (1984).
Meta-analysis, however, can do much more than simply produce an overall 'average' effect size, important though this often is. If, for a particular intervention, some studies produced large effects,
and some small effects, it would be of limited value simply to combine them together and say that the average effect was 'medium'. Much more useful would be to examine the original studies for any
differences between those with large and small effects and to try to understand what factors might account for the difference. The best meta-analysis, therefore, involves seeking relationships
between effect sizes and characteristics of the intervention, the context and study design in which they were found (Rubin, 1992; see also Lepper et al. (1999) for a discussion of the problems that
can be created by failing to do this, and some other limitations of the applicability of meta-analysis).
The importance of replication in gaining evidence about what works cannot be overstressed. In Dowson's time-of-day experiment the effect was found to be large enough to be statistically and
educationally significant. Because we know that the pupils were allocated randomly to each group, we can be confident that chance initial differences between the two groups are very unlikely to
account for the difference in the outcomes. Furthermore, the use of a pre-test of both groups before the intervention makes this even less likely. However, we cannot rule out the possibility that the
difference arose from some characteristic peculiar to the children in this particular experiment. For example, if none of them had had any breakfast that day, this might account for the poor
performance of the morning group. However, the result would then presumably not generalise to the wider population of school students, most of whom would have had some breakfast. Alternatively, the
effect might depend on the age of the students. Dowson's students were aged 7 or 8; it is quite possible that the effect could be diminished or reversed with older (or younger) students. This
illustrates the danger of implementing policy on the basis of a single experiment. Confidence in the generality of a result can only follow widespread replication.
An important consequence of the capacity of meta-analysis to combine results is that even small studies can make a significant contribution to knowledge. The kind of experiment that can be done by a
single teacher in a school might involve a total of fewer than 30 students. Unless the effect is huge, a study of this size is most unlikely to get a statistically significant result. According to
conventional statistical wisdom, therefore, the experiment is not worth doing. However, if the results of several such experiments are combined using meta-analysis, the overall result is likely to be
highly statistically significant. Moreover, it will have the important strengths of being derived from a range of contexts (thus increasing confidence in its generality) and from real-life working
practice (thereby making it more likely that the policy is feasible and can be implemented authentically).
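To make this concrete, here is a minimal sketch of one common pooling method - fixed-effect (inverse-variance) weighting - applied to five invented small studies; real meta-analysis requires considerably more care (heterogeneity, study quality, comparability of outcomes), so this is an illustration rather than a recipe:

import math

def pool(studies):
    # studies: list of (effect size, standard error) pairs
    w = [1 / se ** 2 for _, se in studies]                        # inverse-variance weights
    d_bar = sum(wi * d for (d, _), wi in zip(studies, w)) / sum(w)
    se_bar = math.sqrt(1 / sum(w))
    return d_bar, (d_bar - 1.96 * se_bar, d_bar + 1.96 * se_bar)

studies = [(0.4, 0.35), (0.2, 0.40), (0.5, 0.38), (0.3, 0.36), (0.35, 0.37)]
print(pool(studies))   # pooled d is about 0.35, with a confidence interval excluding zero

None of the five studies is statistically significant on its own (each confidence interval spans zero), yet the pooled estimate is.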
One final caveat should be made here about the danger of combining incommensurable results. Given two (or more) numbers, one can always calculate an average. However, if they are effect sizes from
experiments that differ significantly in terms of the outcome measures used, then the result may be totally meaningless. It can be very tempting, once effect sizes have been calculated, to treat them
as all the same and lose sight of their origins. Certainly, there are plenty of examples of meta-analyses in which the juxtaposition of effect sizes is somewhat questionable.
In comparing (or combining) effect sizes, one should therefore consider carefully whether they relate to the same outcomes. This advice applies not only to meta-analysis, but to any other comparison
of effect sizes. Moreover, because of the sensitivity of effect size estimates to reliability and range restriction (see below), one should also consider whether those outcome measures are derived
from the same (or sufficiently similar) instruments and the same (or sufficiently similar) populations.
It is also important to compare only like with like in terms of the treatments used to create the differences being measured. In the education literature, the same name is often given to
interventions that are actually very different, for example, if they are operationalised differently, or if they are simply not well enough defined for it to be clear whether they are the same or
not. It could also be that different studies have used the same well-defined and operationalised treatments, but the actual implementation differed, or that the same treatment may have had different
levels of intensity in different studies. In any of these cases, it makes no sense to average out their effects.
7. What other factors can influence effect size?
Although effect size is a simple and readily interpreted measure of effectiveness, it can also be sensitive to a number of spurious influences, so some care needs to be taken in its use. Some of
these issues are outlined here.
Which 'standard deviation'?
The first problem is the issue of which 'standard deviation' to use. Ideally, the control group will provide the best estimate of standard deviation, since it consists of a representative group of
the population who have not been affected by the experimental intervention. However, unless the control group is very large, the estimate of the 'true' population standard deviation derived from only
the control group is likely to be appreciably less accurate than an estimate derived from both the control and experimental groups. Moreover, in studies where there is not a true 'control' group (for
example the time-of-day effects experiment) then it may be an arbitrary decision which group's standard deviation to use, and it will often make an appreciable difference to the estimate of effect size.
For these reasons, it is often better to use a 'pooled' estimate of standard deviation. The pooled estimate is essentially an average of the standard deviations of the experimental and control groups
(Equation 4). Note that this is not the same as the standard deviation of all the values in both groups 'pooled' together. If, for example each group had a low standard deviation but the two means
were substantially different, the true pooled estimate (as calculated by Equation 4) would be much lower than the value obtained by pooling all the values together and calculating the standard
deviation. The implications of choices about which standard deviation to use are discussed by Olejnik and Algina (2000).
SD[pooled] = √[ ( (N[E] − 1) × SD[E]² + (N[C] − 1) × SD[C]² ) / (N[E] + N[C] − 2) ]
Equation 4
(Where N[E] and N[C] are the numbers in the experimental and control groups, respectively, and SD[E] and SD[C] are their standard deviations.)
The use of a pooled estimate of standard deviation depends on the assumption that the two calculated standard deviations are estimates of the same population value. In other words, that the
experimental and control group standard deviations differ only as a result of sampling variation. Where this assumption cannot be made (either because there is some reason to believe that the two
standard deviations are likely to be systematically different, or if the actual measured values are very different), then a pooled estimate should not be used.
In the example of Dowson's time of day experiment, the standard deviations for the morning and afternoon groups were 4.12 and 2.10 respectively. With N[E] = N[C] = 19, Equation 4 therefore gives SD
[pooled] as 3.3, which was the value used in Equation 1 to give an effect size of 0.8. However, the difference between the two standard deviations seems quite large in this case. Given that the
afternoon group mean was 17.9 out of 20, it seems likely that its standard deviation may have been reduced by a 'ceiling effect' - i.e. the spread of scores was limited by the maximum available mark
of 20. In this case therefore, it might be more appropriate to use the morning group's standard deviation as the best estimate. Doing this will reduce the effect size to 0.7, and it then becomes a
somewhat arbitrary decision which value of the effect size to use. A general rule of thumb in statistics when two valid methods give different answers is: 'If in doubt, cite both.'
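Numerically (a short Python check using only the figures quoted above):

import math

sd_morning, sd_afternoon, n = 4.12, 2.10, 19
sd_pooled = math.sqrt(((n - 1) * sd_morning ** 2 + (n - 1) * sd_afternoon ** 2) / (2 * n - 2))  # Equation 4
print(round(sd_pooled, 1))   # 3.3, the value behind the effect size of 0.8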
Corrections for bias
Although using the pooled standard deviation to calculate the effect size generally gives a better estimate than the control group SD, it is still unfortunately slightly biased and in general gives a
value slightly larger than the true population value (Hedges and Olkin, 1985). Hedges and Olkin (1985, p80) give a formula which provides an approximate correction to this bias.
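Although the exact expression is a little more involved, the approximation usually quoted from that source depends only on the sample sizes: corrected d ≈ d × [1 − 3 / (4(N[E] + N[C] − 2) − 1)].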
In Dowson's experiment with 38 values, the correction factor will be 0.98, so it makes very little difference, reducing the effect size estimate from 0.82 to 0.80. Given the likely accuracy of the
figures on which this is based, it is probably only worth quoting one decimal place, so the figure of 0.8 stands. In fact, the correction only becomes significant for small samples, in which the
accuracy is anyway much less. It is therefore hardly worth worrying about it in primary reports of empirical results. However, in meta-analysis, where results from primary studies are combined, the
correction is important, since without it this bias would be accumulated.
Restricted range
Suppose the time-of-day effects experiment were to be repeated, once with the top set in a highly selective school and again with a mixed-ability group in a comprehensive. If students were allocated
to morning and afternoon groups at random, the respective differences between them might be the same in each case; both means in the selective school might be higher, but the difference between the
two groups could be the same as the difference in the comprehensive. However, it is unlikely that the standard deviations would be the same. The spread of scores found within the highly selected
group would be much less than that in a true cross-section of the population, as for example in the mixed-ability comprehensive class. This, of course, would have a substantial impact on the
calculation of the effect size. With the highly restricted range found in the selective school, the effect size would be much larger than that found in the comprehensive.
Ideally, in calculating effect-size one should use the standard deviation of the full population, in order to make comparisons fair. However, there will be many cases in which unrestricted values are
not available, either in practice or in principle. For example, in considering the effect of an intervention with university students, or with pupils with reading difficulties, one must remember that
these are restricted populations. In reporting the effect-size, one should draw attention to this fact; if the amount of restriction can be quantified it may be possible to make allowance for it. Any
comparison with effect sizes calculated from a full-range population must be made with great caution, if at all.
Non-Normal distributions
The interpretations of effect-sizes given in Table I depend on the assumption that both control and experimental groups have a 'Normal' distribution, i.e. the familiar 'bell-shaped' curve, shown, for
example, in Figure 1. Needless to say, if this assumption is not true then the interpretation may be altered, and in particular, it may be difficult to make a fair comparison between an effect-size
based on Normal distributions and one based on non-Normal distributions.
Figure 2: Comparison of Normal and non-Normal distributions
An illustration of this is given in Figure 2, which shows the frequency curves for two distributions, one of them Normal, the other a 'contaminated normal' distribution (Wilcox, 1998), which is
similar in shape, but with somewhat fatter extremes. In fact, the latter does look just a little more spread-out than the Normal distribution, but its standard deviation is actually over three times
as big. The consequence of this in terms of effect-size differences is shown in Figure 3. Both graphs show distributions that differ by an effect-size equal to 1, but the appearance of the
effect-size difference from the graphs is rather dissimilar. In graph (b), the separation between experimental and control groups seems much larger, yet the effect-size is actually the same as for
the Normal distributions plotted in graph (a). In terms of the amount of overlap, in graph (b) 97% of the 'experimental' group are above the control group mean, compared with the value of 84% for the
Normal distribution of graph (a) (as given in Table I). This is quite a substantial difference and illustrates the danger of using the values in Table I when the distribution is not known to be Normal.
Figure 3: Normal and non-Normal distributions with effect-size = 1
Measurement reliability
A third factor that can spuriously affect an effect-size is the reliability of the measurement on which it is based. According to classical measurement theory, any measure of a particular outcome may
be considered to consist of the 'true' underlying value, together with a component of 'error'. The problem is that the amount of variation in measured scores for a particular sample (i.e. its
standard deviation) will depend on both the variation in underlying scores and the amount of error in their measurement.
To give an example, imagine the time-of-day experiment were conducted twice with two (hypothetically) identical samples of students. In the first version the test used to assess their comprehension
consisted of just 10 items and their scores were converted into a percentage. In the second version a test with 50 items was used, and again converted to a percentage. The two tests were of equal
difficulty and the actual effect of the difference in time-of-day was the same in each case, so the respective mean percentages of the morning and afternoon groups were the same for both versions.
However, it is almost always the case that a longer test will be more reliable, and hence the standard deviation of the percentages on the 50 item test will be lower than the standard deviation for
the 10 item test. Thus, although the true effect was the same, the calculated effect sizes will be different.
In interpreting an effect-size, it is therefore important to know the reliability of the measurement from which it was calculated. This is one reason why the reliability of any outcome measure used
should be reported. It is theoretically possible to make a correction for unreliability (sometimes called 'attenuation'), which gives an estimate of what the effect size would have been, had the
reliability of the test been perfect. However, in practice the effect of this is rather alarming, since the worse the test was, the more you increase the estimate of the effect size. Moreover,
estimates of reliability are dependent on the particular population in which the test was used, and are themselves anyway subject to sampling error. For further discussion of the impact of
reliability on effect sizes, see Baugh (2002).
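In the simplest classical-test-theory treatment, the correction amounts to dividing the observed effect size by the square root of the reliability of the outcome measure (corrected d ≈ observed d / √r, where r is the reliability coefficient), which makes explicit why a less reliable measure yields a larger 'corrected' estimate.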
8. Are there alternative measures of effect-size?
A number of statistics are sometimes proposed as alternative measures of effect size, other than the 'standardised mean difference'. Some of these will be considered here.
Proportion of variance accounted for
If the correlation between two variables is 'r', the square of this value (often denoted with a capital letter: R^2) represents the proportion of the variance in each that is 'accounted for' by the
other. In other words, this is the proportion by which the variance of the outcome measure is reduced when it is replaced by the variance of the residuals from a regression equation. This idea can be
extended to multiple regression (where it represents the proportion of the variance accounted for by all the independent variables together) and has close analogies in ANOVA (where it is usually
called 'eta-squared', η^2). The calculation of r (and hence R^2) for the kind of experimental situation we have been considering has already been referred to above.
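For two groups of equal size, a commonly used conversion between the two metrics is r = d / √(d² + 4); an effect size of d = 0.8, for example, corresponds to r ≈ 0.37 and hence to R² ≈ 0.14, i.e. about 14% of the variance in the outcome 'accounted for'.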
Because R^2 has this ready convertibility, it (or alternative measures of variance accounted for) is sometimes advocated as a universal measure of effect size (e.g. Thompson, 1999). One disadvantage
of such an approach is that effect size measures based on variance accounted for suffer from a number of technical limitations, such as sensitivity to violation of assumptions (heterogeneity of
variance, balanced designs) and their standard errors can be large (Olejnik and Algina, 2000). They are also generally more statistically complex and hence perhaps less easily understood. Further,
they are non-directional; two studies with precisely opposite results would report exactly the same variance accounted for. However, there is a more fundamental objection to the use of what is
essentially a measure of association to indicate the strength of an 'effect'.
Expressing different measures in terms of the same statistic can hide important differences between them; in fact, these different 'effect sizes' are fundamentally different, and should not be
confused. The crucial difference between an effect size calculated from an experiment and one calculated from a correlation is in the causal nature of the claim that is being made for it. Moreover,
the word 'effect' has an inherent implication of causality: talking about 'the effect of A on B' does suggest a causal relationship rather than just an association. Unfortunately, however, the word
'effect' is often used when no explicit causal claim is being made, but its implication is sometimes allowed to float in and out of the meaning, taking advantage of the ambiguity to suggest a
subliminal causal link where none is really justified.
This kind of confusion is so widespread in education that it is recommended here that the word 'effect' (and therefore 'effect size') should not be used unless a deliberate and explicit causal claim
is being made. When no such claim is being made, we may talk about the 'variance accounted for' (R^2) or the 'strength of association' (r), or simply - and perhaps most informatively - just cite the
regression coefficient (Tukey, 1969). If a causal claim is being made it should be explicit and justification provided. Fitz-Gibbon (2002) has recommended an alternative approach to this problem. She
has suggested a system of nomenclature for different kinds of effect sizes that clearly distinguishes between effect sizes derived from, for example, randomised-controlled, quasi-experimental and
correlational studies.
Other measures of effect size
It has been shown that the interpretation of the 'standardised mean difference' measure of effect size is very sensitive to violations of the assumption of normality. For this reason, a number of
more robust (non-parametric) alternatives have been suggested. An example of these is given by Cliff (1993). There are also effect size measures for multivariate outcomes. A detailed explanation can
be found in Olejnik and Algina (2000). Finally, a method for calculating effect sizes within multilevel models has been proposed by Tymms et al. (1997). Good summaries of many of the different kinds
of effect size measures that can be used and the relationships among them can be found in Snyder and Lawson (1993), Rosenthal (1994) and Kirk (1996).
Finally, a common effect size measure widely used in medicine is the 'odds ratio'. This is appropriate where an outcome is dichotomous: success or failure, a patient survives or does not.
Explanations of the odds ratio can be found in a number of medical statistics texts, including Altman (1991), and in Fleiss (1994).
Advice on the use of effect-sizes can be summarised as follows:
□ Effect size is a standardised, scale-free measure of the relative size of the effect of an intervention. It is particularly useful for quantifying effects measured on unfamiliar or arbitrary
scales and for comparing the relative sizes of effects from different studies.
□ Interpretation of effect-size generally depends on the assumptions that 'control' and 'experimental' group values are Normally distributed and have the same standard deviations. Effect sizes
can be interpreted in terms of the percentiles or ranks at which two distributions overlap, in terms of the likelihood of identifying the source of a value, or with reference to known effects
or outcomes.
□ Use of an effect size with a confidence interval conveys the same information as a test of statistical significance, but with the emphasis on the significance of the effect, rather than the
sample size.
□ Effect sizes (with confidence intervals) should be calculated and reported in primary studies as well as in meta-analyses.
□ Interpretation of standardised effect sizes can be problematic when a sample has restricted range or does not come from a Normal distribution, or if the measurement from which it was derived
has unknown reliability.
□ The use of an 'unstandardised' mean difference (i.e. the raw difference between the two groups, together with a confidence interval) may be preferable when:
○ - the outcome is measured on a familiar scale
○ - the sample has a restricted range
○ - the parent population is significantly non-Normal
○ - control and experimental groups have appreciably different standard deviations
○ - the outcome measure has very low or unknown reliability
□ Care must be taken in comparing or aggregating effect sizes based on different outcomes, different operationalisations of the same outcome, different treatments, or levels of the same
treatment, or measures derived from different populations.
□ The word 'effect' conveys an implication of causality, and the expression 'effect size' should therefore not be used unless this implication is intended and can be justified.
Altman, D.G. (1991) Practical Statistics for Medical Research. London: Chapman and Hall.
Bangert, R.L., Kulik, J.A. and Kulik, C.C. (1983) 'Individualised systems of instruction in secondary schools.' Review of Educational Research, 53, 143-158.
Bangert-Drowns, R.L. (1988) 'The effects of school-based substance abuse education: a meta-analysis'. Journal of Drug Education, 18, 3, 243-65.
Baugh, F. (2002) 'Correcting effect sizes for score reliability: A reminder that measurement and substantive issues are linked inextricably'. Educational and Psychological Measurement, 62, 2,
Cliff, N. (1993) 'Dominance Statistics - ordinal analyses to answer ordinal questions'. Psychological Bulletin, 114, 3, 494-509.
Cohen, J. (1969) Statistical Power Analysis for the Behavioral Sciences. NY: Academic Press.
Cohen, J. (1994) 'The Earth is Round (p<.05)'. American Psychologist, 49, 997-1003.
Cohen, P.A., Kulik, J.A. and Kulik, C.C. (1982) 'Educational outcomes of tutoring: a meta-analysis of findings.' American Educational Research Journal, 19, 237-248.
Dowson V. (2000) "Time of day effects in school-children's immediate and delayed recall of meaningful material". TERSE Report http://www.cem.dur.ac.uk/ebeuk/research/terse/library.htm
Finn, J.D. and Achilles, C.M. (1990) 'Answers and questions about class size: A statewide experiment.' American Educational Research Journal, 27, 557-577.
Fitz-Gibbon C.T. (1984) 'Meta-analysis: an explication'. British Educational Research Journal, 10, 2, 135-144.
Fitz-Gibbon C.T. (2002) 'A Typology of Indicators for an Evaluation-Feedback Approach' in A.J.Visscher and R. Coe (Eds.) School Improvement Through Performance Feedback. Lisse: Swets and Zeitlinger.
Fleiss, J.L. (1994) 'Measures of Effect Size for Categorical Data' in H. Cooper and L.V. Hedges (Eds.), The Handbook of Research Synthesis. New York: Russell Sage Foundation.
Fletcher-Flinn, C.M. and Gravatt, B. (1995) 'The efficacy of Computer Assisted Instruction (CAI): a meta-analysis.' Journal of Educational Computing Research, 12(3), 219-242.
Fuchs, L.S. and Fuchs, D. (1986) 'Effects of systematic formative evaluation: a meta-analysis.' Exceptional Children, 53, 199-208.
Giaconia, R.M. and Hedges, L.V. (1982) 'Identifying features of effective open education.' Review of Educational Research, 52, 579-602.
Glass, G.V., McGaw, B. and Smith, M.L. (1981) Meta-Analysis in Social Research. London: Sage.
Harlow, L.L., Mulaik, S.S. and Steiger, J.H. (Eds) (1997) What if there were no significance tests? Mahwah NJ: Erlbaum.
Hedges, L. and Olkin, I. (1985) Statistical Methods for Meta-Analysis. New York: Academic Press.
Hembree, R. (1988) 'Correlates, causes, effects and treatment of test anxiety.' Review of Educational Research, 58(1), 47-77.
Huberty, C.J. (2002) 'A history of effect size indices'. Educational and Psychological Measurement, 62, 2, 227-240.
Hyman, R.B., Feldman, H.R., Harris, R.B., Levin, R.F. and Malloy, G.B. (1989) 'The effects of relaxation training on medical symptoms: a meta-analysis.' Nursing Research, 38, 216-220.
Kavale, K.A. and Forness, S.R. (1983) 'Hyperactivity and diet treatment: a meta-analysis of the Feingold hypothesis.' Journal of Learning Disabilities, 16, 324-330.
Keselman, H.J., Huberty, C.J., Lix, L.M., Olejnik, S. Cribbie, R.A., Donahue, B., Kowalchuk, R.K., Lowman, L.L., Petoskey, M.D., Keselman, J.C. and Levin, J.R. (1998) 'Statistical practices of
educational researchers: An analysis of their ANOVA, MANOVA, and ANCOVA analyses'. Review of Educational Research, 68, 3, 350-386.
Kirk, R.E. (1996) 'Practical Significance: A concept whose time has come'. Educational and Psychological Measurement, 56, 5, 746-759.
Kulik, J.A., Kulik, C.C. and Bangert, R.L. (1984) 'Effects of practice on aptitude and achievement test scores.' American Education Research Journal, 21, 435-447.
Lepper, M.R., Henderlong, J., and Gingras, I. (1999) 'Understanding the effects of extrinsic rewards on intrinsic motivation - Uses and abuses of meta-analysis: Comment on Deci, Koestner, and Ryan'.
Psychological Bulletin, 125, 6, 669-676.
Lipsey, M.W. (1992) 'Juvenile delinquency treatment: a meta-analytic inquiry into the variability of effects.' In T.D. Cook, H. Cooper, D.S. Cordray, H. Hartmann, L.V. Hedges, R.J. Light, T.A. Louis
and F. Mosteller (Eds) Meta-analysis for explanation. New York: Russell Sage Foundation.
Lipsey, M.W. and Wilson, D.B. (1993) 'The Efficacy of Psychological, Educational, and Behavioral Treatment: Confirmation from meta-analysis.' American Psychologist, 48, 12, 1181-1209.
McGraw, K.O. (1991) 'Problems with the BESD: a comment on Rosenthal's "How Are We Doing in Soft Psychology'. American Psychologist, 46, 1084-6.
McGraw, K.O. and Wong, S.P. (1992) 'A Common Language Effect Size Statistic'. Psychological Bulletin, 111, 361-365.
Mosteller, F., Light, R.J. and Sachs, J.A. (1996) 'Sustained inquiry in education: lessons from skill grouping and class size.' Harvard Educational Review, 66, 797-842.
Oakes, M. (1986) Statistical Inference: A Commentary for the Social and Behavioral Sciences. New York: Wiley.
Olejnik, S. and Algina, J. (2000) 'Measures of Effect Size for Comparative Studies: Applications, Interpretations and Limitations.' Contemporary Educational Psychology, 25, 241-286.
Rosenthal, R. (1994) 'Parametric Measures of Effect Size' in H. Cooper and L.V. Hedges (Eds.), The Handbook of Research Synthesis. New York: Russell Sage Foundation.
Rosenthal, R, and Rubin, D.B. (1982) 'A simple, general purpose display of magnitude of experimental effect.' Journal of Educational Psychology, 74, 166-169.
Rubin, D.B. (1992) 'Meta-analysis: literature synthesis or effect-size surface estimation.' Journal of Educational Statistics, 17, 4, 363-374.
Shymansky, J.A., Hedges, L.V. and Woodworth, G. (1990) 'A reassessment of the effects of inquiry-based science curricula of the 60's on student performance.' Journal of Research in Science Teaching,
27, 127-144.
Slavin, R.E. and Madden, N.A. (1989) 'What works for students at risk? A research synthesis.' Educational Leadership, 46(4), 4-13.
Smith, M.L. and Glass, G.V. (1980) 'Meta-analysis of research on class size and its relationship to attitudes and instruction.' American Educational Research Journal, 17, 419-433.
Snyder, P. and Lawson, S. (1993) 'Evaluating Results Using Corrected and Uncorrected Effect Size Estimates'. Journal of Experimental Education, 61, 4, 334-349.
Strahan, R.F. (1991) 'Remarks on the Binomial Effect Size Display'. American Psychologist, 46, 1083-4.
Thompson, B. (1999) 'Common methodology mistakes in educational research, revisited, along with a primer on both effect sizes and the bootstrap.' Invited address presented at the annual meeting of
the American Educational Research Association, Montreal. [Accessed from http://acs.tamu.edu/~bbt6147/aeraad99.htm , January 2000]
Tymms, P., Merrell, C. and Henderson, B. (1997) 'The First Year as School: A Quantitative Investigation of the Attainment and Progress of Pupils'. Educational Research and Evaluation, 3, 2, 101-118.
Vincent, D. and Crumpler, M. (1997) British Spelling Test Series Manual 3X/Y. Windsor: NFER-Nelson.
Wang, M.C. and Baker, E.T. (1986) 'Mainstreaming programs: Design features and effects.' Journal of Special Education, 19, 503-523.
Wilcox, R.R. (1998) 'How many discoveries have been lost by ignoring modern statistical methods?'. American Psychologist, 53, 3, 300-314.
Wilkinson, L. and Task Force on Statistical Inference, APA Board of Scientific Affairs (1999) 'Statistical Methods in Psychology Journals: Guidelines and Explanations'. American Psychologist, 54, 8, | {"url":"http://www.leeds.ac.uk/educol/documents/00002182.htm","timestamp":"2014-04-16T10:24:01Z","content_type":null,"content_length":"80756","record_id":"<urn:uuid:eb0872a5-d768-479d-ae7a-ab083c6e92b4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Boost.MultiIndex Performance
Boost.MultiIndex helps the programmer to avoid the manual construction of cumbersome compositions of containers when multi-indexing capabilities are needed. Furthermore, it does so in an efficient
manner, both in terms of space and time consumption. The space savings stem from the compact representation of the underlying data structures, requiring a single node per element. As for time
efficiency, Boost.MultiIndex intensively uses metaprogramming techniques producing very tight implementations of member functions which take care of the elementary operations for each index: for
multi_index_containers with two or more indices, the running time can typically be cut to half of that needed by manual simulations involving several STL containers.
The section on emulation of standard containers with multi_index_container shows the equivalence between single-index multi_index_containers and some STL containers. Let us now concentrate on the
problem of simulating a multi_index_container with two or more indices with a suitable combination of standard containers.
Consider the following instantiation of multi_index_container:
typedef multi_index_container<
  int,
  indexed_by<
    ordered_unique<identity<int> >,
    ordered_non_unique<identity<int>,std::greater<int> >
  >
> indexed_t;
indexed_t maintains two internal indices on elements of type int. In order to simulate this data structure resorting only to standard STL containers, one can, as a first approach, use the following scheme:
// dereferencing compare predicate
template<typename Iterator,typename Compare>
struct it_compare
{
  bool operator()(const Iterator& x,const Iterator& y)const
  {
    return comp(*x,*y);
  }

  Compare comp;
};
typedef std::set<int> manual_t1; // equivalent to indexed_t's index #0
typedef std::multiset<
  const int*,
  it_compare<const int*,std::greater<int> >
> manual_t2;                     // equivalent to indexed_t's index #1
where manual_t1 is the "base" container that holds the actual elements, and manual_t2 stores pointers to elements of manual_t1. This scheme turns out to be quite inefficient, though: while insertion
into the data structure is simple enough:
manual_t1 c1;
manual_t2 c2;
// insert the element 5
manual_t1::iterator it1=c1.insert(5).first;
c2.insert(&*it1);
deletion, on the other hand, necessitates a logarithmic search, whereas indexed_t deletes in constant time:
// remove the element pointed to by it2
manual_t2::iterator it2=...;
c1.erase(**it2); // watch out! performs in logarithmic time
c2.erase(it2);
The right approach consists of feeding the second container not with raw pointers, but with elements of type manual_t1::iterator:
typedef std::set<int> manual_t1; // equivalent to indexed_t's index #0
typedef std::multiset<
  manual_t1::iterator,
  it_compare<manual_t1::iterator,std::greater<int> >
> manual_t2;                     // equivalent to indexed_t's index #1
Now, insertion and deletion can be performed with complexity bounds equivalent to those of indexed_t:
manual_t1 c1;
manual_t2 c2;
// insert the element 5
manual_t1::iterator it1=c1.insert(5).first;
c2.insert(it1);
// remove the element pointed to by it2
manual_t2::iterator it2=...;
c1.erase(*it2); // OK: constant time
c2.erase(it2);
The construction can be extended in a straightforward manner to handle more than two indices. In what follows, we will compare instantiations of multi_index_container against this sort of manual simulation.
The gain in space consumption of multi_index_container with respect to its manual simulations is amenable to a very simple theoretical analysis. For simplicity, we will ignore alignment issues (which
in general play in favor of multi_index_container.)
Nodes of a multi_index_container with N indices hold the value of the element plus N headers containing linking information for each index. Thus the node size is
S[I] = e + h[0] + ··· + h[N-1], where
e = size of the element,
h[i] = size of the i-th header.
On the other hand, the manual simulation allocates N nodes per element, the first holding the elements themselves and the rest storing iterators to the "base" container. In practice, an iterator
merely holds a raw pointer to the node it is associated to, so its size is independent of the type of the elements. Summing all contributions, the space allocated per element in a manual simulation is
S[M] = (e + h[0]) + (p + h[1]) + ··· + (p + h[N-1]) = S[I] + (N-1)p, where
p = size of a pointer.
The relative amount of memory taken up by multi_index_container with respect to its manual simulation is just S[I] / S[M], which can be expressed then as:
S[I] / S[M] = S[I] / (S[I] + (N-1)p).
The formula shows that multi_index_container is more efficient with regard to memory consumption as the number of indices grow. An implicit assumption has been made that headers of
multi_index_container index nodes are the same size that their analogues in STL containers; but there is a particular case in which this is often not the case: ordered indices use a spatial
optimization technique which is not present in many implementations of std::set, giving an additional advantage to multi_index_containers of one system word per ordered index. Taking this fact into
account, the former formula can be adjusted to:
S[I] / S[M] = S[I] / (S[I] + (N-1)p + Ow),
where O is the number of ordered indices of the container, and w is the system word size (typically 4 bytes on 32-bit architectures.)
These considerations have overlooked an aspect of the greatest practical importance: the fact that multi_index_container allocates a single node per element, compared to the many nodes of different
sizes built by manual simulations, diminishes memory fragmentation, which can show up as more usable memory and better performance.
From the point of view of computational complexity (i.e. big-O characterization), multi_index_container and its corresponding manual simulations are equivalent: inserting an element into a
multi_index_container reduces to a simple combination of elementary insertion operations on each of the indices, and similarly for deletion. Hence, the most we can expect is a reduction (or increase)
of execution time by a roughly constant factor. As we will see later, the reduction can be very significant for multi_index_containers with two or more indices.
In the special case of multi_index_containers with only one index, resulting performance will roughly match that of the STL equivalent containers: tests show that there is at most a negligible
degradation with respect to STL, and even in some cases a small improvement.
See source code used for measurements.
In order to assess the efficiency of multi_index_container, the following basic algorithm
multi_index_container<...> c;
for(int i=0;i<n;++i)c.insert(i);
for(iterator it=c.begin();it!=c.end();)c.erase(it++);
has been measured for different instantiations of multi_index_container at values of n 1,000, 10,000 and 100,000, and its execution time compared with that of the equivalent algorithm for the
corresponding manual simulation of the data structure based on STL containers. The table below describes the test environments used.
Test environments.
Compiler Settings OS and CPU
GCC 3.4.5 (mingw special) -O3 Windows 2000 Pro on P4 1.5 GHz, 256 MB RAM
Intel C++ 7.1 default release settings Windows 2000 Pro on P4 1.5 GHz, 256 MB RAM
Microsoft Visual C++ 8.0 default release settings, _SECURE_SCL=0 Windows XP on P4 Xeon 3.2 GHz, 1 GB RAM
The relative memory consumption (i.e. the amount of memory allocated by a multi_index_container with respect to its manual simulation) is determined by dividing the size of a multi_index_container
node by the sum of node sizes of all the containers integrating the simulating data structure.
The following instantiation of multi_index_container was tested:
multi_index_container<int,indexed_by<ordered_unique<identity<int> > > >
which is functionally equivalent to std::set<int>.
GCC 3.4.5 ICC 7.1 MSVC 8.0
80% 80% 80%
Table 1: Relative memory consumption of multi_index_container with 1 ordered index.
The reduction in memory usage is accounted for by the optimization technique implemented in Boost.MultiIndex ordered indices, as explained above.
Fig. 1: Performance of multi_index_container with 1 ordered index.
Somewhat surprisingly, multi_index_container performs slightly better than std::set. A very likely explanation for this behavior is that the lower memory consumption of multi_index_container results
in a higher processor cache hit rate. The improvement is smallest for GCC, presumably because the worse quality of this compiler's optimizer masks the cache-related benefits.
The following instantiation of multi_index_container was tested:
multi_index_container<int,indexed_by<sequenced<> > >
which is functionally equivalent to std::list<int>.
GCC 3.4.5 ICC 7.1 MSVC 8.0
100% 100% 100%
Table 2: Relative memory consumption of multi_index_container with 1 sequenced index.
The figures confirm that in this case multi_index_container nodes are the same size as those of its std::list counterpart.
Fig. 2: Performance of multi_index_container with 1 sequenced index.
multi_index_container does not attain the performance of its STL counterpart, although the figures are close. Again, the worst results are those of GCC, with a degradation of up to 7%, while ICC and
MSVC do not exceed a mere 5%.
The following instantiation of multi_index_container was tested:
multi_index_container<int,indexed_by<
  ordered_unique<identity<int> >,
  ordered_non_unique<identity<int> > > >
GCC 3.4.5 ICC 7.1 MSVC 8.0
70% 70% 70%
Table 3: Relative memory consumption of multi_index_container with 2 ordered indices.
These results coincide with the theoretical formula for S[I] = 28, N = O = 2 and p = w = 4.
Fig. 3: Performance of multi_index_container with 2 ordered indices.
The experimental results confirm our hypothesis that multi_index_container provides an improvement on execution time by an approximately constant factor, which in this case lies around 60%. There is
no obvious explanation for the increased advantage of multi_index_container in MSVC for n=10^5.
The following instantiation of multi_index_container was tested:
multi_index_container<int,indexed_by<ordered_unique<identity<int> >,sequenced<> > >
GCC 3.4.5 ICC 7.1 MSVC 8.0
75% 75% 75%
Table 4: Relative memory consumption of multi_index_container with 1 ordered index + 1 sequenced index.
These results coincide with the theoretical formula for S[I] = 24, N = 2, O = 1 and p = w = 4.
Fig. 4: Performance of multi_index_container with 1 ordered index + 1 sequenced index.
For n=10^3 and n=10^4, the results are in agreement with our theoretical analysis, showing a constant factor improvement of 50-65% with respect to the STL-based manual simulation. Curiously enough,
this speedup gets even higher when n=10^5 for two of the compilers, namely GCC and ICC. In order to rule out spurious results, the tests have been run many times, yielding similar outcomes. Both
test environments are deployed on the same machine, which points to some OS-related reason for this phenomenon.
The following instantiation of multi_index_container was tested:
multi_index_container<int,indexed_by<
  ordered_unique<identity<int> >,
  ordered_non_unique<identity<int> >,
  ordered_non_unique<identity<int> > > >
GCC 3.4.5 ICC 7.1 MSVC 8.0
66.7% 66.7% 66.7%
Table 5: Relative memory consumption of multi_index_container with 3 ordered indices.
These results coincide with the theoretical formula for S[I] = 40, N = O = 3 and p = w = 4.
Fig. 5: Performance of multi_index_container with 3 ordered indices.
Execution time for this case is between 45% and 55% lower than achieved with an STL-based manual simulation of the same data structure.
The following instantiation of multi_index_container was tested:
multi_index_container<int,indexed_by<
  ordered_unique<identity<int> >,
  ordered_non_unique<identity<int> >,
  sequenced<> > >
GCC 3.4.5 ICC 7.1 MSVC 8.0
69.2% 69.2% 69.2%
Table 6: Relative memory consumption of multi_index_container with 2 ordered indices + 1 sequenced index.
These results coincide with the theoretical formula for S[I] = 36, N = 3, O = 2 and p = w = 4.
Fig. 6: Performance of multi_index_container with 2 ordered indices + 1 sequenced index.
In accordance to the expectations, execution time is improved by a fairly constant factor, which ranges from 45% to 55%.
We have shown that multi_index_container outperforms, both in space and time efficiency, equivalent data structures obtained from the manual combination of STL containers. This improvement gets
larger when the number of indices increase.
In the special case of replacing standard containers with single-indexed multi_index_containers, the performance of Boost.MultiIndex is comparable with that of the tested STL implementations, and can
even yield some improvements both in space consumption and execution time.
Revised May 9th 2006
© Copyright 2003-2006 Joaquín M López Muñoz. Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) | {"url":"http://www.boost.org/doc/libs/1_34_0/libs/multi_index/doc/performance.html","timestamp":"2014-04-20T10:01:39Z","content_type":null,"content_length":"37379","record_id":"<urn:uuid:64b5d9de-7a8f-4377-b6e0-e4ca8dfc6bc9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
spectral methods: is there a name for this basis?
The Legendre polynomials, $P_n$, which are orthogonal on the interval $[-1, 1]$, are useful for spectral methods. However they do not satisfy any conditions like $P_n(\pm 1) = 0$, which are
desirable for some boundary value problems. The polynomials defined by
$Q_n(x) = (1 - x^2)P_n'(x)$
are orthogonal on $[-1,1]$ with respect to the weight function $(1-x^2)^{-1}$ and they satisfy homogeneous boundary conditions. This class of polynomials is mentioned in Quarteroni and Valli,
Numerical Approximation of Partial Differential Equations (Springer 2008).
Does anyone know if they have a special name?
Healthcare Economist
Difference in Difference Estimation
Written By: Jason Shafrin - Feb• 11•06
Difference in Difference (DD) is a commonly used empirical estimation technique in economics. Let us take a hypothetical example where a state (Wisconsin) passes a bill which makes employer-provided
health insurance tax deductible. Let us also assume that in the year after the bill passed (year 2) the percentage of firms offering health insurance increased by 50% compared to the year before the
bill was passed (year 1). In order to estimate the impact of the bill on the percentage of firms offering health insurance, we could simply do a 'before and after' analysis and conclude that
the bill increased insurance offerings by 50%. The problem is that there could be a trend over time for more employers to offer insurance. It is impossible to identify whether the tax deductibility or the
time trend caused this increase in firms offering insurance.
One way to identify the impact of the bill is to run a DD regression. If there is a state (California) that did not change the way it treated employer provided health insurance, we could use this as
a control group to compare the changes between Wisconsin and California between the two years.
We will run the regression:
Y=β_0 + β_1*T + β_2*WI + β_3*(T*WI) + e
Y is the percentage of firms offering health insurance in each state in each time period. T is a time dummy, WI is a state dummy for Wisconsin, and T*WI is the interaction of the time dummy and the
Wisconsin state dummy.
The chart below displays the percentage of firms offering insurance in each state and time period.
│ │California │Wisconsin │
│Year 1│ a │ b │
│Year 2│ c │ d │
The next chart explains what each coefficient in the regression represents.
│Coefficient │Calculation │
│β_0 │ a │
│β_1 │ c-a │
│β_2 │ b-a │
│β_3 │(d-b)-(c-a) │
We can see that β_0 is the baseline average, β_1 represents the time trend in the control group, β_2 represents the differences between the two states in year 1, and β_3 represents the difference in
the changes over time. Assuming that both states have the same health insurance trends over time, we have now controlled for a possible national time trend. We can now identify what the true impact
of the tax deductibility is on employers offering insurance.
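To see the mechanics, the regression can be run on simulated data in a few lines of Python (the numbers below are invented purely for illustration and assume the numpy, pandas and statsmodels packages; a real analysis would also worry about clustered standard errors, firm-level controls, and the binary outcome):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
wi = rng.integers(0, 2, n)        # 1 = Wisconsin firm, 0 = California firm
post = rng.integers(0, 2, n)      # 1 = year 2 (after the bill)
# outcome with a common time trend (0.05) and a 'true' DD effect of 0.20
p = 0.40 + 0.05 * post + 0.10 * wi + 0.20 * wi * post
offer = rng.binomial(1, p)
df = pd.DataFrame({"offer": offer, "wi": wi, "post": post})

model = smf.ols("offer ~ post + wi + post:wi", data=df).fit()
print(model.params)               # the post:wi coefficient estimates beta_3, the DD effect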
This is the most concise and clear explanation I have ever read.
Y=b_0 + b_1*T + b_2*CA + b_3*(T*CA) + e
However, I still have one point not clear. Could you please explain more about the dummy variable T? Does it mean year1( yes=1, no=0) and year2 (yes=1, no=0) or year (year1=1 and year2=0). One
variable or two variables?
Also, do I have to put the Wisconsin dummary into the same regression? If I don’t put the Wisconsin how could I know if there is the policy effect?
I would very appreciate if you could take time to guide me. Thanks.
Should be year1, T=0 and year2, T=1. If your statistical package knows what it is doing, however, if you went with the two variable misspecification you mentioned, it would just wind up dropping one
of them and you are left with the 1 variable! Same thing applies with the Wisconsin/CA dummies.
The WI in his formulation is the Wisconsin dummy. You decided you liked CA better, so you have a CA dummy. If those are your only two states, then you cannot have dummies for both. You would have a
situation of perfect collinearity. Basically, the vector of ones for the intercept would equal the sum of the CA and WI vectors. So you only use 1 or the other, not both.
As a rule of thumb, when you run a regression you are not allowed to have a complete set of dummy variables unless you get rid of the constant term…for example, you cannot have variables for male and
female, but rather just one or the other. If you have age bands for under 18, 19-39, 40-64, and 65+, you can only use three of those dummies, but not all four.
Also, the original post does have one problem, in that he says “Y is the percentage of firms offering health insurance in each state in each time period.” Clearly if you did that, you would have
exactly 4 observations. CA year 1, CA year 2, WI year 1, and WI year 2. No degrees of freedom. You need matched data from individual firms within each state both before and after the policy change.
i.e. say the policy change happened in 2004. You would want to have the insurance provision data for 50 CA firms, both in 2003 and 2005, as well as the insurance provision data for 50 WI firms, both
in 2003 and 2005. And since you are likely to be working with binary data on the LHS (did the firm offer health care?), you’d want to run a probit or logit in this case rather than simple OLS.
What if I have more than two years data, say 4 years.
Should T= 0, 1, 2, 3? or use T1=0,1 T2=0,1 T3=0,1?
Depends on whether the trend is linear or not. you can test the two methods and compare them using a likelihood ratio test. If the categorical model is significantly “more explanatory” then you need
the “more complicated” categorical model. If it is not, the linear model is sufficient. If you use the categorical model, keep in mind that the coefficient will compare that time period to time 0.
the coeffecient for T1 represents the difference between T1 and T0, the coefficient for T2 represents the difference between T2 and T0, etc.
If generally you see the values of the coefficients getting progressively larger (or smaller), then you might have a linear trend. If they go up and down, you'll likely need the categorical model.
What if you have multiple policy changes over a number of years?
My treatment group is Medicare patients with a control of non-Medicare patients, but I have several policy changes (1 each year over 5 years). | {"url":"http://healthcare-economist.com/2006/02/11/difference-in-difference-estimation/","timestamp":"2014-04-18T02:58:45Z","content_type":null,"content_length":"33073","record_id":"<urn:uuid:e3d5cfaf-c8e7-4073-8843-5c37ad0a4b9c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math challenged,,,need help
I want to make this pattern, but I want to start with a 9 1/2 block instead of 8.5. How wide would I then cut the strips??? Would I just divide by 4??
If you cut them 9 1/2" and square to 9", then divide 9 by 4 you should have 2 1/4" strips.
Your strips would have to be cut at 2 3/4 inches wide to get a 9 1/2 inch unfinished block made up of four strips, like the quilt pictured.
Yes, the theory is the same, no matter what size block you start out with :wink:
I really like this pattern, so simple but looks way more complex :D:D:D
It makes the block 9.5" each of the four strips need to be 2 7/8 cut. HTH
Your strips would have to be cut at 2 3/4 inches wide to get a 9 1/2 inch unfinished block made up of four strips, like the quilt pictured.
I am not sure this is correct... I think 4 X 2 3/4" = 11" :wink:
This is getting really confusing. I assumed that you meant 9 1/2" squares...maybe that is not what you meant.
Okay... If you start out with a 9" square and divide it by 4, each strip would be 2 1/4" wide.
If you start with 9 1/2" and divide it by 4, each strip would be 2 3/8" wide :D:D:D
These blocks are cut diagonally, and stitched.... so now the dimension of the block has changed :wink:
Edited*** So you square it up to 9" and then divide it by 4 :D:D:D
Your strips would have to be cut at 2 3/4 inches wide to get a 9 1/2 inch unfinished block made up of four strips, like the quilt pictured.
I am not sure this is correct... I think 4 X 2 3/4" = 11" :wink:
Yes, but there will be three seams joining the four strips, each of which takes 1/2", so she WILL have a 9 1/2 inch block after sewing the three seams!
The important question is whether the 9 1/2 inch is the size square to start, or if the goal is a 9 1/2" pieced block to put into the quilt, or a 9 1/2" finished size of the block after it is
sewn to the blocks around it in the quilt!
(math teacher, retired)
You're not subtracting the seam allowances. Every time you sew two strips together, 1/2 inch gets subtracted from the total of the 11". Sewing four strips together will subtract 1/2" three times
or 1 1/2 inches. 11 minus 1 1/2 equals 9 1/2. Then when the blocks are joined to each other, an additional seam allowance is taken off both sides, equaling a 9 inch finished block.
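Putting the arithmetic from this thread into one general rule (assuming the standard 1/4 inch seam allowances everyone is using above): for an unfinished block B inches wide made of n strips, cut each strip (B + (n - 1) x 1/2) / n inches wide, because each of the n - 1 seams removes 1/2 inch in total. For B = 9 1/2 and n = 4 that gives (9 1/2 + 1 1/2) / 4 = 2 3/4 inches - the same answer as above.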
A Report on ICL HOL
- Computer Journal
"... The HOL system is a mechanized proof assistant for higher order logic that has been under continuous development since the mid-1980s, by an ever-changing group of developers and external
contributors. We give a brief overview of various implementations of the HOL logic before focusing on the evoluti ..."
Cited by 11 (7 self)
The HOL system is a mechanized proof assistant for higher order logic that has been under continuous development since the mid-1980s, by an ever-changing group of developers and external
contributors. We give a brief overview of various implementations of the HOL logic before focusing on the evolution of certain important features available in a recent implementation. We also
illustrate how the module system of Standard ML provided security and modularity in the construction of the HOL kernel, as well as serving in a separate capacity as a useful representation medium for
persistent, hierarchical logical theories.
"... This paper reports on a construction of the real numbers in the ProofPower implementation of the HOL logic. Since the original construction was implemented, some major improvements to the ideas
have been discovered. The improvements involve some entertaining mathematics: it turns out that the De ..."
Cited by 1 (0 self)
This paper reports on a construction of the real numbers in the ProofPower implementation of the HOL logic. Since the original construction was implemented, some major improvements to the ideas have
been discovered. The improvements involve some entertaining mathematics: it turns out that the Dedekind cuts provide many routes one can travel to get from the ring of integers, Z, to the eld of real
numbers, R.
"... Abstract. This paper reports on a construction of the real numbers in the ProofPower implementation of the HOL logic. Since the original construction was implemented, some major improvements to
the ideas have been discovered. The improvements involve some entertaining mathematics: it turns out that ..."
Abstract. This paper reports on a construction of the real numbers in the ProofPower implementation of the HOL logic. Since the original construction was implemented, some major improvements to the
ideas have been discovered. The improvements involve some entertaining mathematics: it turns out that the Dedekind cuts provide many routes one can travel to get from the ring of integers, Z, to the
field of real numbers, R. The traditional stop-over on the way is the field of rational numbers, Q. This paper shows that going via certain rings of algebraic numbers can provide a pleasant
alternative to the more well-trodden track. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1598936","timestamp":"2014-04-16T21:17:58Z","content_type":null,"content_length":"16666","record_id":"<urn:uuid:87d750c8-028c-422e-90a6-4ef6ec168ddd>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parameters of the VirtualLabs:
Population structure
by Christoph Hauert, Version 1.0, December 2006.
Interaction vs reproduction graphs.
Populations have different characteristic structures determined by the type of interactions of one player with other members of the population.
mean-field/well-mixed populations:
Well mixed population without any structures, i.e. groups or pairwise encounters are formed randomly. This is often called the mean-field approximation.
linear lattice:
The players are arranged on a straight line - that is actually on a ring in order to reduce finite size and boundary effects - and interact with equal numbers of neighbors to their left and right.
square lattice:
All players are arranged on a rectangular lattice with periodic boundary conditions. The neighborhood size may be four (von Neumann-) or eight (Moore neighborhood).
hexagonal lattice:
The players are arranged on a hexagonal or honeycomb lattice interacting with their six nearest neighbors.
triangular lattice:
The players are arranged on a triangular lattice interacting with their three nearest neighbors.
linear small world network:
Small world network with an underlying structure of a linear lattice (see above), i.e. first the population is initialized with a linear lattice geometry and then a certain fraction of bonds
(see Frac new joints below) is randomly rewired. Note that the rewiring process leaves the connectivity of the players alone.
square small world network:
Small world network with an underlying structure of a rectangular lattice (see above).
hexagonal small world network:
Small world network with an underlying structure of a hexagonal or honeycomb lattice (see above).
triangular small world network:
Small world network with an underlying structure of a triangular lattice (see above).
random graphs:
Randomly drawn bonds/connections between players. The neighborhood size determines the average number of bonds (average connectivity) of one player, i.e. the players interact with different
numbers of other individuals.
random regular graphs:
The structure of random regular graphs is similar to random graphs with the additional constraint that each player has an equal number of interaction partners.
Neighborhood size:
Determines the number of potential interaction partners. This corresponds to the connectivity of a player. In the case of random graphs, this specifies the average number of interaction partners.
Frac new joints:
Fraction of bonds that get randomly rewired to obtain a small world network out of some underlying regular lattice. Note that fractions close to one will require an enormous number of rewired | {"url":"http://www.univie.ac.at/virtuallabs/General/param.structure.html","timestamp":"2014-04-17T01:14:12Z","content_type":null,"content_length":"6070","record_id":"<urn:uuid:fc5021ce-c4fb-4cc5-a0d0-13a257f174f5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
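As a concrete illustration of these parameters (a sketch of my own, not the VirtualLabs implementation): start from a ring lattice whose connectivity equals the Neighborhood size, then rewire the requested fraction of bonds with degree-preserving swaps, so that - as described above for the small world networks - every player keeps the same number of interaction partners.

import random

def ring_lattice(n, k):
    # Linear lattice with periodic boundary: each player is linked to its k nearest neighbors.
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + j) % n)))
    return edges

def rewire(edges, frac_new_joints, seed=0):
    # Degree-preserving rewiring: pick two bonds and swap one endpoint of each.
    rng = random.Random(seed)
    edges = set(edges)
    attempts = int(frac_new_joints * len(edges))
    for _ in range(attempts):
        (a, b), (c, d) = rng.sample(sorted(tuple(sorted(e)) for e in edges), 2)
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        # Keep the graph simple: no self-loops and no duplicate bonds.
        if len(new1) == 2 and len(new2) == 2 and new1 not in edges and new2 not in edges:
            edges -= {frozenset((a, b)), frozenset((c, d))}
            edges |= {new1, new2}
    return edges

lattice = ring_lattice(n=100, k=4)                  # Neighborhood size = 4
small_world = rewire(lattice, frac_new_joints=0.1)  # Frac new joints = 0.1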
Chp 13
__________ are used to infer that the results from a sample are reflective of the true population scores.
Inferential statistics
When comparing group means, the _________ states that group means are equal.
Null hypothesis
The F statistic is a ratio of two types of variance: __________ variance and error variance.
Cohen's d expresses effect size in terms of _________.
Standard deviation units
A Type I error occurs when the null hypothesis is _________.
Rejected but the null hypothesis is actually true
Which of the following statements is TRUE?
True differences are more likely to be detected if the sample size is large.
If a mechanic looks at your car engine and says there is nothing wrong with it and your car breaks down when you leave the garage, what type of error did the mechanic make?
Type II
If the null hypothesis was rejected and there was 1 chance out of 100 that the decision was wrong, what was the alpha level in the study?
Which of the following is NOT a reason for a Type II error?
Large sample size
Dr. P is using a t-test to compare the means of two groups. There are 25 participants in each group. How many degrees of freedom are there in this test?
How is the power of a statistical test related to the probability of a Type II error?
Power = 1- Type II error
Which of the following is NOT a major statistical software program?
If the null hypothesis is correct, then the research hypothesis is accepted as correct.
The null hypothesis is rejected when there is a very low probability that the obtained results could be due to random error
The probability required for significance is called the alpha level.
The t-test is most commonly used to examine whether two groups are significantly different from each other.
Error variance is the deviation of the group means from the grand mean.
A .05 significance level indicates that there is a 95% chance that the research findings are incorrect.
The probability of making a Type I error is determined by the choice of alpha level.
Nonsignificant results do not necessarily indicate that the null hypothesis is correct.
Chi-square tests are used in the case of ordinal scale data
The Kruskal-Wallis H test and the Mann-Whitney U test are both appropriate significance tests for interval scale data.
Researchers conducted a naturalistic observation to examine gender differences in manners. Standing outside the bookstore, the researchers observed men and women leaving the bookstore and recorded
when they held the door open or did not hold the door open for another person.
Chi-square test
Participants were recruited to participate in a memory study. Participants were randomly assigned to the learn a list of words printed on either white paper with red ink or white paper with black
ink. The number of words correctly recalled was recorded.
Between-subjects t-test
Researchers examined the influence of different types of rewards on creative expression. Children were given an art kit and asked to create a collage. Students were randomly assigned to one of three
experimental conditions: monetary reward, toy reward, or control. Each collage was given points for use of color, originality, structure, and design. The total number of points was recorded.
One-way Between-Subjects Analysis of Variance
A study is done to determine whether dieting plus exercise is more effective for producing weight loss than dieting alone. Participants were matched on initial weight, initial level of exercise, age,
and gender. One member of each pair was put on the diet for 2 months. The other member had the same diet but exercised moderately each week. The weight loss in pounds for the 2-month period was
Repeated Measures (correlated) t-test
Suppose a researcher studied men's shoe size and the mode of delivery of their first child. The researcher collected and classified shoe size data on 600 first-time fathers into three groups: large
(size 10 and above), average (size 8 and 9), and small (size 7 and smaller).
Kruskal-Wallis H test
Suppose researchers conducted a study and ranked salespeople according to the number of automobiles they sold the past six months. The rankings of the top 20 salespeople were separated into two
groups--those who valued time management and those who did not value time management.
Mann-Whitney U test
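The tests named in these scenarios can be tried out quickly with SciPy; the sketch below uses invented numbers purely to show which function goes with which design, not data from the studies described.

import numpy as np
from scipy import stats

# Between-subjects t-test: words recalled with red ink vs. black ink.
red_ink = np.array([12, 9, 11, 10, 13, 8, 11, 12])
black_ink = np.array([14, 13, 15, 12, 16, 13, 14, 15])
t_stat, p_val = stats.ttest_ind(red_ink, black_ink)      # df = n1 + n2 - 2
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way between-subjects ANOVA: collage scores under three reward conditions.
money, toy, control = [18, 20, 17, 19], [22, 24, 21, 23], [25, 27, 26, 28]
f_stat, p_val = stats.f_oneway(money, toy, control)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Chi-square test: door held open or not, by gender (counts).
observed = np.array([[30, 20],    # men:   held, did not hold
                     [40, 10]])   # women: held, did not hold
chi2, p_val, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_val:.4f}")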
Degrees of freedom
A concept used in tests of statistical significance; the number of observations that are free to vary to produce a known outcome.
Error variance
Random variability in a set of scores that is not the result of the independent variable. Statistically, the variability of each score from its group mean.
Inferential statistics
Statistics designed to determine whether results based on sample data are generalizable to a population.
Null hypothesis
The hypothesis, used for statistical purposes, that the variables under investigation are not related in the population, that any observed effect based on sample results is due to random error.
Power
The probability of correctly rejecting the null hypothesis.
Probability
The likelihood that a given event (among a specific set of events) will occur.
Research hypothesis
The hypothesis that the variables under investigation are related in the population—that the observed effect based on sample data is true in the population.
Statistical significance
Rejection of the null hypothesis when an outcome has a low probability of occurrence (usually .05 or less) if, in fact, the null hypothesis is correct.
Systematic variance
Variability in a set of scores that is the result of the independent variable; statistically, the variability of each group mean from the grand mean of all subjects.
t-test
A statistical significance test used to compare differences between means.
Type I error
An incorrect decision to reject the null hypothesis when it is true.
Type II error
An incorrect decision to accept the null hypothesis when it is false. | {"url":"http://quizlet.com/3698402/chp-13-flash-cards/","timestamp":"2014-04-18T13:15:46Z","content_type":null,"content_length":"91611","record_id":"<urn:uuid:44279a56-7357-43e6-803a-58ef1ff71469>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Whitesburg, GA Prealgebra Tutor
Find a Whitesburg, GA Prealgebra Tutor
During 25 years of science teaching at the high school and college levels, I was recognized for developing a unified, "big picture" approach for teaching science. This is a holistic (visual)
approach to the effective mastery of traditional chemistry. It reinforced learning of the course concepts by illustrating how they relate to the students' own life experiences.
2 Subjects: including prealgebra, chemistry
...I also provide homework help and online tutoring using an online whiteboard program (no downloads necessary). It is a fun, effective and interactive way for students to learn while sitting in
their own homes.I am a former classroom teacher. I am certified and have successful classroom teaching ...
23 Subjects: including prealgebra, reading, geometry, accounting
...Starting my educational career in New York (working the private and public education sectors in both general and special education) and moving to Georgia teaching high and middle school math
and social studies, and preparing for administrative roles and responsibilities has made me more equipped ...
22 Subjects: including prealgebra, reading, writing, algebra 1
...I'm a mechanical engineering professor at a top university. I understand physical science concepts very deeply. I use such concepts everyday to solve engineering problems and teach the
students in my introduction to fluid and thermal systems engineering course.
12 Subjects: including prealgebra, calculus, geometry, GRE
...I would administer the Jennings Informal Reading Assessment to determine the student's Independent, Insructional, and Frustrational levels of reading. The starting point would be the highest
reading level with a score of at least 90%. The new Common Core Curriculum (CCGPS) is scheduled for impl...
17 Subjects: including prealgebra, reading, English, algebra 1
Related Whitesburg, GA Tutors
Whitesburg, GA Accounting Tutors
Whitesburg, GA ACT Tutors
Whitesburg, GA Algebra Tutors
Whitesburg, GA Algebra 2 Tutors
Whitesburg, GA Calculus Tutors
Whitesburg, GA Geometry Tutors
Whitesburg, GA Math Tutors
Whitesburg, GA Prealgebra Tutors
Whitesburg, GA Precalculus Tutors
Whitesburg, GA SAT Tutors
Whitesburg, GA SAT Math Tutors
Whitesburg, GA Science Tutors
Whitesburg, GA Statistics Tutors
Whitesburg, GA Trigonometry Tutors | {"url":"http://www.purplemath.com/whitesburg_ga_prealgebra_tutors.php","timestamp":"2014-04-17T13:48:34Z","content_type":null,"content_length":"24188","record_id":"<urn:uuid:8a651e17-1ad9-4a82-9391-3d298ae921a1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum: Math Library - Chemistry
1. Chemistry Spreadsheet Calculator - James T. Parker
A calculator designed to help teach high school students how to balance chemistry equations and then perform basic stoichiometry calculations. Includes a short tutorial. more>>
2. Hypermedia Textbook for Grades 6-12 (MathMol) - NYU/ACF Scientific Visualization Laboratory
Designed for middle school students, but also useful for students of high school chemistry. Introductory concepts: mass, weight, volume, density, scientific notation, our 3-dimensional world,
VRML, geometry quiz, the geometry of 2 and 3 dimensions, and mathematical equations. A Model of Matter: structure of an atom, bonding of atoms, and motion of molecules. Structure and Properties
of Important Molecules: water and ice, the element, carbon, simple carbon compounds, molecules of life, materials, and drugs. Appendix of Structures: water and ice, carbon, hydrocarbons,
lipids, DNA, amino acids, sugars, photosynthetic pigments, drugs, and math structures. more>>
3. MathMol - NYU/ACF Scientific Visualization Laboratory
A starting point for those interested in molecular modeling, one of the fastest growing fields in science, from building and visualizing molecules to performing complex calculations on
molecular systems. Using molecular modeling scientists will be better able to design new and more potent drugs against diseases such as Cancer, AIDS, and Arthritis. Short movies show the
mathematics in molecular modeling, e.g.: 1) fullerene molecule; bond angle formed by 3 consecutive carbon atoms in a benzene ring; dihedral angle shown for the hexane molecule; equation E=kT
where k is the slope of the line; relationship between coulombic energy and distance; torsion bond energy of a molecule modeled by a periodic function; ammonia molecule showing symmetry. See
also K12 Activities. more>> | {"url":"http://mathforum.org/library/topics/chemistry/?keyid=38315136&start_at=1&num_to_see=50","timestamp":"2014-04-19T02:00:12Z","content_type":null,"content_length":"35492","record_id":"<urn:uuid:b0efb140-d250-43ec-a423-9d624322205c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Set-theoretic foundations for formal language theory?
Has anyone ever seen any papers or books including set-theoretic descriptions of formal language theory? Specifically, I'm interested in how one would formalize context-free grammars with sets.
Some of this, I suppose is fairly obvious. For example, strings would use a foundational formalism much like ordered pairs (e.g. Kuratowski's definition or similar) but what about objects like
production rules and their semantics?
This isn't really necessary for me to get any actual work done, I just thought it would help me build a better intuition around formal language theory.
Thanks in advance,
An example of what I'm looking for would be string concatenation. How would one construct this in pure set theory? If we consider the strings "a" and "b", and consider their set representations to be { { a } } and { { b } } (as per Kuratowski), then the desired result would be { { a }, { a, b } }. Clearly this cannot be accomplished by defining string concatenation to be set union but there are
obviously numerous ways of doing so. The choice at each point in defining how to do something in formal language theory using pure set theory will, in some cases, determine how other notions can be
defined. As a consequence, some definitions may be more complex than others (or less elegant than others, one might say). I'm just curious if anyone has done anything like this before.
Isn't this bound to be circular? I mean, don't we need a formal language in the first place to define set theory? – Harry Gindi Jan 18 '10 at 14:26
Certainly it would be circular to a degree, however one could minimize the amount of required symbols. For example, if it could be done just with a minimal set of set-theoretic symbols, then it
may provide some interesting insight. – Anthony de Almeida Lopes Jan 18 '10 at 14:29
1 What's not set-theoretic about production rules? Take the set of all trees that can be formed by repeated use of the production rules and add to your formal language the set of all words that
result from such trees. Both of these things are perfectly well-defined. – Qiaochu Yuan Jan 18 '10 at 14:40
2 You can define words as maps from finite ordinals to your alphabet (which is precisely what they are!), and concatenation as the obvious map from the ordinal sum... You seem to making waves in a
cup of tea :P – Mariano Suárez-Alvarez♦ Jan 18 '10 at 15:03
There is a certain degree of circularity, but only on the surface. This is because no formal language can reason or express itself with any meaning. For example, as soon as you try to collect all
1 the codes for true statements about a particular universe, you have just stepped up the consistency strength of the system you are working in. A related construction is called $0^{\sharp}$, and a
quick description of it is here en.wikipedia.org/wiki/Zero_sharp . (Of course I'm blurring the fine details a bit, but this is the gist.) – Michael Blackmon Jul 25 '11 at 22:51
3 Answers
One can do this using less technology, too...
Let $\Sigma$ be an alphabet, $N$ a set of non-terminals, and $\Sigma^*$ and $(\Sigma\cup N)^*$ the full languages on $\Sigma$ and $\Sigma\cup N$, respectively. A context-free grammar is a finite subset $G\subset N\times(\Sigma\cup N)^*$. Given one such grammar $G$ there is a relation $\mathord\rightarrow_G\subseteq(\Sigma\cup N)^*\times(\Sigma\cup N)^*$ which is the least transitive reflexive relation which contains $G$ (notice that $N\times(\Sigma\cup N)^*\subseteq (\Sigma\cup N)^*\times(\Sigma\cup N)^*$, so this makes sense) and such that $$a\rightarrow_G b \wedge a'\rightarrow_G b'\implies aa'\rightarrow_G bb'.$$ The language generated by $G$ from a non-terminal $n\in N$ is just $L(G, n)=\{w\in \Sigma^*:n\rightarrow_G w\}$. This is, in fact, the standard way to do this...
Actually, yes, this is much closer to what I had in mind. Beautiful. – Anthony de Almeida Lopes Jan 18 '10 at 15:38
Hi Anthony,
The question you're asking in your text isn't quite the same as the question in your title, but the answer to the title subsumes the one you ask in the text.
To define a grammar, we start with an alphabet $\Sigma$ and a set $N$ of nonterminal variables, and then define a grammatical expression as an element of the formal semiring $G$ over $\Sigma + N$. (It's a good exercise to see how the semiring axioms let you convert grammatical expressions into their Backus normal form.) Next, a grammar is a map from $N \to G$ -- that is, it assigns a grammatical expression to each variable.
Now, consider the free monoid over $\Sigma$. Concretely, these are sequences of characters in $\Sigma$. A language is a set of these strings (i.e., an element of $\mathcal{P}(\Sigma^{*})$), and our goal is to give an interpretation sending nonterminals to languages. If we had a map $f$ sending nonterminals to languages, we could interpret grammatical expressions inductively, by interpreting the formal multiplication in the ring as concatenation (using the monoid operation) of elements in each language, and interpreting the formal sum as set union of languages.
If we have such an interpretation (a function $N \to \mathcal{P}(\Sigma^{*})$) and a grammar (a map $N \to G$), we can lift the grammar to an endofunction on interpretations -- that is, to
a function $(N \to L) \to (N \to L)$. (I'm using $L$ for a set of strings due to jsMath flakiness -- it should be powerset-sigma-star.)
The language for each nonterminal will be the least fixed point of this functional. (As a technical detail, to make this functional monotone, so that the Knaster-Tarski theorem applies, you need to ensure that each interpretation also unions the new language with the old argument language -- i.e., you take an interpretation $i$ sketched above and change it to $\lambda f.\;\lambda n.\; f(n) \cup i(n)$.)
I should also add that this is all standard material, which should appear in every textbook on formal language theory. (I'm pretty sure it's in Sipser.)
Very nice. That's definitely more along the lines of what I was thinking about, thank you. I think the end of your post was cut off though :\ – Anthony de Almeida Lopes Jan 18 '10 at
Thanks again. I suppose that's what I get for self-education: I had started with a book on automata theory and another on compilers, but after seeing what you said and looking on Google
books, it seems this is indeed part of many books on the subject. – Anthony de Almeida Lopes Jan 18 '10 at 15:27
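To make the least-fixed-point semantics described in the answer above concrete, here is a small illustrative sketch (my own, not from the thread): starting from the empty interpretation, each nonterminal's expression is re-evaluated with formal sum as union and formal product as concatenation, restricted to a length bound so the iteration terminates. The toy grammar is an assumed example.

# Illustrative sketch of the least-fixed-point semantics of a context-free grammar.
# A grammar maps each nonterminal to a set of alternatives; each alternative is a
# sequence of terminals (single lowercase characters) and nonterminals (uppercase).
MAX_LEN = 6  # length bound so the iteration below terminates

grammar = {  # S -> aSb | empty   (generates a^n b^n), an assumed toy example
    "S": [["a", "S", "b"], []],
}

def concat(ls, rs):
    # Concatenation of two languages, truncated to the length bound.
    return {l + r for l in ls for r in rs if len(l + r) <= MAX_LEN}

def evaluate(alternative, interp):
    # Interpret one alternative: the formal product becomes concatenation over its symbols.
    langs = {""}
    for symbol in alternative:
        langs = concat(langs, interp.get(symbol, {symbol}))
    return langs

interp = {n: set() for n in grammar}          # start from the empty interpretation
while True:
    new = {n: set().union(*(evaluate(alt, interp) for alt in alts)) | interp[n]
           for n, alts in grammar.items()}    # union with the old value keeps it monotone
    if new == interp:
        break                                  # least fixed point (up to MAX_LEN) reached
    interp = new

print(sorted(interp["S"], key=len))            # '', 'ab', 'aabb', 'aaabbb'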
For the foundations of formal language theory, the following ideas show up:
1. Universal algebra (here for 'free' monoids and basic logic involving simple equation only proofs, just variables and no connectives or quantifiers)
2. Term rewriting systems (take a look at the wikipedia article)
As for foundations, I have found that much FOM (Foundations of Mathematics) is done assuming full blown ZFC set theory. You can certainly think of formal language theory, computability theory, model theory, etc... as subjects in ZFC just like any other subject such as abstract algebra or topology/geometry. There is no danger here as long as the context in which the results were derived is clear.
Meta-mathematical work is often done in ZFC first and then under weaker systems later.
Not the answer you're looking for? Browse other questions tagged set-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/12190/set-theoretic-foundations-for-formal-language-theory?sort=newest","timestamp":"2014-04-21T15:56:47Z","content_type":null,"content_length":"71129","record_id":"<urn:uuid:fd728fbf-81c5-4628-8b6b-9f306da406f3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deep Thoughts
I recently came across the computer science concept of memoization. What's cool is that I had made some suggestions on designs at work that amount to the same thing.
In general, memoization refers to having a computer function remember the problems it's already solved, and just return the answer again if it recognizes the problem, rather than compute it again, as
most software does. The common example is the factorial function. Since the factorial is the product of all the numbers up to the argument, on the way to, say 6!, you will compute all the factorials
less than 6. Usually the function would loop over all the numbers and multiply them together. Instead, a memoized version would have stored all the results up to the highest argument it had
previously seen, and then only compute those beyond that (storing the new results) if necessary.
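Here's a minimal Python sketch of that idea (mine, not from the original post): the function keeps a growing table of results, so asking for 8! after 6! only multiplies in 7 and 8.

# Memoized factorial: remembers every result computed so far.
_results = [1]  # _results[k] == k!, seeded with 0! = 1

def factorial(n: int) -> int:
    # Extend the table only beyond the highest argument seen previously.
    for k in range(len(_results), n + 1):
        _results.append(_results[k - 1] * k)
    return _results[n]

print(factorial(6))   # computes and stores 1! .. 6!
print(factorial(8))   # reuses the stored table, only multiplies in 7 and 8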
One type of object-oriented class that I had proposed is a smart Angle class. Since angles can be represented in many different ways, I wanted to be able to get any version the calling software
wanted, without knowing what version it had been stored in. In addition to radians and degrees, an angle can be (and often is in our work) stored as the combination of sine and cosine values. Besides
being able to use an angle without knowing what format it was in originally, I suggested that the other representations should be left uncomputed, but once they were asked for, the result should be
stored to avoid unnecessary further computations. Clearly this can be seen as memoization of the conversion.
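A rough sketch of such an Angle class (my own illustration, not the actual design from work): each representation is computed on first request and then cached on the instance.

import math

class Angle:
    # Immutable angle; alternate representations are computed lazily and cached.
    def __init__(self, radians: float):
        self._radians = radians
        self._cache = {}          # memoized conversions live here

    @property
    def radians(self) -> float:
        return self._radians

    @property
    def degrees(self) -> float:
        if "degrees" not in self._cache:
            self._cache["degrees"] = math.degrees(self._radians)
        return self._cache["degrees"]

    @property
    def sin_cos(self) -> tuple:
        if "sin_cos" not in self._cache:
            self._cache["sin_cos"] = (math.sin(self._radians), math.cos(self._radians))
        return self._cache["sin_cos"]

a = Angle(math.pi / 3)
print(a.degrees)   # computed once, then cached
print(a.sin_cos)   # computed once, then cached
print(a.degrees)   # returned straight from the cache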
In the general case, the "hit rate" of a memoized function may not be that high, depending on the number and valid range of the inputs. However, for an object whose data never changes once it is
created, essentially one of the inputs has been fixed, raising the chance that the same result will be requested repeatedly. (In my design, each Angle object would be for a particular angle. If the
value of the variable changed, a new Angle object would be created to store it.)
By the way, I'm fascinated with the idea of "smart numbers" in computer programs - numbers as objects (or other collections of data, if not object-oriented) which store more than just their value, or
that store exact representations in alternate formats. The Angle example given, quadratic irrationals, storing numbers as rationals or continued fractions instead of decimal, and numbers that store
their own error bounds, are all concepts that I've thought about over the years. If anyone knows where one of these concepts has been used in a real program, I'd be interested to hear about it.
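As one concrete flavor of the "smart numbers" idea (again my own sketch, not a real program I know of): a value that carries an explicit error bound, so arithmetic propagates the bound along with the result.

from fractions import Fraction

class Bounded:
    # A number stored with an explicit error bound: value +/- err.
    def __init__(self, value, err=0):
        self.value, self.err = Fraction(value), Fraction(err)

    def __add__(self, other):
        return Bounded(self.value + other.value, self.err + other.err)

    def __mul__(self, other):
        # Worst-case bound for the product: |a|*eb + |b|*ea + ea*eb.
        new_err = abs(self.value) * other.err + abs(other.value) * self.err + self.err * other.err
        return Bounded(self.value * other.value, new_err)

    def __repr__(self):
        return f"{self.value} +/- {self.err}"

x = Bounded(Fraction(31, 10), Fraction(1, 100))   # 3.1 +/- 0.01
y = Bounded(2, Fraction(1, 50))                   # 2   +/- 0.02
print(x + y)   # sum carries the combined bound
print(x * y)   # product carries the propagated worst-case bound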
2 comments:
1. With disk space no longer being a premium as it was in the past, memoization makes perfect sense; processing time is accelerated the next time a value is needed and the trade-off in disk space
seems very justifiable!
Numbers storing their own error bounds is interesting as statistical results that have error bounds are computed values by definition, right?
Statistical results, if saved such as in memoization, should be allowed associated metadata such as confidence intervals. If any of the initial parameters upon which the statistical results is
based were changed, one would want both the results and their metadata to be updated.
I usually see applications where tests are run and the results and CIs not saved and updated in the same database; rather, researchers save the routines and re-calculate results in their usual
workflow, saving results only in output files rather than within the database as new calculated fields. I like the discussion as I see the potential application to health science databases.
2. One of the cool things about techniques that track the range of a number, or the error bound, is that you can add the error due to the calculation itself.
Even if the inputs are known exactly, if the function is an approximation to the true result, and knows its error bounds, that can be reflected in the range of the returned result.
It seems pretty slick to handle errors in input value and errors due to approximations all at once. Some techniques even can let you know how much of the error in the output value is due to each
error source. | {"url":"http://half-integer.blogspot.com/2009/08/memoize.html","timestamp":"2014-04-20T03:47:45Z","content_type":null,"content_length":"63648","record_id":"<urn:uuid:afedb994-af29-4072-9e0c-0d72ca65da32>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
LA_GGEV computes for a pair of square matrices (A, B) the generalized eigenvalues and, optionally, the left and/or right generalized eigenvectors.
A generalized eigenvalue of the pair (A, B) is a scalar $\lambda$, or equivalently a ratio $\alpha/\beta$, such that $A - \lambda B$ is singular.
A right generalized eigenvector corresponding to a generalized eigenvalue $\lambda$ is a vector $v$ satisfying $(A - \lambda B)v = 0$.
The computation is based on the (generalized) real or complex Schur form of (A, B) (see LA_GGES for details of this form).
Susan Blackford 2001-08-19 | {"url":"http://www.netlib.org/lapack95/lug95/node312.html","timestamp":"2014-04-19T09:31:42Z","content_type":null,"content_length":"5552","record_id":"<urn:uuid:ff733fd2-30e6-418f-9e1a-f93d7ad16afb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Testing Einstein's E=mc2 in outer space
January 4th, 2013 in Physics / General Physics
According to the Theory of General Relativity, objects curve the space around them. UA physicist Andrei Lebed has proposed an experiment using a space probe carrying hydrogen atoms to test his
finding that the equation E=mc2 is correct in flat space, but not in curved space. Credit: NASA
(Phys.org)—University of Arizona physicist Andrei Lebed has stirred the physics community with an intriguing idea yet to be tested experimentally: The world's most iconic equation, Albert Einstein's
E=mc^2, may be correct or not depending on where you are in space.
With the first explosions of atomic bombs, the world became witness to one of the most important and consequential principles in physics: Energy and mass, fundamentally speaking, are the same thing
and can, in fact, be converted into each other.
This was first demonstrated by Albert Einstein's Theory of Special Relativity and famously expressed in his iconic equation, E=mc^2, where E stands for energy, m for mass and c for the speed of light
Although physicists have since validated Einstein's equation in countless experiments and calculations, and many technologies including mobile phones and GPS navigation depend on it, University of
Arizona physics professor Andrei Lebed has stirred the physics community by suggesting that E=mc^2 may not hold up in certain circumstances.
The key to Lebed's argument lies in the very concept of mass itself. According to accepted paradigm, there is no difference between the mass of a moving object that can be defined in terms of its
inertia, and the mass bestowed on that object by a gravitational field. In simple terms, the former, also called inertial mass, is what causes a car's fender to bend upon impact of another vehicle,
while the latter, called gravitational mass, is commonly referred to as "weight."
This equivalence principle between the inertial and gravitational masses, introduced in classical physics by Galileo Galilei and in modern physics by Albert Einstein, has been confirmed with a very
high level of accuracy. "But my calculations show that beyond a certain probability, there is a very small but real chance the equation breaks down for a gravitational mass," Lebed said.
If one measures the weight of quantum objects, such as a hydrogen atom, often enough, the result will be the same in the vast majority of cases, but a tiny portion of those measurements give a
different reading, in apparent violation of E=mc^2. This has physicists puzzled, but it could be explained if gravitational mass was not the same as inertial mass, which is a paradigm in physics.
"Most physicists disagree with this because they believe that gravitational mass exactly equals inertial mass," Lebed said. "But my point is that gravitational mass may not be equal to inertial mass
due to some quantum effects in General Relativity, which is Einstein's theory of gravitation. To the best of my knowledge, nobody has ever proposed this before."
Lebed presented his calculations and their ramifications at the Marcel Grossmann Meeting in Stockholm last summer, where the community greeted them with equal amounts of skepticism and curiosity.
Held every three years and attended by about 1,000 scientists from around the world, the conference focuses on theoretical and experimental General Relativity, astrophysics and relativistic field
theories. Lebed's results will be published in the conference proceedings in February.
In the meantime, Lebed has invited his peers to evaluate his calculations and suggested an experiment to test his conclusions, which he published in the world's largest collection of preprints at
Cornell University Library (see More Info).
"The most important problem in physics is the Unifying Theory of Everything – a theory that can describe all forces observed in nature," said Lebed. "The main problem toward such a theory is how to
unite relativistic quantum mechanics and gravity. I try to make a connection between quantum objects and General Relativity."
The key to understand Lebed's reasoning is gravitation. On paper at least, he showed that while E=mc^2 always holds true for inertial mass, it doesn't always for gravitational mass.
"What this probably means is that gravitational mass is not the same as inertial," he said.
According to Einstein, gravitation is a result of a curvature in space itself. Think of a mattress on which several objects have been laid out, say, a ping pong ball, a baseball and a bowling ball.
The ping pong ball will make no visible dent, the baseball will make a very small one and the bowling ball will sink into the foam. Stars and planets do the same thing to space. The larger an
object's mass, the larger of a dent it will make into the fabric of space.
The simplest atom found in nature, hydrogen, consists only of a nucleus orbited by one electron. Lebed's calculations indicate that the electron can jump to a higher energy level only where space is
curved. Photons emitted during those energy-switching events (wavy arrow) could be detected to test the idea.
In other words, the more mass, the stronger the gravitational pull. In this conceptual model of gravitation, it is easy to see how a small object, like an asteroid wandering through space, eventually
would get caught in the depression of a planet, trapped in its gravitational field.
"Space has a curvature," Lebed said, "and when you move a mass in space, this curvature disturbs this motion."
According to the UA physicist, the curvature of space is what makes gravitational mass different from inertial mass.
Lebed suggested to test his idea by measuring the weight of the simplest quantum object: a single hydrogen atom, which only consists of a nucleus, a single proton and a lone electron orbiting the
Because he expects the effect to be extremely small, lots of hydrogen atoms would be needed.
Here is the idea:
On a rare occasion, the electron whizzing around the atom's nucleus jumps to a higher energy level, which can roughly be thought of as a wider orbit. Within a short time, the electron falls back onto
its previous energy level. According to E=mc^2, the hydrogen atom's mass will change along with the change in energy level.
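To get a feel for the size of that effect, a back-of-the-envelope conversion of a hydrogen energy-level change into mass via E=mc^2 (my own illustrative numbers, not Lebed's calculation) looks like this:

# Mass equivalent of a hydrogen electron jumping between the two lowest levels.
eV = 1.602176634e-19      # joules per electron-volt
c = 2.99792458e8          # speed of light, m/s

delta_E = 10.2 * eV       # roughly 10.2 eV separates the n=1 and n=2 levels of hydrogen
delta_m = delta_E / c**2  # E = mc^2  =>  m = E / c^2

print(delta_m)            # about 1.8e-35 kg
m_H = 1.67e-27            # mass of a hydrogen atom, kg
print(delta_m / m_H)      # about 1e-8, a ten-parts-per-billion change in the atom's mass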
So far, so good. But what would happen if we moved that same atom away from Earth, where space is no longer curved, but flat?
You guessed it: The electron could not jump to higher energy levels because in flat space it would be confined to its primary energy level. There is no jumping around in flat space.
"In this case, the electron can occupy only the first level of the hydrogen atom," Lebed explained. "It doesn't feel the curvature of gravitation."
"Then we move it close to Earth's gravitational field, and because of the curvature of space, there is a probability of that electron jumping from the first level to the second. And now the mass will
be different."
"People have done calculations of energy levels here on Earth, but that gives you nothing because the curvature stays the same, so there is no perturbation," Lebed said. "But what they didn't take
into account before that opportunity of that electron to jump from the first to the second level because the curvature disturbs the atom."
"Instead of measuring weight directly, we would detect these energy switching events, which would make themselves known as emitted photons – essentially, light," he explained.
Lebed suggested the following experiment to test his hypothesis: Send a small spacecraft with a tank of hydrogen and a sensitive photo detector onto a journey into space.
In outer space, the relationship between mass and energy is the same for the atom, but only because the flat space doesn't permit the electron to change energy levels.
"When we're close to Earth, the curvature of space disturbs the atom, and there is a probability for the electron to jump, thereby emitting a photon that is registered by the detector," he said.
Depending on the energy level, the relationship between mass and energy is no longer fixed under the influence of a gravitational field.
Lebed said the spacecraft would not have to go very far.
"We'd have to send the probe out two or three times the radius of Earth, and it will work."
According to Lebed, his work is the first proposition to test the combination of quantum mechanics and Einstein's theory of gravity in the solar system.
"There are no direct tests on the marriage of those two theories," he said. " It is important not only from the point of view that gravitational mass is not equal to inertial mass, but also because
many see this marriage as some kind of monster. I would like to test this marriage. I want to see whether it works or not."
More information: The details of Andrei Lebed's calculations are published in three preprint papers with Cornell University Library:
Provided by University of Arizona
"Testing Einstein's E=mc2 in outer space." January 4th, 2013. http://phys.org/news/2013-01-einstein-emc2-outer-space.html | {"url":"http://phys.org/print276508993.html","timestamp":"2014-04-18T20:04:49Z","content_type":null,"content_length":"16230","record_id":"<urn:uuid:bd1814f3-4ae6-40f3-87f7-256966c87852>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Jerald on Thursday, November 8, 2012 at 7:45pm.
Shelly and Marcom are selling popcorn for their music club. Each of them received a case of popcorn to sell. Shelly has sold 7/8 of her case and Marcon has sold 5/6 of his case. Which of the
following explains how to find the portion of popcorn they have sold together?
A.Add the numerators and denominators of the fractions
B.Subtract the numerators and denominators of the fractions
C.Find the common denominator for the fractions and use it o write equivalent fractions. Then add the fractions
D.Find the common denominator for the fractions and use it to write equivalent fractions.Then subtract the fractions
• Math - Ms. Sue, Thursday, November 8, 2012 at 7:47pm
• Math - Jerald, Thursday, November 8, 2012 at 7:51pm
Cynthia bought a carton of 1 dozen eggs. She used 1/6 of the eggs when cooking breakfast and 1/3 of the eggs to make a cake. Which expression represents the fraction of the dozen eggs that
Cynthia has left?
• Math - Ms. Sue, Thursday, November 8, 2012 at 7:53pm
I vote for A.
You're looking for the fraction of one dozen.
• Math - Jerald, Thursday, November 8, 2012 at 7:59pm
Thank you. But can you show me you D is wrong? Since 1 dozen equals 12 and so I thought you'd need to subtract 1/3 and 1/6 from 12
• Math - Ms. Sue, Thursday, November 8, 2012 at 8:03pm
What would you get if you subtract 1/3 and 1/6 from 12?
• Math - anabelle, Saturday, November 10, 2012 at 3:00pm
ms sue ummmm tell everyone to follow me on instagram even simirin player please thats my dream
• Math - Breighton, Monday, September 16, 2013 at 8:21pm
Anabelle, this isn't a place to tell people to follow you on instagram. And if that's your dream, well.. you should really think about your life.
Sorry, couldn't resist replying to that.
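A quick check of both problems with Python's Fraction type (my own sketch, not part of the thread) shows why the answers work in fractions of a case or a dozen rather than in whole eggs:

from fractions import Fraction

# Popcorn: the portion sold together needs a common denominator before adding.
shelly, marcon = Fraction(7, 8), Fraction(5, 6)
print(shelly + marcon)            # 41/24 of a case, i.e. 1 17/24 cases

# Eggs: Cynthia's leftover share is a fraction of ONE dozen, not of 12 eggs.
left = 1 - (Fraction(1, 6) + Fraction(1, 3))
print(left)                       # 1/2 of the dozen
print(left * 12)                  # 6 eggs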
Related Questions
Algerba - THe senior class is raising money by selling popcorn amd soft drinks. ...
math - Ron bought goods from shelly katz. On may 8, shelly gave ron a time ...
math - Ron bought goods from shelly katz. On may 8, shelly gave ron a time ...
business math - Ron bought goods from shelly katz. On may 8, shelly gave ron a ...
math - Ron bought goods from shelly katz. On may 8, shelly gave ron a time ...
business - Do you believe that the move from selling music on a physical product...
math - Latoya has 3/4 of a bag of popcorn. If she eats half of the popcorn at a ...
math - latoya has 3/4 of a bag of popcorn. If she eats half of the popcorn at ...
Math - Latoya has 3/4 of a bag of popcorn. If she eats half of the popcorn at a ...
finance - Ron bought goods from shelly katz. On may 8, shelly gave ron a time ... | {"url":"http://www.jiskha.com/display.cgi?id=1352421912","timestamp":"2014-04-19T13:06:29Z","content_type":null,"content_length":"10852","record_id":"<urn:uuid:8d0d4a1f-1ff3-44b5-b764-21d3050b189f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
NAG Library
NAG Library Routine Document
1 Purpose
C05QSF is an easy-to-use routine that finds a solution of a sparse system of nonlinear equations by a modification of the Powell hybrid method.
2 Specification
SUBROUTINE C05QSF ( FCN, N, X, FVEC, XTOL, INIT, RCOMM, LRCOMM, ICOMM, LICOMM, IUSER, RUSER, IFAIL)
INTEGER N, LRCOMM, ICOMM(LICOMM), LICOMM, IUSER(*), IFAIL
REAL (KIND=nag_wp) X(N), FVEC(N), XTOL, RCOMM(LRCOMM), RUSER(*)
LOGICAL INIT
EXTERNAL FCN
3 Description
The system of equations is defined as:
$f_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, n.$
C05QSF is based on the MINPACK routine HYBRD1 (see
Moré et al. (1980)
). It chooses the correction at each step as a convex combination of the Newton and scaled gradient directions. The Jacobian is updated by the sparse rank-1 method of Schubert (see
Schubert (1970)
). At the starting point, the sparsity pattern is determined and the Jacobian is approximated by forward differences, but these are not used again until the rank-1 method fails to produce
satisfactory progress. Then, the sparsity structure is used to recompute an approximation to the Jacobian by forward differences with the least number of function evaluations. The subroutine you
supply must be able to compute only the requested subset of the function values. The sparse Jacobian linear system is solved at each iteration in order to compute the Newton step. For more details see
Powell (1970)
Broyden (1965)
4 References
Broyden C G (1965) A class of methods for solving nonlinear simultaneous equations Mathematics of Computation 19(92) 577–593
Moré J J, Garbow B S and Hillstrom K E (1980) User guide for MINPACK-1 Technical Report ANL-80-74 Argonne National Laboratory
Powell M J D (1970) A hybrid method for nonlinear algebraic equations Numerical Methods for Nonlinear Algebraic Equations (ed P Rabinowitz) Gordon and Breach
Schubert L K (1970) Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian Mathematics of Computation 24(109) 27–30
5 Parameters
1: FCN – SUBROUTINE, supplied by the user.External Procedure
must return the values of the functions
at a point
The specification of
SUBROUTINE FCN ( N, LINDF, INDF, X, FVEC, IUSER, RUSER, IFLAG)
INTEGER N, LINDF, INDF(LINDF), IUSER(*), IFLAG
REAL (KIND=nag_wp) X(N), FVEC(N), RUSER(*)
1: N – INTEGERInput
On entry: $n$, the number of equations.
2: LINDF – INTEGERInput
On entry
specifies the number of indices
for which values of
must be computed.
3: INDF(LINDF) – INTEGER arrayInput
On entry
specifies the indices
for which values of
must be computed. The indices are specified in strictly ascending order.
4: X(N) – REAL (KIND=nag_wp) arrayInput
On entry: the components of the point $x$ at which the functions must be evaluated. $Xi$ contains the coordinate $xi$.
5: FVEC(N) – REAL (KIND=nag_wp) arrayOutput
On exit
must contain the function values
, for all indices
6: IUSER($*$) – INTEGER arrayUser Workspace
7: RUSER($*$) – REAL (KIND=nag_wp) arrayUser Workspace
is called with the parameters
as supplied to C05QSF. You are free to use the arrays
to supply information to
as an alternative to using COMMON global variables.
8: IFLAG – INTEGERInput/Output
On entry: $IFLAG>0$.
On exit
: in general,
should not be reset by
. If, however, you wish to terminate execution (perhaps because some illegal point
has been reached), then
should be set to a negative integer.
must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which C05QSF is called. Parameters denoted as
be changed by this procedure.
2: N – INTEGERInput
On entry: $n$, the number of equations.
Constraint: $N>0$.
3: X(N) – REAL (KIND=nag_wp) arrayInput/Output
On entry: an initial guess at the solution vector. $Xi$ must contain the coordinate $xi$.
On exit: the final estimate of the solution vector.
4: FVEC(N) – REAL (KIND=nag_wp) arrayOutput
On exit
: the function values at the final point returned in
contains the function values
5: XTOL – REAL (KIND=nag_wp)Input
On entry
: the accuracy in
to which the solution is required.
Suggested value
, where
is the
machine precision
returned by
Constraint: $XTOL≥0.0$.
6: INIT – LOGICALInput
On entry
must be set to .TRUE. to indicate that this is the first time C05QSF is called for this specific problem. C05QSF then computes the dense Jacobian and detects and stores its sparsity pattern (in
) before proceeding with the iterations. This is noticeably time consuming when
is large. If not enough storage has been provided for
, C05QSF will fail. On exit with
, the number of nonzero entries found in the Jacobian. On subsequent calls,
can be set to .FALSE. if the problem has a Jacobian of the same sparsity pattern. In that case, the computation time required for the detection of the sparsity pattern will be smaller.
7: RCOMM(LRCOMM) – REAL (KIND=nag_wp) arrayCommunication Array
RCOMM must not
be altered between successive calls to C05QSF.
8: LRCOMM – INTEGERInput
On entry
: the dimension of the array
as declared in the (sub)program from which C05QSF is called.
Constraint: $LRCOMM≥12+nnz$ where $nnz$ is the number of nonzero entries in the Jacobian, as computed by C05QSF.
9: ICOMM(LICOMM) – INTEGER arrayCommunication Array
If $IFAIL=0$, $2$, $3$ or $4$ on exit, $ICOMM1$ contains $nnz$ where $nnz$ is the number of nonzero entries in the Jacobian.
ICOMM must not
be altered between successive calls to C05QSF.
10: LICOMM – INTEGERInput
On entry
: the dimension of the array
as declared in the (sub)program from which C05QSF is called.
Constraint: $LICOMM≥8×N+19+nnz$ where $nnz$ is the number of nonzero entries in the Jacobian, as computed by C05QSF.
11: IUSER($*$) – INTEGER arrayUser Workspace
12: RUSER($*$) – REAL (KIND=nag_wp) arrayUser Workspace
are not used by C05QSF, but are passed directly to
and may be used to pass information to this routine as an alternative to using COMMON global variables.
13: IFAIL – INTEGERInput/Output
On entry
must be set to
$-1 or 1$
. If you are unfamiliar with this parameter you should refer to
Section 3.3
in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1 or 1$
is recommended. If the output of error messages is undesirable, then the value
is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is
When the value $-1 or 1$ is used it is essential to test the value of IFAIL on exit.
On exit
unless the routine detects an error or a warning has been flagged (see
Section 6
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
There have been at least
$200 × N+1$
evaluations of
. Consider restarting the calculation from the final point held in
. In this case, before reentering C05QSF, set
No further improvement in the approximate solution
is possible;
is too small.
The iteration is not making good progress. This failure exit may indicate that the system does not have a zero, or that the solution is very close to the origin (see
Section 7
). Otherwise, rerunning C05QSF from a different starting point may avoid the region of difficulty. In this case, before reentering C05QSF with a different starting point, set
You have set
negative in
On entry, $LRCOMM<12+nnz$.
On entry, $LICOMM<8×N+19+nnz$.
An internal error occurred. Please contact
On entry, $N≤0$.
On entry, $XTOL<0.0$.
Internal memory allocation failed.
7 Accuracy
is the true solution, C05QSF tries to ensure that
If this condition is satisfied with
$XTOL = 10^{-k}$
, then the larger components of
significant decimal digits. There is a danger that the smaller components of
may have large relative errors, but the fast rate of convergence of C05QSF usually obviates this possibility.
is less than
machine precision
and the above test is satisfied with the
machine precision
in place of
, then the routine exits with
Note: this convergence test is based purely on relative error, and may not indicate convergence if the solution is very close to the origin.
The convergence test assumes that the functions are reasonably well behaved. If this condition is not satisfied, then C05QSF may incorrectly indicate convergence. The validity of the answer can be
checked, for example, by rerunning C05QSF with a lower value for
8 Further Comments
Local workspace arrays of fixed lengths are allocated internally by C05QSF. The total size of these arrays amounts to $8×n+2×q$ real elements and $10×n+2×q+5$ integer elements where the integer $q$
is bounded by $8×nnz$ and $n^2$ and depends on the sparsity pattern of the Jacobian.
The time required by C05QSF to solve a given problem depends on $n$, the behaviour of the functions, the accuracy requested and the starting point. The number of arithmetic operations executed by
C05QSF to process each evaluation of the functions depends on the number of nonzero entries in the Jacobian. The timing of C05QSF is strongly influenced by the time spent evaluating the functions.
is .TRUE., the dense Jacobian is first evaluated and that will take time proportional to
Ideally the problem should be scaled so that, at the solution, the function values are of comparable magnitude.
9 Example
This example determines the values
$x_1, \ldots, x_9$
which satisfy the tridiagonal equations:
$(3-2x_1)x_1 - 2x_2 = -1, \quad -x_{i-1} + (3-2x_i)x_i - 2x_{i+1} = -1, \ i=2,3,\ldots,8, \quad -x_8 + (3-2x_9)x_9 = -1.$
It then perturbs the equations by a small amount and solves the new system.
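The NAG program text itself is Fortran; as an illustrative cross-check (not a call to C05QSF), the same Powell hybrid method is exposed through SciPy's MINPACK wrapper, so the tridiagonal test system above can be sketched as follows, using a starting point of all -1 as an assumption.

import numpy as np
from scipy.optimize import root

def fcn(x):
    # Tridiagonal test system: (3 - 2*x_i)*x_i - x_{i-1} - 2*x_{i+1} + 1 = 0.
    f = (3.0 - 2.0 * x) * x + 1.0
    f[1:] -= x[:-1]          # the -x_{i-1} terms
    f[:-1] -= 2.0 * x[1:]    # the -2*x_{i+1} terms
    return f

x0 = -np.ones(9)                          # assumed starting point
sol = root(fcn, x0, method="hybr", options={"xtol": 1e-10})
print(sol.success, np.max(np.abs(fcn(sol.x))))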
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/C05/c05qsf.html","timestamp":"2014-04-17T04:53:11Z","content_type":null,"content_length":"34925","record_id":"<urn:uuid:7026d990-e056-4ea7-ad74-16839516f8e3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
London Mathematical Society Lecture Note Series. 248. Cambridge: Cambridge University Press. x, 180 p. £24.95; $ 39.95 (1998).
This book gives a beautiful introduction to aspects of o-minimality, in the spirit of Grothendieck’s ‘tame topology’. Although, as the author comments, the subject was developed in close contact with
model theory, no model-theoretic background is needed, and many of the methods come from real algebraic geometry. Much of the material is not previously published. The book begins with a definition
of o-minimality, a proof that o-minimal ordered groups and ordered fields are divisible abelian and real closed respectively, and a proof that the real field
is o-minimal (via the Tarski -Seidenberg Theorem which is proved via a cell decomposition). The o-minimal Monotonicity and Cell Decomposition Theorems (developed in papers of Pillay and Steinhorn,
and one also with Knight) are then proved in Ch. 3. Dimension and Euler characteristic and their basic properties are introduced in the next chapter. In Ch. 5 the author shows that in an o-minimal
structure any definable family of definable sets is a Vapnik-Cervonenkis class (a notion from probability theory, relevant also to neural networks). This is equivalent to the fact that o-minimal
structures do not have the independence property. In the next two chapters some basic point set topology is developed, followed (for o-minimal expansions of fields) by some theory of differentiation:
a Mean Value Theorem, an Implicit Function Theorem, and a Cell Decomposition with
-cells and maps. A Triangulation Theorem is proved in Ch. 8, via a Good Directions Lemma. This leads to a proof that in an o-minimal expansion of an ordered field, two definable sets have the same
dimension and Euler characteristic if and only if there is a definable bijection between them. Under the same assumptions, a Trivialisation Theorem is proved in Section 9. It follows that given a
definable family of definable sets, the sets fall into finitely many embedded definable homeomorphism types. This and Wilkie’s proof of the o-minimality of the reals with exponentiation are applied
to prove a conjecture of Benedetti and Risler: roughly speaking, if we consider semialgebraic subsets of $\mathbb{R}^n$
defined by a bounded number of polynomial equalities and inequalities, and the polynomials are built from monomials by a bounded number of additions, then the semialgebraic sets fall into finitely
many embedded homeomorphism types. Finally, in Ch. 10 the author moves from definable sets to definable spaces, given by an atlas of charts, and constructs definable quotients. The book is an elegant
and lucid account, well-suited to a beginning graduate student, with a number of exercises. No attempt is made to cover recent material on o-minimality, for example on o-minimal expansions of the
reals, or on the Trichotomy Theorem of Peterzil and Starchenko and its applications to definable groups.
03C64 Model theory of ordered structures; o-minimality
14P10 Semialgebraic sets and related spaces
03-02 Research monographs (mathematical logic)
12L12 Model theory for fields | {"url":"http://zbmath.org/?q=an:0953.03045&format=complete","timestamp":"2014-04-23T13:43:28Z","content_type":null,"content_length":"23513","record_id":"<urn:uuid:d68ac693-9729-46c4-ba42-9ef05e5cf822>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sylvan Beach, TX Math Tutor
Find a Sylvan Beach, TX Math Tutor
...Though I remind my students that their work needs to get done, I try my best to make the tutoring sessions as relaxing as possible. I try to make sure that my students have learned and
retained the information that I give them. A number of students have stuck with me for more than one school year because they and their parents are pleased with what I have done for them.
8 Subjects: including algebra 1, algebra 2, chemistry, geometry
...I can guarantee that I will help you get an A in your course or ace that big test you're preparing for. I am a Trinity University graduate and I have over 4 years of tutoring experience. I
really enjoy it and I always receive great feedback from my clients.
38 Subjects: including ACT Math, reading, writing, English
I'm an early admissions student at San Jacinto college. My best subject to teach is English or social studies, but I am good with most basic subjects. If you are requesting more advanced math or
science tutoring (anatomy & physiology, physics, algebra, etc.), I would appreciate a heads up sometime before the lesson, so I can refresh my memory.
25 Subjects: including algebra 1, prealgebra, reading, English
...I teach both classical and pop as well as rudiments of music theory. I am an ordained minister and I studied Bible at Houston Baptist University. I have an interest in the Bible as literature
and feel I am highly qualified to assist students taking Bible studies.
42 Subjects: including algebra 1, prealgebra, English, reading
...Your starting score is not as important as the effort you are willing to put in to achieve your goal. I have a 24 hour cancellation policy, but we can also set up a makeup class if you need to
cancel. I am willing to meet at a nearby library to provide services.
22 Subjects: including algebra 2, ACT Math, algebra 1, biology
Related Sylvan Beach, TX Tutors
Sylvan Beach, TX Accounting Tutors
Sylvan Beach, TX ACT Tutors
Sylvan Beach, TX Algebra Tutors
Sylvan Beach, TX Algebra 2 Tutors
Sylvan Beach, TX Calculus Tutors
Sylvan Beach, TX Geometry Tutors
Sylvan Beach, TX Math Tutors
Sylvan Beach, TX Prealgebra Tutors
Sylvan Beach, TX Precalculus Tutors
Sylvan Beach, TX SAT Tutors
Sylvan Beach, TX SAT Math Tutors
Sylvan Beach, TX Science Tutors
Sylvan Beach, TX Statistics Tutors
Sylvan Beach, TX Trigonometry Tutors
Nearby Cities With Math Tutor
Alta Loma, TX Math Tutors
Arcadia, TX Math Tutors
Crystal Beach, TX Math Tutors
El Jardin, TX Math Tutors
Greenway Plaza, TX Math Tutors
Harrisburg, TX Math Tutors
La Porte Math Tutors
Lomax, TX Math Tutors
Monroe City, TX Math Tutors
Morgans Point, TX Math Tutors
Moss Bluff, TX Math Tutors
Pine Valley, TX Math Tutors
Shoreacres, TX Math Tutors
Timber Cove, TX Math Tutors
Tod, TX Math Tutors | {"url":"http://www.purplemath.com/Sylvan_Beach_TX_Math_tutors.php","timestamp":"2014-04-19T14:56:56Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:0bd93810-4c1b-4b23-b46f-7c1e28bc3220>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newtown Square SAT Math Tutor
Find a Newtown Square SAT Math Tutor
...I favor the Socratic Method of teaching, asking questions of the student to help him/her find her/his own way through the problem rather than telling what the next step is. This way the student
not only learns how to solve a specific proof, but ways to approach proofs that will work on problems ...
58 Subjects: including SAT math, chemistry, reading, biology
...Throughout my years tutoring all levels of mathematics, I have developed the ability to readily explore several different viewpoints and methods to help students fully grasp the subject matter.
I can present the material in many different ways until we find an approach that works and he/she real...
19 Subjects: including SAT math, calculus, econometrics, logic
...Our son enjoyed working with Jonathan so much that he asked to continue to work with him during the school year. Our son earned an A in a 7th grade advanced math class. At the end of the school
year, my son asked if he could continue to work with Jonathan over the summer to continue to advance his knowledge of math.
22 Subjects: including SAT math, calculus, writing, geometry
...The photo is me at a Buddhist School in Jharkot, Nepal. I served as a Lieutenant with the US Army 8th Special Forces in Latin America building schools and water supplies, and am proud of my
service.I find that many students lack the basic skills of pre-Algebra to succeed in Algebra I. I first a...
35 Subjects: including SAT math, chemistry, reading, English
...I am well-versed in IB (as well as AP) Biology, Theory of Knowledge, English Literature and Composition, Writing craft, and 20th Century History. I have also obtained 'A' grades in Spanish
language studies at the 300 level in college and have studied various epic literature, moral philosophy, an...
18 Subjects: including SAT math, reading, Spanish, English | {"url":"http://www.purplemath.com/Newtown_Square_SAT_Math_tutors.php","timestamp":"2014-04-18T21:47:03Z","content_type":null,"content_length":"24212","record_id":"<urn:uuid:154719ee-ca56-48c5-b9b6-18c3247fa838>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volumetric Rendering
Bob Amidon
Cornell University, 1995
Volumetric rendering is the process of visualizing three-dimensional data sets. Data Explorer (DX) provides a basic volumetric rendering capability; however, it is limited to orthographic cameras and
ambient shading.
The goal of this project was to create a more advanced volumetric rendering module for Data Explorer that provides realistic shading and perspective camera views. Due to the large amount of data that
must be processed when volume rendering, secondary goals of the project were to conserve memory resources and keep the processing time to a minimum.
To add the volumetric rendering module described above to Data Explorer, load the module description file Vol.mdf. This creates a module titled Vol under the Rendering category.
The Vol module has six inputs and one output. Here is a sample network showing how it is used.
Volumetric Data
The first input receives the volumetric data. This is a 7-vector of floats containing the color, opacity and normal vector for each voxel in the data set. Color is an RGB value, opacity is a scalar
and the normal vector is a 3-vector. The Mark modules extract the DX colors and opacities components and the Compute module combines them with the normal vectors to form the 7-vector.
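The 7-vector layout can be pictured as the C struct below. This is an illustration only; the module actually receives a flat float array assembled by the Compute module, and the struct and field names are not taken from the original code.

    /* One voxel of the 7-vector input: RGB color, scalar opacity and
       a (non-normalized) gradient normal. */
    typedef struct {
        float r, g, b;       /* color from the Colormap module      */
        float opacity;       /* 0.0 (transparent) .. 1.0 (opaque)   */
        float nx, ny, nz;    /* gradient normal, not normalized     */
    } Voxel;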
The Colormap module selects the colors and opacities based on the density of each voxel. For performance reasons, it is suggested that opacities be clipped when they fall below a minimum threshold
value. The rendering algorithm is optimized to bypass the shading calculations when a voxel is completely transparent, therefore, by changing near-zero opacities to zero allows more voxels to make
use of this optimization. If the threshold value is chosen well, no information should be lost in the rendering.
The normal vectors are calculated by the Gradient module. These vectors should not be normalized. During the rendering process, the normal vector for a voxel is calculated by interpolating the normal
vectors from the eight surrounding voxels. By using the non-normalized vectors in the interpolation, the magnitudes of the vectors affect the outcome of the interpolation by giving a proportional
weight to each vector.
When the data set being visualized is composed of discrete materials, then it may be desirable to assign colors and opacities based on the materials contained by the voxel rather than by density
alone. For these data sets, a probablistic classifier can be used to extract the percentage of each material contained by the voxel from the density. Then attributes can be assigned accordingly.
Refer to "Volumetric Rendering" by Drebin, Carpenter and Hanrahan for more information on extracting material percentages from density values.
Grid size/spacing information and shading properties are also extracted from the first input. The grid information includes grid dimension, grid counts, grid origin and grid deltas. The grid
dimension must be 3, volumetric data. Non-orthogonal grids are permitted.
The surface shading properties include the ambient (Ka), diffuse (Kd) and specular (Ks) shading coefficients as well as the shininess level (n). These values are added to the volumetric data stream
by using the Shade module. If a Shade module is not used, the default values of Ka=0.4, Kd=0.6, Ks=0.3 and n=15 are used.
An optional eighth component for the volumetric data is surface strength. Surface strength is calculated as the magnitude of the normal vector. It represents the percentage of a surface within the
voxel and is used to diminish the light reflected as the surface percentage approaches zero.
The use of the surface strength component is selected by setting the second input value, strength, to 1. When this component is used, the densities must be normalized. This is necessary to keep the
magnitudes of the normal vectors, and therefore the surface strength values, between 0.0 and 1.0, otherwise, the resulting image colors will be scaled past their maximum component values of 1.0 and
the resulting image will look over-exposed.
If the data set being used does not have strong surface boundaries, i.e. the densities are continuous, it is probably better if the surface strength component is not used (set the strength input to 0).
Note that the strength input must be fed by an Integer interactor when the default value of zero is not used.
The third input specifies the camera parameters. A Camera module provides this input. The camera position, 'look to' point, up direction, field of view, aspect ratio, width, height and background
color are all extracted from the DX camera structure. Only perspective cameras are supported.
The width and height parameters select the resolution of the viewing window and therefore, the size of the rendered image. Because the size of the view window and rendering time are directly related,
it may be more efficient to render to a smaller window and then enlarge the result.
The view window size determines the number of samples taken along the x and y axis in view space. If the view window resolution is much greater than the data set size, then most of the samples are
just going to be further interpolations between the same data points. By rendering to a smaller image, the amount of oversampling is reduced and the same result is accomplished by interpolating the
pixels of the final image.
For example, a 64 cubed data set rendered at 640x480 pixels will take approximately 10 samples between data points along the x-axis and 7.5 samples between data points along the y-axis. By rendering
to a 320x240 image, the job will finish in a 1/4 of the time and the volume is still being sampled adequately (5 x-samples/voxel and 3.25 y-samples/voxel).
640x480 resolution (time=2:25).
320x240 resolution; enlarged to 640x480 (time=0:37).
200x150 resolution; enlarged to 640x480 (time=0:16).
There is no noticeable difference between the 320x240 rendering and the 640x480 rendering. The 200x150 rendering starts to look jagged, though.
Also note that if a large data set is rendered to a small image, the effect will be that some the data points will not be used in the rendering. This could cause discontinuities in the image when
abrupt changes occur in the data set or if the mismatch between the data set size and image size is significant.
Composite Image
The fourth connection inputs an external image to use as the background of the volumetric rendering. The ReadImage module is used to load the image file and feed this input. The size of the image
must correspond to the view window size specified by the camera. This input is optional. When not connected, the background is set to the background color of the camera.
The fifth input reads in a z-buffer filename. This input is optional. When provided, it allows the background image to be composited with the volumetric data. The size of the z-buffer must match the
size of the camera view window.
The z-buffer filename must be fed by a FileSelector interactor, otherwise the default of no z-buffer is assumed.
The sixth input provides the light direction vector. A Light module provides this input. Only a single distant light source is allowed and white light is assumed. If this input is not connected, the
default light direction vector of [0,0,1] is used.
Some ambient shading is needed to fill in the area on objects where the diffuse and specular terms go to zero. The spheres rendered below show the effect that no ambient lighting has. Also notice how
the specular reflections appear throughout the object.
Output Image
The output of the "Vol" module returns the rendered image of the volume. It has the resolution of the camera view window when viewed with a Display module. An Image module can be used instead of the
Display module if the image needs to be resized.
The approach taken by this volumetric renderer is to divide screen space into a series of depth planes perpendicular to the camera, sample a regular grid of points on each plane, map the points to
the volumetric data set to calculate the reflected light and then merge the resulting plane images to form the final image.
To avoid having to allocate memory for the intermediate images, the order that the depth planes are processed is from back to front. This allows each newly rendered point to be merged into the final
image immediately, using the opacity to interpolate the point with the image pixel. Therefore, only the memory required for the final image need be allocated.
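The overall sweep can be sketched in C as below. This is a rough outline, not the code in render.c; the loop bounds and the numbered steps are placeholders.

    /* Back-to-front sweep: screen-space Z runs from 1.0 (far) down to
       0.0 (near); X and Y run from -1.0 to 1.0.  Each shaded sample is
       mixed into the image immediately, so only the final image buffer
       needs to be allocated. */
    void render_sweep(float x_inc, float y_inc, float z_inc)
    {
        float xs, ys, zs;

        for (zs = 1.0f; zs >= 0.0f; zs -= z_inc)           /* depth planes */
            for (ys = -1.0f; ys <= 1.0f; ys += y_inc)      /* rows         */
                for (xs = -1.0f; xs <= 1.0f; xs += x_inc)  /* columns      */
                {
                    /* 1. skip the point if it lies behind the z-buffer   */
                    /* 2. transform (xs,ys,zs) to grid space              */
                    /* 3. tri-linearly interpolate the 7-vector           */
                    /* 4. shade, then mix into the image pixel at (xs,ys) */
                }
    }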
The increment parameter selects the resolution that screen space is sampled at. This parameter is a 3-vector of floats containing the X, Y and Z increments. The Z increment selects the distance
between the depth planes and the XY increments specify the regular grid spacings that each depth plane is sampled at.
The depth planes map directly to the final image. Therefore, the XY increments are determined directly from the image size. Since screen space spans a distance of 2.0 (from -1.0 to 1.0) along the X
and Y axis, these increments are calculates as 2.0 / (width-1) along the x-axis and 2.0 / (height-1) along the y-axis.
There is no rigid definition for choosing the Z increment as there is for the X and Y increments, however, to satisfy the Nyquist theorem, the Z increment is chosen to correspond to a 1X sample rate.
The number of samples required for this rate is found by dividing the view depth (the distance between the far and near clipping planes) by the maximum distance along the z-axis between neighboring
data points. Since screen space spans a distance of 1.0 along the Z axis, the Z increment is calculated by 1.0 / (#samples-1).
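Putting the three formulas together gives a fragment like the following; width, height, near_clip, far_clip and max_z_spacing (the largest data-point spacing along the z-axis) are assumed inputs, not names from the original code.

    int   n_depth;
    float x_inc, y_inc, z_inc;

    /* X/Y: one sample per image pixel over the -1.0 .. 1.0 span. */
    x_inc = 2.0f / (float)(width  - 1);
    y_inc = 2.0f / (float)(height - 1);

    /* Z: 1X sample rate -- view depth divided by the largest spacing
       between neighboring data points along the z-axis. */
    n_depth = (int)((far_clip - near_clip) / max_z_spacing);
    z_inc   = 1.0f / (float)(n_depth - 1);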
The images below illustrate the effect of increasing the number of depth planes used in the rendering.
1X sample rate (time=2:25)
2X sample rate (time=4:48)
4X sample rate (time=9:35)
To minimize the number of screen points that map outside of the data set, and therefore reduce the overall rendering time, the clipping planes are chosen so they abut the data set. To find these
values, the corners of the data set are transformed to view space, and the minimum and maximum depths are located and assigned as the near and far clipping planes distances respectively. The
auto_limits subroutine implements this process. The width and height boundaries of screen space are based on the near cutoff plane and the perspective values and are calculated in the view_to_screen
subroutine where the perspective transform matrix is formed.
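The corner test in auto_limits amounts to a min/max scan. In the sketch below, corner_z[] holds the view-space depths of the eight transformed corners; the array name and the use of <float.h> constants are assumptions.

    float near_clip =  FLT_MAX;
    float far_clip  = -FLT_MAX;
    int   i;

    for (i = 0; i < 8; i++) {
        if (corner_z[i] < near_clip) near_clip = corner_z[i];   /* closest  */
        if (corner_z[i] > far_clip)  far_clip  = corner_z[i];   /* farthest */
    }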
The auto_resolution subroutine calculates the increment parameter as described above. Of course, this routine could be bypassed and the increments chosen manually or by a different method (e.g. if an
approximate X and Y sample rate is preferred to an exact image size or if a finer depth resolution is required).
The view space depth of a screen point is calculated prior to transforming it to grid space. The macro calculate_zv performs this calculation using the equation -df / (Zs(f-d)-f). Zs is the screen
depth, f is the distance to the far cutoff plane and d is the distance to the near cutoff plane. This value only has to be calculated once per depth plane because all the points on a depth plane have
the same view space depth.
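Written as a macro, the computation is just the formula above; the exact argument list of calculate_zv in render.h is an assumption.

    /* View-space depth of a screen point with screen depth Zs,
       near clipping distance d and far clipping distance f.
       Evaluated once per depth plane, since Zs is constant on a plane. */
    #define CALCULATE_ZV(Zs, d, f)  ( -(d)*(f) / ((Zs)*((f)-(d)) - (f)) )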
The processing order of this algorithm makes compositing an external image straightforward. The memory allocated for the final image is initially assigned with the external image colors. Then, before
each screen point is mapped to the data set, the view space depth of that point is compared with the corresponding z-buffer value. If the screen point is in front of the z-buffer depth, then the
point is shaded, otherwise it is not. The XY position of the screen point maps directly to the image and z-buffer indices. Appendix B details the z-buffer file format.
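The per-pixel test reduces to a single comparison; the index arithmetic and the comparison direction (depth increasing away from the camera) are assumptions consistent with the description above.

    /* zbuffer holds one float per pixel in the same XY layout as the
       image (see Appendix B).  Shade only points in front of it. */
    pixel = iy * width + ix;
    if (zv >= zbuffer[pixel])
        continue;                 /* hidden behind the composited image */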
The screen point is transformed into grid space to get the data required for the shading calculations. The steps required to perform this transform are:
1. append a w-component of 1.0 to the point to make it homogeneous
2. scale the homogeneous coordinate by the view space depth
3. multiply the scaled point by the transform matrix
The scaling by the view space depth, step 2, is done to counter-act the non-linear perspective divide of the forward transform. Instead of scaling each point individually, the transform matrix is
scaled once for each depth plane because the view space depths for these points are the same.
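The three steps can be sketched as follows; the argument order of M_MULT_WV and the variable names are assumptions (and, as noted, the real code folds the zv scaling into the matrix once per depth plane rather than scaling every point).

    /* (xs, ys, zs) is the screen point, zv its view-space depth, and
       T the combined matrix Tpersp_inv * Tview_inv * Tgrid_inv. */
    float ps[4], pg[4];

    ps[0] = xs * zv;          /* steps 1-2: homogeneous point, pre-scaled */
    ps[1] = ys * zv;          /* by zv to undo the non-linear             */
    ps[2] = zs * zv;          /* perspective divide                       */
    ps[3] = zv;

    M_MULT_WV(pg, ps, T);     /* step 3: row vector times 4x4 matrix      */
    /* pg[0..2] is the grid point; pg[3] comes back as 1.0 and is dropped. */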
The transform matrix used to convert screen points to grid points is the product of three matrices: screen-to-view, view-to-world and world-to-grid. The formation of these individual matrices are
detailed in Appendix A. The only significant note about the transforms is that the world-to-grid matrix removes the assumption of an orthogonal grid.
After the matrix multiply, the w-component is dropped to return the grid point to three-dimensional space. The w-component can be dropped because the inverse perspective matrix makes this value 1.0
and the view and grid transform matrices do not alter its value.
When a screen point is transformed into grid space, it falls within a cube formed by the eight closest data points. The term "cube" is used loosely here since non-orthogonal grids are allowed. The
integer value of the grid point locates the lower-right-front point of this cube (right-handed coordinate system for grid space) and the fractional part identifies the relative location of the point
within the cube.
The grid indices of the eight neighboring data points are [X,Y,Z], [X,Y,Z+1], [X,Y+1,Z], [X,Y+1,Z+1], ..., [X+1,Y+1,Z+1], where X,Y,Z is the integer value of the grid point. The data set is stored
with the z-axis as the fast axis, therefore, to access the data set memory, the following equation is used:
((X*Grid_Count_Y*Grid_Count_Z) + (Y*Grid_Count_Z) + Z) * Data_Size
Grid_Count_n is the number of grid points in the data set along the n-axis and Data_Size is the size of the data type that stores the information at each point. The array_index macro implements this
equation a bit more efficiently.
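A minimal version of such a macro is shown below; when the data set is accessed through a typed pointer the compiler supplies the Data_Size factor, so only the element offset is computed here.

    /* Element offset of grid point (X,Y,Z); Z is the fast axis.
       ny and nz are the grid counts along the Y and Z axes. */
    #define ARRAY_INDEX(X, Y, Z, ny, nz) \
        ((X) * (ny) * (nz) + (Y) * (nz) + (Z))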
Tri-linear interpolation is the method used to interpolate between these points. This involves performing seven linear interpolations. The first four are along the z-axis between the points:
1. [X,Y,Z] and [X,Y,Z+1]
2. [X,Y+1,Z] and [X,Y+1,Z+1]
3. [X+1,Y,Z] and [X+1,Y,Z+1]
4. [X+1,Y+1,Z] and [X+1,Y+1,Z+1]
The ordering of these points was chosen to reduce the number of cache misses. Sequentially accessing points that differ only by their z-coordinate will typically not cause a cache miss. This
guarantees that four of the eight possible cache misses here are prevented. The ordering of the interpolations will eliminate two more potential cache misses if the size of the cache block is large
enough to contain points that differ only by one in the y-coordinate. For moderate to larger data sets, the cache block size probably won't be large enough.
The next two interpolations are along the y-axis between the results of interpolations (1) and (2), and (3) and (4). The results of the initial four interpolations are stored in local memory and
therefore the ordering of later interpolations has no effect on access time. The final interpolation is along the x-axis between the results of the last two interpolations.
The relative location of the point within the cube (i.e. the fractional part of the grid point) determines the weight of each point along each axis for the interpolations. All components of the data
points (color, opacity and normal) are interpolated to get the 7-vector used to shade the screen point.
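For a single scalar component the seven interpolations look like this (fx, fy, fz are the fractional parts of the grid point, v000..v111 the eight surrounding values; the names are illustrative, and the same weights are reused for every component of the 7-vector):

    #define LERP(a, b, t)  ((a) + ((b) - (a)) * (t))

    float z00, z01, z10, z11, y0, y1, result;

    /* four interpolations along Z (the fast axis) */
    z00 = LERP(v000, v001, fz);     /* [X  ,Y  ,Z] - [X  ,Y  ,Z+1] */
    z01 = LERP(v010, v011, fz);     /* [X  ,Y+1,Z] - [X  ,Y+1,Z+1] */
    z10 = LERP(v100, v101, fz);     /* [X+1,Y  ,Z] - [X+1,Y  ,Z+1] */
    z11 = LERP(v110, v111, fz);     /* [X+1,Y+1,Z] - [X+1,Y+1,Z+1] */

    /* two along Y, then one along X */
    y0 = LERP(z00, z01, fy);
    y1 = LERP(z10, z11, fy);
    result = LERP(y0, y1, fx);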
Tri-linear interpolation does a good job of creating a smooth image provided that the data set has been sampled adequately. If there are not enough data points to sufficiently reproduce the original
object, the interpolation will fail and a "stair step" will be visible.
The first example below shows a sphere rendered from a 50 cubed volumetric data set. The edges appear jagged. When a 100 cubed data set is used instead, as in the second example, the jagginess goes away.
A way to increase the smoothness along object edges without increasing the size of the data set is to reduce the opacity delta between surfaces. In the above renderings, the opacity of the sphere was
1.0 and the surrounding area 0.0. In the image below, the opacity of the sphere was reduced to 0.025 and the shading coefficients were increased to brighten the object.
If further smoothing of the image is required, a filtering operation that uses more distant data points in the computation could be implemented.
Before the shading calculations are performed, the opacity and surface strength components of the point are examined. Surface strength is calculated by taking the magnitude of the interpolated normal
vector. If either of these values is zero, then the point is completely transparent and is not shaded and the image color does not change. NOTE: If the surface strength calculations have been turned
off by the user, then only opacities are checked here.
For the points with non-zero opacities and strengths, ambient, diffuse and specular shading terms are calculated. The Phong reflection model is used for these calculations. The normal vector of the
data point, as calculated by the interpolation, must be normalized before the diffuse and specular terms are calculated. The specular calculation also requires the view direction vector to be formed
and normalized.
There is an absolute value taken in the diffuse and specular terms that is not typical of Phong shading. The purpose of this operation is to make the normal vectors bidirectional. This makes the
light reflect off the front and back sides of voxels. Since most of the voxels in the data set have some transparency, this is necessary so all surfaces, even those facing away from the light source,
are visible.
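The diffuse and specular weights with the bidirectional normals can be sketched as below (<math.h> assumed; V_DOT_PRODUCT is the macro from the vector module; the variable names and the exact form used in render.c are assumptions):

    /* normal, light_dir, view_dir and reflect_dir are normalized. */
    double n_dot_l, r_dot_v, diffuse_w, specular_w;

    n_dot_l = fabs(V_DOT_PRODUCT(normal, light_dir));      /* bidirectional */
    r_dot_v = fabs(V_DOT_PRODUCT(reflect_dir, view_dir));

    diffuse_w  = Kd * n_dot_l;           /* scales the voxel color          */
    specular_w = Ks * pow(r_dot_v, n);   /* scales white (the light color)  */
    /* ambient (Ka * voxel color), diffuse and specular are then summed.    */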
Performance considerations force four simplifications to be made in the shading calculations. Most of these can be removed quite easily if their worth exceeds the performance hit.
1. There is only a single light source. To implement multiple light sources, the shading calculations would need to be executed once for each light source and then the results summed to get the
total shading for the point.
2. The light source is positioned at infinity. This prevents the light direction vector from being recalculated (and normalized) for each screen point.
3. White light is assumed. For non-white light, the color of the data point is multiplied by the color of the light source in the ambient and diffuse calculations and the color of the light is used
directly as a scaling factor in the specular calculation.
4. Light is not attenuated as it travels deeper into the volume. This means that each point is shaded as if there is nothing between it and the light source.
To overcome this last simplification, a volume of light intensities needs to be formed before the data set is rendered. This volume is generated much in the same way that the volumetric data is
rendered. View space is divided into a series of planes perpendicular to the light source, a regular grid of points is sampled on each plane, each point is mapped back to the data set (using a
transformation matrix based on the light source position) and then the opacity of the point is determined by interpolating the neighboring data set points. The opacity of the point and the
intensity of the light at that point determine the intensity of the light at the point directly behind it.
Once the light intensity volume is created, the rendering process proceeds as normal until the shading calculations are reached. Here, the light intensity is needed to scale the diffuse and
specular reflections. It is found by transforming the point into the light intensity volume and interpolating the surrounding values.
Adding this feature would have a significant impact on performance (I'd guess rendering time would double) and memory requirements, however, the result would have the effect of casting shadows
within the volume.
After the point has been shaded, the color is diminished by the surface strength and then interpolated with the background color (the color of the light passing through the point) based on the
opacity of the data point. This result replaces the previous image color of the corresponding pixel.
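The final per-pixel update is plain arithmetic; the sketch below assumes the three-float Color struct described in the Color Model section and omits the macro wrappers actually used.

    /* Attenuate the shaded color by the surface strength, then blend it
       over the pixel already in the image using the opacity as the ratio. */
    shaded.r *= strength;  shaded.g *= strength;  shaded.b *= strength;

    image[pixel].r = opacity * shaded.r + (1.0f - opacity) * image[pixel].r;
    image[pixel].g = opacity * shaded.g + (1.0f - opacity) * image[pixel].g;
    image[pixel].b = opacity * shaded.b + (1.0f - opacity) * image[pixel].b;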
Once all of the screen points have been rendered, the final image is complete and returned to the calling routine.
This rendering algorithm is implemented by the render.h, render.c module. Vol.c is the interface routine to Data Explorer and the mainline routine for this program.
There are many inherent benefits to this algorithm:
1. Interpolation is a simple, integral part of the data sampling process that results in a smooth image given an adequately sampled data set. If further smoothing is required, this process can be
expanded to provide a filtering operation that would use more of the surrounding data points in the interpolation.
2. Compositing an external image is done with very little impact on the rendering algorithm and performance.
3. The rendering time is mostly a function of the image size and not of the size of the data set.
4. Each depth plane is a complete image in itself and could be output to a separate file to make a cross-sectional animation of the data set.
The only drawback is that the volumetric data set is accessed randomly. This increases cache misses for data fetches and therefore limits performance.
There were many enhancements made after the initial implementation of the algorithm to improve performance.
Subroutine overhead in an application like this becomes significant because the same processing is repeated many times. This overhead was mostly eliminated by using macros for the short subroutines
of about 2-3 lines. There were also a couple of calls to longer subroutines called within the main rendering loop that were replaced with the actual subroutine code. These routines had been broken
out for readability reasons alone.
Cache misses are a big performance impediment when dealing with large amounts of data. Accessing data sequentially, whenever possible, is the only way to address this problem. The macros and inlined
subroutines mentioned above helped to reduce cache misses on instruction fetches by extending the number of instructions that follow sequentially in memory. Cache misses on data access were
controlled by accessing the image and z-buffer memories sequentially and by reducing the random access of the interpolation process as much as possible.
Computation time can be reduced by eliminating instructions with long execution times. Where possible, equations were rearranged to eliminate these long operations or reduce the total number of
operations required. As mentioned before, the "loop" nature of this application magnifies any delay caused by inefficient calculations. Therefore, any calculations that could be moved outside of the
rendering loop were.
Subroutines, such as the generic matrix multiply procedure, are very useful because of the flexibility they provide. However, that flexibility comes at the price of performance. Therefore, generic
routines that were called from the rendering loop were replaced by macros that handled the job directly. The vector-matrix multiply macro is an example.
Conditional checks can go a long way in eliminating unnecessary calculations; they can also diminish performance if not chosen carefully. Three conditional checks are used by this algorithm. The
first is required for compositing an external image. If the point is behind the composite image, the point is never transformed, interpolated or shaded. The second check is done after the screen
point is mapped to the data set. If the point falls outside of the data set, it is not interpolated or shaded. The third check is made after the interpolation and checks the opacity and strength of
the point. If either is zero, then the point is completely transparent and is not shaded.
Rendering Time
The conditional checks used to optimize the performance cause the rendering time to vary depending on how much of the volume is hidden behind the composite image, how many of the screen points map to
the data set and the properties of the data set (e.g. the number of points that are transparent). Therefore, the worst case rendering time occurs when every single point in screen space maps to the
data set, there are no hidden or transparent voxels and each point has some surface strength.
Since times vary depending on the machine used, an overall worst case time cannot be given. To get an idea of the time it will take to render a data set, the worst case times for a 320x240 image with
68 depth samples on an IBM R6000 workstation and an SGI Onyx are given:
63x69x68 data set
d = 2000.0
f = 2013.6
fov = 0.00315 (0.1805 degree perspective angle)
aspect = 1.09524
SGI Onyx: 3 minutes 18 seconds
IBM R6000: 2 minutes 25 seconds
The effect that camera angle has on rendering time is based in the size of the view volume that screen space maps to. As the camera angle approaches 45 degrees on any axis, the orientation of the
data set will increase the distance between the near and far clipping planes. This requires more depth planes to be processed in screen space in order to form the final image. (Note that the XY
sampling remains the same because it is determined by the image size.) This is balanced out, though, by the increased number of screen points that don't map to the data set and, therefore, are not
interpolated or shaded.
Rendering time increases linearly as the number of screen space samples is increased along any one axis. Therefore, doubling the number of depth samples will double the rendering time. Remember that
the number of depth samples depend on the size and resolution of the data set and on the camera angle. Similarly, since the image size determines the number of X and Y samples taken in screen space,
doubling the image size (e.g. 320x240 to 640x480) will result in quadrupling the rendering time.
Color Model: ( color.h)
An RGB color model is being used for this application. This makes the color structure a three vector of floats (r,g,b). Five macros are provided to perform operations on this data structure. They
C_COLOR -- Creates an RGB color data type given three scalars.
C_ADD -- Adds two colors. This routine is used when
multiple colors contribute to a final color (e.g.
diffuse + specular).
C_SCALE -- Multiplies the R, G and B color components by a
scalar. This routine is used to attenuate a color.
C_MULTIPLY -- Multiplies each component (R, G, B) of one color
by the corresponding component of the other. This
routine is used to determine the resulting color
when a colored surface is illuminated by a colored
light source.
C_MIX -- Merges two colors by interpolating the resulting
color using the given ratio. This routine is used
to overlay individual pixels from one image onto
another, using the opacity as the mix ratio.
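Plausible definitions for two of these macros are shown below, assuming the color type is a struct of three floats and the macros write through an output argument; the actual color.h may differ.

    typedef struct { float r, g, b; } Color;

    /* C_SCALE: attenuate a color by a scalar. */
    #define C_SCALE(out, c, s) \
        ((out).r = (c).r*(s), (out).g = (c).g*(s), (out).b = (c).b*(s))

    /* C_MIX: out = t*c1 + (1-t)*c2, with t being, e.g., the opacity. */
    #define C_MIX(out, c1, c2, t) \
        ((out).r = (t)*(c1).r + (1.0f-(t))*(c2).r, \
         (out).g = (t)*(c1).g + (1.0f-(t))*(c2).g, \
         (out).b = (t)*(c1).b + (1.0f-(t))*(c2).b)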
Vector Operations: ( vector.h)
Vectors are represented by three floats (x,y,z). Ten macros are provided to perform operations on this data type. They are:
V_VECTOR -- Creates a vector type given three scalars.
V_CREATE -- Creates a vector type given two vectors.
V_DOT_PRODUCT -- Calculates the dot product of two vectors.
V_CROSS_PRODUCT -- Calculates the cross product of two vectors.
V_MAGNITUDE -- Calculates the magnitude of a vector.
V_ADD -- Adds two vectors.
V_SUBTRACT -- Subtracts two vectors.
V_SCALE -- Scales a vector.
V_REFLECT -- Calculates the reflected vector given a normal
and incident vectors. Input vectors must be normalized.
V_NORMALIZE -- Normalizes a vector.
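Two representative definitions, under the same struct-of-floats assumption (<math.h> needed for sqrt); the actual vector.h may differ.

    typedef struct { float x, y, z; } Vector;

    #define V_DOT_PRODUCT(a, b)  ((a).x*(b).x + (a).y*(b).y + (a).z*(b).z)
    #define V_MAGNITUDE(a)       ((float)sqrt(V_DOT_PRODUCT(a, a)))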
A matrix is handled as a pointer-to-float type. Therefore, the size of the matrix must be indicated in some form to the routine operating on the matrix. Two of the three operations provided by this
module assume the matrices passed in are of a given size. The third provides parameters to specify the size of the matrices input.
M_MULTIPLY -- Multiplies two matrices. This routine will
handle matrices of any size. The size of the
matrices are input parameters. **1
M_MULT_WV -- Multiplies a matrix with a vector. This is a macro
that assumes a four vector is being multiplied with
a 4X4 matrix. This macro was added to improve the
performance of this common operation. **1
M_INVERSE -- Calculates the inverse matrix of a 3x3 matrix.
**1 result matrix must be unique (i.e. it cannot be one of the operand matrices). If not, the operand will be destroyed during the calculation and an incorrect result will occur.
Shading Model: ( shade.h)
This module provides the ambient, diffuse and specular shading calculations based on the Phong reflection model. It makes use of macros provided by the Color and Vector modules. Note that all input
vectors to these macros must be normalized and that white light is assumed.
AMBIENT -- Calculates the ambient term.
DIFFUSE -- Calculates the diffuse term.
SPECULAR -- Calculates the specular term.
This module contains the routines needed to calculate the transform matrices to do conversions between grid coordinates, world coordinates, view coordinates and screen coordinates.
AUTO_LIMITS -- Calculates the near and far clipping planes in
view space.
AUTO_RESOLUTION -- Calculates the resolution that screen space is
sampled at for the rendering.
RT_PRODUCT -- Forms a rotation:translation matrix product.
TR_PRODUCT -- Forms a translation:rotation matrix product.
VIEW_ROTATION -- Calculates the camera rotation matrix given the
camera position, up direction and look-to point.
GRID_TO_WORLD -- Calculates the grid-to-world and world to-grid
transform matrices.
WORLD_TO_VIEW -- Calculates the world-to-view and view-to-world
transform matrices.
VIEW_TO_SCREEN -- Calculates the view-to-screen and screen-to-view
transform matrices.
TRANSFORMS -- Main routine for forming the transformation
matrices and other inputs required by the rendering routine.
Input/Output: ( io.h, io.c)
This module contains the routines that allocate resources, handle file io and write messages to the screen.
ALLOCATE_IMAGE -- Allocates the memory required for the image and
initializes it with the background color.
READ_ZBUFFER -- Opens and reads in the contents of a z-buffer file.
DEBUG_WRITE -- Debug routine for writing out the contents of
various data structures.
WRITE_ERROR -- Converts status values into error messages.
Appendix A: (Matrix Formation)
The module xform.c contains all the routines necessary to create the transforms matrices required as inputs by the rendering program. The transforms routine in this module is the main routine that
calculates the forward and inverse transform matrices for the grid-to-world (Tgrid and Tgrid_inv), world-to-view (Tview and Tview_inv) and view-to-screen (Tpersp and Tpersp_inv) conversions. The
product of Tpersp_inv*Tview_inv*Tgrid_inv is taken to create the overall inverse transform matrix from screen to grid coordinates.
The grid-to-world matrix is formed by multiplying the grid deltas matrix with the grid origin matrix. The grid deltas matrix is a rotation operation that reduces to a scaling operation for orthogonal
grids. Below are the individual and resulting 4X4 matrices:
Grid Deltas Matrix: Grid Origin Matrix:
Xdx Xdy Xdz 0.0 1.0 0.0 0.0 0.0
Ydx Ydy Ydz 0.0 0.0 1.0 0.0 0.0
Zdx Zdy Zdz 0.0 0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0 Ox Oy Oz 1.0
Grid-To-World Matrix:
Xdx Xdy Xdz 0.0
Ydx Ydy Ydz 0.0
Zdx Zdy Zdz 0.0
Ox Oy Oz 1.0
Xdx reads "the rate of change in the world X-axis along the grid X-axis".
Xdy reads "the rate of change in the world Y-axis along the grid X-axis".
Ydx reads "the rate of change in the world X-axis along the grid Y-axis".
O = [Ox,Oy,Oz] is the location of the grid origin.
The inverse of the grid-to-world matrix is formed by multiplying the inverse grid translation matrix with the inverse grid rotation matrix. The result is shown below:
Inverse Grid Origin Matrix: Inverse Grid Deltas Matrix:
1.0 0.0 0.0 0.0 Xdx Ydx Zdx 0.0
0.0 1.0 0.0 0.0 Xdy Ydy Zdy 0.0
0.0 0.0 1.0 0.0 Xdz Ydz Zdz 0.0
-Ox -Oy -Oz 1.0 0.0 0.0 0.0 1.0
World-To-Grid Matrix:
Xdx Ydx Zdx 0.0
Xdy Ydy Zdy 0.0
Xdz Ydz Zdz 0.0
(-O dot X) (-O dot Y) (-O dot Z) 1.0
X = [Xdx,Xdy,Xdz]; Y = [Ydx,Ydy,Ydz]; Z = [Zdx,Zdy,Zdz]
Note that the matrix notation for the world-to-grid matrix above is simplified by assuming that the rotation matrix from the grid-to-world transform is orthonormal and therefore the inverse matrix is
just the transpose of the matrix. In the actual code, this assumption is not made and the inverse matrix is computed.
The world-to-view matrix is formed by multiplying the camera translation matrix with the camera rotation matrix. The individual and result matrices are shown below:
Camera Translation Matrix: Camera Rotation Matrix:
1.0 0.0 0.0 0.0 Ux Vx Nx 0.0
0.0 1.0 0.0 0.0 Uy Vy Ny 0.0
0.0 0.0 1.0 0.0 Uz Vz Nz 0.0
-Cx -Cy -Cz 1.0 0.0 0.0 0.0 1.0
World-To-View Matrix:
Ux Vx Nx 0.0
Uy Vy Ny 0.0
Uz Vz Nz 0.0
(-C dot U) (-C dot V) (-C dot N) 1.0
C is the camera position, N is camera direction vector, V is the camera "up" direction vector and U is the camera "right" direction vector (left-handed coordinate system).
The inverse of the world-to-view matrix is formed by multiplying the inverse camera rotation matrix with the inverse camera translation matrix. The result is shown below:
Inverse Camera Rotation Matrix: Inverse Camera Translation Matrix:
Ux Vx Nx 0.0 1.0 0.0 0.0 0.0
Uy Vy Ny 0.0 0.0 1.0 0.0 0.0
Uz Vz Nz 0.0 0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0 Cx Cy Cz 1.0
View-To-World Matrix:
Ux Uy Uz 0.0
Vx Vy Vz 0.0
Nx Ny Nz 0.0
Cx Cy Cz 1.0
The same assumption, concerning orthonormality, made in the world-to-grid matrix formation applies here, however, in this case, the rotation matrix should actually be orthonormal.
The view-to-screen matrix is just the linear perspective matrix and is shown below. After a point is transformed by this matrix, it must be divided by its w-component to return it to 3-vector space.
This divide is non-linear.
View-To-Screen Matrix:
d/w 0.0 0.0 0.0
0.0 d/h 0.0 0.0
0.0 0.0 f/(f-d) 1.0
0.0 0.0 -df/(f-d) 0.0
w is the view window width, h is the view window height, d is the distance to the near clipping plane and f is the distance to the far clipping plane.
The inverse of the perspective matrix was taken and the result is shown below. Since this matrix does not account for the non-linear divide performed after the view-to-screen transform, points to be
transformed by this matrix must be multiplied by the view space depth prior to the operation.
Screen-To-View Matrix:
w/d 0.0 0.0 0.0
0.0 h/d 0.0 0.0
0.0 0.0 0.0 (f-d)/-df
0.0 0.0 1.0 1/d
Appendix B: (Z-Buffer Format)
The format of the z-buffer file is shown below. The subroutine read_zbuffer is used to read in this information.
field           data type    quantity        size (bytes)
--------------  -----------  --------------  --------------
magic number    int          1               4
width           short int    1               2
height          short int    1               2
matrix 1        float        16              64
matrix 2        float        16              64
depth values    float        width*height    width*height*4
The magic number is 0x2F0867AB.
Matrix 1 is the product of the view and perspective matrices.
Matrix 2 is the view matrix.
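Reading this layout is straightforward; the sketch below (inside a routine like read_zbuffer, with <stdio.h> and <stdlib.h> assumed) reads the fields in order, assumes native byte order and 4-byte floats as the size column implies, and omits error checking.

    int    magic;
    short  width, height;
    float  matrix1[16], matrix2[16];
    float *depth;
    FILE  *fp = fopen(zbuf_name, "rb");

    fread(&magic,  sizeof(int),   1,  fp);    /* must be 0x2F0867AB   */
    fread(&width,  sizeof(short), 1,  fp);
    fread(&height, sizeof(short), 1,  fp);
    fread(matrix1, sizeof(float), 16, fp);    /* view * perspective   */
    fread(matrix2, sizeof(float), 16, fp);    /* view                 */
    depth = malloc((size_t)width * height * sizeof(float));
    fread(depth, sizeof(float), (size_t)width * height, fp);
    fclose(fp);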
Appendix C: (Error Messages)
Status Message
------- ------------------------------------------------------
100 ERROR: File read error.
101 ERROR: Z-buffer file has wrong magic number.
102 ERROR: Z-buffer size is not compatible with image size.
103 ERROR: Could not allocate z-buffer memory.
104 ERROR: Premature EOF on z-buffer read.
105 ERROR: Could not open z-buffer file.
106 ERROR: Could not allocate image memory.
107 ERROR: Data set is not volumetric.
108 ERROR: Far and near clipping planes are equal.
109 ERROR: Perspective angle is zero.
110 ERROR: Increment value of zero detected.
111 ERROR: View direction vector is null.
112 ERROR: Up direction vector is null.
113 ERROR: Light source direction vector is null.
1. Alan Watt, "3D Computer Graphics", Addison-Wesley, 1993
2. Robert A. Drebin, Loren Carpenter and Pat Hanrahan, "Volume Rendering", Computer Graphics (SIGGRAPH '88 Proceedings) 22(4) pp. 65-73 (August 1988)
3. Steve Hill, "Tri-Linear Interpolation", Graphic Gems 4, pp. 521-525, Academic Press, Inc., 1994
4. James D. Foley, "Computer Graphics: Principles and Practice", Addison-Wesley, 1990
5. Philip J. Davis, "The Mathematics of Matrices", Wiley, 1973, pp.179-183
6. Ed Catmull and Alvy Ray Smith, "3-D Transformations of Images in Scanline Order", ACM 1980, pp. 279-284 | {"url":"http://www.nbb.cornell.edu/neurobio/land/OldStudentProjects/cs490-94to95/ramidon/Vol.html","timestamp":"2014-04-19T01:57:49Z","content_type":null,"content_length":"40714","record_id":"<urn:uuid:94c46104-1ea3-493e-9339-94989e7f0a86>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electronic Circuit Analysis and Design 2nd edt. by Donald A. Neamen - solution m
Posted on 2007-06-24, updated at 2012-04-04. By anonymous.
Electronic Circuit Analysis and Design 2nd edt. by Donald A. Neamen - solution manuel.pdf
rar password
14 Comments for "Electronic Circuit Analysis and Design 2nd edt. by Donald A. Neamen - solution m":
1. please i want to this book
second edition
2. i wan solution for Electronic Circuit Analysis and Design 2nd edt by Donald A. Neamen urgently.. please let me know where can i get it?
3. i wan solution for Electronic Circuit Analysis and Design 2nd edt by Donald A. Neamen urgently.. please let me know where can i get it?
4. i want to download the book "Eletronic circuit analysis and design" 2nd edt by Donald A.Neamen urgently
5. i also want dis book and its manual ,anam u 4m pakistan? can u guide me a bit.my email address is sani_2008@live.com... plz reply if read
WWW.solutionmanual#2010.blogspot.com------->updated list
remove "#" from emails,tanx
U CAN FIND CONTACT INFORMATION AND INFORMATION ABOUT MY DATABASE IN FOLLOWING LINK!!!!
plz download more informaion from this link:
u can find my email in this link :
these r just some part of our solution manuals!!!!
1-Advanced Engineering Electromagnetics by Constantine Balanis
2-Antenna Theory and Design, 2ed+1ed, by Warren Stutzman
3-Circuit Design with VHDL by Volnei A. Pedroni
4-Convex Optimization by Stephen Boyd and Lieven Vandenberghe
5-Digital signal processing 2+1ed, by Sanjit K. Mitra
6-Partial differential equations, lecture notes by Neta
7-Fundamental Methods of Mathematical Economics 4ed, by Alpha C. Chiang
8-C++ How to program 3ed, by Harvey M. Deitel, Paul J. Deitel
9-Signal Detection And Estimation by Mourad Barkat
10-Differential Equations and Linear Algebra u/e, by Edwards & Penney
11-An Introduction to the Mathematics of Financial Derivatives u/e,by Salih N. Neftci
12-Materials and Processes in Manufacturing, 9 edition,byDegarmo
13-Mathematics for Economists u/e, by Carl P. Simon & Lawrence Blume
14-Digital Systems : Principles and Applications, 10th Edition,byRonald Tocci
15-Fundamental Methods of Mathematical Economics,4rd Edition, by Alpha C. Chiang
16-Linear Algebra Done, 2ed, Sheldon Axler
17-Physics: Principles with Applications,6ed, Douglas C. Giancoli
Elemntary Classical Analysis, solution-manual,Chap.1.to.4 Marsden 18-
19- Field and Wave Electromagnetics (2nd Edition),by David k.cheng,(original version)
20- Automatic Control Systems),8ed, by Benjamin C. Kuo (Author), Farid Golnaraghi
21- Signal Processing and Linear Systems, by BP Lathi
22- Signals and Systems ,by BP Lathi
23- Signals and Systems, 2ed,by haykin
24- Discrete Time Signal Processing,2ed,oppenheim
25- Digital Communications, by Peter M. Grant
26- Digital and Analog Communication Systems, 5ed, Edition,by Leon W. Couch
27- Elements of Information Theory ,2ed,by Thomas M. Cover
28- Computer network ,4ed ,by Andrew Tanenbaum
29- Wireless Communications: Principles and Practice,1ed,by Theodore S. Rappaport
30- Digital Communications: Fundamentals and Applications,2ed, by Bernard Sklar
31- Principles Of Digital Communication And Coding Andrew J Viterbi, Jim K. Omura
30- First Course in Probability, (7th Edition),by Sheldon Ross
31- Digital Signal Processing (3th Edition) by John G. Proakis
32- Principles of Communication: Systems, Modulation and Noise, 5ed by R. E. Ziemer
33- Communication Systems Engineering (2ed),by John G. Proakis, Masoud Salehi
34- Communication Systems 4th Edition by Simon Haykin
35- Modern Digital and Analog Communication Systems by B. P. Lathi
36- Probability, Random Variables and Stochastic Processes with Errata,4ed, Papoulis
37- Electronic Circuit Analysis and Design ,2ed,by Donald A. Neamen
38- Analysis and Design of Analog Integrated Circuits,4ed, by Grey and Meyer
39-Elements of Electromagnetics ,2ed+3ed,by Matthew N. O. Sadiku
40-Microelectronic Circuits (4ed+5ed),by Sedra
41-Microwave Engineering , 2ed,3ed,by POZAR
42-Antennas For All Applications ,3ed, by Kraus, Ronald J. Marhefka
43-Introduction to Electrodynamics (3rd Edition),by David J. Griffiths
44-Thomas' Calculus Early Transcendentals (10th Edition)
45- Recursive Methods in Economic Dynamics,u/e,by Claudio Irigoyen
46-Engineering Electromagnetics, 6ed+7ed, by William Hayt and John Buck
47-Fundamentals of Logic Design - 5th edition,by Charles H. Roth
48-Fundamentals of Solid-State Electronics,1ed,by Chih-Tang Sah
49-Journey into Mathematics: An Introduction to Proofs , by Joseph. Rotman
50-Probability&Statistics for Engineers&Scientists, 8ed,Sharon Myers, Keying Ye
51- Physics for Scientists and Engineers with Modern Physics,3ed,Douglas C. Giancoli
52- Mathematical Methods for Physics and Engineering,3ed,by K. F. Riley,M. P. Hobson
53- Econometric Analysis, 5ed,by william h. Greene
54- Microeconomic Analysis, 3ed,by Hal R. Varian
55- A Course in Game Theory Solutions Manual, Martin J. Osborne
56- Fundamentals of Electronic Circuit Design (David J. Comer, Donald T. Comer)
57- Options, Futures and Other Derivatives, 4ed+5ed ,by John Hull, John C. Hull
58- Adaptive Control, 2ed , by Karl J Astrom
59- A First Course in Abstract Algebra, 7ed ,by John B. Fraleigh
60-Classical Dynamics of Particles and Systems ,5ed,by Stephen T. Thornton, B.Marion
61- Applied Statistics and Probability for Engineers: Douglas C. Montgomery, George
62- Advanced Engineering Mathematics ,8Ed+9ed, by Erwin Kreyszig
63- Digital Design, 4e, by M. Morris Mano, Michael D. Ciletti
64-Cryptography and Network Security (4th Edition), William Stallings
65-Communication Networks,2ed, by Alberto Leon-Garcia
66-Digital Signal Processing,u/e, by Thomas J. Cavicchi
67- Digital Integrated Circuits-A DESIGN PERSPECTIVE, 2nd,by Jan M. Rabaey, Anantha
68- A First Course in String Theory, Barton Zwiebach
69- Wireless Communications ,u/e,Andrea Goldsmith:
70- Engineering Circuit Analysis, 6Ed+7ed, by Hayt
71- Intoduction to electric circuits,7/E,by Richard C. Dorf,James A. Svoboda
72- Introduction to Statistical Quality Control, 4th Edition,by Douglas C. Montgomery
73- Introduction to Robotics Mechanics and Control, 2nd Edition,by John J. Craig
74- Physics for Scientists and Engineers, 6ed,by Serway and Jewett's, Volume One
75- Introduction to Algorithms, 2ed,Thomas H. Cormen, Charles E. Leiserson,
76- Microelectronic Circuit Design,3ed, by Jaeger/Blalock
77- Microwave And Rf Design Of Wireless Systems by David M. Pozar
78- An Introduction to Signals and Systems,1ed, by John Stuller
79-Control Systems Engineering, 4th Edition,by Norman S. Nise
80-Physics for Scientists and Engineers ,5ed,A. Serway ,vol1
81-Laser Fundamentals ,2ed, by William T. Silfvast
82-Electronics, 2Ed,by Allan R. Hambley
83- Power Systems Analysis and Design ,4ed, by Glover J. Duncan
84-Basic Engineering Circuit Analysis, 8th Edition,by J. David Irwin
85- Satellite Communications ,1ed, by Timothy Pratt
86- Modern Digital Signal Processing by Roberto Crist
87- Stewart's Calculus, 5th edition
88- Basic Probability Theory by Robert B. Ash
89- Satellite Communications,u/e,by Timothy Pratt
90- the Econometrics of Financial Markets ,u/e,by Petr Adamek John Y. Campbell
91- Modern Organic Synthesis An Introduction by Michael H. Nantz, Hasan Palandoken
92-Fundamentals of Quantum Mechanics For Solid State Electronics and . by C.L. Tang
93-Basic Probability Theory,u/e,by Robert B. Ash
94- a first course in differential equations the classic,5ed, Dennis G. Zill
95-An Introduction to the Finite Element Method (Engineering Series),3ed, by J Reddy.
96-Semiconductor Physics And Devices ,3ed,by Donald Neamen
97-Advanced Modern Engineering Mathematics (3rd Edition) by Glyn James
98-Database Management Systems,3ed, Raghu Ramakrishnan, Johannes Gehrke,
99- Techniques of Problem Solving by Luis Fernandez
100- Contemporary Engineering Economics (4th Edition),by Chan S. Park
101- Fundamentals Of Aerodynamics ,3ed, by - John D. Anderson
102- Microeconomic Theory ,u/e, Andreu Mas-Colell, Michael D. Whinston, R. Green
103- Introduction to Solid State Physics ,8ed,by Charles Kittel
104- Intermediate Accounting, 12ed,Donald E. Kieso, Jerry J. Weygandt, Terry D.
105- Introduction to VLSI Circuits and Systems,u/e2001, by John P. Uyemura
106-Special Relativity by Schwarz and Schrawz
107- Microprocessor Architecture, Programming with the 8085,u/e,by mzidi
108- Organic Chemistry (4th Edition) , by Paula Y. Bruice
109- Optimal Control Theory An Introduction ,by D.E.Kirk(selected problems)
110- Operating System Concepts ,7ed+6ed , Abraham Silberschatz, Peter Baer Galvin
111- Materials Science and Engineering: An Introduction.,6ed, by William D. Callister Jr
112- An Introduction to Ordinary Differential Equations,u/e, by Robinson.J.C
113- Data Communications and Computer Networks,7ed,by William Stallings
114- Fundamentals of Electric Circuits 2ed+3ed Charles Alexander, Matthew Sadiku
115-Electrical Machines, Drives, and Power Systems, 6ed , by Theodore Wildi
116- Probability and Stochastic,2ed,Roy Yates, David J. Goodman
117-Manual of Engineering Drawing, Second Edition, Colin Simmons, Dennis Maguire
118- Thomas' Calculus, 11th Edition, George B. Thomas, Maurice D. Weir, Joel Hass,
119- Guide to Energy Management, 5 Edition , Klaus-Dieter E. Pawlik
120- Analytical Mechanics ,7ed, Grant R. Fowles
121- Computer Networks: A Systems Approach,2ed,Larry L. Peterson, Bruce S. Davie
122- Statistics and Finance: An Introduction,2004, by David Ruppert.
123- Computer Organization and Architecture: Designing for Performance,7ed, William Stallings
124- Fundamentals of Signals and Systems ,2ed,by M.J. Roberts
125-Satellite Communications, 3ed,by Dennis Roddy
126- Structural Analysis,3ed, Aslam Kassimali
127- Mathematics for Economics - 2nd Edition ,Michael Hoy, John Livernois, Chris McKenna
128- Elementary Mechanics and Thermodynamics by J. Norbury(2000)
129- Optical Fiber Communications,3ED, Gerd Keiser, Gerd Keiser
130- Device Electronics for Integrated Circuits,3ed,by Richard S. Muller
131- Elementary Linear Algebra,student solution manual 9ed Howard Anton
132- Modern Control Systems (11th Edition) ,Richard C. Dorf, Robert H. Bishop
133- Advanced Engineering Mathematics,8ed+9ed,by Erwin Kreyszig
134-Computer Organization and Design (3rd edition) by David A. Patterson
135-Advanced Financial Accounting 8ed,by Richard Baker+testbank
136- Probability And Statistics For Engineering And The Sciences,3ed,by By HAYLER
137- An Introduction to Numerical Analysis,u/e, by Endre Süli
138- Introduction to queueing theory ,2ed, Robert B Cooper
139- Managerial Accounting ,12th Edition,Ray Garrison, Eric Noreen(testbank)
140- Fundamentals of Corporate Finance ,8ed, Stephen A. Ross
141- Artificial Intelligence: A Modern Approach (2ed) ,by Stuart Russell, Peter Norvig)
142- Electric Circuits (7 th +8th Edition) , by James W. Nilsson, Susan Riede
150- Structure and Interpretation of Signals and Systems ,1ed, Edward A. Lee, Pravin
151- Engineering Mechanics - Dynamics (11th Edition) ,by Russell C. Hibbeler
152- Elementary Principles Of Chemical Processes,u/e Richard M. Felder
153- Recursive Macroeconomic Theory ,2ed, by Lars Ljungqvist, Thomas J. Sargent
154- Fracture Mechanics: Fundamentals and Applications, 2ed,Ted L. Anderson
155- Transport Phenomena, 2nd Edition , R. Byron Bird, Warren E. Stewart, Edwin N.
156- Electric Machinery Fundamentals ,1ed+4ed,Stephen J. Chapman
157-Numerical Methods for Engineers by Steven C. Chapra
158-Operating Systems: Internals and Design Principles ,4ed,by William Stallings
159- Power Electronics: Converters, Applications,2ed+3ed, by Ned Mohan
160- Optimal State Estimation:Kalman, H Infinity, Nonlinear Approaches by Dan Simon
161- Problems and Solutions on Atomic,Nuclear and Particle Physics by YungKuo Lim
162-Electronic Devices and Circuits ,Jacob Millman, Christos C.Halkias
163-An Introduction to Mathematical Statistics,4ed,by Richard J.Larsen, Morris L.Marx
164-University Physics with Modern Physics, 12e, Young
165- Principles And Applications of Electrical Engineering,u/e,by Giorgio Rizzoni
166-Design with Operational Amplifiers & Analog Integrated Circuits (3e) by Franco
167- Statistical Digital Signal Processing and Modeling,u/e, by Monson H. Hayes
168- Power Systems Analysis by Hadi Saadat
169-Partial Differential Equations with Fourier Series and Boundary Value Problems (2Ed) ,by Nakhle H. Asmar
170-Partial Differential Equations An Introduction,by K. W. Morton, D. F. Mayers
171-Design with Operational Amplifiers and Analog Integrated Circuits, 3rd edt. by Franco, Sergi
172- Intorducation to General, Organic & Biochemistry 5th Ed ,by Frederick Bettelheim, Jerry March(TEST BANK)
173- Organic & Biochemistry,3ed, by Frederick Bettelheim, Jerry March(TEST BANK)
174-Fundamentals of Machine Component Design, 3rd Edition, Robert C. Juvinall, Kurt
175-Fundamentals of Fluid Mechanics(Student Solution Manual)By Bruce R. Munson, Donald F. Young, Theodore H. Okiishi
176-Kinetics of Catalytic Reactions ,u/e, M. Albert Vannice
177- Lectures on Corporate Finance,2ed, by Peter L. Bossaerts
178- Advance Functions & Introductory Calculus ,McLeish, Montesanto, Suurtamm
179- Fundamentals of Chemical Reaction Engineering ,Mark EE Davis, Robert JJ Davis
180- Statistical Inference ,2ed,George Casella, Roger L. Berger
181- Computer Architectur Pipelined and Parallel Processor Design by Michael J. flynn
182-Investment Analysis & Portfolio Management, 7ed,by Reilly and Brown
183-Elementary Number Theory, 5th Edition, Goddard
184- Principles of Electronic Materials and Devices,2ed, S.O. Kasap
185-Fundamentals of Fluid Mechanics Bruce R. Munson, Donald F. Young,
186-Problems In General Physics ,2ed,by Irodov
187-fundamentals of machine component design ,3ed, by Juvinall, Marshek
188- Zill.Differential.Equations.5thEd.Instructor.Solution.Manual
189- Device Electronics for Integrated Circuits,3ed, Richard S. Muller
190- Probability for Risk Management Matthew J. Hassett, Donald G. Stewart
191-Problem Solving Wih C++ : The Object of Programming, 6ed,Walter Savitch's
192- Auditing and Assurance Services (12th Edition) ,Alvin A Arens, Randal J. Elder,
193- Engineering Economic Analysis ,9ed, Donald G. Newnan, Ted G. Eschenbach,
194-Introduction to Medical Surgical Nursing ,4ed,by Linton
195- Discrete Mathematics and its Applications, Rosen, 6th Ed (Ans to Odd problems)
196- Economics by N. Gregory Mankiw(SOL+TESTBANK)
197-Control Systems ,2ed,by Gopal
198- Process Systems Analysis and Control by Donald Coughanowr
199-Differenial Equations by With Boundary Value Problems 5ed by ZILL&CULLEN
200- A first course in differential equations the classic,7ed, Dennis G. Zill
201- Foundations of Electromagnetic Theory (u/e), by John R. Reitz, Frederick J. Milford(in Spanish)
202-Theory & Design for Mechanical Measurements 4th edition by Richard S. Figliola & Donald E. Beasley
203-Fundamentals of Digital Logic With Vhdl Design, 1ed+2ed, by Stephen Brown, Zvonko Vranesic
204-microprocessor 8085 ramesh GAONKAR
205- Elementary Linear Algebra (5th Ed) by Stanley I. Grossman
206-Physical Chemistry 8th edition,by Atkins(Student solution manual)
207- Engineering Economic Analysis (9780195335415) Donald G. Newnan, Ted G. Eschenbach, Jerome P. Lavelle
208- introduction to Medical Surgical Nursing by Linton 4th edition
209- Classical Mechanics 2th Edition by Herbert Goldstein
210-Fundamentals of Wireless Communication, by David Tse
211-Fourier and Laplace Transform – by Antwoorden
212-C++ for Computer Science and Engineering,3ed,by Vic Broquard
213- Concepts of Programming Languages ,7ed,by Robert Sebesta
214- Principles of Macroeconomics ,u/e,N. Gregory Mankiw
215- Analog Integrated Circuit Design , u/e, by Johns, Martin
216-introduction to fluid mechanics 6th edition By Alan T. McDonald, Robert W Fox
217-Mechanics of Fluids 8th Edn - Massey & John Ward-Smith
218-Introduction to Chemical Engineering Thermodynamic-Smith&Vannes Abbot,6Ed
219- Real Analysis 1st Edition by H. L. Royden
220- Engineering Fluid Mechanics, 7th ed,by Clayton T. Crowe, Donald F. Elger
221-Computer Organization , by Carl Hamacher, Zvonko Vranesic, Safwat Zaky
222- Fluid Mechanics With Engineering Applications,10ed,by E. John Finnemore, Joseph B. Franzini
223- Embedded System Design: A Unified Hardware/Software Introduction,by Vahid, Tony D. Givargis
224- Mobile Communications, 2ed, by Jochen Schiller
225-Computer Networking: A Top-Down Approach Featuring the Internet,2ed,by James F. Kurose, Keith W. Ross
226-Basics of Compiler Design ,updated2007, by Torben Mogensen
227- TCP/IP Protocol Suite by Behrouz A. Forouzan(Ans to ODD problems)
228- Data Communications and Networking by Behrouz Forouzan
229-Introduction to Fluid Mechanics.,6ed, by Robert W. Fox
230-Mechanical Vibrations (3rd Edition) by Singiresu S. Rao (same as 4th ed)
231-Fundamentals of Electromagnetics with Engineering Applications by Stuart M. Wentworth
232-Accounting Principles 8th Edition by Weygandt, Kieso and Kimmel
233-Managerial Accounting 12th Edition by Garrison, Noreen and Brewer
234-Intermediate accounting 12th edition by Kieso, Weygandt and Warfield
235-Advanced Accounting 9th Edition - Hoyle Schaefer, Doupnik
TEST BANK
236-Intermediate accounting 12th Update edition by Kieso, Weygandt and Warfield
237-Intermediate accounting 13th edition by Kieso, Weygandt and Warfield
238-Managerial Accounting 12th Edition by Garrison, Noreen and Brewer
239-Managerial Accounting 13th Edition by Garrison, Noreen and Brewer
240-Cost Accounting a Managerial Emphasis 13th Edition by Horngren, Datar, Foster Rajan and Ittner
241-Accounting Principles 8th Edition by Weygandt, Kieso and Kimmel
242-Accounting Principles 9th Edition by Weygandt, Kieso and Kimmel
243-Principles of Managerial Finance 12th Edition by Lawrence Gitman
244-South-Western Federal Taxation Comprehensive 2009, by Eugene Willis,William H.Hoffman James E. Smith and William A.Raabe
245-South-Western Federal Taxation Comprehensive 2010, by Eugene Willis,William H. Hoffman James E. Smith and William A.Raabe
246-Modern Advanced Accounting 10th Edition by Larsen
-Operations Management, 9th Edition, by Heizer
247-Principles of Marketing 12th Edition by Philip Kotler
248-Principles of Marketing 13th Edition by Philip Kotler
249-Serway Physics, 8th Edition
250-Management Information System By Haag, 7th Edition
251-Accounting Information System By Marshall Romney, 10th Edition
252-Accounting Information System By Marshall Romney, 11th Edition
250-Management By Robbins, 10th edition
251-Advanced Accounting 9th Edition - Hoyle, Schaefer, Doupnik
252-Advanced Accounting 10th Edition - Hoyle, Schaefer, Doupnik
253-Quantitative Analysis
254-Principles of Risk Management and Insurance, 10th edition, George E. Rejda
255-Audit & Assurance
256-Financial Accounting Tools, 4th edition
257-Fundamentals of Human Resource Management 9th ed by DeCenzo, David A./ Robbins, Stephen P
258-Accounting Principles 8th Edition by Weygandt, Kieso and Kimmel
259-Managerial Accounting 12th Edition by Garrison, Noreen and Brewer
260-Intermediate accounting 12th edition by Kieso, Weygandt and Warfield
261-Advanced Accounting 9th Edition – Hoyle Schaefer, Doupnik
262-Physics for Scientists and Engineers,2ed, Randall D. Knight
263-Electronic Devices 6ed+Electronic Devices(Conventional Flow Version)4ed by Thomas Floyd
264-Finite element techniques in structural mechanics. by Carl T. F. Ross
265-Basic Econometrics,4ed,Damodar N. Gujarati(student solution manual)
266-Probability and Statistical Inference,7ed, Robert V. Hogg, Elliot Tanis
267- Fundamental of Physics,7ed,by Halliday, Resnick and Walker(+TEST BANK)
268- Calculus - Jerrold Marsden & Alan Weinstein - Student Solution Manual,vol1
269- A First Course in Complex Analysis by Dennis Zill
270- Signals, Systems and Transforms ,4ed, C. L. Philips, J.M.Parr and Eve A.Riskin
271- An Interactive Introduction to Mathematical Analysis ,by Jonathan Lewin
272- Principles of Geotechnical Engineering , Braja M. Das
273- Semiconductor Devices;3nd Ed., 2006,Simon M. Sze, Kwok K. Ng
274- Unit Operations of Chemical Engineering (6th ed) by Warren McCabe, Julian Smith
275- Management advisory services:comprehensive guide by Agamata, Franklin
276- Pattern Recognition and Machine Learning,u/e, Christopher M. Bishop
277- Modern Digital Electronics by R Jain
278-RF Circuit Design: Theory and Applications by Reinhold Ludwig
279- Fundamentals of Applied Electromagnetics (5th Edition): Fawwaz T. Ulaby
280- Modern Organic Synthesis: An Introduction(2006),by Michael Nantz, Hasan Palandoken
281- Probability and Statistics for Engineering and the Sciences,6th ed,by Jay Devore
282- A Transition to Advanced Mathematics ,5th ed , Douglas Smith, Maurice Eggen, Richard St. Andre
283- Network Administration for Solaris 9 Operating environment SA-399
284-Signal Processing,1th ed,by James H. McClellan, Ronald W. Schafer, Mark A. Yoder
285-Recursive Methods in Economic Dynamics , Nancy L. Stokey, Robert E. Lucas Jr., Edward C. Prescott
286-Introduction to Operations Research ,7ed,Frederick S. Hillier, Gerald J. Lieberman
287-An Introduction to Database Systems by ,8ed, by C. J. Date
288- Fundamentals of Corporate Finance, 4th Edition (Brealey, Myers, Marcus)
289- Solid State Physics by Neil W. Ashcroft N. David Mermin
290- Microwave Transistor Amplifiers: Analysis and Design ,2th ed,by Guillermo Gonzalez
291- Fundamentals of Differential Equations (7th Edition, Kent B. Nagle, Edward B. Saff
292- Fundamentals of Differential Equations with Boundary Value Problems, (5th ed) by R. Kent Nagle
293- CMOS VLSI Design: A Circuits and Systems Perspective (3rd Edition) by Neil Weste
294- Networks and Grids: Technology and Theory (Information Technology: Transmission, Processing and Storage, BY Thomas G. Robertazzi
295- Electromagnetism: Principles and Applications ,BY Paul Lorrain
296- Engineering Mechanics: Dynamics (12th Edition) (9780136077916), Russell C. Hibbeler
297-Mechanics of Materials (2009) by James M. Gere, Barry J. Goodno
298 Organic Chemistry,Jonathan Clayden, Nick Greeves, Stuart Warren, Peter
299- Organic Chemistry, 7th by John E. McMurry
300- Fundamentals of Physics(7+8th edition) by Resnick,Halliday and Walker
301- Calculus Complete Course ,6th ed, Robert A. Adams
302-The Science and Engineering of Materials by Donald R. Askeland Frank Haddleton 4th edition
303-South-Western Federal Taxation 2010 Comprehensive Volume
304-Managerial Accounting,11e, GARRISON NOREEN BREWER(SM)
305-Understanding Basic Statistics Student Solutions Manual (4th Edition
306-Engineering mechanics dynamics ,6th edition meriam and kraige
307- General Chemistry, 9th edition, Ebbing, D.D.; Gammon
308- A Quantum Approach to Condensed Matter Physics Philip L. Taylor, Olle Heinonen
309- Semiconductor Devices by Simon M. Sze
310- Elementary Principles of Chemical Processes,3th ed,Richard felder
311- Fundamentals of Heat and Mass Transfer,5 Ed,Frank P. Incropera,David P.DeWitt
312-Economics of Money, Banking, and Financial Markets 9e Frederic S. Mishkin TEST BANK
313-Physical Chemistry by Thomas Engel & Philip Reid
314- Financial Management Principles and Applicattion 10e Keown
315- Cost Accounting: A Managerial Emphasis ,12 ed , Charles T. Horngren, Srikant M. Datar, George
316- Engineering Fluid Mechanics ,u/e, by Clayton T. Crowe, Donald F. Elger, John A. Roberson, Barbara C. Williams
315- Auditing and Assurance Services ,2ed, by Louwers
316- Performance Management: by Agunis
317- Understanding Basic Statistics,4ed,Charles Henry Brase, Corrinne Pellillo
318- Elementary Differential Equations and Boundary Value Problems 7th Edition: William E. Boyce
319- Principles of Economics. ,2ed, N. Gregory Mankiw(TB)
320-Biology with MasteringBiology (8th Edition) ,Neil A. Campbell Benjamin
321-Managerial Accounting, 8ed, by Ronald Hilton
322-Cost Accounting, 13e, horngren solution manual
323-Financial Management Theory And Practice, Brigham,10th Ed
324-Design and Analysis of Experiments 6E – Montgomery
325-Investment Analysis & Portfolio Management, 7e, by Reilly and Brown
326- Introductory Econometrics A Modern Approach,2ed,Woolridge
327- Semiconductor Device Fundamentals 1st edt, by Robert F. Pierret
328-Introduction to Chemical Engineering Thermodynamics 7th Ed. by Smith, Van Ness & Abbot
329-Cost Accounting, 7th edition, by Kinney and Raiborn,(SM)
330-Analytical and Numerical Approaches to Mathematical,U/E, by Peter V O'Neil
331- Digital Fundamentals (9th Edition),Thomas L. Floyd
332- Essentials of Modern Business Statistics with Microsoft Office Excel, 4th ed,(SM)
333- Basic Marketing Cannon Perreault TB 17e.zip
334- Data Structures and Problem Solving Using Java,3ED,: Mark Allen Weiss
335- Business Statistics a Decision-Making Approach 7e by Groebner, Shannon, Fry,
336- Financial Management Theory And Practice, 11th Ed, Brigham
337- Corporate Finance: A Focused Approach ,Michael C. Ehrhardt, Eugene F. Brigham
338-Corporate Finance: A Focused Approach -1st Edition(1,2,3,4,5,6,7,8,9,10,11,13,17)
Michael C. Ehrhardt , Eugene F. Brigham
339-Investments , 5th Ed,(chapters 10,11,12,14,15,16,21,22,24),by Zvi Bodie , Alex Kane
340-Database System Concepts ,4th ed, Abraham Silberschatz, Henry F. Korth,
341- Database System Concepts ,5th ed, Abraham Silberschatz, Henry F. Korth,
342- Operating System Concepts (7th ed) ,Abraham Silberschatz, Peter Baer Galvin, Greg
343-Modern Thermodynamics: From Heat Engines to Dissipative Structures Dilip Kondepudi
344- Embedded Microcomputer Systems: Real Time Interfacing by Jonathan W. Valvano
345-Statistics for Business and Economics - Anderson, Sweeny & Williams(A practical Approach)
346-Statistics for Business&Economics (8th Ed.) , David R Anderson, Dennis J Sweeney, Thomas A Williams
347-Introduction to Mechatronics & Measurement Systems ,2ed, David G. Alciatore, Michael B
348-Power Systems Analysis ,u/e, Arthur R. Bergen, Vijay Vitta
349-Fundamentals of Signals and Systems Using the Web and MATLAB (3nd Edition) Ed Kamen, Bonnie Heck
350-Digital Design: Principles and Practices ,u/e, John F. Wakerly
351-Microprocessors and Interfacing: Programming and Hardware ,2/e,Douglas V. Hall
352-Molecular Symmetry and Group Theory,u/e, by R. L. Carter T
353-Applied Partial Differential Equations with Fourier Series and Boundary Value Problems (4th ed) by Richard Haberman
354-Fluid Mechanics and Thermodynamics of Turbomachinery by S L DIXON
355-Introduction to Wireless Systems , P.M Shankar
356-Linear Algebra ,2ed, by Micheal Prophet,Douglas Shaw
357-Introduction to Environmental Engineering and Science ,2th ed, Gilbert M Masters
358-Modern Physics for Scientists and Engineers (3rd ed) Stephen T. Thornton
359-Electrical Engineering,Principles and Applications (3th Ed+ 4ed) by Allan R. Hambley
360-Advanced Engineering Mathematics by Dennis Zill
361-Hydraulics in Civil and Environmental Engineering by,4th ed , Andrew Chadwick
362-Concepts and Applications of Finite Element Analysis, 4th Edition),Robert D. Cook
363-Differential Equations and Linear Algebra(2nd Ed), Jerry Farlow,James E. Hall,Jean
364-Engineering_Economy Blank Tarquin (STUDENT SOLUTION MANUAL )
365-Vector Mechanics for Engineers: Statics and Dynamics ,8th ed, Ferdinand Beer, Jr.,E. Russell
366-Differential Equations and Linear Algebra (2nd Edition),Jerry Farlow, James E. Hall, Jean
367-Network Flows,u/e,by Ahuja, Magnanti & Orlin
368-Introduction to Environmental Engineering and Science ,3th ed, Gilbert M Masters
369-Nonlinear Programming 2nd Ed by Bertseka
370-Strength of Materials 4th Ed by Ferdinand L. Singer Andrew Pyte
371-Linear circuit analysis 2nd edt. by R. A. DeCarlo and P. Lin
372- Engineering Mechanics: Statics & Dynamics (5e), Bedford & Fowler
373-Wireless Communications,1 ed , by Andreas Molisch Molisch.
374-Digital Signal Processing,u/e, by Ashok Ambardar
375-DIGITAL SIGNAL PROCESSING. Principles, Algorithms, and Aplications. 4Edition,John G. PROAKIS and Dimitris
376-Fundamentals of Power Semiconductor Devices ,1st Ed. ,by Baliga
377-Numerical Analysis 8th Ed. by Burden and Faires
378-Mobile Communications (2nd Edition) by Jochen H. Schiller
379-Computer Controlled Systems: Theory and Design,3/e, by Karl J. Astrom and Bjorn Wittenmark
380-Bayesian Core 1st Edition by Christian P. Robert and Jean-Michel Marin
381-MEMS and Microsystems Design, Manufacture and Nanoscale Engineering 2nd Edition by Tai-Ran Hsu
382-Introduction to Mathematical Statistics 6th Edition by Robert V. Hogg, Joseph W. McKean & Allen T. Craig
383- solid state electronic devices 6th edition -Streetman and Bannerjee
384-introductory linear algebra text book by bernard kolman and david R.hill 8th edition(Incomplete)
385-Artificial Neural Networks 1st Edition by B. Yegnanarayana and S. Ramesh
386-Basic Electromagnetics with Applications 1st Edition by Nannapaneni Narayana Rao
387- Theory of interest, 3rd Edition, Author Stephen G. Kellison
388-Principles of Corporate Finance 7th, by Brealey & Myers
389-Theory & Design for Mechanical Measurements 4th edition by Richard S. Figliola & Donald E. Beasley
390-Differential Equations ,A modeling approach , by Ledder,STUDENT MANUAL
391-Microelectronics Circuit Analysis and Design ,3ed,by Donald A. Neamen
392-Engineering Mechanics Statics 6th Edition by Meriam and Kraige
393-Soil Mechanics Concepts and Applications 2nd Ed. by William Powrie
394-Digital Communication ,5th edition, by John PROAKIS
395-Chemical Reaction Engineering, 3rd Edition By Octave Levenspiel
396- Chemical Enginering vol 6, 4th ed , Coulson and Richardson's
397- Intermediate Accounting 12e,by Kieso (SM)
398-Fundamentals of financial management 12th edition by James C. Van Horne
399- Feedback Control of Dynamic Systems, 4th Edition, by Franklin
400-Thermal Physics ,u/e, Charles Kittel
401- ENGINEERING BIOMECHANICS: STATICS,by Beatriz Guevarez, Joshua Ros, Nayka, Carmen M. Figueroa, Evamariely Garcia, Mariel Garcia
402-Calculus (8th Edition) ,Dale Varberg, Edwin Purcell,Steve Rigdon(TESTBANK+SOL)
403-Elementary Statistics,u/e, by Mario F. Triola(TESTBANK)
404- Introduction to the Theory of Computation,u/e, Michael Sipser
405- Discrete random signals and statistical signal processing. by Charles W. Therrien
406-Introduction to Graph Theory: Amazon.ca: Douglas B. West
407- Discrete Mathematics,6TH ED, Johnsonbaugh, Richard
408-Fundamentals of Probability with Stochastic Processes (3rd ed),Saeed Ghahramani
409-Systems Analysis & Design, An Object-Oriented Approach with UML: , Alan Dennis, Barbara Haley Wixom(SM+TB)
410-A Course in Modern Mathematical Physics: Groups, Hilbert Space and Differential Geometry , Peter Szekeres
411-Introductory Quantum Optics ,u/e,Christopher Gerry (Author), Peter Knight
412-Financial Accounting 6th Ed. by Harrison(SM)
413-Cost Accounting - Horngren 13e Test Bank(+)
414-Intermediate Accounting - Spiceland 5e TestBank
415-Financial Accounting ,2/e,Robert Libby, Patricia Libby
416-Chemistry, 8th Edition ,2002 By Ralph H. Petrucci; William S. Harwood
417- Optics (4th Edition) ,by Eugene Hecht
418-Classical Dynamics: A Contemporary Approach, by Jorge V. José, Eugene J. Saletan
419- Microeconomics 5th ed. Jeffrey M. Perloff TEST BANK
420-Cost Accounting, 13e, horngren solution manual
421-financial Markets and Institutions, 5th Edition, Mishkin, Eakins Test Bank
422-Human Resource Management, Dessler Test Bank
7. please can some one help me with a free download link where i can download (HUGHES ELECTRICAL AND ELECTRONIC TECHNOLOGY)
8. it is becoming little bit difficult for us to download free EBOOKS
9. i want free ebook download of electronic circuits 2nd edition by donald.a.neamen
10. Hello, how r u students
I have many testbanks & manual solutions at the cheapest price, if you need any testbanks please contact me on this email jaber_tb.sm@hotmail.com
This is my list testbank
Microeconomics 5e Jeffrey M Perloff*
Financial Accounting Information for Decisions 6e Robert W. Ingram*
Foundations of Finance 6e Keown*
Accounting Information Systems_ 11E Marshall B. Romney_ Paul J. Steinbart*
Microeconomics Principle _ Applications _ and tools 6E - arthur O_Sullivan _ Steven Sheffrin*
Basic Marketing 17e by Perreault *
Microeconomic Theory Basic Principles and Extensions 9e by Nicholson *
Understanding Financial Statements 9e by Fraser *
The Economics of Money_Banking_and Financial Market 8th by Frederic S. Mishkin word*
Supply Chain Management_ 4E by Sunil Chopra*
Statistics for Business and Economics 10e by Anderson_ Sweeney_ Williams *
Services Marketing 6e Christopher H. Lovelock Jochen Wirtz *
Selling Today 10th Edition Manning and Reece*
Principles of Microeconomics_ 9e Case_ Fair & Oster *
Principles of Managerial Finance_ Brief_ 5E *
Principles of Managerial Finance_ Brief_ 4E *
ORGANIZATIONAL BEHAVIOR S T E P H E N P. R O B B I N S 11e *
ORGANIZATIONAL BEHAVIOR S T E P H E N P. R O B B I N S 12e *
Operations_ManagementEdition9 by Heizer Render *
Operations Management, 10e William J[1]. Stevenson *
Multinational Financial Management_ 7th Edition by by Alan C. Shapiro *
MIS 2ed by kroenke *
Managerial Accounting_Edition12_Garrison_Noreen__Brewer *
managerial economics 9e by Christopher R Thomas _ S. Charles Maurice *
Management Information Systems for the Information Age_ 7e by stephen haag *
Management A Competency-Based Approach hellriegel gakson slocum 10ed *
Management 9e by Robbins, Coulter *
Principles of Marketing, 12th Kotler Armstrong’s *
Principles of Marketing, 11th Kotler Armstrong’s *
Jeff Madura Personal Finance, Third Edition *
Investment Analysis and Portfolio Management 9th Edition Frank K. Reilly_ Keith C. Brown *
International Marketing 14th by Philip Cateora_ John Graham *
International Financial Management_ 9th Edition by Jeff Madura *
International Business Competing in the Global Marketplace_ 7e by Charles W. L. Hill *
Intermediate Accounting. 12th Edition. Kieso *
Government and Not-for-Profit Accounting Concepts & Practices 4e by Granof and Wardlow*
Fundamentals of Investments 5th by Bradford Jordan_ Thomas Miller *
Fundamentals of Investments 3e Gordon J[1]. Alexander William F. Sharpe *
Fundamentals of Financial Management by brigham and houston 12th edition *
*Financial Reporting and Analysis Using Financial Accounting Information 11e Charles H. Gibson test ban
Financial Markets and Institutions 5e by Mishkin, Eakins *
*Financial Management Theory & Practice 12th Edition by Eugene F. Brigham and Michael C. Ehrhardt
*Financial Accounting Tools for Business Decision Making_ 4th Edition_ Kimmel.Weygandt kieso test bank
Financial Accounting 6th edition by Libby *
Excellence in Business Communication. jonh.v.thill coutrland l. bovee 6 edition *
Economics_ 18e by Campbell R. McConnell _ Stanley L. Brue _ Sean M. Flynn *
Cost Management A Strategic Emphasis 4e by Bloche *
Cost Accounting 13e Horngren *
*Corporate Finance Ross TB (incomplete) Ross.S., Westerfield. R., & Jaffe.J.,(2008), Corporate Finance, 8th edition, McGraw-HillIrwin. Boston.
Business Statistics a Decision Making Approach 7e Groebner *
Bank Management and Financial Services_ 6e by rose *
*Advertising and Promotion an Integrated Marketing Communications Perspective 8e by Belch Test Bank
Accounting Principles, 8th EdWeygandt, Kieso, Kimmel *
Advance finance _Accounting e5_BAKER *
*Quantitative Analysis for Management 10E by Render _ Stair _ Hanna
*Understanding and Managing Organizational Behavior_ 5th Edition_ by Gareth Jones_ Jennifer George
*Financial Institutions and Markets 7e by Jeff Madura tb
*Derivatives Markets 2e by McDonald Test Bank
*International Business e12 by daniels
*Consumer Behavior_ 8e by Michael R. Solomon
*The Economics of Money Banking and Financial Markets 9e Mishkin Test Bank
*Intermediate Accounting 13Ed. TB
*E-Commerce Business. Technology. Society. 5e LaudonTraver
*Business Law Text and Cases 11th Edition by Clarkson
*Auditing And Assurance Services(13thEd) - Arens - TestBank
Introduction to Information Systems James O'Brien, George Marakas15e*
Principles of Managerial Finance 12e Gitman *
marketing management e13Kotler and Kevin Lane Keller’s*
Principles of Operations Management 7e by Heizer *
Essentials of Investments 7th edition Zvi Bodie Alex Kane Alan marcus *
Accounting 8e by Horngren Harrison Oliver *
*Organizational Behavior Managing People and Organizations, 9th Ricky W. Griffin , Gregory Moorhead
Performance Management e2 by Herman Aguinis *
Principles of corporate finance 9e by brealy mayers allen *
South-Western Federal Taxation 2010 Corporations, Partnerships, Estates and Trusts, Professional *
*Strategic Management Concepts & Cases Competitiveness And Globalization 8e Author Michael A. Hitt_ R. Duane Ireland_ Robert E. Hoskisson
Accounting Information Systems 11E Romney *
Advanced accounting 10e by Floyd Beams
*International Business e12 by daniels
* advanced financial accounting Baker 8th edition
* Modern Auditing Assurance Services and The Integrity of Financial Reporting 7e by Boynton Johnson
* Introduction To Management Accounting by horngren 14e
*Advanced Accounting 10 edition by Fischer
*Business and Society Ethics and Stakeholder Management 6e
*Business Law Today Comprehensive 8th edition Roger LeRoy Miller, Gaylord A. Jentz
*Core Concepts of Accounting Information Systems, 10th Edition Bagranoff, Simkin, Norman
*Essentials of Modern Business Statistics 4th Edition David R. Anderson, Dennis J. Sweeney, Thomas A.
*Intermediate Financial Management 9th Edition Eugene F. Brigham, Phillip R. Daves
* Microeconomics (and its application) 10e Walter Nicholson, Christopher Snyder
*Managerial Accounting An Introduction to Concepts, Methods and Uses 10e by Maher, Stickney, Weil
*Managerial Economics Applications, Strategies, and Tactics 11th Edition James R. McGuigan, R. Charles Moyer, Frederick H.deB. Harris
*Microeconomics 7e Robert Pindyck Daniel Rubinfeld
*Multinational Business Finance 11E by David K. Eiteman
*Human Resource Management 9E by R Wayne Mondy Robert M Noe
And this is solution manuals list:
Accounting Principles 8e by Kieso *
Advance finance Accounting 5e Baker Lembke King
Business Finance Economics and Government 1st Ed. by Diebold
Fundamentals of Financial Management twelfth edition James C. Van Horne John M. Wachowicz JR.
Fundamentals of corporate finance Ross, Westerfield, and Jordan 8e
Statistics for Business Practical Approach 9e by Anderson Sweeney Williams Chen
Advanced accounting 10e by beams
Cost Accounting 13e Horngren, Foster, Datar, Rajan & Ittner
Fundamentals of Financial Management 12e - Horne & Wachowicz
The Theory of Interest, 3ed, Kellison
managerial accounting 11e by Garrison Noreen
Capital Budgeting and Long-Term Financing Decisions Neil Seitz Mitch Ellison 4e IM
Intermediate accounting Revised Spiceland Sepe Tomassini 4th edition
Financial Accounting by Kimmel 5e
Operations Research 8th Hamdy.Taha
Cost Management Accounting and Control by Hansen Mowen and Guan 6E (2009)
Cost Accounting A MANAGERIAL EMPHASIS 13th Ed. by Horngren Foster Datar Rajan & Ittner
Auditing & Assurance Services A Systematic Approach 6th Edition by William Messier
Auditing and Assurance Services 13eAlvin A. ArensRandal J. ElderMark S. Beasley
Auditing and Assurance Services 12eAlvin A. Arens Randal J. ElderMark S. Beasley
Foundations of Finance 6e Keown
Quantitative Analysis for management 10th by Barry Render, Ralph M. Stair, Jr, Michael E.Hanna solution manual
Economics for Managers by Paul Farnham, 2008 custom edition
Modern Auditing Assurance Services 8e Boynton
Introductory Mathematical Analysis for Business_ Economics and the Life and Social Sciences_ 12E Ernest F. Haeussler Richard S. Paul Files
11. hi
i want solutions for electronic circuits analysis by neamen
12. i could not find solution manual
13. please show the solved problem of all chapter on naeman
14. best collection
Optional Paper (Both sides) thing?
How many of you did that? Did you make something out of the paper or write an essay on the paper? Mine sort of plays off one of the hacks from a few years ago.
I submitted something. Hopefully it will convey my passion for MIT.
I applied online and all I got was a text area.
I put in a "picture" of myself like you have to do when applying for things after college (med school, grad school, jobs, etc). Below is the picture:
(periods for spacers on board only)
And then I listed some specific fields of my independent studies.
I got 250 people (out of 300) in my senior class to sign a piece of paper stating I am a strong applicant and should be admitted into the Class of '08. ;)
Haha, Bigman, so that means 16.7% of your graduating class thinks you should NOT be admitted into MIT? Well... I think you are an automatic rejection then! =)
hahaha clever bigman
oh my god, i think this is MIT's worst idea ever. After opening hundreds of envelopes for them, i think this has gone way out of hand. Some people take it too far.
Binks, what have been some of the worst?
I drew a map of Middle Earth and replaced the locations with college names such as "MIT-dor", "Caltech-han", "Harvard-or," "Mt. Harvard," (Mordor) and "Yale-gard."
Oh yeah, and Loriengineer, Hobbitechton, and Rivendology.
Then I said "I want to join the forces of good" and drew a stickfigure picture of myself, labeled it "me" and drew an arrow pointing to "MIT-dor" (Gondor).
God Lord of the Rings is awesome.
Hahaha, that's so awesome. I love LOTR -- I just bought the new leather-bound editions of The Hobbit and the trilogy with some Christmas money. They're so awesome... yummy!
Posted this on another thread, but in case you didn't see it...
Submitted the front/back one.
The front was a computer program that can do any calculation regardless of length, normally 20 pages long. I put it in size-2 font and stuck a magnifying glass and the disk on there, lol. The back
was an example of it calculating all of the digits of 5000 factorial (all 30000-some digits) in an attempt to search for the ASCII values of "MIT"... there were none. I suppose I could have done
something more meaningful, like the square root of 3 to a few thousand places, but those are all available online.
Im guessing you mean to within the number of available memory addys on your harddrive, Zaqwert
Well, 5000! is available online.
It is? Ah well screw it... They'll be able to see the code with the magnifying glass or on the disk ;)
Vsage, pretty much... which is a lot... once it hits 32,768 it makes a new element in the array. Soooooooooo technically it can do 32768^32768 if you get enough memory.
32768 looks like some outdated 16-bit stuff here. Why aren't you using 64-bit integers (unsigned long long)?
Uhhhhhh, you mean I should use unsigned long long so that if I reach a calculation that has more digits than 32768^32768, the data type won't malfunction? ;) (Rhetorical question)
No, but it disturbs me that you're using 16-bit elements. It seems a bit outdated if that's what your compiler supports.
im impressed ... and a lil scared
umm...yeah, is there anyone out there who DOESN'T have a clue what they're saying? Are any non-CS inclined folks applying?
hells yeah
I made....a papercut?
Maybe there's an artistically inclined member of the admissions committee
May_1: Zagwert seems to have written a program that calculates arbitrary mathematical expressions. To store the resulting very large or very small numbers, he uses variables which have a maximum
capacity of 32768 and he uses 32768 of these (so he can name each one with a single number from another variable of the same size). As you can probably understand, this is a huge number of available
digits (many, many more digits than the number of atoms in the visible universe (~googol is being generous)). This is the source of zagwert's rhetorical question. Me^4 then questioned why zagwert
didn't use a newer variable type that could hold a larger number (64 bits or binary digits (~1.8E19 is the largest number)).
heh i hope so cuz i took a piece of paper and put my handprint on it, with lotsa paint splatters... and wrote some stuff about how i love art and how it's going to change MIT. Want to be a visual
arts major at MIT!
haha yeah im an art person too, definitely not a super techy person. im into biology, planetary sciences, and art. computers are definitely "not my bag"
Gottagetout got my point perfectly. Implementing multiplication with 64-bit elements on our newer processors of today is much faster (though a bit harder) than using 16 bits.
Ehhh, yeah, I suppose I could change it to 64-bit just by searching for wherever I declare an int and changing the data type. But I mean there's no real reason to if it just affects it length-wise. In
terms of speed, however, I didn't know that it was faster. Is it? Because if it is, I might wanna do that.
Actually, unless you have a 64-bit processor (and an OS to take advantage of it), you'd want to go with 32-bit ints.
Not necessarily. In timed tests on an encryption system I developed last year, I found 64-bit big integer arithmetic about 20% or 30% faster than 32-bit big integer arithmetic. Although Pentium IV is
32-bit, the 64-bit arithmetic is probably quite optimized in software.
What compiler? Perhaps it was able to take advantage of SIMD extensions with your code, because otherwise I am pretty sure it will be slower (a add, adc / sub, sbb for 64-bit addition and perhaps a
bandwidth hit depending). Maybe you could take a look at your assembly output?
Suppose you have a 64-bit number you want to represent. No matter how you do it, on a 32-bit processor, it will be represented by two 32-bit integers. Suppose you want to multiply two of these 64-bit
numbers. If you represent it with two 32-bit integers explicitly, you have to implement the divide-and-conquer method (which is actually faster than the normal multiplication method) to avoid
overflow. On the other hand, you can just have a single multiplication function for the 64-bit representation (even though it is internally two 32-bit numbers). And despite the fact that divide-and-conquer is actually supposed to be faster than straight multiplication, the 64-bit was a bit faster. I used SSE2 for both 64-bit and 32-bit multiplication. Now, I think that may have helped
somewhat, since SSE2 enhances arithmetic that is media-oriented (which 64-bit multiplication certainly is).
However, the point here I believe is that using 64-bits is faster than 32-bits, no matter by what means. Now, on the other hand, there is no doubt in my mind that 16-bit arithmetic is just plain
silly. The 16-bits in the computer will be represented by 32-bits anyways, resulting in a waste of 16-bits (unless you use a 286, in which case I'm not going to comment). Also, your arrays will be
much bigger than needed, making it slower by a large factor (maybe even 4 times slower, if you're using normal multiplication, not divide-and-conquer or the Fast Fourier Transform).
Now, if you were to use FFT to multiply everything, it hardly matters how you represent numbers. But calculating factorials with FFT is kind of overkill, if you actually coded that yourself.
Me^4, I think it is important to note that your argument is based on the chip instructions (including SSE2!). The fact is, if some operation is hard-wired into the chip -- yeah it'll always be faster
than any kind of normal software operation. Zagwert's code accomplishes its task well and is quite portable. Remember, TMTOWTDI.
Yes, SSE2 is certainly helping you out (as I mentioned). However, I have never seen it claimed that for normal operations on a 32-bit x86 architectures a 64-bit integer operation would be faster than
a 32-bit one, and none of the tests I have carried out have ever indicated anything other than that they are slower. Which of course makes sense given the limitations of the architecture...
I think I'm being misunderstood.
The operations are clearly performed on large integers, for which you can have arrays of 32-bit or arrays of 64-bit integers. If you have 64-bit integers, you can take advantage of the simpler
mathematics operations hard-wired into the system. What I mean is not that 64-bits are faster than 32-bits (that's absurd) but that an operation of multiplication on one 64-bit integer is faster than
operation of multiplication of two 32-bit integers in an array.
In that case I need not switch to 64-bit. You obviously misunderstood my data type. The array is not even being invoked UNLESS the maximum number of digits are being used. When the array comes in
after the maximum number of digits are used, all it is doing is switching to the next array. The process will still be O(n) where n is the number of digits in either case, whether it's 32 bit or 64
bit. So unless you're trying to say that 64-bit multiplication is faster than 32-bit multiplication, it will still take the same amount of time.
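For concreteness, here is a minimal sketch (in Python, and not any poster's actual code) of the limb-array representation being debated above. Python integers hide the hardware word size, so only the loop structure matters here: the inner loop runs once per pair of limbs, which is why moving from 16-bit to 32-bit or 64-bit elements cuts the work per multiplication.

BASE = 2 ** 16                       # one 16-bit element per array slot, as in the program above;
                                     # switching to 2 ** 32 or 2 ** 64 only changes this constant

def to_limbs(n):
    """Little-endian limb array for a non-negative integer."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def mul(a, b):
    """Schoolbook multiplication of two limb arrays with carry propagation."""
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):           # len(a) * len(b) limb products in total
            cur = out[i + j] + ai * bj + carry
            out[i + j] = cur % BASE
            carry = cur // BASE
        out[i + len(b)] += carry
    return out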
Adaptive secure data transmission method for OSI level 1
Chapter IV
4. Data Transmission in Channels and Networks – A Simulation Study
4.1. Worksheet Simulator
The use of a personal computer with programs already in use at the office was one goal in
modeling and simulations in this study. Compilers of simulation languages are expensive and
a simulation package is adapted to one type of problem only. The Excel worksheet program
as a standard language was selected for rapid modeling and during this study the author programmed
an Excel-simulator for simulations made in this study, presented in paper [Lal04b].
The reasons for this choice were mathematical, economical and practical. The simulator is
based on the mathematics programmed in Excel cells forming a block model, Figure 4.1. The
model blocks include mathematical entities. The simulations are executions of the programmed
mathematical formulae in networked Excel cells [Lal97b] and [Lal04b]. The Excel
worksheet program itself has all the mathematics and graphics needed. It is widely used and
thus available for most PC-users. It is an effective way of programming and it has excellent
graphics to present results. Most of the particular blocks and waveforms needed for this study
were not available in the libraries of the reference [Com90]. Thus a new computer simulation
method for evaluation of the characteristics of the ADM-channel and data transmission was
needed and its first version MIL.xls was developed in November-December 1992, based on a
standard worksheet program (Excel), Figure 4.1. The latest development of the robust worksheet
simulation, 26 data channels (AWGN, granular, and multi-path) with an adaptive
1…160-point DFT calculation, is presented in reference [Lal04b].
Fig. 4.1 Blocks of robust worksheet simulator
The ADM-channel model included an ADM-modulator, an AWGN-channel, and an ADMdemodulator.
Adaptation simulations were made with this ADM-channel model using 2-bit,
3-bit or 4-bit memory in Modulation Level Analyzer (MLA). EUROCOM specifies the adaptation
with 3-bit memory, which was used in simulations unless otherwise stated. The
granular noise channel modeled is called here the ADM-channel model, Figure 4.1.
The data transmission simulation model used a random bit source, a waveform generator, the
ADM-channel model, and a data modem receiver simulator using the Discrete Fourier Transform,
which is called here the DFT-receiver. Most of the simulations were made with this
simulator system, which is called here the Excel-simulator.
The present worksheet simulation package for modeling data transmission over different
channels (incl. the adaptive delta modulated voice channel of the present 16 kbps network)
includes generation of waveforms, a model of discrete Fourier transform receiver for waveform
detection, random bit and symbol generation, calculation and estimation of simulated
BER, setting of Gaussian noise level, setting of multi-paths, setting of interference signals,
signal-to-noise ratio calculations, phase distortion calculations, and group delay calculations.
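The blocks listed above are realized as Excel cell formulas; purely as an illustration of the same Monte-Carlo idea (generate random bits, add Gaussian noise of a set level, count decision errors), a bit error rate estimate for simple antipodal signalling could be sketched as follows. This is an assumption-laden sketch, not the thesis's actual worksheet formulas or waveforms:

import numpy as np

def awgn_ber(ebn0_db, n_bits=10_000, seed=1):
    """Monte-Carlo BER estimate for antipodal symbols over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                      # map {0, 1} -> {-1, +1}
    ebn0 = 10 ** (ebn0_db / 10)
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    decided = (symbols + noise >= 0).astype(int)    # threshold detector
    return float(np.mean(decided != bits))          # simulated bit error rate

# e.g. awgn_ber(6.0) comes out near the theoretical Q(sqrt(2*Eb/N0)) of about 2.4e-3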
Limitation of Worksheet Program
The limitations with a worksheet simulation are the memory size available in PC, PC
throughput with Excel and Excel worksheet limitations. An Excel worksheet used in 1992
had 16384 rows and 256 columns. A minimum robust 1000-bit simulation used in this work
needed about 6 MB memory to manipulate 13000 samples stored in Excel cells. This was the
practical limit for the personal computers used earlier. These problems have diminished over the past
ten years, and a 10000-bit simulation is no longer a practical limit. To get a high quality waveform
in an Excel-simulator a sample rate about 10 times the highest signal frequency was used.
The same limitation was also observed in other simulators [Tes92].
Use of Discrete Fourier Transform
Two approaches were considered: Discrete Fourier Transform (DFT) and Fast Fourier Transform
(FFT), definition in Appendx 1. DFT was selected instead of FFT for the calculation of
the response of the ADM-channel and for the decision of the bits from the output waveforms
of different data modems. The main reasons for this are:
- The use of DFT makes the approach adaptive. The number of samples (N) was freely selectable.
- DFT is easy to program.
- DFT gives both amplitude and phase of a given signal.
- The number of multiplications of DFT is limited in calculations using N=13 or 26.
- The number of samples in FFT must be in powers of two. Thus the sample numbers used
in this study were not optimal for FFT.
- DFT works in the simulator with any limited number of samples.
Computational Limitations of DFT and FFT
To calculate one magnitude point of frequency response the first version of MIL.xls simulator
(1993) made more than 20000 calculations. It took about 30 seconds while N=160 samples
were used in a direct computation. In reference [Pro92] the computational complexity for the
direct computation of the DFT is compared to the FFT algorithm. The number of multiplications
needed in DFT (N^2) is much larger than in FFT ((N/2)log[2](N)), see Table 4.1. However
the rapid development of processor power and RAM memory has made the time delay
in simulations with DFT negligible. The FFT entries for N=13, N=26 and N=160 are marked N/A
because these values are not powers of two and thus not possible with a radix-2 FFT.
Table 4.1. Complexity of DFT versus FFT
Number of points N | DFT multiplications (N^2) | FFT multiplications ((N/2)log[2](N))
13                 | 169                       | N/A
26                 | 676                       | N/A
160                | 25600                     | N/A
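To make the counts in Table 4.1 concrete, a direct DFT over an arbitrary number of samples is just a double loop. The sketch below (Python, illustrative only, not the simulator's Excel implementation) works for N = 13, 26 or 160 and yields both magnitude and phase per bin:

import cmath

def dft(samples):
    """Direct O(N^2) discrete Fourier transform for any N."""
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# N = 160 means 160 * 160 = 25600 complex multiplications per transform, the figure
# quoted in Table 4.1; a radix-2 FFT would first need zero-padding to N = 256.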
4.2 Modeling of Data Transmission
Simplified models are used in the simulation of data transmission over the ADM-channel (analog
voice grade channel, ADM coding), Figures 4.1-4.4. A detailed presentation of the data
transmission is found in reference [Ska01] and Appendix 2. In the simplified model of
Figure 4.2, random digital data bits were generated (Bits IN) and analyzed (Bits out) in the PCs. The
symbol waveforms or analog signals were generated and detected in the data modem (Data
Mod, Data Dem). The data waveform was led to the multiplexer (Mux), which includes the
equipment for the adaptive delta modulation coding of analog signals (A/D ADM, D/A ADM).
Crypto equipment is needed in wireless communications but was not modeled for the simulations.
The air interface is a radio link or a base station (Mod, Dem), which are modeled with an
AWGN or a multi-path channel model. Noise was added to the signal in the receiver (Dem).
In Figure 4.3 the channel is a radio channel or a wired channel. DM and PCM are source encoding
and decoding methods used for a base-band digital signal. Discrete channel encoders and
decoders for base-band line signaling are not modeled in simulations. The analog radio or wired
channel is robustly modeled with an AWGN, multi-path, and granular channel model.
Fig. 4.2 Simplified model of analog data transmission
Fig. 4.3 Simplified simulation models of data transmission [Lal97a]
Problems and Research Methods
The measurements demonstrated the quality of standard analog data transmission with 1200
bit/s or 2400 bit/s rate modems only using granular noise channels. The quality levels were
acceptable or poor. This motivates studying data modulation methods other than the standard ones to
improve the data rate and transmission quality over granular (digital network),
AWGN (theoretical) and multi-path (radio) channels. The investigation was made with a robust
modeling and simulations method. The programming was made with a worksheet as discussed
in papers [Lal97b], [Lal99], and [Lal04b]. The results were verified with measurements
and reference simulators.
The information transmission chain that was modeled is: digital data source or analog source waveform - granular
noise in source coding - AWGN noise and multi-path channel - receiver.
The simulation results show the probability of the correctly received message in different
cases or BER (bit error rate). The information transmission blocks are analyzed and described
in detail in papers [Lal97b], [Lal99], [Lal04a], and [Lal04b]. The three different channel
models causing different problems in data transmission (quality impairment) are discussed in
several references [Sha48, Cha66, Rum86]:
- AWGN.
- Granular noise.
- Multi-path interferences.
Discussion of the Results
The simulation results, conclusions and proposals in the papers [Lal99], [Lal00], [Lal01],
[Lal02], and [Lal04a-b] include:
- Simulation analysis of different granular noise channels (phase and amplitude distortion
and a polynomial channel model)
- Comparison of standard data transmission methods with the developed adaptive multicarrier
data transmission methods.
- Qualitative results using AWGN, granular noise and multi-path channels for data transmission.
- Recommendation for selecting adaptive data transmission parameters and design principles
for an adaptive modem.
- Results of simulations with a model for biomedical data network using adaptive versus
standard data transmission at different bit rates.
A brief summary is presented next.
4.3. Adaptive Delta Modulation and Granular Channel
Figure 4.4 presents the adaptive delta modulator and demodulator of the Eurocom recommendation
[Eur86] and the simulation model of the adaptive delta modulator, paper [Lal04b].
The ADM-channel (granular noise channel) discussed in this study is established between
points C and C’. The demodulation process is simply the integrator and it includes the same
modulation level analyzer as the modulator. The MLA (modulation level analyzer) and the
first integrator in Figure 4.4 define the step size, which is proportional to the granular noise
level, described in more detail in reference [Eur86].
Fig. 4.4 ADM modulation demodulation process and blocks [Eur86]
Figure 4.4 shows the analogue/digital conversion of speech signals with a pulse modulator in
the transmitter and digital/analog conversion in the receiver end of the digital transmission
channel between (C-C´). The receiver has a leaky integrator (between F-G) and a VF (voice
frequency) filter (between G-B).
Fig. 4.5 Simulation model of ADM modulator [Lal04b]
In the simulation model of Figure 4.5 the modulation level analyzer is developed into different
two, three or four bit versions and used for the ADM algorithm simulation process in
paper [Lal97b]. In Figure 4.5 a three bit algorithm (A+B+C)/3 = 1 or –1 is used in the simulated
result of the adaptive step size versus a continuous sinusoidal signal. The modulation
simulation result, the adaptive discrete audio signal x(nT), is presented in Figure 4.6 with the
original input signal 800 Hz.
Fig. 4.6 ADM modulation of 800 Hz test signal [Lal97b]
The integration is controlled by the adaptive step size s(t), formula (4.1).
x(t) = S(t) Σ[i=0..C] y(iT)    (4.1)
Fig. 4.7 Simulated adaptive step size [Lal97b]
Figure 4.7 presents the step adaptation at different frequencies in the adaptive delta modulation
system. To avoid slope overload in a delta modulation system the step size S (line in the
figure) must be greater than a minimum value.
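As an illustration of the adaptation logic described above (comparator, three-bit modulation level analyzer, step growth when the last three bits agree, decay otherwise, leaky integration), a toy encoder might look like the following sketch. The growth, decay and leak constants are placeholders, not the Eurocom values:

import numpy as np

def adm_encode(x, s_min=0.01, s_max=1.0, grow=1.5, decay=0.9, leak=0.95):
    """Toy CVSD-style adaptive delta modulator with a 3-bit MLA rule."""
    bits = np.empty(len(x), dtype=int)
    est, step = 0.0, s_min              # local decoded estimate and adaptive step size
    last = [1, -1, 1]                   # last three output bits (A, B, C)
    for n, sample in enumerate(x):
        y = 1 if sample >= est else -1          # comparator decision
        bits[n] = y
        last = last[1:] + [y]
        if abs(sum(last)) == 3:                 # (A+B+C)/3 = 1 or -1: slope overload
            step = min(step * grow, s_max)
        else:                                   # granular region: shrink the step
            step = max(step * decay, s_min)
        est = leak * est + step * y             # leaky integration
    return bits

# Example: a stretch of the 800 Hz test tone of Fig. 4.6 sampled at the 16 kHz channel rate
t = np.arange(0, 0.01, 1 / 16000)
bits = adm_encode(0.5 * np.sin(2 * np.pi * 800 * t))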
Adaptive Digital Channel
The performance of the delta-modulated voice channel is presented in the simulated results
of Figures 4.8-4.10. The simulated magnitude and phase response functions for different
amplitude to minimum step size ratios of the ADM-channel are seen.
Fig. 4.8 Simulated magnitude response of ADM-channel [Lal97b]
Fig. 4.9 Simulated phase response of ADM-channel [Lal97b]
Fig. 4.10 Covariance between input and output samples
In Figure 4.10 the voice band signal has redundancy between delayed samples, which is seen
in the calculated covariance results between the 1...4 - sample delayed 16 kbps signals.
Polynomial Channel Model
The Polynomial Signal Processing (PSP) of the simulation result gives the channel model.
The channel model polynomial in dB values is in formula (4.2) where the voltage y is normalized
and the frequency x is given in kHz. The results are presented in Figure 4.11. The
quality of the polynomial curve fitting was evaluated with the coefficient of determination r
also called correlation coefficient. The best fitting r = 0.9474 for the amplitude response is
achieved with the polynomial of degree 6 in Figure 4.11.
Fig. 4.11 ADM-channel amplitude response [Lal04b]
The polynomial amplitude response model [dB] of degree 6:
y = -0.0266x^6+0.2811x^5-1.2868x^4+2.6801x^3-2.6318x^2+1.1013x+0.757; (4.2)
r = 0.9474
The polynomial phase response model of degree 3:
y = -4E-9x^3+2E-5x^2-0.0682x+97.429; (4.3)
r =0.9864
A very good fitting for the phase response (r=0.9864) is achieved with the polynomial of
degree 3 if the frequency range x is limited to 2600 Hz. The polynomial is presented in formula
(4.3) where the phase y is in degrees and the frequency x is given in Hz.
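Outside the worksheet, the same polynomial fitting step can be reproduced along the following lines; the frequency and magnitude arrays below are placeholders standing in for the simulated response points, which are not reproduced here:

import numpy as np

f_khz = np.linspace(0.2, 3.4, 33)                 # placeholder frequency points (kHz)
mag_db = np.cos(1.5 * f_khz)                      # placeholder stand-in for the simulated response (dB)

coeffs = np.polyfit(f_khz, mag_db, deg=6)         # degree-6 amplitude model, cf. formula (4.2)
fit = np.polyval(coeffs, f_khz)

ss_res = np.sum((mag_db - fit) ** 2)
ss_tot = np.sum((mag_db - np.mean(mag_db)) ** 2)
r = np.sqrt(1.0 - ss_res / ss_tot)                # goodness of fit, reported as r in the text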
15-815 Automated Theorem Proving
15-815 Automated Theorem Proving
Lecture 8: The Inverse Method
We then introduce the inverse method as an alternative approach to proof search. The inverse method is based on a sequent calculus designed for forward reasoning (from the initial sequents to the
conclusion) combined with the observation that we only ever need to introduce subformula of our proposed theorem during proof search. The inverse method avoids backtracking, because we draw more and
more conclusions from our initial sequents, but can suffer from space problems. Therefore techniques to reduce the number of generated and retained sequents are critical for the efficiency of the
inverse method.
Frank Pfenning
Binders Unbound
Stephanie Weirich, Tim Sheard and I recently submitted a paper to ICFP entitled Binders Unbound. (You can read a draft here.) It’s about our kick-ass, I mean, expressive and flexible library,
unbound (note: GHC 7 required), for generically dealing with names and binding structure when writing programs (compilers, interpreters, refactorers, proof assistants…) that work with syntax. Let’s
look at a small example of representing untyped lambda calculus terms. This post is working Literate Haskell; feel free to save it to a .lhs file and play around with it yourself!
First, we need to enable lots of wonderful GHC extensions:
> {-# LANGUAGE MultiParamTypeClasses
> , TemplateHaskell
> , ScopedTypeVariables
> , FlexibleInstances
> , FlexibleContexts
> , UndecidableInstances
> #-}
Now to import the library and a few other things we’ll need:
> import Unbound.LocallyNameless
> import Control.Applicative
> import Control.Arrow ((+++))
> import Control.Monad
> import Control.Monad.Trans.Maybe
> import Text.Parsec hiding ((<|>), Empty)
> import qualified Text.Parsec.Token as P
> import Text.Parsec.Language (haskellDef)
We now declare a Term data type to represent lambda calculus terms.
> data Term = Var (Name Term)
> | App Term Term
> | Lam (Bind (Name Term) Term)
> deriving Show
The App constructor is straightforward, but the other two constructors are worth discussing in more detail. First, the Var constructor holds a Name Term. Name is an abstract type for representing
names, provided by Unbound. Names are indexed by the sorts of things to which they can refer (or more precisely, the sorts of things which can be substituted for them). Here, a variable is simply a
name for some Term, so we use the type Name Term.
Lambdas are where names are bound, so we use the special Bind combinator, also provided by the library. Something of type Bind p b represents a pair consisting of a pattern p and a body b. The
pattern may bind names which occur in b. Here is where the power of generic programming comes into play: we may use (almost) any types at all as patterns and bodies, and Unbound will be able to
handle it with very little extra guidance from us. In this particular case, a lambda simply binds a single name, so the pattern is just a Name Term, and the body is just another Term.
Now we use Template Haskell to automatically derive a generic representation for Term:
> $(derive [''Term])
There are just a couple more things we need to do. First, we make Term an instance of Alpha, which provides most of the methods we will need for working with the variables and binders within Terms.
> instance Alpha Term
What, no method definitions? Nope! In this case (and in most cases) the default implementations, implemented in terms of automatically-derived generic representations, work just fine.
We also need to provide a Subst Term Term instance. In general, an instance for Subst b a means that we can use the subst function to substitute things of type b for Names occurring in things of type
a. We override the isvar method so the library knows which constructor(s) of our type represent variables which can be substituted for.
> instance Subst Term Term where
> isvar (Var v) = Just (SubstName v)
> isvar _ = Nothing
OK, now that we’ve got the necessary preliminaries set up, what can we do with this? Here’s a little lambda-calculus evaluator:
> done :: MonadPlus m => m a
> done = mzero
> step :: Term -> MaybeT FreshM Term
> step (Var _) = done
> step (Lam _) = done
> step (App (Lam b) t2) = do
> (x,t1) <- unbind b
> return $ subst x t2 t1
> step (App t1 t2) =
> App <$> step t1 <*> pure t2
> <|> App <$> pure t1 <*> step t2
> tc :: (Monad m, Functor m) => (a -> MaybeT m a) -> (a -> m a)
> tc f a = do
> ma' <- runMaybeT (f a)
> case ma' of
> Just a' -> tc f a'
> Nothing -> return a
> eval :: Term -> Term
> eval x = runFreshM (tc step x)
Note how we use unbind to take bindings apart safely, using the the FreshM monad (also provided by Unbound) for generating fresh names. We also get to use subst for capture-avoiding substitution. All
without ever having to touch a de Bruijn index!
OK, but does it work? First, a little Parsec parser:
> lam :: String -> Term -> Term
> lam x t = Lam $ bind (string2Name x) t
> var :: String -> Term
> var = Var . string2Name
> lexer = P.makeTokenParser haskellDef
> parens = P.parens lexer
> brackets = P.brackets lexer
> ident = P.identifier lexer
> parseTerm = parseAtom `chainl1` (pure App)
> parseAtom = parens parseTerm
> <|> var <$> ident
> <|> lam <$> (brackets ident) <*> parseTerm
> runTerm :: String -> Either ParseError Term
> runTerm = (id +++ eval) . parse parseTerm ""
Now let’s try some arithmetic:
*Main> runTerm "([m][n][s][z] m s (n s z)) ([s] [z] s (s z)) ([s] [z] s (s (s z))) s z"
Right (App (Var s) (App (Var s) (App (Var s) (App (Var s) (App (Var s) (Var z))))))
2 + 3 is still 5, and all is right with the world.
This blog post has only scratched the surface of what’s possible. There are several other combinators other than just Bind for expressing binding structure: for example, nested bindings, recursive
bindings and embedding terms within patterns are all supported. There are also other operations provided, such as free variable calculation, simultaneous unbinding, and name permutation. To learn
more, see the package documentation, or read the paper!
7 Responses to Binders Unbound
1. This is super awesome!
btw, is there a particular reason the FreshM is not a member of the MonadFix typeclass?
□ Thanks!
No reason other than simple oversight. I’ve just uploaded a new version (0.2.2) with MonadFix instances for FreshM and LFreshM. Thanks for the suggestion!
2. Some related work you seem not to be aware of: Dave Herman, in his recent thesis at Northeastern, provided an expressive binding specification language for specifying and typechecking Scheme
macros. In particular, see Herman and Wand, A Theory of Hygienic Macros, ESOP 2008, and his dissertation, 2010, both available here: http://www.ccs.neu.edu/home/dherman/
□ Thanks! We were indeed not aware of that work, we’ll take a look.
3. Wow, really nice work. This is exactly the sort of thing that should make my life easier: I’m implementing a Haskell dialect with numeric type indices, and binding-related woes are commonplace.
Trying out the library, it seems to give incomprehensible error messages in the TH-spliced code if my expression type is a GADT or existential. Are these not supported, or am I simply missing
something? If the former, would it be possible to implement support for them?
□ Hi Adam, unfortunately, GADTs and existential types are not supported at the moment (primarily because they are not supported by RepLib). However, they are on our to-do list. Actually, it’s
more of a priority queue, and knowing you want them has just increased their priority. I make no promises but I’ll take a look and let you know.
This entry was posted in grad school, haskell, projects, writing and tagged binding, EDSL, ICFP, library, names, paper. Bookmark the permalink. | {"url":"http://byorgey.wordpress.com/2011/03/28/binders-unbound/","timestamp":"2014-04-18T10:34:08Z","content_type":null,"content_length":"87101","record_id":"<urn:uuid:33238400-e1b3-49fd-b8f3-4b5c1f4caa89>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Columbia, MD Statistics Tutor
Find a Columbia, MD Statistics Tutor
...I empathize with students who are frustrated with mathematical coursework and concepts, and incorporate a supportive manner in my teaching style. This applies especially to students with
learning disabilities, or otherwise unique learning styles that may be less compatible with a traditional cla...
16 Subjects: including statistics, calculus, geometry, algebra 1
...As an undergraduate student in Electrical Engineering and Physics and as a graduate student, I took courses in mathematical methods for physics and engineering. These courses included
fundamental theory and techniques (including numerical) of linear algebra and its applications to engineering an...
16 Subjects: including statistics, physics, calculus, geometry
...Usually, I help them do as many problems as possible until they grasp the underlying concept very well. When needed, I use real-life examples or create my own problems on the side, which I
feel would elaborate the academic concept being discussed better. And when discussing complex concepts, I start by using simple examples that the student understands easily.
14 Subjects: including statistics, chemistry, calculus, physics
...My work experiences include a research and development internship, a statistical internship, and a couple of analyst roles. Additionally, I have worked in the hospitality industry. My GRE
score was: 1340.
12 Subjects: including statistics, algebra 1, GRE, GMAT
...Also it is very important to encourage the student by letting them know how much they know instead of pointing out their weaknesses. I have helped, on more than one occasion, students who were
struggling badly in their math courses but in the end they got a very well deserved grade. It is my hope to be able to help more students achieve high grades in their math courses.
34 Subjects: including statistics, physics, calculus, geometry
Related Columbia, MD Tutors
Columbia, MD Accounting Tutors
Columbia, MD ACT Tutors
Columbia, MD Algebra Tutors
Columbia, MD Algebra 2 Tutors
Columbia, MD Calculus Tutors
Columbia, MD Geometry Tutors
Columbia, MD Math Tutors
Columbia, MD Prealgebra Tutors
Columbia, MD Precalculus Tutors
Columbia, MD SAT Tutors
Columbia, MD SAT Math Tutors
Columbia, MD Science Tutors
Columbia, MD Statistics Tutors
Columbia, MD Trigonometry Tutors
Nearby Cities With statistics Tutor
Baltimore, MD statistics Tutors
Bowie, MD statistics Tutors
Catonsville statistics Tutors
Elkridge statistics Tutors
Ellicott City statistics Tutors
Ellicott, MD statistics Tutors
Glen Burnie statistics Tutors
Jessup, MD statistics Tutors
Laurel, MD statistics Tutors
Montgomery Village, MD statistics Tutors
Pikesville statistics Tutors
Rockville, MD statistics Tutors
Severn, MD statistics Tutors
Silver Spring, MD statistics Tutors
Simpsonville, MD statistics Tutors | {"url":"http://www.purplemath.com/columbia_md_statistics_tutors.php","timestamp":"2014-04-18T23:19:09Z","content_type":null,"content_length":"24320","record_id":"<urn:uuid:05505f63-2788-40ea-aa59-caee9c2d20a7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
Series R, L, and C
Let's take the following example circuit and analyze it:
The first step is to determine the reactances (in ohms) for the inductor and the capacitor.
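(The worked figures on the original page are images and are not included in this text copy. The standard formulas are X_L = 2πfL and X_C = 1/(2πfC); with the 60 Hz source, 650 mH inductor, and 1.5 µF capacitor used in the SPICE listing further down, they come to roughly X_L ≈ 245.04 Ω inductive and X_C ≈ 1768.4 Ω capacitive.)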
The next step is to express all resistances and reactances in a mathematically common form: impedance. Remember that an inductive reactance translates into a positive imaginary impedance (or an
impedance at +90°), while a capacitive reactance translates into a negative imaginary impedance (impedance at -90°). Resistance, of course, is still regarded as a purely "real" impedance (polar angle of 0°):
Now, with all quantities of opposition to electric current expressed in a common, complex number format (as impedances, and not as resistances or reactances), they can be handled in the same way as
plain resistances in a DC circuit. This is an ideal time to draw up an analysis table for this circuit and insert all the "given" figures (total voltage, and the impedances of the resistor, inductor,
and capacitor).
Unless otherwise specified, the source voltage will be our reference for phase shift, and so will be written at an angle of 0°. Remember that there is no such thing as an "absolute" angle of phase
shift for a voltage or current, since it's always a quantity relative to another waveform. Phase angles for impedance, however (like those of the resistor, inductor, and capacitor), are known
absolutely, because the phase relationships between voltage and current at each component are absolutely defined.
Notice that I'm assuming a perfectly reactive inductor and capacitor, with impedance phase angles of exactly +90° and -90°, respectively. Although real components won't be perfect in this regard,
they should be fairly close. For simplicity, I'll assume perfectly reactive inductors and capacitors from now on in my example calculations except where noted otherwise.
Since the above example circuit is a series circuit, we know that the total circuit impedance is equal to the sum of the individuals, so:
Inserting this figure for total impedance into our table:
We can now apply Ohm's Law (I=E/R) vertically in the "Total" column to find total current for this series circuit:
Being a series circuit, current must be equal through all components. Thus, we can take the figure obtained for total current and distribute it to each of the other columns:
Now we're prepared to apply Ohm's Law (E=IZ) to each of the individual component columns in the table, to determine voltage drops:
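The analysis-table images from the original page are not reproduced here, but the same arithmetic is easy to redo with ordinary complex-number math. The short Python script below (not part of the original lesson) uses the component values given in the SPICE listing further down:

import cmath, math

f, E = 60.0, 120.0                     # 60 Hz source, 120 V
R, L, C = 250.0, 650e-3, 1.5e-6        # values from the SPICE netlist below

ZR = complex(R, 0)                             # resistor: 250 ohms at 0 degrees
ZL = complex(0, 2 * math.pi * f * L)           # inductor: +j245.04 ohms
ZC = complex(0, -1 / (2 * math.pi * f * C))    # capacitor: -j1768.4 ohms

Ztotal = ZR + ZL + ZC          # series impedances simply add
I = E / Ztotal                 # Ohm's Law: I = E/Z, about 77.73 mA at +80.68 degrees

for name, Z in [("R", ZR), ("L", ZL), ("C", ZC)]:
    V = I * Z                  # E = IZ for each component
    print(name, round(abs(V), 2), round(math.degrees(cmath.phase(V)), 2))

# Prints roughly: R 19.43 80.68 / L 19.05 170.68 / C 137.46 -9.32,
# matching the SPICE results shown below.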
Notice something strange here: although our supply voltage is only 120 volts, the voltage across the capacitor is 137.46 volts! How can this be? The answer lies in the interaction between the
inductive and capacitive reactances. Expressed as impedances, we can see that the inductor opposes current in a manner precisely opposite that of the capacitor. Expressed in rectangular form, the
inductor's impedance has a positive imaginary term and the capacitor has a negative imaginary term. When these two contrary impedances are added (in series), they tend to cancel each other out!
Although they're still added together to produce a sum, that sum is actually less than either of the individual (capacitive or inductive) impedances alone. It is analogous to adding together a
positive and a negative (scalar) number: the sum is a quantity less than either one's individual absolute value.
If the total impedance in a series circuit with both inductive and capacitive elements is less than the impedance of either element separately, then the total current in that circuit must be greater
than what it would be with only the inductive or only the capacitive elements there. With this abnormally high current through each of the components, voltages greater than the source voltage may be
obtained across some of the individual components! Further consequences of inductors' and capacitors' opposite reactances in the same circuit will be explored in the next chapter.
Once you've mastered the technique of reducing all component values to impedances (Z), analyzing any AC circuit is only about as difficult as analyzing any DC circuit, except that the quantities
dealt with are vector instead of scalar. With the exception of equations dealing with power (P), equations in AC circuits are the same as those in DC circuits, using impedances (Z) instead of
resistances (R). Ohm's Law (E=IZ) still holds true, and so do Kirchhoff's Voltage and Current Laws.
To demonstrate Kirchhoff's Voltage Law in an AC circuit, we can look at the answers we derived for component voltage drops in the last circuit. KVL tells us that the algebraic sum of the voltage
drops across the resistor, inductor, and capacitor should equal the applied voltage from the source. Even though this may not look like it is true at first sight, a bit of complex number addition
proves otherwise:
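E_R + E_L + E_C ≈ 19.43 V ∠ 80.68° + 19.05 V ∠ 170.7° + 137.46 V ∠ -9.32° ≈ (3.15 + j19.17) + (-18.80 + j3.08) + (135.65 - j22.26) ≈ 120 + j0 V. (The magnitudes and angles here are taken from the SPICE results printed below, since the original summation figure is an image that is not included in this copy.)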
Aside from a bit of rounding error, the sum of these voltage drops does equal 120 volts. Performed on a calculator (preserving all digits), the answer you will receive should be exactly 120 + j0
We can also use SPICE to verify our figures for this circuit:
ac r-l-c circuit
v1 1 0 ac 120 sin
r1 1 2 250
l1 2 3 650m
c1 3 0 1.5u
.ac lin 1 60 60
.print ac v(1,2) v(2,3) v(3,0) i(v1)
.print ac vp(1,2) vp(2,3) vp(3,0) ip(v1)
freq v(1,2) v(2,3) v(3) i(v1)
6.000E+01 1.943E+01 1.905E+01 1.375E+02 7.773E-02
freq vp(1,2) vp(2,3) vp(3) ip(v1)
6.000E+01 8.068E+01 1.707E+02 -9.320E+00 -9.932E+01
The SPICE simulation shows our hand-calculated results to be accurate.
As you can see, there is little difference between AC circuit analysis and DC circuit analysis, except that all quantities of voltage, current, and resistance (actually, impedance) must be handled in
complex rather than scalar form so as to account for phase angle. This is good, since it means all you've learned about DC electric circuits applies to what you're learning here. The only exception
to this consistency is the calculation of power, which is so unique that it deserves a chapter devoted to that subject alone.
• REVIEW:
• Impedances of any kind add in series: Z[Total] = Z[1] + Z[2] + . . . Z[n]
• Although impedances add in series, the total impedance for a circuit containing both inductance and capacitance may be less than one or more of the individual impedances, because series inductive
and capacitive impedances tend to cancel each other out. This may lead to voltage drops across components exceeding the supply voltage!
• All rules and laws of DC circuits apply to AC circuits, so long as values are expressed in complex form rather than scalar. The only exception to this principle is the calculation of power, which
is very different for AC. | {"url":"http://www.electronicsteacher.com/alternating-current/reactance-and-impedance-rlc/series-rlc.php","timestamp":"2014-04-20T00:43:20Z","content_type":null,"content_length":"28105","record_id":"<urn:uuid:e8ef6c27-de9d-4e3a-abda-039f333872ae>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
[plt-scheme] Htdp chapter 12 insertion sort
From: Marco Morazan (morazanm at gmail.com)
Date: Tue Jun 2 07:47:21 EDT 2009
> ;; sort : list-of-numbers -> list-of-numbers (sorted)
> ;; to create a list of numbers with the same numbers as
> ;; alon sorted in descending order
> (define (sort alon)
> (cond
> [(empty? alon) empty]
> [(cons? alon) (insert (first alon) (sort (rest alon)))]))
> - what is (cons? alon) here, is it required there?
What exactly is bothering you? cons? is a primitive used to determine
if its input is a constructed list. Given that the contract guarantees
a (listof number) as input, you can substitute (cons? alon) with else
in the above function. In other words, given the above contract there
is no need to check if alon is a constructed list after knowing it is
not an empty list.
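For reference, the rewritten clause (plus the usual HtDP insert helper, which was not shown in the quoted message) would look something like this:

(define (sort alon)
  (cond
    [(empty? alon) empty]
    [else (insert (first alon) (sort (rest alon)))]))

;; insert : number list-of-numbers (sorted in descending order) -> list-of-numbers
(define (insert n alon)
  (cond
    [(empty? alon) (cons n empty)]
    [else (if (>= n (first alon))
              (cons n alon)
              (cons (first alon) (insert n (rest alon))))]))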
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2009-June/033496.html","timestamp":"2014-04-16T22:02:30Z","content_type":null,"content_length":"6085","record_id":"<urn:uuid:64f4c6d9-fd01-4243-a6ce-0191763df335>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Which of the following is not true about the electric field intensity of a uniformly charged solid sphere?
1. It is directly proportional to the distance from the center of the sphere.
2. It decreases as the square of the distance from the surface of the sphere.
3. It decreases as the square of the distance from the center of the sphere.
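For reference (the thread itself contains no answer, and this note is not from the original page): for a uniformly charged solid sphere of total charge Q and radius R, E = kQr/R^3 inside the sphere (r ≤ R) and E = kQ/r^2 outside (r ≥ R), with r measured from the center in both cases; so statement 2, which measures distance from the surface, is the one that is not true.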
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51ca2b19e4b063c0de5a6884","timestamp":"2014-04-19T07:31:29Z","content_type":null,"content_length":"28891","record_id":"<urn:uuid:bb6fce47-8420-4c62-a524-8795719d4906>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Los Angeles Valley College: Math Lab
The Philip S. Clarke Math Lab
LA Valley College has 2 Math Tutoring Labs.
The Math Lab is intended for any students enrolled in any of the Math 100 level courses, Math 215 or Statistics.
The Transfer Math Lab is intended for students enrolled in Math 238, 240, 245, 260, 259, 265, 266, 267, 270 and 275.
The Math Lab provides free tutoring in mathematics for all actively enrolled LA Valley College students enrolled in any of the Math 100 level courses, Math 215 or Statistics. If the Transfer Math Lab
is closed, students in the more advanced courses are welcome to use the Math Lab also.
A valid LA Valley College id is required for entrance. In addition to one on one tutoring, the lab also provides free access to math textbooks, student solution manuals and computers.
Location: Library and Academic Resource Center (LARC) 226
Hours of Operation - Fall and Spring Semester
The Math Lab will be open from the second week of the fall and spring semester.
Monday to Thursday from 9:45 am to 6:30 pm.
Saturday from 10:00 am to 2:00 pm
Laptop computers are available for students to use in the lab for online Math Homework Only.
One on One Tutoring
One on one tutoring appointments for students in Math 105, 110, 112, 113, 114, 115 and 125
For appointments call 1-818-947-7263 Monday to Thursday from 10:00 am to 6:15 pm.
Students enrolled in Math 105, 110 and 112 may make an appointment up to 7 days in advance.
Students enrolled in Math 113, 114, 115 and 125 may make an appointment up to 3 days in advance.
Students willing to be tutored in a group of 2 or more friends may make an appointment up to 7 days in advance.
The Transfer Math Lab provides free tutoring in mathematics for all actively enrolled LA Valley College students who are enrolled in Math 238, 240, 245, 260, 259, 265, 266, 267, 270 and 275. A valid
LA Valley College id is required for entrance.
Location: Library and Academic Resource Center (LARC 219)
Hours of Operation - Fall and Spring Semester
The Transfer Math Lab will be open from the second week of the fall and spring semester.
Monday to Thursday 10:00 to 5:30
Friday 9:00 to 12:30
The Transfer Math Lab is staffed by math professors and tutors who are more advanced in mathematics than those in the Math Tutoring Lab.
Transfer Math Lab Supervisor
Mostapha (Steve) Barakat
Math Lab Supervisor
John Kawai, PhD.
MS 104B
Math Lab Instructional Assistant
Nick Olshanskiy, PhD.
LARC 226 | {"url":"http://lavc.edu/math/mathlab.html","timestamp":"2014-04-19T19:33:08Z","content_type":null,"content_length":"17569","record_id":"<urn:uuid:b03bc76d-87e1-43c2-b37b-3f0da7b84587>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pittsburg, CA ACT Tutor
Find a Pittsburg, CA ACT Tutor
...I love to teach! I have a simple tutoring philosophy of taking a student where he is with his strengths and struggles and building upon these to help this student achieve success to the best
of his ability. It is a great feeling as a tutor to work with a student and then to see him progress not only academically, but in his confidence as a student.
11 Subjects: including ACT Math, geometry, ASVAB, algebra 1
...I also can teach the basics for geometry and Algebra II. I teach elementary math, pre-algebra, algebra I, and the basics for algebra II and geometry.
27 Subjects: including ACT Math, English, reading, algebra 1
...I plan to then go through the credential program and then become a high school teacher. I received a 5/5 on my AP Calculus Test, and ever since then I knew that I really wanted to help people,
like you, understand and love Math as I do. I always loved Algebra more than any other subject though.
12 Subjects: including ACT Math, geometry, ASVAB, ESL/ESOL
...Besides Biology, I have taken various classes on Chemistry including Organic Chemistry and Calculus classes. Math is by far my favorite and best subject because there are a lot of ways that
difficult math can become easy. You've just got to look at the problems in a different way.
27 Subjects: including ACT Math, English, writing, reading
...I have a B.Sc. in Economics/Operations Research and I studied optimization, graph theory and applied Operations Research methodology to research and modeling. I worked more than 10 years in
research using different forms of differential equations covering stiff ODEs and multidimensional equation...
41 Subjects: including ACT Math, calculus, geometry, statistics
Related Pittsburg, CA Tutors
Pittsburg, CA Accounting Tutors
Pittsburg, CA ACT Tutors
Pittsburg, CA Algebra Tutors
Pittsburg, CA Algebra 2 Tutors
Pittsburg, CA Calculus Tutors
Pittsburg, CA Geometry Tutors
Pittsburg, CA Math Tutors
Pittsburg, CA Prealgebra Tutors
Pittsburg, CA Precalculus Tutors
Pittsburg, CA SAT Tutors
Pittsburg, CA SAT Math Tutors
Pittsburg, CA Science Tutors
Pittsburg, CA Statistics Tutors
Pittsburg, CA Trigonometry Tutors
Nearby Cities With ACT Tutor
Alamo, CA ACT Tutors
Albany, CA ACT Tutors
Antioch, CA ACT Tutors
Brentwood, CA ACT Tutors
Burlingame, CA ACT Tutors
Castro Valley ACT Tutors
Concord, CA ACT Tutors
Danville, CA ACT Tutors
Diamond, CA ACT Tutors
Lafayette, CA ACT Tutors
Oakley, CA ACT Tutors
Pacifica ACT Tutors
Pleasant Hill, CA ACT Tutors
San Bruno ACT Tutors
Walnut Creek, CA ACT Tutors | {"url":"http://www.purplemath.com/Pittsburg_CA_ACT_tutors.php","timestamp":"2014-04-19T02:07:01Z","content_type":null,"content_length":"23805","record_id":"<urn:uuid:fd63aa5f-7b16-47d5-a347-89f703a7bfe4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simultaneous decomposition into generalized eigenvectors
up vote 2 down vote favorite
Hi! This is my first question here, so please excuse me if it is too elementary.
I was wondering if the notion of a simultaneous decomposition into eigenspaces could be generalized in a special way I describe below. Let $V$ be a vector space over an algebraically closed field $k$
and let $T \subset \mbox{End}(V)$ be a finite dimensional subspace consisting of pairwise commuting and diagonizable endomorphisms. Than we have a decomposition
$\begin{align*} V = \bigoplus_{\lambda \in T^{*}} V_{\lambda}, \end{align*}$
where $V_{\lambda} = \lbrace v\in V \hspace{0.3em}\lvert \hspace{0.3em} xv = \lambda v \mbox{ for all } x\in T \rbrace$ and $T^{*}$ is the dual space of $T$.
I was wondering now if a very similar thing in another context might be possible as well. Some notations first. Let $V$ be as above and let $f \in \mbox{End}(V)$. Then set $\mbox{Hau}(f,\lambda) = \bigcup_{n\ge 0} \mbox{ker}(f-\lambda\cdot\mbox{id})^n$. It is known that $V = \bigoplus_{\lambda\in k} \mbox{Hau}(f,\lambda)$ if and only if $f$ is locally finite.
Now let $S\subset \mbox{End}(V)$ be an abelian, finitely generated subalgebra such that each $x\in S$ is locally finite. By $S^{\times}$ I denote the set of algebra homomorphisms $S\to k$ (that map
$1$ to $1$) and for $\chi\in S^{\times}$ I denote
$\begin{align*} \mbox{Hau}_s(S,\chi) = \bigcap_s\mbox{Hau}(s,\chi(s)), \end{align*}$
where $s$ runs over all $s\in S$. My question now is the following: is it true that
$\begin{align*} V = \bigoplus_{\chi\in S^{\times}}\mbox{Hau}_s(S,\chi) ? \end{align*}$
I have serious difficulties proving it. My attempts so far have been that $S$ must be isomorphic to $k[x_1, \dots, x_l]/I$ for some $l$, and I tried induction over $l$. The above equality seemed basic
enough for me to be found in any text on linear algebra - I thought. But I did not find it. I would be very very glad for any pointers to literature or anything else. Or is the statement false in
this way?
Thank you very much.
To give you my motivation for such a question: In the representation theory in the context of category $\mathcal{O}$, $\mathcal{O}$ can be decomposed into blocks, parameterised by algebra
homomorphisms $\chi: \mbox{Z}(\mathfrak{g})\to k$, where each $M\in \mathcal{O}_{\chi}$ satisfies
$\begin{align*} \forall z\in \mbox{Z}(\mathfrak{g}) \forall v\in M: (z-\chi(z))^n v = 0 \mbox{ for some } n>0 \mbox{ depending on } z. \end{align*}$
Since $M$ is an $\mbox{U}(\mathfrak{g})$-module, we get an algebra homomorphism $\mbox{Z}(\mathfrak{g})\to \mbox{End}(M)$. $\mbox{Z}(\mathfrak{g})$ is known to be isomorphic to a polynomial algebra
in finitely many variables. The image of $\mbox{Z}(\mathfrak{g})$ under this morphism would play the role of $S$ in the above paragraph, and if I had the statement I want to prove, it would explain
why each $\mbox{U}(\mathfrak{g})$-module decomposes into a direct sum, where each summand belongs to a block...
linear-algebra rt.representation-theory
2 Answers
I'm not sure if this precise formulation is standard linear algebra, but it is true. The important point is that $S$ acts locally finitely on $V$ (be careful, this only works because
$S$ is commutative): if $v$ is a random vector, and the $x_i$'s are generators of $S$, then there's some minimal $m_i$ such that $x_i^{m_i}v=a_{m_i-1}x_i^{m_i-1}v+\cdots$, and the space $S\cdot v$ is spanned by monomials in the $x_i$'s where the power of $x_i$ is less than $m_i$ (just check that any linear combination of these times an $x_i$ can be written in this form, using the
relation above).
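(Explicitly: $S\cdot v$ is spanned by the finitely many monomials $x_1^{e_1}\cdots x_l^{e_l}v$ with $0 \le e_i < m_i$, so $\dim_k S\cdot v \le m_1\cdots m_l < \infty$, which makes the local finiteness completely concrete.)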
Thus, one can decompose any vector $v\in V$ by simply considering $S\cdot v$ and decomposing this using the finite dimensional result. This shows that $V$ is the sum of these subspaces, and their intersections are trivial essentially by definition.
For category $\mathcal{O}$ this is really unnecessary though; you can consider the action of the center on the endomorphism space (which is finite dimensional) of your module, and the
projections of the identity to the different generalized eigenspaces will be idempotents projecting to the desired block decomposition.
First of all: thank you very much! That helps me a lot! But do you mean that my module itself is finite dimensional? One typical example would be a verma module, and they are in
generally not finite dimensional. Or do you mean that the endomorphism spaces are finite dimensional? If true, why are they finite dimensional? (The merely linear endomorphisms of a
verma would also form an infinite dimensional space). I'm not at all trying to argue with you, just trying to learn. Thank you! – Sh4pe Dec 12 '11 at 16:20
I meant the endomorphism space. This is finite dimensional since any $\mathfrak{g}$-module map between objects in category $\mathcal{O}$ is determined by what it does on finitely many
weight spaces (those which appear as highest weights in composition factors) and the space of vector space maps between those weight spaces is finite dimensional. For example, any
endomorphism of a Verma module sends the highest weight vector to a highest weight vector, and thus is scalar multiplication. – Ben Webster♦ Dec 12 '11 at 19:41
Ben has addressed the general question here, which as he suggests is not "standard" linear algebra. I guess this set-up might be relevant in categories of modules with fewer finiteness
restrictions than category $\mathcal{O}$. Anyway it's probably helpful to add explicit references for the motivating example, since that much is standard by now in representation theory of
Lie algebras. The first chapter of my 2008 AMS text on the BGG category discusses central characters. Theorem 1.11 shows explicitly that the Hom space is finite dimensional for any two
modules in the category. There is also a discussion of block decomposition in the category (1.13), though I prefer to define "block" more narrowly than is done in the question when the weights involved are non-integral (4.9).
By the way, in the first part of the question one unhelpful phrase should be omitted: "and $T^*$ is the dual space of $T$."
With this unhelpful phrase I was trying to set up the notation I would like to use, since I've seen other notations for dual spaces as well. I agree that this is somewhat lengthy and too
verbose. Apart from that, do you think that such a phrase is harmful? Thank you very much for you suggestions! I appreciate that. – Sh4pe Dec 14 '11 at 10:00
I was just pointing out that the dual space of T plays no further role in the question here. A more concise formulation of questions is always welcome, as long as essential detail and
notation is included. – Jim Humphreys Dec 14 '11 at 23:25
Not the answer you're looking for? Browse other questions tagged linear-algebra rt.representation-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/83242/simultaneous-decomposition-into-generalized-eigenvectors","timestamp":"2014-04-20T18:28:18Z","content_type":null,"content_length":"63051","record_id":"<urn:uuid:85097dc1-e7d1-49d2-8d11-f6aa3f1f8248>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clifton, VA Precalculus Tutor
Find a Clifton, VA Precalculus Tutor
...I am a senior software engineer with over 20 years experience. In college, I had a major in Math with minor in Computer Science. I have a Masters degree in pure mathematics.
37 Subjects: including precalculus, physics, calculus, GRE
...Feel free to read my regular blog posts on math education. Having a strong understanding of Algebra 1 is quintessential for a student's success in higher mathematics. This is where most
students start to struggle, and it continues into the later years. Algebra 2 is my favorite subject.
24 Subjects: including precalculus, reading, calculus, geometry
...These courses included fundamental theory and techniques (including numerical) of linear algebra and its applications to engineering and physics. I have also taught undergraduate and graduate
courses involving linear algebraic techniques. I am an instructor in Electrical Engineering.
16 Subjects: including precalculus, calculus, physics, statistics
...Some lack confidence, others lack motivation. Whatever the case may be, everyone has their story and the potential for improvement. If you’re interested in working with me, feel free to send
me an email and inquire about my availability.
9 Subjects: including precalculus, calculus, physics, geometry
...While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice a week, and working one-on-one with students. After graduating, I
taught high school math for one year (courses were College Prep, Algebra II, and Geometry), but I am well ver...
10 Subjects: including precalculus, calculus, geometry, algebra 1
Related Clifton, VA Tutors
Clifton, VA Accounting Tutors
Clifton, VA ACT Tutors
Clifton, VA Algebra Tutors
Clifton, VA Algebra 2 Tutors
Clifton, VA Calculus Tutors
Clifton, VA Geometry Tutors
Clifton, VA Math Tutors
Clifton, VA Prealgebra Tutors
Clifton, VA Precalculus Tutors
Clifton, VA SAT Tutors
Clifton, VA SAT Math Tutors
Clifton, VA Science Tutors
Clifton, VA Statistics Tutors
Clifton, VA Trigonometry Tutors | {"url":"http://www.purplemath.com/Clifton_VA_precalculus_tutors.php","timestamp":"2014-04-19T17:09:11Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:b959e5db-c30d-4563-8762-7f51cf14db4e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
Litchfield Park Math Tutor
Find a Litchfield Park Math Tutor
...This is one of my favorite subjects to tutor as the majority of students need help in the same course that I'm teaching. I have been playing the piano since I was 4 years old, and I love this
instrument. I was classically trained through the Royal Conservatory of Music in Canada, and have passed my Grade 9 level examination.
28 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2
...I graduated Cum Laude from UC Irvine. I attended USC and graduated with a Doctors Degree in 1984 in Dentistry. I served in the United States Navy from 1992 to 1996 stationed at 29 Palms Marine
Corps base and part of the 23rd Dental Company.
7 Subjects: including algebra 1, biology, chemistry, prealgebra
My name is Steven and I studied Physics, Astronomy, and Mechanical Engineering for 4 years at Northern Arizona University. My fiancee and I moved to Surprise, AZ, recently for her Nursing degree,
and I plan on finishing my degree at Arizona State University soon. If you are looking for any tutorin...
11 Subjects: including algebra 1, trigonometry, statistics, prealgebra
...Bachelor's Degree obtained in Elementary Education. I am certified K-8. I am a recent college graduate with a degree in Elementary Education and teaching certification for grades K-8 and
structured English Immersion K-12.
17 Subjects: including algebra 1, English, prealgebra, writing
...I am able to provide information and teaching for all general studies that are required for any grade of K-12 as well as many other activity based learning such as sports and hobbies. As well
as general studies information, I am also able to aid in many testings. I am knowledgeable in GED testing, ACT Prep Testing, and SAT Prep Testing.
40 Subjects: including algebra 1, algebra 2, ACT Math, SAT math | {"url":"http://www.purplemath.com/Litchfield_Park_Math_tutors.php","timestamp":"2014-04-16T04:36:04Z","content_type":null,"content_length":"23910","record_id":"<urn:uuid:47387254-35fc-4f7d-a2a8-d1d57585a35d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adapted from “Cracking the Maya Code,” a NOVA activity.
We’re familiar with a method of tracking time that uses days, months, years, decades, and centuries. This method of timekeeping is based upon the Gregorian Calendar System. The Maya, however,
measured time in kins, uinals, tuns, katuns and baktuns using a system called the Long Count. If you add the numbers in a Maya Long Count date, the sum is the number of days from the beginning of
the Maya Fourth Creation: August 13, 3114 B.C.
Maya Long Count dates are written as a series of numbers separated by periods. For example, 12 . 18 . 14 . 11. 16 (December 31, 1987) is the date you will use as a starting point for your
calculations. The same date is shown below in its separate component parts above its representative glyph.
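(The conversion chart and the glyph image are not included in this text-only copy. For reference, the standard Long Count conversions are: 1 kin = 1 day; 1 uinal = 20 kin = 20 days; 1 tun = 18 uinals = 360 days; 1 katun = 20 tuns = 7,200 days; 1 baktun = 20 katuns = 144,000 days.)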
Step One: Using the “Maya Long Count Conversion” chart above, convert each place value in the date 12 . 18 . 14 . 11 . 16 into days. Add these five numbers together and subtract 2 to get the total
number of days. A formula has been provided below to help you get started. You will need to do your calculations on another sheet of paper.
12*Baktun + 18*Katun + 14* Tun + 11*Uinal + 16*Kin – 2 = ________days
Step Two: Record your birth date (in the Gregorian method). If you were born prior to January 1, 1988, calculate the number of days from the day you were born to December 31, 1987 (Answer A). If you
were born on or after January 1, 1988, calculate the number of days from this date to the day you were born (Answer B). Keep in mind that leap years have an extra day. The chart below will help you
with the number of days for each month. Record this number.
Note: The following are leap years and have a total of 366 days (a 29th day in February): 1960, 1964, 1968, 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, and 2012. All non-leap years
have 365 days.
Step Three: If you calculated answer A, subtract this number from the Step One answer. If you calculated answer B, add this number to the answer from Step One. Record this number.
Step Four:
Convert the number of days since the Maya Fourth Creation to your birth date in Maya Long Count using the “Maya Long Count Conversions” chart.
To calculate your birthday:
How many whole baktuns are there in C days? This number (we’ll call it D) goes in the baktun position.
How many days are left over from C after you subtract the number of days in D baktuns? Call this E.
How many whole katuns are in E days? Call this number F and put it in the katun position.
How many days are left over from E after you subtract the number of days in F katuns? Call this number G.
How many whole tuns are in G days? Call this number H and put it in the tun position.
How many days are left over from G after you subtract the number of days in H tuns? Call this number I.
How many whole uinals are in I days? Call this number J and put it in the uinal position.
How many days are left over after you subtract the number of days in J uinals? This is the number of kin in your birthday.
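If you would like to double-check your hand arithmetic, the short Python script below (not part of the original activity) carries out Step One and Step Four using the standard conversions noted earlier:

KIN, UINAL, TUN, KATUN, BAKTUN = 1, 20, 360, 7200, 144000   # days per unit

def long_count_to_days(baktun, katun, tun, uinal, kin):
    # Step One: total days since the Fourth Creation, including the activity's "- 2" adjustment
    return baktun*BAKTUN + katun*KATUN + tun*TUN + uinal*UINAL + kin*KIN - 2

def days_to_long_count(days):
    # Step Four: peel off whole baktuns, katuns, tuns and uinals in turn; what remains is the kin
    parts = []
    for unit in (BAKTUN, KATUN, TUN, UINAL):
        parts.append(days // unit)
        days %= unit
    parts.append(days)
    return parts

print(long_count_to_days(12, 18, 14, 11, 16))   # 1862874 days for 12.18.14.11.16 (December 31, 1987)
print(days_to_long_count(1862874 + 365))        # example: Step Four applied to a day count 365 days larger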
Fill in the spaces using your calculations, and check your answer here by plugging it into the applet. | {"url":"http://blog.hmns.org/tag/katuns/","timestamp":"2014-04-20T10:47:07Z","content_type":null,"content_length":"47304","record_id":"<urn:uuid:79f9985f-c55b-413a-b50b-8aa917368135>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help for a young programmer? - Game Industry Job Advice
I've been learning a lot of new things recently, but I still have a few questions that I need to ask. I plan to be a computer scientist with a hobby of game development. My goal is to get good at
mathematics, get good at programming, and hopefully one day go to M.I.T.
1. Should I study mathematics before I study programming? I was recently having a conversation with a very intelligent person and he told me that I should study mathematics at least up to calculus before I get into programming. I'm now studying algebra over the summer, since I will be put in algebra next year in eighth grade, so I just want to know what your opinions are....on whether I should focus on mathematics before I learn programming. I plan to be a computer scientist when I grow up, but I would really love game development as a hobby. Here's my list of mathematics that I plan on studying if it matters....
2. What do I need to know if I wanted to build my own gaming device like the PS3? Do I need to know a lot about engineering...and is programming needed, regarding the game engine it runs on....in other words is programming needed for the hardware or is that engineering?
3. (This sort of relates to my last question) What do I need to know to create my own game engine? I hear there's different programming languages that are used for 3d graphics...I think someone told me SDL and SFML, but could someone explain a bit more?
I plan to study these:
Basic Math
// Already completed the ones in dark blue
Pre Algebra
Algebra 1
// I'm learning this next year in 8th grade, but plan to learn it over this summer
// I also plan on learning the subjects that are purple
Algebra 2
Calculus 1
Calculus 2
Calculus 3
Computation Theory
Real Analysis
// The subjects in red I am unsure of, but am also a bit curious about
Complex Analysis
Abstract Algebra
Point-Set Topology
Set Theory
Differential Equations
Number Theory
Measure Theory
Category Theory
----PROGRAMMING----
C#
C++
Python
Assembly
....Then I plan on learning things like Java and some web development languages.
----COMPUTER HARDWARE ENGINEERING----
Still trying to figure out what exactly I need to study in this topic. I ordered a supposedly decent book, but I don't think it will help me very much since I don't know exactly what I need to study on the topic....The book only cost me $4....I got it REALLY cheap, but money isn't what I'm worried about. I just want to learn everything correctly and not get lost in something just because I bought a book that wasn't what I was looking for or because I got the wrong information. Computer hardware engineering is the only topic left that I need to buy books on to continue my research on computers this summer. At the moment I'm gathering a collection of books I plan to study off of, but like I said I need more info on computer hardware engineering to buy the books I need and continue my studies. I know you said that I should learn each thing a bit at a time as I go on...that's what I'm doing, but I'm planning ahead of time. I like to plan ahead of time so I know what I need to work on in order and I stick to my original plan.
........I also plan on learning web development libraries, but those are the main programming libraries I want to focus on.
Edited by Cdunn-1999, 30 May 2012 - 03:12 PM. | {"url":"http://www.gamedev.net/topic/625483-help-for-a-young-programmer/","timestamp":"2014-04-25T02:42:42Z","content_type":null,"content_length":"138905","record_id":"<urn:uuid:f0a408be-2797-4c2f-8910-8acde2e00225>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
DPMMS Seminars 16 -21 November 1998
UNIVERSITY OF CAMBRIDGE
Department of Pure Mathematics and Mathematical Statistics
16 Mill Lane, Cambridge CB2 1SB
THIS WEEK'S SEMINARS
Monday 16^th November
Seminar: Geometry Seminar
Location & Time: Syndics Room, DAMTP at 2.00 p.m.
Speaker: Malcolm Perry
Title: Sigma Models
Seminar: Topology Seminar
Location & Time: DPMMS Seminar Room 1 at 3.30 p.m.
Speaker: E. Friedlander
Title: Graph Mappings and Poincaré duality
Tuesday 17^th November
Seminar: Number Theory Seminar
Location & Time: Seminar Room 1, DPMMS at 4.15 p.m.
Speaker: D. Delbourgo
Title: An upper bound for the "bad part" of III (E/Q)
Seminar: Dynamical Systems Seminar
Location & Time: Syndics Room, DAMTP at 4.30 p.m.
Speaker: Bernd Krauskopf
Title: Codimension-three unfoldings of resonant homoclinic bifurcations
Seminar: Statistical Laboratory
Location & Time: Mill Lane Lecture Room 9 at 2.00 p.m.
Speaker: Peter Donnelly
Title: Models and inference in Molecular Population Genetics
Wednesday 18^th November
Seminar: Analysis Seminar
Location & Time: Seminar Room 1, DPMMS at 2.00 p.m.
Speaker: Professor G. Willis
Title: Totally disconnected groups
Seminar: Algebra Seminar
Location & Time: Seminar Room 2, DPMMS at 4.30 p.m.
Speaker: Dr S. Norton
Title: Moonshine Mysteries
Thursday 19^th November
Seminar: Combinatorics Seminar
Location & Time: Seminar Room 2, DPMMS at 2.15 p.m.
Speaker: Dr Alex Scott
Title: Arithmetic progressions of cycles in graphs
Friday 20^th November
Seminar: Theoretical Physics Colloquia
Location & Time: Syndics Room, DAMTP at 3.00 p.m.
Speaker: A. Rogers
Title: Gauge-fixing in Batalin-Fradkin-Vilkovisky Quantization
Seminar: Conformal Field Theory
Location & Time: Seminar Room 1 DPMMS, at 4.30 p.m.
Speaker: Vincent Lafforgue
Title: K-theory of Banach algebras and the Connes-Baum conjecture
(subject to possible cancellation) | {"url":"https://www.dpmms.cam.ac.uk/Seminars/Weekly/1998-1999/Seminar16November.html","timestamp":"2014-04-16T04:10:21Z","content_type":null,"content_length":"3672","record_id":"<urn:uuid:97750d87-5f81-4b4c-bd69-8d6d682927cd>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'll do some more research and try to find out exactly which it is: here is my calculator so far, it's made in Flash and deals with complex numbers: it isn't yet finished but it will also have a graphics side for plotting real graphs and complex graphs (3d):
http://img146.imageshack.us/my.php?image=calculator3jq.swf (round, ceil and floor obviously aren't working yet, nor is memory allocation or equation navigation and editing; everything else should work)
syntax for obscure functions:
logz finds the log of the number to base z, its syntax is:
"base logz number" - for example the log of 3.5 to base (-4+5.51i) would be
(-4+5.51i) logz 3.5
zroot finds the z'th root of the number, syntax is:
"root zroot number" - for example the (3+4i)'th root of 5 would be
(3+4i) zroot 5
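If you want to sanity-check results like these, the same two operations can be reproduced with Python's cmath (this snippet is just an illustration, not part of the calculator, and gives principal values only):

import cmath
print(cmath.log(3.5) / cmath.log(complex(-4, 5.51)))   # log of 3.5 to base (-4+5.51i)
print(5 ** (1 / complex(3, 4)))                         # the (3+4i)'th root of 5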
here are some renderings (in beta) for the graphics side: the engine I'm using is not the one that I will use for the proper renderings as this engine uses cameras which obviously slows things down a lot - to see them properly and at a good enough speed you'll need Flash Player 8/beta
(y = re[sin(z)]/20 )
(y = re[ z^conj(z) ]/40 )
(y = mod((z^conj(z)) / (e^z)) /40 ) | {"url":"http://www.mathisfunforum.com/post.php?tid=1516&qid=13890","timestamp":"2014-04-16T16:10:10Z","content_type":null,"content_length":"19207","record_id":"<urn:uuid:053fad63-98ba-4401-a256-5517f2b1c832>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
least upper bound
<theory> (lub or "join", "supremum") The least upper bound of two elements a and b is an upper bound c such that a <= c and b <= c and if there is any other upper bound c' then c <= c'. The least
upper bound of a set S is the smallest b such that for all s in S, s <= b. The lub of mutually comparable elements is their maximum but in the presence of incomparable elements, if the lub exists, it
will be some other element greater than all of them.
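An illustrative example (not part of the original entry): order the positive integers by divisibility; then 4 and 6 are incomparable, their common upper bounds are 12, 24, 36, ..., and their lub is 12, i.e. the least common multiple.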
Lub is the dual to greatest lower bound.
(In LaTeX, "<=" is written as \sqsubseteq, the lub of two elements a and b is written a \sqcup b, and the lub of set S is written as \bigsqcup S).
Last updated: 1995-02-03
Copyright Denis Howe 1985 | {"url":"http://foldoc.org/least+upper+bound","timestamp":"2014-04-16T07:16:37Z","content_type":null,"content_length":"5330","record_id":"<urn:uuid:a4bb52d1-f906-4360-b7e5-cb7def0f9c2c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Verification of the MDG Components Library in HOL
"... We describe a hybrid formal hardware verification tool that links the HOL interactive proof system and the MDG automated hardware verification tool. It supports a hierarchical verification
approach that mirrors the hierarchical structure of designs. We obtain advantages of both verification paradi ..."
Cited by 8 (2 self)
We describe a hybrid formal hardware verification tool that links the HOL interactive proof system and the MDG automated hardware verification tool. It supports a hierarchical verification approach
that mirrors the hierarchical structure of designs. We obtain advantages of both verification paradigms. We illustrate its use by considering a component of a communications chip. Verification with
the hybrid tool is significantly faster and more tractable than using either tool alone.
- TPHOLS 2001 SUPPLEMENTAL PROCEEDINGS, INFORMATIC RESEARCH REPORT EDI-INF-RR-0046 , 2001
"... An existential theorem, for the specification or implementation of hardware, states that for any inputs there must exist at least one output which is consistent with it. It is proved to prevent
an inconsistent model being produced and it is required to formally import the verification result from on ..."
Cited by 4 (3 self)
An existential theorem, for the specification or implementation of hardware, states that for any inputs there must exist at least one output which is consistent with it. It is proved to prevent an
inconsistent model being produced and it is required to formally import the verification result from one verification system to another system. In this paper
"... We investigate the verification of a translation phase of the Multiway Decision Graphs (MDG) verification system using the Higher Order Logic (HOL) theorem prover. In this paper, we deeply embed
the semantics of a subset of the MDG-HDL language and its Table subset into HOL. We define a set of funct ..."
Cited by 2 (1 self)
We investigate the verification of a translation phase of the Multiway Decision Graphs (MDG) verification system using the Higher Order Logic (HOL) theorem prover. In this paper, we deeply embed the
semantics of a subset of the MDG-HDL language and its Table subset into HOL. We define a set of functions which translate this subset MDG-HDL language to its Table subset. A correctness theorem for
this translator, which quantifies over its syntactic structure, has been proved. This theorem states that the semantics of the MDG-HDL program is equivalent to the semantics of its Table subset.
, 2002
"... We describe an approach for formally verifying the linkage between a symbolic state enumeration system and a theorem proving system. This involves the following three stages of proof. Firstly we
prove theorems about the correctness of the translation part of the symbolic state system. It interface ..."
Cited by 2 (2 self)
We describe an approach for formally verifying the linkage between a symbolic state enumeration system and a theorem proving system. This involves the following three stages of proof. Firstly we
prove theorems about the correctness of the translation part of the symbolic state system. It interfaces between low level decision diagrams and high level description languages. We ensure that the
semantics of a program is preserved in those of its translated form. Secondly we prove linkage theorems: theorems that justify introducing a result from a state enumeration system into a proof
system. Finally we combine the translator correctness and linkage theorems. The resulting new linkage theorems convert results to a high level language from the low level decision diagrams that the
result was actually proved about in the state enumeration system. They justify importing low-level external verification results into a theorem prover. We use a linkage between the HOL system and a
simplified version of the MDG system to illustrate the ideas and consider a small example that integrates two applications from MDG and HOL to illustrate the linkage theorems.
, 2000
"... . We describe a hardware verification tool called HOL-MDG. This tool combines the HOL theorem prover with an automated verification package, namely MDG. The aim of such a combination is to bring
together the strength of theorem proving and the automation of MDG. Moreover, the presented hybrid tool o ..."
. We describe a hardware verification tool called HOL-MDG. This tool combines the HOL theorem prover with an automated verification package, namely MDG. The aim of such a combination is to bring
together the strength of theorem proving and the automation of MDG. Moreover, the presented hybrid tool offers facilities for a hierarchical verification approach. 1. Introduction Formal verification
methods fall in one of three categories: theorem proving, decision diagrams based methods and symbolic simulation. In this work, we focus on combining the first two categories. In theorem proving
methods, the design's behavior as well as its structure are described in some formal logic. Then the design structure is proved to conform to the expected behavior using a set of axioms and inference
rules. Theorem provers generally provide very powerful reasoning and abstraction mechanisms. This makes it possible to deal with complex designs. Nevertheless, theorem provers require a deep
understanding of...
"... We describe an approach for formally linking a symbolic state enumeration system and a theorem proving system based on a veri ed version of the former. It has been realized using a simpli ed
version of the MDG system and the HOL system. Firstly, we have veri ed aspects of correctness of a simp ..."
We describe an approach for formally linking a symbolic state enumeration system and a theorem proving system based on a verified version of the former. It has been realized using a simplified version of the MDG system and the HOL system. Firstly, we have verified aspects of correctness of a simplified version of the MDG system. We have made certain that the semantics of a program is preserved in those of its translated form. Secondly, we have provided a formal linkage between the MDG system and the HOL system based on importing theorems. The MDG verification results can be formally imported into HOL to form a HOL theorem. Thirdly, we have combined the translator correctness theorems and importing theorems. This allows the MDG verification results to be imported in terms of a high level language (MDG-HDL) rather than a low level language. We also summarize a general method to prove existential theorems for the design. The feasibility of this approach is demonstrated in a case study that integrates two applications: hardware verification (in MDG) and usability verification (in HOL). A single HOL theorem is proved that integrates the two results. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.47.8092","timestamp":"2014-04-20T20:42:28Z","content_type":null,"content_length":"26314","record_id":"<urn:uuid:a78f3eba-f3f3-45aa-82e2-c82a46515792>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Let f(x) = x^2 – 81. Find f–1(x). Can someone double-check what I have?
Show me what ya got :)
Okay so I know first to substitute f(x) with y and then to reverse the x & y variables which would give me x=y^2-81. Then I use a root on both sides and get \[\pm x=y-9\] and finally I add 9 on
both sides to get \[\pm 9 \sqrt{x} = y\]
how u got y- 9 ??
Interesting :)
but incorrect :P
well we have y^2-81, idk factoring it would be (y+9)(y-9) but am I supposed to do that?
no, what you did was \(\sqrt{y^2-81}=y-9\) that is incorrect. whats to be done : let y= x^2-81 add 81 to both sides. then take square root of both sides...
oh okay :) so it'd be x+81= y^2 and once we square root both sides would the answer come out to \[\pm 9 \sqrt{x}=y\]
@hartnn Go easy on the kid :)
taking square root on both sides of \(x+81=y^2\) will give you \(\sqrt{x+81}=\sqrt{y^2}=y\) to get the inverse function as \(\sqrt{x+81}\) got this ? hba, i didn't get you.
okay I get it :) thx a ton! :)
welcome ^_^
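For completeness (this note is not part of the original thread): f(x) = x^2 - 81 is not one-to-one over all real numbers, so it only has an inverse once the domain is restricted; with the usual restriction to x ≥ 0 the inverse is \(f^{-1}(x)=\sqrt{x+81}\), and without that restriction one keeps both signs, \(y=\pm\sqrt{x+81}\).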
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50e333e3e4b028291d749f33","timestamp":"2014-04-21T10:14:39Z","content_type":null,"content_length":"54146","record_id":"<urn:uuid:6edde838-2fba-4c7a-9dc2-25fd399a7647>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Last modified: March 25, 1997
Table of Contents
Red-black trees are binary search trees with worst-case height no more than 2 log_2 n. Standard dictionary operations (insert, delete, search, minimum, maximum, successor, predecessor) take O (log n)
time. In contrast, standard binary search trees have Omega (n) height in the worst-case and O (log n) height on the average. They are, however, easier to implement.
Red-black trees have all the characteristics of binary search trees. In addition, red-black trees have the following characteristics.
• Each node of the tree contains the fields color, key, left, right, parent and an optional field rank.
Every node is colored either "red" or "black".
• The rank in a tree goes from zero up to the maximum rank which occurs at the root. The rank roughly corresponds to the height of a node. The rank of two consecutive nodes differs by at most one.
All the external nodes are black and have a rank zero.
If a black node has rank "r", then its parent has rank "r+1".
If a red node has rank "r", then its parent will have rank "r" as well.
• Consecutive red nodes are disallowed. This means every red node is followed by a black node; on the other hand, a black node may be followed by a black or red node. This implies that at most 50% of
the nodes on any path from external node to root are red.
• Every path from the root to a leaf contains the same number of black nodes. This number is called the black-height of the tree.
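A compact way to state the last two rules in code (an illustration added here, not part of the original notes; a tree is written as None for an external node or as a (color, left, right) tuple):

def black_height(t):
    # Returns the black-height, raising an error if the tree breaks either rule.
    if t is None:
        return 1                       # external nodes are black
    color, left, right = t
    hl, hr = black_height(left), black_height(right)
    assert hl == hr, "black-heights differ"
    if color == "red":
        for child in (left, right):
            assert child is None or child[0] == "black", "two reds in a row"
    return hl + (1 if color == "black" else 0)

print(black_height(("black", ("red", None, None), None)))   # prints 2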
In this section, we give an example of a red-black tree with five levels. We set the ranks starting at the bottom and show some sample ranks. Note that there is some freedom in assigning ranks to the nodes.
h is the height of a tree and r is the rank of the root.
Property 1: h/2 <= r <= h.
The rank of the root "r" is greater than or equal to "h/2", since there are at least 50% black nodes on any path from external node to root. "r" is also less than or equal to the height of the
tree "h".
Property 2: A node of rank r has >= 2^r external nodes in its subtree.
Proof (by induction):
i) Base case: Let r = 1; then the node has 2^1 = 2 external nodes in its subtree.
ii) Assumption: Assume true up to but not including r.
iii) Verification: Verify the claim for r.
Since the number of external nodes in the subtrees >= 2^(r-1) + 2^(r-1) = 2^r, then 2^r <= n + 1 external nodes as required.
Main property: The height of a red-black tree with n internal nodes is at most 2 log_2 (n + 1).
If r were > log_2 (n + 1), then it would have (by property 2) > 2^(log_2 (n + 1)) = n + 1 external nodes in its subtree, but this is impossible as there are only n + 1 external nodes in the tree.
Thus, h <= 2r (property 1) <= 2 log_2 (n + 1) (as shown above ex absurdo).
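For a concrete sense of the bound: a red-black tree with n = 1,000,000 internal nodes has height at most 2 log_2 (1,000,001) ≈ 40.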
Phase I, Standard insert.
Phase II, Fix red-black tree property.
Phase I
Phase I is as for the standard binary search tree. The tree is traversed and, when an external node is encountered, it is replaced with the new node. The new node must be colored red in order to
preserve the characteristics of the red-black tree.
However, if the newly inserted node's parent is red we will have two consecutive red nodes in a path, which is not allowed. So, we have to fix the tree.
Phase II
    While x is in case 2a and x is not root and p[x] is not root do:
        Increase rank of p[p[x]] by one
        (this makes p[x] and u[x] black and p[p[x]] red).
        x <- p[p[x]].
    If x is in case 1 or x is root or p[x] is root then stop.
    else [x is not root and p[x] is not root and x is in case 2b]
        Perform rotation as shown below and stop.
Case 1: p[x] is black.
The inserted node x is red and its parent p[x] is black, so we can terminate the insertion.
Case 2a: p[x] is red and u[x] is red.
If x, p[x] and u[x] are all red, then we can recolor u[x] and p[x] to black and move x toward the root of the tree to recheck. There are four symmetric cases to case 2a, and the figure below shows
only one of them.
Case 2b: p[x] is red and u[x] is black.
x and p[x] are red and u[x] is black.
The keys of the nodes fall naturally into place.
In the picture above, the tree above and the subtree below this portion of the tree are considered the outside world. In order to preserve the red-black tree properties, one must perform a small
local change (local surgery). The outside world must not see this change after the surgery.
Every insertion consists of a standard insert (at O (log n) cost, as h <= 2 log_2 (n + 1)), followed by a number of rank changes (corresponding to case 2a: there are at most log_2 (n + 1) of
these), and finally at most one rotation (case 2b). It is one of the few balanced binary search trees that requires only one rotation in the worst case per insertion.
Phase I of an insertion takes O (lg n) time.
Phase II takes O (lg n) time for the rank changes and O (1) time for the rotation.
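To make the three cases concrete, here is a minimal sketch of the Phase II fix-up loop in Java. It is illustrative only: it uses the usual color-based formulation rather than explicit ranks, and it assumes a Node class with color, parent, left and right fields, a root reference, RED/BLACK constants, and rotateLeft/rotateRight helpers defined elsewhere.

    // Phase II: restore the red-black properties after a standard insert of the red node x.
    void insertFixup(Node x) {
        while (x != root && x.parent.color == RED) {          // otherwise case 1: parent is black, stop
            Node g = x.parent.parent;                          // a red parent cannot be the root here, so the grandparent exists
            Node u = (x.parent == g.left) ? g.right : g.left;  // uncle of x (null counts as black)
            if (u != null && u.color == RED) {                 // case 2a: recolor and move x up two levels
                x.parent.color = BLACK;
                u.color = BLACK;
                g.color = RED;
                x = g;
            } else {                                           // case 2b: at most one (possibly double) rotation, then stop
                if (x.parent == g.left) {
                    if (x == x.parent.right) { x = x.parent; rotateLeft(x); }
                    x.parent.color = BLACK;
                    g.color = RED;
                    rotateRight(g);
                } else {
                    if (x == x.parent.left) { x = x.parent; rotateRight(x); }
                    x.parent.color = BLACK;
                    g.color = RED;
                    rotateLeft(g);
                }
            }
        }
        root.color = BLACK;                                    // the root is always black
    }

The case 2a branch corresponds to the rank increase of p[p[x]] in the pseudocode box above, and the case 2b branch performs the single local surgery, after which the loop terminates.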
Link to red-black tree applet.
Red-black trees were invented by R. Bayer [1] under the name "symmetric binary B-trees." Guibas and Sedgewick [2] studied their properties at length and introduced the red/black color convention.
CS660: RedBlack, B-Trees [Roger Whitney, San Diego State University].
MIT Scheme Reference [Tim Singletary (tsingletsingle@sunland.gsfc.nasa.gov)]
The (Combinatorial) Object Server [This page maintained by Barney Fife].
Vlad Web Space [Vladimir Forfuldinov, Windsor University].
Red-Black Trees [David Lewis, Swarthmore education].
Red-Black Tree Animation [Doug Ierardi and Ta-Wei Li, Developed with the assistance of C Aydin, S Deng, C Kawahara, S Kolim and J Tsou].
Red/Black Tree Demonstration [John Franco, University of Cincinnati].
Red Black Tree Simulation [Eli Hadad, Tel Aviv University].
The Red Black Tree Song [Sean D. Sandys, University of Washington].
Here is a song about red-black trees.
[1] R.Bayer. Symmetric binary B-trees: Data structure and maintenance algorithms.
Acta Informatica 1:290-306, 1972.
[2] Leo J. Guibas and Robert Sedgewick. A dichromatic framework for balanced trees.
In Proceedings of the 19th Annual Symposium on Foundations of Computer Science,
pages 8-21. IEEE Computer Society, 1978.
[3] Nicholas Wilt, "Classical Algorithms in C++",
Published by John Wiley & Sons, Inc. (1995), 209-229.
[4] Mark Allen Weiss, "Algorithms, Data Structures, and Problem Solving with C++",
Published by Addison-Wesley Company, Inc. (1996), 572-594.
Web page creators
(Java Applet) (Html, reference, graphics and researching sites) (Java Applet and Html) Copyright ©1997,by Cheryl Tom, James Leung, Mansour-Mohammad Esmaeil and Marco Raimo . All rights reserved.
Reproduction of all or part of this work is permitted for educational research use provided that this copyright notice is included in any copy. Disclaimer: this collection of notes is
experimental, and does not serve as-is as a substitute for attendance in the actual class. | {"url":"http://luc.devroye.org/1997notes/topic18/","timestamp":"2014-04-21T05:04:17Z","content_type":null,"content_length":"14124","record_id":"<urn:uuid:93d0d1cf-4154-43ef-96b4-73818465066b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of compressibility
In thermodynamics and fluid mechanics, compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure (or mean stress) change.
$\beta = -\frac{1}{V}\frac{\partial V}{\partial p}$
where V is volume and p is pressure. The above statement is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is adiabatic or
isothermal. Accordingly we define the isothermal compressibility as:
$\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T$
where the subscript T indicates that the partial derivative is to be taken at constant temperature. The adiabatic compressibility is defined as:
$\beta_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_S$
where S is entropy. For a solid, the distinction between the two is usually negligible.
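As a quick worked example of these definitions (a standard result, added here for illustration rather than taken from the original article): for an ideal gas, $pV = nRT$, so at fixed temperature $\left(\frac{\partial V}{\partial p}\right)_T = -\frac{nRT}{p^2} = -\frac{V}{p}$, giving
$\beta_T = \frac{1}{p}, \qquad \beta_S = \frac{1}{\gamma p}$
where the adiabatic value follows from $pV^\gamma = \mathrm{const}$ (equivalently, from the ratio $\beta_T/\beta_S = \gamma$ quoted further down).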
The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B). That page also contains some examples for different materials.
The term "compressibility" is also used in thermodynamics to describe the deviance in the thermodynamic properties of a real gas from those expected from an ideal gas. The compressibility factor is
defined as
$Z = \frac{p \tilde{V}}{R T}$
where $p$ is the pressure of the gas, $T$ is its temperature, and $\tilde{V}$ is its molar volume. In the case of an ideal gas, the compressibility factor $Z$ is equal to unity, and the familiar ideal gas law is recovered:
$p = \frac{RT}{\tilde{V}}$
Z can, in general, be either greater or less than unity for a real gas.
The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high
pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results.
A related situation occurs in hypersonic aerodynamics, where dissociation causes an increase in the “notational” molar volume, because a mole of Oxygen, as O2, becomes 2 moles of monatomic Oxygen and
N2 similarly dissociates to 2*N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter Z, defined for an initial 30 gram “mole” of air, rather than track the
varying mean molecular weight, millisecond by millisecond. This pressure dependent transition occurs for atmospheric Oxygen in the 2500K to 4000K temperature range, and in the 5000K to 10,000K range
for Nitrogen.
In transition regions, where this pressure dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant pressure heat capacity will greatly increase.
For moderate pressures, above 10,000K the gas further dissociates into free electrons and ions. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values
between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process and this greatly reduces the thermodynamic temperature of hypersonic gas
decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (non thermal) energy if the surface catalyzes the slower
recombination process.
The isothermal compressibility is related to the isentropic (or adiabatic) compressibility by the relation,
$\beta_S = \beta_T - \frac{\alpha^2 T}{\rho c_p}$
via Maxwell's relations. More simply stated,
$\frac{\beta_T}{\beta_S} = \gamma$
where $\gamma$ is the heat capacity ratio. See here for a derivation.
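A brief sketch of that derivation (standard thermodynamics, outlined here since only a link is given above; $\alpha$ is the volumetric thermal expansion coefficient and $\rho$ the density appearing in the relation above): combining it with the textbook identity $c_p - c_v = \frac{\alpha^2 T}{\rho\,\beta_T}$ gives
$\beta_S = \beta_T - \frac{\alpha^2 T}{\rho c_p} = \beta_T\left(1 - \frac{c_p - c_v}{c_p}\right) = \beta_T\,\frac{c_v}{c_p}$
and hence $\beta_T/\beta_S = c_p/c_v = \gamma$.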
Earth sciences
Vertical, drained compressibilities
Material β (m²/N)
Plastic clay 2×10⁻⁶ – 2.6×10⁻⁷
Stiff clay 2.6×10⁻⁷ – 1.3×10⁻⁷
Medium-hard clay 1.3×10⁻⁷ – 6.9×10⁻⁸
Loose sand 1×10⁻⁷ – 5.2×10⁻⁸
Dense sand 2×10⁻⁸ – 1.3×10⁻⁸
Dense, sandy gravel 1×10⁻⁸ – 5.2×10⁻⁹
Rock, fissured 6.9×10⁻¹⁰ – 3.3×10⁻¹⁰
Rock, sound <3.3×10⁻¹⁰
Water at 25 °C (undrained) 4.6×10⁻¹⁰
Compressibility is used in the Earth sciences to quantify the ability of a soil or rock to reduce in volume with applied pressure. This concept is important for specific storage, when estimating
groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (or same as porosity). The void space can be full of liquid or gas. Geologic materials
reduce in volume only when the void spaces are reduced, which expels the liquid or gas from the voids. This can happen over a period of time, resulting in settlement.
It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly
compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques.
Fluid dynamics
Aeronautical dynamics
Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the
speed of sound
, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for
World War II
era aircraft to reach speeds much beyond 800 km/h (500 mph).
Some of the minor effects include changes to the airflow that lead to problems in control. For instance, the P-38 Lightning with its thick high-lift wing had a particular problem in high-speed dives
that led to a nose-down condition. Pilots would enter dives, and then find that they could no longer control the plane, which continued to nose over until it crashed. Adding a "dive flap" beneath the
wing altered the center of pressure distribution so that the wing would not lose its lift. This fixed the problem.
A similar problem affected some models of the Supermarine Spitfire. At high speeds the ailerons could apply more torque than the Spitfire's thin wings could handle, and the entire wing would twist in
the opposite direction. This meant that the plane would roll in the direction opposite to that which the pilot intended, and led to a number of accidents. Earlier models weren't fast enough for this
to be a problem, and so it wasn't noticed until later model Spitfires like the Mk.IX started to appear. This was mitigated by adding considerable torsional rigidity to the wings, and was wholly cured
when the Mk.XIV was introduced.
The Messerschmitt Bf 109 and Mitsubishi Zero had the exact opposite problem in which the controls became ineffective. At higher speeds the pilot simply couldn't move the controls because there was
too much airflow over the control surfaces. The planes would become difficult to maneuver, and at high enough speeds aircraft without this problem could out-turn them.
Finally, another common problem that fits into this category is flutter. At some speeds the airflow over the control surfaces will become turbulent, and the controls will start to flutter. If the
speed of the fluttering is close to a harmonic of the control's movement, the resonance could break the control off completely. This was a serious problem on the Zero. When problems with poor control
at high speed were first encountered, they were addressed by designing a new style of control surface with more power. However this introduced a new resonant mode, and a number of planes were lost
before this was discovered.
All of these effects are often mentioned in conjunction with the term "compressibility", but in a manner of speaking, they are incorrectly used. From a strictly aerodynamic point of view, the term
should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed
of sound is approached. There are two effects in particular, wave drag and critical mach.
Wave drag is a sudden rise in drag on the aircraft, caused by air building up in front of it. At lower speeds this air has time to "get out of the way", guided by the air in front of it that is in
contact with the aircraft. But at the speed of sound this can no longer happen, and the air which was previously following the streamline around the aircraft now hits it directly. The amount of power
needed to overcome this effect is considerable. The critical mach is the speed at which some of the air passing over the aircraft's wing becomes supersonic.
At the speed of sound the way that lift is generated changes dramatically, from being dominated by Bernoulli's principle to forces generated by shock waves. Since the air on the top of the wing is
traveling faster than on the bottom, due to Bernoulli effect, at speeds close to the speed of sound the air on the top of the wing will be accelerated to supersonic. When this happens the
distribution of lift changes dramatically, typically causing a powerful nose-down trim. Since the aircraft normally approached these speeds only in a dive, pilots would report the aircraft attempting
to nose over into the ground.
In hypersonic aerodynamics, dissociation causes an increase in the “notational” molar volume (a mole of Oxygen, as O2, becomes 2 moles of monatomic Oxygen and N2 similarly dissociates to 2*N). This
pressure dependent transition occurs for atmospheric Oxygen in the 2500K to 4000K temperature range, and in the 5000K to 10,000K range for Nitrogen.
Dissociation absorbs a great deal of energy in a reversible process. This greatly reduces the thermodynamic temperature of hypersonic gas decelerated near an aerospace vehicle. In transition regions,
where this pressure dependent dissociation is incomplete, both the differential, constant pressure heat capacity and beta (the volume/pressure differential ratio) will greatly increase. The latter has
a pronounced effect on vehicle aerodynamics including stability.
See also | {"url":"http://www.reference.com/browse/compressibility","timestamp":"2014-04-19T09:01:33Z","content_type":null,"content_length":"94358","record_id":"<urn:uuid:ba587c88-229c-4ca3-b313-fee9998e2cf8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unending Loop Issue? Can't find it.
March 4th, 2011, 06:09 PM
Unending Loop Issue? Can't find it.
Although I think this is an issue with my loops structure/conditions this could also be an issue with how Java does arithmetic. I'm sure this is probably simple but I can't find it right now. :(
The polynomial object is composed of an array of size 100 which stores the coefficients of a polynomial in index locations corresponding to the power of each respective term. The purpose of this
code is to create a new polynomial object which is the result of multiplying two polynomials.
I'm pretty sure that the loop is not ending but I'm not sure why. The initial S.o.ps of i+j values are what I expect but then they go into enormous negative numbers and I have no idea why.
Code Java:
public Polynomial mult(Polynomial p)
throws ExponentOutOfRangeException
Polynomial multiPolynomial = new Polynomial();
//Start testcode
System.out.print("this polynomial ");
System.out.print("p polynomial ");
//End testcode
for(int i = 0; i < 99; i++)
for(int j = 0; j < 99; i++)
if((i+j) < MAX)
multiPolynomial.polyArr[i + j] = multiPolynomial.polyArr[i + j] + (this.polyArr[i] * p.polyArr[j]);
// Multiplies polynomial p to this polynomial without modifying this
// polynomial and returns the result.
// Precondition: None.
// Postcondition: The returned polynomial is the product of this
// and p. Both this and p are unchanged.
// Throws: ExponentOutOfRangeException if exponent is out of range.
return multiPolynomial;
By the way,
March 4th, 2011, 07:02 PM
Re: Unending Loop Issue? Can't find it.
Thanks to whoever fixed the way the code displays here. I was looking up the necessary tags just now. :) Anyone see the problem? Maybe I should include more code...
March 4th, 2011, 07:03 PM
Re: Unending Loop Issue? Can't find it.
Code Java:
for(int j = 0; j < 99; i++)
You're incrementing i in the j loop, not j :P
March 4th, 2011, 07:04 PM
Re: Unending Loop Issue? Can't find it.
You seem to be incrementing i twice.
Perhaps you should change it to j++;
As it is, i is being incremented twice and j is stuck at 0.
Musta posted at same time as helloworld922.
Oh well.
March 4th, 2011, 07:22 PM
Re: Unending Loop Issue? Can't find it.
Omigosh, how embarrassing! I looked at the problem is sooo many ways and I missed that simple item. I guess that's why you ask other people to look at code. Thanks guys! :D | {"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/7713-unending-loop-issue-cant-find-printingthethread.html","timestamp":"2014-04-20T01:12:02Z","content_type":null,"content_length":"12853","record_id":"<urn:uuid:0f4c1422-9a46-444e-9e0e-d6bf6df456d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: September 2008 [00257]
[Date Index] [Thread Index] [Author Index]
Re: Alternating sums of large numbers
• To: mathgroup at smc.vnet.net
• Subject: [mg91902] Re: Alternating sums of large numbers
• From: David Bailey <dave at Remove_Thisdbailey.co.uk>
• Date: Fri, 12 Sep 2008 05:27:55 -0400 (EDT)
• References: <gaar1q$156$1@smc.vnet.net>
As Mikhail Lemeshko wrote:
> Dear friends,
> I'm using a Mathematica 6, and I've faced with the following problem.
> Here is a copy of my Mathematica notebook:
> --------------------------------------------------------
> NN = 69.5;
> n = 7;
> coeff = (Gamma[2*NN - n + 1]*(2*NN - 2*n))/n!;
> f[z_] := Sum[((-1)^(\[Mu] + \[Nu])*Binomial[n, \[Mu]]*Binomial[n, \
> [Nu]]*Gamma[2*NN - 2*n + \[Mu] + \[Nu], z])/(Gamma[2*NN - 2*n + \[Mu]
> + 1]*Gamma[2*NN - 2*n + \[Nu] + 1]), {\[Mu],0, n}, {\[Nu], 0, n}];
> Plot[coeff*f[z], {z, 0, 100}]
> --------------------------------------------------------
> As you can see, I want to calculate a double alternating sum,
> consisting of large terms (products of Gamma-functions and binomial
> coefficients). Then I want to plot the result, in dependence on
> parameter z, which takes part in the summation as an argument of the
> incomplete Gamma-function, Gamma[2*NN - 2*n + \[Mu] + \[Nu], z].
> Apart from this, I have another parameter, n, which is an upper limit
> for both of sums, and also takes part in Gamma functions. When this
> parameter grows, the expression next to summation also increases. At
> some point, Mathematica begins to show very strange results - and my
> question is actually about this.
> For instance, if the parameter n=5, everything is O.K., the plot shows
> a smooth curve. When we set n=6, there appears a little "noise" at
> 60<z<80, which is of no sense. This noise increases with n and is huge
> for n=8.
> A suppose that this error is caused by the huge numbers with
> alternating signs, contributing to the summation - probably there are
> some mistakes introduced by numerical evaluation. I tried to play with
> Accuracy etc., but it does not help. I also investigated the
> possibility that the error is introduced not by the summation, but by
> the product of big numbers. According to this, I tried to compute the
> sum of Exp[Log[Gamma]+Log[Gamma]...] (the logarithm smoothly
> depends on z). But it does not help as well...
> I would very much appreciate your advice on such problem.
> Many thanks in advance,
> Mikhail.
As you already realise, the alternating sums are generating rounding
error (noise) because your calculations are being done at machine
precision. Once Mathematica starts a calculation in machine precision,
it basically stays at that precision, so you need to ensure that the
real numbers entering your calculation are high precision. Thus you need
to set
and also, Plot will by default inject real number values, which you can
override by using the WorkingPrecision option - setting it to some large
value - say 50. With both of these changes, you get a smooth graph.
David Bailey | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Sep/msg00257.html","timestamp":"2014-04-16T16:12:41Z","content_type":null,"content_length":"28166","record_id":"<urn:uuid:57ffed42-2401-4125-ab6b-169e16024421>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hercules Statistics Tutor
Find a Hercules Statistics Tutor
...Many students, who are new to statistics, think of it as “pure math” type of a subject; however there is a lot of real world application in statistics, and not just math. I teach my students
both the mathematical concepts of statistics/probability and how to deal with word problems: recognize th...
14 Subjects: including statistics, calculus, algebra 1, algebra 2
...Whether English wasn't their first language or they had a learning disability that was making the mathematics of chemistry challenging, they were able to succeed because of their hard work once
they were explained the concepts in their own terms. I began tutoring Calculus when I was taking i...
19 Subjects: including statistics, chemistry, physics, calculus
...I received an A in 12 of those classes, and a B in the other five. I am currently enrolled in a teaching credential program at Mills. In that program we study sociology as it relates to
education, language, race, and gender.
15 Subjects: including statistics, reading, calculus, writing
...Each session is one hour. In case your child has difficulties concentrating that long, a shorter time may be chosen in the beginning. After a few sessions, I usually know what can be reasonably
expected from a student and then discuss expectations with the student and the parents.
12 Subjects: including statistics, calculus, algebra 2, geometry
...I am an expert on math standardized testing, as stated in my reviews from previous students. I have worked on thousands of these types of problems and can show your student how to do every
single one, which will dramatically increase their test scores! I can help your student ace the following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS, and more.
59 Subjects: including statistics, chemistry, reading, physics | {"url":"http://www.purplemath.com/Hercules_statistics_tutors.php","timestamp":"2014-04-18T11:10:46Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:97a91a4e-be75-4053-b0a5-35ceca7d26b9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Machine Learning (Theory)
Several recent papers have shown that SVM-like optimizations can be used to handle several large families of loss functions.
This is a good thing because it is implausible that the loss function imposed by the world can not be taken into account in the process of solving a prediction problem. Even people used to the
hard-core Bayesian approach to learning often note that some approximations are almost inevitable in specifying a prior and/or integrating to achieve a posterior. Taking into account how the system
will be evaluated can allow both computational effort and design effort to be focused so as to improve performance.
A current laundry list of capabilities includes:
I am personally interested in how this relates to the learning reductions work which has similar goals, but works at a different abstraction level (the learning problem rather than algorithmic
mechanism). The difference in abstraction implies that anything solvable by reduction should be solvable by a direct algorithmic mechanism. However, comparing and constrasting the results I know of
it seems that what is solvable via reduction to classification versus what is solvable via direct SVM-like methods is currently incomparable.
1. Can SVMs be tuned to directly solve (example dependent) cost sensitive classification? Obviously, they can be tuned indirectly via reduction, but it is easy to imagine more tractable direct
2. How efficiently can learning reductions be used to solve structured prediction problems? Structured prediction problems are instances of cost sensitive classification, but the regret transform
efficiency which occurs when this embedding is done is too weak to be of interest.
3. Are there any problems efficiently solvable by SVM-like algorithms which are not efficiently solvable via learning reductions?
2 Comments to “SVM Adaptability”
1. The answer to question 1 is “yes”. Alex Smola showed how the ICML 2004 ‘any loss’ can be applied to example-dependent losses (and randomly sampled) losses, giving it the full generality of cost
sensitive classification.
2. [...] In addition, this also solves a problem: yes, any classifier can be effectively and efficiently applied on complex structured prediction problems via Searn. [...] | {"url":"http://hunch.net/?p=110","timestamp":"2014-04-20T13:30:00Z","content_type":null,"content_length":"32209","record_id":"<urn:uuid:56aab90f-9e08-463b-a86f-faa196aa4075>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
The basics of FPGA mathematics | EE Times
| {"url":"http://www.eetimes.com/messages.asp?piddl_msgthreadid=39252&piddl_msgid=236033","timestamp":"2014-04-18T01:35:05Z","content_type":null,"content_length":"157850","record_id":"<urn:uuid:bfafe15e-d501-4e4d-8172-7c3894df2054>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Earthstone 110 or FGM?
Also, remember that you can only count the dome space in the vicinity of the door, not the back of the oven.
While the front of the dome gets hotter than the rear, the whole dome is heated by the superheated air. If the whole dome is heated, the dome in the back gets counted.
Fabricate an insert, and pre-heat the oven with and without it. If the insert doesn't trim off at least 1/4 of the pre-heat time (and use at least 1/4 less wood to reach the same temps), I'll eat my | {"url":"http://www.pizzamaking.com/forum/index.php?topic=21071.msg214395","timestamp":"2014-04-21T13:40:00Z","content_type":null,"content_length":"92356","record_id":"<urn:uuid:2803c30d-cd97-42a9-a638-789c779a15bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume of a Rhombicuboctahedron
Date: 01/19/2001 at 01:47:06
From: Nicoal
Subject: Volume of rhombicuboctahedron
Dear Dr. Math,
I need the formula for finding the volume of a polyhedron called a
Thank you
Date: 01/19/2001 at 13:09:15
From: Doctor Rob
Subject: Re: Volume of rhombicuboctahedron
Thanks for writing to Ask Dr. Math, Nicoal.
For a picture, see this page from MathConsult - Dr. R. Maeder:
10: rhombicuboctahedron
I figured out the volume in the following way:
Let the length of each edge of the polyhedron be s. From its center,
connect line segments to each of its 24 vertices. Each edge of the
polyhedron and two of these lines to its center form a triangle. Each
face of the polyhedron and three or four of these triangles (along
with their interiors) form a pyramid with either a square base (18 of
these) or equilateral triangular base (8 of these).
Now the base of each square-based pyramid has area s^2 and its height
is s*(1+sqrt[2])/2 (found by drawing an octagonal cross-section of the
polyhedron). Now the volume of any pyramidal region is 1/3 its base
times its height, so the volume of each of these is:
V = s^3*(1+sqrt[2])/6
The base of each triangular pyramid has area s^2*sqrt(3)/4, and its
height is s*(3+sqrt[2])/(2*sqrt[3]) (found by drawing a cross-section
at a 45-degree angle to the first one). Then the volume of each of
these is:
V = s^3*(3+sqrt[2])/24
Thus the total volume of the rhombicuboctohedral region is:
V = 18*s^3*(1+sqrt[2])/6 + 8*s^3*(3+sqrt[2])/24
= 2*(6+5*sqrt[2])*s^3/3
The radius of its circumscribed sphere turns out to be:
R = s*sqrt(5+2*sqrt[2])/2
Its surface area is:
S = 18*(s^2) + 8*(s^2*sqrt[3]/4)
= 2*s^2*(9+sqrt[3])
The largest sphere that can be inscribed has radius r = s*(1+sqrt[2])/2,
the distance from the center to each square face. This sphere is
tangent to all the square faces at their centers, but does not
intersect any of the triangular faces, which are
s*(3+sqrt[2])/(2*sqrt[3]) =~ 1.274*s > r
away from the center.
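A quick numerical check of the decomposition (a short, purely illustrative Java program; the class
name is arbitrary) compares the sum of the pyramid volumes with the closed form:

    // Check the volume of a rhombicuboctahedron with edge length s = 1
    // by summing the 18 square-based and 8 triangle-based pyramids.
    public class RhombicuboctahedronCheck {
        public static void main(String[] args) {
            double s = 1.0;
            double squarePyramid   = Math.pow(s, 3) * (1 + Math.sqrt(2)) / 6;   // one of 18
            double trianglePyramid = Math.pow(s, 3) * (3 + Math.sqrt(2)) / 24;  // one of 8
            double fromPyramids = 18 * squarePyramid + 8 * trianglePyramid;
            double closedForm   = 2 * (6 + 5 * Math.sqrt(2)) * Math.pow(s, 3) / 3;
            System.out.println(fromPyramids);  // ~8.714045
            System.out.println(closedForm);    // ~8.714045
        }
    }

Both expressions print approximately 8.714045 for s = 1, which agrees with the formula above.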
I believe I have done the above computations correctly, but you should
redo them to check me. They seem to check with the following site:
Platonic And Archimedean Solids - Bruce A. Rawles
- Doctor Rob, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55259.html","timestamp":"2014-04-19T03:10:45Z","content_type":null,"content_length":"7433","record_id":"<urn:uuid:a4f43b39-8902-414f-b90a-bbcc82683ae3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
Approximating Congestion + Dilation in Networks via "Quality of Routing” Games
Sept. 2012 (vol. 61 no. 9)
pp. 1270-1283
A classic optimization problem in network routing is to minimize $C+D$, where $C$ is the maximum edge congestion and $D$ is the maximum path length (also known as dilation). The problem of computing the
optimal $C^{\ast}+D^{\ast}$ is NP-complete even when either $C^{\ast}$ or $D^{\ast}$ is a small constant. We study routing games in general networks where each player $i$ selfishly selects a path that
minimizes $C_i + D_i$, the sum of congestion and dilation of the player's path. We first show that there are instances of this game without Nash equilibria. We then turn to the related quality of
routing (QoR) games, which always have Nash equilibria. QoR games represent networks with a small number of service classes where paths in different classes do not interfere with each other (with
frequency or time division multiplexing). QoR games have $O(\log^4 n)$ price of anarchy when either $C^{\ast}$ or $D^{\ast}$ is a constant. Thus, Nash equilibria of QoR games give poly-log approximations
to hard optimization problems.
Index Terms:
Algorithmic game theory, congestion game, routing game, Nash equilibrium, price of anarchy.
Costas Busch, Rajgopal Kannan, Athanasios V. Vasilakos, "Approximating Congestion + Dilation in Networks via "Quality of Routing” Games," IEEE Transactions on Computers, vol. 61, no. 9, pp.
1270-1283, Sept. 2012, doi:10.1109/TC.2011.145
| {"url":"http://www.computer.org/csdl/trans/tc/2012/09/ttc2012091270-abs.html","timestamp":"2014-04-19T05:09:44Z","content_type":null,"content_length":"57886","record_id":"<urn:uuid:2675908c-4b4d-43a9-bfc8-f7b2163d7200>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Szemerédi’s theorem
- Ann. of Math
"... Abstract. We prove that there are arbitrarily long arithmetic progressions of primes. ..."
"... Abstract. A famous theorem of Szemerédi asserts that all subsets of the integers with positive upper density will contain arbitrarily long arithmetic progressions. There are many different
proofs of this deep theorem, but they are all based on a fundamental dichotomy between structure and randomness ..."
Cited by 19 (1 self)
Abstract. A famous theorem of Szemerédi asserts that all subsets of the integers with positive upper density will contain arbitrarily long arithmetic progressions. There are many different proofs of
this deep theorem, but they are all based on a fundamental dichotomy between structure and randomness, which in turn leads (roughly speaking) to a decomposition of any object into a structured
(low-complexity) component and a random (discorrelated) component. Important examples of these types of decompositions include the Furstenberg structure theorem and the Szemerédi regularity lemma.
One recent application of this dichotomy is the result of Green and Tao establishing that the prime numbers contain arbitrarily long arithmetic progressions (despite having density zero in the
integers). The power of this dichotomy is evidenced by the fact that the Green-Tao theorem requires surprisingly little technology from analytic number theory, relying instead almost exclusively on
manifestations of this dichotomy such as Szemerédi’s theorem. In this paper we survey various manifestations of this dichotomy in combinatorics, harmonic analysis, ergodic theory, and number theory.
As we hope to emphasize here, the underlying themes in these arguments are remarkably similar even though the contexts are radically different. 1.
, 2006
"... Abstract. A famous theorem of Szemerédi asserts that any set of integers of positive upper density will contain arbitrarily long arithmetic progressions. In its full generality, we know of four
types of arguments that can prove this theorem: the original combinatorial (and graph-theoretical) approac ..."
Cited by 12 (2 self)
Abstract. A famous theorem of Szemerédi asserts that any set of integers of positive upper density will contain arbitrarily long arithmetic progressions. In its full generality, we know of four types
of arguments that can prove this theorem: the original combinatorial (and graph-theoretical) approach of Szemerédi, the ergodic theory approach of Furstenberg, the Fourier-analytic approach of
Gowers, and the hypergraph approach of Nagle-Rödl-Schacht-Skokan and Gowers. In this lecture series we introduce the first, second and fourth approaches, though we will not delve into the full
details of any of them. One of the themes of these lectures is the strong similarity of ideas between these approaches, despite the fact that they initially seem rather different. 1.
"... Abstract. In this expository article, we describe the recent approach, motivated by ergodic theory, towards detecting arithmetic patterns in the primes, and in particular establishing in [26]
that the primes contain arbitrarily long arithmetic progressions. One of the driving philosophies is to iden ..."
Cited by 5 (3 self)
Abstract. In this expository article, we describe the recent approach, motivated by ergodic theory, towards detecting arithmetic patterns in the primes, and in particular establishing in [26] that
the primes contain arbitrarily long arithmetic progressions. One of the driving philosophies is to identify precisely what the obstructions could be that prevent the primes (or any other set) from
behaving “randomly”, and then either show that the obstructions do not actually occur, or else convert the obstructions into usable structural information on the primes. 1.
- Collectanea Mathematica (2006), Vol. Extra., 37-88 (Proceedings of the 7th International Conference on Harmonic Analysis and Partial Differential Equations, El Escorial
"... Abstract. We describe some of the machinery behind recent progress in establishing infinitely many arithmetic progressions of length k in various sets of integers, in particular in arbitrary
dense subsets of the integers, and in the primes. 1. ..."
Cited by 3 (0 self)
Abstract. We describe some of the machinery behind recent progress in establishing infinitely many arithmetic progressions of length k in various sets of integers, in particular in arbitrary dense
subsets of the integers, and in the primes. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=14510009","timestamp":"2014-04-20T12:25:28Z","content_type":null,"content_length":"22180","record_id":"<urn:uuid:f42f3ddc-de72-4822-bf6a-98bccfe66282>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics and Computation
It is well known that, both in constructive mathematics and in programming languages, types are secretly topological spaces and functions are secretly continuous. I have previously exploited this in
the posts Seemingly impossible functional programs and A Haskell monad for infinite search in finite time, using the language Haskell. In languages based on Martin-Löf type theory such as Agda, there
is a set of all types. This can be used to define functions $\mathbb{N} \to \mathrm{Set}$ that map numbers to types, functions $\mathrm{Set} \to \mathrm{Set}$ that map types to types, and so on.
Because $\mathrm{Set}$ itself is a type, a large type of small types, it must have a secret topology. What is it? There are a number of ways of approaching topology. The most popular one is via open
sets. For some spaces, one can instead use convergent sequences, and this approach is more convenient in our situation. It turns out that the topology of the universe $\mathrm{Set}$ is indiscrete:
every sequence of types converges to any type! I apply this to deduce that $\mathrm{Set}$ satisfies the conclusion of Rice’s Theorem: it has no non-trivial, extensional, decidable property.
To see how this works, check:
The Agda pages can be navigated by clicking at any (defined) symbol or word, in particular by clicking at the imported module names.
Running a classical proof with choice in Agda
As a preparation for my part of a joint tutorial Programs from proofs at MFPS 27 at the end of this month with Ulrich Berger, Monika Seisenberger, and Paulo Oliva, I’ve developed in Agda some things
we’ve been doing together.
• Berger-Oliva modified bar recursion, or alternatively,
for giving a proof term for classical countable choice, we prove the classical infinite pigeonhole principle in Agda: every infinite boolean sequence has a constant infinite subsequence, where the
existential quantification is classical (double negated).
As a corollary, we get the finite pigeonhole principle, using Friedman’s trick to make the existential quantifiers intuitionistic.
This we can run, and it runs fast enough. The point is to illustrate in Agda how we can get witnesses from classical proofs that use countable choice. The finite pigeonhole principle has a simple
constructive proof, of course, and hence this is really for illustration only.
The main Agda files are
These are Agda files converted to html so that you can navigate them by clicking at words to go to their definitions. A zip file with all Agda files is available. Not much more information is
available here.
The three little modules that implement the Berardi-Bezem-Coquand, Berger-Oliva and Escardo-Oliva functionals disable the termination checker, but no other module does. The type of these functionals
in Agda is the J-shift principle, which generalizes the double-negation shift.
How eff handles built-in effects
[UPDATE 2012-03-08: since this post was written eff has changed considerably. For updated information, please visit the eff page.]
From some of the responses we have been getting it looks like people think that the io effect in eff is like unsafePerformIO in Haskell, namely that it causes an effect but pretends to be pure. This
is not the case. Let me explain how eff handles built-in effects.
Programming with effects II: Introducing eff
[UPDATE 2012-03-08: since this post was written eff has changed considerably. For updated information, please visit the eff page.]
This is a second post about the programming language eff. We covered the theory behind it in a previous post. Now we turn to the programming language itself.
Please bear in mind that eff is an academic experiment. It is not meant to take over the world. Yet. We just wanted to show that the theoretical ideas about the algebraic nature of computational
effects can be put into practice. Eff has many superficial similarities with Haskell. This is no surprise because there is a precise connection between algebras and monads. The main advantage of eff
over Haskell is supposed to be the ease with which computational effects can be combined.
A Haskell monad for infinite search in finite time
I show how monads in Haskell can be used to structure infinite search algorithms, and indeed get them for free. This is a follow-up to my blog post Seemingly impossible functional programs. In the
two papers Infinite sets that admit fast exhaustive search (LICS07) and Exhaustible sets in higher-type computation (LMCS08), I discussed what kinds of infinite sets admit exhaustive search in finite
time, and how to systematically build such sets. Here I build them using monads, which makes the algorithms more transparent (and economic). | {"url":"http://math.andrej.com/category/guest-post/","timestamp":"2014-04-17T07:24:54Z","content_type":null,"content_length":"34273","record_id":"<urn:uuid:d4c3cdee-d094-4000-9e87-b9cc6593ca21>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Boole Summary
George Boole (1815–1864) was the founder of modern mathematical logic. Nevertheless, few of his ideas are currently accepted in mainstream logic in the forms originally proposed by him. His learned
and fertile mind conceived of several important hypotheses, the testing and modification of which changed the face of logic irrevocably. One of his most important hypotheses was that every
proposition can be expressed using an algebraic equation suitably reinterpreted: that logic and algebra share a common uninterpreted formal language and thus also that they have similar problem types
and similar methods.
The universal affirmative, or A proposition, "Every square is a rectangle" was expressed by x = xy, where x is the class of squares, y the class of rectangles, and xy the "Boolean or logical product"
of x with y, the class of common members of x and y. The universal negative, or E...
| {"url":"http://www.bookrags.com/research/boole-eoph/","timestamp":"2014-04-16T14:08:04Z","content_type":null,"content_length":"32101","record_id":"<urn:uuid:493d72e6-6683-4087-ae24-a0edd57f94c5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Sun City, AZ Calculus Tutor
Find a Sun City, AZ Calculus Tutor
...He was still a little slow, but started to come around, this I did back in 2003. I am currently spending the fall semester of 2013 as a Estrella Mountain Community College math tutor. I am
tutoring in algebra, trigonometry, pre-calculus, and calculus I-III.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
...I use a modified Socratic method of teaching, making the student familiar with basic concepts, and learning to solve specific problems the student has found to be undecipherable. The solutions
are very carefully detailed, and important concepts are particularly emphasized for attention. The stu...
30 Subjects: including calculus, chemistry, English, reading
...My fiancee and I moved to Surprise, AZ, recently for her Nursing degree, and I plan on finishing my degree at Arizona State University soon. If you are looking for any tutoring from high
school math and science up to Calculus and college level Physics I would be happy to help. I have very little official tutoring history, but I have had many unpaid tutoring opportunities.
11 Subjects: including calculus, physics, statistics, geometry
...I have a relaxed, but encouraging teaching style and I form strong relationships with my students. I am an Eagle Scout as well as a talented chess player and have coached chess for the past
two years. I have a passion for education and teaching and really enjoy working with students and helping them learn.
14 Subjects: including calculus, physics, geometry, algebra 1
...I am a graduate math student. I have tutored algebra since the 1970s. I love to work with high school students and college freshmen and sophomores.
20 Subjects: including calculus, English, reading, writing
| {"url":"http://www.purplemath.com/Sun_City_AZ_calculus_tutors.php","timestamp":"2014-04-18T19:11:09Z","content_type":null,"content_length":"23939","record_id":"<urn:uuid:3491136d-59d8-4879-938e-3b4271d92405>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
2010 Mathematics Game Update
Oct 20 2010
2010 Mathematics Game Update
[Photo by pfala.]
Thanks to John Cook’s article about factorials in the recent Mathematics and Multimedia Carnival, we’re adding new rules to the 2010 Mathematics Game.
Let’s play with multifactorials!
New Rules
(n!)! = a factorial of a factorial
n!! = a double factorial = the product of all integers from 1 to n that have the same parity (odd or even) as n
n!!! = a triple factorial = the product of all integers from 1 to n that are congruent to n mod 3
$n !^k$ = the product of all integers from 1 to n that are congruent to n mod k, where both n and k must be constructed from the year digits
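To experiment with the last rule, here is a minimal sketch of a k-fold multifactorial (illustrative Java; the method name and the use of long are my own choices, and it will overflow for large n):

    // k-fold multifactorial: multiplies n, n-k, n-2k, ... down to the smallest positive term.
    // multifactorial(n, 1) is n!, multifactorial(n, 2) is n!!, multifactorial(n, 3) is n!!!, and so on.
    static long multifactorial(long n, long k) {
        long product = 1;
        for (long i = n; i >= 1; i -= k) {
            product *= i;
        }
        return product;
    }

For example, multifactorial(7, 2) = 7 × 5 × 3 × 1 = 105 and multifactorial(9, 3) = 9 × 6 × 3 = 162.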
I did a search this morning and realized that John’s article barely scratched the surface of the topic. I think we need to draw a line somewhere. So we will allow multifactorials (with the above
limitation regarding n and k) but not subfactorials, superfactorials, hyperfactorials, alternating factorials, primorials, etc.
This week is hyper-busy, so I won’t have time to explore these new numbers for several days. But I look forward to hearing what you all come up with….
Don’t miss any of “Let’s Play Math!”: Subscribe in a reader, or get updates by Email.
Have more fun on Let’s Play Math! blog: | {"url":"http://letsplaymath.net/2010/10/20/2010-mathematics-game-update/","timestamp":"2014-04-18T21:31:38Z","content_type":null,"content_length":"65194","record_id":"<urn:uuid:97569bb9-c184-4f48-a7f9-fb9a06a69db5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jaxworks.com: Excel Proficiency Test
There is no scoring for this test. The test is designed to reveal proficiency as the degree of difficulty increases. Make notes of the areas that yield problems and go to our free training page to
scan through the indexes for help.
Instructors: You may want to make a scoring result sheet and assign levels of proficiency attained to determine additional training goals for the student.
Proficiency Test Files:
1. artsoft.xls
2. filter.xls
3. refrig.xls
1. All calculations must be done using Excel (formulas or functions), using no other method of calculation
(calculators, hand-calculations).
2. You may find it easier to print this document so you can refer to it while you complete the test.
3. Read each problem carefully and perform the tasks in each section.
Section 1 of 3
Open Excel, then open the file artsoft.xls from the list above and make sure you are on the Sales worksheet.
This file contains Sales, Payroll, and Company Savings Plan information for the ArtSoft Company.
1. Enter your full name in cell A1.
2. What are the Net Sales Totals for each of the following cities: Chicago, Decatur, and Champaign (put in cells E8-E10)?
3. What is the average Net Sales for Decatur during the first quarter (use an Excel function and put this figure in cell F9)?
4. Format the sales figures in B8:F10 in the Currency format, with 2 decimal places.
5. Construct a line graph comparing the sales for each city in the first quarter (i.e. construct a line graph of sales by month, with one line for each city) with the following features:
1. Place the graph in cells A15 to F25.
2. Make a legend to designate which line color refers to which city.
3. Enter “ArtSoft First Quarter Sales” as the Chart Title.
4. Enter “Month” as the x-axis label and “Sales (Millions)” as the y-axis label.
5. Change the scale of the y axis to make 30 the minimum value, instead of zero.
6. Move to the Payroll worksheet in this workbook. This sheet contains a payroll report for the ArtSoft employees. Insert a new employee in row 13. Enter the following information for this new employee:
1. In A13 enter Jones, Peggy.
2. In B13 enter 85
3. In C13 enter $23.50
4. In D13 enter 35
5. Gross Pay for Peggy Jones should appear in cell E13. If it doesn’t, copy the formula from cell E12 to cell E13.
7. Each employee is given a weekly aptitude test, a score of 80 or above on this test entitles them to a bonus (a score below 80 yields a bonus of 0). The test scores are in cells B6:B18. The bonus
amount is a percentage of their gross pay. This percentage is given in cell B22. In cell F6 of the Payroll worksheet, use an Excel IF function to compute the bonus amount for Robert Anderson, and
then copy the formula from F6 to F7:F18. Your formulas must refer to cell B22, and they must be formulas that were copied, not individually typed in for each cell.
8. In cell G6, compute the Taxable Income (Gross Pay plus Bonus) for Robert Anderson. Copy your formula from G6 to G7:G18.
9. Move to the Savings Plan worksheet. As an employee of ArtSoft, you are considering investing in a company savings plan. In the plan, the employee contributes $200 each month for the next 5 years.
The annual interest rate earned is 5%. Assume that payments are made at the end of the period.
1. In cell B5 of the Savings Plan worksheet, enter the monthly interest rate for the plan.
2. In cell B6, enter the total monthly contribution for the plan.
3. In cell B8, use the Excel FV function to determine the future value of this investment. Your function should refer to
cells B5 and B6.
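(For reference, and not as part of the test: the savings-plan figure can be sanity-checked outside Excel with the ordinary-annuity formula FV = PMT x ((1 + r)^n - 1) / r. The short Python sketch below only illustrates that formula; the variable names are ours.)

monthly_rate = 0.05 / 12   # the value that belongs in B5
payment = 200              # the value that belongs in B6
months = 5 * 12            # five years of end-of-month contributions

fv = payment * ((1 + monthly_rate) ** months - 1) / monthly_rate
print(round(fv, 2))        # roughly 13601 -- Excel's FV function should return the same magnitude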
Section 2 of 3
Open the file filter.xls from above and make sure you are on the Employee Record worksheet.
This file contains employee records for the company Widget World.
10. Filter the data from A4:F295 to find all of the people who have recently been promoted, do not belong to the union,
and make between $45,000 and $75,000 (inclusive). Copy the filtered data to the range of cells beginning
in A302:F302.
11. Sort the filtered data that you copied to A302:F302 in descending order by salary.
Section 3 of 3
Open refrig.xls from above.
This workbook contains sales information from the four sales regions of the Freezing Point appliance company: North, South, East and West.
12. Insert a new blank worksheet named “Total Sales” at the end of the workbook.
13. Copy the column and row headings from the East worksheet to the new Total Sales worksheet (cells A1:F2 and A3:A9). You may use the Fill Across Sheets command on the Edit menu to do this, or just
Copy and Paste.
14. In the range B3:F9 of the Total Sales worksheet, insert the formula that sums the sales in the corresponding cells of the North, South, West, and East worksheets. (Cell B3 of Total Sales should
sum cell B3 from each of the other sheets, North, South, West, and East.)
15. Format the numbers in the five sales worksheets with the Number format, reducing the number of decimal places to zero and adding the 1000 (,) separator.
16. Apply the Classic 2 AutoFormat to the sales data on each of the five sheets. | {"url":"http://www.jaxworks.com/test.htm","timestamp":"2014-04-17T03:49:11Z","content_type":null,"content_length":"18275","record_id":"<urn:uuid:19cac35d-229a-41ad-976f-4951dd351e06>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Internet TESL Journal
Teaching and Practicing Numbers
Natasa Intihar Klancar
When it comes to teaching new vocabulary, teachers are often puzzled as to how to present the new items in a way that will appeal to the pupils and make them learn effectively and at the same time
joyfully. Throughout the years, I have tried several different teaching styles, techniques and approaches and found out it is almost impossible to find the perfect way. But what it comes down to in
the end is simplicity and diversity.
In my opinion, there’s nothing more efficient than changing the activities, mixing and matching various strategies and thus keeping the learners interested and motivated.
Here’s a short list of various approaches to teaching numbers. I would like to point out that some of them have worked perfectly with all of the pupils while some proved of use in some classes and
failed in others. It is all a matter of practice and sooner or later the "correct" style will have shown itself and will enrich your teaching and give it some extra flavor we have all been looking for.
Listening to Numbers
Children have an extraordinary ability to memorize the words and phrases they hear and thus the importance of a proper input is even greater. The better the input, the better the output. Therefore
clear and accurate presentation of the words' pronunciation should be provided for the pupils. Either a teacher may read out the numbers aloud or another medium could be used, such as a CD recording.
Each number should be repeated a few times, allowing the kids to remember the correct pronunciation.
Repetition of Drills
After the input has been sufficient and each number has been presented to the pupils a few times, it is time for practice. At the beginning, it is probably a good idea to involve the whole class in
repeating the numbers. Children work as a group and by taking an active part in choral repetition and being one of the many involved in this process, pupils build up their confidence so eventually we
may choose pairs of students to say the numbers and then even individuals. In order to make this activity a bit more lively, we may try changing the pace and the volume. Pupils love that and follow along.
Numbers on Flashcards
Frequent repetition practice can become dull so actions should be taken to keep the class motivated and creative. After making sure the class is confident naming the numbers, bringing in flashcards
is always a nice option. Illuminated pictures of numbers can be put to use in a number of ways, the most simple being connected to a very simple question: “What number can you see?” The pupils first
answer together and later answer individually. I suggest starting with the correct order (e. g. numbers from 1 to 20), then mixing the flashcards a bit, slowly uncovering each “hidden” number and
making them guess which one is hidden. Every now and then we also play the which-one-is-missing game where I hide one of the flashcards and then the pupils have to guess which one it is. This game is
lots of fun and they never get tired of it. This is also a nice way to end a lesson.
The Fingers Game
Once acquainted with the numbers, we may try simple calculations. Step by step various techniques are uncovered and applied. Usually I start the fingers game by showing them, for example, three
fingers and they have to tell me how many they can see. After playing this for a while, it is their turn to show me the number of fingers I say. It is best for the pupils to close their eyes while
doing the exercise (thus preventing them from looking to their neighbor and cheating). Once I see that no mistakes are made, we start practicing simple calculations. With their eyes closed they
answer my “calculation questions” such as: “What’s 5 + 5?” Either the solution is shown with fingers or said out loud. The last part of the game is the one the pupils particularly like. Namely, they
become the teachers and make calculations. Their peers have to answer them correctly – either by showing fingers again or by saying out loud the solutions.
In My Bag I've Got...
This is a game of guessing and predicting and it involves so much more than the knowledge of numbers. We can include a wide array of new vocabulary items (things such as stationery, toys, and the
like). It can evolve into a memory game where pupils have to remember the items from the bag and then repeat them – either orally or in writing and/or drawing. After we have finished discovering the
contents of my bag, it is their turn to talk about their satchels’ contents, which is usually a very lively activity everybody enjoys. You may be surprised what some pupils bring to school … The
“I’ve got” and “(s)he’s got” structures can be practiced here as well.
How Many ... Can You See in the Classroom?
Applying the acquired knowledge onto real things from our surrounding is always a good starting point. Pupils love searching for answers and finding solutions to different questions. A nice way of
making them be careful and attentive to details is by asking them questions about the classroom, e. g.: “How many windows can you see?” Then individual pupils ask the questions, the point being to
ask as many different questions as possible and to use as many different numbers as possible. Afterwards the game can be continued with their eyes closed – this part is really lots of fun for the
class and they never get tired of it. As for their homework, I often give them the task of writing about the things in class, trying to find something for each number up to (for example) twenty.
Checking the homework next time we meet is always enjoyable, believe me.
Recording of Learner's Speech
Once the numbers have been practiced sufficiently, it might be nice to record the pupils’ pronunciation. Counting may be recorded in isolation or a song/a rhyme/a chant on numbers may be learnt by
heart and then performed in front of the class. The recording can be listened to immediately after the production has taken place and/or at the end of a school year as a kind of sum-up of what they
have learnt. Pupils’ active involvement during the recording process is music to every teacher’s ears and hearing themselves speak English is a never-to-be-forgotten experience. A teacher, though,
should try to find rhymes and songs that are easy enough for all the pupils to memorize for their first contact with a foreign language should be enjoyable, motivating and rewarding – which sets a
great basis for further development.
There are numerous ways of teaching and practicing numbers in a young learners’ classroom and the only trick is to find the right balance between the various approaches and techniques and thus make
each lesson a motivating, aspiring, creative, communicative and enjoyable experience. There are no rules as to how to mix and match the games and strategies, but be sure that the pupils’ reactions
and the extent of their active involvement will help you understand what they like, what they want and (last but not least) what they need in order to learn a foreign language effectively.
The Internet TESL Journal, Vol. XI, No. 8, August 2005 | {"url":"http://iteslj.org/Techniques/Klancar-Numbers.html","timestamp":"2014-04-18T15:41:20Z","content_type":null,"content_length":"8893","record_id":"<urn:uuid:d1d6775e-7c97-4d09-89cf-b3bf948eaeea>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate/Find the Rating of Transformer in kVA (Single Phase and Three Phase)?
Rating of Single Phase Transformer:
P = V x I.
Rating of a single phase transformer in kVA
kVA= (V x I) / 1000
Rating of a Three Phase Transformer:
P = √3 x V x I
Rating of a Three phase transformer in kVA
kVA = (√3 x V x I) / 1000
Did you notice something???? Anyway, whatever your answer is :) let me try to explain.
Here is the rating of Transformer is 100kVA.
But Primary Voltages or High Voltages (H.V) is 11000 V = 11kV.
And Primary Current on High Voltage side is 5.25 Amperes.
Also Secondary voltages or Low Voltages (L.V) is 415 Volts
And Secondary Current (Current on Low voltages side) is 139.1 Amperes.
In simple words,
Transformer rating in kVA = 100 kVA
Primary Voltages = 11000 = 11kV
Primary Current = 5.25 A
Secondary Voltages = 415V
Secondary Current = 139.1 Amperes.
Now calculate for the rating of transformer according to
P=V x I (Primary voltage x primary current)
P = 11000V x 5.25A = 57,750 VA = 57.75kVA
Or P = V x I (Secondary voltages x Secondary Current)
P= 415V x 139.1A = 57,726 VA = 57.72kVA
Once again we noticed that the rating of Transformer (on Nameplate) is 100kVA but according to calculation...it comes about 57kVA…
The difference arises because we used the single phase formula instead of the three phase formula.
Now try with this formula
P = √3 x V x I
P = √3 x V x I (Primary voltage x primary current)
P =√3 x 11000V x 5.25A = 1.732 x 11000V x 5.25A = 100,025 VA = 100kVA
Or P = √3 x V x I (Secondary voltages x Secondary Current)
P= √3 x 415V x 139.1A = 1.732 x 415V x 139.1A= 99,985 VA = 99.98kVA
Consider the (next) following example.
Voltage (Line to line) = 208 V.
Current (Line Current) = 139 A
Now rating of the three phase transformer
P = √3 x V x I
P = √3 x 208 x 139A = 1.732 x 208 x 139
P = 50077 VA = 50kVA
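If you prefer to check such numbers with a short script, here is an illustrative Python sketch (the function names are ours) that reproduces the figures above:

import math

def single_phase_kva(volts, amps):
    return volts * amps / 1000.0

def three_phase_kva(line_volts, line_amps):
    return math.sqrt(3) * line_volts * line_amps / 1000.0

print(three_phase_kva(11000, 5.25))   # about 100 kVA (H.V side of the 100 kVA example)
print(three_phase_kva(415, 139.1))    # about 100 kVA (L.V side)
print(single_phase_kva(11000, 5.25))  # about 57.8 -- the misleading single phase figure
print(three_phase_kva(208, 139))      # about 50.1 kVA (the second example)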
Note: This post has been made on the request of our Page fan Anil Vijay.
27 comments:
1. Why is transformer rating always written in KVA? We know that the unit of Power is Watt. then why don't we write transformer rating in Watts?
1. Dear Mukesh Khatri@
Read This Article
Why Transformer rating in kVA and not in kW?
2. The two types of transformer losses are core loss and ohmic loss.
The core loss depends on transformer voltage and ohmic loss depends on transformer current. As these losses depend on transformer voltage and current and are almost unaffected by the load pf, transformer rated output is expressed in VA or in kVA.
3. because the power factor of the load is not known so transformer rating is given in KVA
2. Thanks
3. dear in last consideration u shows that line to line volts is 208V then after calculation over value is came 50 KVA but actual value is 100 KVA have. plz Kindly explain it
1. Dear Waqas Ahmed
Here are two examples.
The first one is for 100kVA which shown in the image.
The Second one is for 50kVA (where line to line voltage is 208V)
4. Your article looks great!Thanks for you sharing. I love it.
High Voltage Transformers
5. thanks teacher
6. If we dont know the line current,how 2 calculate the rating?
1. It is already given on every transformer's nameplate
2. This system arrangement is very common, both at the utilization level as 480 Y/277 V and 208 Y/120 V, and also
on most utility distribution systems.
7. Anonymous20:33
is it right?
1. Yes it is 100% right.
When considering the apparent power we also considered the effect of inductor and capacitor which always will be minimized by using power factor correction technique.
Points to be remembered:-
->kVA is always greater than or equal to kW,
because kW accounts only for the resistive (real) part of the load.
-> KVA=KW if and only if the power factor is one so the load is pure resistive.
->Single Phase kVA = (V x I) / 1000
->Three Phase kVA = (√3 x V x I) / 1000
->Single Phase kW = (V x I x PF) / 1000
->Three Phase kW = (√3 x V x I x PF) / 1000
(1000 is the conversion factor for kilo (10^3) & square root of 3 = 1.732051)
8. sir could u explain DYn-11.....
1. Wait for the upcoming posts... Thanks
2. http://en.m.wikipedia.org/wiki/Vector_group
9. Hello Sir, Can u explain why in transformer,the power factor is not considering.? And why in others.?
1. Correct me if I'm wrong ' The power factor is determined by the load, not the transformer. The PF on the load side of the transformer will be the same as it is on the primary, except at very
light loads where the magnetizing current could have some effect. Having said that, the impedance in the primary circuit could effect the PF of the system. In other words, for a given load
(with reactance) the system PF depends on the impedance in the primary circuit.'
10. hi sir, how do the transformer designer calculate the turns, emf, loss, flux, area if we say we need1600KVA
11. hi sir, how does transformer designer calculate flux, emf, loss, turns if v just say the rating eg,1600Kva if u reply with a example calculation it will be comfortable
12. Anonymous18:18
Hi Sir
what size transformer do I need for 5A,230V Load?
13. dear 220kv/500kv transmission poll install in the forest.and we don't know the load and the production side.so tell the method of load and production identification?
14. Anonymous21:44
hi sir, i want to design a 500VA rating of transformer.but i dont understand how to select the v/g & c/n value? and no. of turn on each side of transformer?
pls kindely rply me
15. Anonymous22:40
Sir ,why we dont consider power factor for transfrmer but for generator or motor?
16. Anonymous14:33
Sir ,why we dont consider power factor for transfrmer but for generator or motor
17. Dear Sir,
I want to know about how to select step down transformer rating. 415/230V and VA rating. ..?
..I mean how va can be calculated. .based on the which factors. . | {"url":"http://www.electricaltechnology.org/2013/07/how-to-calculatefind-rating-of.html","timestamp":"2014-04-19T09:24:05Z","content_type":null,"content_length":"239228","record_id":"<urn:uuid:4c590ce9-6627-4207-9888-4b4213a909f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Has an x-intercept of 6 and a y-intercept of 3." Could someone write that equation in slope-intercept, point slope, or standard form?
y = 1/2x + 3
That's slope-intercept form
Thank you so much! Could you show me how you got that?
sure slope-intercept form is y = mx + b m is the slope b is the y-intercept with the x intercept and the y intercept I got a slope of 1/2 then I just inserted it in and got y = 1/2x + 3 since
there's a y-intercept of 3
do you want me to explain slope?
Sure, I'm not sure how to get the slope.
ok the slope is change of y / change of x so the x-intercept is (6,0) and the y-intercept is (0,3) so (3-0)/(0-6) = 3/(-6) = 1/-2
BTW change it to y = 1/-2x + 3 I forgot it in the original answer sorry
\[y = \frac{ 1 }{-2 }x + 3?\]
Thank you!
No problem
Solving Equations with One Variable
The process of finding all solution(s) to an equation is called "solving the equation." Solving an equation is like solving a crime—you need to make like Sherlock Holmes and collect your clues,
analyze them, and deduce their meanings. Fortunately, we're referring to the Sherlock Holmes from Sir Arthur Conan Doyle's stories, not the one in the feature films, so you probably won't need to
deliver any uppercuts.
To solve an equation, we transform it into simpler equivalent equations until we can easily read off the value(s) of the variable that make the equation true. Usually we'd like to get the variable
all by its lonesome on one side of the equation, with a number on the other side.
That's the ideal scenario, anyway, but sometimes it becomes tricky. Luckily, "tricky" is our middle name. Shmoop "Tricky" Aagaard. It's Norwegian.
An equation makes a claim that two quantities are equal. It's like saying, for example, that a dozen hockey players is twelve hockey players. Never mind what they're all doing on the ice at once. If
we put each quantity in one pan of a balance scale, the scale will balance. | {"url":"http://www.shmoop.com/equations-inequalities/solving-equations-one-variable-help.html","timestamp":"2014-04-19T14:48:05Z","content_type":null,"content_length":"33835","record_id":"<urn:uuid:2dc9eeeb-f850-454f-8195-dd9c4ecd1cf8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
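Here's a tiny example of what we mean (ours, not an official Shmoop problem): start with 2x + 3 = 11. Subtract 3 from both sides to get the equivalent equation 2x = 8, then divide both sides by 2 to get x = 4. Each step keeps the scale balanced, and the last equation lets us read the solution right off.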
Proposition 25
If two numbers are relatively prime, then the product of one of them with itself is relatively prime to the remaining one.
Let A and B be two numbers relatively prime, and let A multiplied by itself make C.
I say that B and C are relatively prime.
Make D equal to A.
Since A and B are relatively prime, and A equals D, therefore D and B are also relatively prime. Therefore each of the two numbers D and A is relatively prime to B. Therefore the product of D and A
is also relatively prime to B.
But the number which is the product of D and A is C. Therefore C and B are relatively prime.
Therefore, if two numbers are relatively prime, then the product of one of them with itself is relatively prime to the remaining one.
This proposition says that if a is relatively prime to b, then a^2 is also relatively prime to b.
It’s a special case of the previous proposition and hardly needs its own enunciation. It is used in VII.27 and IX.15. | {"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookVII/propVII25.html","timestamp":"2014-04-19T01:49:07Z","content_type":null,"content_length":"3612","record_id":"<urn:uuid:1df32b3e-94ea-4e3b-b1b7-cf3044672488>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
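A quick modern spot-check of the proposition in Python (an editorial illustration, not part of Euclid's text or the guide):

from math import gcd

# If gcd(a, b) = 1, then gcd(a*a, b) = 1 as well (Proposition VII.25 in modern terms).
for a, b in [(2, 9), (4, 15), (10, 21), (35, 18)]:
    assert gcd(a, b) == 1
    assert gcd(a * a, b) == 1
print("all pairs check out")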
x^3+y^3+3axy=0 dy/dx=?
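No worked answer appears in the thread, so here is one added editorially, treating a as a constant and differentiating both sides with respect to x: \[3x^2 + 3y^2\frac{dy}{dx} + 3a\left(y + x\frac{dy}{dx}\right) = 0\] Collecting the dy/dx terms gives \[\frac{dy}{dx}\left(y^2 + ax\right) = -\left(x^2 + ay\right)\] so \[\frac{dy}{dx} = -\frac{x^2 + ay}{y^2 + ax},\] provided y^2 + ax is not zero.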
Adding Velocities
HPS 0410 Einstein for Everyone Spring 2010
Assignment 2: Adding Velocities Einstein's Way
For submission Tues. Jan. 19
What do I do if my recitation is on Monday--Martin Luther King Day?
1. Two spaceships pass a planet, moving in opposite directions. A planet observer judges each to be moving at 100,000 miles per second. An observer on one of the spaceships measures the speed of the
other spaceship.
(a) According to classical physics, what speed will that spaceship observer measure for the other spaceship? Is this speed faster than light?
(b) According to relativity theory, what speed will that spaceship observer measure for the other spaceship? Is this speed faster than light?
2. The planet observer of question 1. above watches the first spaceship observer measure the speed of the second spaceship by means of a procedure that uses rods and clocks. Would the planet observer
judge that measuring procedure to be a fair one that gives the correct result?
For discussion in the recitation.
A. Imagine that you have a gun that can fire a particle at 100,000 miles per second. You are in a spaceship moving at 100,000 miles per second with respect to the earth. You point the gun in the
direction of your motion and fire. Would an earthbound observer judge the particle to travel at 200,000=100,000+100,000 miles per second? Show that the earthbound observer could not since that would
violate the principle of relativity, when that principle is combined with the light postulate. How rapidly would you (the spaceship observer) judge the particle to be moving?
B. The arguments we have investigated show that relativity theory prohibits us accelerating an object past the speed of light. Do any of them rule out objects that have always been traveling faster
than light (or, possibly, were created initially already moving faster than light)? | {"url":"http://www.pitt.edu/~jdnorton/teaching/HPS_0410/2010_Spring/assignments/02_P_of_R_II/index.html","timestamp":"2014-04-16T20:38:48Z","content_type":null,"content_length":"3893","record_id":"<urn:uuid:4479bdf6-1a72-4808-a8b1-f387707cb751>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
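(An editorial note for checking arithmetic: the relativistic composition of velocities follows w = (u + v) / (1 + uv/c^2). The short Python sketch below evaluates it for the speeds in question 1, taking c as roughly 186,282 miles per second.)

c = 186282.0            # speed of light in miles per second (approximate)
u = v = 100000.0        # the speeds quoted in question 1

classical = u + v
relativistic = (u + v) / (1 + u * v / c**2)

print(classical)            # 200000.0 -- faster than light
print(round(relativistic))  # about 155258 -- still slower than light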
The density of an object or material is its mass per unit volume. The symbol used for denoting density is the lower case Greek letter rho (ρ). The simple formula for calculating density is:
ρ = m / V
where ρ (rho) is density, m is mass and V is volume.
In layman's terms density is often described as the weight per unit of volume; however, this is technically incorrect, since weight per unit volume is a different quantity, called specific weight. Specific weight is what that everyday description actually refers to, whereas density is strictly mass per unit volume.
How do you measure the components needed to calculate density? As there are two components which are mass and volume you must know both variables to accurately calculate density. Measuring mass can
be done by using a set of scales or a balance. Measuring volume is harder and requires either geometric measuring of the object or via displacement of a fluid for a solid object. When you need to
measure the volume of a fluid or gas you should use a hydrometer (used to measure the specific gravity of a fluid) or dasymeter (used for measuring the buoyant effect of gasses).
An example of a density calculation is as follows. Imagine that you have a large brick of compressed coffee measuring 20cm x 20cm x 5cm that weighs 852 grams. What would be the density of your coffee brick? With the two variables you can then go about calculating density via the density formula above.
The first step is calculating the volume, for which the formula is length x width x thickness. In our example those figures are 20*20*5, which equals 2,000 cm^3. The second step is calculating density, which as we know is mass / volume. In our example we know that the mass of our coffee brick is 852 grams and the volume is 2,000 cm^3: 852 g / 2,000 cm^3 = 0.426 g/cm^3. The density of the coffee brick is 0.426 g/cm^3.
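The same calculation is easy to script. Here is a small illustrative Python sketch (the function name is ours):

def density(mass_g, length_cm, width_cm, thickness_cm):
    volume = length_cm * width_cm * thickness_cm   # volume in cm^3
    return mass_g / volume                         # density in g/cm^3

print(density(852, 20, 20, 5))   # 0.426, matching the coffee brick example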
Computer Art and the Theory of Computation: Chapter 1: The Blooming
What I’d like to do in a series of posts is explore the relevance of the theory of computation to computer art. Both of those terms, however, need a little unpacking/explanation before talking about
their relations.
Let’s start with computer art. Dominic Lopes, in A Philosophy of Computer Art, makes a useful distinction between digital art and computer art. Digital art, according to Lopes, can refer to just
about any art that is or was digitized. Such as scanned paintings, online fiction, digital art videos, or digital audio recordings. Digital art is not a single form of art, just as fiction and
painting are different forms of art. To call something digital art is merely to say that the art’s representation is or was, at some point, digital. It doesn’t imply that computers are necessary or
even desirable to view and appreciate the work.
Whereas the term computer art is much better to describe art in which the computer is crucial as medium. What does he mean by “medium”? He says “a technology is an artistic medium for a work just in
case its use in the display or making of the work is relevant to its appreciation" (p. 15). We don't need to see most paintings, texts, videos or audio recordings on computers to display or
appreciate them. The art’s being digital is irrelevant to most digital art. Whereas, in computer art, the art’s being digital is crucial to its production, display and appreciation.
Lopes also argues that whereas digital art is simply not a single form of art, computer art should be thought of as a new form of art. He thinks of a form of art as being a kind of art with shared
properties such that those properties are important to the art’s appreciation. He defines interactivity as being such that the user’s actions change the display of the work itself. So far so good.
But he identifies the crucial property that works of computer art share as being interactivity.
I think all but one of the above ideas by Lopes are quite useful. The problem is that there are non-interactive works of computer art. For instance, generative computer art is often not interactive.
It often is different each time you view it, because it’s generated at the time of viewing, but sometimes it requires no interaction at all. Such work should be classified as computer art. The
computer is crucial to its production, display, and appreciation.
Lopes’s book is quite useful in a number of ways. It’s the first book by a professional philosopher toward a philosophy of computer art. It shows us how a philosophy of computer art might look and
proceed. But it is a philosophy, in the end, of interactive computer art. Which is a more limited thing than a philosophy that can also accommodate non-interactive computer art.
Now, why did a professional philosopher writing a philosophy of computer art fail to account for non-interactive computer art in his philosophy? Well, to write a more comprehensive philosophy
requires an appreciation of the importance of programmability. For it is programmability, not interactivity, that distinguishes computer art from other arts. And it is at this point that we begin to
glimpse that some understanding of the theory of computation might be relevant to an understanding and appreciation of computer art.
I’ll return to this point in another chapter. I’ve given you some idea of what I mean by computer art. Now let’s have a look at the theory of computation.
It was inaugurated in the work of the mathematician and logician Alan Turing in 1936 with his world-shaking paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem“.
This is one of the great intellectual documents of the twentieth century. In this paper, Turing invented the modern computer. He introduced us to what we now call the Turing machine, which is an
abstract idea, a mathematization, an imaginary object that has all the theoretical capabilities of modern computers. As we know, lots of things have changed since then in how we think about
computers. However, the Turing machine has not changed significantly and it is still the bedrock of how we think about computers. It is still crucial to our ability to think about the limits of the
capabilities of computers.
And that is precisely what the theory of computation addresses: the limits of the capabilities of computers. Not so much today’s limits, but theoretical limits. The theory of computation shows us
what is theoretically possible with computers and what is theoretically impossible. “Theoretically impossible” does not mean “probably won’t happen”. It means “will absolutely never ever (not ever)
happen as long as the premises of the theory are true”.
Since we are dealing with matters of art, let’s first get a sense of the poetry of the theory of computation. It’s a little-appreciated fact that Turing devised the Turing machine in order to show
that there are some things it will never be capable of doing. That is, he devised the modern computer not to usher in the age of computers and the immense capabilities of computers, but to show that
there are some things that no computer will ever do. That’s beautiful. The theory of computation arises not so much from attempts to create behemoths of computation as to understand the theoretical
limits of the capabilities of computing devices.
If you wish to prove that there are some things that no computer will ever do, or you suspect that such things do exist, as did Turing—and he had good reason for this suspicion because of the earlier
work of Kurt Gödel—then how would you go about proving it? One way would be to come up with a computer that can do anything any conceivable computer can do, and then show that there are things it
can’t possibly do. That’s precisely how Turing did it.
Why was he more interested in showing that there are some things no computer will ever do than in inventing the theoretical modern computer? Well, in his paper, he solves one of the most famous
mathematical problems of his day. That was closer to the intellectual focus of his activities as a mathematician and logician, which is what he was. The famous problem he solved was called the
Entscheidungsproblem, or the decision problem. Essentially, the problem, posed by David Hilbert in 1928, was to demonstrate the existence or non-existence of an algorithm that would decide the truth
or falsity of any mathematical/logical proposition. More specifically, the problem was to demonstrate the existence or non-existence of an algorithm which, given any formal system and any proposition
in that formal system, determines if the proposition is true or false. Turing showed that no such algorithm can exist.
At the time, one of the pressing problems of the day was basically whether mathematics was over and done with. If it was theoretically possible to build a computer that could decide the truth or
falsity of any mathematical/logical proposition, then mathematics was finished as a serious intellectual enterprise. Just build the machines and let them do the work. The only possibly serious
intellectual work left in mathematics would be meta-mathematical.
However, Turing showed that such an algorithm simply cannot exist. This result was congruent with Kurt Gödel's earlier 1930 work which demonstrated the existence in any sufficiently powerful formal system of true but unprovable propositions, so-called undecidable propositions. In other words, Gödel showed that there are always going to be propositions that are true but unprovable. Consequently, after Gödel's work, it seemed likely that no algorithm could possibly exist which could decide whether any/every well-formed proposition was true or false.
We glimpse the poetics of the theory of computation in noting that its historical antecedent was this work by Gödel on unprovable truths and the necessary incompleteness of knowledge. The theory of computation needed the work of Gödel to exist before it could bloom into the world. And let us be clear about the nature of the unprovable truths adduced by Gödel. They are not garden-variety axioms.
Garden-variety axioms, such as the parallel postulate in geometry, are independent. That is, we are free to assume the proposition itself or some form of the negation of the proposition. The so
called undecidable propositions adduced by Gödel are true propositions. We are not at liberty to assume their negation as we are in the case of independent axioms. They are (necessarily) true but
unprovably true. And if we then throw in such a proposition as an axiom, there will necessarily always be more of them. Not only do they preclude the possibility of being able to prove every true
proposition, since they are unprovable, but more of them cannot be avoided, regardless of how many of them we throw into the system as axioms.
Sufficiently rich and interesting formal systems are necessarily, then, incomplete, in the sense that they are by their nature incapable of ever being able to determine the truth or falsity of all
propositions that they can express.
So the theory of computation begins just after humanity becomes capable of accommodating a deep notion of unprovable truth. Of course, unprovable truth is by no means a new concept! We have sensed for millennia that there are unprovable truths. But we have only recently been able to accommodate them in sensible epistemologies and mathematical analysis. Unprovable truths are fundamental to poetry
and art, also. We know all too well that reason has its limits.
The more we know about ourselves, the more we come to acknowledge and understand our own limitations. It’s really only when we can acknowledge and understand our own limitations that we can begin to
do something about them. The first design for a so-called Turing-complete computer—that is, a computer that has all the theoretical capabilities of a Turing machine—pre-dated Turing by a hundred
years: Charles Babbage's Analytical Engine. But Babbage was never able to create his computer. It was not only a matter of not being able to manufacture the parts in the way he wanted, but he lacked the
theory of computation that Turing created. A great theory goes a long way. The Turing machine, as we shall see, is simplicity itself. Children can understand it. We think of computers as
intimidatingly complex machines, but their operation becomes much more understandable as Turing machines.
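To make the "simplicity itself" claim concrete, here is a minimal illustrative sketch in Python (mine, not Turing's own formulation): a Turing machine whose handful of rules adds one to a binary number written on the tape.

# (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", " "): (" ", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "done"),
    ("carry", " "): ("1", 0, "done"),
}

def run(tape_string):
    tape = dict(enumerate(tape_string))   # the unbounded tape, stored sparsely
    head, state = 0, "right"
    while state != "done":
        write, move, state = RULES[(state, tape.get(head, " "))]
        tape[head] = write
        head += move
    return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1)).strip()

print(run("1011"))   # prints 1100, i.e. 11 + 1 = 12 in binary

That is the whole machine: a finite table of rules, a read/write head, and a tape.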
We could have had computers without the theory of computation, but we wouldn’t have understood them as deeply as we do, wouldn’t have any sense of their theoretical limitations—concerning both what
they can and can’t do. And we wouldn’t have been able to develop the technology as we have, because we simply wouldn’t have understood computers as deeply as we do now, wouldn’t have been able to
think about them as productively as we can with the aide of a comprehensive theory. Try to put a man on the moon without Newtonian mechanics. It might be doable, but who would want to be on that
ship? Try to develop the age of computers without an elegant theory to understand them with? No thank you. That sounds more like the age of the perpetual blue screen.
Gödel’s incompleteness theorems are not logically prior to Turing’s work. In other words, Turing’s work does not logically depend on Gödel’s work—in fact, incompleteness can be deduced from Turing’s
work, as computer scientists sometimes point out. But it was Gödel’s work that inspired Turing’s work. Not only as already noted, but even in its use of Georg Cantor’s diagonal argument. Gödel’s work
was the historical antecedent of Turing's work. Gödel's work established a world view that had the requisite epistemological complexity for Turing to launch a theory of computation whose
epistemological capabilities may well encompass thought itself.
The theory of computation does not begin as a manifesto declaring the great capabilities of computers, unlike the beginnings of various art movements. Instead, it begins by establishing that
computers cannot and simply will never ever solve certain problems. That is the main news of the manifesto; it means that mathematics is not over, which had been a legitimate issue for several years.
Were computers going to take over mathematics, basically? Well, no. That was very interesting news. You don’t often get such news in the form of mathematical proofs. News that stays news. The other
news in the manifesto is almost incidental: oh, by the way, here is a mathematization of all conceivable machines—here is the universal machine, the machine that can compute anything that any
conceivable machine can compute.
His famous paper laid the foundation for the theory of computation. He put the idea of the computer and the algorithm in profound relation with Gödel's epistemologically significant work. He laid the philosophical foundation for the theory of computation, establishing that it does indeed have important limitations, epistemologically, and he also provided us with an extraordinarily robust
mathematization of the computer in the form of the Turing Machine.
Turing’s paper is significant in the history of mathematics. We see now that the development of the computer and the theory of computation occurs after several hundred years of work on the “crisis of
foundations" in mathematics and represents a significant harvest or bounty from that research. At least since the eighteenth century, when Bishop Berkeley famously likened Newton's treatment of some types of numbers in calculus to "the ghosts of departed quantities", and especially since the birth pains in the nineteenth century of non-Euclidean geometry, mathematicians had understood that the
foundations of mathematics were vaguely informal, possibly contradictory, and needed to be formalized in order to provide philosophical and logical justification and logical guidelines in the
development of mathematics from first principles.
There’s a straight line from that work to the work of Frege, Cantor, and Gödel. And thence to Turing. The theory of computation, it turns out, needed all that work to have been done before it could
bloom. It needed the philosophical perspective and the tools of symbolic logic afforded by that work. Because the theory of computation is not simply a theory of widgets and do-dad machines. At least
since the time of Leibniz in the seventeenth century, the quest to develop computing devices has been understood as a quest to develop aids to reason and, more generally, the processes of thought.
The Turing Machine and the theory of computation provide us with machines that operate, very likely, at the atomic level of thought and mind. Their development comes after centuries of work on the
philosophical foundations of mathematics and logic. Not to say that it’s flawless. After all, it’s necessarily incomplete and perhaps only relatively consistent, rather than absolutely consistent.
But it’s good enough to give us the theory of computation and a new age of computers that begins with a fascinatingly humble but far-reaching paper entitled “On Computable Numbers, with an
Application to the Entscheidungsproblem” by Alan Turing.
It changes our ideas about who and what we are. Computer art, without it, would be utterly different. Just as would the world in so many ways.
Here are links to the blog posts, so far, in Computer Art and the Theory of Computation:
Chapter 1: The Blooming
Chapter 2: Greenberg, Modernism, Computation and Computer Art
Chapter 3: Programmability
Chapter X: Evolution and the Universal Machine
2 Responses to “Computer Art and the Theory of Computation: Chapter 1: The Blooming”
• Fascinating articles, Jim. I look forward to seeing where you proceed with the relevance of theory of computation to computer art. I’ve been thinking about interactivity recently, so let me
consider your following statement:
The problem is that there are non-interactive works of computer art. For instance, generative computer art is often not interactive. It may be different each time you view it, because it’s
generated at the time of viewing, but sometimes it requires no interaction at all. Such work should be classified as computer art. The computer is crucial to its production and its display.
I think I agree with you, but let me play devil’s advocate for a bit. Let’s think of some cases:
1) a) a writer uses a word processor and spell-checker to write a novel. b) A couple of months later it is published as a paperback.
2) a) an artist uses commercially-available software and a wacom tablet to draw a comic book. (as in: The DC Comics Guide to Digitally Drawing Comics, Freddie E Williams II) b) The comic book is
printed, and is indistinguishable from conventionally-drawn comics.
3) a) an artist uses a fractal art generator to create an image that they would not have been able to create otherwise. b) The image is printed out and exhibited.
4) a) an art student uses a digital camera to take a number of pictures for a photography class. b) a digital picture display (which cycles through the images sequentially) is used to exhibit
5) a) an artist uses Processing to create a program that repeatedly draws sequences of complex images from primitives (circles, polygons, etc.) b) The program is exhibited on a computer screen,
and the shapes and colors of the images vary (using a sensor) based on how close the viewer is.
6) a) a poet programs a scriptable tool, like JanusNode, that is able to generate poetry. b) another poet writes scripts for JanusNode to generate a specific type of poetry. c) some JanusNode
poems are printed out and exhibited
Instead of getting tied down by terms, I’d like to cluster some of these cases.
First cluster: 1a, 2a, 4a, 5a.
The writer with the spell-checker, the digital comic-book artist, the digital photographer, and the shape animator, are all using computers to make their task easier.
The task could be done without the computer (although there might be more spelling mistakes, the drawings might take longer, and even a photorealistic painting wouldn’t look completely like a
photograph, the shape animations might require a movie camera.)
The artwork is “born digital”; using computer inputs, computational algorithms, and stored on a computer.
There seems to be an ordering in terms of how vital the computer is to the task. Spell-checking is purely a time-saving extra, programs for drawing make the job much easier (i.e. to edit if you
get something wrong), animation is drudgery without a computer, and a child can do with digital photographs what an illustrator would take months to accomplish.
Second cluster: 3a, 6a.
The fractal artist and the poetry generator programmer are both using computers to make the task possible, rather than easier.
This is possibly on a continuum with the first cluster, since theoretically an artist could draw fractal art, or create specifications to author oulipo-like poetry generation algorithms, but
without a computer this is not intuitive.
The fractal image and the poetry generator are both “born digital” as in the first cluster.
Third cluster: 1b, 2b, 3b, 6c
The artwork, which was created with the help of a computer, is instantiated in non-computational form.
The book, comic book, and poem are generally indistinguishable from versions that were not created with the help of computers. The fractal image is more of an exception, since often they are
obviously computer-generated.
Fourth cluster: 4b, 5b, 6b
A human interacts with the digital picture, shape animation, or poetry generator.
Ordering is by increasing levels of interaction (Digital picture: passive observation. Shape animation: simple proxemic input. Poetry generator: engagement with programmability of computational
I’m not going to try to say what is and what isn’t computer art, and I’m not familiar enough with Lopes’ ideas to critique them. But in my opinion, any definitive analysis of computer art should
consider the following:
how the artist uses the computer is important: i.e. using a spell-checker vs developing a tool for poetry generation scripts (clusters 1&2). Part of this is the type of computational power used;
this may be what you’re getting at in discussing programmability. Part of it is what we have come to expect from a computer; to a society that has never seen computers, novels written with a word
processor may indeed be computer art.
cluster 3 (non-computational media that transmit artwork authored with computers) suggests that even if the authoring process is highly computational, the audience experience need not be. In this
case, all computational work has finished by the time the audience reaches the product; any further processing occurs solely in the mind of the audience member.
even computationally-mediated audience experiences (cluster 4) represent a continuum of experiences with computation. In some cases it is passively observing the results of simple animations, in
other cases reactions to simple inputs, in other cases engagement with fairly sophisticated scripting languages.
In other words interactivity occurs in at least 3 different ways. It may be OK to have an umbrella term for all these, but their distinctions are important.
• Thanks, Edde. Instead of saying “Such work should be classified as computer art. The computer is crucial to its production and its display”, I should have added that the computer is also crucial
to the appreciation of the work.
The idea I’m trying to capture in computer art is art in which the computer is crucial as medium. And that’s art in which the computer is crucial to the appreciation of the work and its
production. So
1 is not computer art, 2 is not, 3 is, 4 is not, 5 and 6 are. | {"url":"http://netartery.vispo.com/?p=1174","timestamp":"2014-04-17T21:34:07Z","content_type":null,"content_length":"62410","record_id":"<urn:uuid:ec74a6c1-a34d-4b48-9ffe-11709a2bd790>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Schrödinger's virus and decoherence
The physics arXiv blog, Nature, Ethiopia, Softpedia, and many people on the Facebook were thrilled by a new preprint about the preparation of Schrödinger's virus, a small version of Schrödinger's cat.
The preprint is called
Towards quantum superposition of living organisms (click)
and it was written by Oriol Romero-Isart, Mathieu L. Juan, Romain Quidant, and J. Ignacio Cirac. They wrote down some basic stuff about the theory and a pretty clear recipe how to cool down the virus
and how to manipulate with it (imagine a discussion of the usual "atomic physics" devices with microcavities, lasers, ground states, and excited states of a virus, and a purely technical selection of
the most appropriate virus species).
It is easy to understand the excitement of many people. The picture is pretty and the idea is captivating. People often think that the living objects should be different than the "dull" objects
studied by physics. People often think that living objects - and viruses may or may not be included in this category - shouldn't ever be described by superpositions of well-known "privileged" wave
functions. Except that they can be and it is sometimes necessary. Quantum mechanics can be baffling but it's true.
A rational viewpoint
Let me admit, I don't share this particular excitement because it's damn clear what will be observed in any experiment of this kind. It's been clear since the 1920s - and all the "marginal" issues
have been clarified in the 1980s. People often say that the interpretation of quantum mechanics is confusing and they expect similar experiments to lead to surprising or uncertain results. However,
they don't.
As long as decoherence is as small for the virus as for any other microscopic dipole, it will behave as a quantum dipole. It will interfere and do all the things you expect from the small things.
Once it becomes large, it will behave as a cat. ;-)
See also: Entanglement, Bell's inequalities, interpretation of quantum mechanics, decoherence (lecture 26)
The Copenhagen school may have said some confusing things about the "collapse" of the wave functions and about "consciousness" but they surely knew how to predict experiments where both quantum
mechanics and classical physics played a role. They realized that
1. only probabilities may be predicted in this quantum world: the usual QM calculations are helpful
2. "small" objects behave according to the quantum logic while "large" objects behave according to the classical logic; they interact according to the "measurement theory"
Nothing has changed about the point (1) whatsoever. All attempts to deny or weaken (1) have been ruled out. This "probabilistic" rule seems to be a completely fundamental and universal feature of the
real world. There can be no "hidden variables" because the "probabilistic character" of the predictions is not emergent but fundamental.
Limitations of the Copenhagen interpretation
On the other hand, the Copenhagen rule (2) was phenomenological in character. It allowed them to predict what's going on when microscopic and macroscopic objects interact. However, it didn't allow
them to explain several related questions, namely
1. Where is the boundary between classical objects and quantum objects located?
2. What's exactly happening near this boundary, in the marginal situations?
3. How can this boundary be derived?
The question (1) has led to all kinds of philosophical speculations and quasi-religious delusions.
The ability to "reduce" the wave function and to "perceive" the results has been attributed to mammals (mammal racism), humans (anthropocentric racism), the white people (conventional racism), the
author of the sentence and no one else (solipsism), macroscopic objects above a micron (approximate truth but not quite exact and universal), and to many other categories of "objects" and "subjects".
While it was known that the cat behaved classically, the question (2) looked pressing. People wanted to know what happens when objects in one category "cross the boundary" and behave according to the
other set of rules. They thought that the co-existence of the "two philosophies" - behind quantum and classical objects - was problematic. That was why Schrödinger invented his cat.
People had no idea how to calculate the answer to the questions (2) and (3), i.e. how to derive the location of the quantum-classical boundary and the precise behavior near the boundary.
Decoherence as the cure for these Danish imperfections
Your humble correspondent prefers the "Consistent Histories" (associated with the names of Gell-Mann and Hartle; Omnes; Griffiths; and others) as the most concise, state-of-the-art framework that
tells you which questions are legitimate in quantum mechanics; and how the answers to these questions - i.e. the probabilities of different histories - should be calculated.
But we should realize that the Consistent Histories are just a formalism. The actual physics needed to overcome the difficulties of the Copenhagen interpretation is called "quantum decoherence" or "decoherence" for short, pioneered primarily by Wojciech Zurek.
Decoherence is a universal, omnipresent process that destroys coherence, i.e. the information about the relative phases of distinct quantum complex amplitudes. I will discuss it in the rest of the
article. Decoherence is important because:
Decoherence is the only process in Nature that leads to the transition from the quantum rules to the ordinary Joe's familiar classical rules of physics.
If you ask whether a system is allowed to be found in superpositions of well-known states, whether it has the right to "perceive" its own state, whether it exhibits "consciousness", and so on,
decoherence is the only physical consideration that determines the boundary between the quantum and classical worlds.
Of course, objects and subjects may be more or less able to manipulate the information, to remember it, and so on, but all objects or collections of degrees of freedom that are strongly influenced by decoherence have the same qualitative behavior as humans when it comes to the ability to "reduce" wave functions.
How decoherence works
Imagine two quantum states of a virus, |ψ¹» and |ψ²». And imagine that the Hamiltonian destines them to emit and/or reflect a photon (which will be the representative of any kind of environmental
degrees of freedom) in such a way that the corresponding states of the photon created by |ψ¹» and |ψ²» are orthogonal to each other.
That will usually happen if |ψ¹» and |ψ²» are chosen to be "natural" states that have well-defined "local properties" or other observables that can be "seen" by the photon. The "locality" properties
are determined by the Hamiltonian. That's why the Hamiltonian also contains the information about the "privileged basis" of the Schrödinger's virus Hilbert space.
At any rate, if the initial state is a superposition
|ψ» = a|ψ¹» + b|ψ²»
with complex amplitudes "a,b" (and you may want to press "ctrl/+" if the superscripts are too small), it will evolve into
|ψ[final]» = a |ψ¹» |photon¹» + b |ψ²» |photon²».
Note that this simple "tensor squaring" of the terms only works in one privileged basis of states. For example, it is not true that the initial state
(|ψ¹» + |ψ²»)
evolves into a simple "square" of it,
(|ψ¹» + |ψ²») (|photon¹» + |photon²»).
It can't! Expand the product above to see that it contains previously unwanted "mixed terms". The "squaring rule" is not true for all states because such an evolution would violate the quantum xerox
no-go theorem which is a simple consequence of the evolution operator's being linear rather than quadratic.
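To see the unwanted "mixed terms" explicitly, just expand the hypothetical "squared" state:

(|ψ¹» + |ψ²») (|photon¹» + |photon²») = |ψ¹»|photon¹» + |ψ¹»|photon²» + |ψ²»|photon¹» + |ψ²»|photon²».

Linearity, together with the assumed evolution of |ψ¹» and |ψ²» separately, only produces the first and the last term, i.e. |ψ¹»|photon¹» + |ψ²»|photon²»; the two mixed terms in the middle can never appear, which is the quantum xerox no-go theorem in this simple setting.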
Nevertheless, it is easy to see that this evolution is what happens for a properly chosen basis. The following steps are clear. The photon quickly disappears somewhere in the environment. It becomes impossible (or at least hopelessly impractical) to follow its state - or the state of all other particles it influences in the future. Because we only study the virus (or we only want to study the virus), we must trace over the photonic part of the Hilbert space.
The usual rules to trace over give us the final density matrix, after the photon was emitted:
|ψ[final]» = a |ψ¹» |photon¹» + b |ψ²» |photon²»,
ρ = |a|^2 |ψ¹» «ψ¹| + |b|^2 |ψ²» «ψ²|.
The Greek letters starting the two lines above are pronounced "psi" and "rho". Note that the information about the relative phase of "a,b" has been forgotten. The relative phases have been forgotten because they would only survive in the off-diagonal elements of the density matrix. But all the off-diagonal elements were abruptly set to zero because of the orthogonality of the photonic states. Only the absolute values of "a,b" are remembered. The latter may be interpreted as "classical probabilities".
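For readers who want to see the arithmetic of this partial trace spelled out, here is a small numerical sketch (my own illustration, not part of the original argument): the two-dimensional virus and photon Hilbert spaces and the particular amplitudes a, b are arbitrary choices, and NumPy is only used for the bookkeeping.

import numpy as np

# illustrative choice of amplitudes with |a|^2 + |b|^2 = 1
a, b = 0.6, 0.8j
psi1, psi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # virus basis states
ph1, ph2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])     # orthogonal photon states

# |psi_final> = a |psi1>|photon1> + b |psi2>|photon2>
psi_final = a * np.kron(psi1, ph1) + b * np.kron(psi2, ph2)
rho_total = np.outer(psi_final, psi_final.conj())

# trace over the photon factor; what is left is the virus's density matrix rho
rho_virus = np.trace(rho_total.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rho_virus, 3))
# the diagonal entries |a|^2 = 0.36 and |b|^2 = 0.64 survive, while the
# off-diagonal elements, which carried the relative phase of a and b, are zero

If the two photon states were not exactly orthogonal, the same computation would leave small off-diagonal remnants - the quantitative version of the incomplete decoherence discussed below.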
How quickly does it work?
In general, the photon states are not exactly orthogonal. But when you calculate how quickly the process destroys the off-diagonal elements of the density matrix, it is extremely fast. Even the
interactions with the cosmic microwave background are enough for a very tiny speck of dust to decohere within a tiny fraction of a second (if we care about the off-diagonal elements between position
states separated by as little as the CMB wavelength or even less).
The rate of decoherence gets faster for larger, hotter, denser, and strongly interacting environments. You must really cool the viruses very brutally to have any chance to avoid decoherence.
Also, the typical time dependence of the off-diagonal matrix elements is schematically "exp(-exp(t))", i.e. expo-exponential. (I omitted many coefficients, to make the function more readable.) It's
much faster than an exponential decrease. Once decoherence begins, it destroys the information about the relative phase immediately: let us accept an approximate yet pretty accurate convention (for
all conceivable purposes) that probabilities smaller than 10^{-2000} are identified with zero. ;-)
The expo-exponential dependence emerges because the number "N" of degrees of freedom that the state of the virus influences is growing exponentially with time (an exploding, cascading propagation of
information, "N=exp(t)"), and each degree of freedom adds a small multiplicative factor to the inner products of the environmental degrees of freedom ("ρ(12)=exp(-N)").
Decoherence, i.e. the "classical-quantum boundary in action", has been routinely observed in the labs since the 1996 experiments by
Raimond, Haroche, and others
Preserving the probabilistic character of physics
Fine, so the quantum, interfering, complex "probability amplitudes" (that routinely violate Bell's inequalities) have been transformed to classical probabilities (that obey Bell's inequalities). Now,
you may ask how the theory makes the second step: how does it transform the classical probabilities into particular, sharply determined classical answers?
This cat is way too similar to Lisa whom I have fed (and discussed quantum mechanics with) for 10 days in late August. Ouch.
Well, it never does. Even for macroscopic objects, the probabilistic character of the predictions is real. The outcomes of the experiments can't be determined by any hidden variables, not even in
principle. They are genuinely random. It is a fundamental fact about Nature.
The only reason why "determinism" seems to arise in the macroscopic world is that the probabilities predicted by quantum mechanics for "most outcomes" except a small neighborhood of the "classical result" are nearly equal to zero. That's why the macroscopic world looks approximately deterministic, given a finite accuracy of the measured eigenvalues. But it never becomes "fundamentally deterministic".
Decoherence, i.e. the liquidation of the information about the relative phases, is the only transformation that physics is doing in order to make the quantum world behave as a classical one. All the
ideas that "something else is needed" to get macroscopic, conscious, large objects similar to us and the cats (vital forces, holy spirits, additional privileged "beables" that differ from ordinary
"observables", or gravitational collapses of wave functions) are delusions and deep misunderstandings of quantum mechanics.
You may ask whether Schrödinger's cat can ever "feel" as being in a linear superposition of those two states that will quickly decohere. Well, it can't. Its "feelings" are an observable whose only
allowed eigenvalues are "dead" and "alive". Whenever you make any observations (including a "poll" in which you ask the cat about its feelings), you will see that the cat is either dead or alive.
In fact, because the probabilities have been transformed into ordinary classical probabilities that don't interfere, you may always imagine that one of the answers about the cat's condition - "dead"
or "alive" - was true even before you made the observation. More generally, you can always "imagine" that the "reduction" of the wave function took place immediately when decoherence became strong.
With this assumption, you can't ever run into any contradictions because the classical probabilities do obey Bell's inequalities and all the similar conditions. But you may still ask whether the
"reduction" was real - whether the cat was "really" in one of the states before you measured it.
Because you just don't know what the state was - and an observation is the only way to find out - the question whether the answer was decided "before your measurement" is unphysical. For microscopic
systems that don't decohere much, it can be shown that the outcomes couldn't have been determined before the measurements. For macroscopic objects that do decohere, you can't prove such a thing. In
fact, one can prove that no one can prove such a thing. ;-)
So it's consistent to imagine that decohering degrees of freedom had one of their allowed "classical values" even before the measurement. That's what most people do, anyway. The Moon is over there
even if no mouse is watching it.
Alternatively, if you're a solipsist, you may keep linear superpositions as a description of all objects (and cats) inside the Universe and only "reduce" this wave function when you want to determine
what your brain feels.
Your predictions will always be identical to the case when you "reduce" the wave function for all degrees of freedom as soon as they decohere. And if "two theories" give identical predictions for all
situations that are measurable, at least in principle, they are physically identical, regardless of the gap between the feelings that these "two theories" create in our minds.
Decoherence: arrow of time
The arrow of time is being frequently discussed on the physics blogs. Decoherence has its own arrow of time, too. The states tend to be "pure" (vectors in the Hilbert space) in the past but "mixed"
(density matrices) in the future.
Our derivation of decoherence instantly shows that the "decoherent arrow of time" is inevitably correlated with the logical arrow of time. We are tracing over the environmental degrees of freedom
because we're either "forgetting" about them, or we "want to forget" about them. The photons won't matter for the future life of the virus which is why we are able to "eliminate them" by tracing over
their Hilbert space. Our ability to predict things about the virus is not reduced at all.
This process can't be reverted because the information that doesn't exist or has been forgotten can't suddenly be "created again" or "unforgotten". ;-) Yes, all these arguments assume an asymmetry
between the past and the future - in the methods how we remember the past, and not the future, and so on.
These assumptions are called the "logical arrow of time" and no logical or logically sound argument relevant for the evolution of anything in time can ever avoid the "logical arrow of time". When we
think about time, the "logical arrow of time" is a part of the basic logic. And the basic logic is more fundamental than any "emergent process" that someone could imagine to "explain" the arrow of
time by some convoluted dynamics.
The "decoherent arrow of time" is also manifestly aligned with the "thermodynamic arrow of time", determining the direction in which the entropy increases. After all, if you define the entropy as the
uncertainty present in your density matrix, i.e. as the coarse-grained entropy
S = -Tr ρ ln(ρ),
it's clear that the evolution of pure states into mixed states (by tracing over some degrees of freedom, as in decoherence) increases "S".
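A numerical sketch of this statement, reusing the illustrative amplitudes from the virus example above (again my own example, not the article's):

import numpy as np

def coarse_grained_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

a, b = 0.6, 0.8
rho_pure = np.outer([a, b], [a, b])       # pure superposition, before decoherence
rho_mixed = np.diag([a**2, b**2])         # after tracing over the photon
print(coarse_grained_entropy(rho_pure))   # ~0: a pure state has zero entropy
print(coarse_grained_entropy(rho_mixed))  # ~0.653 nats: tracing out has increased S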
So these two arrows of time coincide (and in fact, even the rate of decoherence is pretty much linked to the rate of the entropy growth) but it's not a new insight - it's an insight that sane
physicists have understood since the very moment when they started to discuss decoherence, or even the density matrices (computed as partial traces).
So all these things are cool and sexy and we're used to viewing them as mysterious. And we often love the profound feelings of mystery. But in reality, there is no genuine question concerning the
behavior of Schrödinger viruses (or even cats) that would remain uncertain as of 2009.
And that's the memo.
snail feedback (6) :
Dear Lubos,
thanks so much for this post.
I've been frustrated by hand waving about "wave function collapse" for so many years and now I have something that at least makes sense. I have to read it a few times more to make sure I really
get it.
This comment has been removed by the author.
Dear Mike,
thanks for your compliments. The Palmer preprint is here (click).
It's the kind of words about a new picture of quantum mechanics that unifies it with the emergent geometry and clarifies everything blah blah blah - that may occur sometimes in the future.
However, this particular paper looks like another confused diatribe about Bohmian mechanics - with fractals etc. un-quantitatively added to the mixture - so I'm not gonna study it in detail. (It
has 0 citations, so I am probably not the only one who decides in this way.)
Best wishes
I forgot to say: this text about contextual and other observables explains one key aspect of QM that people like Palmer don't understand, namely that all observables are and must be treated by
the same mathematical structure.
All classical quantities are promoted to linear operators, all of them have a spectrum (eigenvalues), all these eigenvalues can only be predicted probabilistically (probabilities of different
outcomes), no eigenvalue exists for "certain" prior to the measurement, and all these basic postulates are true and must be inevitably true regardless of the spectrum's discreteness, continuity,
or mixed character.
Also, it doesn't matter whether one can create a good Bohmian model for a given observable or not. All of observables - operators - in quantum mechanics (and in our real quantum world) are
equally "real" or "unreal".
Eliezer Yudkowsky's Quantum Physics Sequence gave a pretty good explanation for laymen. At least as far as this layman can tell!
Particle physics is leading to the quantum field theory correlation function for the virtual force photons' map on the spacetime volume of an atom. This deals with the Schrodinger wavefunction,
and as "Schrodinger's Virus and Decoherence" points out, there are several interesting continuations. One productive result is the relative quantum expansion with the correlation function
solution. It solves the Schrodinger wavefunction for one atom.
The atom's RQT (relative quantum topological) data point imaging function is built by combination of the relativistic Einstein-Lorenz transform functions for time, mass, and energy with the
workon quantized electromagnetic wave equations for frequency and wavelength. The atom labeled psi (Z) pulsates at the frequency {Nhu=e/h} by cycles of {e=m(c^2)} transformation of nuclear
surface mass to forcons with joule values, followed by nuclear force absorption. This radiation process is limited only by spacetime boundaries of {Gravity-Time}, where gravity is the force
binding space to psi, forming the GT integral atomic wavefunction. The expression is defined as the series expansion differential of nuclear output rates with quantum symmetry numbers assigned
along the progression to give topology to the solutions.
Next, the correlation function for the manifold of internal heat capacity particle 3D functions condensed due to radial force dilution is extracted; by rearranging the total internal momentum
function to the photon gain rule and integrating it for GT limits. This produces a series of 26 topological waveparticle functions of five classes; {+Positron, Workon, Thermon, -Electromagneton,
Magnemedon}, each the 3D data image of a type of energy intermedon of the 5/2 kT J internal energy cloud, accounting for all of them.
Those values intersect the sizes of the fundamental physical constants: h, h-bar, delta, nuclear magneton, beta magneton, k (series). They quantize nuclear dynamics by acting as fulcrum
particles. The result is the picoyoctometric, 3D, interactive video atomic model data imaging function, responsive to keyboard input of virtual photon gain events by relativistic, quantized
shifts of electron, force, and energy field states and positions.
Images of the h-bar magnetic energy waveparticle of ~175 picoyoctometers are available online at http://www.symmecon.com with the complete RQT atomic modeling guide titled The Crystalon Door,
copyright TXu1-266-788. TCD conforms to the unopposed motion of disclosure in U.S. District (NM) Court of 04/02/2001 titled The Solution to the Equation of Schrodinger.
(C) 2009, Dale B. Ritter, B.A. | {"url":"http://motls.blogspot.co.uk/2009/09/schrodinger-virus-and-decoherence.html","timestamp":"2014-04-20T18:44:02Z","content_type":null,"content_length":"222774","record_id":"<urn:uuid:e885f989-4df1-42c0-b0ca-9388b163c714>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pendulum hitting peg
It looks like the math does not work out to be what it is supposed to be.
I was able to get your expression for sin(beta) and it looks like v^2 should be (2g/3)(L cos[theta] - R cos[alpha]) (the opposite of your expression, if I am not mistaken).
Yes, it was a typo in v^2.
From the condition of "slack":
Inserting into the equation for conservation of energy:
[tex] v^2=\frac{2g}{3}(R\cos{\alpha}-L\cos{\theta})[/tex]
Back to "slack":
I think it is easier to proceed by introducing the notation
[tex] v^2=\frac{2gB}{3}[/tex]
Either way, I end up with the following for cos (theta):
cos[theta] = (R/L)cos(a) + 3(L-R)cos^2(beta)/(2L sin(beta))
I got
But it is easier to keep
I got for the projectile part:
Replacing * for sin(beta)
and you get theta from here... But you are right, this was an arithmetic marathon.
Copyright 2008-2009 Mario Blazevic
This file is part of the Streaming Component Combinators (SCC) project.
The SCC project is free software: you can redistribute it and/or modify it under the terms of the GNU General Public
License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later
SCC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with SCC. If not, see <http://www.gnu.org/licenses/>.
{-# LANGUAGE ScopedTypeVariables, KindSignatures, Rank2Types, ImpredicativeTypes, ExistentialQuantification, DeriveDataTypeable,
MultiParamTypeClasses, FlexibleInstances, FunctionalDependencies #-}
module Control.Concurrent.SCC.ComponentTypes
(-- * Classes
Component (..), BranchComponent (combineBranches), LiftableComponent (liftComponent), Container (..),
-- * Types
AnyComponent (AnyComponent), Performer (..), Consumer (..), Producer(..), Splitter(..), Transducer(..),
ComponentConfiguration(..), Boundary(..), Markup(..), Parser,
-- * Lifting functions
liftPerformer, liftConsumer, liftAtomicConsumer, liftProducer, liftAtomicProducer,
liftTransducer, liftAtomicTransducer, lift121Transducer, liftStatelessTransducer, liftFoldTransducer, liftStatefulTransducer,
liftSplitter, liftAtomicSplitter, liftStatelessSplitter, liftStatefulSplitter,
-- * Utility functions
showComponentTree, optimalTwoParallelConfigurations, optimalTwoSequentialConfigurations, optimalThreeParallelConfigurations,
splitToConsumers, splitInputToConsumers
import Control.Concurrent.SCC.Foundation
import Control.Monad (liftM, when)
import Data.List (minimumBy)
import Data.Maybe
import Data.Typeable (Typeable, cast)
-- | 'AnyComponent' is an existential type wrapper around a 'Component'.
data AnyComponent = forall a. Component a => AnyComponent a
-- | The types of 'Component' class carry metadata and can be configured to use a specific number of threads.
class Component c where
name :: c -> String
-- | Returns the list of all children components.
subComponents :: c -> [AnyComponent]
-- | Returns the maximum number of threads that can be used by the component.
maxUsableThreads :: c -> Int
-- | Configures the component to use the specified number of threads. This function affects 'usedThreads', 'cost',
-- and 'subComponents' methods of the result, while 'name' and 'maxUsableThreads' remain the same.
usingThreads :: Int -> c -> c
-- | The number of threads that the component is configured to use. By default the number is usually 1.
usedThreads :: c -> Int
-- | The cost of using the component as configured.
cost :: c -> Int
cost c = 1 + sum (map cost (subComponents c))
instance Component AnyComponent where
name (AnyComponent c) = name c
subComponents (AnyComponent c) = subComponents c
maxUsableThreads (AnyComponent c) = maxUsableThreads c
usingThreads n (AnyComponent c) = AnyComponent (usingThreads n c)
usedThreads (AnyComponent c) = usedThreads c
cost (AnyComponent c) = cost c
-- | Show details of the given component's configuration.
showComponentTree :: forall c. Component c => c -> String
showComponentTree c = showIndentedComponent 1 c
showIndentedComponent :: forall c. Component c => Int -> c -> String
showIndentedComponent depth c = showRightAligned 4 (cost c) ++ showRightAligned 3 (usedThreads c) ++ replicate depth ' '
++ name c ++ "\n"
++ concatMap (showIndentedComponent (succ depth)) (subComponents c)
showRightAligned :: Show x => Int -> x -> String
showRightAligned width x = let str = show x
in replicate (width - length str) ' ' ++ str
data ComponentConfiguration = ComponentConfiguration {componentChildren :: [AnyComponent],
componentThreads :: Int,
componentCost :: Int}
-- | A component that performs a computation with no inputs nor outputs is a 'Performer'.
data Performer m r = Performer {performerName :: String,
performerMaxThreads :: Int,
performerConfiguration :: ComponentConfiguration,
performerUsingThreads :: Int -> (ComponentConfiguration, forall c. Pipe c m r),
perform :: forall c. Pipe c m r}
-- | A component that consumes values from a 'Source' is called 'Consumer'.
-- data Consumer m x r = Consumer {consumerData :: ComponentData (forall c. Source c x -> Pipe c m r),
-- consume :: forall c. Source c x -> Pipe c m r}
data Consumer m x r = Consumer {consumerName :: String,
consumerMaxThreads :: Int,
consumerConfiguration :: ComponentConfiguration,
consumerUsingThreads :: Int -> (ComponentConfiguration, forall c. Source c x -> Pipe c m r),
consume :: forall c. Source c x -> Pipe c m r}
-- | A component that produces values and puts them into a 'Sink' is called 'Producer'.
data Producer m x r = Producer {producerName :: String,
producerMaxThreads :: Int,
producerConfiguration :: ComponentConfiguration,
producerUsingThreads :: Int -> (ComponentConfiguration, forall c. Sink c x -> Pipe c m r),
produce :: forall c. Sink c x -> Pipe c m r}
-- | The 'Transducer' type represents computations that transform data and return no result.
-- A transducer must continue consuming the given source and feeding the sink while there is data.
data Transducer m x y = Transducer {transducerName :: String,
transducerMaxThreads :: Int,
transducerConfiguration :: ComponentConfiguration,
transducerUsingThreads :: Int -> (ComponentConfiguration,
forall c. Source c x -> Sink c y -> Pipe c m [x]),
transduce :: forall c. Source c x -> Sink c y -> Pipe c m [x]}
-- | The 'Splitter' type represents computations that distribute data acording to some criteria. A splitter should
-- distribute only the original input data, and feed it into the sinks in the same order it has been read from the
-- source. If the two 'Sink c x' arguments of a splitter are the same, the splitter must act as an identity transform.
data Splitter m x b = Splitter {splitterName :: String,
splitterMaxThreads :: Int,
splitterConfiguration :: ComponentConfiguration,
splitterUsingThreads :: Int -> (ComponentConfiguration,
forall c. Source c x -> Sink c x -> Sink c x -> Sink c b
-> Pipe c m [x]),
split :: forall c. Source c x -> Sink c x -> Sink c x -> Sink c b -> Pipe c m [x]}
-- | A 'Markup' value is produced to mark either a 'Start' or 'End' of a region of data, or an arbitrary
-- 'Point' in data. A 'Point' is semantically equivalent to a 'Start' immediately followed by 'End'. The 'Content'
-- constructor wraps the actual data.
data Boundary y = Start y | End y | Point y deriving (Eq, Show, Typeable)
data Markup x y = Content x | Markup (Boundary y) deriving (Eq, Typeable)
type Parser m x b = Transducer m x (Markup x b)
instance Functor Boundary where
fmap f (Start b) = Start (f b)
fmap f (End b) = End (f b)
fmap f (Point b) = Point (f b)
instance (Show y) => Show (Markup Char y) where
showsPrec p (Content x) s = x : s
showsPrec p (Markup b) s = '[' : shows b (']' : s)
-- | The 'Container' class applies to two types where a value of the first type may contain values of the second type.
class Container x y where
-- | 'unwrap' returns a pair of a 'Splitter' that determines which containers are non-empty, and a 'Transducer' that
-- unwraps the contained values.
unwrap :: ParallelizableMonad m => (Splitter m x (), Transducer m x y)
-- | 'rewrap' returns a 'Transducer' that puts the unwrapped values into containers again.
rewrap :: ParallelizableMonad m => Transducer m y x
instance (Typeable x, Typeable y) => Container (Markup x y) x where
unwrap = (liftStatelessSplitter "isContent" isContent, liftStatelessTransducer "unwrapContent" unwrapContent)
where isContent (Content x) = True
isContent _ = False
unwrapContent (Content x) = [x]
unwrapContent _ = []
rewrap = lift121Transducer "wrapContent" Content
class LiftableComponent cx cy x y | cx -> x, cy -> y, cx y -> cy, cy x -> cx where
liftComponent :: cy -> cx
instance forall m x y. (Container x y, ParallelizableMonad m, Typeable x, Typeable y)
=> LiftableComponent (Transducer m x x) (Transducer m y y) x y where
liftComponent t = liftTransducer "liftComponent" (maxUsableThreads t + maxUsableThreads (rewrap :: Transducer m y x)) $
\threads-> let (configuration, t', w', parallel) = optimalTwoParallelConfigurations threads t wrapper
(wrapper :: Splitter m x (), unwrap' :: Transducer m x y) = unwrap
tx source sink = liftM (const []) $
(\true-> pipe
(split w' source true sink)
(\wrapped-> pipe
(transduce unwrap' wrapped)
(\unwrapped-> pipe
(transduce t' unwrapped)
(\out-> transduce rewrap out sink)))
in (configuration, tx)
instance forall m x y. (Container x y, ParallelizableMonad m, Typeable x, Typeable y)
=> LiftableComponent (Splitter m x ()) (Splitter m y ()) x y where
liftComponent splitter = liftSplitter "liftComponent" (maxUsableThreads splitter + maxUsableThreads (rewrap :: Transducer m y x)) $
\threads-> let (configuration, s', w', parallel) = optimalTwoParallelConfigurations threads splitter wrapper
(wrapper :: Splitter m x (), unwrap' :: Transducer m x y) = unwrap
split' :: forall c. Source c x -> Sink c x -> Sink c x -> Sink c () -> Pipe c m [x]
split' source true false edge
= liftM (fst . fst . fst) $
(\rewrappedTrue-> pipe
(\rewrappedFalse-> split'' source rewrappedTrue rewrappedFalse false edge)
(flip (transduce rewrap) false))
(flip (transduce rewrap) true)
split'' :: forall c. Source c x -> Sink c y -> Sink c y -> Sink c x -> Sink c () -> Pipe c m ([x], ([x], [y]))
split'' source true1 false1 false2 edge = pipe
(\sink-> split''' source sink false2 edge)
(\source-> pipe
(transduce unwrap' source)
(\source-> split s' source true1 false1 edge))
split''' :: forall c. Source c x -> Sink c x -> Sink c x -> Sink c ()
-> Pipe c m [x]
split''' source true false edge = split w' source true false edge
in (configuration, split')
instance Component (Performer m r) where
name = performerName
subComponents = componentChildren . performerConfiguration
maxUsableThreads = performerMaxThreads
usedThreads = componentThreads . performerConfiguration
usingThreads threads performer = let (configuration', perform' :: forall c. Pipe c m r) = performerUsingThreads performer threads
in performer{performerConfiguration= configuration', perform= perform'}
cost = componentCost . performerConfiguration
instance Component (Consumer m x r) where
name = consumerName
subComponents = componentChildren . consumerConfiguration
maxUsableThreads = consumerMaxThreads
usedThreads = componentThreads . consumerConfiguration
usingThreads threads consumer = let (configuration',
consume' :: forall c. Source c x -> Pipe c m r) = consumerUsingThreads consumer threads
in consumer{consumerConfiguration= configuration', consume= consume'}
cost = componentCost . consumerConfiguration
instance Component (Producer m x r) where
name = producerName
subComponents = componentChildren . producerConfiguration
maxUsableThreads = producerMaxThreads
usedThreads = componentThreads . producerConfiguration
usingThreads threads producer = let (configuration',
produce' :: forall c. Sink c x -> Pipe c m r) = producerUsingThreads producer threads
in producer{producerConfiguration= configuration', produce= produce'}
cost = componentCost . producerConfiguration
instance Component (Transducer m x y) where
name = transducerName
subComponents = componentChildren . transducerConfiguration
maxUsableThreads = transducerMaxThreads
usedThreads = componentThreads . transducerConfiguration
usingThreads threads transducer = let (configuration', transduce' :: forall c. Source c x -> Sink c y -> Pipe c m [x])
= transducerUsingThreads transducer threads
in transducer{transducerConfiguration= configuration', transduce= transduce'}
cost = componentCost . transducerConfiguration
instance Component (Splitter m x b) where
name = splitterName
subComponents = componentChildren . splitterConfiguration
maxUsableThreads = splitterMaxThreads
usedThreads = componentThreads . splitterConfiguration
usingThreads threads splitter = let (configuration',
split' :: forall c. Source c x -> Sink c x -> Sink c x -> Sink c b -> Pipe c m [x])
= splitterUsingThreads splitter threads
in splitter{splitterConfiguration= configuration',
split= split'}
cost = componentCost . splitterConfiguration
-- | 'BranchComponent' is a type class representing all components that can act as consumers, namely 'Consumer',
-- 'Transducer', and 'Splitter'.
class BranchComponent cc m x r | cc -> m x where
-- | 'combineBranches' is used to combine two components in 'BranchComponent' class into one, using the
-- given 'Consumer' binary combinator.
combineBranches :: String -> Int
-> (forall c. Bool -> (Source c x -> Pipe c m r) -> (Source c x -> Pipe c m r) -> (Source c x -> Pipe c m r))
-> cc -> cc -> cc
instance forall m x r. Monad m => BranchComponent (Consumer m x r) m x r where
combineBranches name cost combinator c1 c2 = liftConsumer name 1 $
\threads-> (ComponentConfiguration [AnyComponent c1, AnyComponent c2] 1 cost,
combinator False (consume c1) (consume c2))
instance forall m x. Monad m => BranchComponent (Consumer m x ()) m x [x] where
combineBranches name cost combinator c1 c2 = liftConsumer name 1 $
\threads-> (ComponentConfiguration [AnyComponent c1, AnyComponent c2] 1 cost,
liftM (const ())
. combinator False
(\source-> consume c1 source >> return [])
(\source-> consume c2 source >> return []))
instance forall m x y. BranchComponent (Transducer m x y) m x [x] where
combineBranches name cost combinator t1 t2
= liftTransducer name (maxUsableThreads t1 + maxUsableThreads t2) $
\threads-> let (configuration, t1', t2', parallel) = optimalTwoParallelConfigurations threads t1 t2
transduce' source sink = combinator parallel
(\source-> transduce t1 source sink)
(\source-> transduce t2 source sink)
in (configuration, transduce')
instance forall m x b. (ParallelizableMonad m, Typeable x) => BranchComponent (Splitter m x b) m x [x] where
combineBranches name cost combinator s1 s2
= liftSplitter name (maxUsableThreads s1 + maxUsableThreads s2) $
\threads-> let (configuration, s1', s2', parallel) = optimalTwoParallelConfigurations threads s1 s2
split' source true false edge = combinator parallel
(\source-> split s1 source true false edge)
(\source-> split s2 source true false edge)
in (configuration, split')
-- | Function 'liftPerformer' takes a component name, maximum number of threads it can use, and its 'usingThreads'
-- method, and returns a 'Performer' component.
liftPerformer :: String -> Int -> (Int -> (ComponentConfiguration, forall c. Pipe c m r)) -> Performer m r
liftPerformer name maxThreads usingThreads = case usingThreads 1
of (configuration, perform) -> Performer name maxThreads configuration
usingThreads perform
-- | Function 'liftConsumer' takes a component name, maximum number of threads it can use, and its 'usingThreads'
-- method, and returns a 'Consumer' component.
liftConsumer :: String -> Int -> (Int -> (ComponentConfiguration, forall c. Source c x -> Pipe c m r)) -> Consumer m x r
liftConsumer name maxThreads usingThreads = case usingThreads 1
of (configuration, consume) -> Consumer name maxThreads configuration
usingThreads consume
-- | Function 'liftProducer' takes a component name, maximum number of threads it can use, and its 'usingThreads'
-- method, and returns a 'Producer' component.
liftProducer :: String -> Int -> (Int -> (ComponentConfiguration, forall c. Sink c x -> Pipe c m r)) -> Producer m x r
liftProducer name maxThreads usingThreads = case usingThreads 1
of (configuration, produce) -> Producer name maxThreads configuration
usingThreads produce
-- | Function 'liftTransducer' takes a component name, maximum number of threads it can use, and its 'usingThreads'
-- method, and returns a 'Transducer' component.
liftTransducer :: String -> Int -> (Int -> (ComponentConfiguration, forall c. Source c x -> Sink c y -> Pipe c m [x]))
-> Transducer m x y
liftTransducer name maxThreads usingThreads = case usingThreads 1
of (configuration, transduce) -> Transducer name maxThreads configuration
usingThreads transduce
-- | Function 'liftAtomicConsumer' lifts a single-threaded 'consume' function into a 'Consumer' component.
liftAtomicConsumer :: String -> Int -> (forall c. Source c x -> Pipe c m r) -> Consumer m x r
liftAtomicConsumer name cost consume = liftConsumer name 1 (\_threads-> (ComponentConfiguration [] 1 cost, consume))
-- | Function 'liftAtomicProducer' lifts a single-threaded 'produce' function into a 'Producer' component.
liftAtomicProducer :: String -> Int -> (forall c. Sink c x -> Pipe c m r) -> Producer m x r
liftAtomicProducer name cost produce = liftProducer name 1 (\_threads-> (ComponentConfiguration [] 1 cost, produce))
-- | Function 'liftAtomicTransducer' lifts a single-threaded 'transduce' function into a 'Transducer' component.
liftAtomicTransducer :: String -> Int -> (forall c. Source c x -> Sink c y -> Pipe c m [x]) -> Transducer m x y
liftAtomicTransducer name cost transduce = liftTransducer name 1 (\_threads-> (ComponentConfiguration [] 1 cost, transduce))
-- | Function 'lift121Transducer' takes a function that maps one input value to one output value each, and lifts it into
-- a 'Transducer'.
lift121Transducer :: (Monad m, Typeable x, Typeable y) => String -> (x -> y) -> Transducer m x y
lift121Transducer name f = liftAtomicTransducer name 1 $
\source sink-> let t = canPut sink
>>= flip when (getSuccess source (\x-> put sink (f x) >> t))
in t >> return []
-- | Function 'liftStatelessTransducer' takes a function that maps one input value into a list of output values, and
-- lifts it into a 'Transducer'.
liftStatelessTransducer :: (Monad m, Typeable x, Typeable y) => String -> (x -> [y]) -> Transducer m x y
liftStatelessTransducer name f = liftAtomicTransducer name 1 $
\source sink-> let t = canPut sink
>>= flip when (getSuccess source (\x-> putList (f x) sink >> t))
in t >> return []
-- | Function 'liftFoldTransducer' creates a stateful transducer that produces only one output value after consuming the
-- entire input. Similar to 'Data.List.foldl'
liftFoldTransducer :: (Monad m, Typeable x, Typeable y) => String -> (s -> x -> s) -> s -> (s -> y) -> Transducer m x y
liftFoldTransducer name f s0 w = liftAtomicTransducer name 1 $
\source sink-> let t s = canPut sink
>>= flip when (get source
>>= maybe
(put sink (w s) >> return ())
(t . f s))
in t s0 >> return []
-- | Function 'liftStatefulTransducer' constructs a 'Transducer' from a state-transition function and the initial
-- state. The transition function may produce arbitrary output at any transition step.
liftStatefulTransducer :: (Monad m, Typeable x, Typeable y) => String -> (state -> x -> (state, [y])) -> state -> Transducer m x y
liftStatefulTransducer name f s0 = liftAtomicTransducer name 1 $
\source sink-> let t s = canPut sink
>>= flip when (getSuccess source
(\x-> let (s', ys) = f s x
in putList ys sink >> t s'))
in t s0 >> return []
-- | Function 'liftStatelessSplitter' takes a function that assigns a Boolean value to each input item and lifts it into
-- a 'Splitter'.
liftStatelessSplitter :: (ParallelizableMonad m, Typeable x) => String -> (x -> Bool) -> Splitter m x b
liftStatelessSplitter name f = liftAtomicSplitter name 1 $
\source true false edge->
let s = get source
>>= maybe
(return [])
(\x-> put (if f x then true else false) x
>>= cond s (return [x]))
in s
-- | Function 'liftStatefulSplitter' takes a state-converting function that also assigns a Boolean value to each input
-- item and lifts it into a 'Splitter'.
liftStatefulSplitter :: (ParallelizableMonad m, Typeable x) => String -> (state -> x -> (state, Bool)) -> state -> Splitter m x ()
liftStatefulSplitter name f s0 = liftAtomicSplitter name 1 $
\source true false edge->
let split s = get source
>>= maybe
(return [])
(\x-> let (s', truth) = f s x
in put (if truth then true else false) x
>>= cond (split s') (return [x]))
in split s0
-- | Function 'liftSplitter' lifts a splitter function into a full 'Splitter'.
liftSplitter :: forall m x b. (Monad m, Typeable x) =>
String -> Int
-> (Int -> (ComponentConfiguration, forall c. Source c x -> Sink c x -> Sink c x -> Sink c b -> Pipe c m [x]))
-> Splitter m x b
liftSplitter name maxThreads usingThreads = case usingThreads 1
of (configuration, split) -> Splitter name maxThreads configuration usingThreads split
-- | Function 'liftAtomicSplitter' lifts a single-threaded 'split' function into a 'Splitter' component.
liftAtomicSplitter :: forall m x b. (Monad m, Typeable x) =>
String -> Int -> (forall c. Source c x -> Sink c x -> Sink c x -> Sink c b -> Pipe c m [x])
-> Splitter m x b
liftAtomicSplitter name cost split = liftSplitter name 1 (\_threads-> (ComponentConfiguration [] 1 cost, split))
-- | Function 'optimalTwoSequentialConfigurations' configures two components, both of them with the full thread count, and
-- returns the components and a 'ComponentConfiguration' that can be used to build a new component from them.
optimalTwoSequentialConfigurations :: (Component c1, Component c2) => Int -> c1 -> c2 -> (ComponentConfiguration, c1, c2)
optimalTwoSequentialConfigurations threads c1 c2 = (configuration, c1', c2')
where configuration = ComponentConfiguration
[AnyComponent c1', AnyComponent c2']
(usedThreads c1' `max` usedThreads c2')
(cost c1' + cost c2')
c1' = usingThreads threads c1
c2' = usingThreads threads c2
-- | Function 'optimalTwoParallelConfigurations' configures two components assuming they can be run in parallel,
-- splitting the given thread count between them, and returns the configured components, a 'ComponentConfiguration' that
-- can be used to build a new component from them, and a flag that indicates if they should be run in parallel or
-- sequentially for optimal resource usage.
optimalTwoParallelConfigurations :: (Component c1, Component c2) => Int -> c1 -> c2 -> (ComponentConfiguration, c1, c2, Bool)
optimalTwoParallelConfigurations threads c1 c2 = (configuration, c1', c2', parallelize)
where parallelize = threads > 1 && parallelCost + 1 < sequentialCost
configuration = ComponentConfiguration
[AnyComponent c1', AnyComponent c2']
(if parallelize then usedThreads c1' + usedThreads c2' else usedThreads c1' `max` usedThreads c2')
(if parallelize then parallelCost + 1 else sequentialCost)
(c1', c2') = if parallelize then (c1p, c2p) else (c1s, c2s)
(c1p, c2p, parallelCost) = minimumBy
(\(_, _, cost1) (_, _, cost2)-> compare cost1 cost2)
[let c2threads = threads - c1threads `min` maxUsableThreads c2
c1i = usingThreads c1threads c1
c2i = usingThreads c2threads c2
in (c1i, c2i, cost c1i `max` cost c2i)
| c1threads <- [1 .. threads - 1 `min` maxUsableThreads c1]]
c1s = usingThreads threads c1
c2s = usingThreads threads c2
sequentialCost = cost c1s + cost c2s
-- | Function 'optimalThreeParallelConfigurations' configures three components assuming they can be run in parallel,
-- splitting the given thread count between them, and returns the components, a 'ComponentConfiguration' that can be
-- used to build a new component from them, and a flag per component that indicates if it should be run in parallel or
-- sequentially for optimal resource usage.
optimalThreeParallelConfigurations :: (Component c1, Component c2, Component c3) =>
Int -> c1 -> c2 -> c3 -> (ComponentConfiguration, (c1, Bool), (c2, Bool), (c3, Bool))
optimalThreeParallelConfigurations threadCount c1 c2 c3 = undefined
-- | Given a 'Splitter', a 'Source', and three consumer functions, 'splitToConsumers' runs the splitter on the source
-- and feeds the splitter's outputs to its /true/, /false/, and /edge/ sinks, respectively, to the three consumers.
splitToConsumers :: forall c m x b r1 r2 r3. (ParallelizableMonad m, Typeable x, Typeable b)
=> Splitter m x b -> Source c x -> (Source c x -> Pipe c m r1) -> (Source c x -> Pipe c m r2)
-> (Source c b -> Pipe c m r3) -> Pipe c m ([x], r1, r2, r3)
splitToConsumers s source trueConsumer falseConsumer edgeConsumer
   = pipe
        (\true-> pipe
                    (\false-> pipe
                                 (\edge-> split s source true false edge)
                                 edgeConsumer)
                    falseConsumer)
        trueConsumer
     >>= \(((extra, r3), r2), r1)-> return (extra, r1, r2, r3)
-- | Given a 'Splitter', a 'Source', and two consumer functions, 'splitInputToConsumers' runs the splitter on the source
-- and feeds the splitter's /true/ and /false/ outputs, respectively, to the two consumers.
splitInputToConsumers :: forall c m x b r1 r2. (ParallelizableMonad m, Typeable x, Typeable b)
=> Bool -> Splitter m x b -> Source c x -> (Source c x -> Pipe c m [x]) -> (Source c x -> Pipe c m [x])
-> Pipe c m [x]
splitInputToConsumers parallel s source trueConsumer falseConsumer
= pipe'
(\false-> pipe'
(\true-> pipe
(split s source true false)
>>= \(((extra, _), xs1), xs2)-> return (prependCommonPrefix xs1 xs2 extra)
where pipe' = if parallel then pipeP else pipe
prependCommonPrefix (x:xs) (y:ys) tail = x : prependCommonPrefix xs ys tail
prependCommonPrefix _ _ tail = tail | {"url":"http://hackage.haskell.org/package/scc-0.3/docs/src/Control-Concurrent-SCC-ComponentTypes.html","timestamp":"2014-04-21T02:37:46Z","content_type":null,"content_length":"163866","record_id":"<urn:uuid:582f7648-43f1-4e64-925d-ddc87f8fc4b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00617-ip-10-147-4-33.ec2.internal.warc.gz"} |
Allowing user to input decimal.
October 21st, 2013, 04:01 PM
Allowing user to input decimal.
This code is near complete, the only task that is left is allowing the user to input a decimal and then two integers, or automatically using .00 decimal.
The automatic part: /*This is not correct.
printf(".%.2d\n", number);
But that does no good for me. Question: Do I have to create some sort of while loop again, to allow the user to input a decimal followed by an integer?
#include <iostream>
#include <cstdlib>
using namespace std;
int main()
int j , i = 0, k = 0;
int number;
double sum = 0.0;
double Average = 0.0;
cout << "Enter Positive integer number: ";
while(cin >> number)
cout << endl;
if( number < 0)//test if the number is negative
cout << "Ending program since user has input a negative number" <<endl;
int temp = number;
int p = 1;
sum = (sum+temp);
Average = sum/2;
while(temp > 0) //counting number of digits
temp /= 10;
p *= 10;
cout << "Total Sum: " << sum << endl;
cout << "Average: "<< Average << endl;
j = i % 3;
p /= 10;
while( i > 0 )//display integer number with 1000 seperator
//entering gives me error if digits exceed 9
cout << char ((number/p) +'0');
number %= p;
p /= 10;
if ((k % 3 == 0 && i > 0)||(j == 0 && i > 2) )
cout <<",";
k = 0;
/*This is not correct.
printf(".%.2d\n", number);
cout << endl << endl;
cout << "This program exits if you input negative number and/or input non-integer\n";
cout << "Enter another integer number: ";
return 0;
October 21st, 2013, 04:57 PM
Re: Allowing user to input decimal.
This line does nothing
as the result of the cast is not used.
Average = sum/2;
The average of a set of numbers is their sum divided by the number of numbers. So in this case you need to keep a count of the number of numbers entered and use that for the division to find the
correct average.
For 32 bit signed integers, the maximum value is 2147483647 (see limits.h)
//entering gives me error if digits exceed 9
because when the number of digits exceeds 9,
p *= 10;
causes p to overflow its max value.
/*This is not correct.
printf(".%.2d\n", number);
printf("\n%.2lf\n", (double)number);
which casts number to a double so it can be displayed as a double with decimals.
Note that at this point in your code, number does not have the correct value to display.
allowing the user to input a decimal and then two integers,
If the user needs to be able to input a number containing a decimal point, then this is a double/float number and can be input directly as such.
When confronted with a program which does not work as expected, you need to use the debugger to trace through the program to see where its behaviour deviates from that which is expected. Being able to use the debugger successfully is as important as being able to write code and is a skill which needs to be mastered.
Also, just using single letter variable names is not very helpful when trying to understand the code. Use meaningful variable names and don't re-use a variable for more than one purpose.
Struve function H: Integration
Indefinite integration
Involving only one direct function
Involving one direct function and elementary functions
Involving power function
Involving exponential function and a power function
Involving functions of the direct function and elementary functions
Involving elementary functions of the direct function and elementary functions
Involving products of the direct function and a power function
Involving direct function and Bessel-, Airy-, Struve-type functions
Involving Bessel functions
Involving Bessel J and power
Definite integration
For the direct function itself
Involving the direct function | {"url":"http://functions.wolfram.com/Bessel-TypeFunctions/StruveH/21/ShowAll.html","timestamp":"2014-04-16T16:35:16Z","content_type":null,"content_length":"46052","record_id":"<urn:uuid:64d8b68c-9284-4a2b-90fb-009ebab709bd>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical English Usage - a Dictionary
by Jerzy Trzeciak
[see also: characteristic, feature]
Let f be a map with f|M having the Mittag-Leffler property.
Then F has the property that......
the space of all functions with the property that......
Now F has the additional property of being convex.
The operators A[n] have still better smoothness properties.
Consequently, F has the Δ[2] property. [= F has property Δ[2].]
Among all X with fixed L^2 norm, the extremal properties are achieved by multiples of U.
However, not every ring enjoys the stronger property of being bounded.
On the other hand, as yet, we have not taken advantage of the basic property enjoyed by S: it is a simplex.
Certain other classes share this property.
This property is characteristic of holomorphic functions with......
The structure of a Banach algebra is frequently reflected in the growth properties of its analytic semigroups.
It has some basic properties in common with another most important class of functions, namely, the continuous ones.
The space X does not have <fails to have> the Radon-Nikodym property.
The Wave Equation
Many physical systems can be modeled by a scalar-valued function of space and time. In the case when the second derivative with respect to time (that is the acceleration) is proportional to the
second derivative with respect to space (that is the local convexity), the system exhibits wave behavior.
In one dimension, one can write ∂^2f/∂t^2=c^2∂^2f/∂x^2 . The corresponding (scalar) second derivative with respect to space in higher dimensions is the divergence of the gradient. The gradient has as
many dimensions as the domain space of the function in question, while the divergence (of a vector-vector function, such as the gradient) is a scalar (function). Thus, the general wave equation is
the following partial differential equation:
∂^2f/∂t^2=c^2div grad f
Exact Solutions
In the one-dimensional case the general solution is
f(x,t)=A(x-ct)+B(x+ct)
where A and B are (twice) differentiable scalar functions. The above formula (the so-called d'Alembert solution) can be easily verified using the chain rule for differentiation.
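As a quick check of the chain-rule claim: ∂A(x-ct)/∂t = -cA'(x-ct), so ∂^2A(x-ct)/∂t^2 = c^2A''(x-ct), while ∂^2A(x-ct)/∂x^2 = A''(x-ct); hence the first term satisfies ∂^2f/∂t^2=c^2∂^2f/∂x^2, and the same computation works for B(x+ct), where the sign of c is reversed but squares to the same c^2.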
The exact solution for higher dimensions also exists. It looks as follows:
f(x,t)=∫[τ]∫[y] |x-y|^r-1D(y,τ+|x-y|/c)dydτ
where r is the dimensionality of x and D is the so-called excitation (or disturbance) function (a scalar function of time and space).
Numerical Solution
In numerical models, both time and space are (usually uniformly) quantized. Instead of differentials, we use differences. By simple calculations, one can see that the divergence of the gradient
becomes the (appropriately scaled) difference between the average of its neighbors and the sample point itself.
A numeric simulation of the two-dimensional case using Euler's method would have the following pseudo-code:
1. ddf:=c^2*[(f1+f2+f3+f4)/4-f]
2. f:=f+df
3. df:=df+ddf
4. go to 1.
where f1(x,y)=f(x+1,y), f2(x,y)=f(x-1,y), f3(x,y)=f(x,y-1), f4(x,y)=f(x,y+1).
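For readers who prefer runnable code to pseudo-code, here is a rough NumPy translation of the loop above (a sketch only: the grid size, the value of c, the fixed zero boundary and the single initial disturbance are illustrative choices, not the applet's actual settings):

import numpy as np

N, c = 200, 0.5                       # 200 by 200 grid; c < 1 keeps the scheme stable
f = np.zeros((N, N))                  # the field
df = np.zeros((N, N))                 # its "velocity"
f[N // 2, N // 2] = 1.0               # one initial disturbance in the middle

for step in range(1000):
    # average of the four neighbors, computed for interior points only
    avg = 0.25 * (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2])
    ddf = c**2 * (avg - f[1:-1, 1:-1])    # step 1: acceleration ~ local convexity
    f[1:-1, 1:-1] += df[1:-1, 1:-1]       # step 2: f := f + df
    df[1:-1, 1:-1] += ddf                 # step 3: df := df + ddf
# the boundary stays at zero, which produces the reflections mentioned below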
A Java applet accomplishing the same on a 200 by 200 grid can be found below. Periodic excitation can be added by pressing the mouse button. Note the circular shape of the wavefronts, the reflections, the law of conservation of energy, the Doppler effect and the increase of entropy.
All the text and the software above have been originally written by Daniel A. Nagy | {"url":"http://www.epointsystem.org/~nagydani/wave","timestamp":"2014-04-20T00:38:25Z","content_type":null,"content_length":"3557","record_id":"<urn:uuid:048c5551-3fd4-4417-9a5f-4262e449f4c6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Valley Forge SAT Math Tutor
Find a Valley Forge SAT Math Tutor
...The Praxis tests consist of questions that are presented in a specific format. You need to be aware of this type of test format and you should have practice in answering the questions asked
within the time allotted for answering them. I will familiarize you with the different types of questions that appear on the Reading and Writing tests.
62 Subjects: including SAT math, reading, English, calculus
...We talk about organization, note-taking, personal responsibility, self-advocacy, and test-taking strategies. These study skills are invaluable in a student's success in K-12 classes and
beyond. For several years, I have been working with students on the verbal section of the MCAT.
47 Subjects: including SAT math, chemistry, reading, English
...I hold Bachelor of Science and Master of Science degrees. Also, I have experience instructing elementary-age children in a home-schooling environment. I consider one of the most important
elements of science to be researching the correct answer.
20 Subjects: including SAT math, reading, statistics, biology
...In just a few weeks I had about 25 outfits completed. I recruited 8 of my close friends, one of which had modeled for several years in Milan, Italy, to model the garments. I selected a date,
invited more friends, acquaintances and anyone who would listen and held a fashion show to introduce my designs.
51 Subjects: including SAT math, English, reading, geometry
...D.S. (Mother & middle school teacher) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We initially employed Jonathan to work with our son to prepare for a 7th grade
math placement exam. Our son earned one of the highest marks in his class. Our son enjoyed working with Jonathan so much that he asked to continue to work with him during the school year.
22 Subjects: including SAT math, calculus, writing, geometry
[SciPy-User] Request for usage examples on scipy.stats.rv_continuous and scipy.ndimage.grey_dilate(structure)
Christoph Deil Deil.Christoph@googlemail....
Mon Mar 22 12:47:42 CDT 2010
On Mar 22, 2010, at 6:13 PM, Robert Kern wrote:
> On Mon, Mar 22, 2010 at 11:58, Christoph Deil
> <Deil.Christoph@googlemail.com> wrote:
>> Dear Robert,
>> thanks for the tip. I tried understanding the examples in scipy/stats/distributions.py, but being a python / scipy newbie I find the mechanism hard to understand and couldn't implement the simple examples I suggested below.
> What confused you?
I didn't know how to specify the limits. The rv_continuous docstring says:
Constructor information:
Definition: scipy.stats.rv_continuous(self, momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, badvalue=None, name=None, longname=None, shapes=None, extradoc=None)
The meaning of the parameters a, b, xa, xb doesn't seem to be documented. Now that I had a look at the source code it's obvious.
>> Maybe it would be possible to add an example to the tutorial? At http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/stats.rst/#stats there is an example on how to use rv_discrete, but none on how to use rv_continuous.
>> Would it be possible to add a convenience function to scipy.stats that makes it easy to construct a distribution from a function:
>>>>> p = lambda x: x**2
>>>>> pdist = scipy.stats.rv_continuous_from_function(pdf=p, lim=[0,2]) # a suggestion, doesn't exist at the moment
>>>>> samples = pdist.rvs(size=10)
> class x2_gen(rv_continuous):
>     def _pdf(self, x):
>         return x * x * 0.375
> x2 = x2_gen(a=0.0, b=2.0, name='x2')
Thanks! This works.
Multidimensional correlated parameter distributions like pdf(x,y) = x*y cannot be implemented using rv_continuous, right?
If yes, is there a python module to work with correlated multidimensional pdfs?
>> I would guess that getting random numbers from a user defined distribution function is such a common usage that it would be nice (at least for newbies like me :-) to be able to do it from the command line, without having to derive a class.
> It's not particularly common, no.
> --
> Robert Kern
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
Math Forum - Problems Library - Primary, Numeration - Writing Numbers
Writing Numbers
Children get practice in writing numbers with these problems.
Related Resources
Interactive resources from our Math Tools project:
Math 1: Number Sense
NCTM Standards:
Number and Operations Standard for Grades Pre-K-3
Access to these problems requires a Membership.
Grade 1. Write a number sentence to explain how many pennies Darryl has. ... more>>
Grade 1. Find the next numbers in the pattern. ... more>>
Grade K. Show the number 10 in different ways. ... more>>
Page: 1 | {"url":"http://mathforum.org/library/problems/sets/primary_numeration_writing.html","timestamp":"2014-04-16T05:38:34Z","content_type":null,"content_length":"10849","record_id":"<urn:uuid:4ec43a8d-4c66-4eb2-88ff-5a9a43c7cdd4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: RE: Testing nested models using logistic regression with robust
RE: st: RE: Testing nested models using logistic regression with robust standard errors
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: RE: Testing nested models using logistic regression with robust standard errors
Date Tue, 29 Apr 2008 14:28:20 +0100
Well, I agree strongly and I don't. Assessing parsimony and goodness of
fit -- which often but not always is a trade-off problem -- are I
imagine major issues for many if not most people in this list. Wanting
to regard the trade-off as a testing problem is not however universal. I
am fortunate enough to find myself in fields where the choice of model
depends finally almost always on physical or biological criteria rather
than some P-value. (Substitute "economic" or whatever for your own
One of the most intriguing divides in statistical science is between
those whose ideal is evidently a formal set of rules which will guide
one ineluctably towards the correct model or decision for any dataset --
and those who doubt very much whether that is desirable, let alone
possible. (A pretty large class of questions on this list ends with the
question "Is this correct?". Most of those seem to be "econometrically
Those who agree with the first position then commit themselves to a
career-long argument with each other about quite what those rules are
going to be.
Nick Cox
John LeBlanc
Thanks for the reply. I take your point about the limitations of sw
regression and I will be more hesitant in using them. However, whether
one uses sw or whether a more appropriate theory-driven approach with
thoughtful removal of variables, there is still a problem of testing
whether a more parsimonious model differs in the fit of the data from
its more saturated model.
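(A note added outside the original exchange: one standard way around this is to keep the robust covariance and test the restriction that the dropped coefficients are zero with a Wald test, since the usual likelihood-ratio comparison relies on the model-based variance. The sketch below does this in Python/statsmodels purely for illustration rather than in Stata, and the data and variable names are made up.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)                      # toy data, purely illustrative
df = pd.DataFrame({"x1": rng.normal(size=500),
                   "x2": rng.normal(size=500),
                   "x3": rng.normal(size=500)})
df["y"] = (rng.uniform(size=500) < 1 / (1 + np.exp(-0.5 * df["x1"]))).astype(int)

# Full (more saturated) logistic model with robust sandwich standard errors
full = smf.logit("y ~ x1 + x2 + x3", data=df).fit(cov_type="HC1", disp=0)

# The parsimonious model drops x2 and x3: test that restriction with a Wald test
# built on the robust covariance instead of comparing likelihoods.
print(full.wald_test("x2 = 0, x3 = 0", use_f=False))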
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-04/msg01240.html","timestamp":"2014-04-18T20:55:47Z","content_type":null,"content_length":"7246","record_id":"<urn:uuid:8386d431-886c-425f-b68b-39c710eb1a4b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Millbrae Statistics Tutor
Find a Millbrae Statistics Tutor
...Oakland, CA Andreas was a huge help to me in preparing for my GRE. We met several times and his combination of patience and humor helped to keep me on track despite my admittedly deep-seated
math phobia. I hadn't taken a math class in over 10 years and he was able to refresh my memory of important concepts and equations.
41 Subjects: including statistics, calculus, geometry, algebra 1
...During these 6 years I have had students from different backgrounds and age groups. I started as a tutor for middle and high school students, and later tutored students at The Glendale
Community College. Upon successful tutoring experience I received the opportunity to work with undergraduate students at University of California, Irvine.
29 Subjects: including statistics, reading, calculus, geometry
...In addition, I have significant experience tutoring students in lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and differential equations,
as well as lower division physics. Teaching math and physics is exciting for me because I am passionate ...
25 Subjects: including statistics, physics, algebra 1, calculus
...My experience in Matlab covers more than 100,000 lines of code and more than 12 years. I use Matlab all the time, for my personal needs and for my clients. It is one of the preferred tools for
serious research in finance and statistics.
9 Subjects: including statistics, finance, SPSS, MATLAB
- Are you "struggling" with mathematics? - Do you want to improve your grade in math? - Do you want to take your math tests with greater confidence? - Do you want to make math more fun? OR -- -
Are you already scoring an “A” in math but want to learn even more, so that you can be fully prepared fo...
13 Subjects: including statistics, calculus, physics, algebra 2 | {"url":"http://www.purplemath.com/millbrae_ca_statistics_tutors.php","timestamp":"2014-04-18T23:48:41Z","content_type":null,"content_length":"24092","record_id":"<urn:uuid:75dd7583-7de9-4dbc-b01c-9f6b753b8dc8>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Forcing for Arbitrary First Order Theories
Forcing is a relative model construction method for models of $ZF$ as a particular first order theory using models of another first order theory (forcing companion) that in this case is the theory of
partial orders ($PO$).
Is it possible to develop forcing for other first order theories? In fact I am asking for possible relative model constructions of a first order theory without finite models like $T$ (instead of
$ZF$) using models of a "forcing companion" theory like $S$ (instead of $PO$). This model construction method produces countable models of $T$ with special properties using a given model of $T$
and a given model of $S$.
Obviously if we define a uniform "forcing companion theory" $S$ for an arbitrary first order theory $T$ then in many cases when countable models of $T$ are not too broad up to isomorphism the
"forcing theory of $T$" is just a trivial method and produces no interesting models of $T$. But in this question my focus is on first order theories with wild countable and countable models like $ZF$
and $PA$.
Any references in this framework are welcome.
For starters, see Abraham Robinson, “Forcing in model theory”. – Emil Jeřábek Feb 14 at 22:10
And also Feferman's Some applications of the notions of forcing and generic sets [Fund. Math. 56. (1964), 325–345]. – François G. Dorais♦ Feb 15 at 0:09
Searching in MO I found this related post. – Konrad Feb 15 at 0:28
see also "Forcing in a general setting" by Bowen. Abstract: The author gives a logic-free treatment of some of the basics of forcing in a topological setting. The general scheme is shown to
specialize to virtually all known varieties of forcing (set-theoretic, Robinson model-theoretic, etc.). – Mohammad Golshani Feb 16 at 5:01
2 Answers
One of the most robust ways to understand forcing is via the method of Boolean ultrapowers, and this is a purely model-theoretic construction that makes sense to undertake with any
first-order theory whatsoever. One may form the Boolean ultrapower of any graph, group, ring, field, partial order, and indeed of any structure in any first-order language whatsoever.
The general construction of the Boolean ultrapower has classical roots (due in set theory to Vopenka, developed also by Solovay, Scott and others including a very nice presentation by Bell,
but also as a purely model-theoretic construction by Mansfield and others), and a general introductory account can be found in my paper Well-founded Boolean ultrapowers as large cardinal
embeddings, written jointly with Dan Seabold.
One may consider the general class of $\mathbb{B}$-valued models in a given first-order language. Specifically, a $\mathbb{B}$-valued structure in a first-order language (not necessarily
set-theoretic) consists of a set of objects $M$, called names, and an assignment $[\![\tau=\sigma]\!]\in\mathbb{B}$ giving the Boolean value that any two names are equal, as well as the
Boolean value $[\![R(\vec \sigma)]\!]\in\mathbb{B}$ that a given relation holds at a tuple of names, such that the laws of equality hold with respect to these assignments (and one can also
handle function symbols and constants). The basic fact is that the concept of a Boolean-valued model is not particularly connected with set theory, and makes sense for models in any
first-order language.
For any such $\mathbb{B}$-valued structure, whether it is group, ring, field, partial order or model of set theory, one may collapse it to a classical structure by taking the quotient by an
arbitrary ultrafilter on $\mathbb{B}$. Specifically, if $U\subset\mathbb{B}$ is an ultrafilter (no need for any genericity), then one defines $\sigma=_U\tau$ for names just in case
$[\![\sigma=\tau]\!]\in U$. This is an equivalence relation, indeed a congruence, and one defines the structure on the resulting quotient structure $R([\sigma]_U)$ just in case
$[\![R(\sigma)]\!]\in U$, which is well-defined because the equality axioms had Boolean value one. In this way, any $\mathbb{B}$-valued structure is transformed into a classical $2$-valued
structure by the quotient.
Thus, we have a generalization of the classical ultrapower construction from ultrapowers on a power set algebra to arbitrary ultrapowers on a complete Boolean algebra, and this is known as
the Boolean ultrapower. Specifically, if you have a first-order structure $\cal M=\langle M,\ldots\rangle$ and a complete Boolean algebra $\mathbb{B}$, then consider the set of spanning
functions $f:D\to M$, where $D$ is any open dense set in $\mathbb{B}$. Define $[\![R(f)]\!]=\bigvee\{b\in\mathbb{B}\mid R(f(b))\}$, and this produces a Boolean-valued model. If
$U\subset\mathbb{B}$ is an ultrafilter in $\mathbb{B}$, define $f=_Ug$ for two spanning functions $f:D\to M$, $g:E\to M$, if $\bigvee\{b\in\mathbb{B}\mid f(b)=g(b)\}\in U$.
This is an equivalence relation, and we may consider the set of spanning functions modulo this relation, denoted $M^{\downarrow\mathbb{B}}/U$. For any relation $R$
in the language, we define the interpretation of $R$ on this structure by $R([f]_U)$ holds if and only if $\bigvee\{b\mid {\cal M}\models R(f(b))\}\in U$. One may similarly handle constants
and functions as explained in the paper. If $U\subset\mathbb{B}$ is an ultrafilter, then the corresponding Boolean ultrapower map is the map $x\mapsto[c_x]_U$, where $c_x$ is the constant
map on $\mathbb{B}$ with value $x$. This is a generalization of the ordinary ultrapower construction from ultrafilters on power set algebras to ultrafilters on arbitrary complete Boolean algebras.
The connection with forcing is that for any complete Boolean algebra $\mathbb{B}$, we may construct the $\mathbb{B}$-valued model $V^{\mathbb{B}}$, whose objects are the $\mathbb{B}$-names
with Boolean-valued truth defined in the usual forcing manner. If $U\subset\mathbb{B}$ is any ultrafilter (not necessarily generic in any sense), then one gets an elementary embedding
$j:V\to \check V_U\subset \check V_U[[\dot G]_U]\cong V^{\mathbb{B}}/U$, which is precisely the Boolean ultrapower map of $V$ into the ground model of the Boolean extension $V^{\mathbb{B}}/U$
quotiented by the ultrafilter $U$.
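(A compressed restatement of the construction, added here for quick reference and not quoted from the answer: for spanning functions $f:D\to M$, $g:E\to M$ with $D,E\subseteq\mathbb{B}$ open dense, and an ultrafilter $U\subseteq\mathbb{B}$,
$$f=_U g \iff \bigvee\{b : f(b)=g(b)\}\in U, \qquad R([f]_U) \iff \bigvee\{b : {\cal M}\models R(f(b))\}\in U,$$
and the Boolean ultrapower map $j:{\cal M}\to {\cal M}^{\downarrow\mathbb{B}}/U$ sends $x$ to $[c_x]_U$, the class of the constant function with value $x$.)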
One should think of the Boolean ultrapower construction as a generalization of the ultrapower construction, which might be thought of as an averaging method, a method of producing
homogeneous non-special models, rather than special models. – Joel David Hamkins Feb 15 at 0:42
The idea of interpreting forcing in Boolean valued version in order to generalize it to an arbitrary first order theory is very interesting. Did you construct any special model of a first
order theory using this method before? By "special" I mean something like a special group, graph, field, etc. For example can we think about producing counterexamples using this kind of
forcing for essential problems like finding a field which refutes Zilber's trichotomy conjecture in a different way from Hrushovski construction, etc? – Konrad Feb 15 at 0:42
Oh! I think I used the word "special" in an inappropriate way here. I didn't mean special models in the model theoretic sense. I meant a model with a special property as same as set
theory that we use forcing to produce models of ZF with special properties. – Konrad Feb 15 at 0:47
Thus we can interpret set theoretic forcing as a kind of ultraproduct. So the case seems a bit strange because using large cardinal assumptions one can form another form of ultraproducts
of the universe too. Is there any direct relevance between Boolean valued forcing ultraproducts and large cardinal ultraproducts of the universe here? – Konrad Feb 15 at 0:58
Well, that is exactly what my paper with Dan Seabold is about: Well-founded Boolean ultrapowers as large cardinal embeddings jdh.hamkins.org/boolean-ultrapowers. – Joel David Hamkins Feb
15 at 1:10
Adrian Mathias has been working on forcing over models of a weak fragment of set theory, of which he says:
"...we give a treatment of set forcing appropriate for working over models of a theory PROVI which may plausibly claim to be the weakest set theory supporting a smooth theory
of set forcing, and of which the minimal model is Jensen's $J_\omega$."
See his papers here: https://www.dpmms.cam.ac.uk/~ardm/
Not the answer you're looking for? Browse other questions tagged reference-request lo.logic set-theory model-theory forcing or ask your own question. | {"url":"http://mathoverflow.net/questions/157613/forcing-for-arbitrary-first-order-theories","timestamp":"2014-04-21T12:44:51Z","content_type":null,"content_length":"71617","record_id":"<urn:uuid:8d6b17b7-c322-4a65-aa25-0ac528ddda17>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Programming: Shares and Dividends
March 9th 2009, 11:58 AM #1
Mar 2009
Linear Programming: Shares and Dividends
Hi all,
I have two eqns which are related to stock and dividends which i think LP could help to solve( .. i hope).
5.5x + 2.5y = theta
67.00x + 28y = 200 000
x>0, y>0, theta>0
I derived the above two eqns from this:
Stock A is @ 67.00
Stock B is @ 28.00
Stock A pays 5.5 dollars per share annually
Stock B pays 2.5 dollars per share annually.
(ie. if i get 1 x 67.00 and 0 x 28.00, that year i'll get $5.50)
My question is how to derive the set of {x,y} that would yield a max theta?
Million thanks,
Hi all,
I have two eqns which are related to stock and dividends which i think LP could help to solve( .. i hope).
5.5x + 2.5y = theta ====> "I assume this is your profit function"
67.00x + 28y = 200 000 ====> "I don't understand where this came from"
x>0, y>0, theta>0
I derived the above two eqns from this:
Stock A is @ 67.00
Stock B is @ 28.00
Stock A pays 5.5 dollars per share annually
Stock B pays 2.5 dollars per share annually.
(ie. if i get 1 x 67.00 and 0 x 28.00, that year i'll get $5.50)
My question is how to derive the set of {x,y} that would yield a max theta?
Million thanks,
Hi Gary,
My guess is that the above equation in red should be an inequality to represent some kind of restraint, but there's not enough info to determine what that restraint is.
Do you just have $200,000 to invest? If so, the restraint would be:
$67x+28y \leq 200000$
But, even with these constraints, we end up with an unbounded region. So, we need more detail.
Hi masters
Yes, i have 200000 to invest and yes your constraint is correct
if i am correct, i think the unbounded region is the theta.
i think i am not sure how to estimate theta such that x and y are bounded.
Million thanks,
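(A later addition, not from the original posters: with the budget constraint 67x + 28y <= 200 000 and x, y >= 0, maximizing theta = 5.5x + 2.5y is a small linear program. A quick check with SciPy, used here only for illustration since the thread never mentions software, confirms that the optimum puts the whole 200 000 into stock B, because the yield 2.5/28 beats 5.5/67.)

from scipy.optimize import linprog

# maximize theta = 5.5x + 2.5y  <=>  minimize -5.5x - 2.5y
res = linprog(c=[-5.5, -2.5],
              A_ub=[[67.0, 28.0]],            # budget: 67x + 28y <= 200000
              b_ub=[200000.0],
              bounds=[(0, None), (0, None)])
x, y = res.x
print(x, y, -res.fun)                         # roughly x = 0, y = 7142.86, theta = 17857.14

(If the shares have to be whole numbers, x = 0 and y = 7142 gives theta = 17855.)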
Mar 2009 | {"url":"http://mathhelpforum.com/advanced-applied-math/77750-linear-programming-shares-dividends.html","timestamp":"2014-04-17T11:43:54Z","content_type":null,"content_length":"38252","record_id":"<urn:uuid:4fdd535c-2f63-4294-9996-dbadf0c97948>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
Domain and Range
November 18th 2012, 01:57 PM
Domain and Range
Hi, just wonderin if I could have some help with these two questions: each of them asks to find the domain and the range using calculus techniques, and as I am reveiwing this after doing this 12
weeks ago, I am a bit hazy as to what's required.
(a)f(x)=cos sqrt(e^x-1)
(b) g(x)=1/(x^2+1)
if anyone could help and explain fully, that would be greatly appreciated :)
November 18th 2012, 03:12 PM
Re: Domain and Range
(a) domain ...
$e^x - 1 \ge 0$ (why?)
$e^x \ge 1$
$x \ge 0$
range ...
$\cos(whatever)$ is between what two values?
(b) domain ...
$x^2+1 > 0$ for all $x \in \mathbb{R}$ , so what is the domain?
range ...
$0 < \frac{1}{x^2+1} < \, ?$
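(Putting skeeter's hints together; this summary is an added note and not part of the original replies.)
(a) domain: $e^x - 1 \ge 0 \iff x \ge 0$, so the domain is $[0, \infty)$. Range: $\sqrt{e^x-1}$ takes every value in $[0, \infty)$, so $\cos\sqrt{e^x-1}$ takes every value in $[-1, 1]$.
(b) domain: $x^2 + 1 > 0$ for every real $x$, so the domain is all of $\mathbb{R}$. Range: $0 < \frac{1}{x^2+1} \le 1$, with the maximum value $1$ at $x = 0$, so the range is $(0, 1]$.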
November 18th 2012, 03:27 PM
Re: Domain and Range
Basically, finding the domain means finding all the places in $\mathbb{R}$ where the function is defined. So for example:
1) Find the domain of $f(x) = \frac{1}{x-1}$. The domain is all real numbers except 1, because $f(1) = \frac{1}{1-1}$ and you cannot divide by 0.
So for your first example:
1) $\cos(\sqrt{e^x - 1})$. $\cos(x)$ is defined for all real numbers, but $\sqrt{e^x - 1}$ is real only for $e^x - 1 \geq 0$ (because the square root of a negative number is not a real number, it's a complex number).
As for the range: the range is basically all the values your function can take if you plug in x's from your domain only.
For example:
2) Find the range of $f(x) = \frac{1}{x-1}$. We said the domain of $f$ is all real numbers except 1. First take $x > 1$: for $1 < x < 2$ the denominator $x-1$ is between 0 and 1, so the function can become as large as $\infty$ and takes every value in $(1, \infty)$; for $x \geq 2$ it takes every value in $(0, 1]$. So for $x > 1$ the range is $(0, 1] \cup (1, \infty) = (0, \infty)$. Now try $x < 1$: for $0 \leq x < 1$ the denominator lies in $[-1, 0)$, so the function takes every value in $(-\infty, -1]$; and for $x < 0$ the denominator is less than $-1$, so the function takes every value in $(-1, 0)$. So the range for $x < 1$ is $(-\infty, -1] \cup (-1, 0) = (-\infty, 0)$, which means that the whole range is $(-\infty, 0) \cup (0, \infty)$: basically your function can take on every value except 0. | {"url":"http://mathhelpforum.com/calculus/207936-domain-range-print.html","timestamp":"2014-04-17T19:59:19Z","content_type":null,"content_length":"11295","record_id":"<urn:uuid:f58651de-c09b-4d53-881a-286ea4312cda>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Galois Theory, Please Help
December 10th 2005, 08:56 PM #1
Dec 2005
Galois Theory, Please Help
Let F have characteristic 0 and suppose the coefficients of the polynomial
f(x)=a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0
be in F[x] and satisfy a_n=1, a_i = a_(n-i) for all i=0,...,n
(for example f(t)=t^3-4t^2-4t+1)
Show that if f(t) is irreducible then n is even,
and if n=2k>4, then the Galois group of this polynomial (over F) cannot be isomorphic to S_n (the symmetric group).
Try to write x^{-k}f as a polynomial g in y = x + 1/x. Then consider the field generated by the roots of g and its relation to the field generated by the roots of f.
Galois theory
could you please explain a little more, i dont understand what you mean here
A reciprocal polynomial f is one for which the coefficients read the same in either direction: if f is of degree n then f(x) = x^n f(1/x). Suppose f is reciprocal of degree n = 2k. Now s = x^k.
(x+1/x)^k is also reciprocal of degree n, and hence so is f_1 = f - a_n.s. Now f_1/x is reciprocal of degree n-2 and so can be written as a power of x times a polynomial in (x+1/x). We conclude
that any reciprocal polynomial of degree n = 2k is of the form x^k.g(x+1/x) where g is of degree k.
Now let K be the base field and consider the extension L of K by all the roots alpha_i of g: this is Galois with group a subgroup of S_k: the degree [L:K] <= k!. The extension F of L by the roots
of all the x+1/x = alpha_i is a composite of k quadratic extensions so [F:L] <= 2^k. So [F:K] <= 2^k.k! and this cannot be n!. Hence Gal(F/K) cannot be s_n.
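(An added note, not from the original posters, to spell out the last counting step: |Gal(F/K)| = [F:K] <= 2^k.k!, and for k >= 3 we have 2^k.k! < (2k)! = n!, since (2k)!/k! = (k+1)(k+2)...(2k) is a product of k factors each bigger than 2. So the Galois group is too small to be all of S_n.)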
December 11th 2005, 10:34 PM #4 | {"url":"http://mathhelpforum.com/advanced-math-topics/1443-galois-theory-please-help.html","timestamp":"2014-04-21T04:41:19Z","content_type":null,"content_length":"36640","record_id":"<urn:uuid:adf3d423-7da0-40cd-9bde-8200d59bbf2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
punny title
Many mathematics titles (not to mention the names for notions) like to involve a game of double meanings, repetition, wit, connotations, seeming absurdity, self-referencing, self-mockery and the
like. I propose creating a list of some known examples of such titles in areas of our interest. Maybe I should have titled this page Title of titles to keep the spirit on.
In that vein, I just introduced the term ‘punny’, although it is a cliché.
• Pavol Ševera, Some title containing the words “homotopy” and “symplectic”, e.g. this one, arXiv/math.SG/0105080
• A. Grothendieck, Hodge’s general conjecture is false for trivial reasons, Topology 8: 299–303 (1969).
• Arne Strøm, The homotopy category is a homotopy category, Arch. Math. (Basel) 23 (1972), 435–441.
• V. A. Hinich, V. V. Schechtman, On homotopy limit of homotopy algebras, $K$-theory, arithmetic and geometry (Moscow, 1984–1986), 240–264, Lecture Notes in Math., 1289, Springer, Berlin, 1987.
• V. Hinich, Homological algebra of homotopy algebras, Comm. Algebra 25 (1997), no. 10, 3291–3323.
• C. Rezk, A model for the homotopy theory of homotopy theories, Trans. Amer. Math. Soc. 353 (2001), no. 3, 973–1007, doi.
• J. Bergner, Three models for the homotopy theory of homotopy theories,
• P. Balmer, G. Tabuada, The Mother of all isomorphism conjectures via DG categories and derivators, arXiv:0810.2099
• J. P. Serre, Gèbres (in French; Engl. Gebras) Enseign. Math. (2) 39 (1993), no. 1-2, 33–85.
• Sasha Beilinson, Determinant gerbils, a talk at MPI Bonn. (A gerbil is a kind of animal, the pun is on gerbes.)
Several more candidates of similar character are listed in the responses to MO question most-memorable-titles.
Revised on August 23, 2011 23:51:00 by
Zoran Škoda | {"url":"http://ncatlab.org/nlab/show/punny+title","timestamp":"2014-04-17T00:52:32Z","content_type":null,"content_length":"13996","record_id":"<urn:uuid:b9ac9d73-1517-4ac8-a398-b75cc89ca450>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comparisons between the Wake of a Wind Turbine Generator Operated at Optimal Tip Speed Ratio and the Wake of a Stationary Disk
Modelling and Simulation in Engineering
Volume 2011 (2011), Article ID 749421, 7 pages
Research Article
Comparisons between the Wake of a Wind Turbine Generator Operated at Optimal Tip Speed Ratio and the Wake of a Stationary Disk
Research Institute for Applied Mechanics, Kyushu University, 6-1 Kasugakoen, Kasuga, Fukuoka 816-8580, Japan
Received 11 November 2010; Accepted 22 February 2011
Academic Editor: Guan Yeoh
Copyright © 2011 Takanori Uchida et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
The wake of a wind turbine generator (WTG) operated at the optimal tip speed ratio is compared to the wake of a WTG with its rotor replaced by a stationary disk. Numerical simulations are conducted
with a large eddy simulation (LES) model using a nonuniform staggered Cartesian grid. The results from the numerical simulations are compared to those from wind-tunnel experiments. The
characteristics of the wake of the stationary disk are significantly different from those of the WTG. The velocity deficit at a downstream distance of 10D (D: rotor diameter) behind the WTG is
approximately 30 to 40% of the inflow velocity. In contrast, flow separation is observed immediately behind the stationary disk, and the velocity deficit in the far wake of the stationary disk
is smaller than that of the WTG.
1. Introduction
As a countermeasure against global warming, a substantial reduction in CO[2] emissions has become an urgent issue. Accordingly, the effective use of wind power energy is attracting attention as a
clean and environmentally friendly solution. In Japan, the number of wind power generation facilities has been rapidly increasing to achieve the goal of 300 million kW of wind-generated energy in
2010. These wind power generation facilities range from those with a few wind turbine generators (WTG) to large wind farms (WF) with dozens of WTGs.
Given this background, we have developed RIAM-COMPACT (Research Institute for Applied Mechanics, Kyushu University, Computational Prediction of Airflow over Complex Terrain), a nonstationary,
nonlinear wind synopsis simulator that is capable of predicting the optimum sites for wind turbine construction to the pin-point level within a target area of a few km or less [1]. RIAM-COMPACT can
also estimate the annual energy generation and the utilized capacity of a proposed WTG with observational data. To model the wind field, RIAM-COMPACT has adopted a large-eddy simulation (LES)
To achieve improved accuracy of the simulation result, a continuous effort has been made in the research and development of RIAM-COMPACT. As a part of this effort, a wake model of high accuracy is
currently under development to evaluate the influence of the mutual interference between WTGs. The wake model will be able to determine an appropriate separation distance between WTGs to avoid the
reduction of energy generation at a wind farm as a whole due to the mutual interference between WTGs at the farm. Development of such a wake model is necessary for the effective planning of wind
farms, especially in countries with limited flat areas, including Japan, in which large WTGs are constructed in high concentrations.
When multiple WTGs are installed at a site, the following empirical values are generally considered appropriate for the separation distance between two WTGs: approximately ten-times the WTG rotor
diameter in the streamwise direction and three times the WTG rotor diameter in the spanwise direction. A few wind-tunnel and field experiments have been conducted to study the wake flows of WTGs [2,
3]. However, the characteristics of the wake flow have not been sufficiently investigated.
The present study examines the following two characteristics of the wake flow which will enable the construction of an accurate wake model. First, the mean wind velocity deficit is evaluated at the
above-mentioned downstream distance of ten-times the rotor diameter of a single WTG. Second, the characteristics of the wake flow of a single WTG are compared to those of a stationary disk, which has
been used as the basis of existing wake models. In both cases, the wake flow of the WTG operated at the optimal tip speed ratio (tip speed ratio at the maximum power output) is considered. The
investigations are conducted with an LES model that uses a nonuniform staggered Cartesian grid system. The results from the numerical simulations are compared to those from wind-tunnel experiments.
2. Numerical Simulation Technique and Results
2.1. Numerical Simulation Technique
LES simulations are conducted using a staggered Cartesian grid with variable grid spacing. The finite difference method (FDM) is adopted for the computational technique. For the subgrid scale (SGS)
model, the mixed-time scale model [4] is utilized. The mixed-time scale model is characterized by a high degree of computational stability and does not require a near-wall damping function. For
explicit filtering, Simpson’s rule is applied. For the pressure-velocity coupling algorithm, the fractional-step (FS) method based on the first-order explicit Euler method [5] is used. The Poisson’s
equation for pressure is solved by the successive overrelaxation (SOR) method. For discretization of all the spatial terms except for the convective term, a second-order central difference scheme is
applied. The convective term is discretized by a third-order upwind differencing scheme, which consists of a fourth-order central differencing term based on the interpolation technique of Kajishima [
6] and a numerical dispersion term in the form of the fourth derivative. A weighting value of 3.0 is normally applied as the coefficient of the numerical dispersion term in the third-order upwind
differencing scheme proposed by Kawamura and Kuwahara (the Kawamura-Kuwahara Scheme) [7]. However, the coefficient is set to 0.5 in the present study to minimize the influence of numerical
2.2. Modeling of the WTG
Figure 1 shows the small-scale WTG investigated in the wind-tunnel experiments. The blades of the WTG model are MEL airfoils [8] with increased thickness. The performance curve of the WTG determined
from the wind-tunnel experiments shows that the optimal tip speed ratio of the present WTG is 4 (see arrow in Figure 2). To recreate the conditions of the wind-tunnel experiments in the numerical
simulations, the configurations of the spinner, nacelle, and tower were reconstructed with a rectangular grid approximation (Figure 4). To model the rotation of the WTG rotor, an actuator-disc
approach based on blade element theory [9, 10] is applied. In the actuator-disc approach, the tangential and thrust forces generated by the rotating blade are added to the Navier-Stokes equations as
external terms. These external terms represent the reaction forces exerted on the fluid in the direction of the streamwise flow and the rotation. Thus, no wall boundary condition exists for the rotor
as an object. In the present study, because a nonuniform staggered Cartesian grid system has been adopted, the component of the force in the direction of rotation is decomposed into the spanwise and
vertical directions. In addition to the investigation of the decelerating effect of the WTG simply as a drag-inducing body, the adopted modeling approach allows the investigation of the effect of the
blade rotation on the airflow, which is considered a major benefit of the adopted modeling approach. Furthermore, the model is designed so that it allows the user to simulate wake flows of various
WTGs by inputting only the data of the blade chord length, the lift coefficient, the drag coefficient, and the angle of attack as a function of the distance from the center of the rotor. With the
developed model, airflow past the entire WTG, including the tower, is simulated for a tip speed ratio of 4, the optimal tip speed ratio.
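As a rough illustration of the blade-element bookkeeping that such an actuator-disc model relies on (this sketch is not code from RIAM-COMPACT; the simplified kinematics with no induction corrections and all variable names are assumptions made here), the axial and tangential force components per unit span follow from the chord length, the lift and drag coefficients, and the local inflow angle:

import numpy as np

def blade_element_forces(r, chord, cl, cd, u_inf, omega, rho=1.225):
    # Relative velocity seen by the blade section at radius r
    w = np.hypot(u_inf, omega * r)
    # Inflow angle between the relative velocity and the rotor plane
    phi = np.arctan2(u_inf, omega * r)
    # Lift and drag per unit span from the tabulated cl(r), cd(r)
    q = 0.5 * rho * w**2 * chord
    lift, drag = q * cl, q * cd
    # Project onto the streamwise (thrust) and rotational directions;
    # these are the reaction forces added to the momentum equations.
    f_axial = lift * np.cos(phi) + drag * np.sin(phi)
    f_tangential = lift * np.sin(phi) - drag * np.cos(phi)
    return f_axial, f_tangential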
2.3. Computational Domain and Conditions
Numerical simulations are conducted for (1) airflow past the WTG operated at the optimal tip speed ratio and (2) airflow past a WTG for which the rotor has been replaced by a stationary disk with a
diameter identical to that of the rotor. The WTG with a stationary disk will be referred to simply as the stationary disk hereafter. The computational domain and boundary conditions applied in the
simulations are summarized in Figure 3. The dimensions of the computational domain are (streamwise () × spanwise () × vertical ()), where is the rotor diameter. The computational domain consists of
181 × 171 × 161 grid points (approximately 5 million grid points) in the , , and directions, respectively. Sufficiently high grid resolution is provided around the WTG to analyze the flow field past
the entire WTG, including the spinner, nacelle, and tower (). The same set of boundary conditions is applied for the two simulations except for those of the wind velocity at the rotor; for the case
with the stationary disk, the wind velocity is set to zero at all grid points on the surface and in the interior of the disk. In both cases, the wind velocity is set to zero at all grid points on the
surface and in the interior of the spinner, nacelle, and tower. As for the boundary conditions for pressure, Neumann boundary conditions are applied at all surfaces. The Reynolds number of the flow
based on the uniform inflow wind speed, , and the rotor diameter, , that is, , is in the present study. For simulations, a time step of () is used.
2.4. Computational Results and Discussion
Figures 5 and 6 are contour plots of the streamwise () wind velocity, , in the vicinity of the WTG operated at the optimal tip speed ratio and in the vicinity of the stationary disk, respectively.
The former figure shows a cross-section of the streamwise () wind velocity field viewed from top at , while the latter figure shows the same wind field viewed from the side at . In these figures,
thirty equally spaced contour intervals are shown between and , and the entire computational domain is shown. The two figures suggest that the characteristics of the wake flow differ significantly
between the WTG and the stationary disk. In the case of the WTG, the wake width is approximately the same as the rotor diameter, and undulating motions are observed in the wake at downstream
distances larger than approximately as indicated in Figures 5 and 6. Furthermore, formation of strong tip vortices is evident at the blade tips of the WTG. It can be speculated that these vortices
suppress the momentum exchange between the wake and its surrounding flow, and as a result, the wake width near the WTG is reduced approximately to the rotor diameter. However, at a downstream
distance of approximately , the effect of the tip vortices becomes sufficiently weak for shear instability to cause undulations in the wake flow. In other words, the influence of the tip vortices
generated at the optimal tip speed ratio extends to a downstream distance of at least .
In contrast, the characteristics of the wake flow behind the stationary disk are complex. A large area of reverse flow is formed immediately behind the stationary disk. As a result of the flow
curling around the edge of the disk, the magnitude of the spanwise component of the flow behind the stationary disk is significantly larger than that behind the WTG (not shown). From the area of
reverse flow, large vortices are shed periodically. The Strouhal number of the vortex shedding and the structure of the shed vortices behind the stationary disk are topics of high interest in fluid
dynamics and will be investigated in future research. Figures 7 and 8 are enlarged views of the flow near the WTG and disk in Figures 5 and 6, respectively, and also include the dynamic pressure
field on the spinner, nacelle, and tower. The enlarged views provide confirmation of the above-mentioned differences between the wake flow around the WTG and that around the stationary disk. The same
figures show vortex shedding from the individual components of the WTG, that is, spinner, nacelle, and tower, and the resulting formation of flow separation behind these components. Although not
shown due to space limitations, a Karman vortex street was observed downstream of the WTG. These results suggest that analyses of the airflows simulated around an entire WTG rather than those around
the individual components of a WTG are necessary in future discussions of vibrations of a WTG and the ability of a WTG to withstand high winds. To address these issues, the computational technique
used in the present study may serve as an effective tool because it is user friendly and computationally inexpensive.
Figure 9 illustrates the streamlines of virtual fluid particles in the wake flows of the WTG operated at the optimal tip speed ratio and the stationary disk. In the case of the WTG (Figure 9(a)), a
spiral flow forms as a result of the blade rotation. In contrast, in the case of the stationary disk (Figure 9(b)), complex three-dimensional streamlines are observed. These results confirm the
presence of complex turbulent flow behind the stationary disk as shown in Figures 5–8.
A significant difference between the wake flow of the WTG and that of the stationary disk is also evident in the time-averaged fields of the streamwise () component of the flow velocity (Figures 10
and 11). The time duration for obtaining the time-averaged streamwise velocity is (). These figures show the presence of a large region of reverse flow immediately behind the stationary disk;
however, at a downstream distance of , the streamwise velocity deficit in the wake behind the disk is smaller than that behind the WTG.
To investigate this finding quantitatively, spanwise and vertical profiles of the time-averaged streamwise velocity are calculated (Figures 12 and 13). The profiles from the wind-tunnel experiments
in these figures were obtained with an I-type hot wire probe. As for the spanwise profiles, the simulation (solid lines) and the experimental results (symbols) agree well for both the WTG and the
stationary disk except for the profile at in the wake of the stationary disk. The deviation of the spanwise profile in this case is attributable to the use of an I-type probe for the airflow
measurement. The streamwise velocity deficits at a downstream distance of behind the center of the rotor are approximately 30–40% of the inflow velocity for the WTG. These quantitative results
confirm that the streamwise velocity deficit at a downstream distance of in the wake behind the disk is smaller than that behind the WTG. The large value of the velocity deficit behind the WTG is
likely due to the tip vortices, the effect of which extends to a downstream distance of .
3. Conclusion
The present study investigated the characteristics of the wake flow behind a single wind turbine generator (WTG) operated at the optimal tip speed ratio. Even at a downstream distance of as large as
ten-times the rotor diameter of a single WTG, the wind velocity behind the center of the rotor was approximately 30–40% of the inflow wind velocity. This large value of the wind velocity deficit was
likely attributable to the tip vortices formed at the blade tips, which suppressed the momentum exchange between the wake flow of the WTG and the surrounding flow.
The wake of the WTG was also compared to that of a WTG with the rotor replaced by a stationary disk because existing wake models are based on the wake of a stationary disk. The diameter of the
stationary disk was set equal to that of the rotor of the WTG. The wake flow of the WTG and that of the stationary disk were significantly different from each other. The wake flow of the stationary
disk was characterized by a large area of reverse flow immediately behind the disk. However, at a downstream distance of , the streamwise velocity deficit in the wake behind the disk was smaller than
that behind the WTG.
Our future research topics include investigations of the effects of the inflow turbulence on a WTG and turbulence distributions in the wake of a WTG. Numerical simulations of wake flows behind
multiple WTGs are also planned as a future project.
1. T. Uchida and Y. Ohya, “Micro-siting technique for wind turbine generators by using large-eddy simulation,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 96, no. 10-11, pp.
2121–2138, 2008. View at Publisher · View at Google Scholar · View at Scopus
2. L. P. Chamorro and F. Porté-Agel, “Effects of thermal stability and incoming boundary-layer flow characteristics on wind-turbine wakes: a wind-tunnel study,” Boundary-Layer Meteorology, vol. 136,
no. 3, pp. 515–533, 2010. View at Publisher · View at Google Scholar · View at Scopus
3. Y. Käsler, S. Rahm, R. Simmet, and M. Kühn, “Wake measurements of a multi-MW wind turbine with coherent long-range pulsed doppler wind lidar,” Journal of Atmospheric and Oceanic Technology, vol.
27, no. 9, pp. 1529–1532, 2010. View at Publisher · View at Google Scholar
4. M. Inagaki, T. Kondoh, and Y. Nagano, “A mixed-time-scale SGS model with fixed model-parameters for practical LES,” Journal of Fluids Engineering, Transactions of the ASME, vol. 127, no. 1, pp.
1–13, 2005. View at Publisher · View at Google Scholar · View at Scopus
5. J. Kim and P. Moin, “Application of a fractional-step method to incompressible Navier-Stokes equations,” Journal of Computational Physics, vol. 59, no. 2, pp. 308–323, 1985. View at Scopus
6. T. Kajishima, “Finite-difference method for convective terms using non-uniform grid,” Transactions of the Japan Society of Mechanical Engineers, Part B, vol. 65, no. 633, pp. 1607–1612, 1999.
View at Scopus
7. T. Kawamura, H. Takami, and K. Kuwahara, “Computation of high Reynolds number flow around a circular cylinder with surface roughness,” Fluid Dynamics Research, vol. 1, no. 2, pp. 145–162, 1986.
View at Scopus
8. J. N. Sørensen and A. Myken, “Unsteady actuator disc model for horizontal axis wind turbines,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 39, no. 1–3, pp. 139–149, 1992. View
at Scopus
9. H. Snel, “Review of the present status of rotor aerodynamics,” Wind Energy, vol. 1, pp. 46–69, 1998. | {"url":"http://www.hindawi.com/journals/mse/2011/749421/","timestamp":"2014-04-19T18:11:36Z","content_type":null,"content_length":"85648","record_id":"<urn:uuid:f07311f3-aa68-4266-aa27-cd72771451b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
The color of clutch
I hope this is the last time I need to write an article devoted to clutch hitting.
The data
There have been many attempts to find evidence of clutch hitting. All of these attempts focus on the same basic principle: compare a player’s performance in timely situations to his overall
performance, and determine if that difference is more than expected from random. This has been done in the following ways:
• correlations of career performances in odd years to performances in even years
• year-to-year correlations
• distribution of differences compared to the binomial
In every case, the result is the same: yes, clutch hitting exists. There is no question: clutch hitting does exist. Indeed, as long as you make humans the central participants in contexts that change
wildly, it will be a foregone conclusion that the results will not be completely random from our expectation of those participants. Therefore, that we find the existence of clutch hitting is not
terribly exciting. It is expected. However, we haven’t established the degree to which it exists, nor have we established the likelihood that we can even find the thing that we know exists.
The test of clutch hitting with the most clarity for illustrative purposes was produced by Nate Silver in Between The Numbers (p. 29), using a method popularized by Keith Woolner: for each player,
compare the gap in performances in clutch and non-clutch situations, and total it based on odd years and even years. The idea is that the average gap in the odd years should be roughly the same as
the average gap in the even years, for each player. This method does a nice job of removing the age and aging bias. The result is a correlation of r=0.33. The number of PA required in the sample was
a minimum of 2500 for each set of even and odd years. We can estimate the average size of each set to be PA = 3500. In order to get a correlation of r=0.33, with trials=3500, we can produce this equation:
r = PA / (PA + 7000)
This equation means that if you had 7000 PA in each sample, you would get a sample-to-sample correlation of r=.50. If you had PA=3500, then the correlation would be r=.33. For purposes of
ballplayers, we usually just focus on a few years. After all, it doesn’t help us to know if Bobby Abreu is a clutch hitter at age 35. We want to know this early on. Realistically, you would want to
compare a two-year sample to another two-year sample. That would mean each sample would have some 1000 or 1200 PA. And using our equation above, this would mean we’d get an r=.15.
What does this mean? Well, whatever results your analysis shows as to how much clutch the sample shows, our best estimate of the true rate would be 15 percent of the sample rate. So, if you have
figured out that someone has a sample of +13 clutch runs per 600 PA in the clutch (and that is a very very high figure), the regressed value would yield a +2 runs estimate as our true clutch talent.
Other attempts, as documented in a chapter written by Andy Dolphin in The Book and on my site, yield a similar 2 run estimate. My equation was:
r = clutchPAs / (clutchPAs + 1250)
And since clutchPAs is 20% of a player’s total PAs, this equation is the same as:
r = PA / (PA + 6250)
For all intents and purposes, this equation is an almost perfect match to the equation derived from Woolner/Silver. Basically, if you want to find a player’s clutch talent level, you cannot look at
his clutch numbers. The sample size simply cannot give you the certainty we need. Clearly, we need to get our noses out of our spreadsheets and watch a game.
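Here is the same shrinkage spelled out in a few lines of code. The constant and the example numbers are the ones above; the tiny function itself is just an illustration, not something from the original analysis.

def regressed_clutch(observed_runs, pa, k=6250):
    # r = PA / (PA + k) is the weight given to the observed sample;
    # k = 6250 is the constant derived above for total PA.
    r = pa / (pa + k)
    return r * observed_runs

# two seasons of a regular: roughly 1100 PA, an extreme +13 observed clutch runs
print(regressed_clutch(13, 1100))   # about +2 runs of true clutch talent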
Watching a game
Last year, I proposed The Great Clutch Project, which reads in part:
Certainly, we can and should accept that Clutch exists in some form and to some extent—not everything that happens is random variation spinning around a constant centered mean. Even so, there is
a limit to how much a clutch skill can change your mean center point. No amount of Clutch will make anyone want to choose Marco Scutaro over Alex Rodriguez. Even if Scutaro is the clutchiest
player ever, and A-Rod is the biggest choker ever, when a manager has A-Rod on deck and Scutaro on the bench, he is not going to call back A-Rod to put in Scutaro. It simply won’t happen.
So, even if we grant that the clutch skill exists, its practicality is limited to the extent that it can exist. No one believes that the clutch skill is big enough that he would really choose
Scutaro over Rodriguez. Jeter over Rodriguez, though? Maybe.
So, the questions are: How big is the clutch skill; and, in practical purposes, how far can Clutch vault a player over a better hitter who doesn’t have as much?
Realizing that the numbers are of no help to me in determining who is a clutch hitter, I instead turned to the fan. After all, it is the fan that most believes in clutch hitting, and it is the fan
who knows a clutch hitter when he sees one. So the project started:
The first task is to find such pairs of hitters for each team. It wasn’t easy. I polled the blogosphere and ended up with over 2,200 votes.
The fans on each team ended up picking a clutch hitter (best exemplified by Jeter, Dustin Pedroia and Placido Polanco), while I picked strictly by the numbers (Rodriguez, JD Drew, Curtis Granderson).
I ended up with 36 Clutch players as voted by the Fans, and 36 better overall and less clutchy players, as selected by a forecasting system. Obvious picks that both sides wanted (e.g., Albert Pujols,
Vladimir Guerrero, Chipper Jones) were discarded. The forecasting system estimated that, clutch aside, my hitters were .020 wOBA points better than those that the Fans selected. And so, we ended up
So, much like Ginger, my hitters have a sizeable advantage. You might think this is not fair, but in each and every case, the Fans preferred their choice to mine. It’s their bed, people. Except
that, the Fans’ picks have some intangible quality, like Mary-Ann possesses. And the Fans believe that this intangible quality, this clutch factor, is enough to propel their picks to be at least
equal to, if not better than, my picks when the game is on the line.
We have a situation here where both sides agree that, overall, my hitters are better. But, even given that, the Fans decided that their pick would perform better in clutch situations. (A clutch
situation is where the Leverage Index is at least 2.0, which occurs roughly 10 percent of the time.)
The results
I called on David Appelman at Fangraphs to track the results for me. And he very generously did. First, let’s see how both groups did overall. My hitters had an 11 point advantage in OBP and 46 point
advantage in SLG. Clearly my guys produced better, overall. In wOBA speak, this is roughly a 21 point advantage for my players. Indeed, this is pretty much exactly what the forecasting system
expected. That is, before the season started, the forecasting system expected my guys to hit 20 points better than the Fans’ clutch players, overall. And they did.
But, how did both groups do with the game on the line? First thing I noticed is that my guys got alot of IBB. In order to be fair, I removed IBB from consideration when looking at OBP. So the results
are as follows: my guys had a six point advantage in OBP and a 27 point advantage in SLG. In wOBA-speak, that translates to around a 12 point advantage for my team over the Fans’ team. So, I think we
can say that, yes, the Fans did have some insight into picking clutch players, but it was nowhere near enough to overcome the talent gap I started with. That is, while we can accept that “Fans know
clutch”, they don’t know the extent of clutch. That extent is roughly 10 wOBA points (which is 10 OBP points and roughly 15 SLG points).
Is that a big deal? Well, it’s less than the platoon advantage, which is 20 wOBA points. So, when you give consideration to wanting a clutch hitter at-bat, you have to temper your enthusiasm with the
understanding that that clutch skill is less than if you had a similar batter with the platoon advantage. No one is going to select Marco Scutaro over Alex Rodriguez. The two players must be pretty
close to begin with in talent, before you go off having a preference for your clutch hitter over someone who is otherwise a better hitter.
Fan bias
One thing that was interesting is the kinds of playes Fans considered clutch. Overall, both our teams had a bit over 19,000 PA. Both had around 970 doubles and 80 triples. But my guys had almost 300
more homeruns, and 600 fewer singles. My guys had 500 more walks and 1000 more strikeouts. As I noted in the summary to this project on my blog:
The guys they selected as clutch put the ball in play (excludes HR) 76 percent of the time, compared to my great hitters of 67 percent, in all situations. Those numbers dropped 2 percent points
for both groups in clutch situations.
The selection criteria by the fans on this basis was nine standard deviations from the mean, showing a fantastically clear bias in this regard.
It’s very possible that to a fan, clutch is all about doing what Carlos Beltran didn’t do in his last at-bat against the Cards, when he took strike three.
The Fans have a clear bias as to what they think is clutch: put the g-dd-mn bat on the g-dd-mn ball. This bias is best exemplified by Reds fans, as I noted before the season started:
The Reds Fans detest their best hitter (Adam Dunn) so much that they actually selected four different hitters ahead of him. Every time I would check the results, a new leader would emerge. Ken
Griffey Jr., Scott Hatteberg and Brandon Phillips each would have made a fine choice, but the task will be taken up by Edwin Encarnacion. (And Javy Valentin was just behind Dunn in fan
In the end, the Fans’ bias is the main insight we gain from this project. The other insight is that the extent of perceived clutch does not match the reality of the impact of clutch. The Fans wanted
their clutch hitters batting, even if they were 20 points worse than my hitters. And they lost. But, they didn’t lose by 20 points, just by 10 points. Color me somewhat impressed.
Technical sticklers
For you party poopers, one standard deviation given 1900 PA is 12 wOBA points. So, the observed 10 point clutch skill that the Fans perceived won’t pass any statistical significance tests. The
expectation is that if I were to rerun this project for the 2009 season (which I won’t), is that the Fans would not be so lucky. But, let’s not let this technical detail get in the way of the partial
win for the Fans.
Let’s let this clutch debate end today (please?), and simply agree that: a) yes, clutch exists, b) yes, fans can perceive clutch players, but c) the impact of clutch players is limited to less than
the platoon advantage. | {"url":"http://www.hardballtimes.com/the-color-of-clutch/","timestamp":"2014-04-20T05:51:42Z","content_type":null,"content_length":"52166","record_id":"<urn:uuid:cac69319-9198-46b3-b3f1-94aef1516c72>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solve the Pattern
?,?,13,?,?,?,?,?,29,?,19,38,11,47,59,? Modulus 64 should be involved. What are the missing numbers of the sequence, and how is the sequence derived?
Hey pitfer. There are ways to derive sequences (like recurrence relations), but one of the downfalls of this approach is that you can fit an infinite number of sequences to a finite list, just like
you can pick an infinite number of polynomials that have the same roots (for the polynomial case, you just change the multiplicity of the roots). For this reason, even if you did get a sequence, it would
be only one of many. Although I am not an expert in the subject, I do know that there are techniques and algorithms to find sequence definitions but as I said above, I don't know if they will really
help you all that much due to the nature that is sequences where you only have a small subset of the entire sequence. | {"url":"http://mathhelpforum.com/number-theory/210284-solve-pattern-print.html","timestamp":"2014-04-16T04:28:13Z","content_type":null,"content_length":"4137","record_id":"<urn:uuid:0935ec78-0567-4e56-bf48-97170c7ab3a8>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |