content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Infinite Integration of Fick's Second Law
Hi everyone!
Recently, I've been trying to understand how the error function pertains to solving for concentration in a non-steady state case (with a constant diffusivity D), but I've been having some trouble
with the initial assumptions. The source I am currently using (Crank's The Mathematics of Diffusion) claims that, for a the case of a plane source,
C = A/sqrt(t) * exp(-(x^2)/4Dt)
Where C is the concentration (with respect to position and time), x is the position (assuming one dimension only), t is the time, and A is an arbitrary constant, which is a solution for Fick's Second
Law (dC/dt = D (d2C/dx2)). Crank (as well another source I've been using <http://www.eng.utah.edu/~lzang/images/lecture-4.pdf>) claim that this is solvable by integrating Fick's Second Law, but
whether I am making a mistake or otherwise not understanding the concept, I can't seem to get this result to work. Could someone help me with this, either by providing the math, or a source which has
this derivation? Thanks again.
Substitute [itex]C(x,t)=\frac{A}{\sqrt{t}}f(\eta)[/itex] into the partial differential equation for Fick's second law, where
By doing this, the partial differential equation should reduce to an ordinary differential equation to solve for f and a function of [itex]\eta[/itex]. This yields a so-called similarity solution.
I think a better book to use than Crank would be Transport Phenomena by Bird, Stewart, and Lightfoot. You may have to look in the chapters on heat transfer, since diffusion problems using Ficks
second law are mathematical analogs of unsteady state conductive heat transfer problems.
|
{"url":"http://www.physicsforums.com/showthread.php?t=701470","timestamp":"2014-04-16T04:36:07Z","content_type":null,"content_length":"25310","record_id":"<urn:uuid:95d19aba-b075-4488-893c-08dca223cc6e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Getting started with XNA – Animating a 3D Model
In the previous tutorial, we’ve loaded and displayed a 3d model. But can you think of a game with static 3d models? No, it’s time we start animating our model. Most of the time, a 3d artist will
handle the animations, and you’ll just load it and play it. But animations aren’t supported out of the box in XNA. There is a demo on creators club which allows you to load skinned animations, and
there is a project on codeplex, called the XNA Animation Component Library, which allows you to load skinned and unskinned animations, with multiple meshes, and so on. But in this tutorial, we’ll
animate our model from code. We’ll do a tutorial on the XNA Animation Component Library later.
Start by loading the end result of our previous tutorial. We’ll take that model, move the forks of the forklift up and down, and rotate our entire model. You’ll see that it’s relatively easy. Start
by opening up our Forklift class, since that will be the only class we’ll be editing.
Add the following variable:
// Store the original transform matrix for each animating bone.
Matrix _forksTransform;
In this variable, we'll store the original transform matrix of our bone that we're going to animate. So in the LoadGraphicsContent method, initialize that variable.
// Store the original transform matrix for each animating bone.
_forksTransform = _model.Bones["vorken"].Transform;
Since we're doing an animation, we'll have to calculate the position of the forks and the position of the forklift every frame. For that, the XNA Framework provides us with a method just for that.
public override void Update(GameTime gameTime)
Before the base.Update(gameTime); call, we'll calculate the position of our forks. The calculation below might seem a bit excessive, but we want our forks to move up and down relative to our mast, so
the forks don't go through them. The easy way to calculate this is using the sine and cosine of the angle. In this case, the angle is 3.18 degrees (you can see this in 3d Studio Max, when you design
your model).
// Calculate the new position of the forks.
float time = (float)gameTime.TotalGameTime.TotalSeconds;
double angle = MathHelper.ToRadians(3.18f);
float sine = MathHelper.Clamp((float)(Math.Sin(angle) * Math.Sin(time)), 0, 1) * 100f;
float cosine = MathHelper.Clamp((float)(Math.Cos(angle) * Math.Sin(time)), 0, 1) * 100f;
// Apply the transform to the Bone
_model.Bones["vorken"].Transform = Matrix.CreateTranslation((float)(sine), 0, (float)(cosine)) * _forksTransform;
First off all, we calculate the new position of our forks, relative to the mast and to the time. We clamp the sine and cosine so they aren't negative. Then we apply the translation (movement) to our
bone. If you press F5 now, you'll already see the forks move relative to the mast.
Now, that which we can do for a bone, we can do for an entire model. Let's rotate our model while our forks are moving. Add the following code just below the previous code.
// Set the world matrix as the root transform of the model.
_model.Root.Transform = Matrix.CreateRotationY(time * 0.5f);
That's it, we've animated our model. See how easy that was? Next up, we'll add a first person camera, so we can move through the scene.
As usual, you can download the source for this tutorial.
8 thoughts on “Getting started with XNA – Animating a 3D Model”
1. Pingback: 3D Game Programming » Getting started with XNA - First Person Camera - XNA News and Tutorials - DirectX News and Tutorials
2. I think there is a tiny typo at the end
_model.Root.Transform = Matrix.CreateRotationY(time * 0.5f);;
I did not have a problem with it, but for all the blind copy/pasters out there
3. No bad, the last ; is just an empty statement. But I’ve updated it just in case
4. this tutorial is fantastic. It definitely helped me get my head around 3d modelling. I bought a book on using xna by o’reilly and it doesnt cover any of this, only how to load the model and
rotate the whole thing, nothing on animating individual parts of it.
5. This tutorial definitely seems to be helpful, but there’s one thing I don’t understand.
About the 3.14 angle… This angle is between exactly what and what? I assume it’s between the fork and the mast, but what parts of the fork and mast, along which axis?
6. Hi,
I’m deeply intrested in examining full source code of this tutorial you have provided, but asai see it’s gone from the server. Is it anyhow possible to get it? It’s probably something I’ve been
searching for for quite a long time.
Best Regards
7. Hy,
The source cod download link dont work. Pleace email source cod!!!
8. Hi,
Gr8 post. But source code is missing. can you please email the source code to me also?
|
{"url":"http://www.3dgameprogramming.net/2007/07/09/getting-started-with-xna-animating-a-3d-model/","timestamp":"2014-04-19T14:30:25Z","content_type":null,"content_length":"28556","record_id":"<urn:uuid:9b94048c-190a-4a3e-9811-f950d6d6e031>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Characteristic Function of Product of Random Variables
May 13th 2008, 06:34 PM
Characteristic Function of Product of Random Variables
I am trying to calculate the characteristic function of X*Y where both X and Y are standard normal variables with correlation coefficient p. I have succeeded in the case where p = 0 using the law
of iterated expectations E{X} = E{E{X|Y}} but have failed in this case.
Thanks for taking the time.
April 1st 2009, 03:49 PM
mr fantastic
I am trying to calculate the characteristic function of X*Y where both X and Y are standard normal variables with correlation coefficient p. I have succeeded in the case where p = 0 using the law
of iterated expectations E{X} = E{E{X|Y}} but have failed in this case.
Thanks for taking the time.
See here: http://www.mathhelpforum.com/math-he...variables.html
|
{"url":"http://mathhelpforum.com/advanced-statistics/38271-characteristic-function-product-random-variables-print.html","timestamp":"2014-04-23T14:03:16Z","content_type":null,"content_length":"4797","record_id":"<urn:uuid:f4e1e902-90bb-4057-8e4e-e3d18177acbe>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
what is the inverse of the following statement: if he speaks arabic, he can act as the interpreter.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4ee0ca95e4b05ed8401b544a","timestamp":"2014-04-21T04:42:14Z","content_type":null,"content_length":"37306","record_id":"<urn:uuid:1e04fbe3-ace0-4709-b7e3-a235dc142fa1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
phenomenon. Nonpredictive information is derived from rates and assumes a random distribution about that rate. Causal precursors assume some connection and understanding about the failure process. To
be a predictive precursor, a phenomenon not only must be related to one particular earthquake sequence but also must demonstrably provide more information about the time of that sequence than
achieved by assuming a random distribution.
Let us consider some examples to clarify these differences. Determining that southern California averages two earthquakes above M5 every year and, thus, that the annual probability of such an event
is 80% is clearly useful, but nonprecursory, information. On the other hand, if we were to record a large, deep strain event on a fault 2 days before an earthquake on that fault we would clearly call
it a causal precursor. However, it would not be a predictive precursor because recording a slip event does not guarantee an earthquake will then occur, and we do not know how much the occurrence of
that slip event increases the probability of an earthquake. The only time we have clearly recorded such an event in California (3), it was not followed by an earthquake. To be able to use a strain
event as a predictive precursor, we would need to complete the difficult process of determining how often strain events precede mainshocks and how often they occur without mainshocks. Merely knowing
that they are causally related to an earthquake does not allow us to make a useful prediction.
Long-Term Phenomena
Long-term earthquake prediction or earthquake forecasting has extensively used earthquake rates for nonprecursory information. The most widespread application has been the use of magnitude-frequency
distributions from the seismologic record to estimate the rate of earthquakes and the probability of future occurrence (4). This technique provides the standard estimate of the earthquake hazard in
most regions of the United States (5). Such an analysis assumes only that the rate of earthquakes in the reporting period does not vary significantly from the long-term rate (a sufficient time being
an important requirement) and does not require any assumptions about the processes leading to one particular event.
It is also possible to estimate the rate of earthquakes from geologic and geodetic information. The recurrence intervals on individual faults, derived from slip rates and estimates of probable slip
per event, can be summed over many faults to estimate the earthquake rate (6–8). These analyses assume only that the slip released in earthquakes, averaged over many events, will eventually equal the
total slip represented by the geologic or geodetic record. Use of a seismic rate assumes nothing about the process leading to the occurrence of a particular event.
A common extension of this approach is the use of conditional probabilities to include information about the time of the last earthquake in the probabilities (9–11). This practice assumes that the
earthquake is more likely at a given time and that the distribution of event intervals can be expressed with some distribution such as a Weibull or normal distribution. This treatment implies an
assumption about the physics underlying the earthquake failure process—that a critical level of some parameter such as stress or strain is necessary to trigger failure. Thus, while long-term rates
are nonprecursory, conditional probabilities assume causality—a physical connection between two succeeding characteristic events on a fault.
For conditional probabilities to be predictive precursors (i.e., they provide more information than available from a random distribution), we must demonstrate that their success rate is better than
that achieved from a random distribution. The slow recurrence of earthquakes precludes a definitive assessment, but what data we have do not yet support this hypothesis. The largest scale application
of conditional probabilities is the earthquake hazard map prepared for world-wide plate boundaries by McCann et al. (12). Kagan and Jackson (13) have argued that the decade of earthquakes since the
issuance of that map does not support the hypothesis that conditional probabilities provide more accurate information than the random distribution.
Another way to test the conditional probability approach is to look at the few places where we have enough earthquake intervals to test the periodicity hypothesis. Three sites on the San Andreas
fault in California—Pallet Creek (14), Wrightwood (15), and Parkfield (16)—have relatively accurate dates for more than four events. The earthquake intervals at those sites (Fig. 2) do not support
the hypothesis that one event interval is significantly more likely than any others. We must therefore conclude that a conditional probability that assumes that an earthquake is more likely at a
particular time compared to the last earthquake on that fault is a deterministic approach that has not yet been shown to produce more accurate probabilities than a random distribution.
Intermediate-Term Phenomena
Research in phenomena related to earthquakes in the intermediate term (months to a few years) generally assumes a causal relationship with the mainshock. Phenomena such as changes in the pattern of
seismic energy release (19), seismic quiescence (20), and changes in coda-Q (21) have all assumed a causal connection to a process thought necessary to produce the earthquake (such as accumulation of
stress). These phenomena would thus all be classified as causal precursors and because of the limited number of cases, we have not yet demonstrated that any of these precursors is predictive.
Research into intermediate-term variations in rates of seismic activity falls into a gray region. Changes in the rates of earthquakes over years and decades have been shown to be statistically
significant (22) but without agreement as to the cause of the changes. Some have interpreted decreases in the rate to be precursory to large earthquakes (20). Because a decreased rate would imply a
decreased probability of a large earthquake on a purely Poissonian basis, this approach is clearly deterministically causal. However, rates of seismicity have also increased, and these changes have
been treated in both a deterministic and Poissonian analysis.
One of the oldest deterministic analyses of earthquake rates is the seismic cycle hypothesis (23–25). This hypothesis assumes that an increase in seismicity is a precursory response to the buildup of
stress needed for a major earthquake and deterministically predicts a major earthquake because of an increased rate. Such an approach is clearly causal and has not been tested for its success against
|
{"url":"http://www.nap.edu/openbook.php?record_id=5709&page=3722","timestamp":"2014-04-18T08:25:55Z","content_type":null,"content_length":"42221","record_id":"<urn:uuid:7cd3d5b9-2eab-408b-9a28-7b619bb653ab>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ShareMe - free Prime 95 download
Math Grade 4
Chat History Import
Hair Care Down There Shaving Kit
Austin Independent School District Calendar
Protel 98 For Xp
Anti Virus For Htc Touch Pro2 Smart Phone
Air Care 60180
Farsi Bazar Mobil
Convert Msn Txt To Eml
Bob S Track Builder
Picture Merging Program
Bearshare Music Download Latest Version
Gateway Nv53a
Ableton Live 5
Bamboozle 2008 Tickets
1. PDF prime - Utilities/Text/Document Editors
... PDF prime can help you merge, split, lock, and unlock your PDF files with ease! PDF prime was designed from the beginning as a fast, friendly, and easy software tool that helps you do your job
quicker so you can go on to better things. Try the no risk, 30-day FREE trial.How PDF prime can help you: Merge (combine) many PDF files into one. Split PDF files into seprate pages. Lock PDF
files with a password for security. Unlock previously-locked PDF files. No knowledge of PDF file formats is ...
2. prime Factors - Utilities/Other Utilities
... This project is to attempt to find prime factors of a very large number. The code we are starting with uses the Eratothenes Sieve algorith and is multithreaded. We hope to get this running on
a Parallel Knoppix cluster. ...
3. prime Poster - Home & Personal/Misc
... Save money and time in your shopping. Use prime Poster to manage and print your shopping list. prime Option also stores key phone numbers and special messages. It is easy to use and is
completely free. ...
4. JoFlash prime - Utilities/Other Utilities
... What's a common factor of 333 and 148? JoFlash prime is a simple program that calculates the factors of a given number, calculates the common factors of two numbers, or writes a table of
primes up to a user specified number. <br>Calculate common factors of two numbers<br>Calculate factors of a single number<br>Output a list of primes ...
5. prime Desktop 3D - Home & Personal/Misc
... prime Desktop 3D it's a NEW! lightweight 3D Desktop for Windows XP/Vista/7 integrated with 3D Window-Switcher. Features: Unique 3D Interface, Auto Turn On/Off, Tiny Size, Low Memory Usage,
Easy Control, Multi-Core Optimized ...
6. Metroid prime - Games/Action
... Metroid prime Fangame - simple modyfication of oryginal Metroid Fangame. Many interesting options, like savegame, shop, upgradavle weapons system and rocket packages. Kill zombies, enemies,
aliens,big creatures to find the mighty Metroid monster and anihilate him! ...
7. prime Cards - Games/Cards
... Gather the brainy ones and play the arithmetic tricks! Your goal in this game is to make use of the cards to form prime numbers before your opponent does so. The game uses 60 cards, which are
divided into two categories, namely composite cards and operation cards. The following shows these cards with their quantities stated in brackets: composite cards are 4 (x10), 8 (x4), 10 (x4), 20
(x2), 25 (x2), 26 (x2), and the operation cards are -2 (x12), +1 (x12), and +7 (x12). When the game starts, 7 ...
8. Mathcad prime - Multimedia & Design/Other Related Tools
... Mathcad prime 2.0 simplifies engineering calculation documentation through its document centric, what-you-see-is-what-you-get-approach. It combines equations, text and graphics in a
presentable format, making it easy to perform, document and share calculations and design work. With access to over 600 built-in engineering calculation functions, users can display, manipulate,
analyze and plot data with full units support throughout the application. Mathcad enables engineers to easily perform, ...
9. prime Mover - Programming/Other
... prime Mover was specially designed as a build tool, that is very similar to make. It is developed to be small, portable, flexible, powerful, and is very easy to deploy. It can be distributed
along with your application source code and does not require your end user to have anything other than a basic C compiler in order to use it. ...
10. Apurba prime - Educational/Mathematics
... A Visual Basic application that resolves any given composite number (up to 999999999) into prime factors, gives list of primes up to 214749239, and counts primes between two given numbers
(within 214749263).Form ?Euclid?: resolves any given composite number (up to 999999999) into prime factors.Form ?Eratosthenes?: gives list of primes and pairs of twin primes up to 214749239.Form
?Gauss?: counts primes between two given numbers (within 214749263).First realeased in 2001.Upgraded version 1.1.2 ...
Prime 95
From Short Description
1. prime database project - Utilities/Mac Utilities
... primedb is a project that attempt to generation prime numbers fast and saving them to a database(Mysql or else) for prime research: finding large number of prime,prime factorization,Goldbach's
conjecture,twin prime conjecture... ...
2. Apophenia - Educational/Mathematics
... prime sprial plotting software for square (Ulam) and related polygon prime number spirals. Allows multiple prime generating quadratic equations to be displayed simultaneously. Plot data and
image files can be saved to disk for further analysis. Demo version is limited to prime spirals based on the square and up to two prime generating quadratic equations. The activated version can
display prime number spirals based on any regular polygon with up to 12 quadratic equations. Data files can be ...
3. prime Number Spiral - Utilities/Other Utilities
... The prime Number Spiral (a.k.a. the Ulam Spiral) is constructed as follows: Consider a rectangular grid. We start with the central point and arrange the positive integers in a spiral fashion
(anticlockwise). The prime numbers are then marked. There is a tendency for the prime numbers to occur on diagonal lines, however far out into the spiral one goes. This is software is for
exploring the prime Number Spiral. It also (a) allows coloring of the prime numbers in various ways, (b) displays a ...
4. prime Number Finder - Educational/Mathematics
... Lists all prime Numbers, also allows you to enter a number and check if it is a prime number. If not - it will show all factors of the number. ...
5. Factorizer - Home & Personal
... Factorizer is a Windows program to find factors of numbers up to 2,147,483,646 and to find primes, pairs of primes and Palmen colors of numbers. Or in more detail, Factorizer may be used: (1)
to get the prime decomposition of all numbers in a range of numbers, (2) to get all factors of a single number or all factors of all numbers in a range, (3) to find only the prime numbers in a
range of numbers, (4) to find pairs of prime numbers (e.g. 107 and 109) in a certain range, (5) to count (without ...
6. prime Number Generator - Utilities/Other Utilities
... Generates prime numbers. This program finds primes by checking to see if the modulus of the current number is equal to 0 for any previously found primes up unitl the square root of the number.
7. Parallel Primes - Utilities/Other Utilities
... It is an implementation of my algorithm to find prime no.s. It uses Grid Computing to find a list of primes with the maximum speed possible. ...
8. Fssplit - Multimedia & Design/Other Related Tools
... This program breaks down the number of prime factors. The maximum length of the folding number is 17 characters. Unfolding of numbers into prime factors are commonly used in cryptography,
among others, the various types of security breaches. An important feature of programs that perform this task is the speed with which a given number can be broken down, obviously the sooner the
better. Algorithm used here is one of the fastest, it checks the specified number of divisibility by 2, then by 3 ...
9. Bookland barcode prime image generator - Business & Productivity Tools/Inventory Systems
... Barcode prime Image Generator for Bookland. Easily create Bookland barcode images ready for clipboard pasting into other applications - or save it as a graphic file in high quality TIF format.
10. Codabar barcode prime image generator - Business & Productivity Tools/Inventory Systems
... Barcode prime Image Generator for Codabar. Easily create Codabar barcode images ready for clipboard pasting into other applications - or save it as a graphic file in high quality TIF format.
Prime 95
From Long Description
1. Prime95 27.7 Build - Educational/Other
... Prime95 is a program designed to be used to find Mersenne prime numbers. Mersenne numbers can be proved composite (not prime) by either finding a factor or by running a Lucas-Lehmer primality
test. prime numbers have long fascinated amateur and professional mathematicians. An integer greater than one is called a prime number if its only divisors are one and itself. The first prime
numbers are 2, 3, 5, 7, 11, etc. For example, the number 10 is not prime because it is divisible by 2 and 5. A ...
2. Sprimer - Educational/Science
... Sprimer is a mathematical tool designed to look for prime number patterns. The exceptional qualities of prime numbers make searching for them a mathematical constant, and this program can help
you do it.To do this, Sprimer plots numbers on a grid and makes spiral movements from the centre out. This way only the prime numbers are visible, highlighted in white.The rest of the numbers are
hidden in black colour, and the spiral centre in blue. Thus a diagonal pattern of blue lines appears on-screen. ...
3. MatheHP - Utilities/System Utilities
... Program MatheHP contains calculators for integer, rational, real and complex numbers with high precision (up to 144 digits with point numbers, 616 with integer numbers). You can also solve
algebraic equations with up to 8 th degree and linear systems of equations. For the quick computation of prime numbers a prime number generator is available. Moreover prime factors and divisors
can be calculated. ...
4. Quine McCluskey Simplifier - Utilities/Other Utilities
... The Quine McCluskey Simplifier(qmcs) is a tool used to simplify a boolean function. With the input of binary/decimal data the program calculates the prime implicants, which are then used to
calculate the essential prime implicants. ...
5. Prime2 - Multimedia & Design/Graphic & Design
... Prime2 was designed as a small, simple and accessible application that is able to calculate prime numbers using the new Fork/Join framework of Java 7.You can also select the number of cores
you want to use in order to calculate your prime numbers. ...
6. PrimeNumbersInformer - Multimedia & Design/Graphic & Design
... PrimeNumbersInformer is a small, simple, command prompt based application specially designed to help users learn the prime numbers.Once you've opened the app, just enter the limit number and
PrimeNumbersInformer will create a TXT file with the prime numbers. for Windows7 ...
7. PNGPython - Multimedia & Design/Graphic & Design
... PNGPython is a simple and useful program built using the Python programming language, that can help you calculate prime numbers. The program will ask you the amount of prime numbers you want.
Also displayed in the application is the amount of time it took the program to execute. ...
8. Laser Dolphin (for Windows) - Games/Action
... When the prime Minister has been abducted by aliens who can you call? Laser Dolphin, of course! With his unique blend of speed, agility, and firepower, Laser Dolphin is no ordinary dolphin. He
is the only one capable of rescuing the prime Minister. Take control of Laser Dolphin to experience action, adventure, and underwater fun. You will need to use all of your cunning to evade and
destroy the bizarre sea creatures that you encounter - including Missile Fish, Robo Birds, TNT Turtles, Electric ...
9. Amazon App for Pokki - Internet/Tools & Utilities
... A free desktop app for your Windows PC to browse, search, get product details, and read reviews on millions of products available from Amazon.com and other merchants. With the Amazon app you
can sell DVDs, CDs, MP3 downloads, software, video games, electronics, apparel, furniture, food, toys, and jewelry. Download the Amazon Pokki desktop app to access your existing Amazon account,
cart, wish lists, payment and Amazon prime member shipping options, order history, 1-Click settings, and prime ...
10. prime number scanner - Utilities/Mac Utilities
... Searches for prime numbers within a given range and outputs in a format proper for uploading. ...
Prime 95
Related Searches:
|
{"url":"http://shareme.com/programs/prime/95","timestamp":"2014-04-21T04:45:19Z","content_type":null,"content_length":"53000","record_id":"<urn:uuid:135b1384-1f94-404c-b1ca-29093b5ccc9a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CS 276, Spring 2004
David Wagner (daw@cs, 765 Soda Hall, 642-2758)
Monday/Wednesday, 10:30-12:00, 310 Soda
• No class on Monday, May 10! Last lecture is Wednesday, May 5.
• Final projects are due at 9am on Wed, May 19.
Here is a list of past lectures and the topics covered. I've also indicated possibilities for further reading. B&R = Bellare & Rogaway's notes; V7 = Vadhan's lecture 7; etc.
01 (1/21): Introduction. Basic motivating scenarios for cryptography. History. Information-theoretic secrecy. [notes] (V1,V3; B&R intro, B&R info-theory)
02 (1/26): Shannon secrecy. Computational indistinguishability. Pseudorandom generators. [notes] (V3,V11; B&R info-theory)
03 (1/28): Exercises with indistinguishability. Pseudorandom functions. Pseudorandom permutations. [notes + notes] (B&R block ciphers, B&R prfs; V12)
04 (2/2): Pseudorandom functions and permutations. The birthday paradox. PRF/PRP switching lemma. [notes + notes] (B&R prfs, B&R Appendix A; V12)
05 (2/4): Guest lecture from Umesh Vazirani.
06 (2/9): Symmetric-key schemes. Definitions of security (IND-CPA): real-or-random, left-or-right, find-then-guess. Equivalence of real-or-random and left-or-right. [notes] (B&R symm encryption)
07 (2/11): Left-or-right and find-then-guess are equivalent. Semantic security. Find-then-guess and semantic security are equivalent. [notes + notes] (B&R symm encryption)
08 (2/18): CTR mode is IND-CPA secure. Message integrity: INT-PTXT, INT-CTXT. Encryption does not provide integrity. [notes + notes] (B&R integrity)
09 (2/23): Message authentication codes (MACs). 2-universal hashing. PRFs are good MACs. Stretching the input size of a PRF. [notes + notes]
10 (2/25): HMAC. Broken systems. The need for message authentication when encrypting. IND-CCA2. [notes + notes]
11 (3/1): IND-CPA and INT-CTXT => IND-CCA2. Intro to number theory: groups, finite fields, Fermat's theorem, Euler's theorem, Legendre symbols, quadratic residues. [notes] (paper on EtA, AtE, E&A) (B
&R number thy)
12 (3/3): Public key encryption. Trapdoor one-way permutations: RSA, Rabin. Hard-core bits. [notes + notes] (B&R asym enc)
13 (3/8): Goldreich-Levin theorem. Goldwasser-Micali public-key encryption. [notes] (Goldreich-Levin notes: from a previous class, from Mihir Bellare)
14 (3/10): Goldwasser-Micali for arbitrary-length messages. Hard-core bits from any trapdoor one-way permutation. The random oracle model. Simple RSA. [??? + notes] (paper on random oracles)
15 (3/15): Chosen-ciphertext secure public-key encryption in the random oracle model. Non-malleability. Public-key signatures. Several candidate signature schemes. [David M. + notes]
16 (3/17): Full Domain Hash (FDH). Probabilistic Full Domain Hash (PFDH). Pitfalls of the random oracle model. [notes + notes]
17 (3/29): Implications in symmetric-key cryptography. The following are equivalent: OWF, PRG, PRF, PRP, symmetric-key encryption, bit commitment, coin flipping. [notes + Alex]
18 (3/31): Guest lecture from Vinod Prabhakaran: information-theoretic (unconditionally secure) cryptography.
19 (4/5): Bit commitment, coin flipping. Signatures from any one-way function. Black-box reductions and separations, Impagliazzo-Rudich. [notes + notes]
20 (4/7): Algebraic cryptanalysis of public-key cryptosystems. Factoring: Fermat's method, Dixon's algorithm, quadratic sieve. Attacks on RSA: the common modulus attack, the related message attack.
Lattices and cryptanalysis.
21 (4/12): Interactive proof systems. Zero-knowledge proofs. ZKIP for 3-coloring. Zero-knowledge proofs of knowledge.
22 (4/14): Secret sharing. Shamir's scheme for t-out-of-n sharing. Verifiable secret sharing. Pedersen's VSS scheme. [notes] [partial notes (from S'02) + errata regarding accusals + Shamir's original
23 (4/19) Secure multi-party computation. The millionaire's problem. Adversary models: semi-honest, malicious. Definitions of security for the semi-honest model. Oblivious transfer. [partial notes
(from S'02) + notes on the defn]
24 (4/21) A general 2-party protocol secure against semi-honest attackers, for any functionality. Definitions of security for the malicious model.
25 (4/26) Finishing up multi-party computation. Electronic cash. Blind signatures, Chaum's online ecash scheme, payer- and payee-anonymity. [notes (from S'02)]
26 (4/28) Threshold cryptography. Schemes with trusted dealer: RSA, El Gamal. Security in the malicious model. Distributed key generation for El Gamal.
27 (5/3) Electronic voting protocols. Honest-verifier zero-knowledge proofs of knowledge of a discrete log; of equality of two discrete logs. The Fiat-Shamir heuristic for non-interactive ZK. The
disjunction trick. The Cramer-Gennaro-Schoenmakers protocol.
28 (5/5) Mixes. Publicly verifiable mixes. Anonymous email. Visual cryptography. Chaum's digital voting protocol.
Homework 1 (due 2/2): assignment [solution].
Homework 2 (due 2/9): assignment [solution].
Midterm 1 (due 3/17): assignment [solution] [common errors].
Course Overview
This class teaches the theory, foundations and applications of modern cryptography. In particular, we treat cryptography from a complexity-theoretic viewpoint. In recent years, researchers have found
many practical applications for these theoretical results, and so we will also discuss their impact along the way and how one may use the theory to design secure systems.
Official Course Description
CS276: Cryptography. Prerequisite: CS170. Graduate survey of modern topics on theory, foundations, and applications of modern cryptography. One-way functions; pseudorandomness; encryption;
authentication; public-key cryptosystems; notions of security. May also cover zero-knowledge proofs, multi-party cryptographic protocols, practical applications, and/or other topics, as time permits.
This list is tentative and subject to change.
• Introduction. Basic motivating scenarios for cryptography. History. Information-theoretic secrecy.
• Block ciphers. Standard modes of operation.
• Pseudorandom functions. Pseudorandom permutations. The birthday paradox. Applications. One-way functions.
• Symmetric encryption schemes. Definitions. IND-CPA. Security of standard modes of operation. IND-CCA2.
• Message authentication. MACs. Definitions. PRFs as MACs. CBC-MAC.
• Authenticated encryption. INT-PTXT. INT-CTXT. Non-malleability.
• Commitment schemes. Hard-core predicates. Goldreich-Levin theorem.
• Pseudorandom generators. PRG's from OWF's. Blum-Micali-Yao.
• PRF's from PRG's. Goldreich-Goldwasser-Micali
• Basics on number theory. Number-theoretic primitives. RSA. Rabin's function. Definition of trapdoor one-way functions.
• Public-key encryption. Definitions. Semantic security. Message indistinguishability. Goldwasser-Micali cryptosystem. Hybrid encryption.
• Digital signatures. Trapdoor signatures. RSA. Random oracles. Full-domain hash. PSS.
• Zero knowledge proofs. Proofs of knowledge.
• Foundations. Constructions of signatures based on any one-way function. Oracles and separations.
If there is time, advanced topics may also include:
• Secret sharing. Shamir's scheme. Generalized access structures.
• Threshold cryptography. Verifiable secret sharing. Proactive security.
• Secure voting schemes. Electronic cash.
• Secure multi-party computation.
• Cryptographic protocols.
Enrollment Policies
The class appears to be over-enrolled at the moment. This is a graduate course, and as such, EECS graduate students will receive first priority on taking the course. I hope to be able to accomodate
all interested EECS graduate students.
I have received many queries about whether the class is open to undergraduates; my policy on undergraduate admission to CS276 is available.
Homeworks: 10%
Scribe notes: 20%
Take-home midterm: 30%
Final project: 40%
You will be asked to write a set of scribe notes for either a lecture or for a set of homework solutions. We strongly recommend that scribe notes be written in LaTeX. Please make an effort to make
your scribe notes "beautiful", clear, and readable.
You will do a final project. Further details will be made available here.
We will assign several homework sets throughout the semester. To really learn this material, it is important that you not only watch our lectures but also practice the material. Please turn in your
homework solutions on paper at the beginning of class on the appropriate day.
There is no required textbook.
The following sources may be helpful as a reference, and will provide supplemental material.
M. Bellare and P. Rogaway, Introduction to Modern Cryptography.
We will follow their exposition fairly closely.
S. Goldwasser and M. Bellare, Lecture Notes on Cryptography.
Another excellent set of notes, with a somewhat different focus.
S. Vadhan, Introduction to Cryptography.
An excellent if introductory set of class notes.
Various authors, Scribe notes for CS276 in Spring 2002.
The notes from the last time this course was offered.
O. Goldreich, Foundations of Cryptography, Cambridge Univ. Press, 2001.
A more abstract treatment of the topic. Goldreich's writings are the canonical treatment of multi-party computation and other advanced topics.
We will assume basic background with probability theory, algorithms, complexity theory, and number theory. For review purposes, you may refer to Prof. Trevisan's Notes on Algebra and Notes on
Probability. If you prefer a textbook covering this background material, we recommend the following:
L.N. Childs, A Concrete Introduction to Higher Algebra, Springer, 1995.
David Wagner, daw@cs.berkeley.edu, http://www.cs.berkeley.edu/~daw/.
|
{"url":"http://www.cs.berkeley.edu/~daw/teaching/cs276-s04/","timestamp":"2014-04-17T09:49:28Z","content_type":null,"content_length":"13264","record_id":"<urn:uuid:634b6590-1a8e-4367-ba59-110699b6b7f6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DeVry Mesa
DeVry Mesa - MATH 221
MATH 221 provides students with an in-depth understanding of Math theory and concepts, such as Trigonometry.
Course Hero
• Online study resources available anywhere, at any time
• High-quality Study Documents, expert Tutors and Flashcards
• Everything you need to learn more effectively and succeed
|
{"url":"http://www.coursehero.com/sitemap/schools/172-DeVry-Mesa/courses/871598-MATH221/","timestamp":"2014-04-17T07:36:52Z","content_type":null,"content_length":"62830","record_id":"<urn:uuid:5f23ce34-01bb-4c15-8d73-3e7ce51a6dab>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
tion and
Bai, Jushan and Wang, Peng (2012): Identification and estimation of dynamic factor models.
Download (651Kb) | Preview
We consider a set of minimal identification conditions for dynamic factor models. These conditions have economic interpretations, and require fewer number of restrictions than when putting in a
static-factor form. Under these restrictions, a standard structural vector autoregression (SVAR) with or without measurement errors can be embedded into a dynamic factor model. More generally, we
also consider overidentification restrictions to achieve efficiency. General linear restrictions, either in the form of known factor loadings or cross-equation restrictions, are considered. We
further consider serially correlated idiosyncratic errors with heterogeneous coefficients. A numerically stable Bayesian algorithm for the dynamic factor model with general parameter restrictions is
constructed for estimation and inference. A square-root form of Kalman filter is shown to improve robustness and accuracy when sampling the latent factors. Confidence intervals (bands) for the
parameters of interest such as impulse responses are readily computed. Similar identification conditions are also exploited for multi-level factor models, and they allow us to study the spill-over
effects of the shocks arising from one group to another.
Item Type: MPRA Paper
Original Identification and estimation of dynamic factor models
Language: English
Keywords: dynamic factor models; multi-level factor models; impulse response function; spill-over effects
C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C10 - General
Subjects: C - Mathematical and Quantitative Methods > C3 - Multiple or Simultaneous Equation Models; Multiple Variables > C33 - Models with Panel Data; Longitudinal Data; Spatial Time Series
C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C11 - Bayesian Analysis: General
Item ID: 38434
Depositing Peng Wang
Date 30. Apr 2012 01:28
Last 12. Feb 2013 14:38
[1] D. Amengual and M. Watson. Consistent estimation of the number of dynamic factors in large N and T panel. Journal of Business and Economic Statistics, 25(1):91–96, 2007.
[2] T.W. Anderson. An Introduction to Multivariate Statistical Analysis. New York: Wiley, 1984.
[3] T.W. Anderson and H. Rubin. Statistical inference in factor analysis. In In J. Neyman (Ed.): Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability.,
volume 5, pages 111–150. Univ. of California Press, 1956.
[4] J. Bai. Inferential theory for factor models of large dimensions. Econometrica, 71(1):135–172, 2003.
[5] J. Bai and S. Ng. Determining the number of factors in approximate factor models. Econometrica, 70(1):191–221, 2002.
[6] J. Bai and S. Ng. Determining the number of primitive shocks in factor models. Journal of Business and Economic Statistics, 25(1):52–60, 2007.
[7] J. Bai and S. Ng. Principal components estimation and identification of the factors. Working paper, 2010.
[8] J.O. Berger and L.R. Pericchi. The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association, 91(433):109–122, 1996.
[9] B. Bernanke, J. Boivin, and P. Eliasz. Measuring monetary policy: a factor augmented vector autoregressive (FAVAR) approach. Quarterly Journal of Economics, 120(1):387–422, 2005.
[10] G.J. Bierman. Factorization methods for discrete sequential estimation. Dover Books on Mathematics, Dover Publications, 1977.
[11] J. Breitung and J. Tenhofen. GLS estimation of dynamic factor models. Journal of the American Statistical Association, 106(495):1150–1166, 2011.
[12] C.K. Carter and R. Kohn. On Gibbs sampling for state space models. Biometrika, 81:541–553, 1994.
[13] S. Chib. Marginal likelihood from the Gibbs output. Journal of the American Statistical Association, 90 (432):1313–1321, 1995.
[14] I. Choi. Efficient estimation of factor models. Econometric Theory, 28(2):274–308, 2012.
[15] T. Cogley and T.J. Sargent. Drifts and volatilities: monetary policies and outcomes in the post WWII US. Review of Economic Dynamics, 8 (2):262–302, 2005.
[16] G. Connor and R. Korajzcyk. Performance measurement with the arbitrage pricing theory: a new framework for analysis. Journal of Financial Economics,15:373–394, 1986.
[17] M.J. Crucini, M.A. Kose, and C. Otrok. What are the driving forces of international business cycles? Review of Economic Dynamics, 14:156–175, 2011.
[18] M. Del Negro and C. Otrok. Dynamic factor models with time-varying parameters: measuring changes in international business cycles. Working paper, 2008.
[19] F.X. Diebold, C. Li, and V. Yue. Global yield curve dynamics and interactions: a generalized Nelson-Siegel approach. Journal of Econometrics, 146(2):351–363, 2008.
[20] C. Doz, D. Giannone, and L. Reichlin. A quasi-maximum likelihood approach for large approximate dynamic factor models. forthcoming, Review of Economics and Statistics, 2011.
References: [21] G. Evensen. Data assimilation: the ensemble Kalman filter. 2nd ed. Springer, 2009.
[22] M. Forni, M. Hallin, M. Lippi, and L. Reichlin. The generaized dynamic factor model: identification and estimation. Review of Economics and Statistics, 82(4):540–554, 2000.
[23] J.F. Geweke. The dynamic factor analysis of economic time series. In In D. J. Aigner and A. S. Goldberger (Eds.): Latent Variables in Socio-economic Models. Amsterdam:
North-Holland, 1977.
[24] A.W. Gregory and A.C. Head. Common and country-specific fluctuations in productivity, investment, and the current account. Journal of Monetary Economics, 44(3):423–451, 1999.
[25] B. Junbacker and S.J. Koopman. Likelihood-based analysis for dynamic factor models. Working paper, 2008. [26] C. Kim and C. Nelson. State Space Models With Regime Switching:
Classical and Gibbs Sampling Approaches With Applications. Massachusetts: MIT Press, 1999.
[27] M.A. Kose, C. Otrok, and C.H. Whiteman. International business cycles: World, region, and country-specific factors. American Economic Review, 93(4):1216–1239, 2003.
[28] D.N. Lawley and A.E. Maxwell. Factor Analysis in a Statistical Method. Butterworth, London, 1971.
[29] E. Moench, S. Ng, and S. Potter. Dynamic hierarchical factor models. Working paper, 2011.
[30] A. O’Hagan. Fractional Bayes factors for model comparison. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):99–138, 1995.
[31] D. Quah and T. Sargent. A dynamic index model for large cross sections. In Business Cycles, Indicators and Forecasting, NBER Chapters, pages 285–310. National Bureau of Economic
Research, Inc, December 1993.
[32] C.A. Sims and T. Zha. Error bands for impulse responses. Econometrica, 67(5):1113–1156, 1999.
[33] J.H. Stock and M.W. Watson. Diffusion indexes. NBER Working Paper 6702, 1998.
[34] J.H. Stock and M.W. Watson. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association, 97:1167–1179, 2002.
[35] J.H. Stock and M.W. Watson. Implications of dynamic factor models for VAR analysis. NBER Working Paper 11467, 2005.
[36] M.K. Tippett, J.L. Anderson, C.H. Bishop, T.M. Hamill, and J.S. Whitaker. Ensemble square root filters. Monthly Weather Review, 131:1485–1490, 2003.
[37] P. Wang. Large dimensional factor models with a multi-level factor structure: identification, estimation, and inference. Working paper, 2010.
[38] M.W. Watson and R.F. Engle. Alternative algorithms for the estimation of dynamic factor, mimic and varying coefficient regression models. Journal of Econometrics, 23(3):385–400,
[39] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer Series in Statistics, 1997.
[40] J.H. Wright. Term premia and inflation uncertainty: empirical evidence from an international panel dataset. American Economic Review, 101(4):1514–1534, 2011.
[41] A. Zellner. An introduction to Bayesian inference in econometrics. John Wiley and Sons, Inc, 1971.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/38434
|
{"url":"http://mpra.ub.uni-muenchen.de/38434/","timestamp":"2014-04-21T04:32:06Z","content_type":null,"content_length":"31673","record_id":"<urn:uuid:dd3b0090-1156-40d1-baeb-5c1bfaf53f20>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Audubon, NJ Geometry Tutor
Find an Audubon, NJ Geometry Tutor
...Having worked with a diverse population of students, I have strong culturally competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of
various math curricula and resources that will get your child to love math in no time! I look forward to w...
9 Subjects: including geometry, ESL/ESOL, algebra 1, algebra 2
Hello! I am currently a junior in the University of Pennsylvania's undergraduate math program. Previously, I completed undergraduate work at North Carolina State University for a degree in
22 Subjects: including geometry, calculus, statistics, algebra 1
I graduated from Chestnut Hill College (Philadelphia, PA) with a degree in French Language & Literature with a minor in Art History. While there, I tutored several students in not only French but
also other subjects, particularly different math topics. I am comfortable tutoring students from the 7th grade forward.
33 Subjects: including geometry, English, French, physics
...PLEASE NOTE: I only take serious SAT students who have time, the drive, and a strong personal interest in learning the tools and tricks to boost their score. Background: I graduated from UCLA,
considered a New Ivy, with a B.S. in Integrative Biology and Physiology with an emphasis in physiology ...
26 Subjects: including geometry, English, chemistry, reading
I am teaching math, for over 20 years now, and was awarded four times as educator of the year. I was also mentor of the year twice. I have a variety of experience teaching not only in different
countries, but also teaching here in public school, private school, charter school, and adult continuing education school.
15 Subjects: including geometry, algebra 1, algebra 2, GED
|
{"url":"http://www.purplemath.com/Audubon_NJ_Geometry_tutors.php","timestamp":"2014-04-19T14:53:11Z","content_type":null,"content_length":"24141","record_id":"<urn:uuid:e62bfb52-0f4f-499b-970b-b767d2584438>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[plt-scheme] Re: The Philosophy of DrScheme
From: Greg Woodhouse (gregory.woodhouse at gmail.com)
Date: Mon Dec 1 13:26:16 EST 2008
A minor nit: There is no reason why mathematics cannot be taught as an
active process of discovery. The problem (well, one problem) is that the
only way to really learn mathematics is by doing, and that means
calculating. Still, there is no reason it can't be interesting. I'll give
you an example: one thing that always intrigued me, even as a child, is that
there are only 5 regular polyhedra (the tetrahedron, octahedron, cube,
dodecahedron and icosohedron), but I didn't realize until much later how
accessible a result it really is. You could almost make it a homework
exercise! Start with Euler's famous formula V - E + F = 2 (for a topological
sphere) and then suppose you have refgular polyhedron the faces of which are
n-gons. It all comes down to counting: If there are m of them, how many
times will you count each vertex in m times n vertices per face? How many
times will you count each edge? What happens if you plug these numbers in
Euler's formula? Even if youer students take euler's formula on faith, the
result is still impressive.
On Thu, Nov 27, 2008 at 5:52 AM, Eduardo Bellani <ebellani at gmail.com> wrote:
> Well, I guess great minds think alike :)
> >From what I'm seeing so far the target population are a bit different,
> yours being mostly the undergrad students, Papert's being children,
> but I guess the goal is pretty much the same:
> "We therefore believe that the study of program design deserves the same
> central
> role in general education as mathematics and English. Or, put more
> succinctly,
> everyone should learn how to design programs. On one hand, program design
> teaches the same analytical skills as mathematics. But, unlike mathematics,
> working with programs is an active approach to learning." - HtDP
> "In many schools today, the phrase "computer-aided instruction"
> means making the computer teach the child. One might say the
> computer is being used to program the child. In my vision, the
> child programs the computer and, in doing so, both acquires a
> sense of mastery over a piece of the most modern and powerful
> technology and establishes an intimate contact with some of the
> deepest ideas from science, from mathematics, and from the art of
> intellectual model building." - Mindstorms, Children, Computers and
> Powerful Ideas
> Just by curiosity
> > I ran into Logo and the book a year after I finished most of HtDP.
> What book are you talking about?
> --
> Eduardo Bellani
> www.cnxs.com.br
> "What is hateful to you, do not to your fellow men. That is the entire
> Law; all the rest is commentary." The Talmud
> _________________________________________________
> For list-related administrative tasks:
> http://list.cs.brown.edu/mailman/listinfo/plt-scheme
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.racket-lang.org/users/archive/attachments/20081201/a3effb45/attachment.html>
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2008-December/028959.html","timestamp":"2014-04-17T01:56:45Z","content_type":null,"content_length":"8861","record_id":"<urn:uuid:1577557b-0024-446e-be5d-7ce46e92f2f6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Low and high pass filters - calculate the cutoff frequencies, Electrical Engineering
At the completion of this unit, you will be able to determine the cutoff frequencies and attenuations of RC and RL low- and high-pass filters by using test circuits.
A filter is a frequency-selective circuit that permits signals of certain frequencies to pass while it rejects signals at other frequencies.
A low-pass filter, as its name implies, passes low frequencies but rejects high frequencies.
The dividing line between the passing of low frequencies and the rejecting of high frequencies is the cutoff frequency (f[c]), or -3 dB point. In a low-pass filter, signals lower than the cutoff
frequency pass essentially unmodified. Frequencies higher than the cutoff frequency are greatly attenuated, or reduced.
In a high-pass filter, signals higher than the cutoff frequency pass essentially unmodified. Signals lower than the cutoff frequency is greatly attenuated, or reduced.
The cutoff frequency (f[c]) is the point where the output voltage (V[o]) drops to 70.7% of, or 3 dB down from, the input voltage.
Frequency response data may be expressed in terms of output voltage but is usually expressed in decibels (dB). Decibels are units that express or measure the gain or loss (attenuation) in a circuit.
The decibel can be based on the ratio of the output voltage (V[o]) to the input voltage (V[i]).
NOTE: In the type of filters studied in this volume, the output voltage (V[o]) is always less than the input voltage (V[i]).
The rate of attenuation, or loss, beyond the cutoff frequency (f[c]) is highly predictable. This attenuation is 6 dB per octave or 20 dB per decade. An attenuation rate of 6 dB per octave is the same
rate as 20 dB per decade.
band - a range of frequencies.
dB per octave - decibels per octave (dB/octave); a 1 dB increase or decrease over a two-to-one frequency range.
dB per decade - decibels per decade (dB/decade); a 1 dB increase or decrease over a ten-to-one frequency range.
octave - a two-to-one or one-to-two ratio; a frequency factor of two. One octave is the doubling or halving of a frequency.
decade - a ten-to-one or one-to-ten ratio; a frequency factor of ten.
rolled off - gradually attenuated, or decreased. A filter attenuates when its rejected frequencies are rolled off.
F.A.C.E.T. base unit
AC 2 FUNDAMENTALS circuit board
Oscilloscope, dual trace
Generator, sine wave
Exercise 1 - Low-Pass Filters
When you have completed this exercise, you will be able to calculate the cutoff frequencies and attenuations of RC and RL low-pass filters. You will verify your results with an oscilloscope.
• Several ways exist for the implementation of low-pass filters, each of which consist of a voltage-divider network containing a resistor and a frequency-varying component (inductor or capacitor).
• Output voltage from the filters is "tapped off" the voltage divider.
• Changes in the frequency of the supply voltage cause changes in the circuit reactance, resulting in output voltage variations.
• In RC filters, the capacitive reactance is high at low frequencies compared to the resistance, causing most of the input voltage to appear across the output capacitor.
• Capacitive reactance decreases as the generator frequency increases, causing larger voltage drops across the R and decreasing the voltage across the output capacitor.
• Low-pass filters are designed so that frequencies below the cut-off frequency are passed while higher frequencies are attenuated.
• In low-pass RL filters, the inductive reactance is small at low frequencies compared to the resistance, and most of the input voltage falls across the output resistor.
• Inductive reactance increases as the generator frequency increases; therefore, more and more voltage is dropped across the inductor and less across the output resistor.
• Cutoff frequency is defined as the frequency where the output signal is 3 dB down, or 0.707 x V[o].
• For RC circuits: f[c] = 1/2πRC
• For RL circuits: f[c] = R/2πL
Posted Date: 3/5/2013 4:44:57 AM | Location : United States
Your posts are moderated
Q. Counting to moduli other than 2 is a frequent requirement, the most common being to count through the binary-coded decimal (BCD) 8421 sequence. All that is required is a four-st
Two single-phase 60-Hz sinusoidal-source generators (with negligible internal impedances) are supplying to a common load of 10 kW at 0.8 power factor lagging. The impedance of the
Q. Explain Radio and Television Broadcasting? Radio (AM and FM) and television broadcasting are the most familiar forms of communication via analog transmission systems. The re
which ckt. Or components using for irregular ac sine wave change into pure sine wave.......
what is operating condition?
Can I find out the salution of electrods Transmission medium light sound uncoupling magnet for high dc cycle.
i wanted the exact names of the components of the internal circuit of a 555 timer ic
Consider a GLC parallel circuit excited by i(t) = Ie st in the time domain. Assume no initial inductive current or capacitive voltage at t = 0. Draw the transformed network in the
a) Illustrate the Schematic diagram of the Transformer Box used in the Practical Session b) Calculate the output voltage on an Oscilloscope and determine its amplitude and freq
Q. Explain stages of attending to Rail fracture? Various stages of attending to Rail fracture / weld failure in a L.W.R. track in field - Equipment required - i) Special
|
{"url":"http://www.expertsmind.com/questions/low-and-high-pass-filters-calculate-the-cutoff-frequencies-30139068.aspx","timestamp":"2014-04-16T10:18:16Z","content_type":null,"content_length":"34485","record_id":"<urn:uuid:f6da8408-dc80-49cc-991e-c808ef604963>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NASA - SPACE SHUTTLE ASCENT: MASS VS. TIME
Algebra 1 Key Topic:
Modeling data with linear regression equations
Students will be asked to create scatter plots and find linear regression equations using mission data on the mass of the space shuttle during the first two minutes of the ascent phase.
Students will
• create scatterplots from a data table;
• determine correlation and interpret its meaning;
• find linear regression equations;
• find the slope and y-intercept from a linear equation; and
• communicate the meanings of slope and y-intercept as they relate to a real-world problem.
Prerequisites: Students should have prior knowledge of scatter plots, types of correlations, linear equations (slope and y-intercept) and linear graphs. Students should also have experience using a
graphing calculator or spreadsheet application to create scatter plots and to find linear regression equations. Note: This problem is related to the Algebra 1 problem in this series, Space Shuttle
Ascent: Altitude vs. Time. DOWNLOADS Files for use with the TI-84 Plus™ › Space Shuttle Ascent: Mass vs. Time Educator Edition (PDF 332 KB) › Space Shuttle Ascent: Mass vs. Time Student Edition (PDF
242 KB) Files for use with the TI-Nspire™ › Space Shuttle Ascent: Mass vs. Time TI_Nspire Educator Edition (PDF 331 KB) › Space Shuttle Ascent: Mass vs. Time TI_Nspire Student Edition (PDF 452 KB)
Note: The following file is a software specific file for Texas Instrument Nspire calculators. › Space Shuttle Ascent: Mass vs. Time TI_Nspire Document (TNS 134 KB) Related Resources › VIDEO: STS-121
|
{"url":"http://www.nasa.gov/audience/foreducators/exploringmath/algebra1/Prob_ShuttleMassTime_detail.html","timestamp":"2014-04-19T19:41:51Z","content_type":null,"content_length":"21314","record_id":"<urn:uuid:742122b0-067a-42ee-80bd-eb0ae8de4db0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Palisades, NY Math Tutor
Find a Palisades, NY Math Tutor
...I try to spend as much time as that student may need, and provide a variety of practice exercises. Algebra II/Trigonometry:I cover as much topics as the student needs or all the topics,
especially the ratio of those most pertinent to the Regents exam. From Algebraic expressions, Functions and R...
47 Subjects: including statistics, SAT math, accounting, writing
For over 30 years in the chemicals and biofuels industries, Ken has trained and mentored scientists, engineers, and business people. Throughout his career Ken has been particularly good at
helping people understand basic principles in chemistry, physics and math and how they are and can be applied ...
12 Subjects: including trigonometry, differential equations, precalculus, algebra 1
...I am well-versed in Microsoft Office, including Word. I recently graduated with an Intensive Bachelor of Science degree in Molecular, Cellular, and Developmental Biology from Yale University.
Upon graduation, I was granted "distinction in the major" for outstanding scholarship, and awarded a coveted prize for my undergraduate research pursuits.
24 Subjects: including geometry, algebra 1, algebra 2, prealgebra
Hello my name is Andres. I was a language teacher in my native country teaching English as a second language for native students and Spanish as a second language for foreign students. I am
currently finishing my second major in engineering science.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...I help with your assignments. I will reteach everything that you need to learn. I will cut down on the time it takes for you to do your homework.
56 Subjects: including ACT Math, SAT math, geometry, algebra 2
Related Palisades, NY Tutors
Palisades, NY Accounting Tutors
Palisades, NY ACT Tutors
Palisades, NY Algebra Tutors
Palisades, NY Algebra 2 Tutors
Palisades, NY Calculus Tutors
Palisades, NY Geometry Tutors
Palisades, NY Math Tutors
Palisades, NY Prealgebra Tutors
Palisades, NY Precalculus Tutors
Palisades, NY SAT Tutors
Palisades, NY SAT Math Tutors
Palisades, NY Science Tutors
Palisades, NY Statistics Tutors
Palisades, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/palisades_ny_math_tutors.php","timestamp":"2014-04-17T04:23:16Z","content_type":null,"content_length":"23731","record_id":"<urn:uuid:f2a693d0-c563-4db0-bf1c-df85d1bb16c1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Washington Precalculus Tutors
...Post-undergraduate, I began as a volunteer group tutor before also becoming a private one-on-one tutor as well. I enjoy every minute of it, and it's been one of the most rewarding experiences
of my life so far, one that has inspired me to become a secondary math teacher. I value a student's desire to learn and commitment to having a good educational relationship.
15 Subjects: including precalculus, chemistry, calculus, geometry
...I work on the basics and let the student understand the concept instead of just getting the answer. I believe in teaching the small steps and showing your work. I think that if the student
keeps it simple and repeatable, they will do considerably better when the pressure is on.
28 Subjects: including precalculus, English, calculus, reading
...In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending extra time to prepare for the session prior to meeting
with the student. My broad background in math, science, and engineering combined with my extensive rese...
16 Subjects: including precalculus, calculus, physics, statistics
...I give confidence to the student. I do not give the answers after I feel the student has knowledge in order to boost self-confidence for exam situations. Whether you are a college student just
trying to pass that Algebra class, a Mom or Dad furthering their career, or an overachieving high school student taking advanced math, I can help you.
13 Subjects: including precalculus, chemistry, physics, algebra 2
...Due to the nature of this volunteer setting, I had only an hour or two with each student. Thus I cannot assess how successful that I was. However, I did get valuable insights into the Jefferson
Labs approach to preparation for this test.
13 Subjects: including precalculus, chemistry, calculus, physics
|
{"url":"http://www.algebrahelp.com/Washington_precalculus_tutors.jsp","timestamp":"2014-04-17T12:45:23Z","content_type":null,"content_length":"25339","record_id":"<urn:uuid:154c87ac-bff3-4147-9a62-8ef64ab2f568>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
May the
May the (Trading) Odds Be Ever In Your Favor
by Tyler Craig | March 30, 2012 12:43 pm
The Hunger Games, I was pleasantly surprised to come across a slogan that could very well have turned up in an options manual.
“May the Odds Be Ever In Your Favor”
While the phrase was a motto of sorts for the denizens of the nation Panem, it could very well be a motto for the option sellers among us. Most of these traders look to have the odds in their favor
by structuring positions with a high probability of success.
In order to inject probability analysis into the mix, traders must first understand how to use the Greek delta. The property of delta that interests us in this discussion is its ability to gauge the
probability of an option expiring in the money. Suppose you purchased a May 50-strike call with a delta of 75. The delta tells you there is a 75% chance the call will reside in the money at May
expiration. Put another way, there is a 75% chance the stock will be trading above $50 when that series of options expires.
Now, let’s say you purchased a May 70 strike put with a delta of -30. This tells you there is a 30% chance the put will reside in the money at May expiration. In other words, there is a 30% chance
the stock will be trading below $70.
Add a dash of intuition to the aforementioned principle and you will also be able to calculate the probability of an option expiring out of the money. Simply put, if delta equals the probability of
an option expiring in the money, then 1 minus delta equals the probability of an option expiring out of the money.
Suppose with the USO trading at $39.50, you short a May 37 put option, which has a delta of -27. Given the delta value, there is a 27% chance the put option will sit in the money at expiration. So
what are the odds the put expires out of the money, thereby allowing you to capture your max profit? Using the 1 minus delta formula, outlined above we arrive at a profit probability of 73% (1 –
This unique usage of delta allows option sellers to select the optimal strike price to achieve their desired probability of profit. If they’re looking to increase the likelihood of capturing a
profit, they simply sell lower delta options.
An important side note is that modifying the probability of profit also changes the potential risk and reward of the position. This, however, is a discussion for another day.
1. [Image]: http://investorplace.com/wp-content/uploads/2012/03/6081746931_14af2e3b01.jpg
Source URL: http://investorplace.com/2012/03/may-the-trading-odds-be-ever-in-your-favor/
Short URL: http://invstplc.com/1fy8JRB
|
{"url":"http://investorplace.com/2012/03/may-the-trading-odds-be-ever-in-your-favor/print","timestamp":"2014-04-19T23:45:47Z","content_type":null,"content_length":"5046","record_id":"<urn:uuid:19415ed6-f99d-47b4-83e8-66e58fbb9983>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 14
, 1998
"... The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin’s famous incompleteness theorem, which says that for every formalized theory of
arithmetic there is a finite constant c such that the theory in question cannot prove any particular number ..."
Cited by 9 (0 self)
Add to MetaCart
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin’s famous incompleteness theorem, which says that for every formalized theory of
arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of theorem claims
that the limiting constant is determined by the complexity of the theory itself, which is assumed to be good measure of the strength of the theory. I exhibit certain strong counterexamples and
establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the
values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
"... ion in itself is not the goal: for Whitehead [117]"it is the large generalisation, limited by a happy particularity, which is the fruitful conception." As an example consider the theorem in ring
theory, which states that if R is a ring, f(x) is a polynomial over R and f(r) = 0 for every element of ..."
Cited by 6 (2 self)
Add to MetaCart
ion in itself is not the goal: for Whitehead [117]"it is the large generalisation, limited by a happy particularity, which is the fruitful conception." As an example consider the theorem in ring
theory, which states that if R is a ring, f(x) is a polynomial over R and f(r) = 0 for every element of r of R then R is commutative. Special cases of this, for example f(x) is x 2 \Gamma x or x 3 \
Gamma x, can be given a first order proof in a few lines of symbol manipulation. The usual proof of the general result [20] (which takes a semester's postgraduate course to develop from scratch) is a
corollary of other results: we prove that rings satisfying the condition are semi-simple artinian, apply a theorem which shows that all such rings are matrix rings over division rings, and eventually
obtain the result by showing that all finite division rings are fields, and hence commutative. This displays von Neumann's architectural qualities: it is "deep" in a way in which the symbol
- Presented at the 1997 Annual Meeting of the American Political Science Association, August 27–31 , 1997
"... Recent literature in voting theory has developed the idea that individual voting preferences are probabilistic rather than strictly deterministic. This work builds upon spatial voting models
(Enelow and Hinich 1981, Ferejohn and Fiorina 1974, Davis, DeGroot and Hinich 1972, Farquharson 1969) by intr ..."
Cited by 1 (0 self)
Add to MetaCart
Recent literature in voting theory has developed the idea that individual voting preferences are probabilistic rather than strictly deterministic. This work builds upon spatial voting models (Enelow
and Hinich 1981, Ferejohn and Fiorina 1974, Davis, DeGroot and Hinich 1972, Farquharson 1969) by introducing probabilistic uncertainty into the calculus of voting decision on an individual level.
Some suggest that the voting decision can be modeled with traditional probabilistic tools of uncertainty (Coughlin 1990, Coughlin and Nitzen 1981). Entropy is a measure of uncertainty that originated
in statistical thermodynamics. Essentially, entropy indicates the amount of uncertainty in probability distributions (Soofi 1992), or it can be thought of as signifying a lack of human knowledge
about some random event (Denbigh and Denbigh, 1985). Entropy in statistics developed with Kolmogorov (1959), Kinchin (1957), and Shannon (1948), but has rarely been applied to social science
problems. Exception...
, 1992
"... Introduction 2. Spatiotemporal Data 3. Dynamical Systems Concepts 4. Karhunen-Love Decomposition 5. Overview of kltool 6. Examples 7. Future Directions 8. Summary Bibliography Appendix: Galrkin
Projection for Kuramoto-Sivashinsky PDE 1. Introduction The quantitative analysis of low-dimensional ch ..."
Cited by 1 (0 self)
Add to MetaCart
Introduction 2. Spatiotemporal Data 3. Dynamical Systems Concepts 4. Karhunen-Love Decomposition 5. Overview of kltool 6. Examples 7. Future Directions 8. Summary Bibliography Appendix: Galrkin
Projection for Kuramoto-Sivashinsky PDE 1. Introduction The quantitative analysis of low-dimensional chaotic dynamical systems has been an active area of research for many years. Up until now, most
work has concentrated on the analysis of time series data from laboratory experiments and numerical simulations. Examples include Rayleigh-Bnard convection, Couette-Taylor fluid flow, and the
Belousov-Zhabotinskii chemical reaction [Libchaber, Fauve & Laroche '83], [Roux '83] and [Swinney '84]. The key idea is to reconstruct a representation of the underlying attractor from the time
series. (The time-delay embedding method [Takens '81] is one popular approach). Given the reconstructed attractor, it is possible to estimate various properties of the dynamics - Lyapunov e
, 2007
"... Authors Reliable predictions of how changing climate and disturbance regimes will affect forest ecosystems are crucial for effective forest management. Current fire and climate research in
forest ecosystem and community ecology offers data and methods that can inform such predictions. However, resea ..."
Cited by 1 (0 self)
Add to MetaCart
Authors Reliable predictions of how changing climate and disturbance regimes will affect forest ecosystems are crucial for effective forest management. Current fire and climate research in forest
ecosystem and community ecology offers data and methods that can inform such predictions. However, research in these fields occurs at different scales, with disparate goals, methods, and context.
Often results are not readily comparable among studies and defy integration. We discuss the strengths and weaknesses of three modeling paradigms: empirical gradient models, mechanistic ecosystem
models, and stochastic landscape disturbance models. We then propose a synthetic approach to multi-scale analysis of the effects of climatic change and disturbance on forest ecosystems. Empirical
gradient models provide an anchor and spatial template for stand-level forest ecosystem models by quantifying key parameters for individual species and accounting for broad-scale geographic variation
among them. Gradient imputation transfers predictions of fine-scale forest composition and structure across geographic space. Mechanistic ecosystem dynamic models predict the responses of biological
variables to specific environmental drivers and facilitate understanding of temporal dynamics and disequilibrium. Stochastic landscape dynamics models predict frequency, extent, and severity of
broad-scale disturbance. A robust linkage of these three modeling paradigms will facilitate prediction of the effects of altered fire and other disturbance regimes on forest ecosystems at multiple
scales and in the context of climatic variability and change.
"... Introduction Aeons of scientific thought have been dominated by the Newtonian philosophy that physical systems in the cosmos are predictable. Given the exact knowledge of a system initial
condition and the physical laws that govern it, it is possible to predict its long-term behaviour. However, the ..."
Add to MetaCart
Introduction Aeons of scientific thought have been dominated by the Newtonian philosophy that physical systems in the cosmos are predictable. Given the exact knowledge of a system initial condition
and the physical laws that govern it, it is possible to predict its long-term behaviour. However, there are systems whose nature defies any practical attempt to predict their behaviour. Such an
example is the weather system, which even with the latest technology, it is difficult to forecast accurately beyond two or three days. However, what is responsible for this unpredictability? Is it a
mere lack of adequate model/data or is it an intrinsic property of the system? This dichotomy and its causes is exactly what chaos theory attempts to investigate and illuminate. The concept of chaos
(or deterministic chaos) emerged in between early 60's and early 70's in theoretical and applied mathematics and it is now engulfed in the theory of non-linear dynamics
, 2000
"... The concept of path dependence refers to a property of contingent, non-reversible dynamical processes, including a wide array of biological and social processes that can properly be described as
`evolutionary'. To dispel existing confusions in the literature, and clarify the meaning and significance ..."
Add to MetaCart
The concept of path dependence refers to a property of contingent, non-reversible dynamical processes, including a wide array of biological and social processes that can properly be described as
`evolutionary'. To dispel existing confusions in the literature, and clarify the meaning and significance of path dependence for economists, the paper formulates definitions that relate the
phenomenon to the property of non-ergodicity in stochastic processes; it examines the nature of the relationship between between path dependence and `market failure', and discusses the meaning of
`lock-in'. Unlike tests for the presence of non-ergodicity, assessments of the economic significance of path dependence are shown to involve difficult issues of counterfactual specification, and the
welfare evaluation of alternative dynamic paths rather than terminal states. The policy implications of the existence of path dependence are shown to be more subtle and, as a rule, quite different
from those which have been presumed by critics of the concept. A concluding section applies the notion of `lock-in' reflexively to the evolution of economic analysis, suggesting that resistence to
historical economics is a manifestation of `sunk cost hysteresis' in the sphere of human cognitive development.
"... In very general terms, we call DYNAMICAL any kind of “system ” which evolves in time, starting from an initial time t0, and whose state at any later time t> t0 can be explicitly and uniquely
determined from the assumed knowledge of its initial state at t = t0 †. One of the major goals of the theory ..."
Add to MetaCart
In very general terms, we call DYNAMICAL any kind of “system ” which evolves in time, starting from an initial time t0, and whose state at any later time t> t0 can be explicitly and uniquely
determined from the assumed knowledge of its initial state at t = t0 †. One of the major goals of the theory of dynamical systems it to understand how
, 2004
"... Technological capabilities, invisible infrastructure and the un-social construction of predictability: the overlooked fixed costs of useful research ..."
Add to MetaCart
Technological capabilities, invisible infrastructure and the un-social construction of predictability: the overlooked fixed costs of useful research
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1120823","timestamp":"2014-04-20T12:23:19Z","content_type":null,"content_length":"36079","record_id":"<urn:uuid:822b9184-1116-40cb-9045-e3f4a518a557>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for: Author/Editor=(Shchigolev_Vladimir)
Modular Branching Rules for Projective Representations of Symmetric Groups and Lowering Operators for the Supergroup \(Q(n)\)
             
Memoirs of the There are two approaches to projective representation theory of symmetric and alternating groups, which are powerful enough to work for modular representations. One is based on
American Sergeev duality, which connects projective representation theory of the symmetric group and representation theory of the algebraic supergroup \(Q(n)\) via appropriate Schur (super)
Mathematical algebras and Schur functors. The second approach follows the work of Grojnowski for classical affine and cyclotomic Hecke algebras and connects projective representation theory of
Society symmetric groups in characteristic \(p\) to the crystal graph of the basic module of the twisted affine Kac-Moody algebra of type \(A_{p-1}^{(2)}\).
2012; 123 pp; The goal of this work is to connect the two approaches mentioned above and to obtain new branching results for projective representations of symmetric groups.
• Preliminaries
Volume: 220 • Lowering operators
• Some polynomials
ISBN-10: • Raising coefficients
0-8218-7431-4 • Combinatorics of signature sequences
• Constructing \(U(n-1)\)-primitive vectors
ISBN-13: • Main results on \(U(n)\)
978-0-8218-7431-8 • Main results on projective representations of symmetric groups
• Bibliography
List Price: US$71
Members: US$42.60
Members: US$56.80
Order Code: MEMO/
|
{"url":"http://ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Shchigolev_Vladimir&arg9=Vladimir_Shchigolev","timestamp":"2014-04-19T19:55:13Z","content_type":null,"content_length":"15429","record_id":"<urn:uuid:9713b0bf-40ca-499e-a6cc-94ad28e69dcc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
6.4.1 Edge Covering: The Chinese Postman's Problem
Consider the case of a mailman who is responsible for the delivery of mail in the city area shown in graph form on Figure 6.12. The mailman must always begin his delivery route at node A where the
post office.is located; must traverse every single street in his area; and, eventually, must return to node A (while using only the street grid shown). The lengths of the edges of the graph (where
each edge represents a street segment between street intersections indicated as nodes) are given on Figure 6.12. The graph is undirected.
The most natural question to ask in this case is: How should the mailman's route be designed to minimize the total distance he walks, while at the same time traversing every street segment at least
This type of edge-covering problem is known as the Chinese postman's problem We shall discuss the problem only for undirected graphs and we use Problem 6.6 to extend our results to directed graphs.
The Chinese postman's problem (CPP) has an interesting history. It was examined in detail for the first time by the great Swiss mathematician and
physicist Leonhard Euler in the eighteenth century. Euler tried to find a way in which a parade procession could cross all seven bridges shown in Figure 6.13 exactly once. These seven bridges were at
the Russian city of Konigsberg (now Kaliningrad) on the river Pregel.
Euler proved in 1736 that no solution to the Konigsberg routing problem exists. He also derived some general results that provide the motivation for the solution approaches to the CPP that have been
devised recently.
At this time, efficient (i.e., polynomial time) algorithms exist for solving the CPP on undirected graphs and on directed graphs. Interestingly, a solution approach to the CPP on directed graphs
(which, incidentally differs in important respects from the corresponding approach in the case of undirected
graphs) was developed in the course of a project aimed at designing routes for street-sweeping vehicles in New York City [BELT 74]. After several researchers had spent a good amount of time trying to
develop a similarly efficient procedure for solving the CPP on a mixed graph, it was finally shown [PAPA 76] that this last problem belongs to a class of very hard ("NP-complete") problems for which
it is unlikely that polynomial algorithms will ever be found (see also Section 6.4.6)!
^10 On the other hand, the mailman might instead wish to minimize the number of "pound miles" per day. This may result in a distinctly different route.
^11 The name derives fromt the fact that an early paper discussing this problem appeared in the journal Chinese Mathematics (MEI 52).
|
{"url":"http://web.mit.edu/urban_or_book/www/book/chapter6/6.4.1.html","timestamp":"2014-04-18T13:52:10Z","content_type":null,"content_length":"4056","record_id":"<urn:uuid:ace77911-69b0-4a86-ac75-68c9d4e8ecb1>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bending of Cantilever Beams
Chapter 3
Bending of Cantilever Beams
3.1 Introduction
In designing engineering structures, such as buildings and bridges, cantilever beams are a main structural element receiving bending forces. This chapter considers the bending of a static cantilever
beam of a constant cross section by a force at the end of the beam. The type of beam under consideration is also known as the Timoshenko beam due to the assumptions made in generating the equations
of equilibrium for a beam. The analytical solutions are obtained by using the Saint-Venant's semi-inverse method. This stress function approach is adapted to obtain the stress and displacement
distribution for various beam cross sections.
Structural Mechanics discusses closed-form solutions for the following set of beam cross sections:
For these cross sections, you can calculate the bending stress function, bending stresses, and the deflection of the center line of a beam. A number of two- and three-dimensional graphical functions
are also available to generate illustrative representations of deflected beams under bending loads. For example, you can draw pictures of a cross section and a deflected beam by using the function
CrossSectionPlot from the torsional analysis package TorsionAnalysis and the function BendingPlot from the BeamAnalysis package.
3.2 Circular Cross Section
Using the cross-section plotting function CrossSectionPlot, introduced in the torsional analysis package TorsionAnalysis, you can visualize a number of cross sections included in the package
This shows a circle with a unit radius and a number of options for the Mathematica function Plot.
To simplify the equations of equilibrium of an elastic body under bending, the stress function, which may be viewed as a potential for the stress components, is introduced. BendingStressFunction
calculates the stress function of various cross sections.
Compute the stress function for a circular shaft with radius y axis is P. The coordinate variables are x and y, respectively.
You compute the moment of inertia about the y axis using the function SectionInertialMoments from the SymCrossSectionProperties package.
Threading the replacement rule operator (->) over the symbols {, , mm, you generate a list of replacement rules for the moments of inertia terms.
Replacing the term x, y), the Poisson's ratio of the shaft material, and the radius of the cross section.
The components of the stress tensor due to the bending force are calculated from the stress function associated with the cross section. Structural Mechanics has the function BendingStresses to
calculate the bending stresses. Often, the x and y axes form the cross-section plane, and the x axis points downward. The z axis is the longitudinal axis of the cantilever beam (perpendicular to the
cross-section plane). The root cross section is at z = 0.
Again, using the Mathematica function Thread, you generate a list of replacement rules for the stress components.
In[10]:=(stresscomponents=Thread[strnot-> Together[str]])//TableForm
You can compute the deflection of the centerline of a cantilever beam under loading using the function CenterlineDeflection. The function calculates the deflection using a method based on the
elementary theory of bending.
You can compute the centerline curve with no rotation and no displacement at the root section, that is, u'[0] == 0 and u[0] == 0.
Now you check that the boundary conditions, u'[0] == 0 and u[0] == 0, are really satisfied by this solution.
As a numerical example, you use some values for the model parameters P, l, and
The condition of no displacement at both the root section and at the midpoint of the shaft yields a different deflection curve.
You use the same numerical values for the model parameters.
Again, check that the boundary conditions are satisfied.
The slope of the centerline curve at the root section is nonzero in the case of this boundary condition.
The second boundary condition is also satisfied.
The centerline curves for these two boundary conditions reveal the nature of the conditions. The line crossing the x axis at z = 5 corresponds to the condition u[0] == 0, u[l/2] == 0. The other line
is obtained for the condition u'[0] == 0, u[0] == 0.
You use BendingPlot to generate a three-dimensional picture of a beam under bending.
The rotation of the beam cross section with respect to its centerline is defined for the shear modulus G.
The function BendingPlot defines the rotation with respect to the undeformed cross section, so you only use the second term in the previous expression for
First, you need to compute the coordinates of the vertices of a cross section. For the circular cross section, you calculate a table for the coordinates of 10 sampling points.
This shows the sampled cross section.
You should calculate the rotation of the cross section from the components of the displacement vector. In this example, assume the rotation of the circular cross section is z, the y axis is the
horizontal axis, which is perpendicular to the beam cross section, and the horizontal axis is the x axis.
By choosing the command 3D ViewPoint Selector in the Input menu, you can set the ViewPoint option to a new value to show the deflected plot from a different angle. For example, you can view the
deflected beam on the x-z plane by setting the ViewPoint option. Note that the Shading option is set to False.
3.3 Elliptical Cross Section
Consider the following elliptical cross section with radii of 2 units in length by 1 unit in height.
The most general form of the stress function for an elliptical cross section, with the radii (y axisP.
Here are the stress components for this cross section using the function BendingStresses.
You calculate the moments of inertia for this section using the function SectionInertialMoments for the section object EllipseSector.
Threading the replacement rule operator (->) over the symbols {, , mm, you generate a list of replacement rules for the moment of inertia terms.
Next, replace the moment of inertia term in strs,
As in the case of the circular cross section, you first sample the ellipse, and then produce the plots of the deflected beam.
Show[Graphics[Line[cs]],PlotRange->{{-2,2},{-2,2}},Axes->True ];
This shows the bent beam in a three-dimensional plot with a cross-section rotation of -
3.4 Rectangular Cross Section
The domain name RectangularSection is used to specify a rectangular cross section. The cross section under consideration is specified by the coordinates of its vertices.
PlotRange->{2{-1,1},2{-1,1}} ,
The bending stress function in a rectangular section is expressed in series form. Mathematica 4 can express the sums in calculating the bending stress function in terms of hypergeometric functions.
However, the output is a very long algebraic expression. To avoid the associated memory and CPU time problems, the Structural Mechanics function BendingStressFunction for RectangularSection returns
the bending stress function in HoldForm. To evaluate the output, use the ReleaseHold function.
Using functions in the package SymCrossSectionProperties, you can compute the moments of inertia for a rectangular cross section.
The bending stress components in a rectangular section are expressed in series form.
Here are the moments of inertia for this cross section.
You replace the moment of inertia, moi for the rectangular cross section.
This computes the deflection of the centerline with a set of given boundary conditions u'[0] == 0 and u[0] == 0; it allows no deflection and no rotation for the cross section at z = 0.
Here you can verify that these two boundary conditions are satisfied.
Now you can visualize the deflection of a rectangular cantilever beam for the deflection curve uz.
In[52]:=uz=.0001 z^2-.0002 z^3
Create vertices of the cross section.
Assume the rotation term x = 0, y = 0. As previously noted, you can calculate the rotation using the following formula:
ViewPoint->{1.749, -2.792, 0.774},
3.5 Equilateral-Triangular Cross Section
Here are the coordinates of the vertices of an equilateral-triangular cross section for the height a = 1 unit length.
You generate the shape of the cross section with CrossSectionPlot.
PlotRange->{{-1,1},{-1,1}} ,
BendingPlot generates the three-dimensional plot of a bent equilateral-triangular beam. In order to avoid the torsional load due to a force applied at the tip of a beam, the force must be applied at
the shear center of the cross section.
ViewPoint->{1.749, -2.792, 0.774}];
You obtain the stress function for a generic equilateral-triangular section using the domain object EquilateralTriangle in the function BendingStressFunction. Note that the bending function for the
equilateral triangle is independent of Poisson's ratio
You can generate the moments of inertia for the domain object EquilateralTriangle using the domain object RightTriangle for a = 1 unit length.
You can also compute the stress components in the beam by using BendingStresses for the cross-section object EquilateralTriangle.
We replace the value of EquilateralTriangle, a ] in the stress components strs.
Here we compute the deflection of the centerline with a set of given boundary conditions.
|
{"url":"http://reference.wolfram.com/applications/structural/BendingofCantileverBeams.html","timestamp":"2014-04-20T21:21:28Z","content_type":null,"content_length":"70772","record_id":"<urn:uuid:18f49a13-d9dd-4c62-846b-028a1916f6f3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graphing Inequalities Word Problems
Submit your word problem on graphing inequalities:
Are you looking for graphing inequalities word problems? TuLyn is the right place. We have tens of word problems on graphing inequalities and hundreds on other math topics.
Below is the list of all word problems we have on graphing inequalities.
Graphing Inequalities Word Problems
• Graphing Inequalities (#7296)
A fund raiser is being planned at school. The goal is to raise $1500. If you have 300 adult tickets and 100 children tickets to sell, how much should you charge for each type of ticket? Graph all
• Suppose that you are a hospital dietician (#1302)
Suppose that you are a hospital dietician. A patient must have at least 165 milligrams and no more than 330 milligrams of cholesterol per day from a diet of eggs and meat. Each egg provides 165
milligrams of cholesterol and each ounce of meat provides 110 milligrams of cholesterol.
Graph the system of inequalities in the first quadrant (explain why). Identify at least two ordered pairs that are part of the solution ...
Find word problems and solutions on graphing.
|
{"url":"http://tulyn.com/wordproblems/graphing_inequalities.htm","timestamp":"2014-04-17T10:05:22Z","content_type":null,"content_length":"11136","record_id":"<urn:uuid:eaf8c582-d56a-4f78-8c46-05e55dcf4818>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Automorphisms and derivations of U_q(sl_4^+)
Launois, S. and Lopes, S.A. (2007) Automorphisms and derivations of U_q(sl_4^+). Journal of Pure and Applied Algebra , 211 (1). pp. 249-264. ISSN 0022-4049 . (The full text of this publication is not
available from this repository)
We compute the automorphism group of the q-enveloping algebra U-q(sl(+)(4)) of the nilpotent Lie algebra of strictly upper 4 triangular matrices of size 4. The result obtained gives a positive answer
to a conjecture of Andruskiewitsch and Dumas. We also compute the derivations of this algebra and then show that the Hochschild cohomology group of degree I of this algebra is a free (left) module of
rank 3 (which is the rank of the Lie algebra Sl(4)) over the center of U-q(sl(4)(+))
• Depositors only (login required):
|
{"url":"http://kar.kent.ac.uk/3156/","timestamp":"2014-04-17T15:43:15Z","content_type":null,"content_length":"21901","record_id":"<urn:uuid:6ea5b799-fb38-43d4-beaa-b6ed295c07da>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reduced Order Long Interconnect Modeling
"... At higher operating (GHz) frequency the interconnect wire does not behave like a simple metallic resistance but as a transmission line. This being the main reason for signal integrity losses in
high frequency interconnect line. Signal Integrity (SI) losses in the interconnect wires are the disturban ..."
Add to MetaCart
At higher operating (GHz) frequency the interconnect wire does not behave like a simple metallic resistance but as a transmission line. This being the main reason for signal integrity losses in high
frequency interconnect line. Signal Integrity (SI) losses in the interconnect wires are the disturbances coming out of their distributed nature of parasitic capacitances, resistances and inductances
at high frequency operation. These SI losses are further aggravated if multiple interconnect lines couple energy from or to each other. In the paper two interconnect lines, as per maximal aggressor
fault model [9], have been considered where the aggressor line is assumed to couple energy to the victim line only, based on which the cross-talk model of an aggressor and a victim line has been
developed using ABCD two-port network model. After the model order reduction by Pade-approximation various signal integrity losses, such as delay, overshoot, undershoot or glitch etc., for a given
set of applied input transitions, are estimated numerically and verified through experimental PSPICE simulation. Based on the above prediction of SI losses the applied input transitions can be
identified as potential test patterns that are believed to excite the SI faults. In order to simplify the crosstalk model computation only the capacitive coupling is considered here because,
inductive coupling will contribute more significantly only if the operating frequency is higher than several GHz. 1.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=5017882","timestamp":"2014-04-19T19:25:59Z","content_type":null,"content_length":"13285","record_id":"<urn:uuid:12e22819-5fc3-402a-ab26-d4676cf1f7e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Life Cycles
Life Cycles
Are there periodic booms and busts in the diversity of life on Earth? Hear a tale of fossils and Fourier transforms
Inside the Black Box
Does that wiggly line reveal a periodic oscillation? There are certainly plenty of humps and dips, including deep valleys that correspond to several mass extinctions. But are the ups and downs
periodic, with a fixed time scale? Or do they look more like the meandering of a random walk? The eye is not a reliable judge in such matters, sometimes inventing regularities that don't exist and
missing others that do.
A better tool for teasing out periodicity is Fourier analysis, Joseph Fourier's mathematical trick for taking apart a curve with arbitrarily intricate wiggles and reassembling it out of simple sine
waves. The Fourier transform identifies a set of component waves that add up to a replica of any given signal. The result can be presented as a power spectrum, which shows the amount of energy in the
signal at each frequency.
Fourier analysis is often treated as a black box. Put in any time-domain signal, turn the crank, and out comes the frequency-domain equivalent, with no need to worry about how the process works.
Muller has argued against this kind of mystification; he is co-author (with Gordon J. MacDonald) of an excellent book on spectral analysis that opens the lid of the box. Among other things, Muller
and MacDonald present a complete program for Fourier analysis in seven lines of basic.
The black-box approach to Fourier transforms is not only unnecessary but also misleading. It's simply not true that you can run any data through a Fourier analysis and expect a meaningful result. On
the contrary, rather careful preprocessing is needed.
Here are the preliminaries Muller and Rohde went through with the fossil-diversity data. First they selected only the "well-resolved genera, those dated to the stage or substage level; they also
excluded all genera known only from a single stratum. This refinement process discards fully half of the data set. Next, they calculated the cubic polynomial that best fits the data and subtracted
this "detrending curve from the data. The residual values left by the subtraction form a new curve in which the largest-scale (or lowest-frequency) kinks have been straightened out. This is the curve
they finally submitted to Fourier analysis.
Muller and Rohde's result—or rather my reconstruction of something like it—appears to the right. The spectrum has a tall spike at a period of 62 million years and a lesser peak at 140 million years,
indicating that these two periods account for most of the energy in the signal.
|
{"url":"http://www.americanscientist.org/issues/pub/life-cycles/4","timestamp":"2014-04-20T11:07:22Z","content_type":null,"content_length":"124339","record_id":"<urn:uuid:5abf8bc9-eccc-4d3b-9020-f7cc22611cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate the Volume of a Rectangular Prism
Edit Article
Edited by DifuWu, Luv_sarah, Nicole Willson, Teresa and 4 others
Calculating the volume of a rectangular prism is easy once you know its width, length, and height. If you want to know how to calculate the volume of a rectangular prism, just follow these easy
1. 1
Find the length of the rectangular prism. The length is the longest side of the flat surface of the rectangle on the top or bottom of the rectangular prism.
2. 2
Find the width of the rectangular prism. The width is the shorter side of the flat surface of the rectangle on the top or bottom of the rectangular prism.
3. 3
Find the height of the rectangular prism. The height is the part of the rectangular prism that rises up. Imagine that the height is what stretches up a flat rectangle until it becomes a
three-dimensional shape.
4. 4
Multiply the length, the width, and the height. You can multiply them in any order to get the same result. The formula for finding the volume of a rectangular prism is the following: Volume =
Length * Height * Width, or V = L * H * W.
□ Ex: V = 5 in. * 4 in. * 3 in. = 60 in.
5. 5
State your answer in cubic units. Since you're calculating volume, you're working in a three-dimensional space. Just take your answer and state it in cubic units. Whether you're working in feet,
inches, or centimeters, you should state your answer in cubic units.
Article Info
Thanks to all authors for creating a page that has been read 92,457 times.
Was this article accurate?
|
{"url":"http://www.wikihow.com/Calculate-the-Volume-of-a-Rectangular-Prism","timestamp":"2014-04-21T08:13:41Z","content_type":null,"content_length":"66176","record_id":"<urn:uuid:68fc0590-006b-4230-88ee-719b4255ea20>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/pvtgunner/medals","timestamp":"2014-04-19T20:06:01Z","content_type":null,"content_length":"104339","record_id":"<urn:uuid:cfb1cd79-d2e2-48fd-b0d1-02071e1f43fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
About Pythagorean triple...
August 8th 2010, 07:12 PM #1
Jul 2010
Hi! Can anyone provide a proof for the following:
The following formula
generates all the positive integers that satisfy $a^2+b^2=c^2$.
I believe we can use an argument similar to the one in this thread post #22.
What undefined suggests is good! Here is another way.
You can suppose that $a,b,c$ are relatively prime. Then it is impossible for both $a,b$ to be odd (because then $c^2\equiv 2 \mod 4$, which is impossible), and impossible for both of them to be
even, which would contradict the hypothesis that $a,b,c$ are relatively prime. Hence, say $a$ is even and $b,c$ are odd. Write $(c-b)(c+b)=a^2$. Now both $c-b,c+b$ are even, say $c-b=2u,c+b=2v$;
moreover $u,v$ are relatively prime. Since $a^2=4uv$ is a square, both $u,v$ must be squares, so we have $c-b=2m^2. c+b=2n^2$, i.e. $c=m^2+n^2, b=n^2-n^2, a=2mn$.
August 8th 2010, 07:26 PM #2
August 8th 2010, 09:15 PM #3
|
{"url":"http://mathhelpforum.com/number-theory/153111-about-pythagorean-triple.html","timestamp":"2014-04-17T12:50:17Z","content_type":null,"content_length":"41875","record_id":"<urn:uuid:1176ff0e-84d3-46b1-84cb-23cbd6ff1aea>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Javascript calculator estimates interspecies dosage scaling between animals of different weights via exponential allometry.
Enter weights for the two animals, the dose used for the first animal and an exponent for the allometric calculation. Generally, allometric scaling uses an exponent of 0.75-0.80.
Click on Calculate! and the estimated dose for the second animal is provided, along with the ratio of dose to weight for both animals.
As an example, if the dosage for a 0.25 kg rat is 0.1 mg, then using an exponent of 0.75, the estimated dosage for a 70 kg human would be 6.8 mg. While the dose to weight ratio for the rat is 0.4 mg/
kg, the value for the human is only about 0.1 mg/kg.
The calculator may also be used going from a large animal to a small animal. For example, if the dosage for a 70 kg human is 10 mg, then using an exponent of 0.80, the estimated dosage for a 0.02 kg
mouse would be 0.015 mg. The dose to weight ratio would increase from 0.14 mg/kg for the human to 0.73 mg/kg for the mouse.
Other pharmacokinetics parameters often obey the following exponentials: clearance 0.75, volume of distribution 1.0, and elimination half-life 0.25.
West & Brown (J Exp Bio 208, 1575-1592, 2005) have explored the reasons why metabolic rate scales as the ¾ power with body weight, and derive a hydrodynamic theory to explain this universal result.
Besides explaining metabolic rates they also show why lifespan goes like the +¼ power, heart rate goes as the -¼ power, and hence all species have a similar number of heartbeats during their
lifetimes (about 1.5 billion). A similar consideration of scaling of blood flow (+¾) and resistance (-¾) explains why blood pressure is constant across species.
Hu and Hayton have discussed whether the basal metabolic rate scale is a 2/3 or 3/4 power of body mass. The exponent of 3/4 might be used for substances that are eliminated mainly by metabolism or by
metabolism and excretion combined, whereas 2/3 might apply for drugs that are eliminated mainly by renal excretion.
Here's a list of typical animal weights:
Species Weight, kg
Human 65.00
Mouse 0.02
Hamster 0.03
Rat 0.15
Guinea Pig 1.00
Rabbit 2.00
Cat 2.50
Monkey 3.00
Dog 8.00
Here is a list of other calculations that can be performed, taken from Ritschel and Banerjee (1986) without permission.
To perform a calculation, stick 1.0 into Weight 1, the weight of the animal of interest into Weight 2, the allometric coefficient (b) into Dose 1, and the allometric exponent (a) into Exponent.
The result will be provided in Dose 2, with the units as given in the table below.
Property Allometric Exponent Allometric Coefficient Units
(a) (b)
Basal O2 consumption 0.734 3.8 ml/hr
Endogenous N output 0.72 0.000042 g/hr
O2 consumption by liver slices 0.77 3.3 ml/hr
Creatine 0.69 8.72 ml/hr
Inulin 0.77 5.36 ml/hr
PAH 0.80 22.6 ml/hr
Antipyrine 0.89 8.16 ml/hr
Methotrexate 0.69 10.9 ml/hr
Phenytoin 0.92 47.1 ml/hr
Aztreonam 0.66 4.45 ml/hr
Ara-C and Ara-U 0.79 3.93 ml/hr
Volume of distribution (Vd)
Methotrexate 0.918 0.859 l
Cyclophosphamide 0.99 0.883 l
Antipyrine 0.96 0.756 l
Aztreonam 0.91 0.234 l
Kidney weight 0.85 0.0212 g
Liver weight 0.87 0.082 g
Heart weight 0.98 0.0066 g
Stomach and intestines weight 0.94 0.112 g
Blood weight 0.99 0.055 g
Tidal volume 1.01 0.0062 ml
Elimination half-life
Methotrexate 0.23 54.6 min
Cyclophosphamide 0.24 36.6 min
Digoxin 0.23 98.3 min
Hexobarbital 0.35 80.0 min
Antipyrine 0.07 74.5 min
Turnover time
Serum albumin 0.30 5.68 day-1
Total body water 0.16 6.01 day-1
RBC 0.10 68.4 day-1
Cardiac circulation 0.21 0.44 day-1
Home page • Email • May 13, 2012
|
{"url":"http://home.fuse.net/clymer/minor/allometry.html","timestamp":"2014-04-20T21:08:08Z","content_type":null,"content_length":"29817","record_id":"<urn:uuid:7be18daf-74dc-4f67-a8ff-f3dea688449d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
THE ASTRONOMICAL VARIANTS
OF THE LUNAR APOGEE - BLACK MOON
by Juan Antonio Revilla
I. Introduction
The Black Moon / Lunar Apogee or "Empty Focus" is an essential element of astrological and astronomical symbolism. Its action is very powerful in every horoscope, but unfortunately it tends to be
underestimated and there is great confusion among astrologers about how to calculate it. I will start by giving their definitions and categorization, without delving into their astrological meaning
Astronomically, there are 3 types and 7 variants of "lunar apogee": the types are "osculating", "mean", and "natural". From the "osculating geocentric apogee" (1) are derived the "osculating
topocentric apogee" (2), the "osculating topocentric perigee" (3), and the "osculating topocentric empty focus of the lunar orbit" (4). To these are added the so-called "interpolated" apogees, which
Riyal calls "natural apogee" (5) and "natural perigee" (6). The "Mean" apogee (7) by definition excludes short-period fluctuations and therefore has no variants (see explanation further below).
II. The Mean Apogee
The "Mean Apogee" is the most popular of the alternative "Black Moons", mainly because it has been used for a much longer time than all the others, and also because often astrologers are not aware of
the alternatives. It seems to have been introduced into astrological practice in France by Don Neroman (Pierre Rougié, 1884-1953) in the early or mid 1930's. He apparently was also the first to call
it "Black Moon".
Astronomically, this point corresponds to the apogee or perigee of the reference lunar orbit used by astronomers to construct lunar ephemerides. It moves very regularly in a perfectly circular orbit
with a radius of 405,863 Km around the Earth/Moon barycenter, i.e., its positions are not geocentric, but the difference between its barycentric and geocentric position is never more than 0,40'.
Despite what one may think in theory, though, the barycenter does not really have any effect in the calculation of the lunar apogee. Please consider the following:
Astronomically, the concept of "mean apogee" (or of mean elements in general) excludes by definition any differences between geocentric and barycentric. These differences represent short-period
fluctuations that have been already averaged-out in the "mean position". In other words, when talking about mean orbital elements and positions, the geocenter and the barycenter coincide, there is no
difference between them. Therefore, there is no "barycentric" *mean* apogee. For this same reason, there really is no topocentric *mean* apogee, since the difference between geocentric and
topocentric is a short-term fluctuation and is excluded in the definition of "mean", it has already been averaged out. (Of course one can always calculate barycentric and topocentric equivalents of a
"mean" position, but they have no astronomical meaning.)
Therefore, when using the MEAN apogee or perigee, the geocenter, barycenter, and topocenter all coincide, there is only one, not 3 positions. Likewise, the MEAN empty focus and the MEAN apogee will
always be aligned, i.e., their longitude will always be the same. This sounds a little odd, but this is what the *mean* lunar elements represent: an average devoid of any real-world short-term
In the case of the "true" or osculating lunar apogee (see below), one must keep in mind that it is always calculated from the geocentric position and velocity vectors of the Moon, therefore there is
no barycentric equivalent. The periodic fluctuations between barycentric and geocentric positions do not have any effect on it, because the barycenter is never part of the equation when it is
calculated. In other words: there is no "barycentric" true apogee or empty focus. The osculating apogee is already geocentric from the start, and it always coincides with the position of the empty
focus of the lunar orbit.
NOTE: I have to thank Alois Treindl for helping me see these points clearly after a discussion last February in the Riyal_Compute forum. After the discussion Alois added a clarifying note to the
Swiss Ephemeris documentation and I added a corresponding explanation in my compilation "Variants of the Apogee" (Alois and I disagree, however, on the general significance of the osculating apogee).
Seen geocentrically, the line of the apsides (i.e., from perigee to apogee across the orbit) is identical to the position of the "empty" or 2nd focus, sometimes called "kenofocus". This empty focus
is an essential aspect of the Black Moon symbolism, and is the basis of the ideas and metaphors I developed in my Black Moon essay at my site.
The circular, extremely regular motion of the fictitious mean lunar apogee / Black Moon belongs to the world of solar symbolism. Such type of motion is alien to the lunar world and to the symbolism
of the Black Moon. Although I respect the experience of astrologers who don't question the validity of this mathematical point, my mind, accustomed to find the astronomical symbolism reflected in the
astrological symbolism, finds it impossible to identify this point with the "dark" world of lunar symbolism represented by the Black Moon.
III. Topocentric Positions
One of the many proofs that astrology does not deal with what really happens in the sky, besides the fact that the mean lunar apogee is completely fictitious and its motion has little to do with the
real changes in the lunar orbit and in lunar motion, is to realize that the apparent topocentric position of the Moon (relative to the observer), which can differ by more than 1 degree from its
geocentric position, is almost never used by astrologers, that almost invariably use the Moon's geocentric position.
The difference between the geocentric positions of the lunar apogee and the lunar empty or 2nd orbital focus reaches 6.4 degrees every 27 days; however, when we compute their topocentric positions,
this difference (a result of "parallax") reaches a maximum of 7.9 degrees every 6 months. It gives rise to 3 topocentric variants of the Mean Black Moon, which can be labelled as:
- the topocentric equivalent of the geocentric position of the Mean Lunar Apogee
- the topocentric equivalent of the geocentric position of the Mean Lunar Perigee, and
- the topocentric equivalent of the geocentric position of the 2nd or empty focus of the mean lunar orbit
Topocentric positions are normally not used in Astrology, but the large difference of up to 7.9 degrees between the geocentric and the topocentric empty focus of the lunar orbit shows that of all
points in an astrological chart, this is the one closest to the Earth, much closer than the Moon, giving a more personal perspective of basic or primitive lunar symbolism than anything else in a
chart. The unicorn and Priapus-like or --as Axel Harvey calls it-- Charybdis-like symbolism of this point is enormous.
1-) the ordinary, traditional "Mean Black Moon" or mean lunar apogee is not geocentric but barycentric. It orbits the Earth/Moon barycenter, located in a straight line between the centers of the
Earth and of the Moon about 1350 Km inside the Earth's crust (both the Earth and the Moon orbit this point once every 27 days).
2-) the Mean Apogee is a fictitious point that is used only as reference, it does not represent the true orbit of the Moon at a given point in time. Its orbit is a perfect circle with a radius of
405,863 Km, and its motion is almost completely linear or - symbolically-- "solar".
3-) if instead of the apogee we think of the mean empty (or second) focus of the lunar orbit, which is closer to the basic "Black Moon" symbolism, its barycentric position is the same as that of the
mean apogee, but the radius of its orbit is 42,230 Km.
4-) the apogee is measured from the vernal equinox (0 Aries) along the lunar orbit and not the cliptic, so it has a latitude that is a function of its distance from the node and can reach more than
+- 5 degrees. This latitude produces an oscillation of +- 0,06' in the ecliptical longitude of the apogee.
5-) the transformation from barycentric to geocentric is never made in astrological practice. It results in an oscillation of +- 0,40' when the Black Moon is defined as the apogee, and of +- 6.4
degrees when it is defined as the empty focus of the orbit. In other words, the geocentric position of the apogee can differ by more than 6 degrees from the geocentric position of the empty focus of
the orbit. They are the same only in the barycentric reference frame, not in the geocentric.
6-) if we are interested in the observer or topocentric --not the geocentric-- point of view, then the difference between the barycentric and the topocentric positions will periodically reach as much
as 7.9 degrees.
7-) the distinction between the barycentric and the geocentric position applies also to the mean lunar node. The transformation from barycentric to geocentric produces an oscillation of +- 0,44' in
the position of the node, although this distinction, like in the case of the apogee, is never made.
8-) when calculated geocentrically instead of barycentrically, the mean perigee and mean apogee will no longer be 180 degrees from each other. They will be different also from the topocentric point
of view.
9-) the 3 frames of reference: barycentric, geocentric, and topocentric plus the 2 points of the axis (apogee and perigee, ascending and descending node) produce the following variations of the lunar
apogee and node:
- ordinary mean apogee/perigee (barycentric)
- geocentric mean apogee (+- 0,40')
- geocentric mean perigee (+- 0,40')
- geocentric mean empty focus (+- 6.4 degrees)
- topocentric mean apogee (+-0,40')
- topocentric mean perigee (+- 0,40')
- topocentric mean empty focus (+- 7.9 degrees)
- ordinary mean ascending node (barycentric)
- geocentric mean ascending node (+-0,44')
- geocentric mean descending node (+-0,44)
- topocentric mean ascending node (+-0,44')
- topocentric mean descending node (+-0,44)
We have then no less than 7 variations of the *mean* Black Moon only, and 5 variations of the mean node. Their values will all be slightly different.
IV. Riyal's output
Here are the values calculated by Riyal 1.4 in the "Tables --> Astronomical Data" routine for the time I am writing this:
Mean Node = 28,01.3 Tau
--geocentric = 28,45 Tau
--descending = 27,18 Sco
--topocentric = 28,25 Tau
--descending = 27,37 Sco
Mean Apogee = 14,23.6 Tau
--geocentric = 15,00 Tau
--perigee = 13,44 Sco
--empty focus = 20,22 Tau
--topocentric = 14,53 Tau
--perigee = 13,50 Sco
--empty focus = 18,45 Tau
and a sample of part of the ephemerides routine display ("Special-->Generate Ephemerides-->Apsides and node-->Moon..."):
M.Bari M.Geoc M.Peri M.Foco
10 Jun 2003| 13Ta16 | 13Ta00 | 13Sc35 | 10Ta50 |
11 Jun 2003| 13Ta23 | 13Ta15 | 13Sc32 | 12Ta16 |
12 Jun 2003| 13Ta30 | 13Ta31 | 13Sc28 | 13Ta45 |
13 Jun 2003| 13Ta36 | 13Ta47 | 13Sc24 | 15Ta13 |
14 Jun 2003| 13Ta43 | 14Ta03 | 13Sc20 | 16Ta37 |
15 Jun 2003| 13Ta50 | 14Ta17 | 13Sc19 | 17Ta53 |
16 Jun 2003| 13Ta56 | 14Ta29 | 13Sc19 | 18Ta57 |
17 Jun 2003| 14Ta03 | 14Ta40 | 13Sc21 | 19Ta46 |
18 Jun 2003| 14Ta10 | 14Ta49 | 13Sc26 | 20Ta18 |
19 Jun 2003| 14Ta16 | 14Ta55 | 13Sc33 | 20Ta31 |
20 Jun 2003| 14Ta23 | 14Ta59 | 13Sc43 | 20Ta24 |
MBari = ordinary Mean Black Moon, reduced to the ecliptic
M.Geoc = geocentric mean apogee
M.Peri = geocentric mean perigee
M.Foco = geocentric mean empty or 2nd focus
Besides the comments of Alois Treindl in his source code for "Placalc" in the mid or late 80's, which he later reiterated in the Swiss Ephemeris, I have never seen anyone else making a distinction
between the barycentric and the geocentric position of the mean lunar apogee (this distinction does not exist in the case of the true apogee, which is already geocentric). Since I am not familiar
with the French "Lune Noire" literature, I don't know if that distinction is made in Europe.
Alois Treindl's commentary is what inspired me to investigate further the matter of "mean apogee positions". Unfortunately, he never went on to offer the calculations in his software (barycentric to
geocentric and then geocentric to topocentric positions of the lunar node and apogee), so I have nothing against which to check Riyal's accuracy. The Swiss Ephemeris assumes the mean node and apogee
as geocentric and then tranforms them to topocentric; the topocentric positions of the true or osculating node, apogee, perigee, and second focus --which do not need to be transformed from
barycentric to geocentric-- agree with Riyal's.
NOTE: In early February 2005, I personally asked Alois Treindl why the distinction between the geocentric and the barycentric apogee, though mentioned in the Swiss Ephemeris documentation, is never
made in the Swiss Ephemeris program itself. We had an exchange of emails on this subject in the "Riyal_compute" forum, the result of which was an important clarification. Alois wrote: "The whole
concept of a mean orbits precludes consideration of such short term (monthly) fluctuations. In the temporal average, the EMB [Earth/Moon Barycenter] coincides with the geocenter... It is probably
pointless to compute topocentric positions of mean points - a contradiction in itself. Don't do it, or don't expect meaningful results from it." He subsequently added the following note in the Swiss
Ephemeris documentation: "[added by Alois 7-feb-2005, arising out of a discussion with Juan Revilla] The concept of 'mean lunar orbit' means that short term. e.g. monthly, fluctuations must not be
taken into account. In the temporal average, the EMB coincides with the geocenter. Therefore, when mean elements are computed, it is correct only to consider the geocenter, not the Earth-Moon
Barycenter. In addition, computing topocentric positions of mean elements is also meaningless and should not be done." As a result of this clarification, the conversion from barycentric to geocentric
--and from geocentric to topocentric-- mean lunar node and apogee was immediatly removed from Riyal.. The "true" or osculating apogee is not affected by any of this.
V. The "Corrected" Apogee
The erroneous "corrected" apogee used in Europe is based on a regular sinusoidal correction of 12.x degrees (the fraction apparently varies among different authors) applied to the mean apogee. This
came as a result of efforts to find the "true" position when the astronomical theory necessary to calculate this "true" position had not been developed. The ability to calculate it was there, but the
method of calculation, based on the position and velocity vectors of the Moon (the method used in Riyal), was not readily accessible to astrologers.
This situation changed only after the publication of "Lunar Tables and Programs " in 1991, authored by Jean Chapront and Michelle Chapront-Touzé, astronomers at the Bureau des Longitudes and
developers of the most modern and accurate lunar theory to this date, called " ELP-2000" (Riyal uses a truncated long-term version of it called "ELP2000-85", published by the same authors in 1988. I
had been following the development of this theory since the authors' first publications in the early 1970's until its final working version introduced world-wide in 1984.
The tables of the trigonometric expansion of the mean lunar apogee to produce an accurate approximation of the true or osculating apogee, published in the above-mentioned book in 1991 and based on
ELP2000-85, made evident that the "12.x" degree correction used by some astrologers until then (promoted by reputable thinkers such as Jean Carteret), was wrong from every point: one, because the
real maximum difference between the true and the mean apogee was not 12-13 degrees but 30, two, because the "main solar perturbation term" (period=31.8 days) on which this correction was based was
not "12.x" degrees but 15.4 degrees, and three, because the so called "correction" used by astrologers, and for which tables had been published, etc., was being applied in the opposite (wrong)
These facts were not known even to astronomers in general before the 1991 book by the Chapronts. The "ignorance" here was for practical and historical reasons: the true or osculating lunar orbit was
a factor that had not been a part of present-day lunar theory since it began to be developed in the late 1800's. Lunar theory was (and is to this date) based on a reference or idealized ellipse that
establishes the so-called "Delaunay arguments" from which to build the trigonometric expansion of the 3 lunar coordinates: longitude, latitude, and distance. In this process, the mean reference
perigee/apogee is used to form the arguments of the trigonometric terms, but the true instantaneous position of the apogee is never needed.
It may come as a surprise that an accurate theory of the true or osculating lunar apogee did not come until 1991. This may give an idea of the complexity of the Moon's motion and orbit in space, and
the enormous difficulties that theoreticians of celestial mechanics had to face to develop a suitable theory for it. So it is no wonder that astrologers -- or even astronomers-- had an erroneous
understanding of the real instantaneous motion and gravitational perturbations of the lunar apogee/perigee. There were theoretical developments and approximations, but nobody had tried the real
numerical solution until the Chapronts published their results.
Nevertheless it is very common for astrologers to ignore or misunderstand the astronomical facts, and to this date there are still people working with this erroneous "corrected" apogee or Black Moon.
The 12/13- degree gap that opens and closes periodically between the corrected (called "true" by its users adding to the confusion) and the mean position is even given special significance by some
astrologers... interesting concept this gap... but based on something that is mathematically and astronomically erroneous or non-existent. Astrology has many examples of this: the Uranian planets,
the Dark Moon, and in my opinion, the mean lunar apogee (more on this later).
One wonders, with so many options available (we still have to see the variants of the true or osculating apogee), if the Black Moon has meaning at all. Astrology is full of cases like this (e.g.,
house division, asteroids...). There is no easy answer. However, I think this question disappears when astrologers realize that Astrology is what astrologers do: work with more or less fancy and
abstract mathematical points in the imagination. Astrology has very little or nothing to do with our relationship with "the sky out there" or with "the cosmos". If you realize this then the efficacy
of imaginary points comes to light under a different perspective, one which has to do with cognitive patterns and structures in the human brain and not with astronomical events. It becomes a matter
of individual idiosyncrasy the tools you choose to work with, and there is no fear or prejudice against tools that have no solid astronomical basis. Some people simply do not need that basis...
however, I do, and I think that this basis is important in order to keep Astrology (or astrologers' minds) disciplined and "down to earth", i.e., to keep Astrology healthy.
V. The "True" or Osculating Apogee
Unlike the mean apogee where topocentric positions do not make much sense astronomically (and therefore, there is really never any difference between the mean empty focus and the mean apogee), the
osculating or "true" apogee, by definition, represents the actual, real-world fluctuations of the lunar orbit, so calculating its topocentric equivalents makes sense astronomically.
The word "osculating" is explained at the end of a long compilation of posts that you will find in my site discussing the astronomical definition of the Black Moon or lunar apogee:
This is the relevant part (written in June 2000):
[BEGIN QUOTE]
If you look in a Latin dictionary, you find:
--"OSCULATIO" = kiss, the action of kissing "OSCULOR, OSCULATUM SUM": to kiss, to caress, to pet.
The word is also in the Webster's:
--"osculant" = united by certain common characteristic "oscular" = pertaining to an osculum, pertaining to the mouth or kissing
--"osculate" = 1- to kiss; 2- to come into close contact or union; 3- (geometry, of a curve) to touch another curve or another part of the same curve so as to have the same tangent and curvature at
the point of contact; 4- to bring into close contact or union; 5- to touch another curve or another part of the same curve in osculation; 6- (archaic) to kiss
--"osculating plane" = the plane containing the circle of curvature of a point on a given curve.
--"osculation" = 1- the act ok kissing; 2- a kiss; 3- close contact; 4- (geometry) the contact between two osculating curves or the like
--osculum" = a small mouthlike aperture as of a sponge.
The "moment of osculation" is only a brief moment: the next instant the "point of osculation" will have shifted in space...
[END QUOTE]
That is, the real orbit of an object --and in particular the orbit of the Moon-- is changing all the time due to the attraction of many or of several perturbing gravitational forces, so the moment of
osculation is only an instant, it represents an "instantaneous orbit" that "kisses" the real orbit and then diverges as the real orbit is accelerated. It is like opening a momentary window to observe
the orbit at that instant, knowing that it will change its appearance as soon as we close the window again, or like taking a picture that "freezes" the instantaneous reality of the orbit.
By "real orbit" is meant the orbital plane as it looks through time; it can be conceived as a collection or accumulation of these instants, of an endless series of instants describing its changes or
oscillations, the slightly different shapes that the orbit assumes through time. Of course the word "real" used here is very relative, because it does not imply that the osculating orbit is not real.
Some people prefer to think arbitrarily that an osculating orbit --a perfectly defined keplerian ellipse--, does not correspond to "reality", forgetting the fact that the osculating ellipse is the
accurate representation of an object's trajectory in space at a given moment. The Keplerian ellipse, i.e., the osculating orbit, describes the motion of the object at that moment of time.
This is exactly what we do in Astrology when we make a chart: we "freeze" artificially the movement of the celestial sphere and work only with that instantaneous picture.
The orbit through time constitutes a series of oscillations around an average or "mean" slowly evolving keplerian ellipse. This would be the equivalent of the Mean Black Moon or barycentric mean
lunar apogee. The osculating apogee represents the actual trajectory of the Moon as it actually is at a given instant. We can think of it as a ghost image that the Moon carries with it all the time.
This ghost image represents a sort of ideal, an "ideal future" when the Moon is (or will be) at apogee, but it keeps changing or evolving as the Moon travels through space.
We can also think of the perigee, the north and south nodes, and the empty second focus of the orbit in the same way: they all represent idealized focal points or "directions" that are a result of
the "psychic projections" of the Moon, they are "Moon ghosts" that the Moon always carries with it, that are part of the "lunar structure" of every individual. The Moon represents the present moment,
the nodes, apsides, and empty focus represent the past and "look forward" psychic projections that give shape and structure to the lunar dynamics of a person's life. They are like the rooms, passages
and corridors of a house (the different parts of the orbit) that become projections of the person who inhabits it (the Moon).
Some people think that an osculating orbit is something too artificial or unreal, and call the use of the osculating lunar apogee or Black Moon "nonsense", "close to nonsense", "makes no sense at
all", etc. The main reasons usually given for this are 1-) the large difference (of up to 30 degrees) between the mean and osculating value of the lunar apogee, 2-) the fact that this osculating
value "can travel to places where the Moon will never go" (see how amazingly significant this is in psychological and psychic terms!), and 3-) its erratic changes of direction and velocity, which for
some it means it cannot be really called "a motion" at all.
But I have always insisted that it is precisely all that what makes the symbolism of the osculating apogee / Black Moon so powerful. It doesn't matter at all that its motion is erratic: it doesn't
have to move like a planet because it is not a planet!... this brings it symbolically closer to all the neglected psychical projections of the Moon, both positively and negatively. The osculating
Black Moon, representing the constantly changing shape of the lunar orbit is a very good fit to the organic, instinctual nature of Black Moon symbolism.
NOTE: I discuss this symbolism in http://www.expreso.co.cr/centaurs/blackmoon/lilith.html
VI. Numeric Data
We saw that the Mean Black Moon or lunar apogee describes a circular orbit around the Earth/Moon barycenter (period=8.8 years), and how paradoxical it seems that such circular motion is used to
represent the lunar apogee. The radius of this orbit depends on the value of the mean Earth-Moon distance, which the ELP2000 theory gives as 384,747.98 Km. This value, however, is derived directly
from the Moon's mean sidereal motion and is barycentric, i.e., it is the semimajor axis of the Moon's orbit around the center of mass of the Earth and Moon.
In a publication of 1994 (Astronomy and Astrophysics, 282, p.663), the authors of ELP2000 provided for the first time to the consideration of astronomers true mean elements of the Moon comparable to
those of the planets, and gave the value of the mean Earth/Moon distance as 383,397.77 Km (this is the quantity used by Riyal). Using the mean lunar eccentricity provided in the same publication, one
can calculate the following:
- radius of the circular orbit of the mean apogee (Black Moon) around the Earth = 404,694 Km
- radius of the circular orbit of the mean perigee ("Priapus" in France) around the Earth = 362,102 Km
- distance between the 2 foci of the orbit (Earth and empty focus) = 42,592 Km
These distances, as far as the "Mean Black Moon" is concerned, are fixed, and represent concentric circles around the Earth. They are unmutable abstractions that represent the horizontal poles
(called "line of the apsides") of a reference Lunar orbit circulating the Earth/Moon barycenter. In the real world, the apogee/perigee distances and the distance between the foci of the lunar orbit
vary within a certain range:
- the apogee varies between 404,039 and 406,720 Km
- the perigee varies between 356,337 and 370,407 Km
- the distance of the 2nd focus varies between 34,506 and 49,841 Km
When one plots the true distance of the Moon in its cycle from apogee to perigee over a period of time against the distance of the osculating apogee, it becomes evident that the osculating apogee
reaches distances that exceed those of the Moon, i.e., it "travels to places where the Moon can never go" (this is a phrase used by Alois Treindl in a post to alt.astrology.moderated). This is
illustrated by a graphic done with Riyal's "Graphic Transits" routine:
You can see that most of the times when the Moon is at perigee the orbit stretches outward and the osculating apogee reaches distances of up to 415,000 Km, that the Moon will never (and can never)
reach. Here is the complementary graphic, made with a modified Riyal in order to show the osculating perigee:
In this case, we can see the distance of the osculating perigee also stretching but much less, reaching minimums of about 352,000 Km, while the pattern is inverted: when the Moon is at apogee, the
orbit stretches inward.
What we can conclude from this is that only the osculating apogee "goes to those places" that the Moon can never reach but which are nevertheless part of its osculating orbit, like the "ideals" or
"ghosts" I mentioned before. The fluctuations shown here represent the real changes of the instantaneous lunar orbit, always matched by the expansion or contraction of the distance between the 2
foci, i.e, the empty focus and the Earth. The stretches or "ideals" of the osculating apogee are a reflection of the organic, "live" dialectical relationship between Earth and the empty focus of the
Moon, the lunar ghost of the Earth.
VII. The oscillations
In classical planetary theory, every "real" orbit is seen as a series of periodic oscillations around a mean Keplerian orbit that changes slowly with time, this last secular change often being also
an oscillation of very long period. This means that there is a real place in astronomy (and astrology) for "mean" or average values of orbital elements such as the lunar apogee. But normally the
oscillations are of relatively small amplitude, as in the case of the lunar node (see below). It is a peculiarity of the lunar apogee that the oscillations (that is, the difference between the "true"
or osculating value and the mean) can reach an amplitude of 30 degrees.
This large amplitude, according to some (e.g., the writers of the Swiss Ephemeris), implies that the osculating (or "oscillating" as is called sometimes by mistake) has no meaning. But as I have
explained, it is exactly the opposite: this very large difference with respect to the mean value enhances its meaning, it makes the osculating lunar apogee --the True Black Moon, with its wild
oscillations and changes of speed and direction-- more unique and powerful, the best representation there is of the "emotional accumulator" or reactor of primitive and organic lunar symbolism.
The changes in the shape of the lunar orbit reflected by the True Black Moon, precisely because they are swifter and more pronounced or irregular, resemble a living organism more than anything in
I would like to illustrate numerically these changes, compared to the changes of the lunar node. This can be done with the tables in the 1991 book "Lunar Tables and Programs" by the Chapronts
mentioned before. The tables allow to calculate the osculating or true node with an accuracy of 1.6 arcminutes and the osculating or true apogee with an accuracy of 29 arcminutes (0.5 degrees). This
was the source of the first tables ever of the True Black Moon published in France in the early or mid 90's. (NOTE: Riyal does not use this method. It uses a different, more accurate procedure based
on the instantaneous position and velocity vectors of the Moon).
The largest terms of the lunar node (i.e., the first "perturbations" or periodic oscillation that force the node to deviate from the mean value) look like this:
a-) 1,30' period = 173 days
b-) 0,09' period = 1 year
c-) 0,07' period = 14.8 days
d-) 0,07' period = 13.6 days
e-) 0,05' period = 3 years...
... up to 22 terms in the book. They are all functions of a combination of the Sun, the mean reference barycentric lunar orbit, and the Earth/Moon barycenter. You can see that the largest oscillation
is moderately small (one degree and a half), and that the second largest is only a small fraction of the first. Now see the difference in the case of the apogee (it is the perigee in the book, but
they are interchangeable simply adding 180 degrees to the result):
a-) 15,27' period = 31.8 days
b-) 9,28' period = 205.9 days
c-) 2,43' period = 27.5 days
d-) 2,36' period = 37.6 days
e-) 2,05' period = 15.9 days
f-) 1,29' period = 9.6 days
... up to 58 terms. They are too functions of a combination of the Sun, the mean reference barycentric lunar orbit, and the Earth/Moon barycenter. The oscillations are much larger, and many more
terms are required to calculate the position of the osculating apogee from its reference mean value. (The first term, taken as 12.3 degrees instead of 15.4, applied in the opposite direction, and
arbitrarily ignoring all the other terms, is the origin of the "corrected" apogee still used in Europe).
To illustrate the scale of the oscillations, here is a graphic done with Riyal's "Graphic transits" routine that plots the longitude of the true osculating node and the true osculating apogee for a
period of 1 year from 1-1-2003 to 1-1-2004.:
You can see clearly in the graphic the main 15-degree monthly (31.8 days) oscillation. The graphic is also showing a series of conjunctions between the transiting True Black Moon and the transiting
True Node beginning in mid-2003. No less than 13 conjunctions can be seen. These can be calculated very accurately with Riyal's "Special --> Phenomena (search)" routine:
Node/Apogee 29Ta25 29Ta25 2452794.6657 4/ 6/2003 3h58.6
Node/Apogee 29Ta25 29Ta25 2452804.2594 13/ 6/2003 18h13.5
Node/Apogee 28Ta38 28Ta38 2452823.9266 3/ 7/2003 10h14.4
Node/Apogee 27Ta50 27Ta50 2452837.3294 16/ 7/2003 19h54.3
Node/Apogee 26Ta15 26Ta15 2452856.0912 4/ 8/2003 14h11.3
Node/Apogee 24Ta52 24Ta52 2452868.7514 17/ 8/2003 6h02.1
Node/Apogee 23Ta04 23Ta04 2452890.6715 8/ 9/2003 4h06.9
Node/Apogee 22Ta04 22Ta04 2452903.3662 20/ 9/2003 20h47.4
Node/Apogee 20Ta52 20Ta52 2452921.2558 8/10/2003 18h08.4
Node/Apogee 20Ta31 20Ta31 2452936.6596 24/10/2003 3h49.9
Node/Apogee 20Ta27 20Ta27 2452950.2213 6/11/2003 17h18.6
Node/Apogee 20Ta22 20Ta22 2452972.1431 28/11/2003 15h26.1
Node/Apogee 20Ta23 20Ta23 2452977.2176 3/12/2003 17h13.4
In a period 6 months from June 2003 to December, the True Black Moon made 14 conjunctions with the lunar node. I have never understood why some people, seeing this, think that using the True Black
Moon is "nonsense". I think it is fantastic! Its transits are really obsessive/compulsive, like the symbolism attributed to it! (NOTE: it is common to see it transiting a natal point 19 or 20 times
during a year).
VII. A Brief Example
José Asunción Silva was born November 27th, 1865, in Bogotá. His most famous poem, "Nocturne III" or "One Night", was dedicated to his sister Elvira, 5 years younger than him, who died of pneumonia
on January 11, 1891, at the age of 21. By the way he expressed himself, this poem gave origin to many commentaries about a possible incestuous relationship between them. Personally, I think this is
unjustified, although it is obvious, from what I read from his biographers, that her death affected him very deeply (all the biographical material is taken from the Web)
"One Night" shows a tremendous obsessive edypical erotism, of majestic beauty and musicality. It has the "wolf" atmosphere of a song to the night and the expansion of the "wings of death" moved by
amorous passion. I believe this can be seen in the following position at his birth (I am using 18h GMT):
Venus = 13,58 Scorpio
Pluto = 12,49 Taurus
But most interesting is the position of the True or osculating Black Moon --the queen of this type of symbolism-- the day her sister died:
11-Jan-1891 0h 13,33 Leo
12-Jan-1891 0h 11,48 Leo
Corrected for precession, the position on January 11th is 12 Leo. The last hours and agony of her beautiful young sister, whose death caused an impression throughout Bogotá, were marked by the Black
Moon, Venus, and Pluto, a fusion that perfectly describes the "Nocturne" the poet wrote.
José A. Silva shot himself in the heart in the night of 23 and 24th of May, 1896, when he was 30, overwhelmed by the economic bankruptcy of his family, a responsibility that he had assumed fully, and
by the loss during a storm at sea of his last unedited writings. The shipwreck happened off the Venezuelan coast on January 28, 1895. If we take as reference that day at 18h GMT:
Mercury = 20,56 Aquarius
Venus = 22,19 Aquarius
True Black Moon = 21,28 Aquarius
We find a very good description of his literary treasure being "swallowed" by the sea, getting lost forever, from which the poet never recovered. The day of his death, a little more than 1 year
later, we find:
Venus = 20,31 Taurus
Uranus = 21,35 Aquarius
Nessus death of Elvira = 21,52 Aquarius
NOTE: the complete material (written in July 2000) and the original poem in Spanish can be found at:
Nocturne III
José Asunción Silva (1865-1896)
Translation by Luis Zalamea Borda
It was evening,
a night filled with perfumes, whispers, and the music of bird' wings;
A night
when fantastic glowworms flickered in the nuptial, humid shadows,
at my side, ever so slowly, close to me, listless and silent
as if prey to premonition of the most stinging pain
that inflamed the deep secret of your fibers,
over the path filled with flowers that stretched across the plain,
you were walking;
and the full moon
in the sky, so infinite, so unfathomable, spread its light.
And your shadow,
lean and languid,
and my shadow,
by the moon's rays silhouetted
on the path's sorrowful gravel,
were united
and were one,
but one long and lonely shadow,
but one long and lonely shadow,
but one long and lonely shadow...
desolate; my soul
by your death so bitterly pained and anguished,
torn from you by time, distance and the grave
upon that infinite blackness
where our voice cannot be heard,
lone and mute,
on the path I kept on walking...
And dogs braying at the moon came to my ears,
at the pale face of the moon,
and the croaking of the frogs.
I felt cold; the same chill that in your chamber
numbed your precious cheeks, hands and brow
amidst the snow-white linens
of the funereal shroud.
It was frost out of the tomb, it was the ice of the dead,
and the chillness of the void...
And my shadow,
sketched out by the paleness of the moon,
walked alone
walked alone,
walked alone upon the prairie;
and your shadow, lean and graceful,
pure and languid,
as in that warm spring evening long ago,
as in that night filled with perfumes, whispers and the music of bird' wings,
approached me and walked with mine,
approached me and walked with mine,
approached me and walked whit mine... Oh embraced shadows!
Oh the shadows of the bodies mingling with the shadows of the souls!
Oh shadows that search each other in tear-filled and somber nights!
VIII. Another Example
The case of José Asunción Silva exemplifies the emotional, psychological, oniric, dark, and erotic world to which the osculating Black Moon belongs; these are the traditional associations, which have
originated in French and British interpretations of that ancient mysterious pseudo-mythical character "Lilith" of Hebrew and Babylonian folklore.
Because the Black Moon is usually associated with Lilith, its traditional interpretations unfortunately tend to ignore the sociological and collective aspects of its symbolism. This social and
community aspect can be understood if we learn to disentangle the character "Lilith" from the Black Moon, and consider how the "Great Mother" --as explained by the Jungian school-- describes
practically all the traditional Black Moon associations. Knowing this, it is easier to understand the collective manifestation of its primitive symbolism: our "ancestral" relationships, and
particularly, our relationship with "Mother Earth".
The case of the Mexican revolutionary Emiliano Zapata is an excellent example of this level of expression of the Black Moon, normally neglected by astrologers "possessed" by the demonic and magical
character Lilith. The sketchy material that follows is based on a large compilation in Spanish you can find in my site:
The war of Emiliano Zapata was a war of agrarian reivindication, with roots in ancient "mother earth" archetypes. In a matter of months, having been called by the leaders of his village because they
needed "someone who could put his pants on" to fight the unscrupulous usurpation by the great land owners of the land they needed to survive, the young man of 31 had become "General Zapata", the
living symbol of a religious or mystical utopia whom everybody followed with fervor.
He was born in Anenecuilco (18n46/98w59 ), state of Morelos, the night of Aug 8 1879 (the time appears in a fictionalized account of his life... it may be an invention):
Sun = 16,03 Leo (calculated for sunset)
True Black Moon = 15,40 Taurus
His Sun in the middle of Leo making a square to the Apogee/Perigee axis is a good description of the social and historical phenomenon called Emiliano Zapata, the symbol-man, his very strong
individuality absorbed in an impossible and perennial struggle for the reivindication of man's primordial relationship with the earth.
He was betrayed and assassinated on April 10 1919 between 2:10 and 2:15 in the afternoon (Chinameca 18n37/99w00):
Sun = 19,56 Aries
True Black Moon (geocentric) = 21,29 Libra
True Perigee (topocentric) = 20,57 Aries
NOTE: although the orb at death is 94' (applying) geocentrically, it is made more significant because they were in square at birth. The Sun conjunct the topocentric perigee has a smaller orb (61')
In my essay on Black Moon symbolism I explained that this point carries the instinctive energies, including atavistic wisdom and clairvoyance, which in this case are related to the feelings peasants
have for the earth. The most brilliant explanation of this dimension of the personality and struggle of Zapata, particularly in the last period before he was murdered, is found in a book by Alfredo
Krauze "Biografia del Poder: Emiliano Zapata" (Mexico, FCE, 1987):
<<Zapata didn't fight for "the little lands" --as Villa used to say-- but for Mother Earth, and from Her. His struggle takes roots because his struggle is roots. This is why none of his alliances
remains. Zapata doesn't want to go anywhere: he wants to remain. His purpose is not to open the doors of progress... but to close them: to reconstruct the mythical map of a human ecological system
where each tree and each hill were there with a purpose; a world alien to any dynamism that is not the vital dialog with the earth.>>
Here, I believe, lies one of the most fundamental and most neglected aspects of Black Moon symbolism. The isolation and self-absorption (the "Unicorn" aspect) of the Black Moon can be seen here
(continue quoting Enrique Krauze):
<<Zapata doesn't come out of his land because he doesn't know and fears "the other": the central power is always perceived as an intruder, as "a prying nest of traitors and the greedy". His vision is
not active or voluntaristic, like that of all religiosities marked by the father, but passive and animistic, marked by the mother. His war of resistance exhausts itself. During the truce of 1915,
instead of gaining strength outwards, he goes inward in a search of the lost order, to the point of wanting to rebuild it with the memory of the elders. It is not a productive map what he is after,
it is the bosom of Mother Earth and its constellation of symbols.>>
These 2 paragraphs concentrate the meaning of the Black Moon better than most explanations I have seen, and bring to light the social aspect of it which is so consistently neglected.
The film "Viva Zapata" (1952), directed by Elia Kazan and written by John Steinbeck, focuses on Zapata's personality and "passion": the mysterious, quasi-religious phenomenon of the apparition of a
leader in moments of crisis, who becomes the focus of powerful collective feelings that give to his life a mythical character.
Thanks to the combined vision and talent of Elia Kazan and John Steinbeck, the film is free from modern psychologism and portrays men integrated to the land and to their social and historical milieu:
there is no difference here between man and history.
The end of the film shows the white stallion of Zapata fleeing unharmed from the ambush, and is seen running proudly and free. The completely intangible character of the Black Moon, unreachable,
oniric, ancestral and solitary, fits well a symbol like this, i.e., the mythical stallion is Zapata himself. Zapata was called "the purest of all revolutionaries", so passionately he strove to remain
loyal to his cause, to his dear Plan of Ayala, and to this end he never accepted compromises of any kind with anybody, something that ultimately was the cause of his destruction.
The fact that the story of the horse as it appears in the film is authentic (his name was "As de Oros" -- Ace of Diamonds), is a beautiful expression of the mystery that was the phenomenon of Zapata
and of men like him: in spite of all their human deficiencies, an elemental historical or cosmic force seems to be driving them, and in the end there is no difference between the man and the myth,
and their figure becomes archetypal even before they die, expanding into intemporality after death
Now consider the following astrological fact:
Elia Kazan was born in Istabul 7 Sept1909. At 12h GMT:
Sun = 14,12 Virgo
True Black Moon = 14,35 Virgo
John Steinbek, who wrote the script and was to receive the Nobel Prize for literature in 1962, was born 27 Feb 1902 in Salinas, California, at 3 PM PST (+8h):
Sun = 8,26 Pisces
True Black Moon = 6,48 Sagittarius
A square (orb=1.6) like the case of Zapata.
[NOTE: the symbol of the white horse is examined in detail in my original compilation with respect especially to Asbolus , which is exactly conjunct the Sun of Zapata when John Steinbeck was born]
For a further examination of the social role of the Black Moon, particularly its relationship to the figure of the Virgin Mary in the Catholic Church, see my latest study on the Second Vatican
Council. For another case from the psychological point of view, see my study on Tchaikovsky.
IX. The natural or "interpolated" apogee
If one compares the actual position of the Moon when it is at its apogee every 27 days, with the position of the "mean" apogee or Black Moon, the difference is never more than 5 degrees (actually
-5.4 to +5.7), and the maximum is reached every 206 days. This has suggested to some people that the "true" position of the apogee must therefore describe a very smooth curve with an amplitude of 5
degrees only, in contrast to the very large 30 degree curve of the osculating apogee, which they describe as "unrealistic".
The Swiss Ephemeris documentation mentions a proposal made by Henry Gouchon in "Dictionnaire Astrologique, Paris 1992", based on a curve with an amplitude of 5 degrees. This solution is said to be
"the most realistic of all so far". It is also explained that the actual curve of the deviation between this Moon position at every apogee and the position of the mean apogee is not exactly a sine,
and that Dieter Koch published a table in "Meridian" in 1995 <<that pays regard to the fact that the motion does not precisely have the shape of a sine>>.
In the long (and old) compilation of posts I wrote on the calculation of the Black Moon in my site, you will find those quotes and also a numerical formula I devised that demonstrates this (written
25 Nov 1999):
[begin quote]
I calculated the times of all Lunar apogees from 2000 to 2010, a total of 136 apogees. I then calculate the positions of the Moon and of the mean apogee/Black Moon at those times, and the difference
between the two. The difference oscillates between -5.4 to +5.7 degrees. When this difference is plotted on a graph, one sees a clear cycle with a period of 206 days. This is the period of the
difference between the longitude of the Sun and the longitude of the mean apogee, and it shows that the main deviation in the longitude of the apogee is caused directly by the Sun.
Let's call the Sun/Apogee difference "A"
A = 197.1132 +31931.756*T +0.0106*T^2 (degrees)
where T is centuries from J2000
T = (Julian day-2451545)/36525
One can reduce the difference mentioned above (-5.4 to +5.7) to 1/4 or 1/5 of it, by adding the following correction to the mean apogee:
-4.7 degrees * sine of (2 * A)
With this correction the errors will be less than 1 degree, and the maximum will be 2 degrees or less.
[end quote]
If the difference were a perfect sine, the above formula would give the exact deviation of the position of the Moon at apogee and the position of the mean apogee. The remaining 1/4 or 1/5 means that
the sine curve with a period of 206 days described will approximate the deviation with an error of 20 or 25%.
The main idea of this approach is to observe the position of the Moon when it is at apogee every "anomalistic" month (27 days). Its proponents use the positions of the Moon when it is at apogee as
"the true apogee", and to find where this "true apogee" is at other moments when the Moon is at any other point of its orbit away from apogee, they use a numerical interpolation formula.
Because this way of understanding the lunar apogee is based on the actual occurrences of lunar apogees in the natural world, I think the word "natural" describes it well. This is why this variant of
the Apogee or Black Moon is called "natural" in Riyal. It is the most recent version of how to calculate the Black Moon.
When the apogee and the perigee are calculated this way, they are no longer an axis, and the difference between the perigee and its mean position can reach 25 degrees instead of only 5 degrees as in
the case of the apogee.
This approach was described by Miguel García in 1997 in an article published in Spain ("Realidad y ficción astronómica de Lilith, Cuadernos de Investigación astrológica Mercurio-3, n° 6), who
implemented its calculation in his software "Armon" (1997). It is also the approach of Dieter Koch, who together with Bernhard Rindgen published "Lilith und Priapus, die Schalen des Menschen"
(Frankfurt 2000), with ephemerides of Lilith (the apogee) and Priapus (the perigee) from 1900 to 2010. Dieter's work was awarded as <<the best astrological research result in 2000>> by the
Internationalen Astrologie-Weltkongress 2000 in Luzern.*
* information by Robert von Heeren.
Both the natural and the osculating apogee coincide at the time when the Moon reaches its apogee. One can say, therefore, that both are "true" only at that time, while at any other time they are an
approximation. But there is an important difference: the osculating apogee is calculated rigorously from the geometry of an ellipse, while the natural apogee dismisses completely the idea of an
ellipse (and of geometry), something evident in the fact that the apogee and the perigee do not form an axis.
Paradoxically, even though the proponents of the natural apogee reject the osculating value because of its very large divergence from the mean, the natural perigee can be almost as far away from the
mean as the osculating value.
X. Riyal's output (b)
Instead of working with geometric projections and orbital planes as in everything else in Astrology, the natural apogee is calculated through a series of observations or "actual happenings" to which
is applied a numerical approximation to find the intermediate value. Because it is only a numerical approximation, its position is not included in the Swiss Ephemeris, and we must use other sources
in order to calculate it and compare the positions given by Riyal.
Riyal gives the position of the natural apogee and perigee in "Tables --> Astronomical Data", and it is possible to construct an ephemerides of them. The ephemerides ("Special-->Generate
Ephemerides-->Apsides and Nodes-->Moon") will show their positions together with the other variants. Here is a sample output:
Oscu MBari MGeo MPeri MFoco nApo nPeri
1 Jan 2003| 17Ta27 | 25Ar30 | 26Ar00 | 24Li57 | 29Ar53 | 23Ar12 | 16Sc07 |
2 Jan 2003| 16Ta47 | 25Ar37 | 26Ar12 | 24Li58 | 0Ta52 | 23Ar25 | 15Sc01 |
3 Jan 2003| 15Ta26 | 25Ar44 | 26Ar22 | 25Li01 | 1Ta36 | 23Ar39 | 13Sc51 |
4 Jan 2003| 13Ta11 | 25Ar50 | 26Ar30 | 25Li06 | 2Ta03 | 23Ar53 | 12Sc38 |
5 Jan 2003| 10Ta07 | 25Ar57 | 26Ar35 | 25Li14 | 2Ta10 | 24Ar07 | 11Sc22 |
6 Jan 2003| 6Ta39 | 26Ar04 | 26Ar39 | 25Li24 | 1Ta58 | 24Ar21 | 10Sc04 |
7 Jan 2003| 3Ta17 | 26Ar10 | 26Ar41 | 25Li36 | 1Ta25 | 24Ar35 | 8Sc44 |
8 Jan 2003| 0Ta29 | 26Ar17 | 26Ar42 | 25Li50 | 0Ta35 | 24Ar49 | 7Sc21 |
9 Jan 2003| 28Ar21 | 26Ar24 | 26Ar41 | 26Li04 | 29Ar29 | 25Ar03 | 5Sc58 |
10 Jan 2003| 26Ar49 | 26Ar30 | 26Ar40 | 26Li20 | 28Ar12 | 25Ar17 | 4Sc32 |
The quantities are:
Oscu = geocentric osculating or "true" apogee
MBari = mean barycentric apogee
MGeo = mean geocentric apogee
MPeri = mean geocentric perigee
MFoco = mean geocentric kenofocus or empty focus
nApo = natural apogee
nPeri = natural perigee
To compare Riyal's accuracy (remember, in this case the positions can only be approximate by definition, especially in the case of the perigee), we will use sample outputs from the program "Armon 1.0
" (1997) by Miguel García and "Ceres 1.17" (2001), by Dieter Koch. The sample is calculated for the 1st day of each month at 0h U.T.:
Table 1.- natural or "interpolated" apogee:
Armon Riyal Ceres
1 Jan 2000| 19Sa47 | 19Sa11 | 19Sa11 |
1 Feb 2000| 22Sa23 | 22Sa12 | 22Sa13 |
1 Mar 2000| 27Sa50 | 28Sa17 | 28Sa17 |
1 Apr 2000| 4Cp24 | 5Cp06 | 5Cp05 |
1 May 2000| 10Cp19 | 10Cp49 | 10Cp48 |
1 Jun 2000| 14Cp57 | 14Cp37 | 14Cp38 |
1 Jul 2000| 14Cp34 | 14Cp13 | 14Cp23 |
1 Aug 2000| 12Cp49 | 13Cp05 | 13Cp02 |
1 Sep 2000| 16Cp22 | 15Cp57 | 15Cp57 |
1 Oct 2000| 22Cp02 | 21Cp18 | 21Cp18 |
1 Nov 2000| 28Cp38 | 27Cp58 | 27Cp58 |
1 Dec 2000| 4Aq31 | 4Aq32 | 4Aq32 |
As you can see, the positions of Riyal are almost the same as those of Ceres. The most probable reason is that we are using the same algorithm. Originally, it was Dieter Koch who gave me the
suggestion about how to calculate it when I implemented it in Riyal in November of 1999. The perigee shows larger discrepancies...
Table 2.- natural or "interpolated" perigee:
Armon Riyal Ceres
1 Jan 2000| 1Ca23 | 1Ca35 | 1Ca37 |
1 Feb 2000| 16Ca38 | 17Ca44 | 17Ca23 |
1 Mar 2000| 22Ca41 | 24Ca02 | 24Ca05 |
1 Apr 2000| 17Ge54 | 23Ge53 | 19Ge45 |
1 May 2000| 14Ge01 | 13Ge50 | 14Ge53 |
1 Jun 2000| 26Ge14 | 26Ge52 | 26Ge49 |
1 Jul 2000| 11Ca37 | 11Ca39 | 11Ca38 |
1 Aug 2000| 27Ca35 | 27Ca14 | 27Ca16 |
1 Sep 2000| 11Le31 | 11Le26 | 10Le40 |
1 Oct 2000| 15Le17 | 10Le39 | 14Le00 |
1 Nov 2000| 8Ca13 | 10Ca43 | 6Ca39 |
1 Dec 2000| 8Ca47 | 6Ca41 | 7Ca51 |
In this case, the discrepancies are larger between the 3 programs, probably because different algorithms are being used. (Details of Riyal's algorithm are given in the program's documentation.).
Since there is no way of obtaining high accuracy in this case, it is very difficult to know which positions are more accurate or "correct".
The proponents of the natural apogee and perigee consistently disqualify the use of the osculating ellipse; however, this "interpolated" approach to the lunar apogee or Black Moon, based on past and
future coordinate points instead of instantaneous positions and geometrical projections, represents a mixture of temporal planes and contradicts how all other radical astronomical points in an
astrological chart are calculated.
XI. Constructing Tabular Black Moon Ephemerides
Riyal normally produces 2 different Black Moon ephemerides. Details of the calculations and of their accuracy are found in the program's documentation. The tables can be made for any period of time
within the program's range (-4700 to +9000) and for any amount of days or fractions of a day as the tabular interval, and if needed, the output can be sent to a file to produce an independent tabular
ephemeris in text form.
The first type has already been illustrated, and is the "apsides and node" ephemeris option, that shows all the different versions of the Black Moon together in columns (see sample in section X
above). The second type of Black Moon ephemerides is constructed when one chooses an ephemeris of the Apogee directly through the "one body only" option. This will tabulate either the osculating or
the mean apogee/Black Moon, depending on how the user has configured the program, and looks like this:
APOGEE lon lat dec distance velocity 1 Jan 2003| 17Ta26.6 1s53 15n14 403541 km -0°29' 2 Jan 2003| 16Ta47.2 1s56 15n00 405737 km -0°59' 3 Jan 2003| 15Ta26.2 2s02 14n31 406999 km -1°48' 4 Jan 2003|
13Ta11.0 2s13 13n41 407345 km -2°41' 5 Jan 2003| 10Ta07.1 2s27 12n31 407018 km -3°19' 6 Jan 2003| 6Ta38.5 2s43 11n10 406353 km -3°28'
When tabular ephemerides are not needed, but simply the positions for one single instant of time or one astrological chart, Riyal will detail all the variations in the routine "Astronomical Data", or
by pressing "F2". Here is a sample screen (actual size 800x600) of the output in this case:
Riyal is FREEWARE. It can be downloaded here.
Juan Antonio Revilla
San José, Costa Rica
October 2003
|
{"url":"http://www.expreso.co.cr/centaurs/blackmoon/barycentric.html","timestamp":"2014-04-19T14:29:17Z","content_type":null,"content_length":"75702","record_id":"<urn:uuid:909d7735-876c-4036-8ef7-07e4c30efbe3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hightstown Precalculus Tutor
...I am extremely qualified to provide homework help and test preparation and would love to discuss how I can help your child succeed this school year. I have a Bachelor of Science degree from the
University of Delaware and received a 4.0 grade point average in my mathematics courses for my K-12 te...
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I have considerable experience in a wide range of computer programming activities, ranging from web programming to sophisticated scientific number crunching. I have a firm grasp of the concepts
and applications of different computer programming paradigms (procedural, functional, event-based, obj...
15 Subjects: including precalculus, chemistry, calculus, statistics
...I have met all qualification and am only waiting for final approval.I taught Algebra 1 in my student teaching experience. I helped engage my students through the use of manipulatives and
activities. I am certified in K-12 math education.
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
My name is Joyce and I am a Rutgers graduate. I graduated with a bachelors degree in mathematics. I am in pursuit of my masters in K-12 mathematics education with a specialization in urban
24 Subjects: including precalculus, calculus, geometry, algebra 1
...I have a bachelors' degree in biology and math from the College of Staten Island. I have been tutoring elementary, junior high, and high school students in math for over a year now. I have
prepared students for integrated algebra and geometry regents.
15 Subjects: including precalculus, calculus, geometry, ESL/ESOL
|
{"url":"http://www.purplemath.com/hightstown_nj_precalculus_tutors.php","timestamp":"2014-04-17T16:17:18Z","content_type":null,"content_length":"24051","record_id":"<urn:uuid:362ebed3-ea17-4889-97f4-3a49c3a167fc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Barban-Davenport-Halberstam without von Mangoldt weights
up vote 4 down vote favorite
The Barban-Davenport-Halberstam theorem gives a bound for the average (in L_2 norm) difference between $\sum_{n\leq N: n\equiv a \mod q} \Lambda(n)$ and $N/\phi(q)$. It is obvious that a similar
result should hold for the difference between $\sum_{p\leq N: p\equiv a \mod q} 1$ (where $p$ ranges only across primes) and $\pi(N)/\phi(q)$. Does anybody know where in the literature a statement in
that form can be found (so that it can be quoted without any further ado - the alternative is to spend some space in its derivation)?
nt.number-theory prime-numbers
I don't know of any such cut and dried reference but I find dealing with von-Mangoldt easier than dealing with primes. – Idoneal Oct 26 '10 at 4:29
Yes, that's why the von Mangoldt function was ever defined. At the same time, we sometimes have to deal with primes! – H A Helfgott Oct 26 '10 at 8:59
Well said Mukherjee! – Idoneal Oct 26 '10 at 10:36
This is obviously a cultural reference I am missing. At any rate, can we get back to the question? – H A Helfgott Oct 26 '10 at 11:58
1 Have you seen Theorem 17.5 of Iwaniec-Kowalski? I think that is all you need. – Idoneal Oct 26 '10 at 13:33
show 1 more comment
1 Answer
active oldest votes
Theorem 17.5 of Iwaniec-Kowalski seems to do the job.
up vote 1 down
For the benefit of anyone else reading - what exactly is Iwaniec-Kowalski? – Gerry Myerson Oct 26 '10 at 23:07
1 It is the modern bible of analytic number theory. books.google.com/… – Idoneal Oct 27 '10 at 4:25
Thanks. To save others the trouble of clicking through, it's Henryk Iwaniec and Emmanuel Kowalski, Analytic Number Theory, American Mathematical Society Colloquium Publications
Volume 53. – Gerry Myerson Oct 27 '10 at 5:22
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory prime-numbers or ask your own question.
|
{"url":"https://mathoverflow.net/questions/43580/barban-davenport-halberstam-without-von-mangoldt-weights","timestamp":"2014-04-17T04:04:55Z","content_type":null,"content_length":"59338","record_id":"<urn:uuid:f492c30c-5ddd-43d9-990e-8bf3aba7a66e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
general solution to linear first order differential equations
February 4th 2009, 08:18 PM
general solution to linear first order differential equations
What is the basis for e, the natural log, as a general solution in D.E. ?
For example:
x' (t) + 2x (t) =6
x(t) = Ce^-2x +3
I understand d/dx (e^x) =e^x
In general solutions what is the concept and analysis of the natural log
application ?
February 4th 2009, 08:23 PM
You have $x' + 2x = 6$ multiply both sides by $e^{2t}$ and so $e^{2t}x' + 2e^{2t}x = 6$.
This can be written as $\left( e^{2t} x \right)' = 6$.
Can you continue?
February 4th 2009, 08:43 PM
Actually my question is on why (e) the natural log is used in solving D.E
That is, what is the basis or analysis of why (e) is used in the general solutions ?
February 4th 2009, 08:58 PM
Because $e^t$ has the special property that its it is own derivative.
Look at how this fact was used in the solution of the differencial equation.
|
{"url":"http://mathhelpforum.com/differential-equations/71880-general-solution-linear-first-order-differential-equations-print.html","timestamp":"2014-04-16T04:38:34Z","content_type":null,"content_length":"7032","record_id":"<urn:uuid:fa777eef-ad6f-418c-a45a-03fa5420db2f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Varying n for Speed or Resolution
Next: Using KL as a Up: Karhunen-Loeve Decomposition for Statistical Previous: Decoding a Key into
Note that the original vectors and the eigenvectors are all n-tuples with n=7000. Therefore, the computation of the 60-dimensional key (c[0], c[1], ..., c[59]) given by Equation requires convolution
of the input image (an arbitrary n-tuple) with 60 eigenvectors (the n-tuples). Thus, 60 convolutions of 7000 pixel masks must be performed each time to convert an image into its key. To reduce this
computation, we have also generated scaled versions of the input vectors and eigenvectors. These images are n=858 element vectors. However, the computation of (c[0], c[1],..., c[59]) for the smaller
vectors requires roughly . Thus, these versions of the KL decomposition can be useful when time is more critical than resolution or precision. However, the quality of such low resolution images
makes them unreliable for face recognition purposes. These should only be used to perform coarse face detection. We shall now describe a technique for detecting a face using an image's 60-scalar KL
Next: Using KL as a Up: Karhunen-Loeve Decomposition for Statistical Previous: Decoding a Key into Tony Jebara
|
{"url":"http://www.cs.columbia.edu/~jebara/htmlpapers/UTHESIS/node68.html","timestamp":"2014-04-20T13:36:21Z","content_type":null,"content_length":"6042","record_id":"<urn:uuid:09a6fe0a-bf9d-46b8-9687-d9a5073bcfdc>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Invertable matrix P and a matrix C
November 23rd 2009, 02:25 PM
Invertable matrix P and a matrix C
I need to find and invertable matrix P
Then a matrix C with the form:
|a -b|
|b a |
this comes from complex eigenvalues, and the formula
A= PC(P^-1)
my problem is
|5 -2|
|1 3 |
is the matrix
for the eigenvalues i got 4 +- 2i.
now i have to find eigenvectors and the C matrix
Can you help?
November 23rd 2009, 03:43 PM
By subbing the eigenvalues back into A for lambda, we get:
This leads to $\begin{bmatrix}1-i&-2\\1&-1-i\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmat rix}=\begin{bmatrix}0\\0\end{bmatrix}$
This leads to eigenvector: $t\begin{bmatrix}\frac{2}{1-i}\\1\end{bmatrix}$
For the other eigenvalue, the corresponding eigenvector is: $t\begin{bmatrix}\frac{2}{1+i}\\1\end{bmatrix}$
Normalizing and using Gram-Schmidt
$p_{1}=\begin{bmatrix}\frac{2}{\sqrt{2}(1-i)}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$
$p_{2}=\begin{bmatrix}\frac{2}{\sqrt{2}(1+i)}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$
This gives us:
$P=\begin{bmatrix}\frac{2}{\sqrt{2}(1-i)}&\frac{2}{\sqrt{2}(1+i)}\\ \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{bmatrix}$
$P^{-1}=\begin{bmatrix}\frac{1}{\sqrt{2}i}&\frac{1}{\sq rt{2}}+\frac{1}{\sqrt{2}}i\\ \frac{1}{\sqrt{2}}i&\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}i\end{bmatrix}$
Now, $P^{-1}AP=\begin{bmatrix}4+i&0\\0&4-i\end{bmatrix}$
Therefore, P diagonalizes A
|
{"url":"http://mathhelpforum.com/advanced-algebra/116355-invertable-matrix-p-matrix-c-print.html","timestamp":"2014-04-21T16:03:31Z","content_type":null,"content_length":"7435","record_id":"<urn:uuid:1ef27c64-182a-407e-8d09-068ac8bfbdfd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solar-Powered Car
If you like the sun, and you like cars, then I’m guessing you’d love to have a solar-powered car, right? This trick works well for chocolate and peanut butter, but not so well for garlic bread and
strawberries. So how compatible are cars with solar energy? Do we relish the combination or spit it out? Let’s throw the two together, mix with math, and see what happens.
What Are Our Options?
Short of some solar-to-liquid-fuel breakthrough—which I dearly hope can be realized, and described near the end of a recent post—we’re talking electric cars here. This is great, since electric drive
trains can be marvelously efficient (ballpark 85–90%), and immediately permit the clever scheme of regenerative braking.
Obviously there is a battery involved as a power broker, and this battery can be charged (at perhaps 90% efficiency) via:
• on-board internal combustion engine fueled by gasoline or equivalent;
• utility electricity;
• a fixed solar installation;
• on-board solar panels.
Only the final two options constitute what I am calling a solar-powered car, ignoring the caveat that hydro, wind, and even fossil fuels are ultimately forms of solar energy. The last item on the
list is the dream situation: no reliance on external factors other than weather. This suits the independent American spirit nicely. And clearly it’s possible because there is an annual race across
the Australian desert for 100% on-board solar powered cars. Do such successful demonstrations today mean that widespread use of solar cars is just around the corner?
Full Speed Ahead!
First, let’s examine the requirements. For “acceptable” travel at freeway speeds (30 m/s, or 67 m.p.h.), and the ability to seat four people comfortably, we would have a very tough job getting a
frontal area smaller than 2 m² and a drag coefficient smaller than c[D] = 0.2—yielding a “drag area” of 0.4 m². Even a bicyclist tends to have a larger drag area than this! Using the sort of math
developed in the post on limits to gasoline fuel economy, we find that our car will experience a drag force of F[drag] = ½ρc[D]Av² ≈ 250 Newtons (about 55 lbs).
Work is force times distance, so to push the car 30 meters down the road each second will require about 7,500 J of energy (see the page on energy relations for units definitions and relationships).
Since this is the amount of energy needed each second, we can immediately call this 7,500 Watts—which works out to about ten horsepower. I have not yet included rolling resistance, which is about
0.01 times the weight of the car. For a super-light loaded mass of 600 kg (6000 N), rolling resistance adds a 60 N constant force, requiring an additional 1800 W for a total of about 9 kW.
What can solar panels deliver? Let’s say you can score some space-quality 30% efficient panels (i.e., twice as efficient as typical panels on the market). In full, overhead sun, you may get 1,000 W/
m² of solar flux, or a converted 300 W for each square meter of panel. We would then need 30 square meters of panel. Bad news: the top of a normal car has well less than 10 square meters available. I
measured the upward facing area of a sedan (excluding windows, of course) and got about 3 m². A truck with a camper shell gave me 5 m².
If we can manage to get 2 kW of instantaneous power, this would allow the car in our example to reach a cruising speed on the flats of about 16 m/s (35 m.p.h.). In a climb, the car could lift itself
up a grade at only one vertical meter every three seconds (6000 J to lift the car one meter, 2000 J/s of power available). This means a 5% grade would slow the car to 6.7 m/s, or 15 miles per hour—in
full sun. Naturally, batteries will come in handy for smoothing out such variations: charging on the downhill and discharging on the uphill, for an average speed in the ballpark of 30 m.p.h.
So this dream of a family being comfortably hurtled down the road by real-time sun will not come to pass. (Note: some Prius models offered a solar roof option, but this just drove a fan for keeping
the car cooler while parked—maybe simply offsetting the extra heat from having a dark panel on the roof!) But what of these races in Australia? We have real-live demonstrations.
The Dream Realized
In recent years, the Tokai Challenger, from Tokai University in Japan, has been a top performer at the World Solar Challenge. They use a 1.8 kW array of 30% efficient panels (hey—my guess was right
on!), implying 6 square meters of panel. The weight of the car plus driver is a mere 240 kg. As with most cars in the competition, the thing looks like a thin, worn-down bar of soap with a bubble for
the driver’s head: both the drag coefficient (a trout-like 0.11) and the frontal area (I’m guessing about 1 m², but probably less) are trimmed to the most absurd imaginable limits. From these
numbers, I compute a freeway-speed aerodynamic drag of about 60 Newtons and a rolling resistance of about 25 N, for a total of 85 N: about 35% of what we computed for a “comfortable” car. Solving for
the speed at which the combination of air drag plus rolling resistance requires 1.8 kW of power input, I get 26 m/s, or 94 km/h, or 58 m.p.h., which is very close to the reported speed.
Bring on the Batteries: Just Add Sun
We have seen that a practical car operating strictly under its own on-board power turns in a disappointing performance. But if we could use a large battery bank, we could store energy received when
the car is not in use, or from externally-delivered solar power. Even the Australian solar racers are allowed 5 kWh of storage on board. Let’s beef this up for driving in normal conditions. Using
today’s production models as examples, the Volt, Leaf, and Tesla carry batteries rated at 16, 24, and 53 kWh, respectively.
Let’s say we want a photovoltaic (PV) installation—either on the car or at home—to provide all the juice, with the requirement that one day is enough to fill the “tank.” A typical location in the
continental U.S. receives an average of 5 full-sun hours per day. This means that factoring in day/night, angle of the sun, season, and weather, a typical panel will gather as much energy in a day as
it would have if the high-noon sun persisted for five hours. To charge the Volt, then, would require an array capable of cranking out 3 kW of peak power. The Tesla would require a 10 kW array to
provide a daily charge. The PV areas required vastly exceed what is available on the car itself (need 10 m² even for the 3 kW system at a bank-breaking 30% efficiency; twice this area for affordable
But this is not the best way to look at it. Most people care about how far they can travel each day. A typical electric car requires about 30 kWh per 100 miles driven. So if your daily march requires
30 miles of round-trip range, this takes about 10 kWh and will need a 2 kW PV system to provide the daily juice. You might be able to squeeze this onto the car roof.
How do the economics work out? Keeping up this 30 mile per day pattern, day after day, would require an annual gasoline cost of about $1000 (if the car gets about 40 MPG). Installed cost of PV is
coming in around $4 per peak Watt lately, so the 2 kW system will cost $8000. Thus you offset (today’s) gas prices in 8 years. This math applies to the standard 15% efficient panels, which precludes
a car-top solution. For this reason, I will primarily focus on stationary PV from here on.
Practicalities: Stand-Alone or Grid-Tie?
Ah—the practicalities. Where dreams get messy. For the purist, a totally solar car is not going to be so easy. The sun does not adhere to our rigid schedule, and we often have our car away from home
during the prime-charging hours anyway. So to stay truly solar, we would need significant home storage to buffer against weather and charge-schedule mismatch.
The idea is that you could roll home at the end of the day, plug up your car, and transfer stored energy from the stationary battery bank to your car’s battery bank. You’d want to have several days
of reliable juice, so we’re talking a battery bank of 30–50 kWh. At $100 per kWh for lead-acid, this adds something like $4000 to the cost of your system. But the batteries don’t last forever.
Depending on how hard the batteries are cycled, they might last 3–5 years. A bigger bank has shallower cycles, and will therefore tolerate more of these and last longer, but for higher up-front cost.
The net effect is that the stationary battery bank will cost about $1000 per year, which is exactly what we had for the gasoline cost in the first place. However, I am often annoyed by economic
arguments. More important to me is the fact that you can do it. Double the gas prices and we have our 8-year payback again, anyway. Purely economic decisions tend to be myopic, focused on the
conditions of today (and with some reverence to trends of the past). But fundamental phase transitions like peak oil are seldom considered: we will need alternative choices—even if they are more
expensive than the cheap options we enjoy today.
The other route to a solar car—much more widespread—is a grid-tied PV system. In this case, your night-time charging comes from traditional production inputs (large regional variations in mix of
coal, gas, nuclear, and hydro), while your daytime PV production helps power other people’s air conditioners and other daytime electricity uses. Dedicating 2 kW of panel to your transportation needs
therefore offsets the net demand on inputs (fossil fuel, in many cases), effectively acting to flatten demand variability. This is a good trend, as it employs otherwise underutilized resources at
night, and provides (in aggregate) peak load relief so that perhaps another fossil fuel plant is not needed to satisfy peak demand. Here, the individual does not have to pay for a stationary battery
bank. The grid acts as a battery, which will work well enough as long as the solar input fraction remains small.
As reassuring as it is that we’re dealing with a possible—if expensive—transportation option, I must disclose one additional gotcha that makes for a slightly less rosy picture. Compared to a
grid-tied PV system, a standalone system must build in extra overhead so that the batteries may be fully charged and conditioned on a regular basis. As the batteries approach full charge, they
require less current and therefore often throw away potential solar energy. Combining this with charging efficiency (both in the electronics and in the battery), it is not unusual to need twice the
PV outlay to get the same net delivered energy as one would have in a grid-tied system. Then again, if we went full-scale grid-tied, we would need storage solutions that would again incur efficiency
hits and require a greater build-up to compensate.
A Niche for Solar Transport
There is a niche in which a vehicle with a PV roof could be self-satisfied. Golf carts that can get up to 25 m.p.h. (40 km/h) can be useful for neighborhood errands, or for transport within a small
community. They are lightweight and slow, so they can get by with something like 15 kWh per 100 miles. Because travel distances are presumably small, we can probably keep within 10 miles per day,
requiring 1.5 kWh of input per day. The battery is usually something like 5 kWh, so can store three days’ worth right in the cart. At an average of five full-sun hours per day, we need 300 W of
generating capacity, which we can achieve with 2 square meters of 15% efficient PV panel. Hey! This could work: self-contained, self-powered transport. Plug it in only when weather conspires against
you. And unlike unicorns, I’ve seen one of these beasts tooling around the UCSD campus!
Digression: Cars as the National Battery?
What if we eventually converted our fleet of petroleum-powered cars to electric cars with a substantial renewable infrastructure behind it. Would the cars themselves provide the storage we need to
balance the system? For the U.S., let’s take 200 million cars, each able to store 30 kWh of energy. In the extreme, this provides 6 billion kWh of storage, which is about 50 times smaller than the
full-scale battery that I have argued we would want to allow a complete renewable energy scheme. And this assumes that the cars have no demands of their own: that they obediently stay in place during
times of need. In truth, cars will operate on a much more rigorous daily schedule (needing energy to commute, for instance) than what Mother Nature will throw at our solar/wind installations.
We should take what we can get, but using cars as a national battery does not get us very far. This doesn’t mean that in-car storage wouldn’t provide some essential service, though. Even without
trying to double-task our electric cars (i.e., never demanding that they feed back to the electricity grid), such a fleet would still relieve oil demand, encourage renewable electricity production,
and act as load balancer by preferentially slurping electricity at night.
I Want a Solar-Powered Car
I also want a land speeder from Star Wars, a light saber while we’re at it, and a jet pack. And a pony. But unlike many of these desires, a solar powered car can be a practical reality. I could go
out tomorrow and buy a Volt or a Leaf and charge it with my home-built off-grid PV system (although I would first need to beef it up a bit to cover our modest weekly transportation needs).
Alternatively, I could park a solar-charged golf cart in the sun—or charge an electric-assist bicycle with a small PV system, for that matter—to get around my neighborhood. Slightly less satisfying,
I could install a grid-tied PV system with enough yearly production to offset my car’s electricity take. The point is, I could make stops at the gas station a thing of the past (or at least rare, in
the case of a plug-in hybrid).
So solar powered cars fall solidly on the reality side of the reality-fantasy continuum. That said, pure solar transport (on board generation) will suffer serious limitations. More reliable transport
comes with nuances that may be irritating to the purist. You can apply a bumper sticker that says SOLAR POWERED CAR, but in most cases, you will need to put an asterisk at the end with a lengthy
footnote to explain exactly how you have realized that goal.
|
{"url":"http://www.energybulletin.net/print/59916","timestamp":"2014-04-19T05:55:47Z","content_type":null,"content_length":"21980","record_id":"<urn:uuid:c6f5ea62-da32-4034-bcec-8160fe542778>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solutions of nonlinear SPDE via random Colombeau distribution
Çapar, Uluğ (2006) Solutions of nonlinear SPDE via random Colombeau distribution. [Working Paper / Technical Report] Sabanci University ID:SU_FENS_2006/0008
Full text not available from this repository.
The solutions of nonlinear SPDE are usually involved with the singular objects like the products of the Dirac delta, Heaviside functions and the nonlinear white-noise functionals which are diffucult
to handle within the classical theory. In this work the framework is the white-noise space (Ω, ∑, V), where Ω in the space of tempered distributions, ∑ is an appropriate δ algebra and V is the
Bochner -Minlos measure. Following [1] and[2] a generalized s.p. is defined as measurable mapping Ω → GΩ(R n+1), where GΩ is the space of Colombeau distrubitions. In this set-up the solutions to the
SPDE are sought in the representative platform using the represantatives in the Colembeau factor space, of the random excitations. When the moderateness of the represantative solutions are
demonstrated, their equivalence classes constitute the Colombeau solutions. A shock-wave equation of the form U + U x U = W and a preypredator system with white-noise excitation are handled in this
spirit. ( = denotes the association relation in the Colombeau theory).
Item Type: Working Paper / Technical Report
Subjects: Q Science > QA Mathematics
ID Code: 804
Deposited By: Uluğ Çapar
Deposited On: 20 Dec 2006 02:00
Last Modified: 29 Jul 2011 13:39
Repository Staff Only: item control page
|
{"url":"http://research.sabanciuniv.edu/804/","timestamp":"2014-04-19T10:11:32Z","content_type":null,"content_length":"14816","record_id":"<urn:uuid:dcf80646-66fb-4844-8d34-06d67e0ba2a8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The point location problem
Comments 0
Computational geometry is a field that studies efficient solution of geometric problems, which are critical in mapping, manufacturing, and particularly graphics applications. If you find data
structures and algorithms interesting, it's likely you'd also find computational geometry interesting, because it often combines and adapts well-known data structures and algorithms in novel ways to
solve natural problems that are easy to explain. As an example of this, I'm going to discuss the point location problem.
Imagine you have a political map of Africa with a red dot somewhere on it. Someone asks you, what country is that dot in? This is a simple question for even a child to answer, but how do we make a
computer answer the same question? We can represent the map as a planar decomposition, meaning a division of the plane into polygonal regions. We store the vertices making up the boundary of each
region. Stop and think for a little bit about how you might solve this.
Perhaps the simplest approach is to exploit the cross product. Suppose the vertices of the region are listed in clockwise (or counterclockwise) order. Compute the cross product of the vector from
vertex k to vertex k − 1 with the vector from vertex k to the point being located. The point lies in the region if and only if all of these cross products (for all k) point in the same direction.
Since we visit each vertex twice and the cross product takes constant time, this requires O(n) time, where n is the total number of edges.
Another simple solution that generalizes better to higher dimensions is to start at the point and draw a line emanating from it. The first time you hit an edge, you know that edge is on the boundary
of the correct region. The edge may be on the boundary of two adjacent regions, but comparing the slope of the line to the slope of the edge will tell you which one is correct. To actually compute
this, we define the line parametrically with an equation like this, where (x[0], y[0]) is the point being located:
(x, y) = (x[0], y[0]) + t(1, 1)
Then for each edge, we compute the t value at which the line intersects that edge, if any, and choose the edge with the smallest positive t value. This algorithm is also O(n).
These algorithms are fine if you're only locating one point, but if we wish to locate many points we can do better by creating a data structure which facilitates quick point location. I'll begin by
solving a much simpler problem: say your map only has rectangles in it, and these rectangles are arranged in vertical "strips" as shown to the right. To solve this problem, we can construct a
balanced binary search tree. This tree will first compare the x coordinate of the point against the x coordinates of the vertical lines to determine what strip of rectangles the point falls in. It
then compares the y coordinate of the point against the y coordinates of the horizontal lines to determine what rectangle it falls in. Each comparison eliminates about half of the remaining
rectangles from consideration, so this takes only O(log n) time in all for n rectangles. The search tree takes O(nlog n) time to construct and takes O(n) space, since there's at most one node for
each edge and there are 4n edges.
We generalize this by allowing each "strip" to contain not just rectangles but trapezoids, as shown below (both rectangles and triangles are considered to be trapezoids here). This is called a
trapezoidal map. This makes the second part of the search only a little more complicated: instead of comparing the y coordinate to a fixed value, we choose an edge and use a cross product to check
which side of the edge the point falls on.
Finally, take any map, and draw a vertical line through each vertex; in the image to the left the gray vertical lines were added in this manner. You'll end up with a trapezoidal map. Since we can
determine what region of this map contains the point in O(log n) time, and each of these regions is part of only one region in the original map, we've solved the problem for general maps in O(log n)
Unfortunately, adding all these vertical lines also creates a lot of new regions, potentially as much as squaring the number of regions. Search time is still good, since log(n^2) = 2 log n, but the
storage required can jump from O(n) up to O(n^2). This is worst-case, but even in typical practical cases it can require a lot of space, and construction time for the data structure also increases.
We want to retain a quick search capability but without having quite so many regions.
To achieve this, notice that we don't actually need to draw a complete vertical line through every point to divide the map into trapezoids. All we really have to do is draw a line up and down from
each vertex until we reach the edges above and below. It can be shown that the number of trapezoids is now no more than about three times the number of original regions, so the worst-case space has
been cut back down to O(n). The search procedure is similar: at each node we either compare the x coordinate of the point to that of a vertical line segment, or we check on which side of an edge the
point lies. Either way we eliminate about half the remaining regions. The search "tree" is no longer a tree but a directed acyclic graph, because a single trapezoid may fall on both sides of a
vertical line segment. See a demonstration.
Of course I've left a lot of details out here, like how to ensure the search tree remains well-balanced during construction, what to do if some points have the same x coordinate, and so on, but I
hope this gives you a taste of how clever algorithms can efficiently solve geometric problems.
|
{"url":"http://blogs.msdn.com/b/devdev/archive/2005/09/01/459540.aspx","timestamp":"2014-04-20T02:28:18Z","content_type":null,"content_length":"60436","record_id":"<urn:uuid:f954525f-9b5e-4501-818b-2f59fde86378>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Alhambra, CA Statistics Tutor
Find an Alhambra, CA Statistics Tutor
...I tutored throughout high school (algebra, calculus, statistics, chemistry, physics, Spanish, and Latin) and tutored advanced math classes during college. Above all other things, I love to
learn how other people learn and to teach people new things in ways so that they will find the material int...
28 Subjects: including statistics, Spanish, French, chemistry
...I believe in addressing each student's individual needs and/or learning disabilities to develop an effective tutoring program, emphasizing positive reinforcement. I work very well with
children and adults alike and love to see the sudden light of understanding shine through my student's eyes.Whe...
8 Subjects: including statistics, SPSS, Microsoft Excel, psychology
...I have had lifeguard training and instruction in teaching swimming. As a Master of Public Health student in epidemiology/biostatistics I studied SAS. I continued to study and use SAS as a
doctoral student.
31 Subjects: including statistics, English, reading, literature
...Bob H. I have more than 10 years of experience in teaching math from middle school-level, high school-level and college-level students. I have extensive experience in teaching algebra,
geometry and trigonometry.
10 Subjects: including statistics, chemistry, calculus, algebra 1
I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always
work with students to overcome obstacles that they might have.
37 Subjects: including statistics, chemistry, English, calculus
Related Alhambra, CA Tutors
Alhambra, CA Accounting Tutors
Alhambra, CA ACT Tutors
Alhambra, CA Algebra Tutors
Alhambra, CA Algebra 2 Tutors
Alhambra, CA Calculus Tutors
Alhambra, CA Geometry Tutors
Alhambra, CA Math Tutors
Alhambra, CA Prealgebra Tutors
Alhambra, CA Precalculus Tutors
Alhambra, CA SAT Tutors
Alhambra, CA SAT Math Tutors
Alhambra, CA Science Tutors
Alhambra, CA Statistics Tutors
Alhambra, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/alhambra_ca_statistics_tutors.php","timestamp":"2014-04-17T16:03:24Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:49af2e82-5c3e-4e34-a702-f2280568ac00>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An empirical bayes estimator of the mean of a normal population
- in Adv. Neural Information Processing Systems (NIPS*06 , 2007
"... Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are
assumed to be known. But in many situations, the prior density is not known, and is difficult to learn from ..."
Cited by 19 (8 self)
Add to MetaCart
Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are assumed
to be known. But in many situations, the prior density is not known, and is difficult to learn from data since one does not have access to uncorrupted samples of the variable being estimated. We show
that for a wide variety of observation models, the Bayes least squares (BLS) estimator may be formulated without explicit reference to the prior. Specifically, we derive a direct expression for the
estimator, and a related expression for the mean squared estimation error, both in terms of the density of the observed measurements. Each of these prior-free formulations allows us to approximate
the estimator given a sufficient amount of observed data. We use the first form to develop practical nonparametric approximations of BLS estimators for several different observation processes, and
the second form to develop a parametric family of estimators for use in the additive Gaussian noise case. We examine the empirical performance of these estimators as a function of the amount of
observed data. 1
, 2009
"... Abstract: A variety of experimental studies suggest that sensory systems are capable of performing estimation or decision tasks at near-optimal levels. In this chapter, I explore the use of
optimal estimation in describing sensory computations in the brain. I define what is meant by optimality and p ..."
Cited by 6 (5 self)
Add to MetaCart
Abstract: A variety of experimental studies suggest that sensory systems are capable of performing estimation or decision tasks at near-optimal levels. In this chapter, I explore the use of optimal
estimation in describing sensory computations in the brain. I define what is meant by optimality and provide three quite different methods of obtaining an optimal estimator, each based on different
assumptions about the nature of the information that is available to constrain the problem. I then discuss how biological systems might go about computing (and learning to compute) optimal estimates.
The brain is awash in sensory signals. How does it interpret these signals, so as to extract meaningful and consistent information about the environment? Many tasks require estimation of
environmental parameters, and there is substantial evidence that the system is capable of representing and extracting very precise estimates of these parameters. This is particularly impressive when
one considers the fact that the brain is built from a large number of low-energy unreliable components, whose responses are affected by many extraneous factors (e.g., temperature, hydration, blood
glucose and oxygen levels). The problem of optimal estimation is well studied in the statistics and engineering communities, where a plethora of tools have been developed for designing, implementing,
calibrating and testing such systems. In recent years, many of these tools have been used to provide benchmarks or models for biological perception. Specifically, the development of signal detection
theory led to widespread use of statistical decision theory as a framework for assessing performance in perceptual experiments. More recently, optimal estimation theory (in particular, Bayesian
estimation) has been used as a framework for describing human performance in perceptual tasks.
, 2007
"... Bayesian estimators are commonly constructed using an explicit prior model. In many applications, one does not have such a model, and it is difficult to learn since one does not have access to
uncorrupted measurements of the variable being estimated. In many cases however, including the case of cont ..."
Cited by 4 (4 self)
Add to MetaCart
Bayesian estimators are commonly constructed using an explicit prior model. In many applications, one does not have such a model, and it is difficult to learn since one does not have access to
uncorrupted measurements of the variable being estimated. In many cases however, including the case of contamination with additive Gaussian noise, the Bayesian least squares estimator can be
formulated directly in terms of the distribution of noisy measurements. We demonstrate the use of this formulation in removing noise from photographic images. We use a local approximation of the
noisy measurement distribution by exponentials over adaptively chosen intervals, and derive an estimator from this approximate distribution. We demonstrate through simulations that this adaptive
Bayesian estimator performs as well or better than previously published estimators based on simple prior models. 1
- Ph.D. dissertation, Courant Institute of Mathematical Sciences , 2007
"... First and foremost, I would like to thank my advisors, Eero Simoncelli and Dan Tranchina. Dan supervised my work on cortical modeling, and his insight and advice were extremely helpful in
carrying out the bulk of the work of Chapter 1. He also had many useful comments about the remainder of the mate ..."
Cited by 2 (2 self)
Add to MetaCart
First and foremost, I would like to thank my advisors, Eero Simoncelli and Dan Tranchina. Dan supervised my work on cortical modeling, and his insight and advice were extremely helpful in carrying
out the bulk of the work of Chapter 1. He also had many useful comments about the remainder of the material in the thesis. Over the years, I have learned a lot about computational neuroscience in
general from discussions with him. Eero supervised my work on prior-free methods and applications, which make up the substance of Chapters 2-4. His intuition, insight and ideas were crucial in
helping me progress in this line of research, and more importantly, in obtaining useful results. I also learned a lot from him about image processing, statistics and computational neuroscience,
amongst other things. I would like to thank my third reader, Charlie Peskin, for his input to my thesis and defense and helpful discussions about the material. I would also like to thank Mehryar
Mohri for being on my committee and for some useful discussions about VC type bounds for regression. As well, I would like to thank Francesca Chiaromonte for being on my committee, and for helpful
discussions and comments about the material in the thesis. It was good to have a statistician’s point of view on the work. I would like to thank Bob Shapley for his helpful input, and for information
about contrast dependent summation area. I would also like to thank him for letting me sit in on his ”new view ” class about visual cortex, where I read some very useful papers. I would like to thank
members of the Laboratory for Computational v Vision, for helpful comments and discussions along the way. I would also like to thank LCV alumni Liam Paninski and Jonathan Pillow, who both had some
particularly useful comments about the prior-free methods. I would also like thank the various people at Courant, too numerous to mention, who have provided help along the way.
, 2009
"... The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model
of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one opti ..."
Cited by 2 (1 self)
Add to MetaCart
The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the
measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and
their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator
given a measurement process with known statistics, and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian
estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such “prior-free ” estimators may
be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean squared estimation error as an
expectation over the measurement density, thus generalizing Stein’s unbiased risk estimator (SURE) which provides such an expression for the additive Gaussian noise case. Minimizing this expression
over measurement samples provides an “unsupervised
"... A number of recent algorithms in signal and image processing are based on the empirical distribution of localized patches. Here, we develop a nonparametric empirical Bayesian estimator for
recovering an image corrupted by additive Gaussian noise, based on fitting the density over image patches with ..."
Cited by 1 (1 self)
Add to MetaCart
A number of recent algorithms in signal and image processing are based on the empirical distribution of localized patches. Here, we develop a nonparametric empirical Bayesian estimator for recovering
an image corrupted by additive Gaussian noise, based on fitting the density over image patches with a local exponential model. The resulting solution is in the form of an adaptively weighted average
of the observed patch with the mean of a set of similar patches, and thus both justifies and generalizes the recently proposed nonlocalmeans (NL-means) method for image denoising. Unlike NL-means,
our estimator includes a dependency on the size of the patch similarity neighborhood, and we show that this neighborhood size can be chosen in such a way that the estimator converges to the optimal
Bayes least squares estimator as the amount of data grows. We demonstrate the increase in performance of our method compared to NL-means on a set of simulated examples. 1
"... Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true
values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with kn ..."
Add to MetaCart
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values.
Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each
arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which
expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist
and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an
expectation over the measurement density alone, thus generalizing Stein’s unbiased
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3833524","timestamp":"2014-04-18T02:00:41Z","content_type":null,"content_length":"34535","record_id":"<urn:uuid:16d40c3f-6e01-4e7a-a9ef-016620468e44>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is R?
Author What is R?
Joined: I've seen and heard about "R" but don't know much about it. What technology(ies) are "R" dependent on?
Aug 27,
2010 Thanks
Posts: 4 Kam
Greenhorn Hi Kam,
Joined: I'm not sure what you mean by "what technologies R is dependent on". R is an open source case-senstive interpreted language and platform for data analysis and graphics. It is similar to the
Mar 28, S Language originally developed at Bell Labs.
Posts: 25 Here is the description from the R project homepage:
R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the
ability to run programs stored in script files.
The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to
S, the underlying implementation and semantics are derived from Scheme.
The core of R is an interpreted computer language which allows branching and looping as well as modular programming using functions. Most of the user-visible functions in R are written
in R. It is possible for the user to interface to procedures written in the C, C++, or FORTRAN languages for efficiency. The R distribution contains functionality for a large number of
statistical procedures. Among these are: linear and generalized linear models, nonlinear regression models, time series analysis, classical parametric and nonparametric tests,
clustering and smoothing. There is also a large set of functions which provide a flexible graphical environment for creating various kinds of data presentations. Additional modules are
available for a variety of specific purposes.
The description is a bit dry and out of date. For example, R is object oriented, you can imbed R functionality in programs written in other languages (e.g., Java), and the number of add-on
packages now number in the thousands. Everything in R is done through function calls, and you can write your own functions to modify or build upon existing functions. This lets you carry
out complex and involved analyses and data manipulations with a few lines of code. And of course, most people are originally attracted to R for its extensive graphics capabilites.
Apr 20, I work in the financial sector and some of the clients we work with use R to develop a statistical model to help decide whether or not certain applicants qualify for a loan.
Posts: 2
Joined: R is actually used quite heavily in finance. See for example, the CRAN Task view on Empirical Finance, Statistics and Finance: An Introduction by David Ruppert, or Statistical Analysis in
Mar 28, Financial Data in S-Plus (and now R) by Rene Carmona. The Rmetrics website is also interesting. Finally, there is an annual an R/Finance conference every year.
Posts: 25
Aug 27, Thanks for the reply Robert. It's sounds very interesting and useful. The key for me is integration with Java and other platforms. Much success on the book!
Posts: 4
Oct 14,
18110 My son (who is a professional paleontologist) uses R in his research. There's lots of statistics in science these days.
I like...
subject: What is R?
|
{"url":"http://www.coderanch.com/t/535134/ol/","timestamp":"2014-04-16T10:43:02Z","content_type":null,"content_length":"29665","record_id":"<urn:uuid:1011932d-77b1-4fbf-a769-d44c0b7860d8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving for x: A/sin(x)=B/sin(C-x)
May 1st 2011, 05:34 AM #1
May 2011
I am trying to solve this equation: A/sin(x)=B/sin(C-x)
for x, where 0<=C<=pi and A<B.
It's been years since I last would have contemplated solving such an expression and quite frankly I am somewhat saddened that I have forgotten how. I once knew how to tackle this. Much of the
maths I learnt studying engineering just never got used
It actually is a derivative of the law of sines equation relating to solving the torpedo fire control problem.
x= angle of deflection
A= target speed
B= torpedo speed
C= track angle
I ultimately want to be able to use Excel to create and plot curves like these.
Once I am on the right track I will be fine but I can't actually say what approach needs to be taken to deal with this problem. Need someone talking maths to me using maths language again. I
can't even remember what this type of problem/expression is called.
Thanks for helping out.
PS: Not sure if this is in the right forum
$A\sin(C-x) = B\sin(x)$
$A(\sin(C) \cos(x) - \cos(C) \sin(x)) = B\sin(x)$
$\sin(C) \cot(x) - \cos(C) = \dfrac{B}{A}$
$\cot(x) = \csc(C) \left(\dfrac{B}{A} + \cos(C)\right)$
$\tan(x) = \sin(C) \dfrac{1}{\left(\dfrac{B}{A} + \cos(C)\right)}$
$x = \arctan \left( \dfrac{\sin(C)}{\left(\dfrac{B}{A} + \cos(C)\right)}\right)$
OK, as I kind of suspected, subbing in trignometric identities was the way to go. Actually, I would have preferred if you just said something like:
Use a basic trig identity substitution (hint: sin(A-B)=sinA.cosB-cosA.sinB)
Relax, it's not complex.
...and let me take it from there and salvage what dignity I have left.
Still, thank you!
Last edited by topsquark; May 1st 2011 at 10:29 AM.
May 1st 2011, 06:29 AM #2
May 1st 2011, 07:07 AM #3
May 2011
|
{"url":"http://mathhelpforum.com/trigonometry/179128-solving-x-sin-x-b-sin-c-x.html","timestamp":"2014-04-18T20:05:59Z","content_type":null,"content_length":"37492","record_id":"<urn:uuid:0bfe9861-eb9a-4740-b836-3e71badc303b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Solving Systems of Transcendental Equations
Choose an equation or a system of equations from the popup menu. The first three choices are univariate complex analytic equations, the last one is a pair of real equations not derived from a single
complex analytic one. You can move around the rectangle in which the solutions are to be found by choosing the coordinates of its center and its width and height. After choosing the equation (or
system of equations) to be solved and the region in which to look for solutions, increase the number of iterations to see the rectangles subdivided and tested for the presence of solutions. When
there are no more blue rectangles left, all the solutions contained in the region (if any) will be represented by green points.
For the details of Semenov's algorithm see the related Demonstration "Semenov's Algorithm for Solving Nonlinear Equations" and:
V. Yu. Semenov, "The Method of Determining All Real Nonmultiple Roots of Systems of Nonlinear Equations,"
The Journal of Computational Mathematics and Mathematical Physics
(9), 2007 p. 1428.
|
{"url":"http://demonstrations.wolfram.com/SolvingSystemsOfTranscendentalEquations/","timestamp":"2014-04-18T05:33:33Z","content_type":null,"content_length":"43354","record_id":"<urn:uuid:997c31ad-9bd5-42db-a5a0-4a159452bbaa>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
September 8th 2011, 05:23 PM #1
Aug 2011
Need some help to tell me what I'm doing wrong.
solve for y:
2e^(-y/3) = t + 4
ln(2e^(-y/3)) = ln(t+4)
-y/3(2) = ln(t+4)
y= -(3/2)ln(t+4)
ln(20-y) = -.2t + ln8
*Mult both sides by e
20-y = 8e^(-.2)
y= 20 - 8e^-.2
Re: Logs
One thing at a time...
$2\cdot e^{-y/3} = t+4$
Try simple things first. Divide by 2
$e^{-y/3} = (t+4)/2$
Now the logarithm.
$-y/3 = ln\left(\frac{t+4}{2}\right)$
Can you finish?
On the second, think really, REALLY hard about "multiply by e" to remove a logarithm. Until you see that this is very, VERY wrong, keep thinking about it.
Re: Logs
I'll continue on the 2nd but for the first I just got y = -3ln((t+4)/2) Which I still think is wrong.
Re: Logs
No, that's good, but it can be written a few other ways. Figure out how to get this one:
$y = 3\cdot ln\left(\frac{2}{t+4}\right)$
You should also worry about the Domain.
Re: Logs
Here is your error: ln(ab)= ln(a)+ ln(b), NOT a ln(b). ln (2e^{-y/3})= ln(2)+ ln(e^{-y/3})= ln(2)- y/3, not "-y/3(2)". To solve for y, you would then subtract ln(2) from both sides which would
give you a factor of 2 in the denominator inside the logarithm. That is, this is the same as first dividing both side by 2 as TKHunny suggested.
-y/3(2) = ln(t+4)
y= -(3/2)ln(t+4)
ln(20-y) = -.2t + ln8
*Mult both sides by e
NO! Not "multiply both sides by e". Take "e to the power of both sides" or "take the exponential of both sides."
What you did is correct but thinking "multiply" when you don't mean that will lead you into errors eventually.
20-y = 8e^(-.2)
y= 20 - 8e^-.2
What happened to the "t"? exp(-.2t+ ln(8))= exp(-.2t)(8)
September 8th 2011, 05:32 PM #2
MHF Contributor
Aug 2007
September 8th 2011, 05:45 PM #3
Aug 2011
September 8th 2011, 07:26 PM #4
MHF Contributor
Aug 2007
September 9th 2011, 04:13 AM #5
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/pre-calculus/187599-logs.html","timestamp":"2014-04-18T09:09:31Z","content_type":null,"content_length":"43345","record_id":"<urn:uuid:5c23588e-47b6-49b0-9a12-20f06805f9e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
8th grade math for Ms. Sue
Number of results: 246,541
CAMERON IS IN 8TH ASUNTE IS IN 5TH THIS PROBLEM IS 8TH GRADE MATH.YES I NEED TO EXPRESS IT INTO MATH TERMS. TY
Thursday, August 23, 2012 at 8:45pm by CAMERON
8th Grade Math
Thanks Ms. Sue your the best :)
Tuesday, October 30, 2012 at 5:07pm by Anonymous
8th grade math - Ms. Sue please
I don't know.
Friday, November 16, 2012 at 9:41pm by Ms. Sue
8th grade math Ms. Sue
Wednesday, December 26, 2012 at 9:40pm by noor
8th grade math
Thanks Ms. Sue and Erin for checking my answers! :)
Monday, November 8, 2010 at 6:56pm by Elizabeth
8th grade math
Sorry Ms. Sue I meant 6z
Saturday, November 13, 2010 at 6:25pm by cindy
8th grade math ms.Sue please?
What is the value of x in 162 = x?
Wednesday, October 24, 2012 at 6:03pm by mia
8th grade math for Ms. Sue please check
Thank you so much!
Thursday, December 27, 2012 at 7:51pm by Destiny
8th grade math for Ms. Sue please check
You're very welcome.
Thursday, December 27, 2012 at 7:51pm by Ms. Sue
8th Grade math - Ms. Sue please
whats the answer?!
Friday, November 16, 2012 at 9:45pm by Anonymous
8th grade math for Ms. Sue please check
Yes, that's a good example. :-)
Thursday, December 27, 2012 at 7:51pm by Ms. Sue
8th Grade Math
Thank you so so much Ms.Sue!!! Now I completely understand, thank you :)
Sunday, September 22, 2013 at 10:04pm by Cassidy
8th grade math
i just want you to know that i might be more confused with this because i am a 7th grader and i am in accelerated math so that is why i am doing 8th grade math and this is really hard for me right
now so yeah
Thursday, November 3, 2011 at 9:19pm by saranghae12
8th grade math for Ms. Sue please check
what is 10<13x
Thursday, December 27, 2012 at 7:51pm by Isabella
Another Question
Each grade gets tougher as you go allow. For me 8th grade was not that bad. The grades that you get in 8th grade do not go towards your high school gpa unless you are taking a high school credit
course. But, even so, you still want to try your best! If you are in 8th grade or...
Saturday, November 19, 2011 at 10:12am by rachel
8th grade
im in advanced math so i do 8th grade math and i dont understand how too gragh the fractions in equations like y=2/3x+1
Sunday, November 29, 2009 at 2:42pm by jazmine
8th Math - Probability
no this is 8th grade math im in 8th grade and this is what were learning
Tuesday, April 2, 2013 at 4:50pm by Nisabel
8th grade
The School Subject is definitely NOT 8th grade. If it is math, state that and you will get better help faster. Sra
Friday, April 23, 2010 at 10:38pm by SraJMcGin
8th grade
4 x 4 + 1 (the top) = 17? Sra 8th grade is NOT the School Subject. Is it MATH? If so, please state that.
Wednesday, September 15, 2010 at 8:22pm by SraJMcGin
8th grade
Maybe you should just make it as math instead of 8th grade. I suggest you post it agin.
Monday, November 15, 2010 at 10:41pm by Abby
Asunte/Cameron -- are you in 5th or 8th grade? How would you express this in math terms?
Thursday, August 23, 2012 at 8:45pm by Ms. Sue
8th grade math
You're welcome. I can't believe it either. I didn't learn the quadratic in the 8th grade for sure.
Thursday, February 7, 2008 at 9:52pm by drbob222
9th Grade Classes
I'm doing tons of extra credit anyway so my goal is to get a 99 or a 100 average. I'll ask my conselor in 8th grade (before the year in over). Thank You Ms. Sue!!!!! :) Now I'm going to look some
classes to take in 10-12 and tell you if it's good to make it to Yale next year (...
Sunday, December 18, 2011 at 5:36pm by Laruen
8th grade
Please identify the School Subject correctly (MATH, for example, Algebra, etc.) for the correct teachers to read and answer your post. 8th grade will not do it. Sra
Wednesday, November 17, 2010 at 8:02pm by SraJMcGin
5th grade
Rest was told, good job Ms.Sue. I am a 12 year old in the 8th grade I'll come to you for help on homework.
Monday, September 20, 2010 at 9:51pm by Adrian Lagos
8th Grade Math - Ms Sue
A ship leaves the East Coast for California. It is traveling at a constant speed of five knots and takes 200 days to reach California. The ship arrives in California at the exact time of day that it
left the East Coast. How far did the ship travel? Please help with step by ...
Friday, December 7, 2012 at 9:34pm by Chris
7th grade science help Ms.Sue please help
dont call him/her a cheater because your just as guilty as him/her for looking up the answers ... are you from 8th grade hrlled schnell?
Wednesday, October 10, 2012 at 10:33pm by POEPLE
8th grade
oh, i'm sorry ms.sue
Sunday, December 14, 2008 at 1:18pm by brianna
8th grade
Oh thanks Ms Sue. and Anonymous
Sunday, September 12, 2010 at 9:14pm by Anonymous
8th Grade Science
Saturday, October 12, 2013 at 1:58pm by Gabby
8th grade math
I dont get this but this isint 8th grade math its 7th grade
Friday, November 30, 2012 at 6:37pm by Anony
4TH GRADE MATH
MS T BOUGHT CUPCAKES FOR HER 3RD GRADE CLASS. MS. CASA BOUGHT TWICE AS MANY CUPCAKES AS MS T, AND 4 TIMES AS MANY CUPCAKES AS MS. G. MS. G BOUGHT 241 CUP CAKES. A. HOW MANY CUPCAKES DID MS T BUY? B.
MS.. B CAME LATE TO THE PARTY AND HALF OF THE CUPCAKES HAD ALREADY BEEN EATEN...
Wednesday, February 26, 2014 at 5:03am by DARRYL
8th grade science
Thank You so much Ms. Sue :D
Sunday, September 29, 2013 at 4:41pm by A
Math 8th Grade (Ms. Sue) ?
3 in = 1/4 ft (12 + 1/4)(9 + 1/4) - (12*9) = ?
Saturday, December 1, 2012 at 12:20am by PsyDAG
8th grade science
Hello?? Ms. Sue?? I need help too! :-)
Friday, February 18, 2011 at 2:21pm by Nina
Thank You Ms. Sue!!! :) Now I know what is is. Do I get one in grade 8th???
Thursday, December 1, 2011 at 7:07pm by Laruen
Urgent 8th Grade Language Arts
Monday, September 9, 2013 at 12:46pm by Gabby
8th grade science
okay thank you Ms. Sue I also appreciate your help :D
Sunday, December 29, 2013 at 8:37pm by jack
8th grade
timmy or bryce or 8th grade or whoever, There is no need to keep posting questions in different names. In addition, you need to indicate what YOU THINK and what work YOU HAVE DONE so that a math
teacher can help you.
Thursday, August 19, 2010 at 5:36pm by Writeacher
9th grade algebra/Ms Sue
Okay Ms. Sue. I am in 8th grade but I am taking Algebra with 9th graders. I am very confused with my previous problem. I have asked my Dad but we are both getting confused. Now I have come up with
22. Is this my correct answer?
Wednesday, August 22, 2012 at 9:59pm by Reed
8th grade Social Studies
so Ms.Sue is that your answer for this question?
Saturday, September 20, 2008 at 7:51pm by Thao
Soanish-8th grade please check answer
Thank you Ms. Sue
Monday, May 14, 2012 at 10:35am by Anthony
Urgent 8th Grade Language Arts
Ms.Sue am I right???
Wednesday, September 4, 2013 at 1:37pm by Gabby
8th grade science :(
Oh My God thank you sooo much Ms. Sue!! :)
Monday, December 9, 2013 at 6:54pm by Ira
8th grade math Ms. Sue please
in connections too. are these answers at the top right? really need these to be right!
Wednesday, December 5, 2012 at 6:27pm by some kid
8th grade math
Oh my gosh its getting sooooo annoying sorry Ms. Sue! I keep forgetting my sister's name is still on here (we share a computer)
Wednesday, November 28, 2012 at 6:40pm by Destiny
8th grade Social Studies
Did you see the answer below by Ms. Sue?
Saturday, September 20, 2008 at 8:13pm by Writeacher
8th Grade Language Arts
WriteTeacher or Ms.Sue can you help with this one question?
Thursday, September 5, 2013 at 11:19am by Gabby
8th Grade
Me and my friend are making our own 8th grade binder, you know like writing down planns for our 8th grade year. Here are the sections: - To-Do-List - Doodles (if you are bored) - Projects that are do
(school plans) - Graduation Plans (speech, to-do-list, etc.) - Dates for ...
Monday, December 5, 2011 at 8:15pm by Laruen
8th grade
Please select the School Subject carefully. 8th grade will not get the proper teacher to read and answer your post. Sra
Monday, November 15, 2010 at 10:41pm by SraJMcGin
8th grade
8th grade is NOT the School Subject. Select that carefully so the right teacher will read and answer y our post. Sra
Saturday, December 11, 2010 at 1:01am by SraJMcGin
And your question.... It bothers me that 100 randomly selected poor readers in the 8th grade. Goodness, what was the total population of poor readers in the 8th grade? 1000?
Monday, November 2, 2009 at 4:30pm by bobpursley
8th grade MATH
3 tarm 8th quesaons
Wednesday, March 25, 2009 at 7:13pm by rahul
8th grade
What is the School Subject? It is not 8th grade. And of which boundary are you speaking? Sra
Thursday, December 16, 2010 at 11:07pm by SraJMcGin
8th Math - Probability
ummmmmmmmmmmm. Haha jkjk. I know the answer, but think about it... what is the point of school, if you ask someone else to do it.... Oh, and by the way, this isn't 8th grade math! This is 7th grade
math, I'm just did this quiz and got 100%. Just try next time!! Its not that ...
Tuesday, April 2, 2013 at 4:50pm by Maggie
8th grade Math
I'm actually in 7th grade, but I'm doing 8th grade pre-algebra, but anyway, here: I'm supposed to simplify these four problems and I don't really understand how: 28s (15 power) ____ 42s (12 power)
4m+(n[7th power]m) 63m(5 power)n(6 power) ______________________ 27mn 3a·4a(4 ...
Thursday, September 9, 2010 at 6:16pm by Mika
8th grade, history ?#5
Be sure to use the websites that Ms. Sue and Sra have given you in a post below.
Saturday, October 11, 2008 at 1:30pm by Writeacher
8th Grade math - Ms. Sue please
36 * 24 = 864 54 * 42 = 2268 2268 - 864 = 1404
Friday, November 16, 2012 at 9:45pm by Ms. Sue
8th grade math
I am not sure if I did it correct, can you please check this answer for me, Ms.Sue? :) Writing equations in point-slope form: m:3, p:(-1, -2) (y-y1) = m(x-x1) y-(-2)=3(x-(-1)) y+2=3x+3 y+2-2=3x+3-2 y
Sunday, October 27, 2013 at 9:35pm by Lauren
You're in 8th grade. You can certainly at least try to answer those questions. Is 8th grade going to be too hard for you? Maybe you should rethink your dreams of going to college for 8 years or so.
Wednesday, August 15, 2012 at 4:45pm by Ms. Sue
8th grade math for Ms. Sue - last question please
If there is minus the inequality sign will change while equation dose not change e.g a<-3 that is a>-3 while a=-3 also a=-3
Thursday, December 27, 2012 at 8:28pm by Idris
8th grade math for Ms. Sue
How is an inequality different from an equation? Give a real-world scenario in which you would write an inequality rather than an equation. I don't know how to solve this or what the answer is.
Please help!
Wednesday, December 26, 2012 at 9:11pm by Destiny
8th grade tech
Thank you so much guys, Ms. Sue, I'm g oing to take it in he stuff I've copied from that site, I'll tell you tomorrow if its right. Lee
Monday, December 8, 2008 at 3:47pm by Lee
what can i compare middle school to?
what can i compare middle school to? (use in metaphor), based on 6th, 7th, and 8th grade. for example playing monopoly 6th grade - buying properties, 7th grade- adjusting to the game, 8th grade -
stress over money
Sunday, May 16, 2010 at 6:34pm by NEEDSHELP(:
8th grade math for Ms. Sue - last question please
equation says two things are equal x+y = 3 inequality says they are not, and in what way. x+y <= 3 x/2 > y+7 The big clue is equality and inequality
Thursday, December 27, 2012 at 8:28pm by Steve
U.S history 8th grade
Ms.Sue~~ i reed the book like 5 times and says 2 names.... and they weren't those names
Monday, September 5, 2011 at 9:53pm by sammie
Urgent 8th Grade Language Arts
Yay, 100%, thx Ms.Sue.. sorry about the last part but I got it after reading it a couple times.
Wednesday, September 4, 2013 at 1:37pm by Gabby
After years of data collection by college students, it has been determined that the distribution of course grades in Ms. Green’s Statistic II class follows a normal distribution with a mean of 62 and
a standard deviation of 14. (a) A student will pass Ms. Green’s course if he ...
Monday, May 9, 2011 at 11:19am by Karen
8th Grade Connections Academy
Connections Academy 8th grade Pennsylvania But I think all of the conncetion academy's have the same schoolwork as long as ur in the same grade
Wednesday, January 8, 2014 at 5:52pm by mtv
8th Grade
is it a high award for some 8th graders who get good grades ?
Saturday, November 19, 2011 at 1:05pm by Laruen
8th Grade
Whta is a verctorican (srry i can't spell that) award when the 8th graders get one?????
Saturday, November 19, 2011 at 1:05pm by Laruen
Another Question
yes 8th grade in school besides since you graute gr 8th did you got any awards?
Saturday, November 19, 2011 at 10:12am by Laruen
8th grade math
im in 5th grade and i no round up so 7
Monday, January 7, 2013 at 11:17pm by kk
8th grade math
yeah i did ask my teacher if i could go back to regular math but then she decided that she will decide on my quiz score and i got a good grade
Thursday, November 3, 2011 at 9:19pm by saranghae12
8th Grade
for the project like I mean clubs to join. contests to partcipate and that any things I'm going to do for gr 8th
Monday, December 5, 2011 at 8:15pm by Laruen
8th grade math Ms. Sue
2n < 50 n < 50/2 n < 25 She can't pay more than $25 for each shirt.
Wednesday, December 26, 2012 at 9:40pm by Ms. Sue
7th grade
Okay, so what are the axiom names? (honors math) Should this go in the 8th grade place? Oh well...
Monday, December 13, 2010 at 7:01pm by Chloe
8th grade math
Like Ms. Sue showed, you need to combine like terms. 3n-5n=-2n -7+1=-6 You cannot combine -2n and -6 so the answer would be -2n-6. -2n-6
Friday, September 9, 2011 at 5:34pm by Kate
reply my other post (8th grade/art) but ignore about the reflection part cuz u helped me but reply it on the 8th grade part (this is for me sister)
Tuesday, November 8, 2011 at 7:42am by Laruen
8th grade math
Yes, this helped. It took me all the way back to high school. I can't believe they're doing the quadratic formula in the eighth grade. Thanks for your help.
Thursday, February 7, 2008 at 9:52pm by KL
8th Math
Thx Ms.Sue! But why the +4 became -4? I confused with this part
Sunday, August 25, 2013 at 7:27pm by Cassidy
- Computers - English - Foreign Languages - Health - Home Economics - Math - Music - Physical Education - Science - Social Studies GRADE LEVELS - Preschool - Kindergarten - Elementary School - 1st
Grade - 2nd Grade - 3rd Grade - 4th Grade - 5th Grade - 6th Grade - 7th Grade - ...
Friday, January 8, 2010 at 1:36pm by peter
Ms.Sue (ONLY)
can you plz help me in science? PLZ HELP ME! YOU HELP ALL THE TIME! *begs* plz help! i am my knees!!!!! (really i am!) its called "science 8th grade"
Wednesday, March 7, 2012 at 7:36pm by sammie doodles :-)
8th grade Physics
Tuesday, December 16, 2008 at 10:26pm by Babygirl
Ms. Sue please.....
Please delete these if you can: 1. "7th grade math Ms. Sue please last questions! Posted by Delilah on Wednesday, February 13, 2013 at 9:40pm." 2. "7th grade math please help Ms. Sue ASAP Posted by
Delilah on Wednesday, February 13, 2013 at 9:19pm." 3. "7th grade math please ...
Friday, February 15, 2013 at 7:50pm by Delilah
8th grade math for Ms. Sue
An equation show two equal entities. An inequality shows that one is larger than the other. An example of an inequality: Denise has $50 and wants to buy two new shirts. She needs to know how the
average price she'll need to pay so that she doesn't exceed her $50. 2n < 50
Wednesday, December 26, 2012 at 9:11pm by Ms. Sue
8th grade
well i did post it on 8th grade...
Wednesday, November 3, 2010 at 12:44am by hexagon help
Urgent 8th Grade Language Arts
How is the word street used in the sentence below: Street cleaners can often be seen on our block in the very early morning. adjective*** adverb noun pronoun Am I right Ms.Sue?
Monday, September 9, 2013 at 12:46pm by Gabby
8th Grade Language Arts Ms. Sue Help
Subordinate clauses: that the Jebusites got their drinking water from a spring just outside the city walls. However, when David came to conquer the city with his army, Independent clause: he
Thursday, December 19, 2013 at 1:24pm by Ms. Sue
what is the difference between translation , rotation, relections, and dilation in 8 grade work In 8th grade talk, I recommend you check the examples in your book. OR, if you put your definitions
here, I can critique them.
Sunday, January 28, 2007 at 10:59am by jose
URGENT 8th Grade Math
I think so, but I dont know about the last. I am in the *th grade, but honestly I am not 2100% sure. Sorry just trying to be helpful. kathi anderson forever never lasts
Wednesday, September 4, 2013 at 12:29pm by rockrboiluvr14
8th grade math
4 5/8
Wednesday, September 20, 2006 at 8:11pm by tabatha
8th grade math.
Monday, September 8, 2008 at 5:58pm by mena
8th grade math
thank you!
Monday, September 22, 2008 at 10:05pm by abbi
math 8th grade
-1.2 - 0.5 = -1.7
Tuesday, November 11, 2008 at 10:55am by Ms. Sue
8th grade math
Monday, January 5, 2009 at 5:48pm by Anonymous
8th Grade Math
If x/y=4, x-y=9, and y-x=-9, what is x & y?
Thursday, September 24, 2009 at 7:49pm by Sarah
8th Grade Math
If x/y=4, x-y=9, and y-x=(-9) then what is x and y?
Thursday, September 24, 2009 at 8:00pm by Sarah
8th grade math
Thursday, January 28, 2010 at 7:39pm by jeremiah
8th grade math
1 1/3
Wednesday, August 25, 2010 at 7:40pm by lindsey
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>>
|
{"url":"http://www.jiskha.com/search/index.cgi?query=8th+grade+math+for+Ms.+Sue","timestamp":"2014-04-20T09:02:27Z","content_type":null,"content_length":"31071","record_id":"<urn:uuid:19c24836-5494-4801-9d96-78886526ec27>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Surface Effects on the Vibration and Buckling of Double-Nanobeam-Systems
Journal of Nanomaterials
Volume 2011 (2011), Article ID 518706, 7 pages
Research Article
Surface Effects on the Vibration and Buckling of Double-Nanobeam-Systems
Department of Engineering Mechanics, SVL, Xi'an Jiaotong University, Xi'an 710049, China
Received 13 June 2011; Accepted 18 August 2011
Academic Editor: Raymond Whitby
Copyright © 2011 Dong-Hui Wang and Gang-Feng Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Surface effects on the transverse vibration and axial buckling of double-nanobeam-system (DNBS) are examined based on a refined Euler-Bernoulli beam model. For three typical deformation modes of
DNBS, we derive the natural frequency and critical axial load accounting for both surface elasticity and residual surface tension, respectively. It is found that surface effects get quite important
when the cross-sectional size of beams shrinks to nanometers. No matter for vibration or axial buckling, surface effects are just the same in three deformation modes and usually enhance the natural
frequency and critical load. However, the interaction between beams is clearly distinct in different deformation modes. This study might be helpful for the design of nano-optomechanical systems and
nanoelectromechanical systems.
1. Introduction
Nanowires hold a wide variety of potential applications, such as sensors, actuators, transistors, probes, and resonators in nanoelectromechnical systems (NEMSs) [1]. In the design of nanowire-based
components, it is of great importance to acquire the mechanical behaviors of nanowires accurately. Owing to the increasing ratio of surface area to volume in nanostructured components, the influence
of surfaces gets important in their mechanical performance. To account for surface energy in solids, Gurtin et al. [2] established the surface elasticity theory, and recently its applications in
nanosized structural element agree reasonably well with atomistic simulations and experimental measurements [3–6]. For example, Miller and Shenoy [3] investigated the size-dependent elastic
properties of nanoscale beams and plates by both surface elasticity and atomic simulation. Ru [4] explained the difference and essence of various versions of Gurtin’s surface elastic theory. A
core-shell model was developed by Chen et al. [5] to explain the size-dependent Young’s modulus of ZnO nanowires. Through Laplace-Young equation, Wang and Feng [7, 8] addressed both the impacts of
residual surface stress and surface elasticity on the vibration and buckling of nanobeams. He and Lilley [9] analyzed the static bending of nanowires, and explained its size-dependent elastic
modulus. Using this model, Wang [10] considered the transverse vibration of fluid-conveying nanotube, Fu et al. [11] studied the nonlinear static and dynamic behaviors of nanobeams, and Assadi and
Farshi [12] investigated the size-dependent stability and self-stability of circular nanoplates.
Most of above analyses addressed surface effects on single nanowire. Recently, the double-nanobeam-system (DNBS) has been utilized in nano-optomechanical systems [13–18]. The DNBS can be modeled by
two one-dimensional nanobeams connected by coupling medium (i.e., van der Waals force, electrostatic force, capillary force, or Casimir force). Frank et al. [14] used electrostatic forces to tune the
reconfigurable filters based on two coupled nanobeams model. Karabalin et al. [18] studied the nonlinear dynamics of two elastically coupled nanomechanical resonators and demonstrated that one
oscillator could be modified by driving the second oscillator.
In the present paper, we will analyze surface effects on the transverse vibration and axial buckling of DNBS. Our solutions would provide more accurate predictions on the mechanical properties of
DNBS and a more reliable mechanical model for the design of coupled photonic crystal nanobeams [16].
2. Surface Effects on Beam Deformation
The creation of a free surface in a solid leads to excess free energy, which is referred as surface energy, thereby the increase in surface area during deformation will require external work. In
addition, the atoms or molecules near a free surface experience a different local environment than that in the interior of the material. As a consequence, the energy density, atom density, and other
physical properties in the surface layer will be distinct from those in the bulk.
Surface effects on the mechanical behavior of nanosized elements and nanomaterials can be examined by considering surface energy and/or surface stresses. According to Gibbs [19] and Cammarata [20],
the surface stress tensor is related to the surface energy density through the surface strain tensor by
For the deformation of microbeam, only surface stress and surface strain along the longitude direction are of importance, and the one-dimensional and linear form of (1) is where is the residual
surface stress when the bulk is under unstrained and is the surface Young’s modulus and can be determined either by atomic simulations or experimental measurements [3, 6]. The ratio of surface
energy, surface stress, and surface modulus to the bulk elastic modulus is usually on the order of nanometers. For macroscopic structures, the influence of surface effects can be neglected. However,
for nanosized structural elements, the contribution of surface effects becomes quite important and should be accounted for.
According to Laplace-Young equation [7, 9], the residual surface stress induces a distributed normal pressure spanning the beam (as shown in Figure 1(a)), which is given by where is deflection at the
position . is a constant related to the residual surface stress and the cross-sectional shape. For a rectangular cross section with width and height or a circular cross section with diameter as shown
in Figures 1(b) and 1(c), respectively, one has [7, 9]
The influence of the second term in (2) can be accounted for by the effective flexural rigidity [7, 9] where is the Young’s modulus of the bulk of beam. In what follows, we will consider the
vibration and buckling of DNBS by this surface model.
3. Vibration of Double-Nanobeam-System
Consider a double-nanobeam-system as illustrated in Figure 1(a). Two nanobeams with identical length are connected by distributed springs with stiffness . In physical nature, the springs could
represent the effects of electrostatic force, nano-optomechanical-induced force, van der Waals force, or elastic medium, which can be adjusted by the electrical potential difference or the distance
between two nanobeams [14]. Denote the elastic modulus, mass density, cross-section area, and second moment of inertia of the th beam by , and , respectively. Since the DNBS usually has a large
length/depth ratio () [13], it is reasonable to neglect the effect of shear deformation and rotary inertia, and adopt the Euler-Bernoulli beam theory to predict its mechanical behaviors.
Denote the deflections of two nanobeams by and , respectively. Account for surface effects in Section 2, the differential equations of the free vibration of DNBS can be obtained as
for nano-beam-1,
and for nano-beam-2,
In practical applications, the two nanobeams in DNBS are usually identical; therefore, in present paper we assume It should be noted that more general cases can also be achieved but in a more
complicated form [21].
For convenience in analysis, we introduce the relative movement of two beams as [22] Then Subtracting (6) from (7) gives When surface effects are ignored () and single beam () is considered, (12)
reverts to the vibration equations of a single Euler beam.
In order to display the surface effects, we consider a simple case, in which both beams are simply supported at their ends. The boundary conditions are given by
Thus, the boundary conditions associated with (11) become
Assuming that the relative motion is one of its natural modes of vibration of DNBS, the boundary condition (16) can be satisfied by the following vibration displacement: where is the natural
frequency of th mode.
To discuss the vibration and buckling of DNBS, three typical cases including out-of-phase sequence, in-phase sequence, and one-beam being stationary as shown in Figures 2(a), 2(b), and 2(c), are
considered, respectively.
3.1. Out-of-Phase Vibration
In this case, both nanobeams vibrate out-of-phase, and , as shown in Figure 2(a). Substituting (17) into (11), one can obtain the natural frequency of DNBS in the out-of-phase vibration as
When surface effects are neglected (), the natural frequency reduces to the classical double Euler beam solution [22],
3.2. In-Phase Vibration
In the case of in-phase vibration as shown in Figure 2(b), two nanobeams vibrate synchronously, thus the relative displacement between them disappears (). Therefore, any one of the two beams could
represent the vibration of the coupled vibration system. Following a similar procedure as that in out-of-phase vibration, one can determine the frequency as It is seen that the interaction between
nanobeams does not affect the natural frequency of DNBS for in-phase vibration, since two beams vibrate synchronously. For this vibrating mode, the vibration frequency is just as the same as that of
the single Euler beam with surface effects [7].
3.3. One Nanobeam Being Stationary
Another vibrating mode of interest is one nanobeam being stationary (i.e., ), as shown in Figure 2(c). In this case, the vibration equation (11) reduces to In this context, the DNBS behaves as if
nanobeam-1 is supported on an elastic medium. Similarly, one obtains the natural frequency of beam as Comparing (18), (20), and (22), it is noticed that surface effects have the same contribution to
these three vibration modes, but the influence of beam interaction is distinct in different vibration modes. The interaction between beams tends to increase the natural frequency for vibration modes
other than in-phase vibration.
3.4. Example and Discussion
To illustrate surface effects on the vibration of DNBS quantitatively, we consider a DNBS consisting of two silver nanowires with circular cross section. The material constants of nanowires are , ,
and the surface properties and [9]. To examine the influence of beam interaction, the spring stiffness has been taken from 2 × 10^4N/m^2 to 2 × 10^7N/m^2 [23]. We also take a length/diameter aspect
ratio as in the following analysis.
Since surface effects are just the same in three vibration modes, here we consider surface effects and beam interaction on only the out-phase vibration. Figure 3 displays the variation of normalized
first-order natural frequency with respect to the wire diameter. It is seen that when the diameter reduces to nanometers, the vibration frequency depends on the absolute size of nanobeam, which is
clearly distinct from the prediction of conventional elasticity. As the diameter decreases, the contribution of surface effects gets important and tends to increase the natural frequency. It is also
noticed that surface effects are more prominent for a small spring constant corresponding to a weak beam interaction. With the spring constant increasing, the influence of surface effects becomes
relatively unimportant compared to the beam interaction. The higher-order natural frequency of DNBS is also plotted in Figure 4, in which the spring stiffness is taken as 2 × 10^4N/m^2. It is found
that surface effects are more significant for low-order natural frequency and declines dramatically for high-order frequency.
4. Axial Buckling of Double-Nanobeam-System
It is also of interest to consider the axial buckling of DNBS. Accounting for surface effects stated in Section 2, the buckling equations of two nanobeams subjected to axial compressive forces and
can be expressed as
for beam one,
and for beam two, Assume the two beams are identical for simplification and , we get
Similar to those of vibration modes, the buckling of DNBS can also be categorized into the out-of-phase buckling, in-phase buckling, and buckling with one beam being stationary. We also consider only
the boundary conditions for ends of two beams being simply supported, as described by (16). The following buckling mode satisfies the boundary condition
For out-of-phase buckling, substitution of (27) into (25) yields Consequently, the critical buckling load is derived as For the case without surface effects (), the critical load reduces to the
classical solution
Similarly, the critical load of buckling for in-phase buckling can be readily given as This coincides with the solution of axial buckling of a single nanowire [8].
Also, for the case of buckling with one beam being stationary, the critical load is expressed as For the axial buckling of DNBS, it is found again that the influence of surface effects is just the
same in three buckling modes. For in-phase buckling, the beam interaction has no influence on the critical load, while for other buckling modes, the beam interaction will enhance the critical load of
To demonstrate surface effects on the buckling of NDBS, we consider two circular silver nanowires with , , and surface properties N⁄m and N⁄m [9]. The interaction between them is modeled by the
coupling stiffness varying from 2 × 10^4N/m^2 to 2 × 10^7N/m^2 [23]. For out-of-phase buckling, Figure 5 displays the critical compressive force with respect to the wire diameter. The normalized
critical force of buckling exhibits a distinct dependence on the characteristic size of the nanowires, which is different from the conventional elasticity. The influence of surface effects become
significant as the diameter decreases in the range of nanometers and usually raises the critical buckling load of DNBS. Moreover, surface effects are more prominent for weak beam interaction than for
stiff interaction.
5. Conclusions
Based on a modified Euler-Bernoulli beam theory, we have analyzed surface effects on the transverse vibration and axial buckling of DNBS. The natural frequency and critical compression force are
obtained analytically. The results show that both surface elasticity and residual surface tension affect the natural frequency and buckling load of DNBS when the cross section of nanowires shrinks to
nanometers. Surface effects play the same impact in three deformation modes no matter for vibration or axial buckling and evidently enhance the natural frequency and critical load. In contrast, the
influence of beam interaction is clearly distinct in different deformation modes. The present study might be helpful for the design of double-nano-beam-based nano-optomechanical systems and
nanoelectromechanical systems.
The supports from the National Natural Science Foundation (Grant no. 11072186) and the NCET program and SRF for ROCS of MOE are acknowledged.
1. Y. Cui, Q. Wei, H. Park, and C. M. Lieber, “Nanowire nanosensors for highly sensitive and selective detection of biological and chemical species,” Science, vol. 293, no. 5533, pp. 1289–1292,
2001. View at Publisher · View at Google Scholar · View at Scopus
2. M. E. Gurtin, J. Weissmüller, and F. Larché, “A general theory of curved deformable interfaces in solids at equilibrium,” Philosophical Magazine A, vol. 78, no. 5, pp. 1093–1109, 1998. View at
3. R. E. Miller and V. B. Shenoy, “Size-dependent elastic properties of nanosized structural elements,” Nanotechnology, vol. 11, no. 3, pp. 139–147, 2000. View at Publisher · View at Google Scholar
· View at Scopus
4. C. Q. Ru, “Simple geometrical explanation of Gurtin-Murdoch model of surface elasticity with clarification of its related versions,” Science China, vol. 53, no. 3, pp. 536–544, 2010. View at
Publisher · View at Google Scholar · View at Scopus
5. C. Q. Chen, Y. Shi, Y. S. Zhang, J. Zhu, and Y. J. Yan, “Size dependence of Young's modulus in ZnO nanowires,” Physical Review Letters, vol. 96, no. 7, Article ID 075505, pp. 1–4, 2006. View at
Publisher · View at Google Scholar
6. S. Cuenot, C. Frétigny, S. Demoustier-Champagne, and B. Nysten, “Surface tension effect on the mechanical properties of nanomaterials measured by atomic force microscopy,” Physical Review B, vol.
69, no. 16, Article ID 165410, 5 pages, 2004. View at Publisher · View at Google Scholar
7. G.-F. Wang and X.-Q. Feng, “Effects of surface elasticity and residual surface tension on the natural frequency of microbeams,” Applied Physics Letters, vol. 90, no. 23, Article ID 231904, 2007.
View at Publisher · View at Google Scholar
8. G.-F. Wang and X.-Q. Feng, “Surface effects on buckling of nanowires under uniaxial compression,” Applied Physics Letters, vol. 94, no. 14, Article ID 141913, 2009. View at Publisher · View at
Google Scholar
9. J. He and C. M. Lilley, “Surface effect on the elastic behavior of static bending nanowires,” Nano Letters, vol. 8, no. 7, pp. 1798–1802, 2008. View at Publisher · View at Google Scholar · View
at Scopus
10. L. Wang, “Vibration analysis of fluid-conveying nanotubes with consideration of surface effects,” Physica E, vol. 43, no. 1, pp. 437–439, 2010. View at Publisher · View at Google Scholar · View
at Scopus
11. Y. Fu, J. Zhang, and Y. Jiang, “Influences of the surface energies on the nonlinear static and dynamic behaviors of nanobeams,” Physica E, vol. 42, no. 9, pp. 2268–2273, 2010. View at Publisher ·
View at Google Scholar · View at Scopus
12. A. Assadi and B. Farshi, “Size dependent stability analysis of circular ultrathin films in elastic medium with consideration of surface energies,” Physica E, vol. 43, no. 5, pp. 1111–1117, 2011.
View at Publisher · View at Google Scholar
13. M. Eichenfield, R. Camacho, J. Chan, K. J. Vahala, and O. Painter, “A picogram- and nanometre-scale photonic-crystal optomechanical cavity,” Nature, vol. 459, no. 7246, pp. 550–555, 2009. View at
Publisher · View at Google Scholar · View at Scopus
14. I. W. Frank, P. B. Deotare, M. W. McCutcheon, and M. Lonèar, “Programmable photonic crystal nanobeam cavities,” Optics Express, vol. 18, no. 8, pp. 8705–8712, 2010. View at Publisher · View at
Google Scholar · View at Scopus
15. M. W. McCutcheon, P. B. Deotare, Y. Zhang, and M. Lončar, “High- Q transverse-electric/transverse-magnetic photonic crystal nanobeam cavities,” Applied Physics Letters, vol. 98, no. 11, Article
ID 111117, 3 pages, 2011. View at Publisher · View at Google Scholar
16. P. B. Deotare, M. W. McCutcheon, I. W. Frank, M. Khan, and M. Lončar, “Coupled photonic crystal nanobeam cavities,” Applied Physics Letters, vol. 95, no. 3, Article ID 031102, 3 pages, 2009. View
at Publisher · View at Google Scholar
17. Q. Lin, J. Rosenberg, D. Chang et al., “Coherent mixing of mechanical excitations in nano-optomechanical structures,” Nature Photonics, vol. 4, no. 4, pp. 236–242, 2010. View at Publisher · View
at Google Scholar · View at Scopus
18. R. B. Karabalin, M. C. Cross, and M. L. Roukes, “Nonlinear dynamics and chaos in two coupled nanomechanical resonators,” Physical Review B, vol. 79, no. 16, Article ID 165309, 5 pages, 2009. View
at Publisher · View at Google Scholar
19. J. W. Gibbs, The Scientific Papers of J. Willard Gibbs. Vol 1: Thermodynamics, Longmans and Green, New York, NY, USA, 1993.
20. R. C. Cammarata, “Surface and interface stress effects in thin films,” Progress in Surface Science, vol. 46, no. 1, pp. 1–38, 1994. View at Scopus
21. S. G. Kelly and S. Srinivas, “Free vibrations of elastically connected stretched beams,” Journal of Sound and Vibration, vol. 326, no. 3-5, pp. 883–893, 2009. View at Publisher · View at Google
Scholar · View at Scopus
22. H. V. Vu, A. M. Ordonez, and B. H. Karnopp, “Vibration of a double-beam system,” Journal of Sound and Vibration, vol. 229, no. 4, pp. 807–822, 2000. View at Publisher · View at Google Scholar ·
View at Scopus
23. J. Zhu, C. Q. Ru, and A. Mioduchowski, “Instability of a large coupled microbeam array initialized at its two ends,” Journal of Adhesion, vol. 83, no. 3, pp. 195–221, 2007. View at Publisher ·
View at Google Scholar · View at Scopus
|
{"url":"http://www.hindawi.com/journals/jnm/2011/518706/","timestamp":"2014-04-17T19:20:36Z","content_type":null,"content_length":"210893","record_id":"<urn:uuid:a05a0d18-987d-492e-b31b-8db6937fdd0a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic box game - Collision Detection [Archive] - OpenGL Discussion and Help Forums
04-21-2011, 09:10 AM
Im making a basic box game which involves moving boxes around a maze and pushing other boxes, im new to collision detection but am able to get my head around it. The thing I can't get my head around
though is why this function won't work.
note: This isn't the final function it is just the basic outline of what needs to be done.. once this works i'll be able to do it all.
typedef struct {
float x;
float y;
float z;
} point3D;
typedef struct {
point3D min;
point3D max;
} boundingBox;
boundingBox * wallBox[9];
boundingBox * playerBox;
//and then I had two functions that gave each boundingBox its values
int collisionWall(){
int i;
for(i = 0; i < 9; i++){
if(wallBox[i].min.x; =< playerBox.min.x;){
return 1;
return 0;
The errors I get are...
Error 1 error C2231: '.min' : left operand points to 'struct', use '->'
Error 2 error C2231: '.min' : left operand points to 'struct', use '->'
Error 3 error C2059: syntax error : '<'
Error 4 error C2059: syntax error : '}'
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-174353.html","timestamp":"2014-04-16T16:59:07Z","content_type":null,"content_length":"6244","record_id":"<urn:uuid:6fa6374f-c2b8-4303-a32b-3198377cf7e8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definition. The Wronskian of two functions f and g is W(f,g) = fg′–gf′. More generally, for n real- or complex-valued functions f1, ...
Wronskian is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Wronskian books and related discussion.
Suggested Pdf Resources
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site.
|
{"url":"http://www.realmagick.com/wronskian/","timestamp":"2014-04-16T14:46:37Z","content_type":null,"content_length":"19802","record_id":"<urn:uuid:b6f80f52-7ecf-46e5-9351-baff761a9333>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Johnson Trotter Algorithm to generate Permutations!
Johnson trotter algorithm gives a non recursive approach to generate permutations. The algorithm goes something like this..
while there exists a mobile integer k do
-->find the largest mobile integer k;
-->swap k and the adjacent integer its arrow points to;
-->reverse the direction of all integers that are larger than k
I'm using a character flag to maintain the direction of the mobile integers.. the code is simple, suggestions to make it more simple are always welcome
|
{"url":"http://krishnabharadwaj.info/Johnson-Trotter-Algorithm/","timestamp":"2014-04-20T03:24:51Z","content_type":null,"content_length":"9145","record_id":"<urn:uuid:f45db0d6-e878-423f-a529-1a89e734e178>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Should tables be sorted? Yes but--- a question
In Yao's paper
Should tables be sorted
he did the following (I am paraphrasing alot).
A (U,n;q)-WPDS (Word Probe Data Structure) for MEM is the following;
1. A function that does the following: Given A\substeq U, |A|=n, return the elements of A in some order. Think of them as being in CELL[1],...,CELL[n].
2. (Assume Step 1 has been done so there are elements in CELL[1],...,CELL[n].) Given u\in U we want to know if u\in A. We have a query algorithm that asks q adpative queries of the form What is in
CELL[i]?, and then outputs YES or NO. If YES then u\in A, if NO then u\notin A.
Thoughts, results, and a question I really want someone to answer.
1. I call it a WBDS to distinguish from the bit probe model.
2. The obvious appraoch would be to SORT the elements of A and use binary search. Hence q=log(n+1).
3. For some cases where U is small compared to n, there are constant-probe algorithms.
4. KEY: Yao showed that if U is large compared to n then sorting IS the best you can do. How big? Let R(a,b,c) be the RAMSEY number associated to a-uniform hypergraphs, b-sized monochromatic sets,
and c colors. Yao showed that if U\ge R(n,2n-1,n!) then sorting is optimal.
5. QUESTION: Is a lower value known? I have looked in various places but couldn't find a lower value. However, this may be a case where I'm better off asking an expert. I hope one is reading this
6. This is one of the first applications of Ramsey Theory to computer science. It may be the first, depending on how you defined application, Ramsey Theory, and computer science.
5 comments:
1. Is WBDS the same thing as cell-probe model?
I think it is kind of similiar to the deterministic dictionary problem, which is still open.
2. YES this is the CELL PROBE MODEL. I didn't call it that
since there are two kinds:
where the cells holds BITS
and where the cells hold
WORDS. But YES, it is
the Cell Probe Model
and I should have said so.
3. Well, it's not really the cell-probe model. In the cell-probe model, cells can store anything you want, not just the elements in the data structure.
People have called this model "implicit data structures" (see work by Munro, Francheschini, Grossi etc). Supposedly, "implicit" means that you're just storing a permutation, and the data
structuring is implicit in the order of this permutation.
More recent work on dictionaries (that the anonymous commenter alludes to) has focused on the true model, where you can store functions of your data elements in any cell. See work by [Fredman,
Komlos, Szemeredi] for randomized dictionaries, and [Hagerup, Miltersen, Pagh], [Pagh], [Ruzic] for deterministic.
By now, there is nothing interesting in Yao's paper for computer science, just for people passionate about Ramsey theory.
4. I believe what you are asking for is in the paper "Implicit O(1) probe search" by Fiat and Naor, SICOMP 1993. They show how to search in O(1) time for quite large U, but still much smaller than
the Ramsey numbers for which there is a lower bound.
5. Implicit probe search presents a trade-off between two parameters:
1. The size of the domain from which we are given n elements to store
2. The number of queries we need.
Fiat and Naor showed that if you are given a set of size n and the domain size is poly(n), you can perform implicit probe search in constant time. This was improved by Gabzion Shaltiel to a
domain size which is quasi polynomial in n.
Gabizon and Hassidim showed that you can get o(log n) even when the domain size is exponential in n.
|
{"url":"http://blog.computationalcomplexity.org/2008/11/should-tables-be-sorted-yes-but.html","timestamp":"2014-04-19T07:52:28Z","content_type":null,"content_length":"163833","record_id":"<urn:uuid:93f7a86e-ae0c-458d-82bb-1bc8c67c8e16>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Fast Are the Cars in Angry Birds GO!? | Science Blogs | WIRED
• By Rhett Allain
• 12.29.13 |
• 11:44 am |
I obviously love Angry Birds and physics (here is a bit.ly bundle with most of my Angry Birds posts). But what about Angry Birds Go!? This game is a bit different. Ok, it’s totally different except
that the same birds and pigs show up in the game. Oh, and there is still a slingshot.
Really, the big difference is that Angry Birds and Bad Piggies both have a side view of the world. Side views work quite well for video analysis (which is how I get most of my data from the game).
Angry Birds Go! uses a 3D view showing the motion from the perspective of the car and bird driving it (or just above the car).
Analyzing the motion in cases like this isn’t as straight forward as sideways motion. I’ve looked at similar cases before though. The one that comes to mind is this analysis of the Mars Curiosity
Landing video. The basic idea is that the farther away an object is from the “camera”, the smaller it appears. By looking at this angular size you can get a measure of the distance to the camera (or
viewer). Here is a useful illustration of the relationship between angular size and distance.
I can measure the angular size of some object in the video and from this get the distance. But there is an easier way which I will describe in a moment.
How Do You Get Data?
Right now, Angry Birds Go! is just on mobile devices. So, how do you get a video of the game? I used two things. First, there is this app for Mac OS x called Reflector. It turns your Mac OS X
computer into an airplay receiver. You can send the screen of your iPhone to your computer. I think there is something similar for Windows computers too. The next step is to capture the screen as a
video. Quicktime does an excellent job here. It’s that easy.
First Estimation of Speed
Honestly, this sort of feels like cheating since it is so simple. On some levels, you get check boxes for jumping the car over some set distance. Here is a sample of one of those levels.
You might not notice this in the middle of a race, but you can see it in this video. When you jump on these levels, it tells you how far you went. Well, it stops reporting jump distances after you
get over the required distance. I can use this reported distance along with the time of the jump to get a first approximation to the speed. How do you get the time? You could just look at the frame
number in the video, but I prefer to use Tracker Video Analysis to get the time.
For the first jump in my test video, the car traveled 40.6 meters (as reported by the game) and it took 0.95 seconds. This gives a speed of:
If you like different units, the speed is 95.6 mph. Zoom. Faster than I would have thought. Well, in my test video, I have two more jumps. Using the same idea, I get speeds of 44.90 m/s and 55.50 m/
How Steep Is the Race Track?
This is another approximation. However, let me assume that when the car jumps it starts out with a horizontal velocity and leaves off a vertical drop. This would make it just like projectile motion
(assuming that air resistance can be ignored). Here is a diagram.
The key to projectile motion is that the motion can be broken into a vertical and horizontal case. Each case can be treated separately except that they have the same time interval. For the vertical
motion, it’s not too difficult to calculate the height that the car falls. Assuming a constant vertical acceleration of -9.8 m/s^2 and an initial vertical velocity of 0 m/s, I can write the following
kinematic equation.
Since I know the time for this vertical motion (from the video), I can get the height. Using the 3 jumps in the test video above, I get vertical drops of 4.42 m, 3.01 m, and 3.02 meters. Remember, I
am making the assumption that the car starts off moving only horizontal. If instead the car left the ground at some angle above the horizontal, then the height would actually be lower. However, I
have to start somewhere. I have no easy way to measure this “launch angle” and it looks close to horizontal.
What about the angle of this course? If I use these three jumps as an estimate then I can calculate the angle based on the height and horizontal distance for these jumps.
Here I am making the assumption (yes, I am making a lot of assumptions) that the average slope of this track is about the same as the slope for these jumps. Even if it isn’t exactly true it’s a
pretty good approximation. So, based on the three jumps I get a slope angles of 6.19°, 4.89° and 4.34°. Let’s just call this an average slope of about 5°.
Now for the wild speculation. Suppose that I have my car and I drive with an average speed of 45 m/s down a slope that is inclined at 5°. I did this exact track and it took me 42 seconds to complete.
So, how long is the whole track? This is your most basic kinematics problem. Using the speed and time, I get a distance of 1890 meters or 1.17 miles.
How tall is this hill that contains this track? Assuming a constant slope, then I can find the height using a giant right triangle. The hypotenuse of this triangle is the 1890 meters and the angle is
5°. Using the sine function, I get a height of 164 meters. So, it’s a hill and not really a mountain. I guess you could call it a mountain if it made you happy.
More Questions
This is all just a rough approximation. I think I can do better by using the angular size of objects in the game. Once I do this, I won’t need these recorded jump distances to get the speed of the
car. After that, I can attempt to answer the following questions:
• How big are things? How big are the blocks and the birds and stuff? You would think I could just measure the angular size of these things, but I can’t. Well, I can but I don’t know the angular
field of view in the game.
• What do the different powers do? I assume that some of these powers make you go faster, but how much faster?
• Is there a correlation between car horsepower and speed?
• If the cars go at nearly a constant speed, what does this say about friction and air resistance?
• Is there air resistance when the cars jump?
Some of these questions are quite difficult. However, if I don’t write them down I will forget about them. Anyway, if you want to take a shot at any of these – go ahead. One thing that I need is a
better video. When I capture video into my computer from my phone, it’s a little choppy.
|
{"url":"http://www.wired.com/2013/12/how-fast-are-the-cars-in-angry-birds-go/","timestamp":"2014-04-16T08:19:17Z","content_type":null,"content_length":"108535","record_id":"<urn:uuid:77da2f0a-963e-44c0-98b7-73e83e66e523>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs
Genetic association study is currently the primary vehicle for identification and characterization of disease-predisposing variant(s) which usually involves multiple single-nucleotide polymorphisms
(SNPs) available. However, SNP-wise association tests raise concerns over multiple testing. Haplotype-based methods have the advantage of being able to account for correlations between neighbouring
SNPs, yet assuming Hardy-Weinberg equilibrium (HWE) and potentially large number degrees of freedom can harm its statistical power and robustness. Approaches based on principal component analysis (
PCA) are preferable in this regard but their performance varies with methods of extracting principal components (PCs).
PCA-based bootstrap confidence interval test (PCA-BCIT), which directly uses the PC scores to assess gene-disease association, was developed and evaluated for three ways of extracting PCs, i.e.,
cases only(CAES), controls only(COES) and cases and controls combined(CES). Extraction of PCs with COES is preferred to that with CAES and CES. Performance of the test was examined via simulations as
well as analyses on data of rheumatoid arthritis and heroin addiction, which maintains nominal level under null hypothesis and showed comparable performance with permutation test.
PCA-BCIT is a valid and powerful method for assessing gene-disease association involving multiple SNPs.
Genetic association studies now customarily involve multiple SNPs in candidate genes or genomic regions and have a significant role in identifying and characterizing disease-predisposing variant(s).
A critical challenge in their statistical analysis is how to make optimal use of all available information. Population-based case-control studies have been very popular[1] and typically involve
contingency table tests of SNP-disease association[2]. Notably, the genotype-wise Armitage trend test does not require HWE and has equivalent power to its allele-wise counterpart under HWE[3,4]. A
thorny issue with individual tests of SNPs for linkage disequilibrium (LD) in such setting is multiple testing, however, methods for multiple testing adjustment assuming independence such as
Bonferroni's[5,6] is knowingly conservative[7]. It is therefore necessary to seek alternative approaches which can utilize multiple SNPs simultaneously. The genotype-wise Armitage trend test is
appealing since it is equivalent to the score test from logistic regression[8] of case-control status on dosage of disease-predisposing alleles of SNP. However, testing for the effects of multiple
SNPs simultaneously via logistic regression is no cure for difficulty with multicollinearity and curse of dimensionality[9]. Haplotype-based methods have many desirable properties[10] and could
possibly alleviate the problem[11-14], but assumption of HWE is usually required and a potentially large number of degrees of freedom are involved[7,11,15-18].
It has recently been proposed that PCA can be combined with logistic regression test (LRT)[7,16,17] in a unified framework so that PCA is conducted first to account for between-SNP correlations in a
candidate region, then LRT is applied as a formal test for the association between PC scores (linear combinations of the original SNPs) and disease. Since PCs are orthogonal, it avoids
multicollinearity and at the meantime is less computer-intensive than haplotype-based methods. Studies have shown that PCA-LRT is at least as powerful as genotype- and haplotype-based methods[7,16,17
]. Nevertheless, the power of PCA-based approaches vary with ways by which PCs are extracted, e.g., from genotype correlation, LD, or other kinds of metrics[17], and in principle can be employed in
frameworks other than logistic regression[7,16,17]. Here we investigate ways of extracting PCs using genotype correlation matrix from different types of samples in a case-control study, while
presenting a new approach testing for gene-disease association by direct use of PC scores in a PCA-based bootstrap confidence interval test (PCA-BCIT). We evaluated its performance via simulations
and compared it with PCA-LRT and permutation test using real data.
Assume that p SNPs in a candidate region of interest have coded values (X[1], X[2], X[p]) according to a given genetic model (e.g., additive model) whose correlation matrix is C. PCA solves the
following equation,
where i = 1,2, p, l[i ]= (l[i1], l[i2], l[ip])' are loadings of PCs. The score for an individual subject is
where cov (F[i], F[j]) = 0, i ≠ j, and var(F[1]) ≥ var(F[2]) ≥ F[p]).
Methods of extracting PCs
Potentially, PCA can be conducted via four distinct extracting strategies (ES) using case-control data, i.e., 0. Calculate PC scores of individuals in cases and controls separately (SES), 1. Use
cases only (CAES) to obtain loadings for calculation of PC scores for subjects in both cases and controls, 2. Use controls only (COES) to obtain the loadings for both groups, and 3. Use combined
cases and controls (CES) to obtain the loadings for both groups. It is likely that in a case-control association study, loadings calculated from cases and controls can have different connotations and
hence we only consider scenarios 1-3 hereafter. More formally, let (X[1], X[2], X[p]) and (Y[1], Y[2], Y[p]) be p-dimension vectors of SNPs at a given candidate region for cases and controls
respectively, then we have,
where C[XX ]is the correlation matrix of (X[1], X[2], X[p]), i = 1,2, p. The i^th PC for cases is calculated by
where C[YY ]is the correlation matrix of (Y[1], Y[2], Y[p]). The i^th PC for controls is calculated by
And for cases, the i^th PC, i = 1,2, p, is calculated by
where C is the correlation matrix obtained from the pooled data of cases and controls, i^th PC of cases is calculated by
The i^th PC of controls is calculated by
Given a sample of N cases and M controls with p-SNP genotypes (X[1], X[2], X[N])^T, (Y[1], Y[2], Y[M])^T, and X[i ]= (X[1i], X[2i], x[pi]) for the i^th case, Y[i ]= (Y[1i], Y[2i], y[pi]) for the i^th
control, a PCA-BCIT is furnished in three steps:
Step 1: Sampling
Replicate samples of cases and controls are obtained with replacement separately from (X[1]^(b, X[2]^(b), X[N]^(b))^T and (Y[1]^(b, Y[2]^(b), Y[M]^(b))^T, b = 1,2, B (B = 1000).
Step 2: PCA
For each replicate sample obtained at Step 1, PCA is conducted and a given number of PCs retained with a threshold of 80% explained variance for all three strategies[16], expressed as
Step 3: PCA-BCIT
3a) For each replicate, the mean of the k^th PC in cases is calculated by
and that of the k^th PC in controls is calculated by
3b) Given confidence level (1 - α ), the confidence interval of
The confidence interval of
3c) Confidence intervals of cases and controls are compared. The null hypothesis is rejected if 19], indicating the candidate region is significantly associated with disease at level α. Otherwise,
the candidate region is not significantly associated with disease at level α.
Simulation studies
We examine the performance of PCA-BCIT through simulations with data from the North American Rheumatoid Arthritis (RA) Consortium (NARAC) (868 cases and 1194 controls)[20], taking advantage of the
fact that association between protein tyrosine phosphatase non-receptor type 22 (PTPN22) and the development of RA has been established[21-24]. Nine SNPs have been selected from the PNPT22 region
(114157960-114215857), and most of the SNPs are within the same LD block (Figure (Figure1).1). Females are more predisposed (73.85%) and are used in our simulation to ensure homogeneity. The
corresponding steps for the simulation are as follows.
LD (r^2) among nine PTPN22 SNPs. The nine PTPN22 SNPs are rs971173, rs1217390, rs878129, rs11811771, rs11102703, rs7545038, rs1503832, rs12127377, rs11485101. The triangle marks a single LD block
within this region: (rs878129, rs11811771, rs11102703, rs7545038, ...
Step 1: Sampling
The observed genotype frequencies in the study sample are taken to be their true frequencies in populations of infinite sizes. Replicate samples of cases and controls of given size (N, N = 100, 200,
LD structure are maintained. Under null hypothesis, replicate cases and controls are sampled with replacement from the controls. Under alternative hypothesis, replicate cases and controls are sampled
with replacement from the cases and controls respectively.
Step 2: PCA-BCITing
For each replicate sample, PCA-BCITs are conducted through the three strategies of extracting PCs as outlined above on association between PC scores and disease (RA).
Step 3: Evaluating performance of PCA-BCITs
Repeat steps 1 and 2 for K ( K = 1000 ) times under both null and alternative hypotheses, and obtain the frequencies (P[α]) of rejecting null hypothesis at level α (α = 0.05).
PCA-BCITs are applied to both the NARAC data on PTPN22 in 1493 females (641 cases and 852 controls) described above and a data containing nine SNPs near μ-opioid receptor gene (OPRM1) in Han Chinese
from Shanghai (91 cases and 245 controls) with endophenotype of heroin-induced positive responses on first use[25]. There are two LD blocks in the region of gene OPRM1 (Figure (Figure22).
LD (r^2) among nine OPRM1 SNPs. The nine OPRM1 SNPs are rs1799971, rs510769, rs696522, rs1381376, rs3778151, rs2075572, rs533586, rs550014, rs658156. The triangles mark the LD block 1 (rs696522,
rs1381376, rs3778151) and LD block 2 (rs550014, rs658156). ...
Simulation study
The performance of PCA-BCIT is shown in Table Table11 for the three strategies given a range of sample sizes. It can be seen that strategies 2 and 3 both have type I error rates approaching the
nominal level (α = 0.05), but those from strategy 1 deviate heavily. When sample size larger than 800, the power of PCA-BCIT is above 0.8, and strategies 2 and 3 outperform strategy 1 slightly.
Performance of PCA-BCIT at level 0.05 with strategies 1-3†
For the NARAC data, Armitage trend test reveals none of the SNPs in significant association with RA using Bonferroni correction (Table (Table2),2), but the results of PCA-BCIT with strategies 2 and
3 show that the first PC extracted in region of PTPN22 is significantly associated with RA. The results are similar to that from permutation test (Table (Table33).
Armitage trend test on nine PTPN22 SNPs and RA susceptibility
PCA-BCIT, PCA-LRT and permutation test on real data
For the OPRM1 data, the sample characteristics are comparable between cases and controls (Table (Table4),4), and three SNPs (rs696522, rs1381376 and rs3778151) are showed significant association
with the endophenotype (Table (Table5).5). The results of PCA-BCIT with strategies 2 and 3 and permutation test are all significant at level α = 0.01. In contrast, result from PCA-LRT is not
significant at level α = 0.05 with strategy 2 (Table (Table3).3). The apparent separation of cases and controls are shown in Figure Figure33 for PCA-BCIT with strategy 3, suggesting an intuitive
Sample characteristics of heroin-induced positive responses on first use
Armitage trend tests on nine OPRM1 SNPs and heroin-induced positive responses on first use
Real data analyses by PCA-BCIT with strategy 3 and confidence level 0.95. The horizontal axis denotes studies and vertical axis mean(PC1), the statistic used to calculate confidence intervals for
cases and controls. PCA-BCITs with strategy 3 were significant ...
In this study, a PCA-based bootstrap confidence interval test[19,26-28] (PCA-BCIT) is developed to study gene-disease association using all SNPs genotyped in a given region. There are several
attractive features of PCA-based approaches. First of all, they are at least as powerful as genotype- and haplotype-based methods[7,16,17]. Secondly, they are able to capture LD information between
correlated SNPs and easy to compute with needless consideration of multicollinearity and multiple testing. Thirdly, BCIT integrates point estimation and hypothesis testing as a single inferential
statement of great intuitive appeal[29] and does not rely on the distributional assumption of the statistic used to calculate confidence interval[19,26-29].
While there have been several different but closely related forms of bootstrap confidence interval calculations[28], we focus on percentiles of the asymptotic distribution of PCs for given confidence
levels to estimate the confidence interval. PCA-BCIT is a data-learning method[29], and shown to be valid and powerful for sufficiently large number of replicates in our study. Our investigation
involving three strategies of extracting PCs reveals that strategy 1 is invalid, while strategies 2 and 3 are acceptable. From analyses of real data we find that PCA-BCIT is more favourable compared
with PCA-LRT and permutation test. It is suggested that a practical advantage of PCA-BCIT is that it offers an intuitive measure of difference between cases and controls by using the set of SNPs (PC
scores) in a candidate region (Figure (Figure3).3). As extraction of PCs through COES is more in line with the principle of a case-control study, it will be our method of choice given that it has a
comparable performance with CES. Nevertheless, PCA-BCIT has the limitation that it does not directly handle covariates as is usually done in a regression model.
PCA-BCIT is both a valid and a powerful PCA-based method which captures multi-SNP information in study of gene-disease association. While extracting PCs based on CAES, COES and CES all have good
performances, it appears that COES is more appropriate to use.
SNP: single nucleotide polymorphism; HWE: Hardy-Weinberg Equilibrium; LD: linkage disequilibrium; LRT: logistic regression test; PCA: principle component analysis; PC: principle component; ES:
extracting strategy; SES: separate case and control extracting strategy (strategy 0); CAES: case-based extracting strategy (strategy 1); COES: control-based extracting strategy (strategy 2); CES:
combined case and control extracting strategy (strategy 3); BCIT: bootstrap confidence interval test.
Authors' contributions
QQP, JHZ, and FZX conceptualized the study, acquired and analyzed the data and prepared for the manuscript. All authors approved the final manuscript.
This work was supported by grant from the National Natural Science Foundation of China (30871392). We wish to thank Dr. Dandan Zhang (Fudan University) and NARAC for supplying us with the data, and
comments from the Associate Editor and anonymous referees which greatly improved the manuscript. Special thanks to referee for the insightful comment that extraction of PCs with controls is line with
the case-control principles.
• Morton NE, Collins A. Tests and estimates of allelic association in comples. Proc Natl Acad Sci USA. 1998;95:11389–11393. doi: 10.1073/pnas.95.19.11389. [PMC free article] [PubMed] [Cross Ref]
• Sasieni PD. From genotypes to genes: doubling the sample size. Biometrics. 1997;53:1253–1261. doi: 10.2307/2533494. [PubMed] [Cross Ref]
• Gordon D, Haynes C, Yang Y, Kramer PL, Finch SJ. Linear trend tests for case-control genetic association that incorporate random phenotype and genotype misclassification error. Genet Epidemiol.
2007;31:853–870. doi: 10.1002/gepi.20246. [PubMed] [Cross Ref]
• Slager SL, Schaid DJ. Case-control studies of genetic markers: Power and sample size approximations for Armitage's test for trend. Human Heredity. 2001;52:149–153. doi: 10.1159/000053370. [PubMed
] [Cross Ref]
• Sidak Z. On Multivariate Normal Probabilities of Rectangles: Their Dependence on Correlations. The Annals of Mathematical Statistics. 1968;39:1425–1434.
• Sidak Z. On Probabilities of Rectangles in Multivariate Student Distributions: Their Dependence on Correlations. The Annals of Mathematical Statistics. 1971;42:169–175. doi: 10.1214/aoms/
1177693504. [Cross Ref]
• Zhang FY, Wagener D. An approach to incorporate linkage disequilibrium structure into genomic association analysis. Journal of Genetics and Genomics. 2008;35:381–385. doi: 10.1016/S1673-8527(08)
60055-7. [PMC free article] [PubMed] [Cross Ref]
• Balding DJ. A tutorial on statistical methods for population association studies. Nature Reviews Genetics. 2006;7:781–791. doi: 10.1038/nrg1916. [PubMed] [Cross Ref]
• Schaid DJ, McDonnell SK, Hebbring SJ, Cunningham JM, Thibodeau SN. Nonparametric tests of association of multiple genes with human disease. American Journal of Human Genetics. 2005;76:780–793.
doi: 10.1086/429838. [PMC free article] [PubMed] [Cross Ref]
• Becker T, Schumacher J, Cichon S, Baur MP, Knapp M. Haplotype interaction analysis of unlinked regions. Genetic Epidemiology. 2005;29:313–322. doi: 10.1002/gepi.20096. [PubMed] [Cross Ref]
• Chapman JM, Cooper JD, Todd JA, Clayton DG. Detecting disease associations due to linkage disequilibrium using haplotype tags: A class of tests and the determinants of statistical power. Human
Heredity. 2003;56:18–31. doi: 10.1159/000073729. [PubMed] [Cross Ref]
• Epstein MP, Satten GA. Inference on haplotype effects in case-control studies using unphased genotype data. American Journal of Human Genetics. 2003;73:1316–1329. doi: 10.1086/380204. [PMC free
article] [PubMed] [Cross Ref]
• Fallin D, Cohen A, Essioux L, Chumakov I, Blumenfeld M, Cohen D, Schork NJ. Genetic analysis of case/control data using estimated haplotype frequencies: Application to APOE locus variation and
Alzheimer's disease. Genome Research. 2001;11:143–151. doi: 10.1101/gr.148401. [PMC free article] [PubMed] [Cross Ref]
• Stram DO, Pearce CL, Bretsky P, Freedman M, Hirschhorn JN, Altshuler D, Kolonel LN, Henderson BE, Thomas DC. Modeling and E-M estimation of haplotype-specific relative risks from genotype data
for a case-control study of unrelated individuals. Human Heredity. 2003;55:179–190. doi: 10.1159/000073202. [PubMed] [Cross Ref]
• Clayton D, Chapman J, Cooper J. Use of unphased multilocus genotype data in indirect association studies. Genetic Epidemiology. 2004;27:415–428. doi: 10.1002/gepi.20032. [PubMed] [Cross Ref]
• Gauderman WJ, Murcray C, Gilliland F, Conti DV. Testing association between disease and multiple SNPs in a candidate gene. Genetic Epidemiology. 2007;31:383–395. doi: 10.1002/gepi.20219. [PubMed]
[Cross Ref]
• Oh S, Park T. Association tests based on the principal-component analysis. BMC Proc. 2007;1(Suppl 1):S130. doi: 10.1186/1753-6561-1-s1-s130. [PMC free article] [PubMed] [Cross Ref]
• Wang T, Elston RC. Improved power by use of a weighted score test for linkage disequilibrium mapping. American Journal of Human Genetics. 2007;80:353–360. doi: 10.1086/511312. [PMC free article]
[PubMed] [Cross Ref]
• Heller G, Venkatraman ES. Resampling procedures to compare two survival distributions in the presence of right-censored data. Biometrics. 1996;52:1204–1213. doi: 10.2307/2532836. [Cross Ref]
• Plenge RM, Seielstad M, Padyukov L, Lee AT, Remmers EF, Ding B, Liew A, Khalili H, Chandrasekaran A, Davies LRL. TRAF1-C5 as a risk locus for rheumatoid arthritis - A genomewide study. New
England Journal of Medicine. 2007;357:1199–1209. doi: 10.1056/NEJMoa073491. [PMC free article] [PubMed] [Cross Ref]
• Begovich AB, Carlton VE, Honigberg LA, Schrodi SJ, Chokkalingam AP, Alexander HC, Ardlie KG, Huang Q, Smith AM, Spoerke JM. A missense single-nucleotide polymorphism in a gene encoding a protein
tyrosine phosphatase (PTPN22) is associated with rheumatoid arthritis. Am J Hum Genet. 2004;75:330–337. doi: 10.1086/422827. [PMC free article] [PubMed] [Cross Ref]
• Carlton VEH, Hu XL, Chokkalingam AP, Schrodi SJ, Brandon R, Alexander HC, Chang M, Catanese JJ, Leong DU, Ardlie KG. PTPN22 genetic variation: Evidence for multiple variants associated with
rheumatoid arthritis. American Journal of Human Genetics. 2005;77:567–581. doi: 10.1086/468189. [PMC free article] [PubMed] [Cross Ref]
• Kallberg H, Padyukov L, Plenge RM, Ronnelid J, Gregersen PK, Helm-van Mil AHM van der, Toes REM, Huizinga TW, Klareskog L, Alfredsson L. Gene-gene and gene-environment interactions involving
HLA-DRB1, PTPN22, and smoking in two subsets of rheumatoid arthritis. American Journal of Human Genetics. 2007;80:867–875. doi: 10.1086/516736. [PMC free article] [PubMed] [Cross Ref]
• Plenge RM, Padyukov L, Remmers EF, Purcell S, Lee AT, Karlson EW, Wolfe F, Kastner DL, Alfredsson L, Altshuler D. Replication of putative candidate-gene associations with rheumatoid arthritis in
> 4,000 samples from North America and Sweden: Association of susceptibility with PTPN22, CTLA4, and PADI4. American Journal of Human Genetics. 2005;77:1044–1060. doi: 10.1086/498651. [PMC free
article] [PubMed] [Cross Ref]
• Zhang D, Shao C, Shao M, Yan P, Wang Y, Liu Y, Liu W, Lin T, Xie Y, Zhao Y. Effect of mu-opioid receptor gene polymorphisms on heroin-induced subjective responses in a Chinese population. Biol
Psychiatry. 2007;61:1244–1251. doi: 10.1016/j.biopsych.2006.07.012. [PubMed] [Cross Ref]
• Carpenter J. Test Inversion Bootstrap Confidence Intervals. Journal of the Royal Statistical Society Series B (Statistical Methodology) 1999;61:159–172. doi: 10.1111/1467-9868.00169. [Cross Ref]
• Davison AC, Hinkley DV, Young GA. Recent developments in bootstrap methodology. Statistical Science. 2003;18:141–157. doi: 10.1214/ss/1063994969. [Cross Ref]
• DiCiccio TJ, Efron B. Bootstrap confidence intervals. Statistical Science. 1996;11:189–212. doi: 10.1214/ss/1032280214. [Cross Ref]
• Efron B. Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics. 1979;7:1–26. doi: 10.1214/aos/1176344552. [Cross Ref]
Articles from BMC Genetics are provided here courtesy of BioMed Central
• MedGen
Related information in MedGen
• PubMed
PubMed citations for these articles
• SNP
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2825231/?tool=pubmed","timestamp":"2014-04-19T17:47:57Z","content_type":null,"content_length":"107438","record_id":"<urn:uuid:ef29601b-0b3f-4c4f-90b7-d9391218a71d>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How long does it take to drive 80 miles at 65 mph?
Speed limits in the United States are set by each state or territory. Speed limits are always posted in increments of five miles per hour. Some states have lower limits for trucks and at night, and
occasionally there are minimum speed limits. Most speed limits are set by state or local statute, although each state allows various agencies to set a different, generally lower, limit.
The highest speed limits are generally 75 mph (121 km/h) in western states and 70 mph (113 km/h) in eastern states. A few states, mainly in the Northeast Megalopolis, have 65 mph (105 km/h) limits,
and Hawaii only has 60 mph (97 km/h) maximum limits. A small portion of the Texas and Utah road networks have higher limits. For 13 years (1974–1987), federal law prohibited speed limits above 55 mph
(89 km/h).
Speed limits in the United States are set by each state or territory. Speed limits are always posted in increments of five miles per hour. Some states have lower limits for trucks and at night, and
occasionally there are minimum speed limits. Most speed limits are set by state or local statute, although each state allows various agencies to set a different, generally lower, limit.
The highest speed limits are generally 75 mph (121 km/h) in western states and 70 mph (113 km/h) in eastern states. A few states, mainly in the Northeast Megalopolis, have 65 mph (105 km/h) limits,
and Hawaii only has 60 mph (97 km/h) maximum limits. A small portion of the Texas and Utah road networks have higher limits. For 13 years (1974–1987), federal law prohibited speed limits above 55 mph
(89 km/h).
Hospitality Recreation
Related Websites:
|
{"url":"http://answerparty.com/question/answer/how-long-does-it-take-to-drive-80-miles-at-65-mph","timestamp":"2014-04-18T16:29:47Z","content_type":null,"content_length":"24260","record_id":"<urn:uuid:9af7f1bc-7b09-4f86-9e04-229d22b9522c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IntroductionStudy Area and DataRemote Sensing DataOfficial Yield StatisticsMethodsRS-Derived Biomass ProxiesStatistical ModelingResults and DiscussionOverall Predictive CapabilitiesPredicting Temporal VariabilityEffects of Data ScarcityConclusionsReferencesAppendixFigures and Tables
Wheat yield is estimated through empirical linear regression models relating RS-derived indicators of aboveground biomass to yield statistics at governorate level. Aboveground biomass is thus assumed
to be the main predictor of yield in this study area, characterized by low to moderate productivity compared to other regions of the world (e.g., European Union mean yield is above 5,000 kg/ha,
source: Eurostat). Limitations of this approach are represented by the marginal presence of high-yield irrigated crops for which grain productivity may not be linearly related to biomass and the
possible occurrence of meteorological (e.g., dry conditions, and heavy rains) or biological disturbances (e.g., diseases) affecting the crop during its late development stages and leading to yield
reduction not associated with green biomass reductions, and thus not easily detected by RS methods.
Four candidate BPs of increasing biophysical meaning have been selected from the range of existing techniques proposed to estimate vegetation biomass (see [1, 2]). The first two are computed
according to the simple and effective method used by the Centre National de la Cartographie et de la Télédétection (CNCT, Tunis) for the production of cereal production forecast bulletins. The method
assumes that the “greenness” attained at a given time of the year is a predictor of the final grain yield. This specific timing is selected by finding the NDVI (or FAPAR) dekad that provides the
highest correlation with yearly yield records (e.g., [21,22]). FAPAR is considered in the analysis in order to evaluate if it provides any improvement with respect to the vegetation index. The
retrieval of the appropriate dekad was performed at both national and governorate level. The governorate-level retrieval attempts to take into account possible phenological differences among
governorates. Hereafter, these proxies will be referred to as NDVI^x and FAPAR^x (where x indicates the selected dekad), respectively.
The last two proxies belong to the group of techniques opting for the integration of the RS indicator over an appropriate time interval (automatically retrieved of fixed a priori) rather than
selecting a single timing (e.g., [23, 24]). Such proxies are computed according to the phenologically-tuned method used by JRC-MARS to analyze the vegetation development in arid and semi-arid
ecosystems in the absence of ground measurements [25]. They represent two variations of the light use efficiency approach [26], in which the biomass production proxy is linearly related to the
integral of FAPAR and APAR, respectively. With FAPAR, the incident radiation is not considered a limiting factor. The integral is computed between the start of the growing period (start_dek), and the
beginning of the descending phase of the seasonal FAPAR profile (end_dek), which are computed for each pixel and each crop-growing season. The latter corresponds to the beginning of the senescence
phase, and roughly overlaps with anthesis. Hereafter, these proxies will be referred to as CUM[FAPAR] and CUM[APAR], respectively. Incident PAR needed to compute APAR is derived from ERA Interim and
Operational models estimate of incident global radiation produced by ECMWF (European Centre for Medium-Range Weather Forecasts), downscaled at 0.25° spatial resolution and aggregated at dekadal
temporal resolution [27]. No conversion factors (from global radiation to PAR, and from APAR to dry matter production) have been applied since the performance of linear regression models is
insensitive to linear transformations in the data.
The cumulative value is calculated, as shown in the example of Figure 3. First, satellite FAPAR data of each growing season are fitted by a Parametric Double Hyperbolic Tangent (PDHT) mathematical
model mimicking the seasonal trajectory [25]. Second, the integration limits are defined as follows: the growth phase (start_dek) starts when the value of the modeled time series exceeds the initial
base value (asymptotic model value before the growth phase) plus 5% of the seasonal growth amplitude; the decay phase (end_dek) starts when the value of the modeled time series drops below the
maximum fitted value minus 5% of the decay amplitude. Finally, such integration limits are used to compute CUM[FAPAR] (CUM[APAR]) as the integral of the modeled values (modeled values times incident
PAR) after the removal of the base FAPAR level.
All the candidate proxies are computed for each year in which RS data is available, and for each pixel in the study area. As an example, the spatial distribution of the CUM[APAR] is presented in
Figure 4, showing the North-South gradient of decreasing productivity.
Pixel level biomass proxies are then aggregated at the governorate level as the weighted average according to each pixel’s area occupied by cereals [28]. The cereal cover within the VGT pixel (Figure
1) was derived from the land cover/land use produced by the INFOTEL project at a spatial scale of 1:25,000 [29].
The conversion of biomass proxies into actual yields is not a straightforward task. First, the relationship between the two variables may vary, in functional form (e.g., from linear to logarithmic)
and in magnitude (i.e., value of coefficients), between crops and varieties, locations, and possibly from year to year. Second, both biomass proxies and yield data are prone to measurement errors.
Uncertainties in RS data result from a wide range of processes and are both systematic and random, while yield statistics may be affected by sampling and measurement biases and errors. Third, the
spatial aggregation methods used to retrieve the regional figures of the two variables may not be fully coherent. Typically, the regional average of the RS indicator is computed on a static crop mask
while the actual crop area may vary from year to year. Finally, data availability is a major concern since yield and RS time series “long enough” to allow for a reliable estimation of the conversion
parameters are often not available.
The empirical estimation of this relationship is often made through regression techniques. Models and specifications differ in the hypothesized nature of the link between the variables and in the
properties of the subsequent residuals. This is an important issue since wrong choices can lead to biased, inefficient, and inconsistent parameter estimates. The simplest and most widespread way of
modeling the relationship between yield and biomass proxies is through Ordinary Least Squares regression (OLS): Yield i , d = β 0 + β 1 * B P i , d + ε i , dwhere Yield[i,d] denotes the yield in year
i and governorate d, BP[i,d] is the biomass proxy for the same year and governorate, β[0] and β[1] are the parameters to be estimated, and ε[i,d] is the error term assumed to be Gaussian iid (0, σ[ε]
^2) (independent and identically distributed with zero mean and the same finite variance). The advantages of such a model are its simplicity, and its parsimony on the number of estimated parameters.
This specification, hereafter referred to as pooled OLS (P-OLS), assumes a constant relationship between yield and the BP in both space and time. This relation might not hold in all circumstances, in
particular with respect to spatial variation. Indeed, the harvest index may vary spatially because of different management practices, as well as water and nitrogen availability, leading to different
yields for the same amount of aboveground biomass. The typical mixture of elements within the elementary pixel (e.g., crops, bare soil, natural vegetation, water, etc.) may vary spatially, generating
differences in the relationship between the RS signal and the BP, and ultimately, with the measured yields. In addition, the relationship with vegetation indexes, such as NDVI, may change spatially
to account for external factors such as different soil reflectance or sowing practices leading to different 3D canopy structure. Finally, when considering NDVI^x and FAPAR^x (i.e., their value at a
given dekad of the year), they may refer to distinct stages of crop phenological cycle in different spatial locations, thus requiring governorate-specific tuning to estimate the final yield.
Although it is recognized that such differences are present at different geographic scales, the spatial information needed for their detailed modeling is not available. Therefore, an alternative
approach consists in estimating the yield at the governorate level (G-OLS) to account for these spatial heterogeneities: Yield i , d = β 0 , d + β 1 , d * B P i , d + ε i , dwhere β[0,d] and β[1,d]
are governorate-specific coefficients. Although this specification benefits from the fact it does not assume the BP-yield relation to be constant over space, it raises overparameterization concerns.
In fact, the number of estimated parameters is multiplied by the number of administrative units (G) present in the dataset. In the present study this means estimating 20 parameters given 128 data
points (10 governorates, 13 yearly records per governorate). An intermediate solution is then to specify either a model with a single intercept and governorate-specific slope (Equation (3)), or a
model with a single slope and governorate-specific intercepts (Equation (4)): Yield i , d = β 0 + β 1 , d * B P i , d + ε i , d Yield i , d = β 0 , d + β 1 * B P i , d + ε i , d
Both models estimate G + 1, instead of 2 × G parameters. Equation (3) refers to a model where only the slope is adjusted at the governorate level and it is named Governorate-Slope OLS (GS-OLS).
Equation (4) corresponds to a fixed effects (FE) panel model that can be expressed in the form of Equation (1), where the error term ε[i,d] is not assumed to be iid, but to have a fixed governorate
component [30]: ε i , d = u d + v i , dwhere v[i,d] is the iid(0,σ[v]^2) Gaussian error component and u[d] represents the governorate-specific unobservable effects such as those discussed above
(varying harvest index, land cover mixture within the pixel, etc.). This latter component is modeled in Equation (4) by the governorate specific intercepts. Here, it is worth noting that P-OLS is
nested within (i.e., it can be considered a restricted model of) G-OLS, GS-OLS and FE, and that the last two are nested within G-OLS. Therefore, the benefit of increases in model complexity can be
assessed with an F-test.
Although GS-OLS and FE models are more parsimonious than G-OLS, they can still suffer a significant loss of degrees of freedom from the estimation of governorate-specific parameters in datasets where
the number of governorates is large. This can be avoided if u[d] is assumed (i) to be iid(0, σ[u]^2), (ii) independent from v[i,d] and, (iii) independent from BP[i,d]. In this case, the random
effects model (RE) is suitable for a consistent, unbiased, and efficient estimation of the unobservable governorate-specific effects (see [30] for the details). The hypothesis underlying the RE model
can be tested through the Hausman Test [31] in order to determine if, or not, the FE model should be preferred to it, while the Breusch-Pagan Lagrange multiplier test (LM-test) [30] allows the
modeler to decide between the RE and the P-OLS. The use of this approach reduces the number of parameters to be estimated to 4, two for the slope and intercept, and two for the characterization of
the variances of the unobservable effects (i.e., σ[u]^2 and σ[v]^2). The estimation of the RE model and the predictions of the model has been performed based on the maximum likelihood technique
described in [30].
To take into account possible phenological differences among governorates when using NDVI^x and FAPAR^x as BPs, we also considered a G-OLS model for which the most correlated dekad x is selected at
the governorate level (named Dek-G-OLS). Note, that while this approach may be able to better adapt to the local phenology, it indirectly results in a further loss of degrees of freedom because the
retrieval of appropriate dekad is performed using the calibration data set.
All the models are assessed using a jackknife technique, leaving one year out at a time (G observations). The following prediction performance indicators are computed for predictions for the year
left out: root mean square error (RMSE), R overall 2 and R within 2. R overall 2 measures the fraction of yield variability that is explained by the model, in both the spatial and the temporal
dimensions. R within 2 aims to measure the performance of a model in reproducing the temporal variability of the data and not the spatial: R within 2 = 1 − ∑ i Y ∑ d D ( Yield i , d − Yield ^ i , d )
2 ∑ i Y ∑ d D ( Yield i , d − Yield ¯ d ) 2where Yield ^ i , d and Yield ¯ d are the yields predicted by the model in governorate d and year i and the average yield over time of governorate d. By
replacing the sum of squares of yields, present in the denominator of the second term on the right hand side when computing the R overall 2, by the sum of the squared yield deviations from the
governorate average, one measures to what extent the selected model performs better than a naïve model that every year predicts a yield that equals the governorate temporal average. The statistical
significance of the differences in R^2 across BP-statistical model combinations is tested following [32]. The test explicitly takes into account the correlation between the outputs of the models and,
consequently, does not rely on the hypothesis of independence.
Finally, as data scarcity is a major concern when modeling yields based on RS data, an analysis is run in order to understand how model performance deteriorates with respect to decreasing data
availability. We simulated increasing data scarcity in both the temporal and spatial dimensions. In the first case, jackknifed results are again reported but leaving n years out of the calibration
and predicting for those years (n × G observations). In the second case, the number of governorates included in the analysis is progressively reduced while leaving n years out for computing the
predictive performances. This analysis is of particular interest since the models used in the comparison estimate different number of parameters and are then expected to have different deterioration
To facilitate the analysis of the results, a table summarizing the acronyms used for the biomass proxies and statistical models is provided in Table A1 in Appendix.
|
{"url":"http://www.mdpi.com/2072-4292/5/2/539/xml","timestamp":"2014-04-17T18:25:10Z","content_type":null,"content_length":"98284","record_id":"<urn:uuid:085a0e13-c769-4e07-8e89-e7ca2c11bef8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Word for the Slow
So, my solution for Masak’s p1 has the distinction of being by far the least efficient working solution. Which is a shame, because I think this one was my favorite solution of the contest. It may be
slow, and I’d never advise anyone to use it for anything practical (given the other, much more efficient solutions), but in my opinion it’s a charming piece of code.
The key organizational piece for my solution is the Ordering class. (BTW, I didn’t like that name at all, it’s probably my least favorite part of my solution. My theory is Masak was too awestruck by
my inefficiency to quibble about the name.) I was striking about for how to represent a series of matrix multiplications, and hit on the idea of using a very simple stack-based language to do it. The
language has two operations: an Int represents putting the matrix with that index on the stack. The string "*" represents popping the top two matrices on the stack, multiplying them, and pushing the
result back on the stack. Here’s the code for making that happen while tracking the total number of multiplications:
method calculate-multiplications(@matrices) {
my @stack;
my $total-multiplications = 0;
for @.ops {
when "*" {
my $a = @stack.pop;
my $b = @stack.pop;
my ($multiplications, $matrix) = multiply($a, $b);
$total-multiplications += $multiplications;
when Int {
I’m an old Forth programmer from way back, and I can’t begin to say how much I love how easy p6 makes it to implement a simple stack machine!
Getting the string version of this is equally easy:
method Str() {
my @stack;
for @.ops {
when "*" {
my $a = @stack.pop;
my $b = @stack.pop;
when Int {
@stack.push("A{$_ + 1}");
This time instead of a stack of Pairs (for the matrix size), we have a stack of Str representing each sub-matrix’s name. At the end we pop the last thing on the stack, and it’s the string
representing the entire multiplication. And by making this Ordering.Str, any time you print an Ordering you get this nice string form — handy both for the desired output of the program and for
I won’t comment on the guts of the generate-orderings function, which is heavily borrowed from HOP via List::Utils. Just note that given the number of matrices, it lazily generates all the possible
permutations — both the source of my code’s elegance and its extreme inefficiency.
Oonce you’ve got the array @matrices set up, calculating and reporting the best ordering (very slowly!) is as simple as
say generate-orderings(+@matrices).min(*.calculate-multiplications(@matrices));
(Note that I split the main body of this off into a function in the actual code, to make it easier to test internally.)
So clearly, I badly need to study more dynamic programming. But at the same time, I think there may be useful bits in my code that can be put to better use somewhere else.
|
{"url":"http://justrakudoit.wordpress.com/2011/01/20/a-word-for-the-slow/?like=1&_wpnonce=68d26d17fa","timestamp":"2014-04-19T01:53:05Z","content_type":null,"content_length":"48074","record_id":"<urn:uuid:a3e0d7fe-f893-492b-b80d-79524cbf3ae7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Take each looped value and put it into an equation
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi
What I'm trying to get this to do is solve for K by incrementing i by 20 all the way up to 1000.
For each K I want to take that value, put it into the solve function and have it give me a numerical value for x. I also need x to be between 0 & 1.
|
{"url":"http://www.mathworks.com/matlabcentral/answers/54719","timestamp":"2014-04-20T00:45:26Z","content_type":null,"content_length":"23974","record_id":"<urn:uuid:b726a34b-fe4d-4d06-808a-db9cc7f24f37>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interaction categories and typed concurrent programming
- In Proc. CONCUR , 1997
"... Abstract. We study properties of asynchronous communication independently of any concrete concurrent process paradigm. We give a general-purpose, mathematically rigorous definition of several
notions of asynchrony in a natural setting where an agent is asynchronous if its input and/or output is filt ..."
Cited by 23 (2 self)
Add to MetaCart
Abstract. We study properties of asynchronous communication independently of any concrete concurrent process paradigm. We give a general-purpose, mathematically rigorous definition of several notions
of asynchrony in a natural setting where an agent is asynchronous if its input and/or output is filtered through a buffer or a queue, possibly with feedback. In a series of theorems, we give
necessary and sufficient conditions for each of these notions in the form of simple first-order or second-order axioms. We illustrate the formalism by applying it to asynchronous CCS and the core
join calculus.
- Logic Journal of the IGPL , 1996
"... . This paper is included in a series aiming to contribute to the algebraic theory of distributed computation. The key problem in understanding Multi-Agent Systems is to find a theory which
integrates the reactive part and the control part of such systems. To this end we use the calculus of flownomi ..."
Cited by 9 (2 self)
Add to MetaCart
. This paper is included in a series aiming to contribute to the algebraic theory of distributed computation. The key problem in understanding Multi-Agent Systems is to find a theory which integrates
the reactive part and the control part of such systems. To this end we use the calculus of flownomials. It is a polynomial-like calculus for representing flowgraphs and their behaviours. An
`additive' interpretation of the calculus was intensively developed to study control flowcharts and finite automata. For instance, regular algebra and iteration theories are included in a unified
presentation. On the other hand, a `multiplicative' interpretation of the calculus of flownomials was developed to study dataflow networks. The claim of this series of papers is that the mixture of
the additive and multiplicative network algebras will contribute to the understanding of distributed computation. The role of this first paper is to present a few motivating examples. To appear in
Journal of IGPL....
- Preprint Series in Mathematics, Institute of Mathematics, Romanian Academy, No. 38/December , 1996
"... . This paper is included in a series aiming to contribute to the algebraic theory of distributed computation. The key problem in understanding Multi-Agent Systems is to find a theory which
integrates the reactive part and the control part of such systems. The claim of this series of papers is that ..."
Cited by 4 (0 self)
Add to MetaCart
. This paper is included in a series aiming to contribute to the algebraic theory of distributed computation. The key problem in understanding Multi-Agent Systems is to find a theory which integrates
the reactive part and the control part of such systems. The claim of this series of papers is that the mixture of the additive and multiplicative network algebras (MixNA) will contribute to the
understanding of distributed computation. The aim of this part of the series is to make a short introduction to the kernel language FEST (Flownomial Expressions and System Tasks) based on MixNA. 1
Introduction FEST (Flownomial Expressions and System Tasks) is a kernel language under construction at UniBuc. Its main feature is a full integration of reactive and control modules. It has a clear
mathematical semantics based on MixNA. 2 Unstructured FEST programs The unstructured FEST programs freely combine control and reactive modules. The wording "unstructured" referees to the fact that
the basic s...
, 1997
"... ii COPYRIGHT ..."
"... The paper presents a simple format for typed logics with states by adding a function for register update to standard typed lambda calculus. It is shown that universal validity of equality for
this extended language is decidable (extending a well-known result of Friedman for typed lambda calculus) . ..."
Add to MetaCart
The paper presents a simple format for typed logics with states by adding a function for register update to standard typed lambda calculus. It is shown that universal validity of equality for this
extended language is decidable (extending a well-known result of Friedman for typed lambda calculus) . This system is next extended to a full fledged typed dynamic logic, and it is illustrated how
the resulting format allows for very simple and intuitive representations of dynamic semantics for natural language and denotational semantics for imperative programming. The proposal is compared
with some alternative approaches to formulating typed versions of dynamic logics. Keywords: type theory, compositionality, denotational semantics, dynamic semantics 1 Introduction A slight extension
to the format of typed lambda calculus is enough to model states (assignments of values to storage cells) in a very natural way. Let a set R of registers or storage cells be given. If we assume that
the values...
"... We use Tarski's relational calculus to construct a model of linear temporal logic. Both discrete and dense time are covered and we obtain denotational domains for a large variety of reactive
systems. Keywords : Relational algebra, reactive systems, temporal algebra, temporal logic. 1 ..."
Add to MetaCart
We use Tarski's relational calculus to construct a model of linear temporal logic. Both discrete and dense time are covered and we obtain denotational domains for a large variety of reactive systems.
Keywords : Relational algebra, reactive systems, temporal algebra, temporal logic. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2354881","timestamp":"2014-04-17T07:59:31Z","content_type":null,"content_length":"25961","record_id":"<urn:uuid:039f7dbf-dd23-4c0c-bba5-6cc6df99294b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computational physics is sometimes described as using a backhoe instead of a spade – bringing computational power to problems that are just too much work to do by hand. But there is far more to
computational physics than brute strength and heavy number crunching. There is an art to it – an art which physicists facing computational problems may or may not excel at. Fortunately for Perimeter
researchers, the Institute has a computational artist on staff.
Erik Schnetter is a Perimeter researcher – he’s done widely respected work on the gravitational aspects of black holes – but he’s also a staff member whose mandate is to collaborate with other
researchers on computationally challenging problems.
Schnetter speaks of his role as being analogous to the engineering experts that help experimentalists with their work. “When one sets up an experiment in physics, the experiment needs to be
designed,” he says. “You need to know many details about how to do things. For instance, some experimental set-ups are technically feasible, and some are not, even though the physics being tested is
the same. So you need experience to set up a successful experiment.”
“Computational physics is much the same,” Schnetter continues. “If you have a small problem, you can simply crunch through it – Mathematica and other off-the-shelf tools are there to help you. But
for complex problems, you need to have experience and put effort into designing the computational infrastructure. You need to make sure the numerical algorithms are efficient and stable and produce
good, accurate results. It’s easy to make an error with an algorithm that causes things to just explode – small errors add up across many iterations like an interest rate, like having a mortgage for
a million years. No one wants that. And that’s just one of the things that can go wrong. It gets quite complicated. You need good design from the beginning.”
Schnetter works with researchers from all of Perimeter’s diverse fields of research. “That works because though the physics is very different, the computational methods are often quite similar,” he
says. Often, he’s pulled in for hallway consultations and 10-minute chats, but he’s been part of a number of major projects, too. For example, he’s recently collaborated with Faculty member Luis
Lehner on describing transient gravity waves, with Senior Researcher Christopher Fuchs on new approaches to probability in quantum theory, and with Faculty member Bianca Dittrich on a problem in
quantum gravity.
“Most of us in quantum gravity are rather new to computational physics,” remarks Dittrich. Quantum gravity in general studies the idea that spacetime itself might be made up of small grains, like
grains of sand, which cannot be divided further. Much of the work in the field involves investigating the properties of these individual grains of spacetime – they are called, confusingly, atoms of
spacetime. Dittrich’s work involves how to get from those atoms of spacetime back to the smooth structure we know spacetime has on a large scale. That’s where computational physics can come in.
Dittrich sums up the problem: “We have microscopic models for quantum gravity, which describe the properties of a spacetime atom, but we don’t know much about what happens when you put many of these
atoms together. It’s a very complex question because it involves lots of atoms – an infinite number of atoms – and one has to consider not just their static states, but the dynamics of them. For a
long time, there’s not been much progress on this front.”
This is where computational physics can be a game-changer. Dittrich explains: “What we did was to simplify the model of individual spacetime atoms, while keeping the main dynamical ingredients. Then,
we were able to do numerical simulations involving many of these simplified atoms.” This gave the researchers a glimpse of what quantum spacetime might look like on a large scale.
“It was very helpful to have Erik to collaborate with,” says Dittrich. “He suggested ways to design a computation that led to a much more effective simulation. In the end, we were very pleased with
the simulation we did run: we saw something beyond what we expected, which has opened a new research direction for us.”
“Places that require heavy computing invariably have someone who tends to be the whiz on getting things done computationally,” reflects Luis Lehner, who recently worked with Schnetter on a team that
described what signals might be picked up by the new generation of gravitational wave observatories. “It is unusual for the role to be formalized. And it’s very unusual to have someone who can see
across many fields and disciplines in the way Erik does.”
“Computational physics is important because many fascinating, deep, fundamental problems can only be tackled computationally,” adds Lehner. There are ideas that need testing and exploring that are
“outside the reach of what’s experimentally feasible,” he says. “Along with theoretical and experimental physics, computational physics is becoming a third pillar of fundamental research.”
Computational physics, then, is not so much like a backhoe as like a telescope – a new window on the universe, a tool to see further.
|
{"url":"http://perimeterinstitute.ca/node/88087","timestamp":"2014-04-18T23:00:45Z","content_type":null,"content_length":"33129","record_id":"<urn:uuid:4b8da3e5-7877-4f41-a427-3224d37cab48>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[R] rnorm() converted to daily
Meyners, Michael, LAUSANNE, AppliedMathematics Michael.Meyners at rdls.nestle.com
Fri Apr 17 10:21:41 CEST 2009
SD in y is more than 15 times (!!!) larger than in x and z,
respectively, and hence SD of the mean y is also larger. 100,000 values
are not enough to stabilize this. You could have obtained 0.09 or even
larger as well. Try y with different seeds, y varies much more than x
and z do. Or check the variances:
var.x <- var(replicate(100, mean( rnorm( 100000, mean=0.08, sd=0.25 ))))
var.y <- var(replicate(100, mean( rnorm( 100000, mean=(0.08/252),
sd=(0.25/sqrt(252)) )) * 252))
var.z <- var(replicate(100, mean( rnorm( 100000, mean=(0.08/252),
sd=0.001 )) * 252))
I guess what you are doing wrong is assuming that 0.25/sqrt(252) is
similar to 0.25/252, the latter being close so 0.001, while the former
is obviously not. Try to replace 0.25 in y by 0.0158, than you should be
close enough.
HTH, Michael
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
On Behalf Of Ken-JP
Sent: Freitag, 17. April 2009 09:26
To: r-help at r-project.org
Subject: [R] rnorm() converted to daily
yearly 8% drift, 25% sd
why are x and y so different (x and z look ok)?
Does this have something to do with biases due to the relationship
between population and sample-estimates as a function of n samples and
sd? Or am I doing something wrong?
set.seed( 1 );
x <- mean( rnorm( 100000, mean=0.08, sd=0.25 )); set.seed( 1 ); y <-
mean( rnorm( 100000, mean=(0.08/252), sd=(0.25/sqrt(252)) )) * 252;
set.seed( 1 ); z <- mean( rnorm( 100000, mean=(0.08/252), sd=0.001 )) *
252; #
x # 0.07943898
y # 0.07109407
z # 0.07943449
View this message in context:
Sent from the R help mailing list archive at Nabble.com.
R-help at r-project.org mailing list
PLEASE do read the posting guide
and provide commented, minimal, self-contained, reproducible code.
More information about the R-help mailing list
|
{"url":"https://stat.ethz.ch/pipermail/r-help/2009-April/195342.html","timestamp":"2014-04-18T02:59:12Z","content_type":null,"content_length":"5223","record_id":"<urn:uuid:4d3ced23-0bff-41ba-80d4-b6b05db94a26>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
( resource psdoc, float x, float y, float radius, float alpha, float beta )
Draws a portion of a circle with at middle point at (x, y). The arc starts at an angle of alpha and ends at an angle of beta. It is drawn counterclockwise (use ps_arcn() to draw clockwise). The
subpath added to the current path starts on the arc at angle alpha and ends on the arc at angle beta.
Resource identifier of the postscript file as returned by ps_new().
The x-coordinate of the circle's middle point.
The y-coordinate of the circle's middle point.
The radius of the circle
The start angle given in degrees.
The end angle given in degrees.
|
{"url":"http://idlebox.net/2007/apidocs/php-manual-20070505.zip/function.ps-arc.html","timestamp":"2014-04-18T00:48:14Z","content_type":null,"content_length":"4613","record_id":"<urn:uuid:7d757229-e97a-4c4c-afcc-4782f8f49cf2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fundamental To The Decimation In Time FFT Algorithm ... | Chegg.com
Image text transcribed for accessibility: Fundamental to the decimation in time FFT algorithm is the fact that an N-length signal x[n] can be split into a weighted sum of 2 N/2-sized DFTs. Suppose
you have a 4-point signal x[n]. It is split by even/odd indices: the two even ones to g[n] and the two odd ones to h[n]. The next step of the algorithm finds the DFTs of g[n] and h[n], and then uses
that to find X[k], the DFT of x[n]. If you are given: G[0] = a, G[1] = 6, H[0] = c, H[1] = d , then what is X[3]?
Electrical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/fundamental-decimation-time-fft-algorithm-fact-n-length-signal-x-n-split-weighted-sum-2-n--q3128010","timestamp":"2014-04-20T18:52:42Z","content_type":null,"content_length":"20747","record_id":"<urn:uuid:bb84319e-883b-442a-96b8-efd286b07f48>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hiram, GA Algebra 2 Tutor
Find a Hiram, GA Algebra 2 Tutor
...I also took College Algebra my Senior year, along with Advanced Algebra and Trigonometry. I know that Pre-Algebra, Algebra 1 and 2 provide a strong foundation for all other math courses taken,
both in high school and college. I will focus in on what the main problem areas are, and what they stem from.
38 Subjects: including algebra 2, English, SAT math, reading
I have been a teacher for seven years and currently teach 9th grade Coordinate Algebra and 10th grade Analytic Geometry. I am up to date with all of the requirements in preparation for the EOCT.
I am currently finishing up my masters degree from KSU.
4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...Although high schools now call it "Math 1" or "Math 2" (I've successfully taught both courses) the algebra remains the same. I can explain any algebra topic simply and clearly, though some
topics may require a review of related basic skills. I've taught high school and college algebra for over 10 years.
13 Subjects: including algebra 2, calculus, algebra 1, SAT math
...This makes it much easier to understand, and maybe even more important, it makes it a lot more interesting and fun for the student. I love standardized tests and have scored within the 99th
percentile for all tests I tutor. I have been able to consistently help my students increase their SAT sc...
19 Subjects: including algebra 2, physics, calculus, geometry
...While enjoying the classroom again, I also passed 6 actuarial exams covering Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this spectrum of
mathematics, from high school through post baccalaureate, which I feel most comfortable tutoring. I also became even more proficient with Microsoft Excel, Word, and PowerPoint.
21 Subjects: including algebra 2, calculus, statistics, geometry
Related Hiram, GA Tutors
Hiram, GA Accounting Tutors
Hiram, GA ACT Tutors
Hiram, GA Algebra Tutors
Hiram, GA Algebra 2 Tutors
Hiram, GA Calculus Tutors
Hiram, GA Geometry Tutors
Hiram, GA Math Tutors
Hiram, GA Prealgebra Tutors
Hiram, GA Precalculus Tutors
Hiram, GA SAT Tutors
Hiram, GA SAT Math Tutors
Hiram, GA Science Tutors
Hiram, GA Statistics Tutors
Hiram, GA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Hiram_GA_algebra_2_tutors.php","timestamp":"2014-04-21T07:25:59Z","content_type":null,"content_length":"24018","record_id":"<urn:uuid:0b33767f-a2fe-437c-9ed9-1c951ede5435>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Long? How Wide? How Tall? How Deep?
In this lesson, students use historical nonstandard units (digits, hand, cubit, yard, foot, pace, fathom) to estimate the lengths of common objects and then measure using modern standard units. They
will discover the usefulness of standardized measurement units and tools.
Many students have not had enough experiences with nonstandard units and therefore have an incomplete understanding of measurement. This lesson provides more of these experiences as well as a bridge
into familiar standard units of measuring length. Interested teachers could also connect this lesson to information about measurement in many ancient cultures.
To begin the lesson, read How Big Is a Foot? to students. This amusing story tells of a king who wants to have a bed made just the right size for his queen. He measures her width and length with his
king-size feet. The job of building the bed falls to a little apprentice who carefully uses the king's dimensions, but uses his little feet as the unit. Students enjoy explaining why the bed turns
out to be too small for the queen and posing solutions to the dilemma.
Explain to the students that, although this is a fictional story, it is based upon fact. Our standard unit of measure, the foot, actually did come from making a model of a king's foot; and the
standardized tool became known as a "ruler." Show a ruler so students can imagine a king's foot.
Have each student trace around his or her shoe on construction paper and cut out about six of these paper feet. Tape them heel to toe. Let the students use this new "six-foot" measure to find and
record the length of common objects around the room.
After about ten minutes, lead the class in a discussion, comparing their measurements. Chart the data to use as a visual reference. Ask questions that help students compare their findings, for
• Who measured the height of the desk? What did you find?
• Who found a different measurement for the height of the desk?
• Why do you think it was different from ____'s?
• Is the desk really taller for ____ than for ____?
Show the students a variety of rulers (wooden, plastic, metal). Ask, does anyone have an idea about why we use rulers instead of paper feet taped together? Enjoy the idea-sharing! Note levels of
thinking, reasoning, and creativity.
Then, explain that inches began in medieval England and were based upon the width of the human thumb. Thumbs were excellent measuring tools because even the poorest individuals had them available
when they went to market.
Ask students to draw, along the edge of their construction paper, a line equal to the width of their thumbs. Cut the edge off the paper (about an inch wide), and accordion-fold the strip to show
12 student "inches."
Have students compare the length of their 12 inches to the tracing of their shoes. Share observations. (Note: 12 student inches should be about the same as 1 student foot.) Explain that body
measurements were probably the most convenient references for length measurement long ago.
Distribute the Body Parts Activity Sheet. Define, model, and have students repeat each of the body measurements on the chart.
With partners, have students measure and record the lengths of their own digits, hands, cubits, yards, and fathoms.
After about ten minutes, call students together to discuss the term "cubit." The cubit was devised by the Egyptians about 3000 BC, and is generally regarded as the most important length standard in
the ancient Mediterranean world. The Egyptians realized that a standardized cubit was necessary in order for measurements to be fair, so a master "royal cubit" was made of black granite. The present
system of comparing units of measure with a standard physical tool (such as a ruler or yardstick) follows directly from this Egyptian custom.
Ask for a volunteer and attempt to measure his or her height using your forearm (cubit). Ask for solutions to the difficulty and awkwardness. [One solution should be to make a model that is the
length of your own cubit.] Direct students to make a model of their cubits using either string, ribbon, adding machine tape, or interlocking cubes. Have partners check for accuracy.
Have students duplicate their cubit models and use them to estimate, measure, and record the height of several classmates. At the end of the activity (about ten minutes), have students share ideas of
which models worked best for measuring height.
• String, ribbon, adding machine tape, interlocking cubes
• Tools for measuring length (rulers, yardsticks, retractable and folding measuring tapes, trundle wheels)
• Construction paper
• How Big Is a Foot? by Rolf Myller
1. Collect the Body Parts Activity Sheet. Note whether data was complete and reasonable.
2. Have students record answers to key questions in math logs. Note whether students were able to explain their thinking or had insights about the mathematics.
Questions for Students
1. What did you learn, notice, or wonder about when measuring with nonstandard units (body parts)?
[Students may note that it was tricky using one unit over and over again, or that they got different answers each time they measured. They may even say using a ruler is better because it's not as
embarrassing as a cubit!]
2. What were some interesting words (vocabulary) you used in this lesson?
[Possible answers: cubit, apprentice, standardized, and ruler (as another name for "King").]
3. Why is it important to estimate before actually measuring?
[To make your answer reasonable, to catch errors.]
4. Explain, in your own words, why standardized units and tools are important when measuring.
[So you get the same answer every time, other people will get the same answer as you, and so all projects turn out the same.]
5. Can you ever get an exact measurement of length? Why or why not?
[You can get closer and closer, but you'll never get an exact measurement. Tools and units can get very accurate, but things you're measuring might be floppy or squishy.]
Teacher Reflection
• How did the students demonstrate understanding of the materials presented?
• What were some of the ways that the students demonstrated that they were actively engaged in the learning process?
• What worked with classroom behavior management? How would you change what didn’t work?
Learning Objectives
Students will:
• Become familiar with the language/vocabulary of measurement
• Gain an understanding of measuring length by estimating, making comparisons, handling materials to be measured, and measuring with tools
• Understand that all measurements are approximations
• Understand the need for measuring with standard units
Common Core State Standards – Mathematics
Grade 3, Measurement & Data
• CCSS.Math.Content.3.MD.B.4
Generate measurement data by measuring lengths using rulers marked with halves and fourths of an inch. Show the data by making a line plot, where the horizontal scale is marked off in appropriate
units-- whole numbers, halves, or quarters.
|
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=2120","timestamp":"2014-04-20T10:48:47Z","content_type":null,"content_length":"71187","record_id":"<urn:uuid:113e952c-f1c4-4f3b-9252-0ef2ac838518>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This challenge question kinda reminds me of Mickey Mouse
Yesterday I went to a museum and waited in line for a very long time for my turn to spend 5 minutes staring at a piece of art that completely mystified me. I guess it’s good art if I’m still thinking
about it a day later, even if the thoughts I’m having mostly revolve around my own resignation that I will never really understand art. But it wasn’t a complete loss! There was a circular pattern on
the floor that I spent a lot of time staring at while in line, and it ended up inspiring this challenge question.
As always, first correct answer in the comments will win a Math Guide. All the usual contest rules apply: previous winners can’t win; if you live outside the US you have to pay for shipping; etc.
In the figure above, two congruent circles are tangent at point D. Points D, E, and F are the midpoints of AB, AC, and BC, respectively. If AB = 12, what is the area of the shaded region?
Good luck!
UPDATE: Congratulations to John, who got it first. Solution below the cut…
When we’re asked to solve for the areas of weirdly shaped shaded regions, we’re almost always going to find the area of a larger thing that we know how to calculate, and then subtract small things we
know how to calculate until we’re left with the weird shaded bit:
A[whole] – A[unshaded] = A[shaded]
The first thing we should do is mark this bad boy up. We know AB = 12, and D is the midpoint of AB and also the endpoint of two radii. We also know E and F are endpoints of two radii, and midpoints
of AC and BC, respectively.
At this point, we actually know a great deal. First, we know the radius of each circle is 6. That means each circle has an area of π(6)^2 = 36π. We’ll come back to this in a minute.
It should also be obvious that ABC is an equilateral triangle. This is awesome, because equilateral triangles are easily broken into 30º-60º-90º triangles, which is what we’ll do to find the
triangle’s area.
So triangle ABC has a base of 12 and a height of 6√3.
Now that we have that, all we need to do is subtract the areas of the circle sectors (in green below) that aren’t included in the shaded region.
Areas of sectors are easy to calculate. All we do is figure out what fraction of the whole circle the sector covers by using the central angle. In this case, the angles are 60º, so we’re dealing with
60/360 = 1/6 of each circle.
We need to subtract two sectors from the area of triangle ABC to find our shaded region:
And there you have it! Cool, right?
|
{"url":"http://pwnthesat.com/wp/2013/09/this-challenge-question-kinda-reminds-me-of-mickey-mouse/","timestamp":"2014-04-18T15:38:28Z","content_type":null,"content_length":"68457","record_id":"<urn:uuid:a2b3e483-0885-48b8-9e6a-a9038f3d9781>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sylow Subgroups
up vote 14 down vote favorite
I had been looking lately at Sylow subgroups of some specific groups and it got me to wondering about why Sylow subgroups exist. I'm very familiar with the proof of the theorems (something that
everyone learns at the beginning of their abstract algebra course) -- incidentally my favorite proof is the one by Wielandt -- but the statement of the three Sylow theorems still seems somewhat
miraculous. What got Sylow to imagine that they were true (especially the first -- the existence of a sylow subgroup)? Even the simpler case of Cauchy's theorem about the existence of an element of
order $p$ in a finite subgroup whose order is a multiple of $p$ although easy to prove (with the right trick) also seems a bit amazing. I believe that sometimes the hardest part of a proving a
theorem is believing that it might be true. So what can buttress the belief for the existence of Sylow subgroups?
gr.group-theory soft-question
1 Have you ever seen the proof of Sylow in the excercises to Jacobson's Basic Algebra I? It's one of the slickest proofs I've ever seen (I mean, I did the problem, but he leads you to this slick
solution.) – Harry Gindi Mar 19 '10 at 5:24
1 @fpqc - I think you're talking about Gallagher's proof of a generalization: the number of subgroups of order $p^k$, where $p^k$ divides $|G|$, is 1 mod p. – Steve D Mar 19 '10 at 5:51
3 Qiaochu: That the existence of Sylow subgroups is true for abelian group doesn't strike me as a good reason to expect it to be true in general finite groups. In a finite abelian group there is a
subgroup of every size which divides the size of the group. That's certainly not true in finite groups in general. And I don't think Sylow could have been inspired by any analogy with eigenspace
decompositions; the question asked what Sylow's motivation was. – KConrad Mar 19 '10 at 18:59
1 I just looked at the very nice paper by Waterhouse "The Early Proofs of Sylow's Theorem" which, among other things gives Sylow's original proof which is quite nice, and in fact corresponds to the
way that modern computer algebra packages find Sylow Subgroups. It also refers to a (by now obscure) result of Cauchy, where, among other things he gives an explicit construction for the p-Sylow
subgroups of $S_n$. – Victor Miller Mar 19 '10 at 19:33
1 Also take a look at Rod Gow's "Sylow's Proof of Sylow's Theorem", Irish Math. Soc. Bulletin 33 (1994), 55--63. – KConrad Mar 19 '10 at 20:14
show 5 more comments
4 Answers
active oldest votes
Victor, you should check out Sylow's paper. It's in Math. Annalen 5 (1872), 584--594. I am looking at it as I write this. He states Cauchy's theorem in the first sentence and then says
"This important theorem is contained in another more general theorem: if the order is divisible by a prime power then the group contains a subgroup of that size." (In particular, notice
Sylow's literal first theorem is more general than the traditional formulation.) Thus he was perhaps in part inspired by knowledge of Cauchy's theorem.
up vote 16
down vote Sylow also includes in his paper a theorem of Mathieu on transitive groups acting on sets of prime-power order (see p. 590), which is given a new proof by the work in this paper.
accepted Theorems like Mathieu's may have led him to investigate subgroups of prime-power order in a general finite group (of substitutions).
Thanks, I'm looking at it now. I also found a paper by Scharlau "The Discovery of the Sylow Theorems" (in German). The first paragraph of the Math. Reviews review is: The author
evokes primarily the works of Sylow that antedate 1872—the date of the first publication of Sylow’s famous theorems —and proposes to provide answers to the following questions: How
did Sylow arrive at the formulation of his theorems, what were the mathematical techniques at his disposal, what was the influence of his contemporaries, and what hopes did Sylow
have for his work? – Victor Miller Mar 19 '10 at 21:06
What I like about Sylow's paper is that he uses Cauchy's theorem of the existence of an element of order $p$ to "boot strap" up to a maximal $p$ subgroup. Once one accepts Cauchy's
1 result it makes clear why you would expect such subgroups to exist. Since Cauchy proved it by giving an explicit construction for $S_n$ Frobenius' proof is interesting since it gives
a way of transferring that construction to a general finite group. – Victor Miller Mar 19 '10 at 21:34
add comment
The Sylow theorems are finite group analogues of a bunch of results about "maximal unipotent subgroups" in algebraic groups. Basically, the Sylow subgroups play a role analogous to the role
played by the maximal unipotent subgroups.
In the case where the group is the general linear group, the maximal unipotent subgroup can be taken as the group of upper triangular matrices with 1s on the diagonal, for instance. There
are existence, conjugacy, and domination results for these analogous to the existence, conjugacy, and domination part of Sylow's theorems: maximal unipotents exist, every unipotent is
contained in a maximal unipotent, all maximal unipotents are conjugate. The role analogous to "order" is now played by "dimension".
The normalizer of the Sylow subgroup plays the role of the maximal connected solvable subgroup, also called the Borel subgroup (see Borel fixed-point theorem and Lie-Kolchin theorem). In
the case of the general linear group, this is the group of upper triangular invertible matrices.
up vote 9
down vote There are similar results for Lie algebras too, basically arising from Engel's theorem and Lie's theorem.
In fact, much of the study of simple groups and their geometry relies on this geometric interpretation of Sylow subgroups, p-subgroups, and their normalizers. This deeper study of the
geometry/combinatorics of simple groups is called local analysis in group theory and is closely related to the recently popular topic of "fusion systems" which are essentially studying the
conjugation action of a group on subgroups of a particular Sylow subgroup.
ADDED BASED ON COMMENT BELOW: For a finite field $F_q$ where q is a power of p, the maximal unipotent subgroup of $GL_n(F_q)$ is the $p$-Sylow subgroup. I had originally intended to mention
this, but forgot.
All this, though, cannot have been what suggested the Sylow theorem to Sylow! – Mariano Suárez-Alvarez♦ Mar 26 '10 at 2:02
Yes, that's true, because Sylow was thinking of things in terms of permutation groups, and to the best of my knowledge did not explore the connection with linear groups or algebraic
groups. Still, it's worth noticing that there are multiple reasons why one might suspect a statement to be true, or at least plausible. – Vipul Naik Mar 26 '10 at 2:08
1 That's a very good point! By the way, something that you probably thought too obvious to mention when revealing this remarkable analogy, is that for GL_n(F_p), maximal unipotent
subgroups ARE Sylow subgroups :-) – Vladimir Dotsenko Mar 26 '10 at 11:25
Yes, I had forgotten to mention that. Not just GL_n(F_p), but also GL_n(F_q) where q is a power of p. – Vipul Naik Mar 26 '10 at 13:30
add comment
An extension of the Vipul's ideas can be found in the article (couldn't find a link to the pdf with google)
Subgroup complexes by Peter Webb, pp. 349-365 in: ed. P. Fong, The Arcata Conference on Representations of Finite Groups, AMS Proceedings of Symposia in Pure Mathematics 47 (1987).
But as Mariano already commented, the analogy to the maximal unipotent subgroups of the general linear group was probably not Sylow's motivation. As commented before, he was maybe looking for
maximal $p$-subgroups (i.e., maximal with respect to be a $p$-subgroup).
This is also the leitmotif of my favorite proof of the Sylow theorems given by Michael Aschbacher in his book Finite Group Theory. It is based on Cauchy's theorem (best proved using
up vote J.H.McKay's trick to let $Z\_p$ act on the set of all $(x\_1, \dots, x\_p) \in G^p$ whose product is $1$ by rotating the entries) and goes essentially like this:
3 down
vote The group $G$ acts on the set $\mathrm{Syl}\_p(G)$ of its maximal $p$-subgroups by conjugation. Let $\Omega$ be a (nontrivial) orbit with $S\in\Omega$. If $P$ is a fixed point of the action
restricted to $S$ then $S$ normalizes $P$ and $PS=SP$ is a $p$-group. Hence $P=S$ by maximality of both $P$ and $S$, and $S$ has a unique fixed point. As $S$ is a $p$-group, all its orbits
have order $1$ or a multiple of $p$, in particular $|\mathrm{Syl}\_p(G)| = 1 \bmod p$. All orbits of $G$ are disjoint unions of orbits of $S$ proving $\Omega = 1 \bmod p$ and $\Omega' = 0 \
bmod p$ for all other orbits $\Omega'$ of $G$. This implies that $\Omega = \mathrm{Syl}\_p(G)$, as $\Omega$ was an arbitrary nontrivial orbit of $G$, showing that the action of $G$ is
transitive. The stabilizer of $S$ in $G$ is its normalizer $N\_G(S)$, and as the action is transitive $|G:N\_G(S)| = |\mathrm{Syl}\_p(G)| = 1 \bmod p$. It remains to show that $p$ does not
divide $|N\_G(S):S|=|N\_G(S)/S|$. Otherwise, by Cauchy's theorem there exists a nontrivial $p$-subgroup of $N\_G(S)/S$ whose preimage under the projection $N\_G(S) \to N\_G(S)/S$ is
$p$-subgroup properly containing $S$ contradicting the maximality of $S$.
add comment
I don't know if this was the original motivation, but this has some interesting motivating ideas: Abstract nonsense versions of "combinatorial" group theory questions
up vote 2 down vote
Thanks for that reference. It looks very interesting. – Victor Miller Mar 19 '10 at 13:40
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory soft-question or ask your own question.
|
{"url":"http://mathoverflow.net/questions/18716/sylow-subgroups/18718","timestamp":"2014-04-18T03:20:06Z","content_type":null,"content_length":"81196","record_id":"<urn:uuid:66b9a544-4cf5-49f0-bebf-70c0b3a86b4d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Swimming Pool Heater Guide
What Size Heater Do You Need?
Before you rush off madly spending money, it is a good idea to figure out what heating capacity you need for your swimming pool. Following the instructions following on this page you will be able to
calculate the volume of a pool and work out a rough minimum heating capacity for your pool. Going for a larger capacity heater will result in a shorter heating time and less wear and tear on the
heater. The energy consumption will be little changed so don’t opt for a smaller heater to save dollars – it doesn’t work that way. A larger heater overall costs little more to run than a smaller
So into the nasty mathematical stuff. A BTU or British Thermal Unit is the standard for measuring heating capacity and most pool heaters will be described in terms of their BTU output. What we will
do first is work out the number of BTUs you will need to increase the water temperature of your pool by 1^oF in one day. From that we can easily work out the rest. So some things to work out.
Pool Volume in Gallons
If you already know the capacity of your pool that’s great, if you don’t it’s not hard to get a rough idea. Since pools are not all the same shape we will have to work things a little differently for
the different shapes. If you can contact the pool builder, they should be able to give you an accurate volume for your pool and this is the best way to go.
Rectangular or square – pool length (in feet) x pool width (in feet) (this gives surface area of pool) x average depth (in feet) x 7.5
The average depth will be close enough if you add the shallowest depth to the deepest depth then divide by 2. The 7.5 gives us the answer in gallons instead of cubic feet (1 cubic foot = 7.48 gallons
So, for example, your pool is 20′ x 40′ and is 4′ in the shallow end and 6′ in the deep end.
You have an average depth of 5′ (4 + 6 = 10 divided by 2 = 5).
So 20 x 40 (= surface area of 800 sq ft) x 5 x 7.5 = 30,000 gallons.
Round – half pool width x half pool width x 3.14 (this gives surface area) x average depth x 7.5
For example, your pool is 30′ wide, 4′ at shallowest and 6′ at deepest.
Your average depth is 5′ (4 + 6 = 10 divided by 2 = 5)
So you have 15 x 15 x 3.14 (gives 706 sq feet area) x 5 x 7.5 = 26,493 gallons (we will round these figures so 26,500 gallons is good).
Oval – length x width x average depth (giving area) x 7.5 x 0.9
The 0.9 will help give the approximate volume – we don’t need to be exact.
Your pool is 30′ long, 15′ wide, 3′ at shallowest point and 6′ at deepest.
Average depth is 3 + 6 = 9 divided by 2 = 4.5′
So we get to 30 x 15 (x 0.9 = 405 sq ft surface area) x 4.5 x 7.5 x 0.9 = 13,669 gallons (13,700).
Kidney shape – pretty close to an oval pool – width x length (x 0.95 = surface area) x average depth x 7.5 x 0.95
For example, you have a width (edge to edge) of 15′ and length of 30′.
You have a shallow of 4′ and a deepest point at 6′ so average depth is 4 + 6 = 10 divided by 2 = 5′
Then we go 15 x 30 ( x 0.95 = 427 sq ft area) x 5 x 7.5 x 0.95 = 16,031 gallons.
Irregular shapes – sometimes you will be able to come to a reasonable estimate by looking at the shape and breaking it down into a few shapes joined together and then work out a volume for each
section and total them up. The best way would be to approach the builder of your pool as they should have a record and be able to inform you of the volume.
Heating Your Pool
Now you have the volume of your pool we can get to the nitty gritty of how much heating capacity you will need. There are two heating conditions you will need to be aware of - heating from cold and
maintaining the desired temperature.
Heating from cold – this is the initial heating phase when you will be bringing your pool from cold to your desired temperature. If you will be using your pool continuously this will not need to
happen often as you will reach your temperature and then maintain that temperature. If you will be using your pool occasionally then this will be done more often – gas heaters are usually better for
this. Now back to the maths class!
Fistly, you need to know 1 BTU is the heat energy required to raise 1 pound of water by 1^oF and there are 8.33 pounds of water in 1 gallon.
So Volume x 8.33 = total pounds of water then divide by 24 to give BTUs per hour required to heat your pool 1 degree F in a day.
For example, 30,000 x 8.33 = 249,900 then divide by 24 = 10,412 BTUs per hour for a 1 degree rise in 24 hours.
Now you need to measure the temperature of your pool. This you will subtract from your desired temperature (usually around 80-84^oF) to find the rise in temperature required. Suppose your pool is at
60^oF and you want it up to 80^oF then 80 – 60 = 20^oF rise.
Now multiply BTUs for a 1^oF rise by the rise required to get BTUs per hour.
For the 30,000 (20′ x 40′ pool) gallon example: 10,412 x 20 = 208,240 BTUs per hour.
Another example, if your pool is 15,000 gallons: 15,000 x 8.33 = 124,950 divide by 24 = 5,206
Pool is at 55^oF and required temp is 82^oF so rise is 82 – 55 = 27.
BTUs per hour required are: 27 x 5,206 = 140,562 BTUs.
It is a good idea to add an extra 15-20% to the heating capacity you arrive at to make up for inefficiencies in the system. Remember, the BTU figure we have arrived at here is to achieve the
temperature rise over one day. If you are going to do it in two days the per hour requirement is halved and if you want to get there in half a day then it is doubled. Also remeber your pool will be
losing some of the heat it has gained due to natural losses such as evaporation so the desired temperature may take a bit longer. A pool cover will help stop this heat loss by about 50-80% so the
benefits of a pool cover are obvious.
Maintaining desired temperature – you will require about 10 BTU per hour per sqaure foot of pool surface for each degree your pool temperature is above the air temperature. If we look at our 20′ x
40′ pool we have a surface area of 800 sq feet. If the air temperature is 60^oF and you wish to keep your pool at 80^oF then we have 80 – 60 = 20 x area of 800 = 16,000 x BTU per sq foot of 10 =
160,000 BTU per hour. With the addition of a pool cover this will drop dramatically.
Take the oval pool above for a second example. We have a surface area of 405 sq feet so with a temperature 20^oF above the unheated temperature we have 405 x 10 x 20 = 81,000 BTU per hour. Again,
drop this figure 50-80% with the addition of a pool cover.
And that is pretty well it. When all is said and done, a bigger capacity heater is always better than a lower capacity one. The energy cost difference is slight but wear and tear is significant. For
a pool which is not used often and will be heated from cold rather than maintained at a constant temperature, gas heaters are the best choice. Use the figures you obtained through your calculations
from above as a minimum rating for a heater.
Pick A Heater
Looking at heaters, if we take the 20′ x 40′ pool from above then we need to find a heater which is capable of at least 210,000 BTU per hour. Remember that a higher capacity heater will take less
time to do the same job and not use a great deal more fuel. To maintain 80^oF the heater would be running almost full time where a heater around 400,000 BTU per hour would run for about 30 minutes
per hour.
Looking at gas heaters, a 400,000 BTU unit would cost about $2,000 to $2,500 depending on which brand you chose. One thing to be wary of is how the manufacturer states the BTU capacity of the heater
– some use input rather than output capacity. If the input capacity is quoted then you must look at the efficiency rating. Say the heater is rated at 80% efficiency then the output capacity is about
80% of the input capacity. For example, a heater is rated at 400,000 BTU input and 80% efficiency then multiply 400,000 by 0.8 to give an output capacity of 320,000 BTU per hour.
With heat pumps, the normal maximum output is about 130,000 BTU per hour and the cost is between $3,000 and $4,000 for the unit. The big advantage with heat pumps is they cost about a third to a half
of the cost of a gas heater to operate. Some heat pumps are set up like reverse cycle airconditioners – that is, they can even cool your pool on those days when you jump in and get hotter! These
models generally cost about $1,000 more than the plain heating models. A heat pump with capacity of 130,000 BTU/hour will take about 48 hours to heat the 30,000 gallon (20′ x 40′) pool in the
examples above and would need a cover to be able to maintain the temperature.
Solar pool heaters are a different kettle of fish. Some manufacturers will quote a BTU per hour rating but there are too many variables to find an accurate result. Most solar pool systems will
recommend about 50-80% of the pools surface area in solar panel area. So the 20′ x 40′ pool with 800 sq feet will need about 400-640 sq feet of solar panel. If the panels are 4′ x 10′ this means
about 10 to 16 panels. Solar systems are best suited to extending your swimming season rather than trying to heat a cold pool to a comfortable temperature. If you want to spend some money to save
some money, a combination of solar and either gas or heat pump would give a good year round result.
|
{"url":"http://poolheaterguide.com/what-size-heater-do-you-need/","timestamp":"2014-04-19T17:51:52Z","content_type":null,"content_length":"31924","record_id":"<urn:uuid:ba758b48-9060-4fa9-8a83-bd336ebbf130>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Seat Pleasant, MD Precalculus Tutor
Find a Seat Pleasant, MD Precalculus Tutor
...As a camp counselor, I interacted with the children in Spanish and gave English lessons as well. As an undergraduate at Duke University, I took organic chemistry and received a high A in the
class. I have an in depth understanding the material and am more than capable of explaining the concepts and mechanisms.
17 Subjects: including precalculus, Spanish, writing, physics
...Some need homework help, others a plan to backtrack and review algebra. Some need to see every step in a solution while others just need a little explaining for a concept to make sense. Some
lack confidence, others lack motivation.
9 Subjects: including precalculus, calculus, physics, geometry
...I have taken the ACT previously and I have a mastery of all of the concepts covered in the ACT Math section, reinforced by my math classes throughout my chemistry degree. I have a master's
degree in chemistry from American University, and I provided independent tutoring for organic chemistry whi...
11 Subjects: including precalculus, chemistry, geometry, algebra 2
...While getting my PhD in Physics at the University of Florida, I frequently had homework assignments where this subject was extensively used. I have taken this class when I was an undergrad at
the Colorado School of Mines and got an A. I can provide a transcript of verification if necessary.
13 Subjects: including precalculus, chemistry, calculus, physics
...I have used linear algebra in my work as an electrical engineer for many years. As an electrical engineer for over 50 years, I have used MATLAB in my work for building and evaluating
mathematical models of real systems. In addition, I am a part time professor at Johns Hopkins University where I've been teaching a course in Microwave Receiver design for over 20 years.
17 Subjects: including precalculus, English, calculus, ASVAB
Related Seat Pleasant, MD Tutors
Seat Pleasant, MD Accounting Tutors
Seat Pleasant, MD ACT Tutors
Seat Pleasant, MD Algebra Tutors
Seat Pleasant, MD Algebra 2 Tutors
Seat Pleasant, MD Calculus Tutors
Seat Pleasant, MD Geometry Tutors
Seat Pleasant, MD Math Tutors
Seat Pleasant, MD Prealgebra Tutors
Seat Pleasant, MD Precalculus Tutors
Seat Pleasant, MD SAT Tutors
Seat Pleasant, MD SAT Math Tutors
Seat Pleasant, MD Science Tutors
Seat Pleasant, MD Statistics Tutors
Seat Pleasant, MD Trigonometry Tutors
Nearby Cities With precalculus Tutor
Berwyn Heights, MD precalculus Tutors
Bladensburg, MD precalculus Tutors
Brentwood, MD precalculus Tutors
Capitol Heights precalculus Tutors
Cheverly, MD precalculus Tutors
Colmar Manor, MD precalculus Tutors
Cottage City, MD precalculus Tutors
District Heights precalculus Tutors
Edmonston, MD precalculus Tutors
Fairmount Heights, MD precalculus Tutors
Glenarden, MD precalculus Tutors
Landover Hills, MD precalculus Tutors
Mount Rainier precalculus Tutors
North Brentwood, MD precalculus Tutors
Suitland precalculus Tutors
|
{"url":"http://www.purplemath.com/Seat_Pleasant_MD_precalculus_tutors.php","timestamp":"2014-04-17T15:38:16Z","content_type":null,"content_length":"24660","record_id":"<urn:uuid:58f94015-0a95-48ff-8782-09fae01bd4b5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This challenge question kinda reminds me of Mickey Mouse
Yesterday I went to a museum and waited in line for a very long time for my turn to spend 5 minutes staring at a piece of art that completely mystified me. I guess it’s good art if I’m still thinking
about it a day later, even if the thoughts I’m having mostly revolve around my own resignation that I will never really understand art. But it wasn’t a complete loss! There was a circular pattern on
the floor that I spent a lot of time staring at while in line, and it ended up inspiring this challenge question.
As always, first correct answer in the comments will win a Math Guide. All the usual contest rules apply: previous winners can’t win; if you live outside the US you have to pay for shipping; etc.
In the figure above, two congruent circles are tangent at point D. Points D, E, and F are the midpoints of AB, AC, and BC, respectively. If AB = 12, what is the area of the shaded region?
Good luck!
UPDATE: Congratulations to John, who got it first. Solution below the cut…
When we’re asked to solve for the areas of weirdly shaped shaded regions, we’re almost always going to find the area of a larger thing that we know how to calculate, and then subtract small things we
know how to calculate until we’re left with the weird shaded bit:
A[whole] – A[unshaded] = A[shaded]
The first thing we should do is mark this bad boy up. We know AB = 12, and D is the midpoint of AB and also the endpoint of two radii. We also know E and F are endpoints of two radii, and midpoints
of AC and BC, respectively.
At this point, we actually know a great deal. First, we know the radius of each circle is 6. That means each circle has an area of π(6)^2 = 36π. We’ll come back to this in a minute.
It should also be obvious that ABC is an equilateral triangle. This is awesome, because equilateral triangles are easily broken into 30º-60º-90º triangles, which is what we’ll do to find the
triangle’s area.
So triangle ABC has a base of 12 and a height of 6√3.
Now that we have that, all we need to do is subtract the areas of the circle sectors (in green below) that aren’t included in the shaded region.
Areas of sectors are easy to calculate. All we do is figure out what fraction of the whole circle the sector covers by using the central angle. In this case, the angles are 60º, so we’re dealing with
60/360 = 1/6 of each circle.
We need to subtract two sectors from the area of triangle ABC to find our shaded region:
And there you have it! Cool, right?
|
{"url":"http://pwnthesat.com/wp/2013/09/this-challenge-question-kinda-reminds-me-of-mickey-mouse/","timestamp":"2014-04-18T15:38:28Z","content_type":null,"content_length":"68457","record_id":"<urn:uuid:a2b3e483-0885-48b8-9e6a-a9038f3d9781>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] [OpenOpt] evaluation of f(x) and df(x)
Emanuele Olivetti emanuele@relativita....
Mon Jul 21 09:04:35 CDT 2008
Hi Dmitrey,
I do not understand if your message answers my question so let me
reformulate. I have the exact gradient df(x) implemented in my code
so I don't use finite differences.
In my problem, In order to compute the gradient df(x=x1), I'd like to
take advantage of intermediate results of f(x=x1)'s compuation.
The re-use of these results is trivial to implement if
the sequence of function calls made by OpenOpt is, e.g., like this:
f(x0), df(x0), f(x1), f(x2), df(x2), f(x3), df(x3).... . Instead the
implementation could become quite difficult if the sequence would
be like this: f(x0), f(x1), df(x0), f(x2), f(x3), f(x4), df(x3),...
(i.e., the sequence of f / df is not evaluated on the same values).
Is OpenOpt working as in the first case?
dmitrey wrote:
> Hi Emanuele,
> if df(x1) is obtained via finite-difference calculations then f(x1) is
> stored and compared during next call to f / df, and vice versa: if f(x1)
> is called then the value obtained is stored and compared during next
> call to f and/or finite-difference df.
> At least it is intended so, I can take more precise look if you have
> noticed it doesn't work properly.
> Regards, D.
> Emanuele Olivetti wrote:
>> Dear All and Dmitrey,
>> in my code the evaluation of f(x) and df(x) shares many
>> intermediate steps. I'd like to re-use what is computed
>> inside f(x) to evaluate df(x) more efficiently, during f(x)
>> optimization. Then is it _always_ true that, when OpenOpt
>> evaluates df(x) at a certain x=x^*, f(x) too was previously
>> evaluated at x=x^*? And in case f(x) was evaluated multiple
>> times before evaluating df(x), is it true that the last x at
>> which f(x) was evaluated (before computing df(x=x^*))
>> was x=x^*?
>> If these assumptions holds (as it seems from preliminary
>> tests on NLP using ralg), the extra code to take advantage
>> of this fact is extremely simple.
>> Best,
>> Emanuele
>> P.S.: if the previous assumptions are false in general, I'd
>> like to know it they are true at least for the NLP case.
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user@scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-July/017565.html","timestamp":"2014-04-18T06:33:32Z","content_type":null,"content_length":"5919","record_id":"<urn:uuid:5feda12e-30c7-4d29-b7cf-c9e0b984c3f2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
November 4th 2008, 05:18 PM #1
Mar 2008
I was wondering if someone can check my work on these 2 problems about integrals.
A) the integrals from -2 to 3 (25)^2x+1 dx
the answer i got was [25^(7) / 2ln(25)] / [25^(-1) /2ln(25)]
B) the ingtegral 7x 6^(1-2x^2) dx
the answer i got was (-7/4) [6^1-x^2] / [4ln(6)] + C
thanks for the help
Use more brackets to make the integrand unabiguous.
Why have you not simplified this? It looks like it simplifies to 25^8 to me.
Your integrand is bounded above by 25^7 and is over an interval of length 5 so an upper bound on the integral is 5x25^7<25^8.
This integral look to be ~=25^6.4 to me.
November 5th 2008, 05:23 AM #2
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/calculus/57624-checkers.html","timestamp":"2014-04-16T10:47:02Z","content_type":null,"content_length":"29699","record_id":"<urn:uuid:c35cde61-3d3c-4f66-a8b8-5c13a6067dff>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
isomorphic groups
So we want to show that every element of [tex]\mathbb{Z}_3 \times \mathbb{Z}_7[/tex] is of the form [tex]n \cdot ( [1]_3, [1]_7 )[/tex]. So for [tex]x, y \in \mathbb{Z}[/tex], we want [tex]( [x]_3,
[y]_7 ) = n \cdot ( [1]_3, [1]_7 ) = ( [n]_3, [n]_7 )[/tex]. So we need to find a number n so that [tex]x \equiv n \mod 3[/tex] and [tex]y \equiv n \mod 7[/tex]. The
Chinese Remainder Theorem
is your friend.
|
{"url":"http://www.physicsforums.com/showthread.php?p=2470334","timestamp":"2014-04-16T04:37:23Z","content_type":null,"content_length":"31621","record_id":"<urn:uuid:6afc24a7-774e-40aa-bbcb-4e23fce8d3aa>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A. Normal modes in nonequilibrium configurations
B. Partial Hessian vibrational analysis
C. The mobile block Hessian approach
D. Discussion: PHVA versus MBH
A. PHVA and MBH applied to the equilibrium structure
B. PHVA and MBH applied to partially optimized structures
V. APPLICATION TO DI--OCTYL-ETHER
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/126/22/10.1063/1.2737444","timestamp":"2014-04-16T07:56:11Z","content_type":null,"content_length":"90210","record_id":"<urn:uuid:74e74c76-5149-43d5-b023-5d552f7128bc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The math limerick.
A real nerd can combine love of math and poetry, like so:
{(12+144+20+3(4)^0.5)/7}+5(11) = 81 + 0
It’s a true equation. And, it’s a limerick. Read it out loud and you’ll see:
A dozen, a gross, and a score
Plus three times the square-root of four,
Divided by seven,
Plus five times eleven,
Is nine squared and not a bit more.
(Actually, since it’s not dirty, this might not officially qualify as a limerick.)
1. #1 Emily September 6, 2006
How about this other one (an old one, but still my fav):
\int_(1)^(3(1/3))(z^2 dz) cos(3pi/9) = ln (e^(1/3))
The integral of z square dz
from one to the cube root of three
times the cosine
of three pi over nine
equals log of the cube root of e.
2. #2 chezjake September 6, 2006
It’s not dirty, but it is gross; so it surely qualifies.
3. #3 Scott Simmons September 6, 2006
We’ll promise not to sue, if you promise to never, ever, do this again …
4. #4 Chad Orzel September 7, 2006
The integral of z square dz
from one to the cube root of three
times the cosine
of three pi over nine
equals log of the cube root of e.
Plus a constant.
5. #5 beajerry September 7, 2006
6. #6 Emily September 7, 2006
Plus a constant.
Nope — it’s a definite integral
7. #7 Madame September 9, 2006
Madame DeFarge
8. #8 geekwraith September 10, 2006
Oh, it officially qualifies as a limerick, no question. Check out the Omnificent English Dictionary in Limerick form (its acronym gives rise to the oedilf-dot-com url. You ought to figure out
what word your lim could define and submit it. :oD
Here are some more math lims, just to give you an idea (no, I’m not the author — at least, not of these :oP ):
The relation where p exceeds b
Implies b’s never greater than p
(Unlike j = k,
Which means k = j),
So it’s antisymmetric, you see.
Using step-by-step math operations,
It performs with exact calculations.
An algorithm’s job
Is to work out a “prob”
With repeated precise computations.
And of course, they do try to sneak a little humor in:
If a matrix derives all its actors
From its parent’s square matrix cofactors,
It’s an adjoint. This knowledge
Was useful in college;
When dating, such facts are detractors.
And this one… well, I tip my hat to the guy who came up with it:
Now I note a verse rendering pi,
Within which the words strictly high-
light, adeptly encrypted,
How to get scripted
This number in digits. Just try!
(Author’s Note: This verse can be decrypted to give the value of π to 24 decimal places. Simply count the number of letters in each word and you will get 3.141592653589793238462643.)
… I love limericks. I’ve contributed upwards of 30 to the aforementioned “Limerictionary” myself, but I’m not posting any of them here. (Some of them are geeky, but not math-related. :oP)
9. #9 Jonathan Vos Post March 4, 2007
2 Biolimerix
Jonathan Vos Post
Some creatures attempt the invisible
we find the chameleon risible
one spots one at times
the way imperfect rhymes
in a poem stand out individual
Though the shell of a poem be bony
the sea-otter, he takes a stone, he
floats to dinner, dressed furrily,
cracks it open, then thoroughly,
eats the meat of the sweet abalone.
19 Nov 1978
|
{"url":"http://scienceblogs.com/ethicsandscience/2006/09/06/the-math-limerick/","timestamp":"2014-04-18T00:16:58Z","content_type":null,"content_length":"61884","record_id":"<urn:uuid:3e867ddd-08cf-4505-a95e-244d31089db9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
optimization problem (duality etc)
March 20th 2007, 09:30 PM #1
Junior Member
Dec 2006
optimization problem (duality etc)
Prove that if the problem
max cT x
s.t. Ax = b
x >= 0
has a finite optimal solution, then the new problem
max cT x
s.t. Ax = bb
x >= 0
cannot be unbounded for any choice of the vector bb.
any1... this is takin me forever if sum1 can give me a clear explanation if they understand it please
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/advanced-algebra/12802-optimization-problem-duality-etc.html","timestamp":"2014-04-16T10:50:37Z","content_type":null,"content_length":"28882","record_id":"<urn:uuid:fee15934-2706-45b0-8c7f-1633885c4265>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Riemann Integrability
May 15th 2008, 01:03 PM #1
Nov 2007
Riemann Integrability
I have no idea how to prove this. Can someone help?
Given a real number x, denote by [x] the largest integer less than or equal to x. Prove that the function defined
$f(x)$ = $\sum_{n = 0}^{\infty}\frac{nx-[nx]}{2^n}$ is Riemann integrable on any interval [a,b]. Then compute
$\int_0^1 f(x)dx$
I have no idea how to prove this. Can someone help?
Given a real number x, denote by [x] the largest integer less than or equal to x. Prove that the function defined
$f(x)$ = $\sum_{n = 0}^{\infty}\frac{nx-[nx]}{2^n}$ is Riemann integrable on any interval [a,b]. Then compute
$\int_0^1 f(x)dx$
for n = 0 the term in your series is 0. so i'll assume that n > 0. let $f_n(x)=\frac{nx - [nx]}{2^n}, \ n \geq 1.$
each $f_n$ is integrable on any interval [a, b], because it's continuous almost everywhere on the
interval. let $\sigma_n(x)=\sum_{j=1}^n f_j(x).$ so $\sigma_n$ is integrable on [a, b] for all n.
since $0 \leq f_n(x) \leq \frac{1}{2^n}, \ \forall x,$ by Weierstrass test, $\{\sigma_n \}$ is uniformly convergent. so $f(x)=\lim_{n\to\infty} \sigma_n(x)$
is integrable on [a, b] and $\int_a^b f(x) dx = \lim_{n\to\infty} \int_a^b \sigma_n(x) dx.$ if a = 0 and b = 1, then this gives us:
$\int_0^1 f(x) dx = \lim_{n\to\infty} \int_0^1 \sigma_n(x) dx = \lim_{n\to\infty} \int_0^1 \sum_{j=1}^n f_j(x) \ dx=\lim_{n\to\infty} \sum_{j=1}^n \int_0^1 f_j(x) dx$
$=\lim_{n\to\infty} \sum_{j=1}^n \int_0^1 \frac{jx -[jx]}{2^j}dx=\lim_{n\to\infty} \sum_{j=1}^n \frac{1}{2^{j+1}}=\sum_{j=1}^{\infty} \frac{1}{2^{j+1}}=\frac{1}{2}. \ \ \ \square$
I have no idea how to prove this. Can someone help?
Given a real number x, denote by [x] the largest integer less than or equal to x. Prove that the function defined
$f(x)$ = $\sum_{n = 0}^{\infty}\frac{nx-[nx]}{2^n}$ is Riemann integrable on any interval [a,b]. Then compute
$\int_0^1 f(x)dx$
I will do the special case $[a,b]=[0,1]$. Define a sequence of functions $f_n:[0,1]\mapsto \mathbb{R}$ as $f_n(x) = \tfrac{1}{2^n}(nx-[nx])$. Notice that $|f_n(x)| \leq \tfrac{1}{2^n}$. Since $\
sum _{n=0}^{\infty}\tfrac{1}{2^n} < \infty$ it follows by the Weierstrass test that the series of functions $\sum_{n=0}^{\infty} f_n(x)$ converges uniformly to a function $f(x)$. Each function
$f_n(x)$ is discontinous at $\tfrac{k}{n}$ for $1\leq k<n$. This implies that $f$ is integrable and $\int_0^1 f(x) dx = \sum_{n=0}^{\infty}\int_0^1 f_n(x) dx$. It remains to compute $\int_0^1 f_n
(x) dx$. The graph of $f_n$ consists of a $n$ congruent right-angles triangles each having height $\tfrac{1}{2^n}$ and with base width $1/n$. Thus, $\int_0^1 f_n(x) dx = \frac{1}{2} n\cdot \frac
{1}{n} \cdot \frac{1}{2^n} = \frac{1}{2^{n+1}}$. I might have missed maybe $\pm 1$ on the exponent, but you get the idea. Now this is a geometric series which sums easily.
EDIT: Got beaten in the response by a fellow algebraist.
Last edited by ThePerfectHacker; May 15th 2008 at 03:43 PM.
May 15th 2008, 03:11 PM #2
MHF Contributor
May 2008
May 15th 2008, 03:22 PM #3
Global Moderator
Nov 2005
New York City
|
{"url":"http://mathhelpforum.com/calculus/38442-riemann-integrability.html","timestamp":"2014-04-19T11:03:33Z","content_type":null,"content_length":"46921","record_id":"<urn:uuid:edc76a31-8df2-4486-85e8-06d708180f14>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help Me Understand The (NEW TAKE OF THE) Single Photon Double-Slit Experiment
BuckG wrote:
Dmytry wrote:
MonaLisaOverdrive wrote:
People get confused because they remain bound to the idea that a photon is a single discrete thing, like a little ball, and if you shoot a ball at a wall with two slits, it can't go through both.
But isnt a photon a particle as well as a wave? Doesn't that mean that it is a tiny…albeit not necessarily a ball…fundamental element?
Basically, light really is a wave. We known it is a wave since something like 1670, being able to calculate how much light hits any object by assuming light is a wave since perhaps 1800 or earlier.
The double slit experiment (performed with a lot of light) is
. We ruled out the intuitive model of light as some itty bitty things that bounce around,
and that holds
But sometime in beginning of 20th century, it was discovered that light always interacting with stuff in discrete punches. The 'photon' is this discrete punch. Not some tiny itty bitty thing that
bounces around. The light intensity that the old wave formulas would give, turned out to correctly give probability of the photon hitting there ; the average photons detected per time. The wave
theory was never overturned; quite to the contrary.
In quantum mechanics, every thing is a wave - photons, electrons, nuclei, et cetera - the wave whose intensity of waving gives you the probability. When that wave is absorbed by some sensor that
detects 'individual photons', the absorbed probability is the probability of this sensor clicking. It must be noted that not the wave itself, but it's intensity gives the probability. The wave
'waves' between positive and negative (well actually between field pointing up and down or left and right, you get the idea), and two waves of equal nonzero intensity can add up to the intensity of
zero (if they are waving oppositely to each other), giving the probability of zero.
Then it had to be ruled out that somehow the light would be little tiny pieces of the wave, that each by themselves don't act like a wave, and that was ruled out by reducing the intensity so that the
detector clicks rarely enough, yet the interference pattern still exists (not that anything else should've been expected as nobody even had a sensible theory where a huge number of bouncing particles
would produce interference pattern).
The logic trap here that is incorrect is that you have to pick one (particle or wave).
You're applying macroscopic concepts to a microscopic world, and the concept of a 'wave' or a 'particle' are the opposite sides of the same coin. Particles at that level (light, electrons, and even
protons/neutrons to a degree), have both properties and are actually some mish/mash inbetween state our macroscopic minds can't fully understand.
Stating any particle is exclusively one or the other is technically false. The true answer is
either or both.
It can't be stressed enough tho how much closer is the wave aspect (as in everyday wave in a material) to the underlying math than the 'particle' as in everyday rubber ball. It just is - we do have
everyday phenomena for the wave aspect of it, but we do not have everyday phenomena for 'particle like' aspect of it.
This is why a lot of theoreticians just stick to doing the math, the interpretations tend to be biased and lead to confusion and incorrect conclusions. Math and raw data trump musings.
Of course. The math is wavefunctions though, for the huge part, which is aptly described as wavefunction - and also the whole quantization thing that is really very badly conveyed by the word
'particle'. Why? Because we don't have any everyday phenomena even remotely similar to this aspect of QM. So for the purpose of layman explanation - you CAN use waves as metaphor but you CAN'T use
particles unless your goal is to confuse/mystify (which, sadly, seem to be the goal of most of the popularization books)
And if you follow math all the way, what you have on the detector is all the atoms on it being in mix of the hit-by-photon and not-hit-by-photon states (proportionally to the intensity), there wasn't
even an impact on any defined position - not resembling macroscopic particle behaviour at all. You follow it further all the way into observer, you end up with MWI. That's what you get 'ignoring'
particle aspect, and it (MWI) is entirely valid alternative way to see things, it gives valid predictions. The shut up and calculate leads to MWI when you keep calculating (instead of stopping half
way and saying 'wavefunction collapse' and squaring the complex amplitudes). You get particle-like behaviour derived from the wave behaviour.
redleader wrote:
MonaLisaOverdrive wrote:
I have no idea what a field is...
The electromagnetic field is what light is.
MonaLisaOverdrive wrote:
but does the size of the field have some relationship with the size of an individual photon?
As I said before, the photon is essentially an attribute of the EM field. That is, if you have a field of light with a given color, it always contains X number of photons worth of energy where X is a
non-negative integer.
MonaLisaOverdrive wrote:
Or, my original question, are there different sizes of photons?
Well in so much as you cannot put a field into a box below a given size, and photon are essentially a measure of how much energy a field has, I would say yes. However from your phrasing it really
sounds like you've sort of ignored the half dozen posts above explaining that light is an EM field with X photons worth of energy...
...Would that make sense?
Before I go ahead, it may be helpful to put in a reminder that I am as far away from a physicist as a dog is from a cat. Worse...than general relativity is from quantum mechanics. So what may seem
like willful ignorance is probably complete confusion on my part. See Apteris's excellent earlier explanation of two blind men attempting to describe a photon.
That said, I'm going to brave another question before I, instead, ask for reading material that may help me conceptualize the myriad of discussions in this thread: Given what you said about a photon
being an attribute of a field, and that for every EM field of a color, there are X photons, what type of field would be used for the single photon, double-slit experiment?
Now, for the request:
Instead of driving the Ars physicists crazy in this thread with questions about their answers to previous questions, I'll ask for suggestions for reading material about what's been covered in this
thread. By "reading material" I mean books that would engage the intelligent layperson without diluting the material overmuch.
I've read The Elegant Universe up to the point where Greene starts in on string theory (and given the post in the Lounge about string theory being nearly dead I probably won't read much more) and
I've started in on From Eternity To Here which is mostly about trying to define time but also includes some quantum physics.
I've put Three Roads To Quantum Gravity in my cart based on a different Observatory post, but I'll hold off on pulling the trigger if someone suggests otherwise. Or until the one day delivery time
limit is getting close.
Dmytry wrote:
It can't be stressed enough tho how much closer is the wave aspect (as in everyday wave in a material) to the underlying math than the 'particle' as in everyday rubber ball. It just is - we do have
everyday phenomena for the wave aspect of it, but we do not have everyday phenomena for 'particle like' aspect of it.
Except that everyday waves in materials require a media and dissipate, and light can travel in a vacuum in a straight line. Depending on the experiment, you can also get light to behave very
in-elastically. So it ... depends on the conditions very much. As such this whole wave vs particle debate has never completely been resolved and the concept of duality is the currently accepted
Of course. The math is wavefunctions though, for the huge part, which is aptly described as wavefunction - and also the whole quantization thing that is really very badly conveyed by the word
'particle'. Why? Because we don't have any everyday phenomena even remotely similar to this aspect of QM. So for the purpose of layman explanation - you CAN use waves as metaphor but you CAN'T use
particles unless your goal is to confuse/mystify (which, sadly, seem to be the goal of most of the popularization books)
You can use wavefunctions to describe macroscopic objects, does this make them not be "particle" like?
Is a buckyball not a particle? You can do the same double slit experminent with those, why is light "more wave like" in this context? What's the distinction?
You're drawing very arbitrary lines in the sand, most likely due to the professor/teacher bias you were exposed to.
Light is not strictly a particle, for sure, but it is not strictly a wave either. Both are VERY wrong conclusions, and down playing one of them is also VERY wrong.
The end result is that the macroscopic analogies of wave and particle only go so far, and do not completely resolve our understanding of exactly what the hell is going on at that level. The true
answer is that light and other particles at that scale are some sort of bastard child of our macroscopic understanding of waves and particles.
And if you follow math all the way, what you have on the detector is all the atoms on it being in mix of the hit-by-photon and not-hit-by-photon states (proportionally to the intensity), there wasn't
even an impact on any defined position - not resembling macroscopic particle behaviour at all. You follow it further all the way into observer, you end up with MWI. That's what you get 'ignoring'
particle aspect, and it (MWI) is entirely valid alternative way to see things, it gives valid predictions. The shut up and calculate leads to MWI when you keep calculating (instead of stopping half
way and saying 'wavefunction collapse' and squaring the complex amplitudes). You get particle-like behaviour derived from the wave behaviour.
You're limiting defining what light is to the results of a single type of experiment. Other experiments bring out the particle like aspect of light, and while wavefunctions can be used to resolve
this sort of phenomena, it's pretty convoluted and an inelastic treatment gets the same results with much simpler analysis. All matter has particle and wave behaviors, the issue is that in the
macroscopic world the wave behaviour is much less relevant due to the relative size of the wavelength vs the size of the particle. In the microscopic world however, you do not ignore the particle
aspect. You can downplay it, or ignore it for a specific experiment but that doesn't mean it's not there. It's just irrelevant in the context of a specific aspect of a study.
MonaLisaOverdrive wrote:
I've read The Elegant Universe up to the point where Greene starts in on string theory (and given the post in the Lounge about string theory being nearly dead I probably won't read much more) and
I've started in on From Eternity To Here which is mostly about trying to define time but also includes some quantum physics.
I've put Three Roads To Quantum Gravity in my cart based on a different Observatory post, but I'll hold off on pulling the trigger if someone suggests otherwise. Or until the one day delivery time
limit is getting close.
I don't know what Lounge post you are talking about, but the Elegant Universe is worth finishing. It's very well written and covers some topics that are useful to understand when discussing physics.
Greene also covers (briefly) some alternative theories as well, although he's foremost a string theorist.
His 2nd book 'Fabric of the Cosmos' is less string theory oriented, and shows a nice progression of thought from major physicist minds (Einstein, Mach, Planck, Witten & others) & ideas to get to our
current understanding of the universe and what answers we are still seeking.
/offtopic: Firefox dictionary apparently recognizes Planck...
Arbelac wrote:
MonaLisaOverdrive wrote:
I've read The Elegant Universe up to the point where Greene starts in on string theory (and given the post in the Lounge about string theory being nearly dead I probably won't read much more) and
I've started in on From Eternity To Here which is mostly about trying to define time but also includes some quantum physics.
I've put Three Roads To Quantum Gravity in my cart based on a different Observatory post, but I'll hold off on pulling the trigger if someone suggests otherwise. Or until the one day delivery time
limit is getting close.
I don't know what Lounge post you are talking about, but the Elegant Universe is worth finishing. It's very well written and covers some topics that are useful to understand when discussing physics.
Greene also covers (briefly) some alternative theories as well, although he's foremost a string theorist.
His 2nd book 'Fabric of the Cosmos' is less string theory oriented, and shows a nice progression of thought from major physicist minds (Einstein, Mach, Planck, Witten & others) & ideas to get to our
current understanding of the universe and what answers we are still seeking.
/offtopic: Firefox dictionary apparently recognizes Planck...
The first part of the book covers general and special relativity and quantum mechanics and I've read it...although I'll probably re-read it again in light of this discussion. Are you saying that the
rest of the book, which seems to cover string theory, is worth reading as well?
If I had to choose between Fabric of the Cosmos and Three Roads to Quantum Gravity which would you choose?
EDIT: Or In Search of Schrodinger's Cat
MonaLisaOverdrive wrote:
Before I go ahead, it may be helpful to put in a reminder that I am as far away from a physicist as a dog is from a cat. Worse...than general relativity is from quantum mechanics. So what may seem
like willful ignorance is probably complete confusion on my part. See Apteris's excellent earlier explanation of two blind men attempting to describe a photon.
To be clear if you're just curious about the double slit experiment, this is 99% classical optics. The only quantum thing about it is that at very low intensities the field is quantized, whereas in
classical physics there are no "steps" in energy and you can keep reducing the intensity forever. The only thing QM really changes is that it says that theres some minimum amount of energy you can
have before you have none at all. Classical optics says (incorrectly) you can keep lowering the intensity without quite hitting zero if you want. Both give identical diffraction patterns. The QM
version is only interesting because it uses much more complicated concepts to arrive at the same conclusion.
MonaLisaOverdrive wrote:
That said, I'm going to brave another question before I, instead, ask for reading material that may help me conceptualize the myriad of discussions in this thread: Given what you said about a photon
being an attribute of a field, and that for every EM field of a color, there are X photons, what type of field would be used for the single photon, double-slit experiment?
The type of field is an electromagnetic field, that is, light.
MonaLisaOverdrive wrote:
Instead of driving the Ars physicists crazy
I'm not a physicist, I'm an electrical engineer.
MonaLisaOverdrive wrote:
By "reading material" I mean books that would engage the intelligent layperson without diluting the material overmuch.
I think you want to know more about QM based on the books you've mentioned, but pretty much everything you've asked about is just basic optics. If you really want to understand the double slit
experiment, then you need to understand diffraction and introductory optics. Thats not going to teach you anything about QM though.
MonaLisaOverdrive wrote:
The first part of the book covers general and special relativity and quantum mechanics and I've read it...although I'll probably re-read it again in light of this discussion. Are you saying that the
rest of the book, which seems to cover string theory, is worth reading as well?
If I had to choose between Fabric of the Cosmos and Three Roads to Quantum Gravity which would you choose?
EDIT: Or In Search of Schrodinger's Cat
Yes, the whole Elegant Universe book is worth reading. Understanding some of the later concepts used in string theory (correspondence, manifolds, transformations) have applications in other areas of
physics and will help with making those processes concrete in your mind.
I would probably read Fabric of the Cosmos, as it covers more than Three Roads. Lee Smolin is an excellent writer as well, but Greene's second book covers more topics. Three Roads is more of a
comparison book to Elegant Universe, in that they both talk significantly about quantum theories, whereas Fabric covers broader topics like cosmology, inflation, time, and entropy.
I haven't read In Search of Schrodinger's Cat, so I can't really offer an opinion on it.
BuckG wrote:
Dmytry wrote:
It can't be stressed enough tho how much closer is the wave aspect (as in everyday wave in a material) to the underlying math than the 'particle' as in everyday rubber ball. It just is - we do have
everyday phenomena for the wave aspect of it, but we do not have everyday phenomena for 'particle like' aspect of it.
Except that everyday waves in materials require a media and dissipate, and light can travel in a vacuum in a straight line.
Since diffraction exists, light cannot travel in a vacuum in a straight line. I also don't really agree that theres much difference between mechanical waves and EM waves. They're both described by
essentially the same processes. In fact they're so similar that people doing acoustic imaging (ultrasound/sonar) generally just use optical/radar textbooks.
BuckG wrote:
As such this whole wave vs particle debate has never completely been resolved and the concept of duality is the currently accepted theory.
BuckG wrote:
why is light "more wave like" in this context? What's the distinction?
The wavelength is just a little bit different between them. Light has a wavelength comparable to devices we are interested in fabricating, therefore its wave properties are overwhelmingly more
interesting in 99.9999% of applications. Hell many of the most useful applications involve wavelengths large compared even to individual people. In comparison, the wavelengths of atoms are much less
accessible to us. Thats pretty fucking fundamental difference.
BuckG wrote:
You're drawing very arbitrary lines in the sand, most likely due to the professor/teacher bias you were exposed to.
counterpoint: lol
BuckG wrote:
Light is not strictly a particle, for sure, but it is not strictly a wave either. Both are VERY wrong conclusions, and down playing one of them is also VERY wrong.
What exactly do you teach again? I'm guessing chemistry.
BuckG wrote:
Other experiments bring out the particle like aspect of light, and while wavefunctions can be used to resolve this sort of phenomena, it's pretty convoluted and an inelastic treatment gets the same
results with much simpler analysis.
I'm curious which experiments you're thinking of specifically?
BuckG wrote:
And if you follow math all the way, what you have on the detector is all the atoms on it being in mix of the hit-by-photon and not-hit-by-photon states (proportionally to the intensity), there wasn't
even an impact on any defined position - not resembling macroscopic particle behaviour at all. You follow it further all the way into observer, you end up with MWI. That's what you get 'ignoring'
particle aspect, and it (MWI) is entirely valid alternative way to see things, it gives valid predictions. The shut up and calculate leads to MWI when you keep calculating (instead of stopping half
way and saying 'wavefunction collapse' and squaring the complex amplitudes). You get particle-like behaviour derived from the wave behaviour.
You're limiting defining what light is to the results of a single type of experiment. Other experiments bring out the particle like aspect of light, and while wavefunctions can be used to resolve
this sort of phenomena, it's pretty convoluted and an inelastic treatment gets the same results with much simpler analysis. All matter has particle and wave behaviors, the issue is that in the
macroscopic world the wave behaviour is much less relevant due to the relative size of the wavelength vs the size of the particle. In the microscopic world however, you do not ignore the particle
aspect. You can downplay it, or ignore it for a specific experiment but that doesn't mean it's not there. It's just irrelevant in the context of a specific aspect of a study.
Read up on MWI. The wavefunction collapse never happens there. Not entirely sure what you even mean by the particle like behaviour. Yes the buckyball and macroscopic particles can behave very
'particle like' while obeying the quantum mechanics, but that's emergent phenomena of big systems, especially those with very many degrees of freedom (which end up decohering). The underlying
fundamental laws are very much wave like and lack anything even remotely resembling particle behaviour (I don't find quantization to resemble particle behaviour, it's just some weird stuff that we
have no words for). The particle behaviour emerges from those laws.
My current favourite example of very confusing verbiage:
http://en.wikipedia.org/wiki/Wheeler%27 ... experiment
According to the results of the double slit experiment, if experimenters do something to learn which slit the photon goes through, they change the outcome of the experiment and the behavior of the
photon. If the experimenters know which slit it goes through, the photon will behave as a particle. If they do not know which slit it goes through, the photon will behave as if it were a wave when it
is given an opportunity to interfere with itself. The double-slit experiment is meant to observe phenomena that indicate whether light has a particle nature or a wave nature. The fundamental lesson
of Wheeler's delayed choice experiment is that the result depends on whether the experiment is set up to detect waves or particles.
This is just bullshit. I know how this stuff is calculated, for god's sake, it's fairly basic optics, you never have some sort of complicated rules where you decide that with two telescopes the light
is a particle that goes through one slit and use some particle like equation here. When you have two telescopes in place of the screen, firstly, to resolve the slits the objectives of telescopes must
be larger than the interference fringe so you ain't having some sort of weird set-up where the telescopes are put in the dark spots on interference pattern yet detect the photons when the screen
would not. And secondarily the lens itself focusses the light because of how it wave-like interferes with itself after passing through the lens (or being reflected off mirror). And most importantly
you get correct result when you propagate the light as a wave all the way from source through both slits to the both telescopes, the same as you would when propagating it to the screen. Then, you
detect the photon only once (if at all 'cause it could've struck the plate), just as you detect it only once on the screen, and you can go omfg miracles, the photon gone through only one of the
holes, the photons, how do they know?
redleader wrote:
MonaLisaOverdrive wrote:
Before I go ahead, it may be helpful to put in a reminder that I am as far away from a physicist as a dog is from a cat. Worse...than general relativity is from quantum mechanics. So what may seem
like willful ignorance is probably complete confusion on my part. See Apteris's excellent earlier explanation of two blind men attempting to describe a photon.
To be clear if you're just curious about the double slit experiment, this is 99% classical optics. The only quantum thing about it is that at very low intensities the field is quantized, whereas in
classical physics there are no "steps" in energy and you can keep reducing the intensity forever. The only thing QM really changes is that it says that theres some minimum amount of energy you can
have before you have none at all. Classical optics says (incorrectly) you can keep lowering the intensity without quite hitting zero if you want. Both give identical diffraction patterns. The QM
version is only interesting because it uses much more complicated concepts to arrive at the same conclusion.
I read this twice while squinting and I *just* about understand it. My mental image of a field is clashing with the mental image of a photon that I'm trying to get rid of. Both of those are clashing
with the pictures I've seen of the double-slit experiment, that usually show light as a cone.
redleader wrote:
MonaLisaOverdrive wrote:
That said, I'm going to brave another question before I, instead, ask for reading material that may help me conceptualize the myriad of discussions in this thread: Given what you said about a photon
being an attribute of a field, and that for every EM field of a color, there are X photons, what type of field would be used for the single photon, double-slit experiment?
The type of field is an electromagnetic field, that is, light.
I'm making some progress. I see how stupid this question was.
At the risk of 20/20 hindsight, would this light even have a color?
Also, what you said:
redleader wrote:
I'm not a physicist, I'm an electrical engineer.
What I see:
redleader wrote:
I'm not a physicist, I'm a really smart person
My apologies for assuming you were a physicist.
redleader wrote:
MonaLisaOverdrive wrote:
By "reading material" I mean books that would engage the intelligent layperson without diluting the material overmuch.
I think you want to know more about QM based on the books you've mentioned, but pretty much everything you've asked about is just basic optics. If you really want to understand the double slit
experiment, then you need to understand diffraction and introductory optics. Thats not going to teach you anything about QM though.
Actually, what I'm trying to grasp is the lack of a unifying theory...i.e. why general relativity and quantum mechanics are not compatible and why it's so hard to find a theory that encompasses both.
Once I understand at least the basics of why it's so complicated I can stop thinking about the idea of one.
So yes. Books about that. I'll try to learn about the optics stuff on Wikipedia for now...
Arbelac wrote:
MonaLisaOverdrive wrote:
The first part of the book covers general and special relativity and quantum mechanics and I've read it...although I'll probably re-read it again in light of this discussion. Are you saying that the
rest of the book, which seems to cover string theory, is worth reading as well?
If I had to choose between Fabric of the Cosmos and Three Roads to Quantum Gravity which would you choose?
EDIT: Or In Search of Schrodinger's Cat
Yes, the whole Elegant Universe book is worth reading. Understanding some of the later concepts used in string theory (correspondence, manifolds, transformations) have applications in other areas of
physics and will help with making those processes concrete in your mind.
I would probably read Fabric of the Cosmos, as it covers more than Three Roads. Lee Smolin is an excellent writer as well, but Greene's second book covers more topics. Three Roads is more of a
comparison book to Elegant Universe, in that they both talk significantly about quantum theories, whereas Fabric covers broader topics like cosmology, inflation, time, and entropy.
Sweet, thanks! Off to buy a bunch of books...
MonaLisaOverdrive wrote:
Both of those are clashing with the pictures I've seen of the double-slit experiment, that usually show light as a cone.
In this case the cone edge is probably just the point where the field reach half intensity. Fields don't have corners, only smooth, gradual edges (since they can't have features smaller then a
wavelength), so its customary to define fields in terms of their full width-half max intensity or e^-1 intensity. But the field itself extends much further, just at a very low intensity as it
gradually dies away with distance.
MonaLisaOverdrive wrote:
I'm making some progress. I see how stupid this question was.
At the risk of 20/20 hindsight, would this light even have a color?
Yes, interference patterns like this generally only work at one wavelength. If theres multiple colors, each would generated a different fringe pattern and they'd average out. This is why you don't
see fringes all over shadows under normal sunlight. Incidentally this is why laser illumination generally looks awful, you get interference patterns everywhere and can't see anything under it all.
MonaLisaOverdrive wrote:
My apologies for assuming you were a physicist.
No thats fine.
MonaLisaOverdrive wrote:
Actually, what I'm trying to grasp is the lack of a unifying theory...i.e. why general relativity and quantum mechanics are not compatible and why it's so hard to find a theory that encompasses both.
Once I understand at least the basics of why it's so complicated I can stop thinking about the idea of one.
The basic answer is that one talks about small things, the other enormous things, and in between they don't really meet. So theres still something missing. Beyond that its just a lot of speculation
since no one really knows. You can read up on the various theories people have, but even then no one really knows if they're correct.
MonaLisaOverdrive wrote:
redleader wrote:
I think you want to know more about QM based on the books you've mentioned, but pretty much everything you've asked about is just basic optics. If you really want to understand the double slit
experiment, then you need to understand diffraction and introductory optics. Thats not going to teach you anything about QM though.
Actually, what I'm trying to grasp is the lack of a unifying theory...i.e. why general relativity and quantum mechanics are not compatible and why it's so hard to find a theory that encompasses both.
Once I understand at least the basics of why it's so complicated I can stop thinking about the idea of one.
Well its a question about mathematics, and very advanced mathematics at that, i mean, VERY advanced... I don't think the language is suitable for describing this mathematical challenge with any
fidelity. edit: and then there's indeed the experimental challenge, we just don't know how gravity works between tiny masses.
redleader wrote:
BuckG wrote:
I guess I may have overstated my point regarding, but I think that there still is a lot of truth in what I'm saying. I'm basing my opinion on my experience as a student asking a more senior student
about physics. I knew two more senior students, in my opinion, one smarter than the other, and would sometimes ask them about physics.
Sometimes it was more convenient for me to ask the (in my opinion) less smart student first about something. He actually had received a better physics education of the two. He'd hem and haw and
namedrop terminology and advanced theories and eventually when pushed, he'd mumble something like "this can't be explained without math or without the benefit of being educated in subject X."
Later, I'd ask the second guy, and he would bring it down and give an intuitive picture of the problem and give an answer that way without the need for the namedrops. Sometimes, but not very often,
the second student would admit that he didn't know the subject well enough to be able to give a good answer, but he always attributed it to his lack of intuition of the subject.
The person who really knows what he is doing can explain it in the most plain language as possible. When you only know how to access something by turning the crank on a complicated formalism, you
really don't understand it that well.
I can write software that would paint a pretty picture of whats going on in double slit experiment or any variation thereof. I actually think of stuff very visually, but it is very difficult to
convey Mona Lisa by words, right?
To convey what I visualize in my head, I either need to draw it by hand which is very tedious especially for the animation or for a three dimensional thing, or I need to use the equations, which I
can either give to the listener if he/she knows math and can visualize it, or draw them with the computer otherwise (which is frankly annoying, especially if you can see it clearly in your head and
don't need the computer to know what it looks like so you get pretty much no 'wow' factor for yourself, programming that stuff is then not very interesting for you). Describing that stuff in words
often evokes incorrect imagery, which is nonetheless vivid enough to go 'aha, i understand it!'.
I also have a question about single-photon double slit experiment. When learning about this in an introductory class, the professor showed us a video of the photographic plate used as a detector and
the different spots flashing on it and building up a cos^2 distribution over time. Presumably each flashing spot is a photon arriving and hitting the plate.
My question is, if a photon is just a quantum of energy in an electromagnetic field mode (in free space, one way to organize the modes is into plane waves, which are distributed over space), why does
each photon arrival correspond to a single little flashing spot and not an illumination of the entire plate with the cos^2 pattern? I would guess that instead of little dots appearing, the energy
absorbed by the plate as a function of space on the plate would correspond to the cos^2 distribution, but still the total energy absorbed by the plate as a function of time would be like a staircase,
like as if discrete bundles of energy hit the plate.
What mistake am I making here or what thing in the experiment am I overlooking?
silence_kit wrote:
I also have a question about single-photon double slit experiment. When learning about this in an introductory class, the professor showed us a video of the photographic plate used as a detector and
the different spots flashing on it and building up a cos^2 distribution over time. Presumably each flashing spot is a photon arriving and hitting the plate.
My question is, if a photon is just a quantum of energy in an electromagnetic field mode (in free space, one way to organize the modes is into plane waves, which are distributed over space), why does
each photon arrival correspond to a single little flashing spot and not an illumination of the entire plate with the cos^2 pattern? I would guess that instead of little dots appearing, the energy
absorbed by the plate as a function of space on the plate would correspond to the cos^2 distribution, but still the total energy absorbed by the plate as a function of time would be like a staircase,
like as if discrete bundles of energy hit the plate.
What mistake am I making here or what thing in the experiment am I overlooking?
That's the quantization thing. It hits everywhere, yet we see 1 dot somewhere, it's probability being given by that cos^2 . There's the many-worlds-interpretation explanation:
It hits everywhere and puts every point on plate into mixture of states where it is hit by photon in each point where photon could hit, the amplitude of each state being given by the amplitude of
photon at that spot. These states then evolve - the warm photographic plate consists of many atoms bumping into each other, and soon the states become orthogonal, i.e. non interfering (decoherence).
This decoherence propagates to the observer (usually through environment, but also when the observer looks at the plate), dividing observer up into zillion different non interacting observers. Each
of the observers perceives photon hitting in a single point.
There's the Copenhagen interpretation non-explanation:
When the photon is measured, wavefunction collapses into a single point on the wave, the probability of a point being chosen is given by the squared complex amplitude.
Let me explain my thought process because what I'm trying to ask may not be very clear.
I picture a single photon as a plane wave normal to the double-slits and what comes out of the slits is a photon's worth of energy distributed across the plane waves with the required k in order to
achieve the cos^2 distribution (which is a little fishy, I think--this means that if you could spatially filter out plane waves with a certain k, you could measure an energy less than a photon's
I always view it as it propagates as a wave and interacts as a particle.
Wudan Master wrote:
I always view it as it propagates as a wave and interacts as a particle.
That's a bit problematic because you can have it hit metal and become an excitation on surface of the metal ( http://en.wikipedia.org/wiki/Surface_plasmon ), still coherent, and still behaving as if
it did collide with metal everywhere.
edit: or less exotically you can do double slit experiment in water or inside glass. The photon is constantly interacting with molecules (and is slowed down), yet you still get interference pattern.
It is interacting like a wave.
silence_kit wrote:
I picture a single photon as a plane wave normal to the double-slits and what comes out of the slits is a photon's worth of energy distributed across the plane waves with the required k in order to
achieve the cos^2 distribution
Its probably worth pointing out at this point that although classical optics does not quantize the EM field, it none the less predicts that the photographic plate should appear pixel by pixel. This
is because the charge carriers (e.g. electrons or ions) in the plate are discrete. So even if the field is continuously distributed, only discrete points in the plate (atoms and so forth) can accept
energy and thus be transformed into light or dark spots.
So to answer you question, one very easy to understand reason you can't distribute a photons worth of energy across the entire plate uniformly is that the plate is made of atoms that individually
must absorb energy, and they will do so discretely.
OK i gotta say, the people who view the wavefunction of photons as essentially a mathematical model which adheres closest to a wave, the general argument is convincing. And it all seems logical and
plausible enough.
Problem is. Molecules. Buckyballs + Occam's razor.
Buckyballs are particles.
So we know that an object which is definately a particle still moves in a wavefunction. Ergo making the least assumptions it probably (not definately, but probably) means that photons are also
particles which just move/exist in wavefunctions.
We also have the photoelectric effect. Which shows that light is a particle.
None of that means that photons are a particle... but I think that we need something other then the existence of wavefunctions (ie light moving in waves) to say that they're not as other objects
which we know are particles can also move in the same way.
troymclure wrote:
OK i gotta say, the people who view the wavefunction of photons as essentially a mathematical model which adheres closest to a wave, the general argument is convincing. And it all seems logical and
plausible enough.
Problem is. Molecules. Buckyballs + Occam's razor.
Buckyballs are particles.
While they're particles in the same general sense as photons, remember that the wavelength here is really tiny. That link said 2.5 picometers in that buckyball experiment. Thats much smaller then the
lattice spacing of carbon in a diamond. So really we're talking about a wavelength thats smaller then what most people think of as the particle. I'm not a physicist, but that doesn't sound
unreasonable to me. I expect object-like things to have a wavelength thats negligible in ordinary circumstances.
troymclure wrote:
OK i gotta say, the people who view the wavefunction of photons as essentially a mathematical model which adheres closest to a wave, the general argument is convincing. And it all seems logical and
plausible enough.
Problem is. Molecules. Buckyballs + Occam's razor.
Buckyballs are particles.
Emergent behaviour of underlying non particle like math can be like a particle. As well as the limit case when the wavelength is extremely short.
For the photoelectric effect, that's quantization, mixing up the quantization with particle-likeness just confuses everyone.
edit: there's example, this completely awful and entirely incorrect visualization when it comes to showing electrons:
http://www.google.com/url?sa=t&rct=j&q= ... 5xxey5TZcA
the particle splits in two, goes through the slits, combines... then theres that stupid eye looking at one of the slit and suddenly the image on the screen became two sharp lines after the slits.
That's so terrible. No you don't get two sharp lines if you do this, you interact with electron to measure it, and that breaks coherence. But it still difracts. The one slit image not sharper than
two slit image, its like you blurred out the stripes on interference pattern. This is so horrible. A little wonder we aren't making any progress in science if we are showing the future Einsteins -
who are very visual people - such crap that doesn't make sense because it is just wrong.
But a buckypall is a particle.
That's what they look like under an electron microscope. How can they be a wave when we can see them existing solely as particles? And we can get them to interact with other particles solely as
emergent frigging behaviour and a limit case of a wave with wavelength being real tiny. It's a huge thing with giant number of degrees of freedom. Lets not confuse the emergent phenomena and the
approximate formulas for describing those phenomena (such as Newtonian physics) with underlying fundamental laws. edit: on macroscopic scale, do you say that there's Newtonian aspect to general
relativity? There isn't, the Newtonian physics derives from general relativity as approximation or limit case. Here too Newtonian physics with particles and stuff derives as an approximation or limit
when things are big and have a lot of degrees of freedom (and are warm). There's macroscopic quantum stuff like superfluids or superconductors but I digress.
troymclure wrote:
That's what they look like under an electron microscope. How can they be a wave when we can see them existing solely as particles?
To be clear, that image is formed by firing electrons (which are waves) at the electrons in that molecule (which are also waves) and then measuring the extent to which the beam is attenuated at each
position. You're literally just seeing a map of the probability of encountering an electron (averaged over a huge number probing events) at each location.
also those can't be carbon buckyballs. they aren't 200nm big. It's some viruses i think. Or spores of some kind. Or some really weird nano stuff that isn't buckyballs. edit: ya its made of DNA pieces
http://www.news.cornell.edu/stories/Aug ... balls.html
Dmytry wrote:
emergent frigging behaviour and a limit case of a wave with wavelength being real tiny. It's a huge thing with giant number of degrees of freedom. Lets not confuse the emergent phenomena and the
approximate formulas for describing those phenomena (such as newtonian physics) with underlying fundamental laws.
Care to expound on that? My general point is that while quantm sized objects definately exist as waves. I think they also exist as particles. Like I said previously, I envisage a wave made out of
ghost(virtual-like) particles in much the same way that an ocean wave is made up of sea water particles. The emergent behaviour for me is when it becomes a particle. At that point, the various
sub-realities of the ghost particles merge and become a real particle.*
Is this your general interpretation as well? I should point out that I do consider the particle to be for all intents and purposes an actual particle. Akin to a grain of sand is. It's just that at
the quantm level it doesn't "exist" in the classical sense. And i'm starting to suspect that the classical sense is the emergent behaviour.
Though I think you're saying that things exist as waves but the smallest possible size of a wave is say in the shape of a buckyball(or photon etc). Which is fine, though seems to be a bit about
semantics. Wether or not we call that a particle or a wave seems to be a matter of words rather then realities.
While they're particles in the same general sense as photons, remember that the wavelength here is really tiny. That link said 2.5 picometers in that buckyball experiment. Thats much smaller then the
lattice spacing of carbon in a diamond. So really we're talking about a wavelength thats smaller then what most people think of as the particle. I'm not a physicist, but that doesn't sound
unreasonable to me. I expect object-like things to have a wavelength thats negligible in ordinary circumstances.
Agreed, and it's probably why macroscopic objects don't have detectable wavefunctions. Of course this does imply that something about the interaction of particles (quantm sized ones) causes
wavefunction collapse.
My main point though is that given that they do still have detectable wavefunctions but are still definately discrete particles it behooves us not too rule out the possibility that a photon is also a
discrete particle. It just has a much larger wavefunction.
Ie an objects size* as a particle is inversely correlated with it's wavefunction size.
*Actually I suspect it's not due to size instead it's due to something else. A buckyball consists of a large number of particles joined together and possibly, the probabilities are reduced because of
that. Which makes me wonder if it's possible to create a macroscopic object made up of particles which share enough probabilities that you could detect a wavefunction in the macroscopic object?
To be clear, that image is formed by firing electrons (which are waves) at the electrons in that molecule (which are also waves) and then measuring the extent to which the beam is attenuated at each
position. You're literally just seeing a map of the probability of encountering an electron (averaged over a huge number probing events) at each location.
Good point. And thanks wasn't aware of that. Still, I feel pretty confident saying that a buckyball is a particle* due to the fact that we can use them to help construct carbon nanotubes. Some
nanotubes have a buckball hemisphere on the ends of them and we can make macroscopic objects out of nanotubes. Which requires that they exist in one place.
*again wave or particle kinda /shrug. If it's a wave which has all the properties of a particle then we may as well call it one as I suspect there doesn't exist such a thing as a particle otherwise ^
^. Ie Quite possibly everything has a wavefunction so it seems likely that what we think of now as particles are just objects which have a currently collapsed wavefunction.
Dmytry wrote:
also those can't be carbon buckyballs. they aren't 200nm big. It's some viruses i think. Or spores of some kind. Or some really weird nano stuff that isn't buckyballs. edit: ya its made of DNA pieces
http://www.news.cornell.edu/stories/Aug ... balls.html
I thought that the 200nm size scale was for the line underneath the 200nm. Which makes the objects in that picture about 400nm wide as per that article.
Though I just googled for buckyball electron microscope images so no guarentees.
troymclure wrote:
Dmytry wrote:
also those can't be carbon buckyballs. they aren't 200nm big. It's some viruses i think. Or spores of some kind. Or some really weird nano stuff that isn't buckyballs. edit: ya its made of DNA pieces
http://www.news.cornell.edu/stories/Aug ... balls.html
I thought that the 200nm size scale was for the line underneath the 200nm. Which makes the objects in that picture about 400nm wide as per that article.
Though I just googled for buckyball electron microscope images so no guarentees.
i just roughly meant the order of magnitude. The carbon-carbon bond size would be like 0.15nm or so.
redleader wrote:
BuckG wrote:
Other experiments bring out the particle like aspect of light, and while wavefunctions can be used to resolve this sort of phenomena, it's pretty convoluted and an inelastic treatment gets the same
results with much simpler analysis.
I'm curious which experiments you're thinking of specifically?
I don't know what BuckG is thinking about but Compton scatter of gammas off of electrons is one that comes to mind for me.
There's only three people in the world who understand quantum mechanics.
Apart from Oppeneheimer and Einstein, who was the third?
I think we shouldn't lose sight of the fact that QM is in fact very very strange and counter intuitive, just saying it's down to particles being waves is a bit hand wavey in my humble opinion.
Considering the buckyballs example, it's hard to convince a lay person that this roughly sperical object made out of atoms with a definite mass is actually a wave. In the double slit experiment,
firing single photons and seeing interference, it doesn't seem controversial to describe this as a photon, behaving as a wave interfering with itself.
But I suggest this argument is a lot harder for laymen to accept in the case of buckyballs. A single molecule, fired at a double slit, and you get the interference pattern? Behaving as if it passed
through both slits and interfered with itself? Surely you have to pick which of the slits those sixty carbon atoms actually passed through? Or did the molecule split up? Did some carbon atoms go
through one slit and some go through the other slit? I'm pretty sure that's not the case. So what is the reality? Do the maths? I believe the maths just tells you how to calculate the probabilities
that the molecule passed through either of the slits, or took any other possible path - it doesn't solve the counterintuitive situation wherein a single molecule appears to pass through both slits
and interfere with itself.
MonaLisaOverdrive wrote:
Actually, what I'm trying to grasp is the lack of a unifying theory...i.e. why general relativity and quantum mechanics are not compatible and why it's so hard to find a theory that encompasses both.
Once I understand at least the basics of why it's so complicated I can stop thinking about the idea of one.
One problem is that no one has come up with a successful quantum theory of gravity.
Another problem is that even the most successful/accurate quantum theory (QED) has problems at small scales. A mathematical trick is needed to get answers.
Dmytry wrote:
That's the quantization thing. It hits everywhere, yet we see 1 dot somewhere, it's probability being given by that cos^2 . There's the many-worlds-interpretation explanation:
It hits everywhere and puts every point on plate into mixture of states where it is hit by photon in each point where photon could hit, the amplitude of each state being given by the amplitude of
photon at that spot. These states then evolve - the warm photographic plate consists of many atoms bumping into each other, and soon the states become orthogonal, i.e. non interfering (decoherence).
This decoherence propagates to the observer (usually through environment, but also when the observer looks at the plate), dividing observer up into zillion different non interacting observers. Each
of the observers perceives photon hitting in a single point.
There's the Copenhagen interpretation non-explanation:
When the photon is measured, wavefunction collapses into a single point on the wave, the probability of a point being chosen is given by the squared complex amplitude.
I'm not sure if I follow, but are you claiming that an EM mode field distribution is more correctly interpreted as a probability distribution for a photon? I feel like that's not quite right ...
checking the wiki page on the experiment just now, it really suggests against this kind of thinking.
Edit: reading your post more carefully, you may be saying the same thing as redleader, except that you are also trying to justify why a particular electronic 'blip' got excited by the photon and not
a weird superposition of 'blips'. Am I following you correctly?
Its probably worth pointing out at this point that although classical optics does not quantize the EM field, it none the less predicts that the photographic plate should appear pixel by pixel. This
is because the charge carriers (e.g. electrons or ions) in the plate are discrete. So even if the field is continuously distributed, only discrete points in the plate (atoms and so forth) can accept
energy and thus be transformed into light or dark spots.
Okay, the spot size isn't really the photon's "size"--if I am understanding you correctly, it is more like the spatial extent of the excited electron's wavefunction in the photographic plate. In
principle, it could be the size of the plate if the plate were a perfect crystal, right?
Am I wrong if I were to say that the spatial extent of the photon is the spatial extent of the classical EM field distribution?
Geck0 wrote:
There's only three people in the world who understand quantum mechanics.
Apart from Oppeneheimer and Einstein, who was the third?
I think we shouldn't lose sight of the fact that QM is in fact very very strange and counter intuitive, just saying it's down to particles being waves is a bit hand wavey in my humble opinion.
Considering the buckyballs example, it's hard to convince a lay person that this roughly sperical object made out of atoms with a definite mass is actually a wave. In the double slit experiment,
firing single photons and seeing interference, it doesn't seem controversial to describe this as a photon, behaving as a wave interfering with itself.
But I suggest this argument is a lot harder for laymen to accept in the case of buckyballs. A single molecule, fired at a double slit, and you get the interference pattern? Behaving as if it passed
through both slits and interfered with itself? Surely you have to pick which of the slits those sixty carbon atoms actually passed through? Or did the molecule split up? Did some carbon atoms go
through one slit and some go through the other slit? I'm pretty sure that's not the case. So what is the reality? Do the maths? I believe the maths just tells you how to calculate the probabilities
that the molecule passed through either of the slits, or took any other possible path - it doesn't solve the counterintuitive situation wherein a single molecule appears to pass through both slits
and interfere with itself.
its a whole ton easier to swallow if you also give some information about how tiny and close the slits have to be and how coherent the beam of buckyballs has to be, how far this crap has to go from
the slit until you start seeing the pattern, how cold it has to be so that buckyball is frozen vs vibrating, etc. The wave is really, really, really tiny. The wave packet's fuzziness is also often a
lot smaller than buckyball itself. You could imagine a fuzzied out buckyball with a tiny tiny wavy halo around it, like the jpeg compression artefacts, and it sorts of blurs out over time getting
fuzzier and fuzzier, edit: or better you can imagine fuzzied out buckyball whicn if you zoom in a lot is made of really really really tiny wave, much tinier than buckyball). The limit behaviour of
waves, when they are tiny, is very much like beam of particles. But when you tell this stuff abstractly without giving the sizes to imagine, people just imagine it wrong, they imagine a really fuzzy
wave, and then it is hard to swallow because its so quantitatively wrong it gets pretty much qualitatively wrong. Then you remedy the swallowing difficulty by saying that it is 'sometimes like a
particle', so that the person does not throw away the obviously wrong mental image you shared but stashes it for 'when it is like a wave'. Then a person imagines like a really fuzzy wave or a sharp
nonfuzzy buckyball, and can't understand how those reconcile at all. The understanding is absent, the mental image is totally incorrect but the person is confused enough to go like, okay that's some
weird stuff, the guy must be really smart, i gotta go.
(Then you know what's also hard to swallow? That buckyball would be all over the place wave-style but MWI would be incorrect.)
The popularizations are complete shit. http://www.youtube.com/watch?v=DfPeprQ7oGc is an example of total shit (when video gets to the electrons). The stuff is really hard to swallow when you start
claiming things that do not in fact happen, e.g. ball electron splitting in two near the slits, passing through slits, merging afterwards before hitting screen... some particles going through space
being observed as particles... Then when that idiotic eye looks at the slits - what you get? you interact with electron, it loses coherence - what you get behind slits? You don't get two lines behind
each slit like in the video! The electron will still diffract on the slits! You will get even fuzzier stuff on the screen, with the interference pattern fuzzied out by the interaction between
electron and measurement device! No wonder it is hard to swallow when 90% of what is claimed is bullshit that happens only in the vacuum of the head of whoever consults for this kind of animation.
That person - I don't care what is his/her credentials, could be PhD for all i care - does not know jack shit, he's visualizing his misconceptions, which aren't counter-intuitive, they are simply
wrong and make no sense whatsoever visually.
No wonder we aren't making any progress in physics. The little today's Einsteins are exposed to that kind of crap - with best intentions - and they are taught: this is counter-intuitive stuff, your
correct ability to visualize stuff (Einstein was big on visualization) is wrong.
okay there's the applet visualizing the underlying math correctly:
http://phet.colorado.edu/en/simulation/ ... terference
Switch to single particles, enable double slit, look at it toggling between the magnitude, real part, and imaginary part. That's what underlying math looks like. Looks a bit like a particle when you
look at the amplitude eh? That's the wave packet. Waving can travel in a packet, you know, you can make a wavepacket in a pool of water. If you make a big wave packet made of really tiny waves, the
wave packet will work more like a particle. (that being said the water waves are actually a lot more mathematically complicated than most of the wavy stuff in QM; consider simplified idealized
(That is not to say the applet is without it's faults. The switch between electron and neutron should make for much tinier wavelength, that is not immediately obvious. edit: wait, it just makes for
slower speed so that wavelength is same. Take note of this. The wavelength depends to speed and mass).
edit: also try adding potential barriers and do single slit of different width. The narrower the slit the fuzzier is the image. Not what you would expect at all from particles, eh?
Seriously, such applet gives what a zillion popularization books written by incompetents won't ever give. This is the popularization I like: shut the hell up and calculate, and visualize the
calculations (and if one can't calculate, one should just shut up rather than write popularization books).
UserJoe wrote:
MonaLisaOverdrive wrote:
Actually, what I'm trying to grasp is the lack of a unifying theory...i.e. why general relativity and quantum mechanics are not compatible and why it's so hard to find a theory that encompasses both.
Once I understand at least the basics of why it's so complicated I can stop thinking about the idea of one.
One problem is that no one has come up with a successful quantum theory of gravity.
Yes, but why? That's what I want to read about, and all of the math and physics that goes along with it.
As a layperson, I don't understand why the lack of a unifying theory is a problem. There are a collection of rules and formulas for calculating the behavior of large things, and there are rules and
formulas...albeit complicated formulas...for calculating the behavior of small things, which to me sounds okay but apparently it's not.
redleader wrote:
troymclure wrote:
That's what they look like under an electron microscope. How can they be a wave when we can see them existing solely as particles?
To be clear, that image is formed by firing electrons (which are waves) at the electrons in that molecule (which are also waves) and then measuring the extent to which the beam is attenuated at each
position. You're literally just seeing a map of the probability of encountering an electron (averaged over a huge number probing events) at each location.
Here I go, putting my foot in it again...
If it's a probability map, where are the images of the other probably locations?
EDIT: Holy crap Dmytry! That application is excellent!
MonaLisaOverdrive wrote:
UserJoe wrote:
MonaLisaOverdrive wrote:
Actually, what I'm trying to grasp is the lack of a unifying theory...i.e. why general relativity and quantum mechanics are not compatible and why it's so hard to find a theory that encompasses both.
Once I understand at least the basics of why it's so complicated I can stop thinking about the idea of one.
One problem is that no one has come up with a successful quantum theory of gravity.
Yes, but why? That's what I want to read about, and all of the math and physics that goes along with it.
As a layperson, I don't understand why the lack of a unifying theory is a problem. There are a collection of rules and formulas for calculating the behavior of large things, and there are rules and
formulas...albeit complicated formulas...for calculating the behavior of small things, which to me sounds okay but apparently it's not.
The rough explanation is the problem of infinities. When you use the equations for large things on small things, you get infinities in your answer. Same for the reverse. Hence, the theories are
incomplete and no one has successfully come up with a formalism that correctly predicts the behaviour we see while avoiding the infinity problem at all scales.
|
{"url":"http://arstechnica.com/civis/viewtopic.php?p=22334769","timestamp":"2014-04-20T22:42:27Z","content_type":null,"content_length":"152631","record_id":"<urn:uuid:14579eea-0483-4415-83c3-d3367d8b31f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using LINEST for non-linear curve fitting
A frequent question on internet forums everywhere is how to do a least squares fit of a non-linear trend line to a set of data. The most frequent answer is to plot the data on an XY (“scatter”)
chart, and then use the “Fit Trendline” option, with the “display equation on chart” box checked. The chart trendlines have the options of: Linear, Exponential, Logarithmic, Polynomial (up to order
6), and Power. There is also a “Moving Average” option, but this does not provide a trendline equation. The chart trendline solution is OK if what you want to do is display the trendline equation
on a chart, but if you want to use the numbers in some further analysis, or even just display them elsewhere in the spreadsheet, or copy them to another document, it is far from convenient.
Fortunately it is straightforward to get the trendline equations (and other statistics) for each of the chart trendline types using the LINEST worksheet function.
I have created a spreadsheet with examples of each trendline type, which may be downloaded here:
The functions used for linear and polynomial trendlines are shown in the screenshot below (click image for full size view):
Note that:
• The functions as displayed use named ranges (X_1 to X_3 and Y_1 to Y_3)
• The functions are entered as array functions to display all the return values; i.e. enter the function in a cell, select that cell and sufficient adjacent cells to display all the required
values, press F2, press Ctrl-Shift-Enter.
• Alternatively the INDEX function may be used to return specific values; e.g. to return the b value from the linear example use =INDEX(LINEST(Y_1, X_1),2)
• Higher order polynomial functions may be returned by simply adding to the list of powers in the curly brackets (but note that this is often not a good idea because of “over-fitting“)
Functions for exponential, power, and logarithmic trendlines are shown below:
In this case the process is not quite so straightforward, because in most cases one or both of the values returned by the function must be modified to give the values shown in the chart trend lines.
For these lines it is possible to use either the LINEST function, or the LOGEST function, but since LOGEST simply calls LINEST internally, and provides little if any extra convenience, it does not
seem to provide much value. In these examples note that:
• Equations are in the form: y = a.e^bx (exponential), y = a.x^b (power) or y = b.ln(x) + a (logarithmic). In each case in the examples the power factor (b) is shown in bold, and the constant term
(a) is shown in bold and italic.
• The LOGEST function returns an equation of the form y = a.b^x
• The LINEST function will return exactly the same values if entered as =EXP(LINEST(LN(Yrange), XRange)), and this line is equivalent to the y = a.e^bx line returned by the chart.
Update 27 Jan 2011:
Coincidentally, Chandoo at Pointy Haired Dilbert is also running a series on estimating trend lines in Excel, which is well worth a look at: Are You Trendy
54 Responses to Using LINEST for non-linear curve fitting
1. Pingback: Using Excel statistical functions for trend analysis. | Chandoo.org - Learn Microsoft Excel Online
2. I would really like to understand how microsoft’s trend line for a +4th order polynomial’s trend line always comes out smooth? What do they do in their algorithm that is different from the Givens
(Least Squares) method that almost any other curve fitting program can duplicate up to a 3rd order polynomial (well, the coefficients match at least), but after that MS Excel’s formula’s simply
don’t match any other results???? And the kicker is that if you plot their solution, the trend line ALSO does not come out the same? MS Excel has become a “standard” for regression analysis in
non-linear systems, so it would be really appreciated if they would provide insight into their algorithm’s.
3. Scott – can you give an example? With the same data I used for the curves up to cubic above (i.e. a circular quadrant with 11 points and radius 1), for a quartic I get:
-5.03771915 8.14914828 -4.68693650 0.64996956 0.99330532
with descending powers of x.
I get exactly the same coefficients from the chart trend line.
This is with Excel 2010, but I believe that the algorithm is unchanged since 2003. There are some acknowledged issues with earlier versions.
I’ll have a look with different software later, but that will probably have to wait to the weekend.
4. I’m referring to polynomial trend lines greater than 4th order. I’m trying to develop a program to curve fit signals with very steep sideband slopes, and when I use Excel, the skirts are well
adjusted to the curve (6th order polynomials in this example), but the trend line itself is incredibly smooth, and I have significant sign changes throughout the data, which should be reflected
in the curve fit as well, and simply isn’t. When I compare polynomial coefficients, they too are significantly skewed, its almost as if MS Excel smooths the data, or performs some other
optimization that I can not account for in my algorithm, however, if you take the trend line formula provided in Excel and plot it against the data in a different plotting tool (Matlab, LabVIEW,
etc…) the curve fit line does not appear to resemble the plot in Excel. So, I’m really curious as to how accurate the curve fit in Excel truly is, it could be as simple as different accuracy
weighting as well, but I am struggling finding a solution that matches Excel.
5. Scott – having looked at it again, there does seem to be a bug in the Linest results with polynomials of order 6 and higher. I have just fitted a polynomyial to the function y = 1/(x-4.99) for
100 points between x = 5 and x = 6. I used Linest, Excel chart trendline fitting, and the Alglib PolynomialFit routine.
I found that up to 5th order they all gave esentially the same results (but with rounding differences becoming significant at 5th order). For 6th order the chart line and the Alglib line were
very close, but the Linest line still followed the same form as the 5th order (i.e. maxima and minima were at about the same positions). For 7th order the Alglib line changed (as you would
expect), but the Linest line again stayed close to the 5th order position, and the chart line was past its limit.
I will write this up and post a UDF with the Alglib routine in the next few days. Drop me an e-mail (dougaj4 at google) if you would like a copy of the spreadsheet as it stands. You might also
like to have a look at the Alglib site which has some information about how they approach the problem.
6. The only discussion of a similar problem I found in a quick search was here:
The responses were not very helpful unfortunately.
7. Doug – I don’t think it’s a bug but a consequence of collinearity see http://support.microsoft.com/kb/828533. A column may be excluded if the sum of squared residuals on a regression of the other
predictor variables is small, see http://en.wikipedia.org/wiki/Multicollinearity. Unfortunately, it’s not clear from the description exactly how linest chooses in which order to exclude columns.
An equivalent of LINEST(Y,X) that doesn’t exclude columns is:
where N={1,0} for a linear fit, N={3,2,1,0} for a cubic fit, etc (X non-zero). However it’s not as numerically stable as the QR decomposition method that linest uses.
8. Lori, would you mind ellaborating on your collinearity theory, I’m not understanding why that would cause the problem I am seeing.
9. Scott – The remarks were in response to Doug’s specific example, i can’t claim that they are necessarily relevant in your case but it does show that the algorithms used may become significant for
higher order polynomials.
In my tests fitting a 6th degree polynomial gave an x^5 coefficient of zero for linest. This accords with the kb article since the results of linest(x^5,x^{1,2,3,4,6},,1) show that 1-R2=RSS/TSS
<1e-16 so x^5 is omitted.
It’s also interesting to compare with fitted values calculated from mmult(X^N,transpose(b)) where b is given by the formula i posted above. For N={6,5,4,3,2,1,0} there is close agreement with
linest but not with the chart line. If there is a follow up post a chart of this may shed more light.
10. I have used this function often for the pricing of fixed income securities, is is possible to modify it to allow for missing data? If for instance I do not have data in either the X or Y sets for
specific row in my spreadsheet?
11. Jeff – To find a y-value for a given x-value using a fitted cubic curve, you can try:
Or to find an x-value for a given y-value you can try to find the root of the cubic using:
These formulas are easily extended to other powers but you may be better off following the posts on splines for interpolating missing data.
□ Lori, can you explain more as to exactly what IRR does? Office just says that it is used to find the Internal Rate of Return. I think this is the solution I was needing for my problem but I
want to make sure. I have data that I want to find a 5th order polynomial equation to. Then, given a Y-value, find the X-value. The answers that I’m getting using this formula seem to be
reasonable, I just don’t know enough about the function of IRR to be confident its what I need.
☆ Tom – I’ll be interested to see Lori’s comments if she drops by, but my explanation of how it works is given here:
☆ Doug – Thanks for the explanation which is much more comprehensive than mine would have been!
Tom – As a check, try inputting the X Value returned into the TREND formula above and verify that the result equals the original Y value. In fact, a slightly simpler verion of the formula
I would like to add however, that while polynomial approximations can be very useful in theoretical analysis, there are rarely compelling reasons for choosing polynomial fits with
empirical data. Models should be based on a priori assumptions as far as possible to avoid problems of data-snooping. If you need a nonlinear approximation for estimation purposes there
are a multitutde of other smoothing methods that are often preferable.
12. Actually on rereading, the question is not directly about finding missing values but rather how to allow for missing values in data?
For a linear fit, SLOPE, INTERCEPT, RSQ and FORECAST skip rows containing blanks. For a nonlinear fit, it’s more challenging since for example LINEST(Y,X^{1,2,3}) errors if there are any blanks
in the range. One approach is to try instead:
The results should match the values given in the chart trendlines. An extension that also allows for filtered data is to replace ISNUMBER(X) in the formula above by SUBTOTAL(3,OFFSET(X,ROW(X)-MIN
(ROW(X)),,1)) and the same for ISNUMBER(Y). Similar substitutions can be applied to other formulas and you can assign names to the expressions for X and Y to keep things simple.
Clearly, there are other ways to achieve the same results, the obvious one being to make a copy without the rows containing the missing values, however this either needs to be done manually or a
macro setup to do it for you each time which is less efficient.
13. Jeff: see http://newtonexcelbach.wordpress.com/2011/05/14/using-linest-on-data-with-gaps/
Lori – thanks for the on-sheet solutions. I’m still trying to work out how the second one works!
14. Pingback: Data Analysis using Linest and the Data Table function. | Chandoo.org - Learn Microsoft Excel Online
15. I need to use linest to find the coefficients ‘a’ and ‘b’ that fit the curve y=a(1-exp(-bx)) to my data set. Judging by the comments here there are some clever people who would know how to do
that, however I’m not one of them, it has me stumped! Thanks to anyone who can offer any help…
16. Malcolm – as far as i know the easiest way is using the Excel Solver, as given by electricpete at eng-tips:
To use Linest you have to be able to convert the function to a linear function of some functions of x.
17. Hi all, multi-collinearity causes huge problems in polynomial regression if the range of x-values extends to values much larger than 1 in magnitude. For example, you might want to calculate the
correlation coefficients between x, x^2, x^3 and so on if x = 20,21,22,23…,30; they are all close to 1. This can even cause fatal interpretation errors if the estimated confidence intervals are
used for some kind of error propagation analysis without accounting for the large co-variances between the regression coefficients. The only way out of that trap is to use orthogonal (or even
orthonormal) polynomials like those of Legendre as basis functions. As these are defined on the interval [-1;1], the variables have to be transformed before the regression takes place. And always
mind the co-variances between the coefficients in an error propagation analysis, if you work with the coefficients for x, x^2…!
The usage of at least orthogonal polynomials is the only method that allows to reliably detect non-linear relationships far away from the origin, for example, when you want to do a non-linear
regression of income on age (30 – 70) or so.
□ Georg – Good points, some of these were alluded to in comments from the follow-up
post. For computing the coefficients, LINEST/TREND can be applied to data centered around the mean and the results are in close agreement to other high-precision polynomial regression
algorithms. The QR/SVD decomposition methods for calculating least squares estimates can be seen as finite dimensional analogs to orthogonal polynomial expansions of L^2-functions.
18. I would like to see how others graphically show the col-linearity – I have just posted on http://www.excelfox.com how to get a correlation map using colors. The post is under the heading
“Using property ColorScaleCriteria color you cells” (in the download center) – what it shows is a correlation matrix of 3 wet chemistry assays (Y variables) and Absorbencies as the X data — so
these are adjacent frequencies (X values) with a very high covariance.
Anybody who has examples of how to graphically show a map similar to what I just posted would be appreciated.
PS – I had to cut out data (max file size is 100kb) – so if you plot the X-values you will see ‘noise’ in the spectra.
□ Rasm – I couldn’t log in to view your file, but for a visual plot of collinearity in three regressor variables I would plot one variable against a best fit linear combination of the other
two. It’s insightful to do this for Doug’s polynomial example mentioned above.
Starting from a blank workbook, here’s a few steps to set up the chart and plot the data, no datasheets or code modules are required. You can just press Alt+F11 and enter sequentially in the
immediate window, the corresponding UI commands should be self-evident.
set s=activechart.SeriesCollection(1)
names.add "x", [4.99+row(sheet1!1:101)/100]
names.add "y", "=x^3"
names.add "z", "=trend(x^3,x^{1,2})"
s.formula = "=series(,sheet1!y,sheet1!z,1)"
s.trendlines.add DisplayRSquared:=True
This gives a near exact straight line with R^2=0.99999946. Choosing Debug>Add Watch with [linest(x^3,x^{1,2})] also gives the same value in the (3,1) element. Extending to [linest(x^6,x^
{1,2,3,4,5})] gives R^2=1 exactly to 15dp, so x^6 coefficeint is dropped from the calculation of coefficients as it adds no more information.
To reduce the collinearity the first step is to center around the mean by changing 4.99 to -0.51 above this gives R^2=0.8401098. The second step is to transform the columns so they are
uncorrelated, this is what Excel and other least squares methods do “behind the scenes”. Choosing the cubic Legendre polynomial in place of x^3 as below gives R^2=0.0019898.
names.add "x", [(x-average(x))*2]
names.add "y", "=2.5*x^3-1.5*x"
☆ Lori
I will try your method – but I have several 100s X values – up 1050.
I do preprocess the data i.e. SNV and typically a 1st derivative (with a smooth and a gap) – next I mean center the data. I typically find the best model using PLS or MLR. The data I work
with are spectra – that is why I have extreme col-linearity – the dependent variable is typical a concentration. But I do find the approach described in this threat very interesting.
Again thanks for your reply – I will try your method – may give me some inspiration.
19. In order to compare the extent of collinearity of two vectors V1 and V2 to our everyday experience of Euclidian sapce, it might help to calculate the angle A12 between them according to the
A result of 0 means totally collinear and 90 means totally independent.
20. Hi,
I want to find out the trendline for a set of data (x,Y) but I want to get the trendline in the form of sin(x) and cos(x). However, using the excel it is only possible to have it in the form of
linear, power, .. but not the sin or cos(x). How you think I can solve it?
□ Well asafa, if you really meant A*sin(x)+B*cos(x), you could use linest because your model is linear in the parameters. But I guess you want to solve something like A*cos(B*x)+C*cos(B*x) in
order to determine a spectral component. As this model is non-linear in the parameters, you have to use a non-linear least squares method, for example, Excel’s solver. My experience of 20
years of NLLSQ fits lets me strongly recommend to use VBA to interface to an external DLL that allows for suppying the derivatives of the model function with respect to the parameters
analytically. Doug has some posts on how to interface to Fortran. The procedure for interfacing to C is quite similar. There are free NLLSQ routines available for download on the web (NetLib,
AlgLib,…). You could use the free CAS Maxima, e.g., in order to determine the analytical derivatives if your models become more complicated than just being the sum of a sine and a cosine.
21. asafa and Georg – see:
for post on using the Excel Solver and the Alglib routines for non-linear regression in Excel.
I agree with Georg that for anyone doing serious work on this the purpose written routines such as those from Alglib offer much better performance than using Solver.
22. Hi everbody, great site!!!. I am working in forcasting project, and its so hard to read in the chart the exactly number of the trend, I am looking for information, how to get the values of the
linear, polynomial, exponential etc. lines but in values, for example my data is A1:A20 so I would like to see on B the linear values, C logarithmic, D polynomial etc. how can I do this, please
any help welcome.
23. Baum – Have a look at the download spreadsheet. It contains the examples shown in the screenshots which returns the data you want. If anything isn’t clear, please ask.
24. anyone knows any video link to see exactly how to get the coefficients trend lines.
25. Hi,
My name is Zack. I was wondering if there was any way to program an equation with a missing variable in excel, and have it calculate for the missing variable.
I.E.) X=Vo*t + 1/2*a*t^2 when knowing what x, Vo, and a are equal to, to try and find what “t” is.
or I.E.) V=Vo + a*t knowing Vo, a, and t to find “V”
If anyone could tell me how to do this with several other equations, I would greatly appreciate it.
my email is “zackgane@yahoo.com” thank you
26. Zack – for your first example you can use the Excel Goal-seek function (under the Data tab in 2007/2010). Also have a look at the Solver which gives more control and can solve more complex
problems (more than one unknown for instance). There are also some posts here with UDFs to solve polynomial equations, which will do the job more quickly and conveniently than Goal-seek if you
have a lot of them:
For the second example you can just enter the formula in a cell to find V, but maybe you meant to find t? In that case you can use the same methods as for the first example, but some simple
algebra will give the result: t = (V-Vo)/a. There is a formula solution to the first example as well; look up quadratic formula.
27. Hi
This is shiva
I have some points that should fit in a bell curve..
What i do to get the ordinates for any point.
Email. shiv_yers@hotmail.com
28. hi, i’m a final year civil engineering student and i’m doing a spreadsheet for the design of steel components using microsoft excel.. does anyone know how to generate the bending moment and shear
force diagrams in excel..? thanks :)
29. Hawwa – have you tried the Internet?
There are a stack of programs that will generate bending moments and shear forces, with different levels of complexity.
If you are looking for a continuous beam analysis you could start here:
30. Noob question and perhaps attributed to a superficialy understanding of arrays in excel (amongst other things) but for the life of me, I can’t understand why in the logarithmic example, the
forumlas in cells O55 and P55 return different values but appear to be exactly the same. What subtle point am I missing?
□ Scott – yes it is the use of array formulas that is confusing if you are not used to them.
The two cells contain the same formula, but it returns an array rather than a single cell, in this case the array is 1 row x 2 columns. The values are a and b in the formula
a(ln(x)) + b. You can see that the values match those of the trend line in the chart starting at C50.
To return both results of the array:
- Enter the formula in O55 in the usual way.
- Select O55:P55
- Press F2
- Press Ctrl-shift-enter
You should now get both results, and the formula will display in the edit line with {} around it, the same in both cells: =LINEST(Y_3,LN(X_3)).
You can also select O55:P55 to start with and enter with ctrl-shift-enter.
More details at: http://newtonexcelbach.wordpress.com/2011/05/10/using-array-formulas/
☆ Thanks for the quick reply :) Last question would then be whether its possible to actually have all of this in a single cell (i.e. a single formula that would return the value of y in a
cell by calculating a(ln(x)) + b) or do I essentially have to calculate a and b in separate cells like in your file and then do another cell to calculate y?
☆ Scott – it’s possible but the formula would get quite long. You can use the Index function to return any value from an array, the same as if it was a range, so:
=INDEX(LINEST(Y_3,LN(X_3)),2) will return the b value.
you would end up with something like:
=LINEST(Y_3,LN(X_3)) * LN(x) + INDEX(LINEST(Y_3,LN(X_3)),2)
But personally I’d rather return the results of the array formula in two cells, and use a third cell to get the desired result.
31. Thanks for the information. I want to ask one thing that in power series trend line, if the x axis is logarithmic will there be any effect on the values. Please reply.
□ The easiest way to check is to try it and see, but the answer is no, making the scale logarithmic only changes the way the graph is plotted, it doesn’t change the trend line results.
32. Your blog is very useful. As a new VBA user I’m having touble to extend this NL fit case a simple on variable case. Have tried bchanging both AL_NLFit and AL_NLFitText without success. I hit a
wall that says : cannot change part of an array. I even tried tp create a new one with the insert function for yusing UDF but could not get it to work.
Can you please help me? I’m a curious student in BA and I want to learn how to this. Thanks in advamnce.
□ Have a look at:
If you still have problems after reading that you could send a sample file to dougaj4 at the usual google mail address.
☆ I did it and it works!
Thank you very much!
33. Thanks for the information! I have a question, how do I calculate the b, c and d for the Cubic curve estimation example? When I use the formula =Linest(Y_1,X_1,^{1,2,3}) I get the number
presented as a (-2,08199), but how do I calculate the numbers b, c and d?
□ Lenn – sorry for the delay in replying.
You have to enter the Linest function as an array function to display all the results.
If you already have the function entered and displaying the first result then:
-Select that cell and the three adjacent cells to the right.
-Press F2 to enter Edit mode.
-Press Ctrl-shift-enter
The four results should display in the selected cells.
☆ Thanks for the explanation! That makes sense!
34. Thank you for the file.
This entry was posted in Charts, Excel, Maths, Newton and tagged curve fitting, Excel, least squares, Linest, LOGEST, XY chart. Bookmark the permalink.
|
{"url":"http://newtonexcelbach.wordpress.com/2011/01/19/using-linest-for-non-linear-curve-fitting/","timestamp":"2014-04-20T04:56:07Z","content_type":null,"content_length":"156455","record_id":"<urn:uuid:64a055bb-1850-41c5-ab55-6f396c12bc6e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Civil-Comp Press - Publications - ISBN 0-948749-89-X - Contents Page
Civil-Comp Press
Computational, Engineering & Technology
Conferences and Publications
PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON CIVIL AND STRUCTURAL
ENGINEERING COMPUTING
Edited by: B.H.V. Topping
click on a paper title to read the abstract or obtain the full-text paper from
I INTERNET APPLICATIONS
1 The Influence of Internet-Based Construction Portals
P.J. Gardner
2 Web Based Computation for Urban Earthquake Disaster Mitigation
P. Zhu, M. Abe and J. Kiyono
II SOFTWARE DEVELOPMENTS AND APPLICATIONS
3 Data Extraction in Engineering Software using XML
M.E. Williams, G.R. Consolazio and M.I. Hoit
4 Extending Finite Element Software by Component-Oriented Technology
M. Dolenc
III CONSTRUCTION ENGINEERING: DESIGN, CONTROL AND MANAGEMENT
5 Virtual Experiments for Innovative Construction Operations
H. Li and S. Kong
6 Spatio-Temporal Consistency Evaluation on Dynamic 3D Space System
Y. Song, D.K.H. Chua, C.L. Chang and S.H. Bok
7 Visual Product Chronology: A Solution for Linking Product Modelling
Technology with Practical Construction Needs
K. Kähkönen and J. Leinonen
8 Analytic Modelling, Diagnostic and Change-Engineering Tools for Use by
Management to Foster Learning in Construction Design Organisations
M. Phiri
9 The Application of an On-Site Inspection Support System to a
Hydropower Plant
T. Sakata and N. Yabuki
10 Efficient Algorithms for Octree-Based Geometric Modelling
R.-P. Mundani, H.-J. Bungartz, E. Rank, R. Romberg and A. Niggl
IV STRUCTURAL ANALYSIS AND STRUCTURAL RE-ANALYSIS
11 An Efficient Method for Decomposition of Regular Structures using
Algebraic Graph Theory
A. Kaveh and H. Rahami
12 Derivation and Implementation of a Flexibility-Based Large Increment
Method for Solving Non-Linear Structural Problems
W. Barham, A.J. Aref and G.F. Dargush
13 The Theorems of Structural Variation for Rectangular Finite Elements
for Plate Flexure
M.P. Saka
14 A Triangular Finite Element for the Geometrically Nonlinear Analysis
of Composite Shells
E. Gal and R. Levy
V CHAOS
15 Spatial Chaos of Buckled Elastica using the Kirchhoff Analogy of a
A.Y.T. Leung, J.L. Kuang, C.W. Lim and B. Zhu
VI BOUNDARY AND FINITE ELEMENT METHODS: THEORY AND METHODS
16 Boundary Element Analysis of Contact Film Stiffness
R.S. Hack and A.A. Becker
17 Dynamics of a Tunnel: Coupling of Finite Element (FEM) and Integral
Transform Techniques (ITM)
H. Grundmann and K. Müller
18 A Mixed Enthalpy-Temperature Finite Element Method for Generalized
Phase-Change Problems
K. Krabbenhoft and L. Damkilde
19 On Multi-Field Approximation Methods
G. Romano, F. Marotti de Sciarra and M. Diaco
20 Automatic Differentiation in Computational Mechanics
P.R.B. Devloo and E.S.R. Santos
VII MODELLING AND FINITE ELEMENT MESH GENERATION
21 Improvement of Mesh Quality by Combining Smoothing Techniques and
Local Refinement
J.M. Escobar, R. Montenegro, G. Montero, E. Rodríguez and J.M.
22 hp Auto Adaptive Finite Element Method on 3D Heterogeneous Meshes
P.R.B Devloo, C.M.A.A. Bravo and E.C. Rylo
VIII SOLUTION METHODS FOR LARGE SCALE PROBLEMS
23 Modified Versions of QMR-Type Methods
M.D. García, E. Flórez, A. Suárez, L. González and G. Montero
24 Numerical Solution of Coupled Problems
J. Kruis, T. Krejcí and Z. Bittnar
IX FINITE ELEMENT STUDIES
25 The Fatigue Life Remaining in an Airfield Runway Following an
Underground Explosion
J.W. Bull and C.H. Woodford
26 A Design Chart for the Design of Flexible Pavements Based on Finite
B.C. Bodhinayake and M.N.S. Hadi
27 Modelling of Ferrule Strap Connections to uPVC Pipes
F. Pozzessere, N.A. Alexander and R. Potter
28 Finite Element Modelling of Interactions between Openings in OSB
Webbed Timber I-Beams
E.C. Zhu, Z.W. Guan, P.D. Rodd, D.J. Pope
29 Finite Element Modelling of Glulam Beams Prestressed with Pultruded
Z.W. Guan, P.D. Rodd and D.J. Pope
30 Numerical Study on Semi-Rigid Racking Frames
M. Abdel-Jaber, R.G. Beale and M.H.R. Godley
31 Numerical Evaluation of Required Ductility and Load Bearing Capacity
for Aluminium Alloy Continuous Beams
M. Manganiello, G. De Matteis, R. Landolfo and F.M. Mazzolani
X ANALYSIS OF PLATES
32 Non-Linear Finite Element Analysis of Functionally Graded Material
Sector Plates
M. Salehi and M. Tayefeh
33 Micro as Required for Macromechanics of Circular, Annular and Sector
M. Salehi and M. Tayefeh
34 An Explicit Geometric Stiffness Matrix of a Triangular Flat Plate
Element for the Geometric Nonlinear Analysis of Shell Structures
J.-T. Chang and I.-D. Huang
35 Annular Sector Plates: Comparison of Full-Section and Layer Yield
G.J. Turvey and M. Salehi
36 Analysis of Stiffened Plates: An Effective Semi-Analytical Method
J.S. Kuang and H.X. Zhang
37 Reissner-Mindlin Plate Bending Elements with Shear Freedoms
B.A. Izzuddin and D. Lloyd Smith
38 Experimental Response and Numerical Simulation of Plates Submitted to
Small Mass Impact
H. Lopes, R.M. Guedes, M.A. Vaz and J.D. Rodrigues
39 Analysis of Cracked Plates using Hierarchical Trigonometric Functions
Y.V. Satish Kumar and Y.S. Suh
40 On the Computation of Stress Resultants for Plates with Free Edges
using the Ritz Method
C.M. Wang and Y. Xiang
41 Implementation of a Hybrid-Mixed Stress Model based on the Use of
L.M. Santos Castro and A.R. Barbosa
XI COMPUTER AIDED DESIGN AND ANALYSIS OF STEEL STRUCTURES
SESSION ORGANISED BY M. IVÁNYI
42 Buckling Modes of Flattened Edges Rectangular Hollow Members
A. Fülöp and M. Iványi
43 Object-Oriented Implementation of a Modified Heterosis Plate Finite
J. Balogh, M. Iványi and R.M. Gutkowski
44 Numerical Study on Eccentrically Loaded Hot Rolled Steel Single Angle
S. Sambasiva Rao, S.R. Satish Kumar and V. Kalyanaraman
45 Integrated Explosion and Fire Analysis of Space Steel Frame Structures
H. Chen and J.Y.R. Liew
46 Finite Element Simulations of Lateral Torsional Buckling of Tapered
Cantilever Beams
P. Buffel, G. Lagae, R. Van Impe, W. Vanlaere and M. De Beule
XII REINFORCED CONCRETE MODELLING AND ANALYSIS
47 Hybrid-Mixed Stress Model for the Non-Linear Analysis of Concrete
C.M. Silva and L.M. Santos Castro
48 Damage-Based Computational Model for Concrete
A.H. Al-Gadhib
49 An Advanced Concrete Model for RC and Composite Floor Slabs subject to
Extreme Loading
B.A. Izzuddin and A.Y. Elghazouli
50 A Unified Failure Criterion for Finite Element Analysis of Concrete
P.E.C. Seow, S. Swaddiwudhipong and K.K. Tho
51 Evaluation of the Fiber Orientation Effects on the Ductility of the
Confined Concrete Elements
L. Anania, A. Badalà and G. Failla
52 Analytical Integration over Cross-Sections in the Analysis of Spatial
Reinforced-Concrete Beams
D. Zupan and M. Saje
XIII REINFORCED CONCRETE STRUCTURES: ANALYSIS AND DESIGN
53 Combined Finite Strip and Beam Elements for Double Tee Slabs
M.A. Ghadeer, J.Q. Ye and A.H. Mansouri
54 Effect of Support Conditions on Strut-and-Tie Model of Deep Beams with
Web Openings
H. Guan, J. Parsons and S. Fragomeni
55 Cyclic Response of RC Shear Walls
H.G. Kwak and D.Y. Kim
56 Modelling of Interior Column Loads Transmission through Flat-Plate
S.A. Ali Shah and Y. Ribakov
57 Size Effect of Compressed Concrete in the Ultimate Limit States of RC
A.P. Fantilli, I. Iori and P. Vallini
58 Limit Analysis of Reinforced Concrete Shells of Revolution and its
M.A. Danieli (Danielashvili)
XIV MATERIALS MODELLING
59 Adaptive Simulation of Materials with Quasi-Brittle Failure
D. Rypl, B. Patzák and Z. Bittnar
60 Modelling of High Strength Concrete Structures
J. Nemecek and Z. Bittnar
61 Flowable Concrete: Three-Dimensional Quantitative Simulation and
M.A. Noor and T. Uomoto
62 Analytical Modeling of Rheology of High Flowing Mortar and Concrete
M.A. Noor and T. Uomoto
63 Material Sensitivity Studies for Homogenised Superconducting
M. Kaminski
64 Discontinuous Models for Modelling Fracture of Quasi-Brittle Materials
K. De Proft, W.P. De Wilde, G.N. Wells and L.J. Sluys
XV STATIC AND DYNAMIC ANALYSIS OF STEEL AND COMPOSITE STRUCTURES
SESSION ORGANISED BY P.C.G. DA S. VELLASCO
65 Effect of Cooling on the Behaviour of a Steel Beam under Fire Loading
including the End Joint Response
A. Santiago, L. Simões da Silva, P. Vila Real and J.M. Franssen
66 Influence of Joint Slippage on the Cyclic Response of Steel Frames
P. Nogueiro, L. Simões da Silva and R. Bento
67 Behaviour of Pin Connected Tension Joints
R. Simões and L. Simões da Silva
68 Characterisation of the Behaviour of the Column Web Loaded in
Out-of-Plane Bending in the Framework of the Component Method
L.C. Neves, L. Simões da Silva and P.C.G. da S. Vellasco
69 Evaluation of the Post-Limit Stiffness of Beam-to-Column Semi-Rigid
Joints using Genetic Algorithms
L.A.C. Borges, L.R.O. de Lima, L.A.P.S. da Silva and P.C.G. da S.
70 The Influence of Structural Steel Design Models on the Behaviour of
Slender Transmission and Telecommunication Towers
J.G.S. da Silva, P.C.G. da S. Vellasco, S.A.L. de Andrade and M.I.R.
de Oliveira
71 Partial-Strength Beam-to-Column Joints for High Ductile Steel-Concrete
Composite Frames
O.S. Bursi, D. Lucchesi and W. Salvatore
XVI VIBRATION ENGINEERING
72 A Dynamical Parametric Analysis of Semi-Rigid Portal Frames
J.G.S. da Silva, P.C.G. da S. Vellasco, S.A.L. de Andrade, L.R.O. de
Lima and R. de K.D. Lopes
73 A Survey of Vibration Serviceability Criteria for Structures
A. Ebrahimpour and R.L. Sack
74 Free Vibration of Metallic and Composite Beams Exhibiting
Bending-Torsion Coupling
H. Su, C.W. Cheung and J.R. Banerjee
75 Hybrid Finite Element Analysis of Vibrations of Anisotropic
Cylindrical Shells Conveying Fluid
M.H. Toorani, A.A. Lakis and M. Gou
XVII BEHAVIOUR OF STRUCTURES FOR DYNAMIC AND MOVING LOADS
SESSION ORGANISED BY D. LE HOUÉDEC AND L. FRÝBA
76 Stress Ranges in Bridges under High Speed Trains
L. Frýba, C. Fischer and J.-D. Yau
77 FEM and FEM-BEM Application for Vibration Prediction and Mitigation of
Track and Ground Dynamic Interaction under High-Speed Trains
H. Takemiya and M. Kojima
78 Dynamic Behaviour of Ballasted Railway Tracks: a Discrete/Continuous
L. Ricci, V.H. Nguyen, K. Sab, D. Duhamel and L. Schmitt
79 Modelling of Multilayer Viscoelastic Road Structures under Moving
D. Duhamel, V.H. Nguyen, A. Chabot and P. Tamagny
80 Numerical and Experimental Comparison of 3D-Model for the Study of
Railway Vibrations
B. Picoux and D. Le Houédec
81 Train-Bridge Interaction
G. De Roeck, E. Claes and H. Xia
82 Influence of the Second Flexural Mode on the Response of High-Speed
P. Museros and E. Alarcón
83 Modal Contributions to the Dynamic Response of Simply Supported
Bridges for High Speed Vehicles
M.D. Martínez-Rodrigo, P. Museros and M.L. Romero
84 Dynamic Diagnosis of Bridges
J. Bencat
85 Influence of the High Speeds of Moving Trains on the Dynamic Behaviour
of Multi-Span Bridges: Comparative Study with Various Types of French
K. Henchi, M. Fafard and C. Quézel
XVIII BRIDGE, RAILWAY AND ROAD ENGINEERING: DYNAMICS AND MODELLING
86 Stochastic Analysis of Suspension Bridges for Different Correlation
S. Adanur, A.A. Dumanoglu and K. Soyluk
87 Train-Induced Ground Vibrations: Experiments and Theory
A. Ditzel and G.C. Herman
88 Wheel-Rail Contact Elements Incorporating Rail Irregularities
C.J. Bowe and T.P. Mullarkey
89 Analysis of Bridge-Vehicle Interaction by Component-Mode Synthesis
B. Biondi, G. Muscolino and A. Sofi
90 Analysis of Cable-Stayed Bridges Under Propagating Excitation by
Random Vibration and Deterministic Methods
K. Soyluk and A.A. Dumanoglu
91 Harmonic Excitation of Bridges by Traffic Loads
M.M. Husain and M.K. Swailem
92 Dynamic Effect of Vehicles on Multispan Pre-Stressed Concrete Bridges
over Rivers
A.Z. Awad and M.K. Swailem
93 Development and Application of an IFC-Based Bridge Product Model
N. Yabuki and T. Shitani
94 High Performance Computing for High Speed Railways
L. Argandoña, E. Arias, J. Benet, F. Cuartero and T. Rojo
95 On the Analysis of Structure and Ground Borne Noise from Moving
L. Andersen, S.R.K. Nielsen and S. Krenk
XIX COMPUTATIONAL TECHNIQUES FOR COMPOSITE MATERIALS
SESSION ORGANISED BY A. RICCIO
96 Influence of Loading Conditions on Impact Induced Delamination in
Stiffened Composite Panels
A. Riccio and N. Tessitore
97 Optimisation of Fibre Arrangement of Filament Wound Liquid Oxygen
Composite Tanks
R. Barboni, G. Tomassetti and M. de Benedetti
98 Simulating Damage and Permanent Strain in Composites under In-Plane
Fatigue Loading
W. Van Paepegem and J. Degrieck
XX ANALYSIS OF MASONRY STRUCTURES
99 Investigation of FRP Consolidated Masonry Panels
A. Baratta and I. Corbi
100 Finite Element Model of a Brick Masonry Four-Sided Cloister Vault
Reinforced with FRPs
F. Portioli and R. Landolfo
101 Modelling Masonry Arch Bridges using Commercial Finite Element
T.E. Ford, C.E. Augarde and S.S. Tuxford
102 Collapse Analysis of Masonry Arch Bridges
T. Aoki and D. Sabia
103 Limit Analysis of No Tension Bodies and Non Linear Programming
A. Baratta and O. Corbi
104 The Computational Efficiency of Two Rigid Block Analysis Formulations
for Application to Masonry Structures
H.M. Ahmed and M. Gilbert
XXI SEISMIC ANALYSIS AND DESIGN
105 Site Effect Induced in the El-Asnam (Algeria) Earthquake of 10 October
K. Tounsi and M. Hammoutène
106 Influence of Damping Systems on Building Structures Subject to Seismic
J. Marko, D. Thambiratnam and N. Perera
107 A New Approach to Seismic Correction using Recursive Least Squares and
Wavelet De-Noising
A.A. Chanerley and N.A. Alexander
108 Nonlinear Dynamic Analysis of RC Frames under Earthquake Loading
H.G. Kwak and S.P. Kim
109 Probabilistic Model for Seismogenetic Areas in Seismic Risk Analyses
A. Baratta and I. Corbi
110 Behaviour of Solid Waste Landfill Liners under Earthquake Loading
S.P. Gopal Madabhushi and S. Singh
111 Energy Dissipation and Behaviour of Building Façade Systems under
Seismic Loads
R. Hareer, D. Thambiratnam and N. Perera
112 Dam-Reservoir Interaction for Incompressible-Unbounded Fluid Domains
using a New Truncation Boundary Condition
S. Küçükarslan
XXII ACTIVE AND PASSIVE CONTROL OF STRUCTURES
113 Geometrically Nonlinear Spring and Dash-pot Elements in Base Isolation
C.P. Katsaras, V.K. Koumousis and P. Tsopelas
114 Design of Smart Beams for Suppression of Wind-Induced Vibrations
G.E. Stavroulakis, G. Foutsitzi, V. Hadjigeorgiou, D. Marinov and C.C.
115 Continuous Bounded Controller for Active Control of Structures
Y. Arfiadi and M.N.S. Hadi
XXIII STRUCTURAL IDENTIFICATION AND DAMAGE DETECTION
116 Parameter Identification Method using Wavelet Transform
T. Ohkami, J. Nagao and S. Koyama
117 Damage Location Plot: A Non-Destructive Structural Damage Detection
D. Huynh, J. He and D. Tran
XXIV STRUCTURAL RELIABILITY: ANALYSIS AND DESIGN
118 Simulation-Based Reliability Assessment of Tension Structures
S. Kmet, M. Tomko and J. Brda
119 Fuzzy Cluster Design: A New Way for Structural Design
B. Möller, M. Beer and M. Liebscher
120 Numerical Estimation of Sensitivities for Complex
Probabilistically-Described Systems
R.E. Melchers and M. Ahammed
XXV WATER ENGINEERING
121 A Peaking Factor Based Statistical Approach to the Incorporation of
Variations in Demands in the Reliability Analysis of Water
Distribution Systems
S. Surendran, T.T. Tanyimboh and M. Tabesh
122 Water System Entropy: A Study of Redundancy as a Possible Lurking
Y. Setiadi, T.T. Tanyimboh, A.B. Templeman and B. Tahar
XXVI GEOTECHNICAL ENGINEERING
123 Analysis of Pipe-Soil Interaction for Pipejacking
K.J. Shou and F.W. Chang
124 Static and Pseudo-Static Retaining Wall Earth Pressure Analysis using
the Discrete Element Method
A.A. Mirghasemi and M. Maleki-Javan
125 Numerical and Physical Modelling of the Behaviour of Vertical Anchor
Walls in Cohesionless Soil
E.A. Dickin
126 Ground Displacements around a Tunnel using Three Dimensional Modelling
M.K. Swailem and A.Z. Awad
127 Effects of Inertial Interaction in Seismic Soil-Pile-Structure
D.M. Chu and K.Z. Truman
128 Wave-Induced Pore Pressure and Effective Stresses in the Vicinity of a
D.-S. Jeng and M. Lin
XXVII STRUCTURAL OPTIMISATION
129 Multi-Objective Optimization Approach to Design and Detailing of RC
M. Leps, R. Vondrácek, J. Zeman and Z. Bittnar
130 Design of Frames using Genetic Algorithms, Force Method and Graph
A. Kaveh and M. Abdie
131 Topology Optimization using Homogenization
Y. Wang, M. Xie and D. Tran
132 Evolutionary Topological Design of Three Dimensional Solid Structures
S. Savas, M. Ulker and M.P. Saka
133 Reliability Based Optimization of Complex Structures using Competitive
C.K. Dimou and V.K. Koumousis
134 Optimal Design of Curved Pre-Stressed Box Girder Bridges
N. Maniatis and V. Koumousis
135 A Simple Self-Design Methodology to Minimise Mass for Composite
M. Walker, R. Smith and D. Jonson
XXVIII PARALLEL AND DISTRIBUTED COMPUTATIONS
136 Distributed Finite Element Analysis and the .NET Framework
R.I. Mackie
137 A Low-Cost Parallel Architecture, the Hybrid System, for Solving a
Large Linear Matrix System
C.S. Leo, G. Leedham, C.J. Leo and H. Schroder
138 Static Partitioning for Heterogeneous Computational Environments
P. Iványi and B.H.V. Topping
XXIX EDUCATION
139 EuroCADcrete, a Concrete Exercise with the Help of Computer Aided
R. Weener and B. Kumar
click on a paper title to read the abstract or obtain the full-text paper from CTResources.info
I INTERNET APPLICATIONS
1 The Influence of Internet-Based Construction Portals
P.J. Gardner
2 Web Based Computation for Urban Earthquake Disaster Mitigation
P. Zhu, M. Abe and J. Kiyono
II SOFTWARE DEVELOPMENTS AND APPLICATIONS
3 Data Extraction in Engineering Software using XML
M.E. Williams, G.R. Consolazio and M.I. Hoit
4 Extending Finite Element Software by Component-Oriented Technology
M. Dolenc
III CONSTRUCTION ENGINEERING: DESIGN, CONTROL AND MANAGEMENT
5 Virtual Experiments for Innovative Construction Operations
H. Li and S. Kong
6 Spatio-Temporal Consistency Evaluation on Dynamic 3D Space System Model
Y. Song, D.K.H. Chua, C.L. Chang and S.H. Bok
7 Visual Product Chronology: A Solution for Linking Product Modelling Technology with Practical Construction Needs
K. Kähkönen and J. Leinonen
8 Analytic Modelling, Diagnostic and Change-Engineering Tools for Use by Management to Foster Learning in Construction Design Organisations
M. Phiri
9 The Application of an On-Site Inspection Support System to a Hydropower Plant
T. Sakata and N. Yabuki
10 Efficient Algorithms for Octree-Based Geometric Modelling
R.-P. Mundani, H.-J. Bungartz, E. Rank, R. Romberg and A. Niggl
IV STRUCTURAL ANALYSIS AND STRUCTURAL RE-ANALYSIS
11 An Efficient Method for Decomposition of Regular Structures using Algebraic Graph Theory
A. Kaveh and H. Rahami
12 Derivation and Implementation of a Flexibility-Based Large Increment Method for Solving Non-Linear Structural Problems
W. Barham, A.J. Aref and G.F. Dargush
13 The Theorems of Structural Variation for Rectangular Finite Elements for Plate Flexure
M.P. Saka
14 A Triangular Finite Element for the Geometrically Nonlinear Analysis of Composite Shells
E. Gal and R. Levy
V CHAOS
15 Spatial Chaos of Buckled Elastica using the Kirchhoff Analogy of a Gyrostat
A.Y.T. Leung, J.L. Kuang, C.W. Lim and B. Zhu
VI BOUNDARY AND FINITE ELEMENT METHODS: THEORY AND METHODS
16 Boundary Element Analysis of Contact Film Stiffness
R.S. Hack and A.A. Becker
17 Dynamics of a Tunnel: Coupling of Finite Element (FEM) and Integral Transform Techniques (ITM)
H. Grundmann and K. Müller
18 A Mixed Enthalpy-Temperature Finite Element Method for Generalized Phase-Change Problems
K. Krabbenhoft and L. Damkilde
19 On Multi-Field Approximation Methods
G. Romano, F. Marotti de Sciarra and M. Diaco
20 Automatic Differentiation in Computational Mechanics
P.R.B. Devloo and E.S.R. Santos
VII MODELLING AND FINITE ELEMENT MESH GENERATION
21 Improvement of Mesh Quality by Combining Smoothing Techniques and Local Refinement
J.M. Escobar, R. Montenegro, G. Montero, E. Rodríguez and J.M. González-Yuste
22 hp Auto Adaptive Finite Element Method on 3D Heterogeneous Meshes
P.R.B Devloo, C.M.A.A. Bravo and E.C. Rylo
VIII SOLUTION METHODS FOR LARGE SCALE PROBLEMS
23 Modified Versions of QMR-Type Methods
M.D. García, E. Flórez, A. Suárez, L. González and G. Montero
24 Numerical Solution of Coupled Problems
J. Kruis, T. Krejcí and Z. Bittnar
IX FINITE ELEMENT STUDIES
25 The Fatigue Life Remaining in an Airfield Runway Following an Underground Explosion
J.W. Bull and C.H. Woodford
26 A Design Chart for the Design of Flexible Pavements Based on Finite Elements
B.C. Bodhinayake and M.N.S. Hadi
27 Modelling of Ferrule Strap Connections to uPVC Pipes
F. Pozzessere, N.A. Alexander and R. Potter
28 Finite Element Modelling of Interactions between Openings in OSB Webbed Timber I-Beams
E.C. Zhu, Z.W. Guan, P.D. Rodd, D.J. Pope
29 Finite Element Modelling of Glulam Beams Prestressed with Pultruded GRP
Z.W. Guan, P.D. Rodd and D.J. Pope
30 Numerical Study on Semi-Rigid Racking Frames
M. Abdel-Jaber, R.G. Beale and M.H.R. Godley
31 Numerical Evaluation of Required Ductility and Load Bearing Capacity for Aluminium Alloy Continuous Beams
M. Manganiello, G. De Matteis, R. Landolfo and F.M. Mazzolani
X ANALYSIS OF PLATES
32 Non-Linear Finite Element Analysis of Functionally Graded Material Sector Plates
M. Salehi and M. Tayefeh
33 Micro as Required for Macromechanics of Circular, Annular and Sector Plates
M. Salehi and M. Tayefeh
34 An Explicit Geometric Stiffness Matrix of a Triangular Flat Plate Element for the Geometric Nonlinear Analysis of Shell Structures
J.-T. Chang and I.-D. Huang
35 Annular Sector Plates: Comparison of Full-Section and Layer Yield Predictions
G.J. Turvey and M. Salehi
36 Analysis of Stiffened Plates: An Effective Semi-Analytical Method
J.S. Kuang and H.X. Zhang
37 Reissner-Mindlin Plate Bending Elements with Shear Freedoms
B.A. Izzuddin and D. Lloyd Smith
38 Experimental Response and Numerical Simulation of Plates Submitted to Small Mass Impact
H. Lopes, R.M. Guedes, M.A. Vaz and J.D. Rodrigues
39 Analysis of Cracked Plates using Hierarchical Trigonometric Functions
Y.V. Satish Kumar and Y.S. Suh
40 On the Computation of Stress Resultants for Plates with Free Edges using the Ritz Method
C.M. Wang and Y. Xiang
41 Implementation of a Hybrid-Mixed Stress Model based on the Use of Wavelets
L.M. Santos Castro and A.R. Barbosa
XI COMPUTER AIDED DESIGN AND ANALYSIS OF STEEL STRUCTURES
SESSION ORGANISED BY M. IVÁNYI
42 Buckling Modes of Flattened Edges Rectangular Hollow Members
A. Fülöp and M. Iványi
43 Object-Oriented Implementation of a Modified Heterosis Plate Finite Element
J. Balogh, M. Iványi and R.M. Gutkowski
44 Numerical Study on Eccentrically Loaded Hot Rolled Steel Single Angle Struts
S. Sambasiva Rao, S.R. Satish Kumar and V. Kalyanaraman
45 Integrated Explosion and Fire Analysis of Space Steel Frame Structures
H. Chen and J.Y.R. Liew
46 Finite Element Simulations of Lateral Torsional Buckling of Tapered Cantilever Beams
P. Buffel, G. Lagae, R. Van Impe, W. Vanlaere and M. De Beule
XII REINFORCED CONCRETE MODELLING AND ANALYSIS
47 Hybrid-Mixed Stress Model for the Non-Linear Analysis of Concrete Structures
C.M. Silva and L.M. Santos Castro
48 Damage-Based Computational Model for Concrete
A.H. Al-Gadhib
49 An Advanced Concrete Model for RC and Composite Floor Slabs subject to Extreme Loading
B.A. Izzuddin and A.Y. Elghazouli
50 A Unified Failure Criterion for Finite Element Analysis of Concrete Structures
P.E.C. Seow, S. Swaddiwudhipong and K.K. Tho
51 Evaluation of the Fiber Orientation Effects on the Ductility of the Confined Concrete Elements
L. Anania, A. Badalà and G. Failla
52 Analytical Integration over Cross-Sections in the Analysis of Spatial Reinforced-Concrete Beams
D. Zupan and M. Saje
53 Combined Finite Strip and Beam Elements for Double Tee Slabs
M.A. Ghadeer, J.Q. Ye and A.H. Mansouri
54 Effect of Support Conditions on Strut-and-Tie Model of Deep Beams with Web Openings
H. Guan, J. Parsons and S. Fragomeni
55 Cyclic Response of RC Shear Walls
H.G. Kwak and D.Y. Kim
56 Modelling of Interior Column Loads Transmission through Flat-Plate Floors
S.A. Ali Shah and Y. Ribakov
57 Size Effect of Compressed Concrete in the Ultimate Limit States of RC Elements
A.P. Fantilli, I. Iori and P. Vallini
58 Limit Analysis of Reinforced Concrete Shells of Revolution and its Application
M.A. Danieli (Danielashvili)
XIV MATERIALS MODELLING
59 Adaptive Simulation of Materials with Quasi-Brittle Failure
D. Rypl, B. Patzák and Z. Bittnar
60 Modelling of High Strength Concrete Structures
J. Nemecek and Z. Bittnar
61 Flowable Concrete: Three-Dimensional Quantitative Simulation and Applications
M.A. Noor and T. Uomoto
62 Analytical Modeling of Rheology of High Flowing Mortar and Concrete
M.A. Noor and T. Uomoto
63 Material Sensitivity Studies for Homogenised Superconducting Composites
M. Kaminski
64 Discontinuous Models for Modelling Fracture of Quasi-Brittle Materials
K. De Proft, W.P. De Wilde, G.N. Wells and L.J. Sluys
XV STATIC AND DYNAMIC ANALYSIS OF STEEL AND COMPOSITE STRUCTURES
SESSION ORGANISED BY P.C.G. DA S. VELLASCO
65 Effect of Cooling on the Behaviour of a Steel Beam under Fire Loading including the End Joint Response
A. Santiago, L. Simões da Silva, P. Vila Real and J.M. Franssen
66 Influence of Joint Slippage on the Cyclic Response of Steel Frames
P. Nogueiro, L. Simões da Silva and R. Bento
67 Behaviour of Pin Connected Tension Joints
R. Simões and L. Simões da Silva
68 Characterisation of the Behaviour of the Column Web Loaded in Out-of-Plane Bending in the Framework of the Component Method
L.C. Neves, L. Simões da Silva and P.C.G. da S. Vellasco
69 Evaluation of the Post-Limit Stiffness of Beam-to-Column Semi-Rigid Joints using Genetic Algorithms
L.A.C. Borges, L.R.O. de Lima, L.A.P.S. da Silva and P.C.G. da S. Vellasco
70 The Influence of Structural Steel Design Models on the Behaviour of Slender Transmission and Telecommunication Towers
J.G.S. da Silva, P.C.G. da S. Vellasco, S.A.L. de Andrade and M.I.R. de Oliveira
71 Partial-Strength Beam-to-Column Joints for High Ductile Steel-Concrete Composite Frames
O.S. Bursi, D. Lucchesi and W. Salvatore
XVI VIBRATION ENGINEERING
72 A Dynamical Parametric Analysis of Semi-Rigid Portal Frames
J.G.S. da Silva, P.C.G. da S. Vellasco, S.A.L. de Andrade, L.R.O. de Lima and R. de K.D. Lopes
73 A Survey of Vibration Serviceability Criteria for Structures
A. Ebrahimpour and R.L. Sack
74 Free Vibration of Metallic and Composite Beams Exhibiting Bending-Torsion Coupling
H. Su, C.W. Cheung and J.R. Banerjee
75 Hybrid Finite Element Analysis of Vibrations of Anisotropic Cylindrical Shells Conveying Fluid
M.H. Toorani, A.A. Lakis and M. Gou
XVII BEHAVIOUR OF STRUCTURES FOR DYNAMIC AND MOVING LOADS
SESSION ORGANISED BY D. LE HOUÉDEC AND L. FRÝBA
76 Stress Ranges in Bridges under High Speed Trains
L. Frýba, C. Fischer and J.-D. Yau
77 FEM and FEM-BEM Application for Vibration Prediction and Mitigation of Track and Ground Dynamic Interaction under High-Speed Trains
H. Takemiya and M. Kojima
78 Dynamic Behaviour of Ballasted Railway Tracks: a Discrete/Continuous Approach
L. Ricci, V.H. Nguyen, K. Sab, D. Duhamel and L. Schmitt
79 Modelling of Multilayer Viscoelastic Road Structures under Moving Loads
D. Duhamel, V.H. Nguyen, A. Chabot and P. Tamagny
80 Numerical and Experimental Comparison of 3D-Model for the Study of Railway Vibrations
B. Picoux and D. Le Houédec
81 Train-Bridge Interaction
G. De Roeck, E. Claes and H. Xia
82 Influence of the Second Flexural Mode on the Response of High-Speed Bridges
P. Museros and E. Alarcón
83 Modal Contributions to the Dynamic Response of Simply Supported Bridges for High Speed Vehicles
M.D. Martínez-Rodrigo, P. Museros and M.L. Romero
84 Dynamic Diagnosis of Bridges
J. Bencat
85 Influence of the High Speeds of Moving Trains on the Dynamic Behaviour of Multi-Span Bridges: Comparative Study with Various Types of French Bridges
K. Henchi, M. Fafard and C. Quézel
86 Stochastic Analysis of Suspension Bridges for Different Correlation Functions
S. Adanur, A.A. Dumanoglu and K. Soyluk
87 Train-Induced Ground Vibrations: Experiments and Theory
A. Ditzel and G.C. Herman
88 Wheel-Rail Contact Elements Incorporating Rail Irregularities
C.J. Bowe and T.P. Mullarkey
89 Analysis of Bridge-Vehicle Interaction by Component-Mode Synthesis Method
B. Biondi, G. Muscolino and A. Sofi
90 Analysis of Cable-Stayed Bridges Under Propagating Excitation by Random Vibration and Deterministic Methods
K. Soyluk and A.A. Dumanoglu
91 Harmonic Excitation of Bridges by Traffic Loads
M.M. Husain and M.K. Swailem
92 Dynamic Effect of Vehicles on Multispan Pre-Stressed Concrete Bridges over Rivers
A.Z. Awad and M.K. Swailem
93 Development and Application of an IFC-Based Bridge Product Model
N. Yabuki and T. Shitani
94 High Performance Computing for High Speed Railways
L. Argandoña, E. Arias, J. Benet, F. Cuartero and T. Rojo
95 On the Analysis of Structure and Ground Borne Noise from Moving Sources
L. Andersen, S.R.K. Nielsen and S. Krenk
SESSION ORGANISED BY A. RICCIO
96 Influence of Loading Conditions on Impact Induced Delamination in Stiffened Composite Panels
A. Riccio and N. Tessitore
97 Optimisation of Fibre Arrangement of Filament Wound Liquid Oxygen Composite Tanks
R. Barboni, G. Tomassetti and M. de Benedetti
98 Simulating Damage and Permanent Strain in Composites under In-Plane Fatigue Loading
W. Van Paepegem and J. Degrieck
XX ANALYSIS OF MASONRY STRUCTURES
99 Investigation of FRP Consolidated Masonry Panels
A. Baratta and I. Corbi
100 Finite Element Model of a Brick Masonry Four-Sided Cloister Vault Reinforced with FRPs
F. Portioli and R. Landolfo
101 Modelling Masonry Arch Bridges using Commercial Finite Element Software
T.E. Ford, C.E. Augarde and S.S. Tuxford
102 Collapse Analysis of Masonry Arch Bridges
T. Aoki and D. Sabia
103 Limit Analysis of No Tension Bodies and Non Linear Programming
A. Baratta and O. Corbi
104 The Computational Efficiency of Two Rigid Block Analysis Formulations for Application to Masonry Structures
H.M. Ahmed and M. Gilbert
XXI SEISMIC ANALYSIS AND DESIGN
105 Site Effect Induced in the El-Asnam (Algeria) Earthquake of 10 October 10 1980
K. Tounsi and M. Hammoutène
106 Influence of Damping Systems on Building Structures Subject to Seismic Effects
J. Marko, D. Thambiratnam and N. Perera
107 A New Approach to Seismic Correction using Recursive Least Squares and Wavelet De-Noising
A.A. Chanerley and N.A. Alexander
108 Nonlinear Dynamic Analysis of RC Frames under Earthquake Loading
H.G. Kwak and S.P. Kim
109 Probabilistic Model for Seismogenetic Areas in Seismic Risk Analyses
A. Baratta and I. Corbi
110 Behaviour of Solid Waste Landfill Liners under Earthquake Loading
S.P. Gopal Madabhushi and S. Singh
111 Energy Dissipation and Behaviour of Building Façade Systems under Seismic Loads
R. Hareer, D. Thambiratnam and N. Perera
112 Dam-Reservoir Interaction for Incompressible-Unbounded Fluid Domains using a New Truncation Boundary Condition
S. Küçükarslan
XXII ACTIVE AND PASSIVE CONTROL OF STRUCTURES
113 Geometrically Nonlinear Spring and Dash-pot Elements in Base Isolation Systems
C.P. Katsaras, V.K. Koumousis and P. Tsopelas
114 Design of Smart Beams for Suppression of Wind-Induced Vibrations
G.E. Stavroulakis, G. Foutsitzi, V. Hadjigeorgiou, D. Marinov and C.C. Baniotopoulos
115 Continuous Bounded Controller for Active Control of Structures
Y. Arfiadi and M.N.S. Hadi
116 Parameter Identification Method using Wavelet Transform
T. Ohkami, J. Nagao and S. Koyama
117 Damage Location Plot: A Non-Destructive Structural Damage Detection Technique
D. Huynh, J. He and D. Tran
XXIV STRUCTURAL RELIABILITY: ANALYSIS AND DESIGN
118 Simulation-Based Reliability Assessment of Tension Structures
S. Kmet, M. Tomko and J. Brda
119 Fuzzy Cluster Design: A New Way for Structural Design
B. Möller, M. Beer and M. Liebscher
120 Numerical Estimation of Sensitivities for Complex Probabilistically-Described Systems
R.E. Melchers and M. Ahammed
XXV WATER ENGINEERING
121 A Peaking Factor Based Statistical Approach to the Incorporation of Variations in Demands in the Reliability Analysis of Water Distribution Systems
S. Surendran, T.T. Tanyimboh and M. Tabesh
122 Water System Entropy: A Study of Redundancy as a Possible Lurking Variable
Y. Setiadi, T.T. Tanyimboh, A.B. Templeman and B. Tahar
XXVI GEOTECHNICAL ENGINEERING
123 Analysis of Pipe-Soil Interaction for Pipejacking
K.J. Shou and F.W. Chang
124 Static and Pseudo-Static Retaining Wall Earth Pressure Analysis using the Discrete Element Method
A.A. Mirghasemi and M. Maleki-Javan
125 Numerical and Physical Modelling of the Behaviour of Vertical Anchor Walls in Cohesionless Soil
E.A. Dickin
126 Ground Displacements around a Tunnel using Three Dimensional Modelling
M.K. Swailem and A.Z. Awad
127 Effects of Inertial Interaction in Seismic Soil-Pile-Structure Interaction
D.M. Chu and K.Z. Truman
128 Wave-Induced Pore Pressure and Effective Stresses in the Vicinity of a Breakwater
D.-S. Jeng and M. Lin
129 Multi-Objective Optimization Approach to Design and Detailing of RC Frames
M. Leps, R. Vondrácek, J. Zeman and Z. Bittnar
130 Design of Frames using Genetic Algorithms, Force Method and Graph Theory
A. Kaveh and M. Abdie
131 Topology Optimization using Homogenization
Y. Wang, M. Xie and D. Tran
132 Evolutionary Topological Design of Three Dimensional Solid Structures
S. Savas, M. Ulker and M.P. Saka
133 Reliability Based Optimization of Complex Structures using Competitive GAs
C.K. Dimou and V.K. Koumousis
134 Optimal Design of Curved Pre-Stressed Box Girder Bridges
N. Maniatis and V. Koumousis
135 A Simple Self-Design Methodology to Minimise Mass for Composite Structures
M. Walker, R. Smith and D. Jonson
136 Distributed Finite Element Analysis and the .NET Framework
R.I. Mackie
137 A Low-Cost Parallel Architecture, the Hybrid System, for Solving a Large Linear Matrix System
C.S. Leo, G. Leedham, C.J. Leo and H. Schroder
138 Static Partitioning for Heterogeneous Computational Environments
P. Iványi and B.H.V. Topping
XXIX EDUCATION
139 EuroCADcrete, a Concrete Exercise with the Help of Computer Aided Learning
R. Weener and B. Kumar
|
{"url":"http://www.civil-comp.com/pubs/catalog.htm?t=contents&f=89_X","timestamp":"2014-04-18T18:10:24Z","content_type":null,"content_length":"39784","record_id":"<urn:uuid:6da7a542-ff59-46d9-b22f-c580fbfe0bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distance of two moving objects.
March 18th 2010, 03:48 AM
Distance of two moving objects.
At noon ship A is 100 km west of ship B. Ship A travels south at 35km/h and ship B travels North at 25 km/hr. At 4pm how fast does the distance between them change?
I don't quite understand how to do this so a complete explanation would be best.
March 18th 2010, 04:37 AM
Haha, I have nearly the same problem.
This has to do with implicit differentiation
Use the formula d = v*t
I would say you could also use a^2+b^2=c^2 but they're going in complete opposite directions.
By the way your problem is featured on youtube.
March 18th 2010, 05:41 AM
Set up a coordinate system so that ship A is at (0, 0) and ship B is at (100, 0). t hours after noon, ship A is at (0, -35t) and ship B is at (100, 25t).
The distance between them, as a function of t, is $\sqrt{100^2+ (25t-(-35t))^2}= \sqrt{100^2+ 60^2t^2}$.
Find the derivative of that at t= 4.
|
{"url":"http://mathhelpforum.com/calculus/134422-distance-two-moving-objects-print.html","timestamp":"2014-04-17T01:25:25Z","content_type":null,"content_length":"5429","record_id":"<urn:uuid:5f1ab957-77ed-471d-83fe-bacd3046f4a0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Noam Greenberg
My main research interests are computability theory, algorithmic randomness, reverse mathematics, higher recursion theory, computable model theory, and set theory.
From 2011 until 2015 I am a Rutherford Discovery fellow. From 2012 to 2015 I am a Turing Research fellow.
I am currently the coordinating editor of the Journal of Symbolic Logic.
The proof that there are incomparable Turing degrees is really intended as a joke; there have been some misunderstandings.
|
{"url":"http://homepages.mcs.vuw.ac.nz/~greenberg/","timestamp":"2014-04-17T01:25:10Z","content_type":null,"content_length":"3232","record_id":"<urn:uuid:d6d510da-98e7-42a1-9403-e51e599d755e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weyl module
From Encyclopedia of Mathematics
Objects that are of fundamental importance for the representation theory of reductive algebraic groups Representation of a group; Reductive group; Algebraic group). Considering such groups as group
schemes (cf. Group scheme), that is, as a family of groups Finite group, representation of a). However, whereas for finite groups the reduction modulo a prime
Below, the example of general linear groups will be discussed in more detail to illuminate this reduction process. For these, R. Carter and G. Lusztig used the term "Weyl module" the first time in
their fundamental paper [a5], where they discussed polynomial representations of general linear groups and indicated how their methods generalize to arbitrary reductive groups. There these modules
were constructed in the "same" way as in [a17].
General linear groups.
Let General linear group). Let [a12]. He rederived all these results in a paper [a13] of 1927 in terms of the module Young diagram consisting of crosses in the plane in
Let Young tableau). The
considered as a subgroup of
It is now possible to define Weyl modules for
the element
The vector
which again is denoted by Character of a group). Thus, the weights
where Character formula). This will be explained below in detail, in the more general context of reductive groups. It turns out that
It is obvious that no proper submodule of
The space
where socle is simple and is isomorphic to
There is another interpretation of the Borel subgroup algebraic group
For further reference and results on the special case of general linear groups, in particular for explicit formulas for bases of Weyl modules and induced modules in terms of bi-determinants, see the
fundamental monograph of J.A. Green [a6].
Reductive groups.
All this generalizes to arbitrary reductive groups. For simplicity it is assumed that
Associated with
First, one may extend
The second approach for setting up Weyl modules involves the complex simple Lie algebra root system
There is a duality between the Euclidean space killing form) and the real space generated by the Chevalley basis of
The toral subalgebra Representation of a Lie algebra). They play a role similar to Weyl modules for Character formula).
Let faithful representation of base change to obtain a transition from Universal enveloping algebra). This is the
for Chevalley group
This is applied to the irreducible
Character formulas.
One of the most outstanding open (1998) problems concerning Weyl modules is to determine the composition factors of those, or, in the language of Brauer theory, to determine the decomposition matrix
There is a similar problem in the representation theory of the Lie algebra [a7]; it was proven shortly after in [a3] and, independently, in [a4]. The combinatorics in this formula are given by
Kazhdan–Lusztig polynomials, which are based on properties of the Hecke algebra, a deformation of the Weyl group of
The Lusztig conjecture predicts similarly a character formula for Weyl modules in terms of certain Kazhdan–Lusztig polynomials; however, under additional assumptions on the characteristic [a1], H.H.
Andersen, J.C. Jantzen and W. Soergel proved the Lusztig conjecture for large
The notion of a category Kac–Moody algebra) (especially in the affine case), and the Kazhdan–Lusztig conjecture is true here as well by a result of M. Kashiwara and T. Tanisaki, [a10]. There is
another remarkable extension of the theory to a new class of objects, called quantum groups, which are deformations involving a parameter [a8], [a9], Kazhdan and Lusztig produced an equivalence
between a certain category [a1].
The formal characters of the irreducible representations of quantum groups at roots of unity or, equivalently, the decomposition multiplicities of irreducible modules in [a14], [a15] character
formulas for indecomposable tilting modules. Those give another basis of the Grothendieck group and his approach provides a much faster algorithm to compute the decomposition matrices of quantum
groups at roots of unity in characteristic
When computing the crystal basis of the Fock space, A. Lascoux, B. Leclerc and J.-Y. Thibon noticed an astonishing coincidence of their calculations with decomposition tables for general linear
groups and symmetric groups, and conjectured in [a11] that one can derive the decomposition matrices of quantum groups of type [a18], [a19]) at roots of unity by comparing the standard and the
crystal basis of Fock space. This conjecture was extended and proved by S. Ariki in [a2], [a16]. The concrete computation is given again by evaluating certain polynomials at one. It is remarkable
that those have non-negative integral coefficients. It is conjectured that these have a deeper meaning: They should give the composition multiplicities of the various layers in the Jantzen filtration
of the
[a1] H.H. Andersen, J.C. Jantzen, W. Soergel, "Representations of quantum groups at a Astérisque , 220 (1994) pp. 1–321 MR1272539
[a2] S. Ariki, "On the decomposition numbers of the Hecke algebra of J. Math. Kyoto Univ. , 36 : 4 (1996) pp. 789–808 MR1443748 Zbl 0888.20011
[a3] A.A. Beilinson, I.N. Bernstein, "Localisation de C.R. Acad. Sci. Paris Ser. I Math. , 292 : 1 (1981) pp. 15–18 MR610137
[a4] J.L. Brylinski, M. Kashiwara, "Kazhdan Lusztig conjecture and holonomic systems" Invent. Math. , 64 : 3 (1981) pp. 387–410 MR0632980 Zbl 0473.22009
[a5] R. Carter, G. Lusztig, "On the modular representations of the general linear and symmetric groups" Math. Z. , 136 (1974) pp. 193–242 MR0369503 MR0354887 Zbl 0301.20005 Zbl 0298.20009
[a6] J.A. Green, "Polynomial representations of Lecture Notes Math. , 830 , Springer (1980) MR0606556 Zbl 0451.20037
[a7] D. Kazhdan, G. Lusztig, "Representations of Coxeter groups and Hecke algebras" Invent. Math. , 53 : 2 (1979) pp. 165–184 MR0560412 Zbl 0499.20035
[a8] D. Kazhdan, G. Lusztig, "Affine Lie algebras and quantum groups" Duke Math. J. , 62 (1991) (Also: Internat. Math. Res. Notices 2 (1991), 21-29) MR1104840 Zbl 0726.17015
[a9] D. Kazhdan, G. Lusztig, "Tensor structures arising from affine Lie algebras I-III, III-IV" J. Amer. Math. Soc. , 6–7 (1993/94) pp. 905–1011; 335–453
[a10] M. Kashiwara, T. Tanisaki, "Kazhdan–Lusztig conjecture for affine Lie algebras with negative level. I,II" Duke Math. J. , 77–84 (1995-1996) pp. 21–62; 771–81 MR1408544 MR1317626
[a11] A. Lascoux, B. Leclerc, J.-Y. Thibon, "Hecke algebras at roots of unity and crystal bases of quantum affine algebras" Comm. Math. Phys. , 181 : 1 (1996) pp. 205–263 MR1410572 Zbl 0874.17009
[a12] I. Schur, "Uber eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen (1901)" , I. Schur, Gesammelte Abhandlungen I , Springer (1973) pp. 1–70
[a13] I. Schur, "Uber die rationalen Darstellungen der allgemeinen linearen Gruppe (1927)" , I. Schur, Gesammelte Abhandlungen III , Springer (1973) pp. 68–85
[a14] W. Soergel, "Charakterformeln für Kipp–Moduln über Kac–Moody–Algebren" Represent. Theory , 1 (1997) pp. 115–132 (Electronic},) Zbl 0964.17019
[a15] W. Soergel, "Kazhdan–Lusztig polynomials and a combinatoric[s] for tilting modules" Represent. Theory , 1 (1997) pp. 83–114 MR1444322
[a16] M. Varagnolo, E. Vasserot, "Canonical bases and Lusztig conjecture for quantized sl(N) at roots of unity" Preprint (1998) (math.QA/9803023)
[a17] H. Weyl, "The classical groups, their invariants and representations" , Princeton Univ. Press (1966) MR0000255 Zbl 1024.20501 Zbl 1024.20502 Zbl 0020.20601 Zbl 65.0058.02
[a18] R. Dipper, G. James, "The Proc. London Math. Soc. , 59 (1989) pp. 23–50 MR997250
[a19] R. Dipper, G. James, "Trans. Amer. Math. Soc. , 327 (1991) pp. 251–282 MR1012527
How to Cite This Entry:
Weyl module. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Weyl_module&oldid=21961
This article was adapted from an original article by R. Dipper (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
|
{"url":"http://www.encyclopediaofmath.org/index.php?title=Weyl_module","timestamp":"2014-04-21T09:36:14Z","content_type":null,"content_length":"85372","record_id":"<urn:uuid:5f6ae453-a056-4d06-9ee5-7f7c8a5c7b76>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[R] levelplot/heatmap question
Deepayan Sarkar deepayan.sarkar at gmail.com
Thu Sep 25 17:37:28 CEST 2008
On 9/24/08, cerman at u.washington.edu <cerman at u.washington.edu> wrote:
> Hello!
> I have data containing a large number of probabilities (about 60) of
> nonzero coefficients to predict 10 different independent variables (in 10
> different BMA models). i've arranged these probabilities in a matrix like
> so:
> (IV1) (IV2) (IV3) ...
> p(b0) p(b0) p(b0)
> p(b1) p(b1) p(b1)
> p(b2) p(b2) p(b2)
> ...
> where p(b1) for independent variable 1 is p(b1 != 0) (given model
> uncertainty - using the BMA package). i've also set it so that if the
> coefficient is negative, the probability is listed as negative (to be able
> to distinguish between significant positive and negative effects by color).
> i'd like to create a plot which is a 10x60 grid of rectangles, where each
> rectangle is colored according to its probability of being nonzero
> (preferably white would correspond to a zero probability). i've looked into
> levelplot, heatmap, and image, and cant seem to get exactly what im looking
> for.
> heatmap gives me problems in that the output is inconsistent with the data
> - among other things, the first and last rows do not seem to show up (they
> are just white, despite clearly nonzero probabilities). even if i do not
> use the dendrogram (Rowv and Colv set to NA), i still seem to have an issue
> with a probability in a given row not corresponding to the same color as the
> same probability in a different row.
> levelplot seems to do exactly what i want it to do, except that i cant find
> a way to label the individual columns and rows, which I really need
The matrix method for levelplot uses rownames and column names to
label columns and rows; e.g.,
x = matrix(1:12, 3, 4)
rownames(x) = letters[1:3]
colnames(x) = LETTERS[1:4]
We need more details to figure out why that doesn't work for you.
More information about the R-help mailing list
|
{"url":"https://stat.ethz.ch/pipermail/r-help/2008-September/174947.html","timestamp":"2014-04-17T12:30:54Z","content_type":null,"content_length":"4908","record_id":"<urn:uuid:40f6fefb-0130-47cd-a004-9cf624712114>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] sorting -inf, nan, inf
Charles R Harris charlesr.harris at gmail.com
Tue Sep 19 19:55:17 CDT 2006
On 9/19/06, A. M. Archibald <peridot.faceted at gmail.com> wrote:
> On 19/09/06, Charles R Harris <charlesr.harris at gmail.com> wrote:
> >
> >
> >
> > For floats we could use something like:
> >
> > lessthan(a,b) := a < b || (a == nan && b != nan)
> >
> > Which would put all the nans at one end and might not add too much
> overhead.
> You could put an any(isnan()) out front and run this slower version
> only if there are any NaNs (also, you can't use == for NaNs, you have
> to use C isNaN). But I'm starting to see the wisdom in simply throwing
> an exception, since sorting is not well-defined with NaNs.
Looks like mergesort can be modified to sort around the NaNs without too
much trouble if there is a good isnan function available: just cause the
pointers to skip over them. I see that most of the isnan stuff seems to be
in the ufunc source and isn't terribly simple. Could be broken out into a
separate include, I suppose.
I still wonder if it is worth the trouble. As to raising an exception, I
seem to recall reading somewhere that exception code tends to be expensive,
I haven't done any benchmarks myself.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://projects.scipy.org/pipermail/numpy-discussion/attachments/20060919/a9ae46bf/attachment.html
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-September/010837.html","timestamp":"2014-04-18T23:45:42Z","content_type":null,"content_length":"4345","record_id":"<urn:uuid:7a9827b8-9b68-40ff-97fa-ceca20ce1dc9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coconut Grove, FL Algebra 2 Tutor
Find a Coconut Grove, FL Algebra 2 Tutor
I studied Biology with a minor in mathematics at Purdue University. Currently I am an adjunct instructor at a college. Most recently, I taught Algebra and Geometry at the high school level.
18 Subjects: including algebra 2, chemistry, calculus, geometry
I am a senior in college majoring in Biology with minors in Mathematics and Exercise Physiology. In the past I have tutored students ranging from elementary school to college in a variety of
topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others ...
30 Subjects: including algebra 2, reading, biology, algebra 1
...I have also translated a couple of books and manuals into Spanish. I earned a master's degree in Education after an undergraduate degree in Biochemistry. As for my teaching philosophy, I do my
best to to make the subjects fun, applicable and interesting!
16 Subjects: including algebra 2, chemistry, Spanish, geometry
...With me as an assistant, the student will be able to practice this, until it becomes second nature. The student will be able to establish similarity, familiarity, opposition and differences
between words and phrases, and know what to use when. My tutoring experience, familiarity with difficulti...
20 Subjects: including algebra 2, English, reading, ESL/ESOL
...I have worked as a private tutor for 5 years in a variety of subjects. I am very patient and believe in teaching by example. My general teaching strategy is the following: I generally cover
the topic, then explain in detail, make the student do some problems or write depending on the subject, and finally I make them explain and teach the topic back to me.
30 Subjects: including algebra 2, chemistry, English, geometry
Related Coconut Grove, FL Tutors
Coconut Grove, FL Accounting Tutors
Coconut Grove, FL ACT Tutors
Coconut Grove, FL Algebra Tutors
Coconut Grove, FL Algebra 2 Tutors
Coconut Grove, FL Calculus Tutors
Coconut Grove, FL Geometry Tutors
Coconut Grove, FL Math Tutors
Coconut Grove, FL Prealgebra Tutors
Coconut Grove, FL Precalculus Tutors
Coconut Grove, FL SAT Tutors
Coconut Grove, FL SAT Math Tutors
Coconut Grove, FL Science Tutors
Coconut Grove, FL Statistics Tutors
Coconut Grove, FL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Coral Gables, FL algebra 2 Tutors
Crossings, FL algebra 2 Tutors
Gables By The Sea, FL algebra 2 Tutors
Kendall, FL algebra 2 Tutors
Maimi, OK algebra 2 Tutors
Miami algebra 2 Tutors
Olympia Heights, FL algebra 2 Tutors
Perrine, FL algebra 2 Tutors
Richmond Heights, FL algebra 2 Tutors
Seybold, FL algebra 2 Tutors
Snapper Creek, FL algebra 2 Tutors
South Miami, FL algebra 2 Tutors
Village Of Palmetto Bay, FL algebra 2 Tutors
West Miami, FL algebra 2 Tutors
Westchester, FL algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Coconut_Grove_FL_Algebra_2_tutors.php","timestamp":"2014-04-17T10:54:41Z","content_type":null,"content_length":"24390","record_id":"<urn:uuid:92ff47c4-e73f-45dc-b759-21dad45a2041>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basics: Proof by Contradiction
I haven’t written a basics post in a while, because for the most part, that well has run dry, but once
in a while, one still pops up. I got an email recently asking about proofs by contradiction and
counterexamples, and I thought that would be a great subject for a post. The email was really
someone trying to get me to do their homework for them, which I’m not going to do – but I can
explain the ideas, and the relationships and differences between them.
Proof by contradiction, also known as “reductio ad absurdum”, is one of the most beautiful proof
techniques in math. In my experience, among proofs of difficult theorems, proofs by contradiction are the
most easy to understand. The basic idea of them is very simple. Want to prove that something is true? Look
at what would happen if it were false. If you get a nonsensical, contradictory result from assuming its
false, then it must be true.
Let’s be a bit more precise. The principle of proof by contradiction comes from the logical law of the
excluded middle, which says “for any statement S, (S or not S) is true” – that is, S must be either true or false. From that, we can infer that if S in true, not S must be false; if not S is true,
then S must be false. There is no third option. So if we can prove that (not S) is false, then we know that S must be true. The way that we can prove that (not S) is false is by assuming that it’s
true, and showing that
that leads to a contradictory result.
The alterative form (which is really the same thing, but can look different when it’s presented by
mediocre teachers) is proving that something is false. Just switch S and not S in the above discussion. Proving that S is false is just another way of saying that we want to prove that (not S) is
true. In a proof by contradiction, we can do that by assuming that S is true, and showing that that
leads to a contradictory result.
As always, things are best with an example. Since I’m a computer scientist, I’ll pull out
my favorite proof by contradiction: the proof of the Halting theorem.
Suppose you have a computer, which we’ll call φ. Every program for φ can be
described by as a number. Similarly, every possibly input for a program can be represented
as a number. The result of running a particular program on φ can be described as a function
φ(p,i) where p is the program, and i is the input.
One thing about programs is that they can contain infinite loops: there are some programs
which for some inputs will run forever without producing any results. One thing that we would
really like to know is for a program p with input i, will φ(p,i) ever return a result? If
it does, we say that program p halts with input i. The big question is, can we
write a program h such that takes any pair of p and i as inputs, and tells us whether p halts with input i?
The answer is no. The way that we prove that is by a classic proof by contradiction. So we
start by assuming that it’s true:
1. Suppose that we do have a program h such that φ(h,(p,i))=true if φ(p,i)
halts, and false otherwise.
2. We can write a program q, where φ(q,i) runs φ(h,(q,i)) as a subroutine. If
φ(h,(q,i)) returns true, then q enters an endless loop. Otherwise, q halts.
3. For program q, if φ(h,(q,i)) says that q halts, then q doesn’t halt. If φ(h,(q,i))
says that q doesn’t halt, then q halts. Therefore h isn’t a program which correctly says
whether another program will halt. This contradicts the assumption in step one, so that
assumption must be false.
For another example, one of the classic logic errors is the misuse of implication. If you have a logical statement
that for all possible values X, if A is true for X, then B must also true for that X, and you know that A is true for some specific thing Y, then you can infer that B must be true for Y. There’s a
common error where you get that backwards: for all X, if A is true for X, then B must be true for X, and you know B is true for some specific Y, then inferring A is true for Y. That is not valid
inference – it’s false.
We can prove that that kind of inference is invalid. The way we’ll do it is by assuming its true,
and then reasoning from it.
1. Assume that it is true that “for all values X, if A is true for X, then B must be true for X”, and
“B is true for Y”, then “A is true for Y”.
2. Take the classic example statement of this type: “If X is a man, then X is mortal”,
and we’ll use it as an instantiation of the rule above: If we know that “If X is a man, then X is mortal, and we know that X is a mortal, then X is a man.”
3. The pet dog that I grew up died about 15 years ago. Since he died, he must have been mortal.
4. By the statement we just derived, we can conclude that my pet dog was a man.
5. But my pet dog was not a man, he was a dog. So we have a contradiction, and
that means that the statement cannot be true.
Presto, one proof by contradiction.
Often, as in the example above, when you do proof by contradiction, what you do is find a specific
example which leads to a contradiction. If you can do that, that example is called a
counter-example for the disproven statement. Not all proofs by contradiction use specific
counter-examples. A proof by contradiction can be done using a specific counterexample for which the
statement is false. But it can also be done in a more general way by using general principles to show that
there is a contradiction. Stylistically, it’s usually considered more elegant in a proof by
contradiction to show a way o constructing a specific counterexample. In the two example proofs
I showed above, both proofs used counterexamples. The first proof used a constructed counter-example: it didn’t show a specific counter-example, but it showed how to construct
a counterexample. The second proof used a specific counter-example. Most proofs
by contradiction rely on a constructed counter-example, but sometimes you can simply show by
pure logic that the assumption leads to a contradiction.
An important thing to be careful for in proofs by contradiction is that you are actually obtaining
true contradictions. Many creationist “proofs” about evolution, age of the universe, radiological
dating, and many other things are structured as proofs by contradiction; but the conclusion is merely
something that seems unlikely, without actually being a logical contradiction. A very common
example of this is the big-numbers argument, in which the creationist says “Suppose that life were the
product of natural processes. Then the following unlikely events would have had to occur. The probability
of that combination of events is incredibly small, therefore it couldn’t have happened.” That’s not
a proof of any kind. There is no logical contradiction between “X has a probability of 1 in 10^10^10 of occuring”, “X occured”. Improbable never equals impossible, no matter
how small the probability – and so it can’t create a logical contradiction, which is required for
a proof by contradiction.
As an interesting concluding side-note, there are schools of mathematical thought that do not
fully accept proof by contradiction. For example, intuitionism doesn’t accept the law
of the excluded middle, which limits proof by contradiction. Intuitionism also, by its
nature, requires that proofs of existence show how to construct an example. So a proof by
contradiction that proves that something exists, by showing that assuming its non-existence
leads to a logical contradiction, are not considered valid existence proofs in intuitionistic
1. #1 Thony C. November 14, 2007
My two favourite proofs of all time are both fairly basic and are both reductio ad absurdum, the first is the proof that the square root of two is not rational and the second is Euclid’s proof
that there is no greatest prime. I find both proofs elegant, easy to follow and totally convincing and both have the beauty of a good poem or piece of music to my mind. A close run third, in my
book, is Cantor’s proof of the non countability of the reals a reductio ad absurdum of true simpicity and elegance.
2. #2 Elad November 14, 2007
Great post! I can see two pitfalls, though:
1. In the Halting problem, you’re representing programs and inputs by integers. This is trivial of course, but confusing to the uninitiated. I can definitely see someone voicing a complaint like:
“but you’re marking a crazy assumption — that programs are numbers. Show me which programmer writes their programs by numbers”. It all follows from a trivial one-to-one equivalence, of course,
but I would mention that in some way
2. Your objection to creationism sounds so tiny and irrelevant that you might come off as supporting creationism. I would make it clear that the problem with the “tiny probability” argument is
not only that it’s not a proof, but a much more major problem: Example of that problem: if there is a 10^(-9) chance of winning the lottery, then there’s still a person that wins it, and that
person can might as well say “what, there is a tiny tiny probability to win the lottery, so I obviously didn’t win”. However, someone _does_ win the lottery.
3. #3 Blaise Pascal November 14, 2007
With regard to computed counterexample/specific counterexample/no counterexample, how do you categorize proofs by infinite descent?
Example: Proof that 2 has no rational square root:
Assume 2 does have a rational square root m/n, with m, n being natural numbers. We have 2 = (m/n)^2 = m^2/n^2, and therefore 2n^2 = m^2. Because only even numbers have even squares, m must be
even: m = 2m’. Therefore 2n^2 = (2m’)^2 = 4m’^2, or n^2 = 2m’^2. Similarly, n must be even, so n=2n’, yielding (2n’)^2 = 2m’^2, or 2 = m’/n’, with m>m’ and n>n’. No assumptions were made about m
or n, so it is possible to repeat this procedure indefinitely, yielding two infinite decreasing sequences if natural numbers m>m’>m”>m”’… and n>n’>n”>n”’… But natural numbers can’t decrease
indefinitely, so these sequences are an absurdity. Therefore, our assumption that 2= (m/n)^2 exists must be false. Therefore 2 cannot have a rational square root.
4. #4 Elad November 14, 2007
Answer to Blaise Pascal: I would characterize them as proof by contradiction, where the proof uses induction. (or the other way around maybe? It’s late here).
5. #5 Sander November 14, 2007
@Blaise Pascal:
I think that’s just a normal proof by contradiction. The assumption leads to the existence of an infinite decreasing sequence, which is just a normal contradiction. X->(there exists an infinite
decreasing sequence of naturals) is just as good as X->(my pet dog was a man) or X->false.
Of course you can also start with “Assume 2 does have a rational square root m/n with gcd(m,n)=1″ and avoid this altogether.
6. #6 Anonymous November 14, 2007
MarkCC wrote:
For example, intuitionism doesn’t accept the law of the excluded middle, which limits proof by contradiction.
Intuitionism only rejects tertium non datur for situations invoving actual infinities so both of the Euclidean proofs by contradiction that I mentioned above are accepted by Intuitionists but the
Cantor proof is not.
7. #7 Anonymous November 14, 2007
We can write a program q, where φ(q,i) runs φ(h,(q,i)) as a subroutine.
Note that q is defined in terms of itself, in the sense that it must have access to its own source code (to plug into h). The recursion theorem says this can be done, and it’s not hard when you
know the proof of the recursion theorem. However, if you hand someone a C implementation of a halting tester h and ask them to write code for q, they’ll find it tricky if they haven’t seen this
At the cost of a little extra convolution, you can get around this issue by defining q differently. Define q so that φ(q,i) runs φ(h,(i,i)) as a subroutine and then does the opposite of whatever
it outputs. (If φ(h,(i,i)) says φ(i,i) halts, then φ(q,i) will go into an infinite loop. If it says φ(i,i) doesn’t halt, then φ(q,i) immediately halts.) This definition does not define q in terms
of itself, so we don’t need the recursion theorem.
Now plugging in i=q shows that φ(q,q) halts iff φ(h,(q,q)) says φ(q,q) doesn’t halt, so h is wrong for input (q,q).
8. #8 Craig Helfgott November 14, 2007
Blaise: I recently saw a great (new!!) proof that sqrt(2) is irrational. Also proof-by-contradiction-by-infinite-descent.
Assume sqrt(2) is rational. Then you can draw a 45-45-90 triangle ABC with integer sides. (Let C be the right angle).
Draw the following picture (you don’t need to label points).
Draw a circular arc, centered at A, with radius AC. It intersects the hypotenuse at D. Draw the tangent to the circle at D, it intersects BC at E.
AD=AC is an integer, AB is an integer, so BD=DE=EC is an integer. BC is an integer, so BE is an integer, and BDE is a (smaller) 45-45-90 triangle with integer sides (D is the right angle).
Repeat ad infinitum, and you get an infinite decreasing sequence of integral triangles. Pretty picture, too.
9. #9 Thony C. November 14, 2007
I don’t know what happened here but the “anonymous” post on intuitionism is mine but as I tried to post it the system said I couldn’t! Now I come back and the post is posted as anonymous!
10. #10 Doug November 14, 2007
I think Elad had a good point there about “cretinism”. One might show this better by using Zadeh’s idea of constructing probability vs. possibility distributions, with possibility indicating
‘degree of ease’. For instance, if we want to talk about the probability/possibility of eating a certain number of oranges tommorow, and rounding to the first place after the decimal, we would
have something like
Prob. .5 .4 .1 0 0 0 0 0 0 0
Poss. 1 1 1 1 1 .8 .6 .4 .3 .1
Or one can talk about eating a certain number of eggs for breakfast, the number of people that can fit in a car, the number of keystrokes someone will type in a day, or the probability vs.
possibility of winning a lottery etc.
Concerning intuitionism/property of the excluded middle… there still exist mathematical schools of thought which don’t accept proof by contradiction, or possible schools of mathematical thought
at least. For instance, a logic with truth values of {T, U, F}, where any combination of U (undecided) and a member of {T, U, F} yields a U for the five basic logic operations (and, or, not,
implication, logical equivalence), doesn’t have the property of the excluded middle, nor that of contradiction as a logical theorem. Of course, in such a school if a particular proposition has a
truth value in {T, F}, those properties will still hold. But, first one has to establish the propositon as in {T, F} as opposed to {T, U, F}, which doesn’t get assumed in such a (possible) school
of thought.
11. #11 Torbjörn Larsson, OM November 14, 2007
Many creationist “proofs” about evolution, age of the universe, radiological dating, and many other things are structured as proofs by contradiction; but the conclusion is merely something
that seems unlikely, without actually being a logical contradiction.
More generally it is their modus operandi, as they can’t make positive assertions or their scam would be given up. This reminds me of a relation to the structure as proof by contradiction in
their false dilemmas, such as that if evolution is wrong when creationism is correct.
(An added irony is that the most realistic alternative, lamarkianism, that could explain many if not all observations outside genetics and supplement evolution proper, even have a possible
mechanism in epigenetics. The only thing is that it hasn’t been observed yet.)
12. #12 ruidh November 14, 2007
I use indirect proof in solving sudoku. I avoid the “easy” and “medium” puzzles and go straight for the “hard”, “challenging” and “brain twister” puzzles. Occasionally, I get stuck with about 2/
3rds of the puzzle filled in. I find a cell that has only two possibilities and write one number in the upper left and the other in the lower right. Then I continue working both the upper left
and lower right until I hit a dead end or find a contradiction. If I find a contradiction, I know that the lower right of my starting cell is a correct entry.
Sometimes, I even find a cell that has the same entry for both the upper left and lower right. I know that’s a correct entry because A->B and ~A->B means B.
13. #13 Doug November 14, 2007
[An added irony is that the most realistic alternative...]
That can make things interesting. One could almost pit epigenetics vs. genetics as casuative in say a high school biology classroom, and we would have something very close/close enough to a real
scientific controversy, or at least one that can become scientific depending on our scientific evidence. Or one could reasonably pit “nature vs. nuture” in biology and there exists something
controversial among biologists. Think of Dawkins vs. Gould on this. I suppose that might qualify as more philosophy of biology than biology, but it comes close enough and it definitely affects
how people do biology AND biologists DO have different takes on the issue. Maybe teaching such a controversy comes as a pedagogical poor idea depending on one’s educational philosophy, but at
least “nature vs. nuture” or “epigenetics vs. genetics” presents something with real biological or, in the case of epigenetics vs. genetics, scientific content to it, as opposed to the lying
tactics of “cretinists” which go so far that some of them committed fraud not just on a community, but a court of law. If those people really wanted alternatives and wanted extremely loose
defintions of science, why didn’t they want to teach Lamarckism too?
14. #14 Torbjörn Larsson, OM November 14, 2007
Correction of my previous comment: “lamarkianism” – lamarckism.
One could almost pit epigenetics vs. genetics as casuative in say a high school biology classroom,
I think they are, especially as AFAIU it is hard to test evolutionary mechanisms. (The lamarckist mechanisms probably awaits positive research results.)
I suppose that might qualify as more philosophy of biology than biology,
Or possibly it is unduly promoted as such, because I don’t see why you couldn’t call them different research strategies. I.e. for visible traits you have to choose a null hypothesis, and
adaptation seems like a sound choice. Pluralists don’t like that, and perhaps they use drift as a null hypothesis.
Similar circumstances applies for genetic vs epigenetic regulation et cetera.
In the end it shouldn’t matter, as they should test their ideas. (Or they are just-so stories, an earlier complaint when adaptations were routinely used for explanations.)
15. #15 Anonymous November 15, 2007
Elad: “if there is a 10^(-9) chance of winning the lottery, then there’s still a person that wins it, and that person can might as well say “what, there is a tiny tiny probability to win the
lottery, so I obviously didn’t win”. However, someone _does_ win the lottery.”
There’s another problem with most such arguments that ends up being even more significant. The point that someone wins the lottery becomes somewhat weaker when the probabilities claimed are
something like 10^-140^ (which is closer to the order that I tend to see claimed by people making these arguments).
Every argument I’ve seen that yielded such numbers misapplied probability theory. E.g. they’ll claim that the probability of forming a self-replicating system by chance is low because the
probability of getting a chain of a given number of amino acids selected at random to form a given sequence is on such an absurdly small order (I’ve seen this argument presented seriously many
times). Besides being a caricature of abiogenesis, this begs the question by assuming that there’s only one possible state that self-replicates.
The fact is that it’s not quite so trivial to determine what fraction of the space of possibilities of some complicated chemical system replicate, but just assuming it to be one is the most
egregious deck-stacking I tend to see in such arguments.
16. #16 John S. Wilkins November 15, 2007
Out of interest, what does supervaluative logic and dialethism do to this proof?
17. #17 Jason November 15, 2007
Dialethism or, more properly when you’re talking about the logic, paraconsistency, fucks it up good and proper. (As John knows, in a paraconsistent logic, some things can be both true and false.)
The details depend on which paraconsistent logic you use, which is convenient for me because I can’t remember the details for any of them. However, it’s always going to be complicated, because in
any paraconsistent system, by definition, it’s only SOME statements which can be both true and false, not all of them, and the logic itself doesn’t tell you which ones. See http://
plato.stanford.edu/entries/dialetheism/ and http://plato.stanford.edu/entries/logic-paraconsistent/.
Don’t know about supervaluationism. Good question.
Thony, thanks for the point about intuitionism. I didn’t know that, or had forgotten.
18. #18 Torbjörn Larsson, OM November 15, 2007
Um, dialetheism, perhaps?
So, with a true contradiction we know we can prove anything within ordinary math. But I see that paraconsistent logic is an out, and that it may not necessarily drop reductio.
The approach taken in large software systems is keep the rules of weakening and double negation elimination and to restrict the rule of reductio ad absurdum (Hewitt [2007]).
Intriguing. Yes, do tell, how does restriction or other alternatives work regarding such a proof?
but just assuming it to be one is the most egregious deck-stacking I tend to see in such arguments.
Besides that, the natural assumption is that since the likelihood for life is ~ 1 (seeing that we exist) we are more or less certain that there is some mechanism that give a reasonable
probability for life in the universe. I’m reminded of the cart and the horse.
19. #19 Mark C. Chu-Carroll November 15, 2007
You’re absolutely right, and that’s something that I’ve discussed extensively in other posts. I’ve got my own
taxonomy of statistical errors that people use, and what you’re talking about is what I call the “fake numbers” gambit. It’s where someone wants to make an argument that something must be
impossible without some kind of deliberate intervention. It’s clearly possible, because it happened, but they want to argue that it couldn’t have happened without God, or some intelligent agent.
So they can’t really argue that it’s impossible. So they resort to arguing that it’s improbable, and coming up with some arbitrary probability number beyond which something is impossible. (Like
Dembski’s “Universal probability bound”). Then they start slapping together a bunch of numbers to try to create a probability that’s beyond their bound.
In every case that I’ve seen where someone tries to make an argument of that form, the numbers used to generate the probability are fake.
Search the GM/BM archives for “Berlinski” for one of the most egregious examples of this from an arrogant SOB who knows better, but makes the dishonest argument anyway.
20. #20 Thony C. November 15, 2007
Out of interest, what does supervaluative logic and dialethism do to this proof?
As reductio ad absurdum is dependent on tertium non datur any system of logic that strays from the strict dichotomy of a two valued logic would lose reductio as a method of proof.
21. #21 Torbjörn Larsson, OM November 15, 2007
any system of logic that strays from the strict dichotomy of a two valued logic would lose reductio as a method of proof.
But “restriction” doesn’t seem to mean “elimination”. (See the quote in my comment #17.) However, when I looked at the reference on Hewitt, it seems to concur:
Direct Logic supports the following nontriviality principles for paraconsistent
Direct Nontriviality [...] which states that if the negation of a sentence holds, then it cannot be proved
Meta Nontriviality [...] which states that if the negation of sentence can be proved, then it cannot be proved.
So the “restriction” seems to be that this elimination applies to “non-trivial” theories. What a wanker Wikipedia is at times!
22. #22 Jason November 15, 2007
> Um, dialetheism, perhaps?
It can be spelled either way. Apparently Routley/Sylvan spelled it one way and Priest the other. Wikipedia’s official policy is to keep whichever spelling turns up in a given Wikipedia article
first, unless there’s a good argument to the contrary. I hope to avoid flame wars by pointing this out!
23. #23 Doug November 15, 2007
Thony C, Torbjörn Larsson, and anyone else who thought the following correct, or might have an inclination to read on.
[As reductio ad absurdum is dependent on tertium non datur any system of logic that strays from the strict dichotomy of a two valued logic would lose reductio as a method of proof.]
NO! I said this basically in another comment on another post. For a reduction ad absurdum one needs the property of non-contradiction, and in some instances the property of the excluded middle.
Proposition 1: For a bounded infinite-valued logic on [0, 1] with 1-a for negation which has max(0, a+b-1) for intersection i, min(1, a+b) for union u, i(a, c(a)=0 and
u(a, c(a))=1.
Demonstration: b=c(a) by definition. Consequently,
i(a, c(a))=max(0, a+c(a)-1)=max(0, a+1-a-1)=max(0, 0)=0
u(a, c(a))=min(1, a+c(a))=min(1, a+1-a)=min(1, 1)=1.
Also, an infinite-valued logic on [0, 1] with 1-a as negation, and with drastic union and intersection still maintains the properties of excludedu middle and contradcition. By drastic union and
intersection, I mean
u(a, b)=a if b=0
b if a=0
1 otherwise
i(a, b)=a if b=1
b if a=1
0 otherwise.
Proposition 2: For an infinite-valued logic on [0, 1] with drastic union, intersection and the 1-a operator for complement, i(a, c(a))=0, u(a, c(a))=1… the properties of contradiction and
excluded middle.
Demonstration: For the property of contradcition
Let a=0, then u(a, c(a))=u(0, 1)=1
Let a>0, then u(a, c(a))=1, since neither truth value equalling 0 directly implies u(a, b)=1.
So, u(a, c(a))=1
For the property of excluded middle
Let a=1, then i(a, c(a))=i(1, 0)=0
Let a<1, then i(a, c(a))=0, since neither truth value equalling 1 directly implies i(a, b)=0.
So, i(a, c(a))=0.
Of course, from the infinite-valued cases, a three-valued logic on {0, 1/2, 1} with the similar rules above still can use reduction ad absurdum. So, can a five-valued logic on {0, 1/4, 1/2, 3/4,
1}, or a four-valued logic on {0, 1/3, 2/3, 1}, or any “symmetrical” n-valued logic (the symmetry implies that closure gets ensured for negation.)
24. #24 Doug November 15, 2007
To perhaps buttress my argument here for someone who claims I haven’t shown anything about the contradiction and excluded middle properties, suppose we changed our truth values from the usual 1=
T, 0=F, to 3=T, 5=F in a logic with {T, F} as its truth set. The properties now state
i(a, c(a))=5
u(a, c(a))=3
Now, suppose we have an infinite-valued logic on [3, 5]. One can have such a logic with the following operators
i(a, b)=a if b=3
b if a=3
5 otherwise
u(a, b)=a if b=5
b if a=5
3 otherwise
c(a)=5 if a=3
3 if a=5
4 otherwise.
For i(a, c(a) we’ll first
let a=3, then i(a, c(a))=i(3, 5)=5 or false. Then,
let 3 let a=5, then i(a, c(a))=i(5, 3)=5 or false.
For u(a, c(a)), we’ll first
let a=3, then u(a, c(a))=u(3, 5)=3 or true. Then,
let 3 let a=5, then u(a, c(a))=u(5, 3)=3 or true.
Please note that such a logic still works out as infinite-valued because it can have an infinity of values for its truth inputs, even thought the operators of negation, intersection, and union
land in only three values (I merely presented a simplified example, as I don’t have operators with more continuous-valued outputs at hand).
One can even do this in letters. Suppose we have a five-valued logic with on {T, H, U, L, F} (where H stands for high truthe value, U stands for undecided truth value, L for low truth value). One
then can have operators like
c(T)=F, c(H)=L, c(U)=U, c(L)=H, c(F)=T.
i(a, b)=a if b=T
b if a=T
F otherwise.
u(a, b)=a if b=F
b if a=F
T otherwise.
For the property of contradiction we have the followng cases
a=T, implies i(T, F)=F
a=H, implies i(H, L)=F
a=U, implies i(U, U)=F
a=L, implies i(L, H)=F
a=F, implies i(F, T)=F
So, the property of contradiction holds.
For the property of the excluded middle we have
a=T, implies u(T, F)=T
a=H, implies u(H, L)=T
a=U, implies u(U, U)=T
a=L, implies u(L, H)=T
a=F, implies u(F, T)=T.
So, the property of excluded middle and contradiction holds *for such a logic and SOME other multi or infinite-valued logics*.
25. #25 Thony C. November 16, 2007
Doug; all you have shown is that if, by a series of definitions, you restrict your infinite valued logic to a sub-domain of two values then it behaves like a two valued logic!
26. #26 Doug November 16, 2007
Thony C,
[Doug; all you have shown is that if, by a series of definitions, you restrict your infinite valued logic to a sub-domain of two values then it behaves like a two valued logic!]
If I could go back in time I would have suggested you check all logical properties of clasical logic in the referenced logics before you attempt to make such an assertion. More on point, I don’t
know why you’ve tried to pass that off, honestly, after I’ve given proofs (not just linguistic arguments as in other cases) which indicate otherwise. Those proofs show that “the logical
conjunction of A and not A yields a truth value of false (in the classical sense of false)” and “the logical union of A and not A yields a truth value of true (in the classical sense of true),”
for the logical systems involved, when they get translated into words. I suspect that even a semi-objective observer can see this and you’ve discredited yourself somewhat by your statement here,
as I showed MORE than your statement about a sub-domain. I think (although maybe it wasn’t you) I’ve also showed to you specifically that such logics do NOT behave like two-valued logic in
another significant sense, as they don’t have ALL the same theorems as two-valued logic. I say all of this, as a sort of warning, in the hope you’ll see how you might make yourself look foolish
here, if you didn’t already have an inclination that this might happen.
Look, the domain of one of the proposed infinite-valued logics comes as [0, 1], as the inputs for the functions intersection, union, and complement get defined on [0, 1]. Rather clearly, if I
restrict such a domain to {0, 1/4, 1/2, 3/4, 1}, I have a subdomain of [0, 1]. This doesn’t behave like two-valued logic in many ways. First, the input values can come as something other than a
member of {0, 1}. Consequently, the notion of truth works out differently. The domain of the truth set works out differently. The methods used in calculation work out differently, as the rules of
classical logic state the for the truth set {T, F}
i(a, b)=T if a=T and b=T
F otherwise
u(a, b)=F it a=F and b=F
T otherwise
c(T)=F, c(F)=T.
I didn’t restrict my infinite-valued logic to a sub-domain of two values, as for the max(0, a+b-1),
min(1, a+b) values, if I let a=1/4 and thus c(a)=3/4 with c(a)=1-a, I get
max(0, 1/4+3/4-1)=max(0, 0)=0
min(1, 1/4+3/4)=min(1, 1).
Hopefully the aforementioned points, and especially the last part would tip you off that something different behavior goes on. Perhaps, however, this comes to no avail. Fine. In classical logic,
there exist two basic properties:
idempotency, and distributivity, or in symbols
i(a, a)=a, u(a, a)=a
i(a, u(b, c))=u(i(a, b), i(a, c))
u(a, i(b, c))=i(u(a, b), i(a, c))
Suppose we use drastic union and intersection for these operations. Then it follows that,
i(.5, .5)=0 u(.5, .5)=1, or more generally let a belong to (0, 1). Then, i(a, a)=0 which does NOT equal a, and u(a, a)=1 which also does NOT equal a. So, idempotency fails for a logic with
drastic union and interesection.
i(.4, u(.3, .2))=i(.4, 1)=.4,
i(u(.4, .3), u(.4, .2))=i(1, 1)=1. So, distributivity fails for drastic union and intersection.
If you go and check for yourself you can see that distributivity and idempotency also do not hold as theorems for the max(0, a+b-1), min(1, a+b) operations. One can go further, as Klir and Yuan
do in their text _Fuzzy Sets and Fuzzy Logic: Theory and applications_, on p. 87 and 88, and prove that for a logic which uses dual t-norms, and t-conorms, meaning that
c(i(a, b))=u(c(a), c(b)) and c(u(a, b))=i(c(a), c(b)) AND satisified the property of the excluded middle as well as the property of contradiction, then both distributive properties do NOT hold as
Again, I didn’t restrict an infinite-valued logic to a sub-domain of two valued. And I didn’t show that a logic with the properties of the excluded middle and contradiction behaves like a
two-valued logic.
27. #27 MF November 17, 2007
I find your post here interesting, and no doubt you showed more than Thony C. thought you did, but you’re basically wrong about the principles of excluded middle and contradiction, as well as
about reductio ad absurdum. The principles of excluded middle and contradiction better stated say “The proposition ‘a is True or not a is True’ is True”, and “The proposition ‘a is True and not a
is True” is False.” The principles simply aren’t purely syntatical statements. They don’t say i(a, c(a))=T, u(a, c(a))=F. Of course, if one translates them into symbolic logic, one can write them
that way… but something of meaning gets lost in such a translation. Yes, your “properties” get written symbolically the same way as the classical laws of logic and in a formalistic sense BEHAVE
like them, but they simply don’t capture the essence of those statements. The symbolic translation, loses that “is True” bit after a, and not a. For reductio ad absurdum proofs one doesn’t just
assume the symbolic a^c(a)=F or however you want to write it. The reductio ad absurdum assumes the meaning in the statement “a is True, or a is not True.”
Now, what’s more, I haven’t really and completely stated the classical laws of logic. They don’t just say what they say, they implicitly assume only True and False propositions as possible.
Consequently, an attempt to write the classical laws of logic in non-classical logic fails. It furthermore fails, in that the terms “True” and “not True” necessarily correspond to the extrema of
your fuzzy truth sets. Consequently, it is impossible can’t write those classical laws in any n/infinite valued logic. Sure, you can write your properties, and claim the “behavior” works
sufficiently similar, but it is just not enough. You don’t have the essence of classical logic without confining truth values to True and False only.
I do want to say that you were on to something when you renamed the law of contradiction and excluded middle properties, as you talk about something different. This extends farther than those
laws. There is no law of idempotency in the standard fuzzy logic, but in classical logic there is such for intersection. I mean to say, that in classical logic we have
The proposition “A is True and A is True” is True. In a fuzzy logic with the minimum function it is
i(A, A)=A.
In other words, you can and probably should replace the identity predicate with the equality predicate, and consequently re-name all those principles, or laws, properties. When they are true, of
28. #28 Xanthir, FCD November 17, 2007
I really don’t feel like getting into these arguments, because I quickly lose track of just what the hell we’re supposed to be talking about, but MF, I believe you’re wrong.
(a || ~a) = T is the most essential and true form of that statement. It’s more true than the english statement, because it’s well-defined. It seems like it is perfectly captured by what Doug is
saying, as well.
Logic doesn’t care about meaning. It’s a game of symbols which have no intrinsic meaning. This is what makes it so powerful. We assign meaning to the symbols afterwards. As long as the behavior
is the same, nobody cares what meaning you assign to two things – they are the same.
29. #29 Doug November 17, 2007
Can you explain the (a || ~a) = T statement… I don’t get the ‘||’. I’ve seen it used in set theory to mean “a is not comparable with b”… do you mean that?
I can translate your statements into my notational style like this:
i(a=T, c(a=T)=F)=F
u(a=T, c(a=T)=F)=T.
Consequently, it seems that even though your phrasing looks more precise, it loses some meaning, as I can translate your versions as saying i(T, F)=F, u(T, F)=T. If you mean that by the
principles of classical logic, well I can basically claim that fuzzy logic has those as axioms. To say that classical logic assumes only true and false propositions as possible and POC and POEM
must likewise assume such begs the question.
You do have a point about reductio ad absurdum assuming something different than the properties of classical logic. It doesn’t depend so much on tertium non datur, as I showed logical systems
where the property of excluded middle holds, but I failed to show how reductio ad absurdum works in such logical systems. Even if tertium non datur means ‘a third-term not given’ this still
doesn’t imply that one can’t use a reductio ad absurdum argument. Someone on my site made a comment which suggested to me the following idea.
Suppose we have a notion of ‘absurdity’ or ‘internal contradiction’ within our given sytem. Suppose, we have a three-valued system. Suppose we also know that either A or the negation of A or the
other truth value for A holds. We also have the rule that if the assumption of a statement leads to an absurdity, then the reasoning works out as invalid and consequently our originally statement
works out as invalid. Well, we can then use reductio ad absurdum in the following way. Assume A holds, then deduce an absurdity. So, A doesn’t hold. Assume the negation of A holds. Deduce an
absurdity. So, the negation of A doesn’t hold. Consequenlty, the other truth value for A holds. In other words, I propose that for a n-valued logic, reductio ad absurdum still can work… we just
have to eliminate n-1 possibilities by reducing them each to absurdity. For an infinite-valued logic, we have to reduce all other possible cases than the one we seek to prove to absurdity. If
this sort of persepctive sufficeintly works, one might just substitute “process of elimination” for “reductio ad absurdum”.
30. #30 Doug November 17, 2007
Maybe this qualifies as an example of my idea, maybe not. This sort of reductio/process of elimination seem almost too simple.
Suppose that a member of {3, 5, 7} solves the equation 3x=15.
Well, if we assume 3 works, then we have 9=15, a contradiction. Assume 7 works. Then we have 21=15, another contradiction. Consequently, given our assumpiton as true, 5 sovles the equation.
Perhaps better, suppose that a member of (-oo, +oo) solves the equation 4x=29. Well, for all cases where x<7.5, then we have a contradiction. For all cases where x>7.5, then we also have a
contradiction. Consequently, x=7.5
Although, maybe you consider such examples “poor” since we already know the answer and demonstrate such by simple substitution.
31. #31 Doug November 17, 2007
Maybe a better example of a reductio argument in three-valued logic comes as the following. Please note I don’t use the property of the excluded middle, and I don’t even need the property of
contradiction for this example at least.
Assume all statements either true, false, or undecided. Assume there exists no distinction between an object language and a meta-language. A statement X becomes absurd, and therefore rejected,
when it has more than one truth value. Now, consider the statement
“this statement is false.”
Assume such a statement true (first truth value). Then, as it says, it consequently becomes false (second truth value). So, we have an absurdity, and thus such a statement doesn’t work out as
true. Second, assume such a statement false. Then it speaks accurately about itself (remember… assume no object/meta language distinction), and thus becomes true. Again, we have an absurdity,
this time by assuming such a statement false. Since we have either T, F, or U, and both T as well as F lead to absurdities, we have F by a reductio based on the uniquenss of truth values.
32. #32 Paul G November 20, 2007
Yep, great post – but Mark, could you proof-read it, please? There are quite a few typos that make it pretty hard to understand in places.
33. #33 Kristjan Wager November 23, 2007
Can you explain the (a || ~a) = T statement… I don’t get the ‘||’. I’ve seen it used in set theory to mean “a is not comparable with b”… do you mean that?
I think that Xanthir means this is a matter which is closer to code, so it could be translated as ‘(a or not a) equal true‘, where the two lines have the meaning or.
34. #34 Xanthir, FCD November 23, 2007
Thank you, Kristjan. Yes, that’s what I meant. ^_^ C++ was my first language, so I still use its symbols for “and” and “or” when I’m typing. Very slightly simpler than using html entities, though
I suppose I should put forth the effort to avoid confusion in the future.
Consider the line to instead be (a ∨ ¬a) = T.
35. #35 Cléo Saulnier November 28, 2007
The halting problem as defined is a cop out. It only talks about algorithms q that call h. That’s recursive, so you could keep applying algorithms and what you’d get is a succession of results
(true and false) that are correct each time at that particular point. Just because you implemented the function incorrectly doesn’t mean that you’ve proven anything.
“The big question is, can we write a program h such that takes any pair of p and i as inputs, and tells us whether p halts with input i?”
For most p and i, you haven’t proven anything. You’ve only shown that one particular case of recursion and one particular implementation, this doesn’t work. It doesn’t mean that there isn’t a
solution. In fact, you say this:
“If φ(h,(q,i)) returns true, then q enters an endless loop. Otherwise, q halts.”
This is patently absurd and doesn’t do what you think it does. A program could look at this and see that q is dependent on h. Since q can both terminate and not terminate depending on h, both
results are valid. So h1 = !h0. This isn’t a contradiction. It simply means that every successive h seen is the opposite of the next one.
If the top-most h returns true, then the inner most h used by q would return false. Since q sees false, then it will halt. And indeed, the top-most h does return true that it halts. WOW! No
contradiction. This is time-based results. Much of science and technology is based on osciallating results. There’s no contradiction here I’m afraid. Only time-based (or iteration based) results.
Your example is completely arbitrary and says NOTHING about anything except an irrelevant example. It especially says nothing about software that is not dependent on h. And 100% of software in
actual use is NOT dependent on h.
Sorry, better luck next time.
36. #36 Xanthir, FCD November 28, 2007
You are incorrect, Cleo. The halting problem as defined by MCC does not involve iteration or oscillation in any way. The program h analyzes the structure of q to determine the result – it does
not actually *run* q, as that would prevent it from returning an answer when q looped, but h is defined so as to always return an answer. Thus, h gives a particular answer, yes or no, to whether
or not ψ(q,i) halts. It can’t give a “maybe” or a “wait and see”, nor can it change it’s answer as time goes by (unless time is part of the input, but we’re talking about supplying it with a
*particular* input).
Because h is *defined* to always give an answer, and more importantly, must give the *same* answer, then the contradiction becomes obvious. No matter what answer h gives by analyzing the code of
q, no matter how clever it is in deconstructing code and teasing out infinite loops, it must in the end return a Yes or No, a Halts or Loops, and q can then use that answer to do the opposite.
This is patently absurd and doesn’t do what you think it does. A program could look at this and see that q is dependent on h. Since q can both terminate and not terminate depending on h, both
results are valid. So h1 = !h0. This isn’t a contradiction. It simply means that every successive h seen is the opposite of the next one.
See, this is where you make your mistake. You’re asserting that h returns different results given the exact same inputs. The function h has definite, finite code (by definition, as a function
with infinite code could require an infinite amount of time to execute, and would thus be indistinguishable from an infinite loop, but h was defined as always halting). The function q has
definite, finite code. The input i is finite as well. Thus, since everything involved here is well-defined and finite, h has a well-defined answer when you provide (q,i) as an input. There is no
infinite regress of h’s and q’s that form an oscillating answer as you walk back up to the top level. H1=h0 by definition – there’s only one program h. H uses a finite block of code to evaluate
finite inputs and comes up with the same answer every time. The only problem is that it can be wrong sometimes no matter what you do to improve it.
Your example is completely arbitrary and says NOTHING about anything except an irrelevant example. It especially says nothing about software that is not dependent on h. And 100% of software
in actual use is NOT dependent on h.
Um, yeah, we know. The halting problem doesn’t have a thing to do with the vast majority of software. It’s a result of computational theory. It does end up limiting what is *possible* to compute,
but most of the time we write program well within the domain of ordinary computability, and so the halting problem doesn’t do a thing for us.
37. #37 Cléo Saulnier November 29, 2007
“The halting problem as defined by MCC does not involve iteration or oscillation in any way.”
Of course it does, it includes a recursive dependancy.
“The program h analyzes the structure of q to determine the result – it does not actually *run* q, as that would prevent it from returning an answer when q looped, but h is defined so as to
always return an answer.”
Why is h defined as to always return an answer? That’s the problem with your proof right there. You’re discarding valid answers. If I ask if something returns true or false and don’t accept any
of those answers, it’s easy to say it’s undecidable. That’s what’s going on with this proof. It’s a complete joke.
“Because h is *defined* to always give an answer, and more importantly, must give the *same* answer, then the contradiction becomes obvious.”
Even if we accept that h always gives an answer, you’re still wrong. If h cannot exist, then neither can q because h makes up part of q. All this proof does is say that h cannot exist for
programs that cannot exist. Big frickin’ deal. It still doesn’t say anything about programs that can exist.
“You’re asserting that h returns different results given the exact same inputs.”
No, I’m saying you SHOULD accept different answers other than true or false such as relationships. h could returns the same relationship every single time. The only reason other answers aren’t
accepted is because of some human definition. This makes effectively renders the proof meaningless.
“The halting problem doesn’t have a thing to do with the vast majority of software.”
I don’t think you understand. The halting problem PROOF (as it’s called) has nothing to do with ANY existing software. The counter-example CANNOT exist. That means that h cannot exist for q’s
that do not exist. It’s saying I can’t write a program to process input that doesn’t exist. Big deal. The proof is invalid. It’s completely bogus even as it’s defined.
38. #38 Cléo Saulnier November 29, 2007
In the second part, I ment to say “Why is h defined as to always return one specific type of answer?”
39. #39 Cléo Saulnier November 29, 2007
I just wanted to add one point about proofs by contradiction. If you set up a premise for your proof, you must make sure that that premise remains valid throughout. If your proof ends up
invalidating the premise of the proof, the proof is also invalidated.
And this is exactly what’s happening with the halting problem proof. The premise is that a program p exists. But this isn’t where you want a contradiction. If there’s a contradiction here, the
proof falls apart. Where you want the contradiction is what the question is asking. If there exists a program h that can decide the outcome of p. But p must remain valid throughout. It’s h and
only h that must be invalid. Unfortunately, the counter-example is set up that if h is invalid then so is p. The proof CANNOT succeed under any circumstance.
Please do not spread these kinds of invalid proofs around. Some people are bound to believe this kind of garbage. One more thing… if you decide to not accept certain kinds of results, don’t be
surprised that there isn’t any. I’m sorry, but that’s not a way to obtain a proof. That’s a way to obain ridicule.
40. #40 Antendren November 29, 2007
“Of course it does, it includes a recursive dependancy.”
Please illustrate.
“Why is h defined as to always return one specific type of answer?”
Because that’s the assumption that we wish to contradict. We’re trying to prove that there is no program which exactly determines when a program halts, so we assume that there is one and call it
h. So by assumption, h returns either true or false.
“If h cannot exist, then neither can q because h makes up part of q.”
This statement is both true and irrelevant.
“All this proof does is say that h cannot exist for programs that cannot exist.”
There is but one h. h isn’t defined for a specific program; it either exists or it doesn’t. What this proof (correctly) says is that it doesn’t.
“If your proof ends up invalidating the premise of the proof, the proof is also invalidated.”
No. Then your proof is a valid proof that the premise is false.
“The premise is that a program p exists.”
There’s no such premise here. The only place p is used is in the definition of h, where it is a variable which ranges over all programs.
41. #41 Cléo Saulnier November 29, 2007
Of course it does, it includes a recursive dependancy.
Please illustrate.
What part of recursion do you not understand? h makes up part of q. h takes q as input which has h within its makeup. That’s a recursive dependancy.
If h cannot exist, then neither can q because h makes up part of q.
This statement is both true and irrelevant.
What? No support for your argument? This is the most important part of the proof. It’s where the proof invalidates itself. It says that the counter-example doesn’t exist (hence the proof doesn’t
exist). Who cares about proofs that only says something about programs that do not exist. It’ll NEVER come up. EVER! It’s outside the realm of reality.
There is but one h. h isn’t defined for a specific program; it either exists or it doesn’t. What this proof (correctly) says is that it doesn’t.
No no no. You have a p that DOESN’T EXIST!!! The proof only says that h doesn’t exist for this ONE kind of p that DOESN’T EXIST. It says nothing about ALL p or all h.
Of course it’s impossble to tell if programs that don’t exist will terminate or not. THEY DON’T EXIST! That’s why you can’t tell. It says NOTHING about programs that DO exist.
If your proof ends up invalidating the premise of the proof, the proof is also invalidated.
No. Then your proof is a valid proof that the premise is false.
WOW! You want the premise (program q) to remain true. You’ve invalidated the WRONG part which happens to be your proof itself. h is what you want invalidated, NOT q. If q goes, so does your
The premise is that a program p exists.
There’s no such premise here. The only place p is used is in the definition of h, where it is a variable which ranges over all programs.
If there’s no such premise, then there’s no such proof either. Sorry, but you need to rethink your arguments. p is defined to call h, so your statement is false right there. If you wish to claim
that p does NOT call h, then I’m all ears. I’d love to see that version of your proof.
42. #42 Torbjörn Larsson, OM November 30, 2007
h takes q as input which has h within its makeup. That’s a recursive dependancy.
I don’t pretend to understand this thoroughly, but it seems to me that h doesn’t execute the input. It analyzes its string version and decides if it would halt or not with the given input. If it
didn’t, but instead executed it, it wouldn’t halt itself. (And it would be rather pointless to boot.)
As h is not executing the code it is not making a recursion in any sense that I know of. And I believe Xanthir has said so already:
The program h analyzes the structure of q to determine the result – it does not actually *run* q,
I’m not a CS, but I can’t imagine any science where a wellknown basic proof that is generally accepted would contain any simple error.
43. #43 Mark C. Chu-Carroll November 30, 2007
I’ve been avoiding this, because it’s most likely to be pointless; when someone makes the claim that a fundamental proof is invalid, they’re generally unconvinceable. It’s like spending time with
people who insist that Cantor’s diagonalization is invalid. What’s the point of arguing?
The halting problem proof is remarkably simple, and the problems that Cleo is alleging simply don’t exist.
The proof is based on the supposition that I can create a program H that takes another program Q as input, and returns an answer about whether or not that program will halt. The point of the
proof is to show that that supposition inevitably leads to a contradiction.
Note that the proof does not say that H runs Q. In fact, if it ran Q, then if Q didn’t halt, then H wouldn’t halt, and therefore H wouldn’t be a halting oracle. So if there were a halting oracle
H, it by definition could not run Q. The point of a halting oracle is to answer by analysis only whether or not a particular target program Q will halt for a particular input.
Q does invoke H. But there’s no problem with that. By the fixed point principle, we know that it’s possible to implement self-reproducing programs – we call them Quines – and Q can quine to
invoke H.
The point of the proof is that for any supposed
halting oracle H, we can construct a program Q for which H must fail. If H returns the wrong answer for anything, then it’s not a halting oracle – and in fact, we can’t trust it for anything,
because we know that it doesn’t always return the correct answer.
It’s a very simple proof by contradiction. We want to prove a statement not-A, where A is “There exists a halting oracle H”. So in absolutely classic traditional form, we suppose A is true, and
then show that the truth of A necessarily creates logical contradictions, which means that A cannot be true, and therefore not-A is true.
Where is there any room for argument in this?
44. #44 Xanthir, FCD November 30, 2007
Well, everyone else answered for me. Once again, h does not run the program that is passed to it as input. As Mark says directly above, and I said in my original response, h *can’t* run the
program that is passed to it, because h is defined to halt and provide an answer.
Why is h defined to halt? Why is this a valid step? Because we’re looking for a halting oracle: a program that will *always* tell us whether another program will halt. We’re not saying, “Here,
I’ve created a function h, and I will not arbitrarily declare that it has properties that I want.” Instead, we’re saying, “Any program that can call itself a halting oracle must have this
property. Thus, if we design a function called h and want to say it’s a halting oracle, it must have this property. So, we’ll assume that it does for the moment and see where it leads us.”
If h actually ran q, it *would* create a recursive relationship, as you note. H would run q which would run h which would run q, and so on. This cannot be true, though. If q never halts, then
when h runs q, h doesn’t halt either. Thus, any program which wants to call itself a halting oracle *cannot* run the program passed to it. Since we’re assuming h is a halting oracle, we must then
assume that h, as well, doesn’t run q. It merely analyzes q’s structure and returns a result based on that.
Now, as for q running h, this is also trivial. H is a finite-length program (because it has to halt if it wants to be a halting oracle), and so you can easily insert the code of h into another
program, such as q.
The difficulty only comes when you try to call h(q,i) from within q. Since you pass the source code of the function to h, this means that q has to contain it’s entire source code within itself.
This seems to make q infinite length (q contains q which contains q which contains q…), but luckily the Fixed Point Theorem which Mark referenced fixes this. Essentially, this guarantees that
there will *always* exist programs which *can* output their source code as long as the language they are written in is Turing equivalent. Quines are really interesting, you should google them.
They’re one of the first things people often ask for when you create a new language. ^_^
So, q can utilize quining tricks to reproduce it’s source code from within itself. We’ve rescued q from being unconstructable, so we can use it now, and so the conclusion follows trivially.
The point of the whole thing is not to show that one particular program isn’t a halting oracle, but rather to show that *any* program which claims to be a halting oracle will have some input for
which it fails. Thus, it cannot be an oracle. Since this is true of *any* program, there can be no halting oracles, ever.
45. #45 Cléo Saulnier November 30, 2007
Well, so far there are no arguments against what I say. That H doesn’t execute Q is not a valid argument because I never said it does.
I also don’t accept papers or proofs just because a so-called “giant” wrote it. If it’s flawed, it’s flawed. And this one is clearly flawed big time. It’s rather obvious too.
The proof is based on the supposition that I can create a program H that takes another program Q as input, and returns an answer about whether or not that program will halt. The point of the
proof is to show that that supposition inevitably leads to a contradiction.
Right, but a contradiction of what exactly? I’m saying it can’t be a contradiction on Q. If Q cannot exist, then you’ve only proven something about Q’s that don’t exist. It says nothing about all
P’s. Do you see that?
Q does invoke H. But there’s no problem with that. By the fixed point principle, we know that it’s possible to implement self-reproducing programs – we call them Quines – and Q can quine to
invoke H.
So what? You’re going to have to find a valid H to include. If you can’t, then Q doesn’t exist. That only speaks about Q, not all P’s. Why is this so difficult to see?
The point of the proof is that for any supposed
halting oracle H, we can construct a program Q for which H must fail.
Right, but the proof doesn’t accomplish that point. The H in the proof is supposed to handle programs that do not exist. Why in the world would H need to take into account programs that
don’t exist?
Don’t you see. In order for Q to exist, you NEED a valid H that can work on that Q. THERE IS NO SUCH PROGRAM where the combination of both Q and H exists at the same time! This renders Q
non-existant. So H does not need to take it into account after all.
This is really simple stuff. Not sure why there is any confusion here. Your “any” must exist!
It’s a very simple proof by contradiction. We want to prove a statement not-A, where A is “There exists a halting oracle H”. So in absolutely classic traditional form, we suppose A is
true, and then show that the truth of A necessarily creates logical contradictions, which means that A cannot be true, and therefore not-A is true.
Where is there any room for argument in this?
Easy! Because your not-A only applies for things that do not exist. Why would anyone care about a proof that says something about things that do not exist?
Answer me this!
Why must H take into account programs that can never exist? No one will ever be able to write Q. EVER!
46. #46 Cléo Saulnier November 30, 2007
(Sorry about the double post. Fixing the formatting)
Well, so far there are no arguments against what I say. That H doesn’t execute Q is not a valid argument because I never said it does.
I also don’t accept papers or proofs just because a so-called “giant” wrote it. If it’s flawed, it’s flawed. And this one is clearly flawed big time. It’s rather obvious too.
The proof is based on the supposition that I can create a program H that takes another program Q as input, and returns an answer about whether or not that program will halt. The point of the
proof is to show that that supposition inevitably leads to a contradiction.
Right, but a contradiction of what exactly? I’m saying it can’t be a contradiction on Q. If Q cannot exist, then you’ve only proven something about Q’s that don’t exist. It says nothing about all
P’s. Do you see that?
Q does invoke H. But there’s no problem with that. By the fixed point principle, we know that it’s possible to implement self-reproducing programs – we call them Quines – and Q can quine to
invoke H.
So what? You’re going to have to find a valid H to include. If you can’t, then Q doesn’t exist. That only speaks about Q, not all P’s. Why is this so difficult to see?
The point of the proof is that for any supposed
halting oracle H, we can construct a program Q for which H must fail.
Right, but the proof doesn’t accomplish that point. The H in the proof is supposed to handle programs that do not exist. Why in the world would H need to take into account programs that don’t
Don’t you see. In order for Q to exist, you NEED a valid H that can work on that Q. THERE IS NO SUCH PROGRAM where the combination of both Q and H exists at the same time! This renders Q
non-existant. So H does not need to take it into account after all.
This is really simple stuff. Not sure why there is any confusion here. Your “any” must exist!
It’s a very simple proof by contradiction. We want to prove a statement not-A, where A is “There exists a halting oracle H”. So in absolutely classic traditional form, we suppose A is true,
and then show that the truth of A necessarily creates logical contradictions, which means that A cannot be true, and therefore not-A is true.
Where is there any room for argument in this?
Easy! Because your not-A only applies for things that do not exist. Why would anyone care about a proof that says something about things that do not exist?
Answer me this!
Why must H take into account programs that can never exist? No one will ever be able to write Q. EVER!
47. #47 Cléo Saulnier November 30, 2007
To Xanthir:
The point of the whole thing is not to show that one particular program isn’t a halting oracle, but rather to show that *any* program which claims to be a halting oracle will have some input
for which it fails.
Yeah, but your “*any* program” must exist. Your proof just ends up saying that it doesn’t. So Q is NOT *any* one such program. Proof = BAD!
48. #48 Mark C. Chu-Carroll November 30, 2007
Your argument is, basically, that proof by contradiction can’t possibly work for anything involving programs. That’s a damned stupid thing to say.
The premise of a proof by contradiction is to say “Suppose this thing exists”. That’s what we do in the halting proof. We suppose that the halting oracle exists.
*If* there is a halting oracle, *then* the program Q can be written. That’s standard proof by contradiction reasoning. We’ve taken the existence of H as a supposition, and we’re showing that it
leads to contradiction. By incredibly simple logic, if any program H exists, then there’s a program Q. If Q can’t exist – then that satisfies the proof by contradiction! Because by the existence
of H, we can prove that a corresponding Q *must* exist. If no Q can exist, that can only mean that the halting oracle cannot exist either.
This isn’t exactly rocket science. This is incredibly simple, basic, straightforward proof by contradiction. If you don’t get it, that’s your problem, not a problem in the proof.
49. #49 Cléo Saulnier November 30, 2007
I never said that proof by contradiction cannot work for anything involving programs. Not sure what would make you think such a thing.
You must understand that there are two parts here. H and Q. Q needs H. So tell me what part you want to contradict. If you end up assuming that H exists and then show that it cannot for Q, you
must make sure that Q exists throughout. Otherwise you’ve only shown that H cannot exist for a Q that doesn’t exist. That’s what really stupid here.
I do understand all of this. Yet no one has yet answered why H must take into account a program Q that will never exist. Answer me that if you’re so sure of yourself. I agree with one point. This
isn’t rocket science.
50. #50 Mark C. Chu-Carroll November 30, 2007
You keep ignoring the point.
If H exists, then Q exists. If Q doesn’t exist, then H doesn’t exist. They’re instrinsically connected.
If H exists, then it’s a program. If a program H exists,
then by the fixed point theorem, the program Q is constructable. So the existence of Q is logically implied by the existence of H.
The reason that I say you’re essentially arguing that proof by contradiction can’t work for programs is that this proof is a canonical example of how we use proof by contradiction to show when
something is non-computable: we suppose the existence of a program for something non-computable. Then we show how that leads to a contradiction, by using the supposed program as an element of a
51. #51 Cléo Saulnier November 30, 2007
If H exists, then Q exists. If Q doesn’t exist, then H doesn’t exist. They’re instrinsically connected.
Both of your first two statements are incorrect. There’s no basis for it. Please back this up.
If H exists, there’s no basis to assume the existance of Q. As for the other statement, if Q doesn’t exist, then H doesn’t need to take Q into account and H can very well exist.
If H exists, then it’s a program. If a program H exists, then by the fixed point theorem, the program Q is constructable. So the existence of Q is logically implied by the existence of H.
The program Q is only constructable as long as H works on that Q. If there is no such H for Q, then Q cannot exist and you’ve defeated your argument. This proof only shows that Q cannot exist. It
says nothing about H. You’re using circular logic to try and make your argument. I’m sorry, but this does not hold water. Here’s why.
You’re assuming that H exists, right? Ok, let’s go with that. You want to contradict its existance, correct? Fine. What is the tool you’re going to use to show the contradiction? You’re using Q.
But if Q fails to exist, then so does the tool that shows the contradiction. So the contradiction disapears. Please understand this before continuing. If the tool used for showing the
contradiction doesn’t exist, there can be no contradiction. Plain as that. It’s the subject matter, not the tool, that you want to contradict.
52. #52 Mark C. Chu-Carroll November 30, 2007
H is a halting oracle: by definition, it works on all programs. A halting oracle is a program, H, which takes another program – any other program – as input, and determines whether or that input
program halts. The other program – the program that H tests for halting – is a *parameter*. H works *for all programs*. If H doesn’t work for all programs, then it’s not a halting oracle. The
point of the halting theorem is that you can’t build a halting oracle.
No one would dispute that for many specific programs, you can write a halting prover for that specific program. The question is, can you write a general one? That’s the whole point of the
theorem. Not “Given a program Q, can I write a program H which determines whether Q halts?”, but “Can I write a single program H which for *any* program Q determines whether or not Q halts? For
the earlier question – can I write a program H[Q] which determines whether or not a specific program Q halts, it’s true that without knowing Q, you can’t construct H[Q]. But for the latter
question, if the program H exists, then it has to be able to answer correctly for all possible inputs – so you don’t need to know the specific problematic input when you write H – no matter how
you write H, no matter what you do, any program that claims to be a halting oracle can be defeated by constructing a program like Q for that halting oracle.
So, if H is a halting oracle, then we can construct Q very easily, by the fixed point theorem, in exactly the method shown in the post above! Q is a program that incorporates H, and invokes H(Q),
and chooses what to do based on what H(Q) predicts it well do. The program for Q is trivial: “If H(Q)=Halt, then loop forever else halt”.
The only trick in constructing Q is making Q be able to invoke H using Q itself as a parameter. But the fixed point theorem proves that it’s possible. Q is obviously constructable – it’s a
trivial if/then statement, with an invocation of H(Q) in the condition.
53. #53 Cléo Saulnier November 30, 2007
You’re arguing on the basis that I don’t understand the halting problem. While that’s rather insulting, it doesn’t show how my argumments are false.
Your last part doesn’t make sense. How can Q invoke H if you say it doesn’t exist for that Q? I understand you keep trying to show that it should work for *ALL* P. But if Q doesn’t exist, it’s
NOT part of *ALL* P. That’s why you’re using circular logic. And you completely avoided my argument.
Look, you initially said that H exists and works an all P, correct? P includes Q as one of the possibilities, right? Then you go and show that an H that works on all P doesn’t exist. But this
means that Q cannot exist either because it has nothing to call. You agree? So we must remove Q from the set of P. Hence, you’ve proven NOTHING! This ends up saying nothing about P where Q is not
a part of it. In fact, we MUST omit Q from the set of P because Q does not exist.
You cannot construct the program Q as you claim. You believe it to be a simple matter of if/then, but that’s not true. The input of H (as a program) is invalid to that Q. H can exist for Q if you
accept relationships. But the definition of the problem doesn’t allow Q to use this H, so Q cannot call H. It’s an invalid input. The proof doesn’t mean that H doesn’t exist. It again means that
Q cannot exist as described. This is what I meant that the humans who wrote up the descriptions are intentionally selecting what answers they accept. Even so, the proof still falls apart.
However, you’re refusing to see logic. You completely avoided my previous comment. Please consider it and understand it. You’re trying to sweep the Q self-contradiction under the rug.
54. #54 Mark C. Chu-Carroll November 30, 2007
You’re ignoring simple logic.
As I keep saying: the way that a proof by contradiction works is by assuming the statement you want to disprove, and then showing that the statement leads to a contradiction.
If you assume the existence of a halting oracle, H, then that implies the existence of the counterexample, Q. It’s a construction from H. As I keep saying: *IF* you have a halting oracle, then by
the fixed point theorem, Q is constructable from H. If Q isn’t constructable – then we have our contradiction – because by fixed point plus construction, Q *must* exist. So the proof by
contradiction works. If Q *IS* constructable, then it’s a counterexample to H being a halting oracle, and so it’s a contradiction. So either way, it’s a contradiction caused by assuming the
existence of a halting oracle. Which means the proof stands.
55. #55 Cléo Saulnier December 1, 2007
As I keep saying: the way that a proof by contradiction works is by assuming the statement you want to disprove, and then showing that the statement leads to a contradiction.
That’s not entirely correct (and again you ignored my comments). If you disprove the contradiction itself, then there cannot be a contradiction (IOW, you’re contradicting the contradiction). When
using proof by contradiction, you have to be VERY careful what it is you are contradicting. You cannot throw caution to the wind and use all conclusions without first looking at what exactly it
is your are disproving. By your statements, I’m having serious doubt if you understand this part of proofs by contradiction.
If you assume the existence of a halting oracle, H, then that implies the existence of the counterexample, Q.
Wrong! Why must Q exist? You still haven’t shown any reason why this must be so. I’m sorry, but I’m not just going to take your word for it. Show me why Q must exist?
It’s a construction from H.
An H that assumes P contains Q and H, sure. If P ends up not containing Q or H, you’ve only proven something about an H that worked on an incorrect set of P. Big deal. You didn’t prove anything.
Fix P and try again. We’re only interested in P having programs that exist.
*IF* you have a halting oracle, then by the fixed point theorem, Q is constructable from H.
This only holds when Q and an H that works on Q (as well as all other P) is part of P. If it’s found that Q (or H that works on Q) is not part of P, then you cannot use the fixed point theorem
(for one thing, you cannot enumerate things that don’t exist, much less compose something of them) and your argument falls apart. Like I said, it’s a self-destructing proof. It cannot succeed.
I don’t think you realise that I don’t need to have a halting oracle for programs that do not exist. You keep tripping yourself up on that point. There is no composability possible for things
that do not exist. No fixed point theorem would apply.
If Q isn’t constructable – then we have our contradiction – because by fixed point plus construction, Q *must* exist.
Well no. Your premise of constructability is flawed. You built Q out of something that doesn’t exist (and need not exist). No contradiction there.
BTW, in the definition of the proof, it is not accepted that Q can invoke H in the first place even though it says it should. I don’t think you realise that. There’s more than fixed point.
There’s program validity too. You can’t compose programs from incompatible parts.
You’re only resorting to repeating yourself. Notice that I’ve responded to each and every one of your arguments. But you have not done the same. You completely ignore them. I must conclude that
you are using wishful thinking at this point. You seem to be genuinly convinced that there cannot be fault in this proof and therefore you do not even consider it. If you work from this frame of
mind, it is no wonder that you do not respond to my arguments. However, I was hoping for a more open discussion and implore you to answer my questions put forth in this discussion. Why must an H
work on a program that does not exist?
56. #56 Cléo Saulnier December 1, 2007
Let’s redo your proof, shall we? But I’m going to clarify one particular point so that your proof doesn’t result in a contradiction where even you will agree.
H is only required to output true or false for valid programs, correct? For invalid programs, H does not need to answer correctly because they are not in the set P of valid programs. Invalid
programs neither halt nor not halt. They may run for a while and then crash. But they’re still invalid. So we’re going to have H return function F whenever a program S acts on the result of H(S)
unless S is H. In that last case, it’ll return false. Basically, it returns a function F for all invalid programs (which includes all those that try to act on H(self)).
Now, let’s redo your proof and we’ll use your words.
If you assume the existence of a halting oracle, H, then that implies the existence of the counterexample, Q. It’s a construction from H. As I keep saying: *IF* you have a halting oracle,
then by the fixed point theorem, Q is constructable from H.
See, that last sentence doesn’t hold up. Q is invalid if it tries to use H because Q doesn’t know what to do with a returned function. And since Q is invalid, H correctly has no requirement to
produce a true or false answer. All statements are in accord and that means H remains valid for all P. No contradiction.
See, the original proof has a fatal flaw. It requires that H produce valid results for invalid programs. There is no such requirement. If you want valid results for invalid programs, then of
course you won’t find an H that exists. This is why I say the proof sets up its own results. But such a requirement, as there is in your proof, is unfounded. The H that you assume exists cannot
exist as defined in the proof. And that’s ultimately why the proof cannot succeed. Both the premise (invalid H) and the conclusion (invalid H) are the same. But as the above example shows, it’s
also possible to define a (possibly) valid H that survives your proof intact without any contradiction just as long as you remove the need to give valid results for invalid programs.
57. #57 Thony C. December 1, 2007
Hey Cléo, instead of wasting your time and energy on us pea brains here at GM/BM why don’t you write up your stroke of genius and send it off to the Journal of Symbolic Logic? I’m sure you’re in
the running for at least a Field’s if not a Nobel, after all you have solved a problem that defeated the combined efforts of such pea brains as Turing, Post, Church, Gödel, Kleene, Rosser and
Davis. Come on, after you get published the name Saulnier will be up there alongside all the other immortals of meta-mathematics. I’ll even let you autograph my copy of Davis’ The Undecidable!
58. #58 Mark C. Chu-Carroll December 1, 2007
In the theoretical realm where this exists, there’s no such thing as a program that crashes – there’s no such thing as an invalid program!
In terms of primitive computation theory, “halt” means “completes computation without error”; “doesn’t halt” means that either the program never completes the computation, or encounters an error
that renders the computation invalid.
If you want to think of it in terms of real programs and real computers: on a modern computer, you can take any sequence of bytes you want, and put them into memory, and then perform a jump to
the first byte of the sequence. There are a number of things that could happen. You could encounter something that doesn’t form a valid instruction. That’s an error, and so the computation
doesn’t complete without error – so it’s considered non-halting. It could be a valid set of instructions, which contains something that produces an error, like dividing by zero, addressing an
invalid memory location, etc. That’s an error – so the computation doesn’t complete without error, and doesn’t halt. It could contain a jump to an address that’s not part of the program, and
which contains garbage. THat’s an error – the computation doesn’t complete without error, so it doesn’t halt. It could be a valid set of instructions that does nothing wrong, but contains an
infinite loop. Then the computation won’t complete without error, so it doesn’t halt. Or it could be a valid set of instructions that does nothing wrong, and eventually completes execution
without error – a halting program.
That’s the theoretical abstraction of programs. Every program either halts (that is, completes execution without error), or doesn’t halt (which means either never halts, or encounters an error).
no such thing as an “invalid” program: there are programs that halt, and programs that don’t. What we would call a buggy crashing program on real hardware is a non-terminating program in the
Get that? The definition of “program” in the realm of “effective computing systems”, the domain of this proof, is that every program is valid. *Every* program, by definition, when executed either
halts (completes without error) or doesn’t halt (doesn’t complete without error). And a halting oracle *always* halts, and answers “Yes” if its input program halts, and “No” if it doesn’t. Since
*every* program halts or doesn’t halt, and the halting oracle must always halt with a correct answer for every possible input, there *is no program* for which the halting oracle doesn’t produce a
correct answer.
Further, the construction of Q given H is trivial. As I’ve repeatedly said, if you have a halting oracle H, generating Q from it is a completely mechanical process. The logic is the trivial
conditional that I showed above. And the self-embedding of Q, allowing it to pass itself to H, is guaranteed to be possible – if it’s not, then you’re not using a complete effective computing
system, and the proof is only about ECS’s.
59. #59 Xanthir, FCD December 1, 2007
It seems this is the heart of your misunderstanding:
The program Q is only constructable as long as H works on that Q.
If H doesn’t work on that Q, then H isn’t a halting oracle. That’s what we were trying to prove in the first place.
Though, this part might also be it:
Your last part doesn’t make sense. How can Q invoke H if you say it doesn’t exist for that Q? I understand you keep trying to show that it should work for *ALL* P. But if Q doesn’t exist,
it’s NOT part of *ALL* P. That’s why you’re using circular logic. And you completely avoided my argument.
You seem not to understand the role of hypothetical constructions, and their role in proof by contradiction. When you make a statement like the one above, you are quite literally saying, “I don’t
understand how proof by contradiction works.”
First, you *assume* something. Then you find some logical consequence of that assumption. Finally, you show that the logical consequence is untrue. This proves your assumption false.
In this case, you assume that a halting oracle exists, called H. The logical consequence of H existing is that it should be able to correctly decide whether or not *any* program halts. Then you
construct Q, which H cannot correctly decide on. Thus our assumption is false, and there cannot be a halting oracle.
The point of the whole thing is that when Q is analyzed, it looks like it does one thing, but when it’s actually run, it does the other.
Wrong! Why must Q exist? You still haven’t shown any reason why this must be so. I’m sorry, but I’m not just going to take your word for it. Show me why Q must exist?
Um, because we showed how to construct it? There’s only one tricky step in the construction of Q, and that is inserting Q’s own source code into the call to H. But we can prove that this step is
possible. The rest of Q is completely trivial.
So, that’s our reason why Q must exist. It’s trivially easy to construct, and we showed exactly how to do it. Prove us wrong.
H is only required to output true or false for valid programs, correct? For invalid programs, H does not need to answer correctly because they are not in the set P of valid programs. Invalid
programs neither halt nor not halt. They may run for a while and then crash. But they’re still invalid. So we’re going to have H return function F whenever a program S acts on the result of H
(S) unless S is H. In that last case, it’ll return false. Basically, it returns a function F for all invalid programs (which includes all those that try to act on H(self)).
Wow. If we define H as something other than a halting oracle, then the proof that H isn’t a halting oracle fails! Truly, sir, your command of logic is astonishing.
What you have defined is not a halting oracle. H must return true or false for *all* programs. If a program is ‘invalid’ (which I’m guessing means that it would cause errors on compilation or
execution), it definitely still halts or loops. A crash is simply a halt. It may not be the type of halt you want, but the program definitely stops. Thus, H would say that any incorrectly written
programs halt. (Unless they exploit some bug in the compiler/computer architecture causing an infinite loop upon crashing, in which case H will say that the program loops. Because it’s an oracle,
and knows these things.)
If I just feed H my favorite brownie recipe with the quantities of ingredients as input, it’ll crash, because my brownie recipe isn’t syntactically valid Lisp (I’m assuming for the moment that
the Oracle is written in Lisp, because both are magical).
But as the above example shows, it’s also possible to define a (possibly) valid H that survives your proof intact without any contradiction just as long as you remove the need to give valid
results for invalid programs.
Yup, by defining H so that it isn’t a halting oracle anymore.
60. #60 Cléo Saulnier December 1, 2007
Thony: Your defense is that “giants” wrote it. Weak!
Mark C. Chu-Carroll:
In the theoretical realm where this exists, there’s no such thing as a program that crashes – there’s no such thing as an invalid program!
It’s trite to include invalid programs and will not waste my time with it. If you insist on this, then I’m satisfied that your proof is bogus. We can end it there. The very definition of invalid
programs is that you don’t know what happens. Gee! How amazing? We can’t tell what happens to programs defined as being unknowable. BORING! Show me a real proof! Don’t include invalid programs.
This one just repeats a definition. Irrelevant.
the domain of this proof, is that every program is valid.
Trite! Who cares about programs that don’t work? Wish you’d mentioned this before though. I could have laughed it off and moved on. At least now I can back up the fact that this proof is a joke.
I never thought you’d admit the inclusion of invalid programs though. Thank you for being big enough to admit it!
If H doesn’t work on that Q, then H isn’t a halting oracle.
Of course it is. Q doesn’t exist. So H doesn’t need to work on it. You’ve invalidated the set that H was supposed to work on, not H itself. Your definition ended up being wrong, not the existence
of the oracle program.
Then you find some logical consequence of that assumption. Finally, you show that the logical consequence is untrue. This proves your assumption false.
You don’t understand how proof by contradiction works. If it ends up that your contradiction is invalid, so is your proof. Do you understand that? Obviously not. It doesn’t matter what conclusion
you seem to think your proof comes to if the contradiction doesn’t hold up. Do you not get that? Without a contradiction, you get nothing.
The point of the whole thing is that when Q is analyzed, it looks like it does one thing, but when it’s actually run, it does the other.
Again, this can only happen if Q exists. If it doesn’t, then your argument falls apart because there is no contradiction.
Don’t you see? You’re saying Q is a contradiction, yet Q doesn’t exist to show this contradtion. You can’t do that with proofs by contradiction. This is basic stuff.
The rest of Q is completely trivial.
No, this is wrong. Let’s have a set R that includes all programs including Q. Let’s also have a set S that does NOT include Q. Your proof only says that R cannot exist since Q does not exist. So
all it says is that H could have never worked on this set in the first place. You used the WRONG SET of programs in your assumption. Big deal. Who cares about proofs that says something about a
set of programs that included non-existant programs? Of course you’re going to get a result of undecidability. You choose that result by including a non-existant program in the set of all
If we define H as something other than a halting oracle, then the proof that H isn’t a halting oracle fails!
You’ve finally got it! This is what your proof does. It’s your definition that falls apart. NOT H.
That’s exactly right! This is what I’m saying all along. The way your proof is set up, you don’t know for sure that H is the oracle that works on all P. You’re ASSUMING that P is indeed the full
set of programs. But it ends up that P isn’t the P you thought it was. So you contradicted P, NOT H. P is contradicted because Q doesn’t exist within P. This is what I’ve been saying all along.
It says NOTHING about H.
A crash is simply a halt. It may not be the type of halt you want, but the program definitely stops.
You can’t say this. Invalid opcodes are undefined. You’re making arbitrary decisions on how the machine works. You can’t do that. If you do, you’re making stuff up. Again, this is my point
exactly. Thank you for making the demonstration.
Yup, by defining H so that it isn’t a halting oracle anymore.
But your assumption is linked to Q. If Q doesn’t hold up, your definition of H is invalid meaning you were incorrect in assuming it was the oracle in the first place. This means your definition
of the oracle was wrong. Not that there wasn’t any. You seem to think that just because you declared it the oracle that this is all you need. Not so. You’ve linked this definition of H to P. If P
fails, then the H in your wording was not really H after all. It’d be a different story if Q existed throughout because P would not be affected and your assumption would remain valid and there
would be a valid contradiction. But this doesn’t happen. You’re invalidating your definition of H, not H itself.
Unfortunately, I’m not interested in flights of fantasy. So if there is an insistence that invalid programs are part of P, then DUH! it’s undecidable. Damn, I can tell you right now no one can
tell what an invalid program will do. It’s invalid. Don’t need a proof for that. That’s the VERY definition of invalid programs. Mark just made my point for me that the proof sets up its own
answer of undecidabality by forcing a valid answer on invalid programs. Thank you for at least admitting that much.
61. #61 Mark C. Chu-Carroll December 1, 2007
This is exactly why I originally didn’t want to get into this argument. It’s exactly like the other example I mentioned – the old crackpots who try to argue against Cantor’s diagonalization.
Seriously, if you believe that your refutation is valid, you should go and publish it. This proof is a fundamental proof from the mathematics of the 20th century. *If* you really believe that
you’ve got a valid refutation of it, it’s enough to guarantee you millions of dollars of prize money, fame, and a faculty position or job at an institution of your choice.
Two questions for you, which you have so far refused to answer.
Why do you think that the program Q doesn’t exist? If you accept the existence of H, and H halts and gives an answer, then why can’t Q be constructed? Its an incredibly straightforward
construction; given H, construct Q.
Second, proof by contradiction is a whole lot simpler than you make out. The *only* necessity in a PbC is to show that if you accept the premise, and follow valid logical inferences from it, you
get a contradiction. The steps just need to be valid logical inferences.
In this case:
(1) There exists a halting oracle, H. (The assumption.)
(2) Properties of effective computing systems:
(2a) In an effective computing system, given a program A,
you can write a program B that invokes A. (A premise; fundamental property of effective computing systems, sometimes called the recursion principle.)
(2b) In an effective computing system, given programs
A, B, and C, where A is guaranteed to halt, I can construct a program that runs A, then runs B if A answers “yes”, or otherwise runs B. (We’ll take this as a premise;
It’s provable from the recursion principle.)
(3) By 1 and 2a, I can construct a program that invokes
(4) By the fixed point theorem, if I can construct a program
that invokes H, I can construct a program that invokes
H *on itself*.
(5) By 3 and 4, I can construct a program that invokes H
on itself.
(6) By 5 and 2b, I can construct exactly the program Q in
the original proof.
What step in this is invalid logic? Where’s the invalid inference that invalidates the proof? If each of these inferences is valid, then the proof is correct.
Given a program H, I can construct a program Q(H). (A
logical inference from the fact that H exists, and
programs can
62. #62 Thony C. December 1, 2007
Thony: Your defense is that “giants” wrote it. Weak!
I did not defend anything! I said that if you genuinly believe that the proof of the halting problem is false then publish your results because if you are right, it makes you the greatest
meta-mathematician of the last hundred years and you wouldn’t want to miss out on all the fame and glory that that entails, would you?
63. #63 Cléo Saulnier December 1, 2007
Are you guys really pround to say that H(undecidable) = undecidable? That’s what the proof says. Can someone tell me why anyone would accept a proof that just repeats a definition? You know
BEFOREHAND that the program in undecidable. By using that definition, you then go out to prove that it’s undecidable. It’s so dumb, it’s ridiculous.
Mark, there’s a serious flaw in your understanding of proofs by contradiction. Look at the prime proof.
The prime proof says that each item in the set P MUST be prime. If at any point during the proof by contradiction it is shown that any of the items were NOT prime, then the proof would fail. This
would be so even if you somehow managed to show some other contradiction. You’d say Q is a new prime and thus the set P could not have held ALL primes. But this conclusion wouldn’t hold up if
it’s found that one of the p’s is not actually a prime. In that case, the proof would break down.
This is exactly what’s goin on with the halting problem proof. The rules that you set up fail. They break down. If Q ends up not holding, then the set P was incorrectly defined. The items in it
were not what you claimed they were.
Your proof ends up saying that H can’t exist, right? So Q could not possibly invoke it, right? That means neither H nor Q were actually part of P as you thought and could never have been built in
the first place. (And if you pass H to Q, then the combination of (Q,I) can’t exist, so whatever way you define your sets, it still fails. I use set P for simplicity where Q has H as part of
itself.) So the H you defined wasn’t the oracle after all. Your proof broke down in the same manner the prime proof would break down if the set of P were found out to contain compound or
non-existent numbers.
In your proof, there are many flaws. First, your assumption isn’t that H is the oracle. It’s that H is the oracle if and only if the set P includes H and Q. If H or Q are found to not exist, then
you are precluded from coming to any conclusion on the existance of an oracle program because P was incorrectly defined and thus H was not actually the oracle.
This is a fundamental rule of proof by contradiction. I can’t believe that you’re not mentioning it. You must make sure that the rules used to show the contradiction don’t fall apart. You can’t
end up contradicting the contradiction. Proofs by contradiction that do this are invalid.
Thony: I’ll take that suggestion under consideration.
64. #64 Mark C. Chu-Carroll December 1, 2007
H(undecidable)=undecidable *is* a profound result. Before Gödel and Turing, it was widely believed that it was possible to produce a mathematical system in which all statements were provably true
or false. That would mean that it would be possible to create a device which could, for *any* well-formed logical statement, produce a proof that it was either true or false. That every decision
problem was solvable in a finite amount of time. Showing that that could not be done was astonishingly profound, and had an immeasurable impact. (That’s also why Thony and I are saying that if
you really believe your “disproof”, that you should publish it. You’d be overturning one of the most important mathematical discoveries of the last century. You’d be putting yourself right up
there in the ranks of the greatest mathematicians of all time: Gödel, Cantor, Turing, and Saulnier! If you really have discovered a fundamental flaw in this, why are you wasting your time arguing
with a bunch of two-bit nobodies on a blog?
Shifting gears:
The assumption *IS* that H is the oracle. You don’t need to specify that it’s a halting oracle that will work for the program Q – it works *for all programs*. If it doesn’t, it’s not a halting
oracle. (If it makes you happier, you can make it be a three state oracle: instead of returning “Halts”, or “Doesn’t Halt”, it can return “Halts”, “Doesn’t Halt”, “Invalid”. But it’s part of the
definition that it has to work on all inputs, or it’s not a halting oracle.)
Given the existence of a halting oracle, H, the existence of Q is proven by the premises + inferences that I listed above. The applicability of H on Q is proven by the fact that to be an oracle,
H must work for all possible input programs Q.
The steps are there. Which one is invalid?
The supposition is that H is a halting oracle – which means that H is a program, and H works for all possible input programs. The steps in my previous comment show how the existence of H
necessarily implies the existence of Q, and
exactly how to construct Q as a valid program. If you want the three-state oracle, it doesn’t actually change Q *at all* – because the construction, by the very nature of computing systems,
guarantees that the construction will produce a valid program Q.
65. #65 Cléo Saulnier December 1, 2007
Your logical steps are flawed.
First, H is not the oracle unconditionally. It’s assumed to be the oracle based on certain conditions. If these conditions fail, then H was NOT the oracle after all. These conditions are that P
contains H (as defined) and Q. If either ceases to exist in this proof, then the proof fails. Got that? You want to contradict H, but not the assumed conditions needed for its existence. If the
proof ultimately proves that H or Q does not exist, then the proof fails. Simple as that. It’s not complicated. This is why this proof cannot succeed.
66. #66 Mark C. Chu-Carroll December 1, 2007
Where are the logical steps flawed? You keep saying they are – but you can’t say where the flaw is.
Note the definition of H in the assumption: H is a halting oracle. Not “H is an oracle *IF* the following conditions are met.” The assumption is, unconditionally, H is a halting oracle for all
programs in the effective computing system. No ifs, ands, or buts. A halting oracle is a program that *for all programs* provides a yes or no answer to “Does it halt?”.
If H fails to be a halting oracle, then the proof *succeeds*. You’re adding conditions that aren’t there. If we assume that H exists, then everything else follows from that – per the proof and
construction above. Every bit of the proof logically follows from either “H is a halting oracle”, or from the definitions of an effective computing system.
It’s a simple chain of logic, which I’ve provided with labelled steps. Exactly what step of the logic do you claim is invalid? And by what rule of logic is it invalid?
67. #67 Cléo Saulnier December 1, 2007
Your definition of H is incompatible with that of P. Your proof can’t succeed. P contains a program that does not exist and H has no requirement to process it. That’s one of the flaws. You think
that H must give a result for Q. It does not.
I know you’re trying to say that we only prove it doesn’t exist afterwards. But that’s where you trip yourself up. We’re talking about existence here. So if something ends up not being in P, H
never had to process it and it’s perfectly valid for it to not give an answer for things that do not exist. So you can’t make conclusions on the results that H (as defined in your proof) gives
according to the way your proof is laid out. You assume it gives either true or false when it has no obligation to do so.
68. #68 Mark C. Chu-Carroll December 1, 2007
This is the last time I’m going to reply. This is getting stupid.
A halting oracle *must* return an answer for *any* program. There’s no wiggle room for “it doesn’t need to answer for *Q*”; if Q is a constructable program, then H *must* answer it.
Further, if H exists, *then* Q exists. Period. There’s no wiggle room to argue that Q doesn’t exist. The *only* way that Q doesn’t exist is if H doesn’t exist. It’s impossible for there to be a
halting oracle H for which you cannot construct a deceiver Q.
Once again, I’ll say it: there’s a proof up there, formed from a sequence of logical inferences. If the proof is invalid, then one of those inferences must be wrong. Which step is invalid?
They’re *labeled* for goodness sake – which one isn’t a valid inference and why?
69. #69 Cléo Saulnier December 1, 2007
Step #1: You assume that H exists. Fine. But that’s not all there is. H must exist for all P. If the actual definition of P ends up being different in actuality than your assumption, then your
proof falls apart. Note that a Q that exists and one that doesn’t would create two different P’s. If set X is different than set Y, you would expect that it’d be possible to have two different
functions to deal with each one. Function 1 need not deal with items in Y and vice-versa. Now apply this concept to H. This means you MUST make sure P NEVER changes throughout your proof,
otherwise H will need to be updated. If H is in P, it must remain throughout and cannot be invalidated without invalidating this proof. See infinite prime proof for an example.
Step #2A: This is flawed because it’s possible that A cannot invoke B.
Step #3: You can’t invoke H if you don’t exist or if H doesn’t exist with the assumed P. If you want your proof to hold, both the invoker and H must exist throughout. You can’t have a
contradiction on either H or the program invoking it for this step to hold true. Otherwise, you are changing the definition of P and we’d need to start over.
Step #4-6: As long as H exists, you’re fine. But there cannot be a contradiction here otherwise your proof fails. P must not change!
So I agree with your steps on the condition that at the conclusion of your proof, you do not invalidate H (though I have issues with the way you compose programs). If you invalidate H, every step
of your proof fails. They are all conditional on the existence of H. See the infinite prime proof for an example where the properties of the items within P must remain intact. Any contradiction
showing that H is invalid will defeat your proof! All your steps are conditional on this.
70. #70 Mark C. Chu-Carroll December 2, 2007
Gods but you’re dense!
Any proof of computation starts with something called an *effective computing system* – that is, essentially, a formal mathematical way of saying “A turing complete computer”. The set of all
possible programs for an ECS is fixed: it’s a recursively enumerable set, which cannot change, and which contains all possible computable functions. You’re basically insisting on a mutable set of
possible programs. *There isn’t*. You can’t change the set of programs. It’s fixed. And the definition of a halting oracle H is that it works *on all programs defined for the ECS.* You can’t
change that set. And if you exclude anything from it, you’ve admitted that H isn’t a halting oracle.
From there, everything else is perfectly straightforward, derived from the *definition* of a computing system.
Step 2a of my proof is valid by the definition of a computing system. If A is a program, then you can write a program B that invokes A. That’s not up to argument: it’s part of the definition of
an effective computing system. There are no exceptions.
If you reject that, then sure, you can reject this proof. But – and this is a huge but – if you reject 2a, you’re rejecting the recursion principle, and you’re no longer talking about a Turing
complete computing system. You’ve downward defined the computing system to something sub-Turing.
The proof doesn’t hold for computing systems that don’t support the recursion principle. But that’s not a particularly interesting fact: we know that there are sub-Turing computing systems for
which the Halting problem is solvable.
If the recursion principle holds, then you’ve got a Turing complete system, and the proof is interesting. But you can’t go raving about “P” changing. The premise is that H is a program. By the
simple logic above, if H is a program, then Q is a program (∃H ⇒ ∃Q). Again, by simple logic, if Q is *not* a program, then H is not a program.
((∃H ⇒ ∃Q) ∧ ¬∃Q ⇒ ¬∃H).
The only way that your supposed refutation stands is if you reject the fundamental properties of an effective computing system. But the proof is built on the premise that we are talking about a
71. #71 Cléo Saulnier December 2, 2007
I don’t think we need to resort to name calling.
You’re basically insisting on a mutable set of possible programs. *There isn’t*.
That’s MY argument. Your proof is changing P and you won’t accept this fact. P cannot change, so I cannot accept any proof that changes P (unless H is the only thing removed from it).
Let’s assume that Q is fictional. That it never was in P. I know this isn’t a proof or anything, but I just want to show why your proof is bad. Assume Q is fictional. Does H have to process it?
No. H does not need to give a valid answer. And if you look at your proof, the HUMAN who wrote it is assuming that H will return true or false for Q. But there is no such requirement on fictional
programs. This third option is left out and you must consider it when talking about existence. So when H cannot possibly return a valid answer on something that does not exist, you claim victory.
That’s an ugly, ugly argument.
Or simply take the infinite prime proof. If I end up concluding that one of the items in P is not a prime, the proof would fall apart, correct? Why does this principle not apply to the halting
problem proof is all I’m saying. At least ONE thing must hold up in a proof. 100% of everything is the halting problem proof gets contradicted INCLUDING the contradiction itself. There’s a REAL
reason why all items in P must be prime througout the prime proof. Please consider this. Tell me what part holds up in your proof.
I’m also having doubt that you understand that Q could never be built.
72. #72 Mark C. Chu-Carroll December 2, 2007
Q is fictional *if and only if* H is fictional.
What you don’t understand is that your insistence that the existence of Q requires exactly two things: (one) that we have an effective computing system, and (two) that H exists.
By the fundamental, definitional properties of effective computing systems, the construction of Q follows by simple logic from the existence of H. Q exists IF AND ONLY IF H exists. There is
exactly one possible reason why Q wouldn’t exist in an ECS: because H doesn’t exist. *IF* there is a halting oracle, *THEN* the construction is valid and Q exists. *IF* Q doesn’t exist, the *ONLY
REASON* that I can’t exist is because H doesn’t exist.
You *cannot* have a halting oracle H such that a deceiver Q doesn’t exist: the existence of Q *necessarily follows* from the existence of H.
The construction of Q isn’t *creating* a new program. It’s just *identifying* an existing program. The rules of construction, including the recursion principle, are built on the fact that *every
possible program* is already in the enumeration E of programs that are possible inputs to the system. The construction rules are *total functions* from programs in E to programs in E. They don’t
*create* new programs; they *select* programs for the infinite enumeration of possible programs. Any program which can be constructed by valid construction rules *already exist* in the
enumeration of programs.
In formal computation theory, we even model programs as natural numbers, where every natural number is a program, and the construction rules are *total functions* from natural numbers to natural
numbers. If H is a program, then there exists some natural number h that represents the program H. The construction of Q is a function from a number h to a number q, such that the number q
represents the program Q. The construction doesn’t *create* a new number. It just maps a *known* number h – the representation of a program we have in our hands – to another number, which is a
desired program based on h. The construction rules are closed total operations on the set of programs. It is *impossible* for there to be an H for which a Q isn’t computable; it’s like saying
that there is an integer N for which the function “f(N) = N^2 + 3n – 2″ doesn’t
compute an integer.
73. #73 Cléo Saulnier December 2, 2007
This will be my last comment.
About your construction of Q, what if H returns something that Q can’t handle? Q is a recursive definition. We all know that in a recursive definition, you will get a relationship. Relationships
are, by definition, undecidable. So you start with the premise of undecidability because this is the only valid answer H can give. You don’t even need a proof. This is the premise.
Let’s keep going though. You don’t allow H to return this relationship because the halting problem defines the oracle as only returning true or false, and this effectively tosses out valid
answers. But H has no choice but to return a relationship. It’s the only correct answer. So H will return a relationship, but it won’t be the oracle as far as Q is concerned because it doesn’t
fit the official definition. So Q ends up being invalid because H can’t be combined with Q. Q, as DEFINED, was never in P to begin with. The oracle still exists. It just doesn’t exist in a way
that Q can use it. So this means that the oracle will indeed still return true or false for all P and the proof fails. Q was never in P to begin with, so H never needed to give it a correct
Only if you allow invalid programs can your proof succeed. Like I said before: undecidable = H(undecidable). Big deal. You don’t prove it’s undecidable. You start off that way.
74. #74 Mark C. Chu-Carroll December 3, 2007
Most importantly: the entire point of the halting proof is that there *exist* undecidable programs. Before the development of the theory of computation, most mathematicians believed that *all*
problems were decidable. If you accept the idea of an undecidable program, the question is, *why*? The halting proof is so important because it provides a simple demonstration that there *are*
Second: the proof is about the halting problem. You don’t get to redefine the halting problem, and then criticize the proof because it doesn’t work for your redefined problem.
The basic ideas behind the halting problem are incredibly simple.
If you run a program on a computing device, either the program will eventually stop; or it won’t. There is no grey area. There’s no third choice. There’s no fuzzy relation. Run any computer
program, and eventually it will stop, or it won’t. It’s completely deterministic,
with no fuzz.
A halting oracle is just a program that takes an input, and generates a result “Halt” or “Doesn’t halt”. The input is another program. Regardless of what that other program is,
the oracle will *always* halt, and will *always* generate an answer. If there exist any input for which the oracle doesn’t return either “Halts” or “Doesn’t halt”, then it’s not a halting oracle.
Once again, there is no wiggle room here.
The oracle is a program. Once again, no wiggle room.
The construction of an oracle deceiver is a total function from programs to programs. By the very definition of computing systems, there is no such thing as a program for which a valid
application of program construction rules does not result in a valid program. Once again – no wiggle room. The construction is total – there is no possible input for which the construction
doesn’t generate a valid program. What this means is that *IF* H is a program, there is *no possible way* that Q is not a program, unless you aren’t dealing with a Turing-equivalent computing
system. There is no possible way for the construction to generate something outside of the domain of H.
Think of programs as natural numbers. You can enumerate the set of all possible programs – so assign each program to a natural number. Then the construction is a total function from natural
numbers to natural numbers; and the supposed oracle is a natural number. What you are arguing is that somehow, taking a total function from naturals to naturals, and applying it to a natural
number, is allowed to produce a result that is not a natural number. Nope, sorry. You can’t do that. There’s no wiggle room: a total function is a total function; the domain of the function is
total; the range of the function doesn’t include any non-natural numbers. There is *no room* in the definition for the result of the function to be outside the set of natural numbers (that is,
program labels) for any of its inputs. *It’s a total function* from natural to natural. There is no way for it to produce a result outside of the set of natural numbers.
That’s exactly what’s going on in the halting proof. Under the supposition, the supposed oracle is a program. Therefore the construction process *must* produce a program – because by the
definition of a computing system, the construction is
a *total* function from programs to programs. Given *any* program as input, you get
a program as output. There is no possible way to produce something outside the set of valid programs using the construction rules from the definition of a computing system.
The oracle *by definition*, it total. By definition, must work on *all programs*. Therefore if the oracle is in fact an oracle, it *must* work on the constructed program – because by the
definition of a halting oracle, it must work on all programs; and by the definition of a computing system, if the halting oracle is a valid program, then the deceiver is a program. There is
absolutely no wiggle room here.
There is no such thing as a result from a halting oracle other than “Halts” or “Doesn’t halt”. There is no such thing as a result from valid construction that isn’t a valid program. There is no
such thing as a program for which a genuine halting oracle doesn’t generate a result. There is no wiggle room. *If* a halting oracle exists, *then* a deceiver exists. *If* a deciever does not
exist, *then* a halting oracle does not exist.
The only way around any of that is by either redefining “halting oracle”, or by redefining “computing system”. But then you’re no longer talking about this proof – because the halting proof uses
the standard definitions of computing system and halting oracle.
75. #75 Thony C. December 3, 2007
Cléo I have found just the journal for your revolutionary disproof; here!
76. #76 Jonathan Vos Post December 3, 2007
Cléo Saulnier seems uncomfortable with basic defintions about formal computing systems, but not troubled by integers or equations. Hence I suggest reading (from which I give an excerpt):
A Wonderful Read On Gödel’s Theorem
Let us say that a formal system has an arithmetical component if it is possible to interpret some of its statements as statements about the natural numbers, in such a way that the system proves
some basic principles of arithmetic having to do with summation and multiplication. Given this, we can produce (using Barkley Rosser’s strengthening of Gödel’s theorem in conjunction with the
proof of the Matiyasevich-Davis-Robinson-Putnam theorem about the representability of recursively enumerable sets by Diophantine equations) a particular statement of the form “The Diophantine
p(x1 , . . . , xn ) = 0 has no solution” which is undecidable in the theory, provided it is consistent.
Franzén remarks that
….it is mathematically a very striking fact that any sufficiently strong consistent formal system is incomplete with respect to this class of statements,….
However, no famous arithmetical conjecture has been shown to be undecidable in ZFC…..From a logician’s point of view, it would be immensely interesting if some famous arithmetical conjecture
turned out to be undecidable in ZFC, but it seems too much to hope for.
Wikipedia has a short list of mathematical statements that are independent of ZFC on this page. You may find it interesting.
Saharon Shelah’s result to the effect that the Whitehead problem is undecidable in ZFC is discussed in this paper by Eklof. Shelah has also shown that the problem is independent of ZFC+CH.
There is also an interesting post on a combinatorial problem that is independent of ZFC on the complexity blog. That post also includes several pointers to pieces that are well worth reading.
77. #77 Cléo Saulnier December 11, 2007
I wanted to drop this, but for fear that people may actually believe what Mark is saying, I want to clear up a few things.
First, your undecidable program is undecidable because it’s is invalid. Not because there is a deceiver.
Second, I do get to criticize the halting problem. Open your mind a little. What good does it do to restrict answers? If you do this, then the problem is defining its own answer of undecidability
as a premise and has nothing to do with computing systems at all. It’s restricting what answers are allowed. By doing this, you get invalid, and thus undecidable, programs. The undecidability
comes from the wording of the problem. NOT from any computing system, however it may be defined.
But I don’t need to redefine the problem to show that the proof is incorrect.
You say a program is completely deterministic. I agree. But only for valid programs.
If there exist any input for which the oracle doesn’t return either “Halts” or “Doesn’t halt”, then it’s not a halting oracle.
Sure it is. If you don’t allow answers other than “Halts” or “Doesn’t halt”, then you’re refusing an answer of a relationship which is the only correct answer for a program that acts on the
result of the oracle. Again, you are the one who is deciding that this problem is undecidable, not the computing system. The only way that a relation is not allowed is if you are dealing with
something that is not a formal computing system. This is why I keep pointing back to the flawed definition and why you start off with a premise of undecidability.
Why is it so difficult to accept to let the oracle return whatever it wants, but if it ends up returning true or false for all programs, then it’s the oracle? Why is this any different? Ah!!!
Because this would show why Q is not a valid program. Again, human decisions get into it rather than determining a true property of computing systems. So instead of accepting that Q could never
be built, you’re arguing that it’s the oracle instead that can’t be built. The only way this holds up is if you disallow the oracle from returning correct answers. Unfortunately, you don’t seem
to realise that if you do this, you’re also contradicting the definition of a computing system.
By the very definition of computing systems, there is no such thing as a program for which a valid application of program construction rules does not result in a valid program.
Not so. You must stop repeating this lie. If you write a program that can only handle 0 and 1 but the function can return any natural number, the definition of your program will be invalid. It
won’t exist. Your definition is flawed. Forget composability. You can’t build your definition from incompatible parts. Sure, you can create the program ANYWAYS. But it won’t be the program you
think it is. Like I said, something about your proof must hold up. If Q ceases to be Q, then your proof falls apart.
What this means is that *IF* H is a program, there is *no possible way* that Q is not a program, unless you aren’t dealing with a Turing-equivalent computing system.
No! You’re stuck in assuming that the parts exist before checking if your definition of the program is valid. Invalid programs can’t exist. If what you say it does and what it actually does
aren’t the same, then the program is invalid. It exists, but it’s invalid.
There is no possible way for the construction to generate something outside of the domain of H.
Nobody said this. You’re the only one pushing this argument. I’m saying the program is invalid. It doesn’t do what you claim it does.
Think of programs as natural numbers. You can enumerate the set of all possible programs – so assign each program to a natural number. Then the construction is a total function from natural
numbers to natural numbers; and the supposed oracle is a natural number. What you are arguing is that somehow, taking a total function from naturals to naturals, and applying it to a natural
number, is allowed to produce a result that is not a natural number. Nope, sorry. You can’t do that.
First, you’re assuming there is a number that does what Q does. There is no such number because the oracle MUST return a relation for any program that acts on the results of the oracle. Again,
any program may return any sequence of natural numbers. YOUR DEFINITION! Yet you do not allow this for H? Why not? Ah, the definition! A human restriction. Do you not see that a relationship is
the ONLY correct answer? Allow it to return the only possible correct result. If your argument is that it’s a human restriction in the wording of the halting problem, you must accept that the
proof and the problem say nothing about computing systems. If you allow the correct result from the oracle, there is no natural number that represents your description for Q. Refusing to accept
the correct answer from the oracle is trite. It’s really childish. It means you’re starting with the premise of undecidability since you’re restricting the definition of a computing system.
There is *no room* in the definition for the result of the function to be outside the set of natural numbers (that is, program labels) for any of its inputs.
Ah, so you allow the oracle to have the same range for its results? See, you’re arguing on both sides of the fence. I’ve discredited every single one of your arguments. Yet you have not debunked
a single one of mine. You keep repeating things you’ve read and assume I can’t be right. That’s an unhealthy frame of mind. Look at what the halting problem REALLY means. Open your mind and look
again for the first time without all the baggage you’ve been preconditioned to believe about the halting problem.
There is no possible way to produce something outside the set of valid programs using the construction rules from the definition of a computing system.
Of course there is. If your definition is wrong, you won’t find any natural number that represents your program. Like saying that R is a program that returns two boolean values using one bit.
Can’t do it. That program is invalid. You could still create one where one of the values is lost. The program exists, but it’s invalid. It doesn’t do what you think it does. That’s what you’re
doing with H (and Q). You’re saying the oracle is the oracle, but not the oracle. Well, big deal if you prove a contradiction there.
There is no such thing as a result from a halting oracle other than “Halts” or “Doesn’t halt”.
This is only true if no program can call the oracle and act on the results. If a program does act on the result, then “Halt” or “Doesn’t halt” are invalid results. They are incorrect. It doesn’t
mean it’s undecidable. It simply means that your definition of the oracle is incompatible with your definition of possible programs.
There is no such thing as a program for which a genuine halting oracle doesn’t generate a result.
But you just said you don’t allow valid results (a relation). Don’t you see that if a program acts on the results of the oracle, it MUST be a relation. That this is the only correct answer. If
you don’t allow it, then it’s a human decision to reject that answer. It means that the premise is that of undecidability and not a conclusion.
BTW, why is this an invalid program and not your Q? I hope you say it’s the definition because then you’d have to admit that the wording of the halting problem is incompatible with that of any
formal computing system.
The only way around any of that is by either redefining “halting oracle”, or by redefining “computing system”. But then you’re no longer talking about this proof – because the halting proof
uses the standard definitions of computing system and halting oracle.
You have it backwards. The wording of the halting problem is incompatible with that of a computing system by not allowing the oracle to return the only possible correct value of a relation. You
don’t even need Q to do the opposite. It could do exactly what the oracle says and you’d still have a result of undecidability. This is because both true and false are valid. Hence a relation.
Mark, you really need to take a second look at the halting problem or start using a little logic. Up until now, you’re still convinced I don’t understand the halting problem. That’s your
argument. It’s rather insulting, but one that is ultimately incorrect. The wording of the halting problem is incompatible with computing systems. Nothing about the proof holds up. Not even the
definition of Q. Have you ever thought that the only reason Q can exist under the so-called proof is because the definition of the oracle is incompatible with that of any formal computing system?
Open your mind for once.
Besides, I’ve answered and rebuked every single one of your arguments. OTOH, you persist on thinking that I don’t understand the problem and only repeat what you’ve read in books (or wherever
else) assuming that everything you read is correct at face value. You completely ignore my arguments instead of attacking them. Also note that my conclusions are more consistent than the proof.
Under my arguments, the conclusions are the same no matter if a program does exactly what the oracle says or if it does the opposite. The current proof cannot do that.
78. #78 Mark C. Chu-Carroll December 11, 2007
A Field Medal and million dollar prize are waiting for you. Why don’t you stop wasting time on this pathetic little blog, and go publish your brilliant results?
Oh, yeah. It’s because you’re full of shit.
*If* you redefine the halting problem, *and* you redefine what it means to be a computing system, *then* you are correct. Of course, if you do that, then you’re not actually disproving the
halting theorem. You’re disproving “Cleo’s Idiotic Halting Theorem”.
You can’t redefine a halting oracle to do something other than what it’s defined to do, and then claim that because your definition allows it to do something that a computing system *can’t* do,
that a proof based on the standard meaning of the halting problem is invalid.
If you allow “maybe” as an answer, then you can define a “halting oracle” which is always correct. It’s really simple: always answer “maybe”, and you’re never wrong. But it’s *not* a halting
oracle. It doesn’t tell you anything useful. The whole point of the halting problem is to answer a very simple question: if I run a program P, will it ever stop?
If a computing system is deterministic, then you can simply answer “yes” or “no”. There’s no such thing as a deterministic program for which the answer to that question is anything other than
“yes” or “no”. There is no “maybe”. There is no relationship. There is no fuzz. Either it will stop, or it won’t. A halting oracle will either say it halts, or it doesn’t.
You keep trying to introduce this idea of an “invalid program*. What you don’t seem to be able to understand is that that’s not possible. The rules of program construction describe “how to
construct valid programs*. There is *no way* to construct an invalid program by following the construction rules of the computing system.
The only way that you can get an *invalid* program from the construction rules is if you’re not using an effective computing system. (So, for example, if you’re using a linear-bounded automaton,
then some applications of the recursion principle will create programs that can’t run on an LBA. But if you’re using an ECS – that is, a turing equivalent computing system – then there is no such
thing as an invalid program generated by construction from ECS rules.
That was the point of my “natural numbers” example. If you have a total, one-to-one, onto function f from the natural numbers to the natural numbers. then there’s no such thing as a natural
number n such that f(n) isn’t a natural number. If both f and g are total, one-to-one, onto functions from natural numbers to natural numbers, then the composition f*g is a total, one-to-one onto
function, and there is no natural number n such that f*g(n) is not a natural number.
Program construction rules are like that. If you have a valid program, P – then invoking P as a subroutine is a valid step in another program. If you have a valid program P that generates a
result, then the invocation of P can be used to control a choice between two alternative sub-computations. There is no way around this.
To be very concrete about it – an example of what a program construction rule says is: if E is a program that computes the value of a boolean expression, and F and G are two different valid
programs, then the if/then/else statement “if E then F else G” is a valid program.
That’s *all* that the halting construction does. If that construction doesn’t produce a valid program, then *by definition* you aren’t talking about an effective computing system.
The only way that your “disproof” is correct is if you redefine halting oracle to mean something different than what mathematicians mean by “halting oracle”, and if you redefine “computing
system” to mean something different than what mathematicians mean by “computing system”.
|
{"url":"http://scienceblogs.com/goodmath/2007/11/14/basics-proof-by-contradiction/","timestamp":"2014-04-17T07:35:20Z","content_type":null,"content_length":"229532","record_id":"<urn:uuid:bd118cbe-3191-42ee-a216-d1336470046d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
6.13 ALL — All values in MASK along DIM are true
ALL(MASK [, DIM]) determines if all the values are true in MASK in the array along dimension DIM.
F95 and later
Transformational function
RESULT = ALL(MASK [, DIM])
MASK The type of the argument shall be LOGICAL(*) and it shall not be scalar.
DIM (Optional) DIM shall be a scalar integer with a value that lies between one and the rank of MASK.
Return value:
ALL(MASK) returns a scalar value of type LOGICAL(*) where the kind type parameter is the same as the kind type parameter of MASK. If DIM is present, then ALL(MASK, DIM) returns an array with the
rank of MASK minus 1. The shape is determined from the shape of MASK where the DIM dimension is elided.
ALL(MASK) is true if all elements of MASK are true. It also is true if MASK has zero size; otherwise, it is false.
If the rank of MASK is one, then ALL(MASK,DIM) is equivalent to ALL(MASK). If the rank is greater than one, then ALL(MASK,DIM) is determined by applying ALL to the array sections.
program test_all
logical l
l = all((/.true., .true., .true./))
print *, l
call section
subroutine section
integer a(2,3), b(2,3)
a = 1
b = 1
b(2,2) = 2
print *, all(a .eq. b, 1)
print *, all(a .eq. b, 2)
end subroutine section
end program test_all
|
{"url":"http://gcc.gnu.org/onlinedocs/gcc-4.2.4/gfortran/ALL.html","timestamp":"2014-04-16T20:40:01Z","content_type":null,"content_length":"5181","record_id":"<urn:uuid:744d16a5-7bd2-4aa1-b00e-423d87abce92>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Symmetry and Perturbation Theory in Nonlinear Dynamics (Lecture Notes in Physics M 57)
von Giampaolo Cicogna
Kategorie: Dynamische Systeme
ISBN: 3540659048
Synopsis This text examines the theory of Poincare-Birkhoff normal forms, studying symmetric systems in particular. Attention is focused on general Lie point symmetries and not just on symmetries
acting linearly. Some results on the simultaneous normalization of a vector field describing a dynamical system and vector fields describing its symmetry are presented, and a perturbative approach is
also used. Attention is given to the problem of convergence of the normalizing transformation in the presence of symmetry, with some other extensions of the theory. The results are discussed for the
general case of dynamical systems and also for the specific Hamiltonian setting.
Umschlagtext This book deals with the theory of Poincaré--Birkhoff normal forms, studying symmetric systems in particular. Attention is focused on general Lie point symmetries, and not just on
symmetries acting linearly. Some results on the simultaneous normalization of a vector field describing a dynamical system and vector fields describing its symmetry are presented and a perturbative
approach is also used. Attention is given to the problem of convergence of the normalizing transformation in the...
Dynamische Systeme > Symmetry and Perturbation Theory in Nonlinear Dynamics (Lecture Notes in Physics M 57)
|
{"url":"http://www.uni-protokolle.de/buecher/isbn/3540659048/","timestamp":"2014-04-19T19:45:18Z","content_type":null,"content_length":"6749","record_id":"<urn:uuid:6d56042a-80d1-4641-a9cc-51d07ba2fed8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help me understand rate of twist
March 25, 2013, 05:21 PM
First, I know what 1:9 or 1:7 means, but I want to understand why its important, especially with bullet weights. For example I've read that a 1:7 twist in an AR is more desirable for a heavier bullet
than 1:9.
1. General explanation or anything I'm wrong on.
2. Is the extra twist purely for extra speed with a heavier round?
3. What is considered too heavy a round for a lower rate of twist in an AR example? With 55 grain being normal, is 62 grain OK or pushing it? 75 grain?
Thanks in advance.
|
{"url":"http://www.thehighroad.org/archive/index.php/t-710069.html&s=9af9c680ddc730151477d3b3a8bf4145&","timestamp":"2014-04-16T04:52:49Z","content_type":null,"content_length":"12452","record_id":"<urn:uuid:715363a5-fc48-43a8-8d0b-9eb309317f8f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pages: 1 2
Post reply
Re: Problem
24= 3x8
There must be some link...................
School is practice for the future. Practice makes perfect. But - nobody's perfect, so why practice?
Re: Problem
I know that 3^0 is not the same as 3, but it was said that we need to use 8,8,3,3 - and as you can see every of these numbers are used. I though of it logically, that's why i did it in this way. ????
Re: Problem
I've got the answer...at least I think so....
((8 x 3!)/3)+8
= ((8 x 3 x 2 x 1)/3)+8
= (48/3)+8
= (16)+8
= 24
Re: Problem
Its about half past midnight, but I just thought of a genius answer (that doesn't have factorials) and I couldn't wait to share!
I'm so happy!
Re: Problem
So impressed with both solutions ... !
And I thought we were just being teased.
I might add this one to our "official" puzzle list
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Super Member
Re: Problem
Gasp! The horror.
Boy let me tell you what:
I bet you didn't know it, but I'm a fiddle player too.
And if you'd care to take a dare, I'll make a bet with you.
Re: Problem
What's the official puzzle list?
Re: Problem
Like, this page.
School is practice for the future. Practice makes perfect. But - nobody's perfect, so why practice?
Re: Problem
Re: Problem
That page.
School is practice for the future. Practice makes perfect. But - nobody's perfect, so why practice?
Re: Problem
"Click and ye shall find"
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Problem
Ohh! I see. Is there are way for you to make all links colourful or underlined or something in your forum? You can never tell if it is linked or not... or is that just me?
Re: Problem
Oh... I wonder why not. It is underlined in Internet Explorer and Firefox on my PC.
Do you have anything unusual about your setup? Operating System / Browser or something?
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Problem
Okie dokie, fixed it... I'm using firefox.
I just ticked underline links!
Re: Problem
Well, the answer is not easy, but it exists:
8/3=2,6666666666666666666... (=2,66periodic) (=8/3)
3-2,66p=0,33p (in other way, 9/3-8/3=1/3)
8/0,33p=24 (in other way, 8/1/3=8*3=24)
Super Member
Re: Problem
You realise that answer was said all the way up there, right?
Boy let me tell you what:
I bet you didn't know it, but I'm a fiddle player too.
And if you'd care to take a dare, I'll make a bet with you.
Re: Problem
8/(3-(8/3) =
8/(3-(2 2/3) =
8/(1/3) = 24
Re: Problem
Zach wrote:
You realise that answer was said all the way up there, right?
And plus, you realise that this post is one year old? Almost exactly.
Oh well, well done for working it out. You missed a closing parenthesis in the second line though.
Re: Problem
Interesting... Can't you make some program, which gives all possible solutions?
IPBLE: Increasing Performance By Lowering Expectations.
Re: Problem
Making progress...
IPBLE: Increasing Performance By Lowering Expectations.
Re: Problem
My program is ready. And guess what-there aren't another solutions except:
IPBLE: Increasing Performance By Lowering Expectations.
Re: Problem
This program was personal challenge.
Here's list of all numbers, which can be expressed using 8,8,3,3:
Last edited by krassi_holmz (2006-06-04 19:17:46)
IPBLE: Increasing Performance By Lowering Expectations.
Re: Problem
Here's the code (Mathematica, rewritten, but really messy and hard-to-understand):
K[n1_, n2_] := Union[{n1 + n2, n1 - n2, n1*n2, n1/n2}];
KK[list_, num_] := Union[Flatten[Table[K[list[[i]], num], {i,
1, Length[list]}]]];
list2_] :=
Union[Flatten[Table[KK[list1, list2[[i]]], {i, 1, Length[list2]}]]];
d[a_, b_, c_, d_, f_] := {
f[{a}, f[{b}, f[{c}, {d}]]],
f[{a}, f[f[{b}, {c}], {d}]],
f[f[{a}, f[{b}, {c}]], {d}],
f[f[{a}, {b}], f[{c}, {d}]],
f[f[f[{a}, {b}], {c}], {d}]
d[l_, f_] := d[l[[1]], l[[2]], l[[3]], l[[4]], f];
dd[l_, f_] := dd[l[[1]], l[[2]], l[[3]], l[[4]], f];
dd[a_, b_, c_, d_, f_] :=
Print["abcdfff:", f[{a}, f[{b}, f[{c}, {d}]]]];
Print["abcfdff:", f[{a}, f[f[{b}, {c}], {d}]]];
Print["abcffdf:", f[f[{a}, f[{b}, {c}]], {d}]];
Print["abfcdff:", f[f[{a}, {b}], f[{c}, {d}]]];
Print["abfcfdf:", f[f[f[{a}, {b}], {c}], {d}]];
p = Permutations[{3, 3, 8, 8}];
res = Table[Union[Flatten[d[p[[i]], KKK]]], {i, 1, Length[p]}];
Could explain and rewrite it later.
IPBLE: Increasing Performance By Lowering Expectations.
Re: Problem
Post reply
Pages: 1 2
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=128&p=2","timestamp":"2014-04-20T08:23:22Z","content_type":null,"content_length":"34397","record_id":"<urn:uuid:72ad37eb-d7c6-4223-88d5-f7a6c473480e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boffins solve pizza slicing dilemma
Ever worried you're getting the rough end of the stick when it comes to sharing pizza? Mathematicians have worked out the answer to one of life's cheesiest queries, New Scientists's Stephen Ornes
Lunch with a colleague from work should be a time to unwind - the most taxing task being to decide what to eat, drink and choose for dessert.
For Rick Mabry and Paul Deiermann it has never been that simple. They can't think about sharing a pizza, for example, without falling headlong into the mathematics of how to slice it up. "We went to
lunch together at least once a week," says Mabry, recalling the early 1990s when they were both at Louisiana State University, Shreveport. "One of us would bring a notebook, and we'd draw pictures
while our food was getting cold."
The problem that bothered them was this. Suppose the harried waiter cuts the pizza off-centre, but with all the edge-to-edge cuts crossing at a single point, and with the same angle between adjacent
cuts. The off-centre cuts mean the slices will not all be the same size, so if two people take turns to take neighbouring slices, will they get equal shares by the time they have gone right round the
pizza - and if not, who will get more?
Of course you could estimate the area of each slice, tot them all up and work out each person's total from that. But these guys are mathematicians, and so that wouldn't quite do. They wanted to be
able to distil the problem down to a few general, provable rules that avoid exact calculations, and that work every time for any circular pizza.
Cutting through the centre
As with many mathematical conundrums, the answer has arrived in stages - each looking at different possible cases of the problem. The easiest example to consider is when at least one cut passes plumb
through the centre of the pizza. A quick sketch shows that the pieces then pair up on either side of the cut through the centre, and so can be divided evenly between the two diners, no matter how
many cuts there are.
So far so good, but what if none of the cuts passes through the centre? For a pizza cut once, the answer is obvious by inspection: whoever eats the centre eats more. The case of a pizza cut twice,
yielding four slices, shows the same result: the person who eats the slice that contains the centre gets the bigger portion. That turns out to be an anomaly to the three general rules that deal with
greater numbers of cuts, which would emerge over subsequent years to form the complete pizza theorem.
The first proposes that if you cut a pizza through the chosen point with an even number of cuts more than 2, the pizza will be divided evenly between two diners who each take alternate slices. This
side of the problem was first explored in 1967 by one L. J. Upton in Mathematics Magazine (vol 40, p 163). Upton didn't bother with two cuts: he asked readers to prove that in the case of four cuts
(making eight slices) the diners can share the pizza equally.
Slice dilemma
Next came the general solution for an even number of cuts greater than 4, which first turned up as an answer to Upton's challenge in 1968, with elementary algebraic calculations of the exact area of
the different slices revealing that, again, the pizza is always divided equally between the two diners.
With an odd number of cuts, things start to get more complicated. Here the pizza theorem says that if you cut the pizza with 3, 7, 11, 15... cuts, and no cut goes through the centre, then the person
who gets the slice that includes the centre of the pizza eats more in total. If you use 5, 9, 13, 17... cuts, the person who gets the centre ends up with less.
Rigorously proving this to be true, however, has been a tough nut to crack. So difficult, in fact, that Mabry and Deiermann have only just finalised a proof that covers all possible cases.
Their quest started in 1994, when Deiermann showed Mabry a revised version of the pizza problem, again published in Mathematics Magazine (vol 67, p 304). Readers were invited to prove two specific
cases of the pizza theorem. First, that if a pizza is cut three times (into six slices), the person who eats the slice containing the pizza's centre eats more. Second, that if the pizza is cut five
times (making 10 slices), the opposite is true and the person who eats the centre eats less.
Asterisk spells trouble
The first statement was posed as a teaser: it had already been proved by the authors. The second statement, however, was preceded by an asterisk - a tiny symbol which, in Mathematics Magazine, can
mean big trouble. It indicates that the proposers haven't yet proved the proposition themselves. "Perhaps most mathematicians would have thought, 'If those guys can't solve it, I'm not going to look
at it.'" Mabry says. "We were stupid enough to look at it."
Deiermann quickly sketched a solution to the three-cut problem - "one of the most clever things I've ever seen," as Mabry recalls. The pair went on to prove the statement for five cuts - even though
new tangles emerged in the process - and then proved that if you cut the pizza seven times, you get the same result as for three cuts: the person who eats the centre of the pizza ends up with more.
Boosted by their success, they thought they might have stumbled across a technique that could prove the entire pizza theorem once and for all. For an odd number of cuts, opposing slices inevitably go
to different diners, so an intuitive solution is to simply compare the sizes of opposing slices and figure out who gets more, and by how much, before moving on to the next pair. Working your way
around the pizza pan, you tot up the differences and there's your answer.
Geometrical trick
Simple enough in principle, but it turned out to be horribly difficult in practice to come up with a solution that covered all the possible numbers of odd cuts. Mabry and Deiermann hoped they might
be able to deploy a deft geometrical trick to simplify the problem.
The key was the area of the rectangular strips lying between each cut and a parallel line passing through the centre of the pizza (see diagram). That's because the difference in area between two
opposing slices can be easily expressed in terms of the areas of the rectangular strips defined by the cuts. "The formula for [the area of] strips is easier than for slices," Mabry says. "And the
strips give some very nice visual proofs of certain aspects of the problem."
Unfortunately, the solution still included a complicated set of sums of algebraic series involving tricky powers of trigonometric functions. The expression was ugly, and even though Mabry and
Deiermann didn't have to calculate the total exactly, they still had to prove it was positive or negative to find out who gets the bigger portion. It turned out to be a massive hurdle. "It ultimately
took 11 years to figure that out," says Mabry.
Over the following years, the pair returned occasionally to the pizza problem, but with only limited success. The breakthrough came in 2006, when Mabry was on a vacation in Kempten im Allgäu in the
far south of Germany. "I had a nice hotel room, a nice cool environment, and no computer," he says. "I started thinking about it again, and that's when it all started working."
Programmes help solve problem
Mabry and Deiermann - who by now was at Southeast Missouri State University in Cape Girardeau - had been using computer programs to test their results, but it wasn't until Mabry put the technology
aside that he saw the problem clearly. He managed to refashion the algebra into a manageable, more elegant form.
Back home, he put computer technology to work again. He suspected that someone, somewhere must already have worked out the simple-looking sums at the heart of the new expression, so he trawled the
online world for theorems in the vast field of combinatorics - an area of pure mathematics concerned with listing, counting and rearranging - that might provide the key result he was looking for.
Eventually he found what he was after: a 1999 paper that referenced a mathematical statement from 1979. There, Mabry found the tools he and Deiermann needed to show whether the complex algebra of the
rectangular strips came out positive or negative. The rest of the proof then fell into place.
So, with the pizza theorem proved, will all kinds of important practical problems now be easier to deal with? In fact there don't seem to be any such applications - not that Mabry is unduly upset.
"It's a funny thing about some mathematicians," he says.
Additional questions
"We often don't care if the results have applications because the results are themselves so pretty." Sometimes these solutions to abstract mathematical problems do show their face in unexpected
places. For example, a 19th-century mathematical curiosity called the "space-filling curve" - a sort of early fractal curve - recently resurfaced as a model for the shape of the human genome.
Mabry and Deiermann have gone on to examine a host of other pizza-related problems. Who gets more crust, for example, and who will eat the most cheese? And what happens if the pizza is square?
Equally appetising to the mathematical mind is the question of what happens if you add extra dimensions to the pizza. A three-dimensional pizza, one might argue, is a calzone - a bread pocket filled
with pizza toppings - suggesting a whole host of calzone conjectures, many of which Mabry and Deiermann have already proved.
It's a passion that has become increasingly theoretical over the years. So if on your next trip to a pizza joint you see someone scribbling formulae on a napkin, it's probably not Mabry. "This may
ruin any pizza endorsements I ever hoped to get," he says, "but I don't eat much American pizza these days."
|
{"url":"http://www.sbs.com.au/news/article/2009/12/11/boffins-solve-pizza-slicing-dilemma","timestamp":"2014-04-19T22:11:27Z","content_type":null,"content_length":"93329","record_id":"<urn:uuid:c37c1b0a-d103-412d-bee7-df6d4b39767c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stone Park Math Tutor
Find a Stone Park Math Tutor
...I also maintain high standards for myself and will periodically ask for feedback. I will never charge for a lesson the student or the parent is unsatisfied with the lesson. I am flexible with
23 Subjects: including ACT Math, reading, English, writing
...I used Matlab excessively for analyzing the vibration profiles of the engine with help of digital signal processing tools I can relate a lot of engineering concepts to applications used in the
industry, which helps students to understand them very easily. I have a great desire to share knowled...
16 Subjects: including trigonometry, statistics, discrete math, differential equations
...On the AP Physics B exam, almost half of my students earn scores of 5, the highest score possible, while most of the other half receive scores of 4! In addition to teaching physics, I am also
an instructional coach, hired to help high school science and math teachers improve their professional p...
2 Subjects: including algebra 1, physics
...I believe every student can learn and succeed. I think tutors should explain things in simple ways and work until the student fully understands a concept before moving on. I have acquired more
than 900 hours of tutoring experience over the past 6 years.
26 Subjects: including statistics, SAT writing, linear algebra, logic
...I've tutored for about 5 years in multiple subjects including English, organization, high school math, reading, writing and ACT preparation. I attended the University of Illinois,
Urbana-Champaign, The John Marshall Law School, and I'm currently an LL.M. student at Northwestern University School...
36 Subjects: including geometry, prealgebra, precalculus, trigonometry
|
{"url":"http://www.purplemath.com/Stone_Park_Math_tutors.php","timestamp":"2014-04-19T05:07:24Z","content_type":null,"content_length":"23685","record_id":"<urn:uuid:276e3625-9c53-466b-89c1-e519c9830479>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Design of a Circularly Polarized 2 x 2 Patch Array Operating in the 2.45 GHz ISM Band
Technical Feature
Design of a Circularly Polarized 2 x 2 Patch Array Operating in the 2.45 GHz ISM Band
The design of a corporate feed network producing a sequential rotation for a 2 x 2 circular polarized patch array is presented in this article. The feed network has also been designed to produce
equal power excitation for each patch and a match condition at the feed point. The design of the array is based on a new and simplified expression for the input impedance of a rectangular patch
antenna. Compared with a single patch, the designed antenna produces an increased bandwidth for the return loss and axial ratio. There is good agreement between the simulated and experimental results
for the return loss and axial ratio.
M. Mathian, E. Korolkewicz, P. Gale and E.G. Lim
Communication Research Group
University of Northumbria at Newcastle, UK
Microstrip antennas have a simple planar structure, low profile and can be easily fabricated using printed circuit technology.^1 Consequently, they are increasingly used in a variety of wireless
communication systems. Circular polarized patch arrays normally consist of identical rectangular or square patches fed by a corporate feed network using couplers or power splitters.^2,3 This article
describes the design of a serial corporate feed network producing sequential rotation for a 2 x 2 patch array. Sequential rotation improves polarization purity and radiation pattern symmetry over a
wide range of frequencies.^4,5 The power splitters used in the feed network consist of seven quarter-wave transformers; consequently, it is not possible to obtain closed form solutions for the
design. The design is therefore based on the required power split for each patch, the maximum realizable impedance values for the microstrip lines to reduce spurious radiation and coupling by the
feed network, and to obtain a match at the feed point.
Fig. 1 Feed network for the 2 x 2 patch array.
Design of a Sequential Rotation Corporate Feed Network
Figure 1 shows a 2 x 2 circularly polarized patch array consisting of four dual-feed circular polarized square patches, each with an input impedance Z[incp] and a series feed network producing the
sequential rotation. The feed network is designed to produce a match at the feed point, a 90° phase difference between adjacent patches and an equal power feed to each patch.
The transmission line equivalent circuit of the array is shown in Figure 2 . To reduce spurious radiation and coupling effects, it is important that the width of the microstrip feed lines be as
narrow as possible and the characteristic impedances Z[1] , Z[2] ,…Z[7] should be as high as can be practically realized.
Fig. 2 Equivalent transmission line circuit of the array.
In the design of the feed network, the following assumptions are made: The input impedance Z[incp] of each individual two-feed circularly polarized patch antenna is 50 Ω; the highest characteristic
impedance that can be practically realized is 140 Ω using a PCB (FR4) substrate (e[r] = 4.3, tanδ = 0.017, h = 1.575 mm and t = 0.035 mm).
The power P fed into junction V1 by the source is
Z[0] = 50 Ω
For the required power split
Z[in1] = 200 Ω
Z[1] = 100 Ω
Z[in2] = 66.7 Ω
At junction V[3] , to obtain narrow width feed lines, it is assumed that Z[5] = 120 Ω and since equal power is required to be fed into patches 3 and 4, then Z[7] = Z[6] = 77.5 Ω, Z[inB] = 60 Ω. The
feed network at junction V[2] now reduces to the one shown in Figure 3 . At junction V[2] , one third of the input power is fed into patch 2 and the remainder of the power is fed into patches 3 and 4
so that
The feed network is now reduced to three variables Z[2] , Z[3] and Z[4] . It is necessary to make an assumption for one of these impedances. If Z[3] = 120 Ω, it can be shown that Z[4] = 93 Ω and Z[2]
= 80 Ω.
Fig. 3 Feed network at V[2] .
Design of a Two-feed Circularly Polarized Patch Antenna
The design of a two-feed circular polarized patch antenna is discussed in the following section, where the patch is modeled as a parallel-tuned circuit taking into account copper, dielectric and
radiation losses.
Fig. 4 Transmission line and parallel tuned circuit models of the patch antenna.
Modeling of the Patch Antenna by a Parallel-tuned Circuit
Figure 4 shows a rectangular patch antenna of length L and width W. The transmission line model of the antenna is also shown where G[R] and C represent the radiation losses and fringing effects,
respectively. A transmission line of length L, having a low characteristic impedance Z[0] , connects the two parallel C-G[R] circuits. The length L is designed to be slightly less than a
half-wavelength at the design frequency, so that the input admittance is given by Y[1] = G-jB[c] . The problem with the transmission line model is that it does not take into account the dielectric
and copper losses. However, the antenna can now be modeled as a parallel G-L-C tuned circuit, where the conductance G represents the total losses.
Based on the parallel equivalent circuit C-L-G, it can be shown that a simplified expression for the input impedance of a rectangular patch, for the 10 and 01 modes, is given by^6
k = a complex phase constant where the losses (copper, dielectric and radiation) of the patch are included by using the quality factor Q.
The dielectric under the patch can be considered to be lossy due to copper (Q[c] ), dielectric (Q[d] ) and radiation (Q[r] ) losses. The permittivity of the substrate e[r] can then be replaced by
Q = total quality loss factor given by
These losses can be determined using the following equations
The characteristic impedance Z[0] of the patch is given by
e[reff] = effective permittivity of the substrate
s[C] = metal conductivity
The total conductance G is given by
G = 2(G[R] ± G[12] ) (11)
G[R] is the radiation conductance and G[12] is the coupled conductance between the radiating slots of the antenna.
The mutual conductance G[12] can be expressed as
k[0] = phase constant in free space
q = variable of the spherical coordinate system used to evaluate the radiated power from the patch antenna
A square patch antenna was designed to operate at 2.45 GHz. The predicted input impedance at resonance and the Q-factor of the antenna were determined using the above theory and compared with
experimental measurements and full-wave analysis software (Ensemble v.7). The results are shown in Table 1 .
│ Table 1 │
│ Input Impedance │
│ │Rin (Ω)│Q-Factor │
│Predicted │ 180 │ 34.90 │
│Practical │ 189 │ 35.35 │
│Simulation │ 194 │ 34.01 │
Design of Dual-feed Single Patch Circularly Polarized Antenna and a 2 x 2 Patch Array
Figure 5 shows a two-feed power splitting arrangement for a square patch antenna to produce circular polarization and an input impedance Z[incp] = 50 Ω. The transmission line equivalent circuit of
the single two-feed patch is shown in Figure 6 .
Fig. 5 Two-feed circularly polarized square patch antenna.
Fig. 6 Transmission line model of the circularly polarized patch antenna.
The lengths l[1] and l[2] were designed to produce a 90° phase shift between the two feed points of the square patch. For Z[inp] = 180 Ω and Z[incp] = 50 Ω, then Z[1] = 100 Ω and Z[2] = 134 Ω. The
equivalent circuit for the circular polarized patch antenna shown in Figure 7 was modeled using Microwave Office 2001.^8
Fig. 7 Equivalent circuit of the circularly polarized patch antenna.
It is possible using this software to determine the magnitude and phase of the voltages V[x] and V[y] across the two tuned parallel circuits. The axial ratio (AR) for the patch antenna is given by^9
E[x] = magnitude of the electric field in the x-direction
E[y] = magnitude of the electric field in the y-direction
q = phase difference between the two electrical field components
Fig. 8 Two-feed cicrularly polarized patch antenna.
Fig. 9 Return loss.
Fig. 10 Axial ratio vs. frequency.
The printed circuit board of the designed antenna is shown in Figure 8 . Figures 9 and 10 show the measured and computer-predicted return loss and axial ratio as function of frequency, using
Microwave Office 2001 and Ensemble v.7. Figure 11 gives the axial ratio as a function of the angle q simulated with Ensemble v.7 and measured experimentally. The corporate feed network was designed
(as previously discussed) and a photograph of the circuit board of the array is shown in Figure 12 . The equivalent circuit of the 2 x 2 circularly polarized patch array shown in Figure 13 was
simulated using Microwave Office 2001 to predict the return loss and axial ratio. Figures 14 to 17 show a comparison between the experimental and predicted results for the return loss and axial ratio
of the designed array.
Fig. 11 Axial ratio as a function of q.
Fig. 12 Printed circuit of the 2 x 2 array.
Fig. 13 Equivalent cicruit model of the 2 x 2 circularly polarized patch array.
The design of a sequential rotation corporate feed network for a 2 x 2 patch array has been presented. The fundamental element of the array is the circular polarized square patch. In this design the
input impedance of the patch has been modeled as a parallel-tuned circuit where copper, dielectric and radiation losses have been taken into account. For the single patch and the array there is good
agreement between theory, simulation and experimental results confirming the described design. The designed array shows a wide bandwidth for the return loss and axial ratio.
Fig. 14 Return loss.
Fig. 15 Axial ratio vs. frequency.
Fig. 16 Axial ratio as a function of q.
Fig. 17 Polar pattern (RHCP, LHCP) at 2.45 GHz for j = 0.
1. J.R. James, P.S. Hall and C. Wood, "Microstrip Antenna: Theory and Design," IEE Electromagnetic Waves , Series 12, Peter Peregrines, 1986.
2. Y.T. Lo, W.F. Richards, P.S. Simon, J.E. Brewer and C.P. Yuan, "Study of Microstrip Antenna Elements, Arrays, Feeds, Losses and Applications," Final Technical Report , RADC-TR-81-98, June 1981.
3. H.J. Song and M.E. Bialkowski, "Ku-band 16 x16 Planar Array with Aperture-coupled Microstrip-patch Elements," IEE Antennas and Propagation Magazine , Vol. 40, No. 5, October 1998.
4. P.S. Hall and C.M. Hall, "Coplanar Corporate Feed Effects on Microstrip Patch Array Design," IEE Proceedings , 4, 135, (3), 1998, pp. 180-186.
5. A.E. Efanor and H.W. Tim, "Corporate-fed 2 x 2 Planar Microstrip Patch Subarray for the 35 GHz Band," IEE Antennas and Propagation Magazine , Vol. 37, No. 5, October 1995, pp. 49-51.
6. E.G. Lim, E. Korolkiewicz, S. Scott and B. Al-jibouri, "An Efficient Formula for the Input Impedance of a Microstrip Rectangular Patch Antenna With a Microstrip Offset Feed," Internal Report,
Communication Research Group, School of Engineering, University of Northumbria, Newcastle-Upon-Tyne, UK, April 2001.
7. Ansoft Ensemble© v7 - Software Based on Full Wave Analysis.
8. Microwave Office© 2001 - Full Wave Spectral Galerkin Method of Moments.
9. C.A. Balanis, Antenna Theory Analysis and Design , John Wiley & Sons Inc., New York, NY 1997.
|
{"url":"http://www.microwavejournal.com/articles/3448-design-of-a-circularly-polarized-2-x-2-patch-array-operating-in-the-2-45-ghz-ism-band","timestamp":"2014-04-20T22:09:50Z","content_type":null,"content_length":"67647","record_id":"<urn:uuid:d7b99df2-ca5e-436b-8041-087c545e53d0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Negative, Complex Dimensions
Replies: 7 Last Post: Aug 10, 2001 3:11 AM
Messages: [ Previous | Next ]
Re: Negative, Complex Dimensions
Posted: Aug 9, 2001 7:54 PM
"Alexander Sheppard" <alex1s1emc22@icqmail.com> wrote in message
> Are there any definitions for negative or complex dimensions?
I've never seen anything involving complex dimensions but if you want you
can *sort of* extend
dimensions into the negatives, consider the following analogy:
If you take a point (0D) and extend it a finite distance in 1-space, you
find a line segment.
If you take a line segment and extend it a finite distance in 2-space,
you find a square.
If you take a square and extend it a finite distance in 3-space, you
find a cube.
If you take a cube and extend it a finite distance in 4-space, you find
a tesseract.
This process can be repeated forever. What if we look at it in the opposite
direction? In some
negative dimensional space, there must (loosely using the word "must" here)
exist some -1
dimensional that when extended a finite distance, you arrive at a point.
This is almost impossible
to visualize, even harder than high dimensions like a 15D hypercube.
However, here's the way
I would visualize it: think of a point as the basic unit for all zero and
positive dimensions. You
can form the analogy to the atom. Now, think of the
electrons/protons/neutrons as the -1 dimensional
objects (for the purposes here, I'll just call them -1-points). They're like
little strings that when
all hooked together we get a 0D point. To extend it into -2 dimensions,
think of -2-points as little
quark-like strings that hook together into a -1-point.
However, complex dimensions (as far as I can see) serve no real purpose.
Maybe in the future there
will be some field of math which grows around them.
-- Entropix
Date Subject Author
8/8/01 Negative, Complex Dimensions Alex S.
8/9/01 Re: Negative, Complex Dimensions Twisted Topologist
8/9/01 Re: Negative, Complex Dimensions Jonathan Hoyle
8/9/01 Re: Negative, Complex Dimensions Russell Harper
8/9/01 Re:Negative,Complex Dimensions Leroy Quet
8/9/01 Re: Negative, Complex Dimensions Earthlink News
8/10/01 Re: Negative, Complex Dimensions Don McConnell
8/10/01 Re: Negative, Complex Dimensions Fred Galvin
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=73505&messageID=328928","timestamp":"2014-04-17T13:02:21Z","content_type":null,"content_length":"25932","record_id":"<urn:uuid:724949c6-d574-4013-8d99-38ce47006649>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I have to write a program to read 10 integers into an array
Newbie Poster
4 posts since Mar 2008
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
I have to write a program to read 10 integers into an array. It will then read in one more integer. My program should then compute and output how many distinct pairs of integers in the array add up
to the last number that was input. Note I cannot use the same number twice in a sum, unless it has been input two or more times.
How do I go about doing this?
I really don't know how to go about doing this, I know that I have to have the program promt you to input 10 numbers, and then I have to get the program to compare all the possible sums of two of the
integers and then say if they equal 9.
So far I have:
#include <iostream.h>
#include <stdlib.h>
int main()
int num, count;
int List[10],i;
// input 10 integers into an array and compute the average
for (i=0;i<10;i++)
cout << "Please enter an integer ";
cin >> List ;
But I don't know where to go from here
Practically a Master Poster
694 posts since Jul 2007
Reputation Points: 84 [?]
Q&As Helped to Solve: 66 [?]
Skill Endorsements: 6 [?]
>> Try to use Codes in Coding tags
>> Try to use cout statement out of looping statement .
int main()
int arr[10];
int i,j,sum=0,flag;
cout<<"Enter the Integers : ";
cin>>arr[i]; //Loop Takes ten integer
flag=0; // Flag is assigned 0 intially
if(arr[i]==arr[j]) // Any integer repeat then flag becomes 1
if(flag==0) //If any integer is repeated then it is not counted
cout<<" Sum is :"<<sum; //Sum is displayed
return 0;
I think it might help you..
|
{"url":"http://www.daniweb.com/software-development/cpp/threads/116338/i-have-to-write-a-program-to-read-10-integers-into-an-array","timestamp":"2014-04-18T23:17:07Z","content_type":null,"content_length":"34650","record_id":"<urn:uuid:5529b21e-1a27-4460-bb1e-4ce2482dfd1c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ELMRES embedded in SPARSKIT
The package sparski.tar.gz is a preliminary version of SPARSKIT with the minimal revisions necessary to use ELMRES (have to include an extra argument for pivoting, see paper on elmres above). Update
on May 15, 2001 to remove extra object files which prevent successful builds. Observe that some makefile will have to be changed to get the correct blas library on your machine. Also the -O2 flag may
be helpful in getting faster code. Please send comments when this doesn't install well. By the way, the run programs are in ITSOL but to change them you need to .. a directory and do a global make,
then go back to ITSOL and do a make. Else you do not change the matrix you are using. ALSO THE ACTUAL SOLVERS ARE IN ITSOL/iters.f which also has some documentation on the ipar and fpar parameters
which control preconditioning and convergence criteria and are set in riters.f (for riters.ex executable). This will create a directory called svdblast. May also include FELMRES (ELMRES with flexible
Tar file of Fortran 77 codes for BR iteration.
BR iteration finds eigenvalues of small-band Hessenberg matrices. These include br*.f files which actually perform br iterations, I'm not currently clear which ones are best. There are also some qmr
things in this directory (qmr is a look-ahead Lanczos method, which can be used to return some extremal eigenvalues). The qmr is due to Freund and Nachtigal.
Fortran 77 codes for Householder bidiagonalization
This package contains a bidiagonalization routine which performs Householder bidiagonalization rather faster than the current LAPACK dgebrd, and which is designed to be LAPACK compatible (for
inclusion in LAPACK). This work was supported by National Science Foundation Grant EIA-0103642.
Tutorials The following are HPC courses and short courses presented by Gary Howell at NC State in 2004 and 2005. http://www.ncsu.itd/hpc/Courses/Courses_content.html
Rick Weed's MPI tutorial
These powerpoint files are a short course presented by Rick Weed in 1999.
Proposals MSPA-MCS Bidiagonalization and PCA: Algorithms and Architectures (pdf)
Final Report: EIA-0103642: Cache Efficient and Parallel Householder Bidiagonalization (pdf)
|
{"url":"http://ncsu.edu/hpc/Documents/Publications/gary_howell/contents.html","timestamp":"2014-04-17T03:54:08Z","content_type":null,"content_length":"14038","record_id":"<urn:uuid:ab7a9546-35ad-485c-8000-e6d5e6ed4abc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The cool maths that games can teach you!
Come on Ryos, you just scramled for something to shut me up with
Remember, we're talking about a computer implementation algo.
How would you even define (store) an open sphere except by triangulating it or using an infinite amount of data if the aperture (opening) is irregular? Tough one, isn't it?
And even if you could, somehow - other than triangulating -, store your open sphere, wouldn't it make the entire exercise futile seeing as how a normal (closed) sphere is defined so much simpler: a
vertex (centre position) and a radius?
But the short, simple answer is, as you might expect, "I don't know".
But I've never thougt of it from this angle... what a difference it makes to be outside the box, no?
2nd. point:
Adminy, will you take me up on my offer to coproduce a 3D engine? you good at coding?
Oh, and here's a 3D treat ;D :
Applying fish eye lens projection effect :
and, analogue,
Next time I'll paste an description of curved interpolation. Stay tuned!!!
Enjoy. I'll be launching a HomeSite (eventually - I'm such a sloth
Last edited by sonyafterdark (2005-09-18 13:35:39)
An intelligent man usually knows how to reach his goals. A wise man also knows what goals to reach...
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=14483","timestamp":"2014-04-18T13:34:38Z","content_type":null,"content_length":"25642","record_id":"<urn:uuid:fe0d7f8c-9a00-4984-9d4c-08a36938a164>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Given that a + b(1+x)^3 + c(1+2x)^3 + d(1+3x)^3 = x^3, find the values for a, b c, d.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
so do i need to expand the things using binomial, or what??
Best Response
You've already chosen the best response.
because it's a mixture of AP's and GP's.
Best Response
You've already chosen the best response.
use binomial please if confused ask me please
Best Response
You've already chosen the best response.
so i expanded it, but everything is in b's and c's and d's.
Best Response
You've already chosen the best response.
so use systems of equations.
Best Response
You've already chosen the best response.
but then everything is equated to 0.
Best Response
You've already chosen the best response.
does anyone have any idea as to what i could do here??
Best Response
You've already chosen the best response.
x^3+d (-1-9 x-27 x^2-27 x^3)+b (-1-3 x-3 x^2-x^3)-c(1+2 x)^3=a
Best Response
You've already chosen the best response.
a + b(1+x)^3 + c(1+2x)^3 + d(1+3x)^3 = x^3 a + b(1 + 3x + 3x^2 + x^3) + c(1 + 6x + 12x^2 + 8x^3) + d(1 + 9x + 27x^2 + 27x^3)=x^3 Now leave the RHS, and group the LHS accordingly: (b + 8c + 27c)x^
3 + (3b + 12c + 27d)x^2 + (3b + 6c + 9c)x + (a + b + c + d) = x^3 it follows that: b + 8c + 27c=1 3b + 12c + 27d=0 3b + 6c + 9c=0 a + b + c + d =0 solve that.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50ce116fe4b0031882dc63b3","timestamp":"2014-04-19T15:37:05Z","content_type":null,"content_length":"47008","record_id":"<urn:uuid:f749d409-a523-41e4-b8cf-b20c685da17d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
About the definition of Borel and Radon measures
up vote 6 down vote favorite
I am trying to understand the notion of Radon measure, but I am a little bit lost with the different conventions used in the litterature. More precisely, I have a doubt about the very definition of
Borel measure.
Suppose that $(X,\mathcal{B},\mu)$ is a measure space, where $X$ is a topological space. I have find two different definitions for "$\mu$ is a Borel measure":
-Def 1 : $\mu$ is a Borel measure if $\mathcal{B}$ contains the Borel $\sigma$-algebra of $X$,
-Def 2: $\mu$ is a Borel measure if $\mathcal{B}$ is exactly the Borel $\sigma$-algebra of $X$.
The same thing happens for the notion of Radon measure, as it can be either considered as Borel measure in the sense of Def 1, or in the sense of Def 2.
Of course, Def 1 gives a more general notion of Borel or Radon measure. For example the Lebesgue measure (defined on the Lebesgue $\sigma$-algebra of $\mathbb{R}^n$) is Radon in the sense of Def 1,
but not in the sense of Def 2.
Are there (other) reasons as to why one may prefer Def 1 to Def 2 or vice versa ?
Apparently, Def 2 makes it quite difficult to have a "complete Radon measure", which makes me think that it is a little bit artificial or restrictive. But maybe many results hold only for Radon
measures in the sense of Def 2, without possible extension to Radon measures in the sense of Def 1 ? Or maybe there is a trivial way to transfer any result involving a Borel measure in the sense of
Def 2 to a result involving a Borel measure in the sense of Def 1 ?
A related question is the following : if $\mu$ is Radon in the sense of Def 2, will its completion be Radon in the sense of Def 1 ? Same question when you replace "Radon" by "inner regular", "outer
regular", and "locally finite".
measure-theory real-analysis integration
1 I'm not familiar with definition 1 for Borel measures, but the main poin of Radon measures is that they are inner regular, so the measurable sets can be approximated by Borel sets anyways. –
Michael Greinecker Oct 13 '12 at 10:20
2 According to Bourbaki's definition, a Radon Measure is a certain kind of linear functional on a certain kind of space of continuous functions. So to start with it is not even defined on Borel
sets. – Gerald Edgar Oct 13 '12 at 16:10
add comment
2 Answers
active oldest votes
Let $(X,\mathcal M, \mu)$ be a measure space, where $\mu$ is a positive measure and $X$ is topological space. Let $\mathcal B$ the Borel $\sigma$-algebra on $X$.
The measure $\mu$ is called a Borel measure whenever $\mathcal M\supset \mathcal B$ and $\mu$ is finite on compact sets.
up vote A Radon measure $L$ on $X$ is a continuous linear form on the vector space $C_c(X;\mathbb R)$ (real-valued continuous functions with compact support). The celebrated Riesz-Markov
5 down representation theorem establishes that if $X$ is a locally compact space and $L$ is positive (i.e. non-negative on non-negative functions) then there exists a complete outer regular measure
vote space $(X,\mathcal M, \mu)$ such that $\mu$ is a Borel measure and $$ Lf=\int_Xfd\mu,\quad\text{for $f\in C_c(X;\mathbb R)$}. $$ Inner regularity is true when $X$ is $\sigma$-compact. Walter
Rudin classical book, Real and Complex Analysis remains the best reference in the literature.
add comment
From a geometric measure theory perspective, it is standard to define Radon measures $\mu$ to be Borel regular measures that give finite measure to any compact set. Of course, their
connection with linear functionals is very important, but in all the references I know, they start with a notion of a Radon measure and then prove representation theorems that represent
linear functionals by integration against Radon measures.
Here are some examples:
$\color{blue}{I:}$ Evans and Gariepy's Measure Theory and Fine Properties of Functions states it this way:
1. A [outer] measure $\mu$ on $X$ is regular if for each set $A \subset X$ there exists a $\mu$-measurable set $B$ such that $A\subset B$ and $\mu(A)=\mu(B)$.
2. A measure $\mu$ on $\Bbb{R}^n$ is called Borel if every Borel set is $\mu$-measurable.
3. A measure $\mu$ on $\Bbb{R}^n$ is Borel regular if $\mu$ is Borel and for each $A\subset\Bbb{R}^n$ there exists a Borel set $B$ such that $A\subset B$ and $\mu(A) = \mu(B)$.
4. A measure $\mu$ on $\Bbb{R}^n$ is a Radon measure if $\mu$ is Borel regular and $\mu(K) < \infty$ for each compact set $K\subset \Bbb{R}^n$.
$\color{blue}{II:}$ In De Lellis' very nice exposition of Preiss' big paper, he doesn't even define Radon explicitly, but rather talks about Borel Regular measures that are also locally
finite, by which he means $\mu(K) < \infty$ for all compact $K$. His Borel regular is a bit different in that he only considers measurable sets -- $\mu$ is Borel regular if any measurable
set $A$ is contained in a Borel set $B$ such that $\mu(A) = \mu(B)$. (I am referring to Rectifiable Sets, Densities and Tangent Measures by Camillo De Lellis.)
$\color{blue}{III:}$ In Leon Simon's Lectures on Geometric Measure Theory, he defines Radon measures on locally compact and separable spaces to be those that are Borel Regular and finite on
compact subests.
$\color{blue}{IV:}$ Federer 2.2.5 defines Radon Measures to be measure a $\mu$, over a locally compact Hausdorff spaces, that satisfy the following three properties:
1. If $K\subset X$ is compact, then $\mu(K) < \infty$.
2. If $V\subset X$ is open, then $V$ is $\mu$ measurable and
$\hspace{1in} \mu(V) = \sup\mu(K): K\text{ is compact, } K\subset V$
3. If $A\subset X$, then
$\hspace{1in} \mu(A) = \inf\mu(V): V\text{ is open, } A\subset V$
Note: it is a theorem (actually, a Corollary 1.11 in Mattila's Geometry of Sets and Measures in Euclidean Spaces) that a measure is a Radon a la Federer if and only if it is Borel Regular
and locally finite. I.e {Federer Radon} $\Leftrightarrow$ {Simon or Evans and Gariepy Radon}. (I am referring of course to Herbert Federer's 1969 text Geometric Measure Theory.)
$\color{blue}{V:}$ For comparison, Folland (in his real analysis book) defines things a bit differently. For example, he defines regularity differently than the first, third and fourth texts
up vote above. In those, a measure $\mu$ is regular if for any $A\subset X$ there is a $\mu$-measurable set $B$ such that $A\subset B$ and $\mu(A) = \mu(B)$. In Folland, a Borel measure $\mu$ is
5 down regular if all Borel sets are approximated from the outside by open sets and from the inside by compact sets. I.e. if
$\hspace{1in}\mu(B) = \inf \mu(V): V\text{ is open, } B\subset V$
$\hspace{1in}\mu(B) = \sup \mu(K): K\text{ is compact, } K\subset B$
for all Borel $B\subset X$.
Folland's definition of Radon is very similar to Federer's but not quite the same:
A measure $\mu$ is Radon if it is a Borel measure that satisfies:
1. If $K\subset X$ is compact, then $\mu(K) < \infty$.
2. If $V\subset X$ is open, then
$\hspace{1in} \mu(V) = \sup\mu(K): K\text{ is compact, } K\subset V$
3. If $A\subset X$ and $A$ is Borel then
$\hspace{1in} \mu(A) = \inf\mu(V): V\text{ is open, } A\subset V$
... and by Borel measure, Folland means a measure whose measuralbe sets are exactly the Borel sets.
Discussion: Why choose one definition over another? Partly personal preference -- I prefer the typical approach taken in geometric measure theory, starting with an outer measure and
progressing to Radon measures a la Evans and Gariepy or Simon or Federer or Mattila. It seems, somehow, more natural and harmonious with the Caratheodory criterion and Caratheodory
construction used to generate measures, like the Hausdorff measures.
With this approach, for example, sets with an outer measure of 0 are automatically measurable.
Another reason not to use the more restrictive definition 2 (in the question above) it makes sense to require that continuous images of Borel sets be measurable. But all we know is that
continuous maps map Borel to Suslin sets. And there are Suslin sets which are not Borel! If we use the definition of Borel regular, as in I,III and IV above, then Suslin sets are measurable.
There is a very nice discussion of this in section 1.7 of Krantz and Parks' Geometric Integration Theory -- see that reference for the definition of Suslin sets. (Krantz and Parks is yet
another text I could have added to the above list that agrees with I, III, and IV as far as Radon, Borel regular, etc. goes.
What would be a good example of a non-Borel regular Borel measure on a second countable metric space? All the standard procedures for constructing measures I'm aware of seem to yield
measures satisfying this condition. The examples I was able to produce either live on large (i.e. non-separable) spaces or fail to measure all Borel sets. – Theo Buehler Jan 1 '13 at 22:51
Quick answer is that I don't know (haven't thought about it). I have student who loves counterexamples and is something of an expert on them. I will ask him. – Kevin R. Vixie Jan 2 '13 at
Thanks! Meanwhile, I remembered an old construction due to Oxtoby dx.doi.org/10.1090/S0002-9947-1946-0018188-5 which produces a non-trivial (invariant and inner regular) Borel measure on
every separable completely metrizable group. If the group is not locally compact then every open set has infinite measure, but there are always sets of finite measure, thus Borel
regularity fails. This construction can be adapted to give a non-Borel regular Borel measure even on $\mathbb{R}$, by working on the set of irrationals and using that they are homeomorphic
to $\Bbb{Z^N}$. – Theo Buehler Jan 4 '13 at 16:42
To get your (Oxtoby's) example are you using the version of regularity used by Folland? I assume so ... – Kevin R. Vixie Jan 4 '13 at 19:00
add comment
Not the answer you're looking for? Browse other questions tagged measure-theory real-analysis integration or ask your own question.
|
{"url":"http://mathoverflow.net/questions/109505/about-the-definition-of-borel-and-radon-measures","timestamp":"2014-04-20T06:13:47Z","content_type":null,"content_length":"68102","record_id":"<urn:uuid:06cf476d-1255-49db-bf34-51a0e83dccbf>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
De Rham-Hodge theory for L p -cohomology of infinite coverings, Topology 16
"... Given a C ∗-algebra A with a semicontinuous semifinite trace τ acting on the Hilbert space H, we define the family A R of bounded Riemann measurable elements w.r.t. τ as a suitable closure, à la
Dedekind, of A, in analogy with one of the classical characterizations of Riemann measurable functions [1 ..."
Cited by 4 (4 self)
Add to MetaCart
Given a C ∗-algebra A with a semicontinuous semifinite trace τ acting on the Hilbert space H, we define the family A R of bounded Riemann measurable elements w.r.t. τ as a suitable closure, à la
Dedekind, of A, in analogy with one of the classical characterizations of Riemann measurable functions [16], and show that A R is a C ∗-algebra, and τ extends to a semicontinuous semifinite trace on
A R. Then, unbounded Riemann measurable operators are defined as the closed operators on H which are affiliated to A ′′ and can be approximated in measure by operators in A R, in analogy with
improper Riemann integration. Unbounded Riemann measurable operators form a τ-a.e. bimodule on A R, denoted by AR, and such bimodule contains the functional calculi of selfadjoint elements of A R
under unbounded Riemann measurable functions. Besides, τ extends to a bimodule trace on AR. As type II1 singular traces for a semifinite von Neumann algebra M with a normal semifinite faithful
(non-atomic) trace τ have been defined as traces on M − M-bimodules of unbounded τ-measurable operators [5], type II1 singular traces for a C ∗-algebra A with a semicontinuous semifinite (non-atomic)
trace τ are defined here as traces on A − A-bimodules of unbounded Riemann measurable operators (in AR) for any faithful representation of A. An application of singular traces for C ∗-algebras is
contained in [6].
- J. Funct. Anal , 1995
"... The space ΓX of all locally finite configurations in a Riemannian manifold X of infinite volume is considered. The deRham complex of square-integrable differential forms over ΓX, equipped with
the Poisson measure, and the corresponding deRham cohomology are studied. The latter is shown to be unitari ..."
Cited by 4 (0 self)
Add to MetaCart
The space ΓX of all locally finite configurations in a Riemannian manifold X of infinite volume is considered. The deRham complex of square-integrable differential forms over ΓX, equipped with the
Poisson measure, and the corresponding deRham cohomology are studied. The latter is shown to be unitarily isomorphic to a certain Hilbert tensor algebra generated by the L 2-cohomology of the
underlying manifold X.
, 2001
"... A semicontinuous semifinite trace is constructed on the C*-algebra ..."
, 1996
"... Given a unital complex *-algebra A, a tracial positive linear functional ø on A that factors through a *-representation of A on Hilbert space, and an A- module M possessing a resolution by
finitely generated projective A-modules, we construct homology spaces H k (A; ø; M ) for k = 0; 1; : : : . Ea ..."
Cited by 2 (0 self)
Add to MetaCart
Given a unital complex *-algebra A, a tracial positive linear functional ø on A that factors through a *-representation of A on Hilbert space, and an A- module M possessing a resolution by finitely
generated projective A-modules, we construct homology spaces H k (A; ø; M ) for k = 0; 1; : : : . Each is a Hilbert space equipped with a *-representation of A, independent (up to unitary
equivalence) of the given resolution of M . A short exact sequence of A-modules gives rise to a long weakly exact sequence of homology spaces. There is a Kunneth formula for tensor products. The von
Neumann dimension which is defined for A-invariant subspaces of L 2 (A; ø ) n gives well-behaved Betti numbers and an Euler characteristic for M with respect to A and ø .
"... We develop the theory of twisted L²-cohomology and twisted spectral invariants for at Hilbertian bundles over compact manifolds. They can be viewed as functions on H¹(M, R) and they generalize
the standard notions. A new feature of the twisted L²-cohomology theory is that in addition ..."
Cited by 1 (1 self)
Add to MetaCart
We develop the theory of twisted L²-cohomology and twisted spectral invariants for at Hilbertian bundles over compact manifolds. They can be viewed as functions on H¹(M, R) and they
generalize the standard notions. A new feature of the twisted L²-cohomology theory is that in addition to satisfying the standard L² Morse inequalities, they also satisfy certain asymptotic
L² Morse inequalities. These reduce to the standard Morse inequalities in the finite dimensional case, and when the Morse 1-form is exact. We de ne the extended twisted L² de Rham
cohomology and prove the asymptotic L² Morse-Farber inequalities, which give quantitative lower bounds for the Morse numbers of a Morse 1-form on M.
"... Let X be a Riemannian manifold endowed with a co-compact isometric action of an infinite discrete group. We consider L 2 spaces of harmonic vector-valued forms on the product manifold X N, which
are invariant with respect to an action of the braid group BN, and compute their von Neumann dimensions ( ..."
Add to MetaCart
Let X be a Riemannian manifold endowed with a co-compact isometric action of an infinite discrete group. We consider L 2 spaces of harmonic vector-valued forms on the product manifold X N, which are
invariant with respect to an action of the braid group BN, and compute their von Neumann dimensions (the braided L 2- Betti numbers).
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1927670","timestamp":"2014-04-16T22:51:58Z","content_type":null,"content_length":"26248","record_id":"<urn:uuid:2fb47b7c-1a46-4597-9955-83aaeee6741f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Binary Logarithm Template
The class template in <boost/integer/static_log2.hpp> determines the position of the highest bit in a given value. This facility is useful for solving generic programming problems.
namespace boost
typedef implementation-defined static_log2_argument_type;
typedef implementation-defined static_log2_result_type;
template < static_log2_argument_type arg >
struct static_log2
static const static_log2_result_type value = implementation-defined;
template < >
struct static_log2< 0 >
// The logarithm of zero is undefined.
} // namespace boost
The boost::static_log2 class template takes one template parameter, a value of type static_log2_argument_type. The template only defines one member, value, which gives the truncated base-two
logarithm of the template argument.
Since the logarithm of zero, for any base, is undefined, there is a specialization of static_log2 for a template argument of zero. This specialization has no members, so an attempt to use the
base-two logarithm of zero results in a compile-time error.
• static_log2_argument_type is an unsigned integer type (C++ standard, 3.9.1p3).
• static_log2_result_type is an integer type (C++ standard, 3.9.1p7).
#include "boost/integer/static_log2.hpp"
template < boost::static_log2_argument_type value >
bool is_it_what()
typedef boost::static_log2<value> lb_type;
int temp = lb_type::value;
return (temp % 2) != 0;
int main()
bool temp = is_it_what<2000>();
# if 0
temp = is_it_what<0>(); // would give an error
# endif
temp = is_it_what<24>();
The program static_log2_test.cpp is a simplistic demonstration of the results from instantiating various examples of the binary logarithm class template.
The base-two (binary) logarithm, abbreviated lb, function is occasionally used to give order-estimates of computer algorithms. The truncated logarithm can be considered the highest power-of-two in a
value, which corresponds to the value's highest set bit (for binary integers). Sometimes the highest-bit position could be used in generic programming, which requires the position to be statically (
i.e. at compile-time) available.
• New in version 1.32.0:
The argument type and the result type of boost::static_log2 are now typedef'd. Formerly, they were hardcoded as unsigned long and int respectively. Please, use the provided typedefs in new code
(and update old code as soon as possible).
The original version of the Boost binary logarithm class template was written by Daryle Walker and then enhanced by Giovanni Bajo with support for compilers without partial template specialization.
The current version was suggested, together with a reference implementation, by Vesa Karvonen. Gennaro Prota wrote the actual source file.
Revised July 19, 2004
© Copyright Daryle Walker 2001.
© Copyright Gennaro Prota 2004.
Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
|
{"url":"http://www.boost.org/doc/libs/1_34_1/libs/integer/doc/static_log2.html","timestamp":"2014-04-17T14:16:16Z","content_type":null,"content_length":"7057","record_id":"<urn:uuid:f0c71c38-d6d5-4081-be5c-c7c80aa32a97>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
. A 22 kg sled is pushed for 5.2 m with a horizontal force of 20 N, starting from rest. Ignore friction. Find the final speed of the sled.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
2.2 m/s 3.1 m/s 4.7 m/s 9.5 m/s
Best Response
You've already chosen the best response.
work done = force x distance, so 5.2m x 20 = 104J, so thats kinetic energy, knowing mass we can use formula to get velocity so E = 1/2mv^2 so 104 = 1/2 * 22kg * v^2 rearrange for v = SQRT(208/22)
Best Response
You've already chosen the best response.
which is approx 3.1m/s
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/51278b67e4b0dbff5b3d1a0d","timestamp":"2014-04-16T04:50:44Z","content_type":null,"content_length":"32601","record_id":"<urn:uuid:482c89f4-6a9a-470b-84fc-5424c5d5c0c3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|