World's First Commercial Quantum Computer Demonstrated
Canadian firm D-Wave Systems unveiled and demonstrated today what it calls “the world's first commercially viable quantum computer.” Company officials announced the technology at the Computer History
Museum in Mountain View, California in a demonstration intended to show how the machine can run commercial applications and is better suited to the types of problems that have stymied conventional
(digital) computers.
The demonstration of the technology was held at the Computer History Museum, but the actual hardware remained in Burnaby, BC where it was being chilled down to 5 millikelvin, or minus 273.145 degrees
Celsius (colder than interstellar space), with liquid helium.
Quantum computers rely on quantum mechanics, the rules that underlie the behavior of all matter and energy, to accelerate computation. It has been known for some time that once some simple features
of quantum mechanics are harnessed, machines will be built capable of outperforming any conceivable conventional supercomputer. But D-Wave explains that its new device is intended as a complement to
conventional computers, to augment existing machines and their market, not to replace them.
To make the technology commercially applicable, D-Wave used the processes and infrastructure associated with the semiconductor industry. The D-Wave computer, dubbed Orion, is based on a silicon chip
containing 16 quantum bits, or “qubits,” which are capable of retaining both binary values of zero and one. The qubits mimic each other's values, allowing for an amplification of their computational
power. D-Wave says that its system is scalable by adding multiples of qubits. The company expects to have 32-qubit systems by the end of this year, and as many as 1024-qubit systems by the end of next year.
"D-Wave's breakthrough in quantum technology represents a substantial step forward in solving commercial and scientific problems which, until now, were considered intractable. Digital technology
stands to reap the benefits of enhanced performance and broader application," said Herb Martin, chief executive officer.
Quantum-computer technology can solve what is known as "NP-complete" problems. These are the problems where the sheer volume of complex data and variables prevent digital computers from achieving
results in a reasonable amount of time. Such problems are associated with life sciences, biometrics, logistics, parametric database search and quantitative finance, among many other commercial and
scientific areas.
As an example, consider the modeling of a nanosized structure, such as a drug molecule, using non-quantum computers. Solving the Schrodinger Equation more than doubles in difficulty for every
electron in the molecule. This is called exponential scaling, and prohibits solution of the Schrodinger Equation for systems greater than about 30 electrons. A single caffeine molecule has more than
100 electrons, making it roughly 10^44 times harder to solve than a 30-electron system, which itself makes even high-end supercomputers choke.
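To put rough numbers on that claim (an illustrative back-of-the-envelope calculation, not from the announcement): if the difficulty grows by a constant factor $b$ per electron, a 100-electron molecule is $b^{100-30} = b^{70}$ times harder than a 30-electron system. Plain doubling ($b = 2$) gives $2^{70} \approx 10^{21}$, while the quoted figure of $10^{44}$ corresponds to a per-electron factor of about $b = 10^{44/70} \approx 4.3$.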
Quantum computers are capable of solving the Schrodinger Equation with linear scaling, exponentially faster and with exponentially less hardware than conventional computers. For a quantum computer,
the difficulty of solving the Schrodinger Equation increases by a small, fixed amount for every electron in a system. Even very primitive quantum computers will be able to outperform supercomputers
in simulating nature.
"Quantum technology delivers precise answers to problems that can only be answered today in general terms. This creates a new and much broader dimension of computer applications," Martin said.
"Digital computing delivers value in a wide range of applications to business, government and scientific users. In many cases the applications are computationally simple and in others accuracy is
forfeited for getting adequate solutions in a reasonable amount of time. Both of these cases will maintain the status quo and continue their use of classical digital systems," he said.
"It's rational to assume that quantum computers will always contain a digital computing element thereby increasing the amortization of investments already made while expediting the availability of
the power of quantum acceleration," he said.
For more technical information on quantum computing, read D-Wave founder and CTO Geordie Rose's blog.
|
{"url":"http://www.dailytech.com/article.aspx?newsid=6102&commentid=107515&threshhold=1&red=1437","timestamp":"2014-04-18T05:43:34Z","content_type":null,"content_length":"49660","record_id":"<urn:uuid:9d229609-0e3f-4996-ad4f-e7ec7463fa05>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posted on January 4, 2012 @ 01:12:29 PM by Paul Meagher
Currently reading some books on Matlab (so that I can program better in its open-source counterpart, Octave). In Matlab/Octave you can construct a matrix like so:
mat = [1:3; 4:6]
which returns:
mat =

   1   2   3
   4   5   6
Perhaps in PHP we should be able to do something similar, like this:
$A = mat('1:3; 4:6');
Here mat(' replaces [ and '); replaces ]. Implementing a mat function like this would require 1) a parser for the expression between single quotes (or double quotes), and 2) a code
generator that uses the parsed expression to generate the appropriate array or nested arrays.
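As a rough sketch of the parsing step, here is the logic in Python (illustrative only: the function name, the grammar of rows split on ';' and entries on spaces or commas, and 'a:b' expanding to an inclusive range are assumptions for this sketch, not an existing API):

def mat(spec):
    # expand 'a:b' into the inclusive integer range a..b, as in Matlab/Octave
    def expand(token):
        if ':' in token:
            a, b = (int(s) for s in token.split(':'))
            return list(range(a, b + 1))
        return [int(token)]

    rows = []
    for row in spec.split(';'):            # ';' separates matrix rows
        entries = []
        for token in row.replace(',', ' ').split():
            entries.extend(expand(token))
        rows.append(entries)
    return rows

print(mat('1:3; 4:6'))                     # [[1, 2, 3], [4, 5, 6]]

A PHP version would follow the same two phases the post describes: tokenize the string, then emit nested arrays.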
In the matrix constructor for the JAMA package, there are methods to "fill" the arrays with values, but they are very primitive compared to Matlab/Octave's notation for constructing filled matrices.
The proposed "mat" function could work in a very complementary way with JAMA's matrix constructor.
A mat function is only one of the matrix-manipulation functions that PHP might implement to make matrix manipulation easier. Other useful functions would be linspace, reshape, and repmat.
Matlab/Octave's method for transposing a matrix uses a single-quote operator ' after the right bracket (e.g., mat = [1:3; 4:6]'), which might be turned into a transpose or trans function.
Finally, Matlab/Octave also supports an expressive language for accessing the contents of a matrix that would also be useful to have. One possibility is that the mat function might take a second
argument, a matrix, that would be accessed via an expression between the single quotes of the first argument, like so:
$b = mat('end, 1', $A);
Which equates to:
$b = 4;
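A similar sketch for the proposed two-argument form, again hypothetical and handling only 'end' and 1-based integer indices (mirroring Matlab/Octave indexing):

def mat_index(spec, M):
    # resolve 'end' to the last valid 1-based index of a dimension
    def resolve(token, size):
        return size if token == 'end' else int(token)

    r, c = (t.strip() for t in spec.split(','))
    return M[resolve(r, len(M)) - 1][resolve(c, len(M[0])) - 1]

A = [[1, 2, 3], [4, 5, 6]]
print(mat_index('end, 1', A))              # 4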
I would argue that for PHP to be a usable language for matrix manipulation, it should offer matrix-manipulation functions like the Matlab/Octave ones proposed here. A few functions like these,
inspired by the way Matlab/Octave implements them, would take PHP a long way towards being a useful matrix-manipulation language. They would be complementary with the JAMA Linear Algebra
Package and would make it easier to port Matlab/Octave-based matrix algorithms to PHP. Many vectorized algorithms require the ability to efficiently slice and dice matrices as the algorithm
proceeds towards a solution, which is exactly what these functions provide.
|
{"url":"http://www.phpmath.com/home?op=perm&nid=121","timestamp":"2014-04-18T16:53:41Z","content_type":null,"content_length":"13410","record_id":"<urn:uuid:71ece49d-ec7d-477d-ae71-c3a65ab9779d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Farmers Branch, TX SAT Math Tutor
Find a Farmers Branch, TX SAT Math Tutor
...Above all, I enjoy tutoring and always try to pass on that same passion (though admittedly not always successfully) to all my students. In addition to taking College Algebra, I have also studied
and tutored higher level related mathematics such as Linear Algebra and Modern (Abstract) Algebra. It ...
41 Subjects: including SAT math, chemistry, French, calculus
...Whether you need to pass a particularly difficult exam or you want to learn or improve English or Spanish, we will work together to succeed. Send me an email and we will strategize on how to
structure your lessons in order to reach your goals. I am qualified to teach Spanish, especially here in D...
29 Subjects: including SAT math, reading, Spanish, GRE
...In addition, I have a thorough understanding of effective study skills, organization, and test taking skills. Sometimes, this is all a student needs in order to achieve success in the
classroom. I have a degree in elementary education from Texas Wesleyan University, as well as a grade 1 - 8 Texas...
39 Subjects: including SAT math, chemistry, English, writing
...I have taught at various institutions and universities, both in Mexico and the United States. I am bilingual, fluent in both English and Spanish. I have taught both English to Spanish-speaking
individuals and Spanish to English-speaking individuals.
37 Subjects: including SAT math, chemistry, Spanish, English
...Pre-Algebra is a vital base for the math classes that will follow it. I know I can help you truly understand this material, so you have a solid foundation to continue your math education!
Pre-Calculus can be a broad subject, including advanced algebra, real and complex numbers, vectors, trigonometry, and of course, the beginnings of calculus.
23 Subjects: including SAT math, English, calculus, writing
|
{"url":"http://www.purplemath.com/Farmers_Branch_TX_SAT_Math_tutors.php","timestamp":"2014-04-19T12:13:31Z","content_type":null,"content_length":"24320","record_id":"<urn:uuid:6df1e034-9d8c-4741-8c4b-d31f22a06ec5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Time Evolution of Quantum systems.
James Jackson
Juan R.
There is no relativistic quantum mechanics. In fact, it cannot exist.
Errm, what? As a (simple) example, the Dirac equation is the result of making Schrodinger's Equation Lorentz invariant.
No, it is not so. The Dirac equation is inconsistent, and the development of relativistic quantum mechanics is impossible. This is the reason the Dirac equation was abandoned in relativistic quantum field theory, where
the basic dynamical equation is a Schrodinger-like one:
$i \hbar \frac{\partial \Psi}{\partial t} = H \Psi$
Note: the Dirac equation is not the result of making Schrodinger's Equation Lorentz invariant; that is the well-known relativistic Schrodinger equation.
The Dirac equation was derived by searching for a first-order (in time) representation of the Klein-Gordon equation, plus spin corrections for the hydrogen atom.
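For reference, the standard textbook forms of the two equations being contrasted (added here for context):

Klein-Gordon (second order in time): $\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\Psi = 0$

Dirac (first order in time): $i\hbar\frac{\partial \Psi}{\partial t} = \left(c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta m c^2\right)\Psi$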
|
{"url":"http://www.physicsforums.com/showpost.php?p=687604&postcount=16","timestamp":"2014-04-16T04:26:58Z","content_type":null,"content_length":"8629","record_id":"<urn:uuid:e05253da-a522-4652-aff6-7d172b9c1d5e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bessel's Inequality
November 14th 2011, 12:34 PM #1
Junior Member
May 2011
Bessel's Inequality
I'm having trouble with this problem. I've tried rewriting things in about every way I know how to but I haven't arrived at an answer. I'd appreciate some help.
"Let $V$ be an inner product space and let $S=\{v_1, ..., v_n\}$ be an orthonormal subset of $V$. Prove that for any $x \in V$ we have $||x||^2 \ge \displaystyle\sum^n_{i=1} |\langle x,v_i \
rangle |^2.$"
As a hint the book says to use the fact that $x \in V$ can be written uniquely as $w+w'$ where $w \in W=\mbox{span}(S)$ and $w' \in W^\perp$, the orthogonal complement of W, and use the fact that
for $x$, $y$ orthogonal, $||x+y||^2 = ||x||^2 + ||y||^2$.
Thanks for any help.
Re: Bessel's Inequality
As the hint gives
$||\mathbf{x}||^2=\langle \mathbf{x},\mathbf{x}\rangle=\langle \mathbf{x}_{v}+\mathbf{x}_{v\perp},\mathbf{x}_{v}+\mathbf{x}_{v\perp}\rangle=||\mathbf{x}_v||^2+||\mathbf{x}_{v\perp}||^2$
Remember that
$\mathbf{x}_{v}=\sum_{i=1}^{n}\langle\mathbf{x},\mathbf{v}_i\rangle\mathbf{v}_i \implies ||\mathbf{x}_v||^2=\sum_{i=1}^{n}|\langle\mathbf{x},\mathbf{v}_i\rangle|^2$
Just put these two facts together and remember that the modulus of a vector is always positive to finish.
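Spelling out that final step: since norms are non-negative,

$||\mathbf{x}||^2 = ||\mathbf{x}_v||^2 + ||\mathbf{x}_{v\perp}||^2 \ge ||\mathbf{x}_v||^2 = \sum_{i=1}^{n}|\langle\mathbf{x},\mathbf{v}_i\rangle|^2,$

which is exactly Bessel's inequality.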
Re: Bessel's Inequality
Wow. I was so close to completing it that it's painful that I didn't see it. Thanks.
|
{"url":"http://mathhelpforum.com/advanced-algebra/191911-bessel-s-inequality.html","timestamp":"2014-04-16T05:50:06Z","content_type":null,"content_length":"41660","record_id":"<urn:uuid:320c783e-305f-44b2-a383-68d7f60fc63f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discover K Game!
It is a game for the written press (dailies, weeklies, monthlies) as well as for TV shows.
K Game is a grid comprising 36 compartments: 6×6.
There are five compartments with bolded sides and 31 simple compartments. Each of the five bolded-side compartments contains two numbers and two arrows. Each arrow points to one column or one
row (just as in crosswords).
In the grid, the Small Diagonal corresponds to the compartments including a circle.
Across the 31 simple compartments, one must place 25 numbers (see below) and six letter “K”s, each of which is equal to zero.
Each of the 25 numbers appears exactly once in the grid. The six letter “K”s are placed according to the following:
• 1 letter “K” per row,
• 1 letter “K” per column,
• 1 letter “K” in the Small Diagonal (rounded compartments).
The 25 numbers should be placed according to the value found in the bolded-side compartments and their corresponding arrow.
The 25 numbers are:
• 1-2-3-4-5-6-7-8-9-10-11-12
• 20-25-50-75
• 100-200-300-400-500-600-700-800-900.
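A small Python sketch of a validity check for the “K” placement rules above; it assumes the grid is a 6x6 list of lists with the string "K" marking a K compartment, and it assumes the Small Diagonal is the main top-left-to-bottom-right diagonal (the rules locate it only by the circled compartments):

def valid_k_placement(grid):
    rows_ok = all(row.count("K") == 1 for row in grid)            # 1 K per row
    cols_ok = all([grid[r][c] for r in range(6)].count("K") == 1  # 1 K per column
                  for c in range(6))
    diag_ok = [grid[i][i] for i in range(6)].count("K") == 1      # 1 K on the diagonal
    return rows_ok and cols_ok and diag_ok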
|
{"url":"http://kgame.me/","timestamp":"2014-04-19T22:25:22Z","content_type":null,"content_length":"13136","record_id":"<urn:uuid:19643950-99b8-44af-bda9-ac236248fb77>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C'est la Z
• Twitch Coding - 10 Apr 2014
We have the kids write programs in all sorts of ways
□ on paper
□ solo
□ informally in pairs
□ "pair programming"
□ We have them trade code, pick up each others projects, and more.
We do lots of different things to engage the kids in a lot of different ways and I love it when someone comes up with a new technique.
My friend, colleague, and incidentally, former student, Sam had such an idea the other day. Sam started his teaching career at Francis Lewis High School and it took us a while to convince him to
join the team, but he's been with us for about three years now and he's terrific.
Sam's also our resident gamer so I guess I shouldn't have been surprised when Sam said he was going to do Twitch Pokemon coding with his classes. It sounded great.
In Twitch Pokemon, users type moves into a chat window and a bot reads the commands to control a Pokemon. Sam's idea was to apply it to a CS class.
I loved the idea so I tried it in my classes.
First cut, I did it with stacks. We had a basic design in mind and then we started the "Twitch Coding."
We went up and down the rows. When it was a student's turn, they could either add a word, line, or concept; delete one; or change one.
So, for example, if the state of the code was:
public Node top;
public void push(String s) {
Node n = new Node(s);
a student could:
□ add n.setNext(top) to the push routine
□ change public to private in the declaration of top
Or the somewhat lame
□ add a // above the push declaration line
or something else.
If a student gets stuck, it's up to the class to "go Price is Right" on them and give suggestions.
It worked great in one class, felt forced in another, and landed somewhere in the middle in the third. Overall, I was happy with the results.
We tried it again today as we implemented a queue.
This time, we prepped a little better and the results were better.
The idea needs some fine tuning, but I think it's a fun and different way to engage the class and I think we'll be playing with twitch coding some more in the coming months.
• Announcing SHIP - 07 Apr 2014
It's been a while since my last post.
That's mostly since I've getting things ready for this announcement.
I've talked about our non-profit CSTUY before. Well, we've been hard at work getting things together for our first summer program:
SHIP - Summer Hackers Immersion Program
We're really excited about this - taking our years of experience teaching kids at our schools and taking it to a wider audience.
SHIP is being hosted at St. Joseph's College in Brooklyn and runs from July 7th through July 31.
So, if you know a rising 9th through 12th grader in or around New York City, let them know about this great opportunity.
Information can be found at http://programs.cstuy.org/ship.
And the application is here: http://programs.cstuy.org/ship-apply
Information on our overall plan is here: http://programs.cstuy.org/ship-outreach
We're also still raising funds for the program, so if you or someone you know can help, they can contact me or donate directly here.
• Sorting - Subtle Errors - 17 Mar 2014
Time to wrap up sorting for a while. We just finished quicksort having gone through a series of lessons
□ We started with Quickselect.
□ Then we did a quicksort, copying to new arrays during the partition
□ Then finally to an in place quicksort.
For the final quicksort we used a partition algorithm pretty much the same as the one described here.
We started testing by building a randomly filled array like this:
Random rnd = new Random();
int a[] = new int[n];
for (int i=0;i<n;i++) {
    a[i] = rnd.nextInt(100);   // values in 0..99
}
And everything seemed terrific.
Just like when we did the mergesort, we started to increase n. First 20, then 100, then 1000 and so on.
All of a sudden, we started getting a stack overflow. We only made it to about 450,000. Mergesort got to arrays of about 40,000,000 items before we started to have memory problems.
Our algorithm was sound. It worked on everything up to about 450,000. Since Mergesort worked well into the tens of millions, quicksort should have as well.
What was wrong?
We changed the code a bit:
Random rnd = new Random();
int a[] = new int[n];
for (int i=0;i<n;i++) {
    a[i] = rnd.nextInt(10000);   // values in 0..9999
}
Instead of an array of 450,000 values between 0 and 100, our elements now went from 0 to 10,000.
All of a sudden things were good.
Why? It wasn't long before the students saw that 500,000 elements with values between 0 and 100 meant lots of duplicates. Our partition didn't account for that. If we had duplicate pivots, only
one was moved into place; the rest were left unsorted, taking us closer to worst-case performance and blowing our stack.
Fortunately there was an easy fix:
public int partition(int[] a, int l, int r) {
    int tmp;
    // pick a random pivot and stash it at the right end
    int pivotIndex = l + rnd.nextInt(r - l);
    int pivot = a[pivotIndex];
    tmp = a[r];
    a[r] = a[pivotIndex];
    a[pivotIndex] = tmp;
    int wall = l;
    int pcount = 1;                 // counts copies of the pivot
    for (int i = l; i < r; i++) {
        if (a[i] < pivot) {         // move smaller items behind the wall
            tmp = a[i];
            a[i] = a[wall];
            a[wall] = tmp;
            wall++;
        } else if (a[i] == pivot) {
            pcount++;
        }
    }
    // put the pivot on the wall, then gather all its duplicates next to it
    tmp = a[wall];
    a[wall] = a[r];
    a[r] = tmp;
    int rwall = wall;
    for (int i = rwall + 1; i <= r; i++) {
        if (a[i] == pivot) {
            rwall++;
            tmp = a[rwall];
            a[rwall] = a[i];
            a[i] = tmp;
        }
    }
    return (wall + rwall) / 2;      // an index within the run of pivots
}
When we partition the array, we move all the elements equal to the pivot to the middle of the array.
That did the trick.
All of a sudden we were blazing through data sets upwards of 100,000,000 elements.
We're done with sorting for a while, at least until the heapsort, but it's been a fun couple of weeks.
• From selection to sorting - 12 Mar 2014
When I first saw the quicksort it was in an algorithms class back in the day. We first learned the quicksort, then choosing a good pivot element and then as an afterthought we did quickselect.
Fast forward to teaching. I was never really happy teaching quicksort. Mergesort is easy to motivate and it's pretty easy to write. Quicksort always felt a little forced.
I thought I'd try switching things up this time and doing quickselect first.
The motivating problem: find the K^th smallest item in a list - in our case the list is an array of ints.
I want to start with the least efficient algorithm so I stack the deck. I remind them that we've been finding the smallest item in a list for two years now.
They don't disappoint and suggest something like this:
L = [10,3,28,82,14,42,66,74,81]

def findKth(L,k):
    omits = []                       # values already picked as smaller
    for i in range(k):
        ans = None
        for item in L:
            if (ans is None or item < ans) and item not in omits:
                ans = item
        omits.append(ans)
    return ans

print findKth(L,3)
Clearly an \(O(n^2)\) algorithm.
Can we do better?
The students then suggest sorting the data set first. If we use mergesort, we can sort in \(O(n\lg(n))\) time. This led to a great conversation about sorting being so fast it's practically free
and about not having to hand-code everything from scratch. Not only is sorting the data set and then plucking the k^th item out much faster; if you already have a sort written, or if you use your
language's library sort, it's much easier as well:
def findKth(L,k):
tmp = sorted(L)
return tmp[k]
But we can do even better. So now we talk about quickselect.
We pick a random pivot and partition the list a la quicksort (reorder the list such that all items less than the pivot are to its left, and all items greater than the pivot are to its right).
We now know that after partitioning, the pivot is in its exact location. If its index is k then we're done. If not, we can recursively quickselect on either the left or right side.
Pretty cool, but is it faster?
It's easy to see that if we keep choosing a bad pivot (the smallest or largest in the list), each iteration takes \(n\) time to partition and each iteration takes one item out of contention. This
takes us back to \(O(n^2)\).
If we choose a good pivot, near the middle of the list, each partition takes less and less time. We get a run time of:
\(n+\frac{n}{2} +\frac{n}{4}+\frac{n}{8}+\dots\) and since \(\frac{n}{2} +\frac{n}{4}+\frac{n}{8}\dots=n\) this becomes an \(O(2n)\), or \(O(n)\) algorithm.
That's really cool.
Homework was the actual implementation.
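A minimal Python sketch of what that homework might look like (note this version builds new lists rather than partitioning in place, trading the efficiency described above for clarity):

import random

def quickselect(L, k):
    # returns the k-th smallest element of L, 0-indexed
    pivot = random.choice(L)
    less  = [x for x in L if x < pivot]
    equal = [x for x in L if x == pivot]
    more  = [x for x in L if x > pivot]
    if k < len(less):
        return quickselect(less, k)
    elif k < len(less) + len(equal):
        return pivot
    else:
        return quickselect(more, k - len(less) - len(equal))

print(quickselect([10,3,28,82,14,42,66,74,81], 2))   # 14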
I think this might be a better way to approach quicksort. It seems less forced, plus the class gets to go through the exercise of taking an algorithm from \(O(n^2)\) to \(O(n\lg(n))\) to \(O(n)\).
Next, moving to the quicksort and also showing that we can indeed avoid those really bad pivots.
We moved to quicksort today and overall I'm happy with this approach. The only thing I think needs tweaking is going from the idea of partitioning to Java code. Java makes it somewhat of a bear.
• The new SAT - the more things stay the same - 11 Mar 2014
Plus ça change, plus c'est la même chose
The more things change, the more they stay the same.
Last week we heard all about the new SAT. Going back to 1600 points, writing optional, and reworking the verbal section.
Immediate responses ranged from the usual fact that SAT doesn't correlate with college success to the idea that the motive was not to improve the test but rather to recapture market share from
the ACT.
Personally, I'm not a fan of the test but I do see the desire to have some consistent measure across students and schools. While an "A" might say something about perseverance and hard work, the
value of one school's "A" is not necessarily the same as the value from another.
But that's not what I wanted to write about.
A big criticism of the SAT is the fact that it can be prepped for and thus gives a huge advantage to students and families of means. The test can be gamed, one can take prep courses, hire private
tutors, etc.
As I said at the top…
Hot on the heels of the new SAT came the announcement that Khan Academy will be offering free prep for the new SAT. That sounds terrific.
I read it differently. I'm all for free educational materials being universally available, but if Khan Academy can indeed offer test prep for the new SAT then so can everyone else and people of
means can and will avail themselves of the Khan Academy material plus a wealth of other resources.
So, new SAT but nothing's changed.
• Be the ball - 09 Mar 2014
Crystal Furman wrote a nice post titled Coding Comprehension about a week ago. There was a little buzz about it in the APCS Facebook group and shortly after, Alfred Thompson added his two cents.
I thought I'd add mine, at least a couple of thoughts.
There are a lot of issues - long term retention, transfer of knowledge from the basics to more advanced tools, pattern recognition, and more.
It reminded me of Benjamin Zander's talk "Playing on one Buttock":
Check out the first five minutes.
Code reading is important, pair programming, where students are constantly explaining to each other helps, and there are other techniques.
We can also model thinking like a computer from day one.
Many of us start day one with exercises where students are the computer. Perhaps using a simplified made up language or maybe by just throwing some task at the kids and having them write
instruction lists for each other. That's a great start, but we can continue drawing the relationship between the way we think and the way a computer works.
Take a simple intro problem – finding the largest value in a list of numbers.
The ultimate solution in Java might be:
public int findMax(int[] L){
    int maxIndex = 0;
    for (int i=1;i<L.length;i++){
        if (L[i] > L[maxIndex]){
            maxIndex = i;
        }
    }
    return maxIndex;   // index of the largest value
}
Somewhere along the development process, I ask my students how they would find the largest value in a list. If the list was short, they might just scan it. If the list was very long, they'd do the
same thing our Java snippet does - remember the largest so far as we scan down the list one by one. At first, we just think we're scanning the list, but if we slow things down, we see that
we're following pretty much the same algorithm as what we'd write in code.
I use this technique throughout all my classes - slow ourselves down and really analyze the steps towards solving the problem. No single technique is going to teach our kids how to think about
and comprehend code, but it's another tool in our bag of tricks.
Side note
This is my first post written using Emacs Org mode. I've been using it for years but only now discovering how amazing a tool it is.
• I guess I'm a dumbass - 27 Feb 2014
I like a fairly informal atmosphere in my classes. Students have to know that there’s a line between teacher and student but I also want them to feel like we’re all part of the Stuy CS family.
Whenever we start a new term, it takes a while to break down the walls. The students don’t know what to expect of me, can they trust me? Am I a bozo? Who knows.
It helps when some of the class had me as a teacher before, but it still takes time.
I’m glad that this term, things are coming along nicely.
Let me share what happened in class today.
I was introducing merge sort - their first nlgn sorting algorithm. Before class, one of the students slipped off his seat and landed on the floor with a thud. He was fine, although he was briefly
the butt, if you will, of the class's jokes.
I relayed a story - many years ago, Ilya, one of the gang, was accused of being a dumbass. He responded “hey, it’s never missed the seat.” The class had a good laugh over it.
Fast forward a bit.
I had a deck of cards I wanted sorted. As a Stuy grad, I’m as lazy as the next guy so I didn’t want to sort them, but I also didn’t want to violate one of our two class tenets “Don’t be a jerk”
so rather than giving the cards to a student to sort, I split the deck in half and gave each half to a student.
They quickly caught on and subdivided the deck and gave away their halves. We did this until all the students had, at some point had one or more cards.
Then we got to the merge part. Each student sorted his or her pile and passed it back to the student who they got the cards from. This student then merged the two piles and passed the cards back.
As the cards made their way back to me a student noted “hey, one of my piles isn’t in order.” I commented that “the algorithm might fail if at some points you give your cards to a dumbass.” This
got a good laugh.
Finally, two piles of cards made their way to me and I started to merge them. At which point, I promptly dropped the cards all over the floor.
One of my students exclaimed: “That’s what happens when you give you cards to a dumbass!!!!!”
It was awesome. We all cracked up.
I don’t think I’ve been “insulted” quite so perfectly since my daughter called me an idiot in class last year (I fed her the straight line and she didn’t disappoint).
I love it that my kids feel comfortable enough to joke but also know where the line is.
• Change the data - 26 Feb 2014
Patient: “Doctor, it hurts when I do this.”
Doctor: “So, don’t do that.”
We’ve been spending time on State Space Search. It’s a great topic. We deal with or at least introduce:
□ Recursion
□ Blind search
□ Heuristic search
□ foreshadowing things like A* and Dijkstra’s algorithm.
and more. Today, however. I want to talk about something else.
We started by developing a maze solver. It reads a text file representing the maze and then proceeds to find an exit. One version of the source code can be found here.
It’s really cool to see how such a short program, about 10 lines of work code, can solve such an open sounding problem. From there we talk about state spaces, graphs, etc. We then moved on to the
Knight’s tour. By viewing it as a state space problem we can look at it just like the maze.
We represented a state as a board with the knight's current position and where it's been. An easy way to do this is to use an array of ints, where 0 marks an unvisited square and each visited square holds the number of the move that landed on it. So we have an empty 5x5 board:

0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0

Or a board after a few moves:

1 0 0 0 0
0 0 2 0 0
0 0 0 0 3
0 0 0 0 0
0 0 0 0 0
The kids saw three base cases:
1. When our count got up to n^2 (and in fact, we’re done)
2. When we land on a non-zero space (when we just return or backtrack)
3. When we try to move off the board, giving an index-out-of-bounds error.
I wanted to look at that third one. We talked for a bit about using an if or a try/catch, but I pointed out that I didn't like either. Looking at our maze code, we never checked bounds there. Why
not? Well, it turns out that our maze had walls all around. It was stored in a 2D array, but the entire outer edge was wall. Why not do the same for the chess board:
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 0 0 0 0 0 -1 -1
-1 -1 0 0 0 0 0 -1 -1
-1 -1 0 0 0 0 0 -1 -1
-1 -1 0 0 0 0 0 -1 -1
-1 -1 0 0 0 0 0 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Now, as long as we start on the board, if the Knight jumps off the edge, it will end on a -1 square and backtrack. By modifying our data structure and data to contain a border, we’ve eliminated
the special case of index out of bounds.
I always like doing that.
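A Python sketch of the whole search with the border trick in place (the board size, move list, and function names are illustrative; the two-cell border matches the knight's longest jump):

N = 5
# pad the N x N board with a 2-cell border of -1; a knight moves at most
# 2 squares in any direction, so it can never index outside this array
board = [[-1] * (N + 4) for _ in range(N + 4)]
for r in range(2, N + 2):
    for c in range(2, N + 2):
        board[r][c] = 0

MOVES = [(1,2),(2,1),(-1,2),(-2,1),(1,-2),(2,-1),(-1,-2),(-2,-1)]

def tour(r, c, count):
    if board[r][c] != 0:          # wall (-1) or already visited: backtrack
        return False
    board[r][c] = count
    if count == N * N:            # every square visited
        return True
    for dr, dc in MOVES:
        if tour(r + dr, c + dc, count + 1):
            return True
    board[r][c] = 0               # undo the move and backtrack
    return False

print(tour(2, 2, 1))              # True: a tour exists from the corner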
• Fibonacci by the tail - 13 Feb 2014
We’re ramping up for recursion in my junior classes - state space search, nlg(n) sorts, etc. As a refresher, we took a quick look at the Fibonacci numbers.
Now, some people seem to think that it’s a tired problem. It’s mathy, it’s played out, it’s boring etc.. They just might be missing the point.
The beauty isn’t in the problem itself, but rather, that it’s a platform on which you can look at many problem solving techniques.
We can look at the basic, straightforward , imperative solution:
public int fib1(int n) {
    int a=1, b=1;
    for (int i=0;i<n;i++){
        int c=a+b;
        a=b;
        b=c;
    }
    return a;
}
It’s straightforward and fast - no recursion needed.
Next, we can look at the basic recursive version:
public int fib2(int n) {
    if (n<=1)
        return 1;
    return fib2(n-1)+fib2(n-2);
}
The advantages (of recursive solutions in general):
□ It’s a direct translation from the recursive mathematical formula.
□ It’s elegant, clean, and concise.
□ It can make hard problems much easier (see: Towers, Hanoi).
□ We can use same thought process that led to this solution to solve problems like finding our way out of a maze.
The downside: exponential running time - fib2 recomputes the same values over and over.
So, how do we address this?
One way is via memoization - when we find a value, store it in a table, then we can use the look up table instead of recalculating over and over:
public int[] fibnums = new int[100000];

public int fib3(int n) {
    if (n<=1)
        return 1;
    else if (fibnums[n] != 0)
        return fibnums[n];
    else {
        fibnums[n] = fib3(n-1)+fib3(n-2);
        return fibnums[n];
    }
}
This is a terrific thing to show a class since it’s easy for students to wrap their heads around, it really speeds things up, and it’s a precursor to lots of neat algorithms.
Finally, we can look at re-writing Fibonacci using tail recursion. This one can be a little hard for students to grasp. I like building it up from the iterative solution. In that solution, we use
a and b to "walk down" the list of Fibonacci numbers. At any point in time, a and b represent where we are in the sequence. We also use c, but that's really just a temporary place to add a and b.
The problem with doing this in a recursive solution is that we can't have a and b as local variables, as each recursive call will have new a and bs and no information will be transferred.
Since we're working in Java, it doesn't take long for some students to come up with the idea of using instance variables to store a and b and just use the recursion for the "loop":
public int a=1, b=1;

public int fib4(int n) {
    if (n==1)
        return b;      // b holds the current Fibonacci number
    else {
        int c=a+b;
        a=b;
        b=c;
        return fib4(n-1);
    }
}
Great, but using instance variables in this way is very inelegant and messy. Better to use extra parameters to carry the values from call to call:
public int fib5(int n, int a, int b) {
    if (n==1)
        return b;      // b holds the current Fibonacci number
    return fib5(n-1, b, a+b);
}
Looking at Fib5(5) we get for n, a, and b:
□ 5,1,1
□ 4,1,2
□ 3,2,3
□ 2,3,5
□ 1,5,8
At which point we just return the 8
Clean, elegant, fast, and easy to understand.
Each of these four techniques are important and will be used time and time again and here we have one simple problem that allows us to explore them all.
Some Links
Project Euler: Problem #2 - Even Fibonacci numbers
Memoized Fibonacci Numbers with Java 8
The quadratic formula and Fibonacci numbers
Monte Carlo Simulations, Fibonacci Numbers, and Other Number Tests: Why Developers Still Need The Basics
TED: Arthur Benjamin: The magic of Fibonacci numbers - Arthur Benjamin (2013)
Fibonacci Numbers in the Real World
• StuyCS family from coast to coast - 11 Feb 2014
I think my one regret over the years is that I haven’t done much travel. So, when I had the opportunity to go to the 2014 Tapia conference, I jumped at the chance. I didn’t get to see too much of
Seattle, but that’s OK. Now I’ve more incentive to go back.
In addition to seeing new sites, it also gave me an opportunity to see friends that I don’t get to see too often.
I frequently talk about the StuyCS family. That it’s all about people and all about community. I’m proud that I started this family and I’m always blown away by the guys. It’s one thing for alums
to swing by the school a couple of years after they graduate but it’s another when they want to see me as much as I them even after ten or twenty years.
It was a little tricky to coordinate between conference commitments and their schedules but we ended up having two dinners.
First I got together with Mike (StuyCS ‘97), his wife Lindsay, and Helene. I'll always remember Mike for being the police officer when he, William, Emil (I think), and Paul were the Village People
for Halloween. It was amazing to see him, meet his wife, talk about old times, and see what he's doing today.
Helene isn’t StuyCS, but she is an educator of a kindred spirit, over the years, I’ve met many CS educators. Some “get it” many don’t. Helene clearly does. We clicked right off the bat. As with
my alums outside of the city, I regret that we only get to see each other once every few years.
Two nights later, I got together with Sam (StuyCS ‘96), Daemon (StuyCS ‘06) and Matt (StuyCS ‘07), along with Boback, Alan, and Ephraim, three other high school teachers who attended the
conference.
I hope we didn't bore the teachers with our high school stories, but I really had a blast. We talked about old times, we talked about current issues, good food and good friends. We spanned 27 years
of Stuy but that didn't matter. Get a bunch of bright, interesting, like-minded people together and good things happen.
I feel amazingly blessed that I can travel across the country and students from 10 and 20 years ago want to get together to catch up. I guess I should really say friends, not students.
I’ve been thinking a lot about my career as I close in on 25 years of teaching. What impact I’ve had, what I could have done better, what I did right.
I think I can live with the StuyCS family as my legacy.
|
{"url":"http://cestlaz.github.io/","timestamp":"2014-04-17T15:27:11Z","content_type":null,"content_length":"47063","record_id":"<urn:uuid:601f45a0-7c5e-4c76-95a7-4dfed95eb8b0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3rd and final optimization problem
July 28th 2012, 06:04 PM #1
Jul 2012
Front Royal, VA
3rd and final optimization problem
Trying to design an aluminum can containing a volume of 2,000 cubic centimeters.
Find the dimensions of the can that will minimize the amount of aluminum used.
The three formulas I'm working with will obviously be:
Volume: $\pi r^2 h = 2000$
Vertical surface area: $2\pi r h$
Lid and base surface area: $2\pi r^2$
I understand that basically what I'm trying to do is isolate one variable (for example make h=2000/π r^2) so that I can now substitute that equation in for the variable h in another formula.
That will result in a quadratic equation, from which I will find the derivative, find max and min, and figure out the volume at the minimum. Am I missing something? I keep attempting this, but I
keep getting stuck after I substitute 2000/πr^2 for h because I don't have a quadratic equation to work with.
Re: 3rd and final optimization problem
You don't always need a quadratic equation to solve an optimization problem.
When the volume is $V$,
\begin{align*}V&=\pi r^2h\\h&=\frac{V}{\pi r^2}\\\end{align*}
\begin{align*}A&=2\pi rh +2\pi r^2\\&=2\pi r\left[\frac{V}{\pi r^2}\right]+2\pi r^2\\ &= 2 \left[ \frac{V}{r}+\pi r^2\right] \end{align*}
To find the minimum point you can use differentiation,
$\frac{dA}{dr}=2\left[2\pi r- \frac{V}{r^2}\right]$
Equate this to zero to find the extremum.
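Carrying the computation through (this step is implied but not shown above): setting $\frac{dA}{dr}=0$ gives $2\pi r = \frac{V}{r^2}$, i.e. $r = \left(\frac{V}{2\pi}\right)^{1/3}$. With $V = 2000$ this yields $r \approx 6.83$ cm and $h = \frac{V}{\pi r^2} \approx 13.66$ cm. Note that $h = 2r$ at the optimum, so the most economical can has a height equal to its diameter.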
Re: 3rd and final optimization problem
Once again you come to the rescue! Thank you so very much for explaining things in a way that I can easily understand them. You have already helped me more than my textbook has.
Re: 3rd and final optimization problem
I'm glad that I could be a help
|
{"url":"http://mathhelpforum.com/calculus/201466-3rd-final-optimization-problem.html","timestamp":"2014-04-23T12:37:06Z","content_type":null,"content_length":"38735","record_id":"<urn:uuid:00b2747a-476b-47f0-a618-15d4b47d1b87>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hello Everyone
First off, I want to say thank you to everyone who has had a part in making this website what it is: from the web designers (proper terminology?) to the people who have submitted personal accounts,
relevant information, or replies and comments. This is a great website, interesting and easy to navigate.
Also, I have a few various accounts that are unexplained that I will eventually thread onto the web. I am a theorist, a critic, a total believer. I like to think on a deeper level on things such as
human psyche and relativity to all things. I believe all is possible and at the same time some things are only possible with belief. Oh, the only thing I don't believe and think is total bullshit is
our government... all government. War is real, politics are a... well we say it all the time, a race.
Buuut anyhoo, anytime somebody might want an opinion from a ;)gramatticcaly corect ;) rule breaker, I'm always up for a good chat. I don't mind sharing a bit of info that titillates my own reality as well.
Remember There is always truth
That would make two of us. I love the nickname! There's intergalactic war going on concerning that very point.
Truth appears only to exist in mathematics; all else is merely present desires being expressed through words.
The early writings of the Jewish/Christians were full of bad grammar and misspellings; now they have been edited and are called Holy.
In Reply To:
Truth appears only to exist in mathematics; all else is merely present desires being expressed through words.
mathematics is all astral, though. So is mysticism. I don't see why math always wins the prize when both are based on some sort of ethereal reality.
The truths found in mathematics would be true regardless of the creatures or technology that discovers them. Mysticism appears to be linked to biological creatures in a more dependent manner. I refer
you to Russell and Whitehead and the development of positivism at the start of the 1900's. Mysticism comes with desires and their interpretations; math produces truths that can only be supported by
logic and reasoning. It is a study that appears to be different in that regard. I do not assert one to be >better< than the other, merely different from the other. With Asperger Syndrome I do what
I can to limit my interactions with people. The internet has that distancing aspect to it that those with AS seem to prefer.
Oh dear, you seem to have started a debate already.
Enjoy the site,I know I do!!
Some good points have already been made. Let me say that mathematics is numerical. Though the numbers are not solid matter, it is what they represent. The rules cannot, to my knowledge, be broken in
terms of numbers. They are solid, fixed terms that are paired with objects or things to be multiplied. 1 apple is 1 apple. You cannot make 3 apples if you are adding 1 and 1 no matter how you try. So
in turn, yes, mathematics is a simple yet firm truth.
Words, on the other hand - words and letters can be bent slightly, and they vary greatly between cultures. F can be made with ph in America, and in other countries those letters may not even exist. 1 apple is
1 apple in America, Japan, or Russia all the same. Though these should not be compared, as they are of totally different aspects of reality and life. Sometimes the truth may be unbelievable, but fact
is fact, and there is always a right and a wrong answer to an equation.
Thank you Jeff, I never thought it would start such a fire lol. It was the first thing I came up with, and it was probably heavily influenced by my reading the moon anomalies. Well, at the end of the
day, let's all be friends and have a beer! Your reality may differ from mine; I may be the space creature who defies the laws of time and space. The truth is there, even if no one knows what it is.
The counting system being used can give different answers. The third three-digit prime in base 10 is 107, but the third three-digit prime in base 12 is 111 [157 in base 10]. 23 in base-10 counting is 2
tens and three ones, which is 23, but 23 in the base-12 counting system is two 12's and three ones, which is 27 in base 10. You can change the starting point and the rules of counting [by 12 instead of 10],
yet what you do is still counting, and you arrive at an equivalent answer. In France they used to count base 20 [they used both hands and toes?]. You need to know the counting system and then what
the digit means in the place where it is. 107 base 10 is 107. 107 base 12 is 1B [96 and 11 base 10], or in base 20 it is 57 [100 and 7]. We tend to see the world around us through the filter of
our life's experiences; math tries to limit the impact of that filter by the use of firm rules and methods. I see that there is an effort to apply method to the study of some aspects of paranormal
events. Is it right or correct to do this?
I recall talking with our department chairman, Dr. Carella, in the late 1980's regarding a stock insurance plan using derivatives as a method to put "confidence" into market investments. He said it
would allow folks to invest with greater assurance and lead to a market expansion, which it did. He died in the late 1990's knowing he was right. I cautioned against applying mathematical abstractions
to real market events; when it all comes back to roost, then it is my turn to be correct. 90% plus of all banking transactions these days [2012] employ RSA encoding using two 200-digit-long
prime numbers to produce a 400-digit-long combined number. In 2003 I came up with a method to break the large combined number down into its large primes. I live with apprehension that
some group of teenagers may come by the method and apply it to their own ends. It is good to be able to figure it out, but bad if it gets out. Russia has a different method in their banking system
based on the Czechoslovakian machine the German army used during World War II.
Math is neither good nor bad; it depends on how it is used.
1B is 23 base 12 counting system, 107 is 8B base 12. Oops!
Oops, is what I intend to say if some teenagers get the formula for breaking the RSA numbers and most of the world wide economic system comes crashing down in just a few days. Perhaps, "Oopsie ?"
I for one have not got a clue what you are on about!
This is a very interesting subject. Though I am completely lost, I'd love to figure it out. I do agree, though, that math and numbers are used to govern people and their outlook on reality.
There is an effort to encode information in order to facilitate the movement or transference of goods and ideas. One such device is the RSA method [RSA = Rivest, Shamir, and Adleman] of encoding.
They take two very large prime numbers and produce a combined number by multiplying them together. One factor is kept as a private key and the other one is made public so that people can send you
secure information. Banks use this system for transferring funds from one account to another.
A --------- E --------- B            (the primes; E is midway between them, d = B - A)
A^2 ---- C ---- E^2 ---- M ---- B^2  (their squares; C = A x B, M is midway between A^2 and B^2)
A is the smaller prime number
B is a prime number larger than A
d is the distance between the two numbers
E is the number equidistant between the two primes
C is the combined number produced by A x B
E^2 is the square of the number E
M is the number equidistant in the middle between A^2 and B^2
n is half the distance d, chosen so that n^2 + C is the perfect square E^2
Ad is the distance between A^2 and C
Bd is the distance between C and B^2
Since n = d/2, the square root of n^2 + C [with C the combined number produced by A x B] is an integer for one and only one n in range, and that integer is E [the number midway between the factors A
and B]. That gives us E and d/2, and from there A and B.
You will need a Multiple Precision Arithmetic Library [MPAL] that can handle long-digit numbers and offer the precision needed. The QBasic program [which is limited to
relatively small-digit numbers] would
read as follows:
10 COUNT = 0
11 FOR N = 1 TO 1500
12 COUNT = SQR((N^2) + 11426419)
13 IF COUNT = INT(COUNT) THEN PRINT COUNT ELSE 17
14 PRINT N
15 PRINT INT(COUNT) - N
16 PRINT INT(COUNT) + N
17 NEXT N
18 END
This produces the results:
E is 3590
n (= d/2) is 1209
A is 3590 - 1209 = 2381
B is 3590 + 1209 = 4799
A x B is 4799 x 2381 = 11426419
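For what it's worth, the same search in Python; this is, in effect, Fermat's classic factorization method, and it is only fast when the two primes are close together (the loop bound of 1500 corresponds to a prime gap of at most 3000):

from math import isqrt

def fermat_factor(c, limit=1500):
    # look for n with n*n + c a perfect square E*E;
    # then c = E*E - n*n = (E - n) * (E + n)
    for n in range(1, limit + 1):
        e = isqrt(n * n + c)
        if e * e == n * n + c:
            return e - n, e + n
    return None

print(fermat_factor(11426419))    # (2381, 4799)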
have fun but do no harm,
with best regards
for you and yours
San Diego, Calif.
/ so people can send you secure...
the program moved all the positionings to the far left, so now it does not appear clear. I will work on presenting that image in a correct manner.
Gary, you should knock up a program in visual basic that can code/decode using your method. I think it would be a perfect educational tool.
As for my stance on math...
In an ideal existence, mathematics would be obsolete because we as conscious souls would transcend it. One apple, as stated before, could turn into two turtles at will.
The thing is...using mathematics, we can uncover the physics and properties that would make this occur.
Please log in or become a member to add a post.
|
{"url":"http://www.paranormalnews.com/forumdetails.aspx?ID=70ab13bc-7a95-4b1f-bc76-079a9cb8c6e2&page=1&parentid=b52feeef-2eff-4f6a-a6f9-3a669e12c287","timestamp":"2014-04-17T01:00:17Z","content_type":null,"content_length":"69703","record_id":"<urn:uuid:1c5e5b0c-d9b7-47c0-8413-1593c3ac60d3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fokker Planck Modeling of Electron Kinetics in Plasmas and Semiconductors
Vladimir I. Kolobov^a
^aComputational Fluid Dynamics Research Corporation, Huntsville, AL 35805, USA
The paper reviews physical principles and computational methods of solving the multi-dimensional Fokker-Planck equation (FPE) in application to electron kinetics in gas discharge plasmas and
semiconductor devices. The four-dimensional (3 spatial coordinates + energy) FPE is obtained from the 6D Boltzmann Transport Equation (BTE) for the case when momentum relaxation occurs faster than
energy relaxation. The FPE- based methods offer a very good compromise between physical accuracy and numerical efficiency. We have developed a general purpose 4D FPE solver coupled to
electromagnetic, chemistry and other models for self-consistent kinetic simulations of weakly ionized plasmas and semiconductor devices. The FPE describes the Electron Energy Probability Function
(EEPF), which provides macroscopic characteristics (electron density, fluxes, rates of electron-induced chemical reactions, etc.). Using these quantities, the transport of ions and holes is simulated
using continuum models. The electromagnetic fields are calculated by solving Maxwell equations for scalar electric and vector magnetic potentials. This paper presents several examples of hybrid
kinetic simulations of plasma reactors for semiconductor manufacturing and silicon based semiconductor devices. It also outlines direct methods of solving the BTE to compute the distribution
functions with arbitrary anisotropy such as electron and ion beams and ballistic carrier transport in semiconductors.
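For orientation, a generic one-dimensional Fokker-Planck equation in energy space has the drift-diffusion form (a textbook statement, not quoted from the paper):

$\frac{\partial f}{\partial t} = -\frac{\partial}{\partial \varepsilon}\left[A(\varepsilon)\,f\right] + \frac{1}{2}\frac{\partial^2}{\partial \varepsilon^2}\left[B(\varepsilon)\,f\right]$

where $A$ is a drift (dynamical friction) coefficient and $B$ an energy-diffusion coefficient; the solver described here extends this picture to three spatial coordinates plus energy.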
|
{"url":"http://www.asdn.net/publications/cms/abstracts/sd_kolobov/","timestamp":"2014-04-17T00:49:05Z","content_type":null,"content_length":"3317","record_id":"<urn:uuid:c3b47519-5716-4212-b5d4-c9e6559e0c79>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with Derivative problem involving ARCSIN
October 30th 2008, 07:28 PM #1
Oct 2008
Help with Derivative problem involving ARCSIN
Directions: Find the derivative of y with respect to the appropriate variable: $y = \sin^{-1} (2t)^{1/2}$.
Sin^-1 is the ARCSIN, and is the part I'm having trouble with. The answer is the square root of 2 over the square root of (1 - 2t^2), but I need help on how to obtain that answer. Thanks for your help.
The derivative of $y = \sin^{-1} (2t)^{1/2}$ (what you posted) is NOT $\frac{\sqrt{2}}{\sqrt{1 - 2t^2}}$.
For the given answer to be correct the question should be find the derivative of $y = \sin^{-1} (2^{1/2} t)$.
In which case you let $u = 2^{1/2} t$ and use the chain rule. You should know that if $y = \sin^{-1} u$ then $\frac{dy}{du} = \frac{1}{\sqrt{1 - u^2}}$ ....
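Carrying that through: with $u = 2^{1/2} t$ we have $\frac{du}{dt} = 2^{1/2}$, so

$\frac{dy}{dt} = \frac{dy}{du} \cdot \frac{du}{dt} = \frac{\sqrt{2}}{\sqrt{1 - (\sqrt{2}\,t)^2}} = \frac{\sqrt{2}}{\sqrt{1 - 2t^2}},$

which is exactly the given answer.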
|
{"url":"http://mathhelpforum.com/calculus/56716-help-derivative-problem-involving-arcsin.html","timestamp":"2014-04-17T13:23:11Z","content_type":null,"content_length":"34850","record_id":"<urn:uuid:ee3b43cd-dae3-442c-a14b-c05d2bf54c64>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stephanie Weirich sweirich at cis.upenn.edu
Wed Feb 15 10:53:48 EST 2006
Hi John,
Dimitrios, Simon and I have been talking about FirstClassExistentials
recently. (Since they aren't currently implemented in any Haskell
compiler, their inclusion in Haskell' seems unlikely, but I would like
to seem them discussed for not-too future versions of Haskell.)
In particular, they do seem like a natural extension to arbitrary-rank
polymorphism and I agree that they have the potential to be much more
convenient than ExistentialQuantification, particularly in conjunction
with GADTs and ImpredicativePolymorphism. I don't think it makes sense
to add them without some sort of arbitrary-rank polymorphism (as
discussed in the paper "Practical type inference for arbitrary-rank
types") because they need to be enabled by typing annotations (and
perhaps some kind of local inference) as is necessary for arbitrary-rank
universals. Like first-class universals, types containing first-class
existentials cannot be guessed, so if a function should return something
of type exists a. Show a => a, then the type of the function should be annotated.
The tricky parts that Dimitrios, Simon and I are talking about now have
to do with the subsumption relation: when is one type "more general"
than another, given that both may contain arbitrary universal and
existential types. This subsumption relation is important for
automatically coercing into and out of the existential type.
For example, if
f :: (exists a. (a, a -> a)) -> Int
then it should be sound to apply f to the argument
(3, \x -> x + 1)
just like it is sound to apply
g :: forall a. (a, a->a) -> Int
to the same argument.
You are right in that "strictly contravariant" existentials (to coin a
new term) don't add much to expressiveness, and perhaps that observation
will help in the development of our subsumption relation. (As an aside,
I think that this is analogous to the fact that "strictly covariant"
first-class universals aren't all that important.
f :: Int -> (forall a. a -> a)
should be just as good as
f :: forall a. Int -> a -> a.
Some arbitrary rank papers disallow types like the former: in the
practical type inference paper, we just make sure that these two types
are equivalent. )
John Meacham wrote:
> On Tue, Feb 14, 2006 at 11:56:25PM +0000, Ross Paterson wrote:
>> Is this the same as ExistentialQuantification?
>> (And what would an existential in a covariant position look like?)
> well, the ExistentialQuantification page is restricted to declaring data types
> as having existential components. in my opinion that page name is
> something of a misnomer. by ExistentialQuantifiers I mean being able
> to use 'exists <vars> . <type>' as a first class type anywhere you can
> use a type.
> data Foo = Foo (exists a . a)
> type Any = exists a . a
> by the page I mainly meant the syntatic use of being able to use
> 'exists'. Depending on where your type system actually typechecks it as
> proper, it implys other extensions.
> if you allow them as the components of data structures, then that is
> what the current ExistentialQuantification page is about, and all
> haskell compilers support this though with differing syntax as described
> on that page.
> when existential types are allowed in covarient positions you have
> 'FirstClassExistentials' meaning you can pass them around just like
> normal values, which is a signifigantly harder extension than either of
> the other two. It is also equivalent to dropping the restriction that
> existential types cannot 'escape' mentioned in the ghc manual on
> existential types.
> an example might be
> returnSomeFoo :: Int -> Char -> exists a . Foo a => a
> which means that returnsSomeFoo will return some object, we don't know
> what, except that is a member of Foo. so we can use all of Foos class
> methods on it, but know nothing else about it. This is very similar to
> OO style programing.
> when in contravariant position:
> takesSomeFoo :: Int -> (exists a . Foo a => a) -> Char
> then this can be simply desugared to
> takesSomeFoo :: forall a . Foo a => Int -> a -> Char
> so it is a signifgantly simpler thing to do.
> A plausible desugaring of FirstClassExistentials would be to have them
> turn into an unboxed tuple of the value and its associated dictionary
> and type. but there are a few subtleties in defining the translation as
> straight haskell -> haskell (we need unboxed tuples for one :)) but the
> translation to something like system F is more straightforward.
> there is also room for other extensions between the two which I am
> trying to explore with jhc. full first class existentials are somewhat
> tricky, but the OO style programming is something many people have
> expressed interest in so perhaps there is room for something interesting
> in the middle that satiates the need for OO programming but isn't as
> tricky as full first class existentials. I will certainly report back
> here if I find anything...
> Also, someone in the know should verify my theory and terminology :)
> John
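To make the contravariant-position desugaring above concrete, here is a sketch that should compile with existing extensions (the class and bodies are invented for illustration):

class Foo a where
    bar :: a -> Char

-- takesSomeFoo :: Int -> (exists a . Foo a => a) -> Char
-- desugars to the ordinary rank-1 type:
takesSomeFoo :: forall a . Foo a => Int -> a -> Char
takesSomeFoo _ x = bar x

The existential in argument position has simply become an ordinary universally quantified argument.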
DBSCAN clustering algorithm
DBSCAN is a well-known clustering algorithm, which is easy to implement. Quoting Wikipedia: "Basically, a point q is directly density-reachable from a point p if it is not farther away than a given distance ε (i.e., it is part of its ε-neighborhood), and if p is surrounded by sufficiently many points such that one may consider p and q to be part of a cluster. ... For practical considerations, the time complexity is mostly governed by the number of getNeighbors queries. DBSCAN executes exactly one such query for each point, and if an indexing structure is used that executes such a neighborhood query in O(log n), an overall runtime complexity of O(n log n) is obtained."
Some DBSCAN advantages:
1. DBSCAN does not require you to know the number of clusters in the data a priori, as opposed to k-means.
2. DBSCAN can find arbitrarily shaped clusters.
3. DBSCAN has a notion of noise.
4. DBSCAN requires just two parameters and is mostly insensitive to the ordering of the points in the database.
The most important DBSCAN disadvantages:
1. DBSCAN needs to materialize the distance matrix for finding the neighbors. This has a cost of O((n^2-n)/2), since only an upper triangular matrix is needed. Within the distance matrix, the nearest neighbors can be detected by selecting a tuple with minimum functions over the rows and columns. Databases solve the neighborhood problem with indexes specifically designed for this type of application. For large-scale applications, you cannot afford to materialize the distance matrix.
2. Finding neighbors is an operation based on distance (generally the Euclidean distance), and the algorithm may run into the curse-of-dimensionality problem.
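To make the cost concrete, here is a minimal sketch (hypothetical names, not taken from the linked code) of an eps-neighborhood query done as a linear scan over a precomputed distance matrix; note that it compares distances, whereas the linked code compares similarities:

#include <cstddef>
#include <vector>

// Collect the eps-neighborhood of point pid by scanning row pid of the
// precomputed distance matrix: O(n) per query, O(n^2) over all points.
std::vector<std::size_t> getNeighbors(const std::vector<std::vector<double> > & dist,
                                      std::size_t pid, double eps)
{
    std::vector<std::size_t> neighbors;
    for (std::size_t j = 0; j < dist[pid].size(); ++j)
        if (j != pid && dist[pid][j] <= eps)
            neighbors.push_back(j);
    return neighbors;
}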
Here you have DBSCAN code implemented in C++, Boost and the STL.
10 comments:
1. Thanks for sharing the code.
It would be nice to see how DBSCAN works if one uses a locally adaptive metric (the Mahalanobis metric) for the distance instead of the Euclidean distance.
I am trying to implement it for 6D (x, y, z, vx, vy, vz).
2. Thanks for releasing the code. However; could you specify the license under which it is released?
3. Hi,
Thank you for sharing the codes. However, I met a problem when I try to compile the codes with visual studio 2005. I had installed boost on my computer, but it still does not work out. Can you
please show me some possible solutions to this problem?
Thank you so much!
4. hi,
Thank you for sharing the codes. However, I met a problem when I tried to compile the codes in visual studio. Then, I installed boost on my computer. However, it does not work out. Could you
please show me some possible solutions to this problem?
Thank you so much!
5. At lines 16 and 33 in distance.h I had to change
typedef typename VEC_T vector_type;
to
typedef VEC_T vector_type;
to get it to compile.
6. The example is clustering random numbers into 1 cluster. Actually would be good if the example was in some way illustrative of this method - on a more "shapy" data - like in the paper on DBSCAN.
7. actually I tried this code on ELKI example, it produced just one cluster, where ELKI gave several clusters with the same parameters.
8. thanks for sharing the code..where I can find real dataset for applying DBSCAN?
9. Hi
I've found a very very small bug in your code.
in clusters.cpp file
in findNeighbours function:
if ((pid != j ) && (_sim(pid, j)) > threshold)
it's supposed to be less than
if ((pid != j ) && (_sim(pid, j)) < threshold)
after changing this it works like charm, thanks for the code
10. Hi,
Thanks for sharing this code. The last commenter's (M^3 Team's) correction is wrong: similarity, not distance, is being compared! I wrote this small extra bit of code to provide an example where
one can see several clusters.
void randomInit_twopopulations (Points & ps, unsigned int dims,
                                unsigned int num_points)
{
    for (unsigned int j = 0; j < num_points; j++)
    {
        Point p(dims);
        double added;
        if (j < num_points/2)
            added = -5.0;
        else
            added = 5.0;
        for (unsigned int i = 0; i < dims; i++)
            p(i) = (added + rand() * (2.0) / RAND_MAX);
        // std::cout << p(i) << ' ';
        // std::cout << std::endl;
        ps.push_back(p); // assumed: Points is a vector-like container, as in the original code
    }
}
FOM: Does Mathematics Need New Axioms? Research Update
Harvey Friedman friedman at math.ohio-state.edu
Fri Feb 18 10:34:48 EST 2000
This is an update to my posting Do We Need New Axioms? Results/Conjectures
8:09PM 2/11/00.
The main conjectures remain the same (although we now think it best to use
linear growth rather than quadratic growth), but the particular weakenings
of the main conjectures that look both promising to establish and quite
strong have been improved. Also, we have added the dimensionality conjecture.
This posting is self-contained.
Let N be the set of all nonnegative integers. A multivariate function on N
is a function f such that
i) there exists k >= 1 such that the domain of f is N^k;
ii) the range of f is a subset of N.
For A containedin N, we write fA = {f(x): every coordinate of x lies in A}.
The main conjectures concern the following family of 2^2^9 statements (up
to formal Boolean equivalence):
*Let f,g be multivariate functions on N obeying the inequality c|x| <=
f(x),g(x) <= d|x|, where c,d are rational constants > 1. There exist
infinite sets A,B,C containedin N such that a given Boolean equation in
A,B,C,fA,fB,fC,gA,gB,gC holds.*
Here |x| can be taken to be any l_p norm, 0 < p <= infinity - the
inequality has the same meaning for any such norm. And Boolean equations
are simply equations between terms using the Boolean operations of union,
intersection, and complement (relative to N).
Note that there are 2^2^9 formally inequivalent (equivalently, semantically inequivalent) Boolean equations, but in this case the number can readily be cut down somewhat because of relations like A containedin B implies fA containedin fB.
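To see where the count comes from (a standard normalization, not taken from the posting): every Boolean equation t_1 = t_2 is equivalent to asserting that a single Boolean combination is empty,

\[
t_1 = t_2 \iff (t_1 \cap t_2^{c}) \cup (t_1^{c} \cap t_2) = \emptyset,
\]

and a Boolean combination of 9 set variables is determined by which of the 2^9 atoms of the free Boolean algebra on 9 generators it contains, which gives 2^{2^9} semantically distinct equations.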
THEOREM 1 (66%). There is an instance of * that is provably equivalent to
the 1-consistency of Mahlo cardinals of finite order, over ACA. Conjectures
2 and 3 for * (see below) each imply the 1-consistency of Mahlo cardinals
of finite order, over ACA.
CONJECTURE 1. Informally speaking, it is necessary and sufficient to use
Mahlo cardinals of all finite orders in order to determine the truth values
of all instances of *. In particular, every instance of * is either
refutable in RCA_0, provable in ACA, or provably equivalent to the
1-consistency of Mahlo cardinals of finite order over ACA, with all three
possibilities present. Furthermore, in the first of the three cases, the
refutation proof can be given with fewer than 100,000 symbols in RCA_0 with abbreviations.
DIMENSIONALITY CONJECTURE 2. If in *, we require that the given functions
are binary, then we obtain the same true instances.
FINITE OBSTRUCTION CONJECTURE 3. Any particular instance of * that holds
with "infinite" replaced by "arbitrarily large finite" must hold in its
original form.
Consider the following more general family of statements, indexed by n,m >=
1 and Boolean equations in (n+1)m set variables:
**Let f_1,...,f_n be multivariate functions on N obeying the inequality
c|x| <= f_1(x),...,f_n(x) <= d|x|, where c,d are rational constants > 1.
There exist infinite sets A_1,...,A_m containedin N such that a given
Boolean equation in the sets A_1,...,A_m and their images f_iA_j (1 <= i <= n, 1 <= j <= m) holds.**
EXTENDED CONJECTURES 4,5,6. Same as conjectures 1,2,3, but for **.
At this time, even the proofs of conjectures 1-3 are some distance into the
future. In order to tame these conjectures for near term results, we first
restate them slightly where we insist that the A's form a tower under
proper inclusion, and fix on the case of two functions and three sets as in
***Let f,g be multivariate functions on N obeying the inequality c|x| <=
f(x),g(x) <= d|x|, where c,d are rational constants > 1. There exist
infinite sets A properlycontainedin B properlycontainedin C containedin N
such that a given Boolean equation in A,B,C,fA,fB,fC,gA,gB,gC holds.***
And we have
THEOREM 2 (66%). There is an instance of *** that is provably equivalent to
the 1-consistency of Mahlo cardinals of finite order, over ACA. Conjectures
2 and 3 for *** imply the 1-consistency of Mahlo cardinals of finite order,
over ACA.
And we restate conjectures 1-3 using ***.
However, even the proofs of these restricted conjectures are some distance
into the future. So further taming is necessary for near term progress.
So we consider
****Let f,g be multivariate functions on N obeying the inequality c|x| <=
f(x),g(x) <= d|x|, where c,d are rational constants > 1. There exist
infinite sets A properlycontainedin B properlycontainedin C containedin
N\gC such that a given Boolean equation in A,B,C,fA,fB,fC,gA,gB,gC holds.****
And again we have
THEOREM 3 (66%). There is an instance of **** that is provably equivalent
to the 1-consistency of Mahlo cardinals of finite order, over ACA.
Conjectures 2 and 3 for **** imply the 1-consistency of Mahlo cardinals of
finite order, over ACA.
And we restate conjectures 1-3 using ****.
We have now weakened this conjecture sufficiently so that we are hopeful of
proving it. We are making considerable progress on this.
And if we get stuck with this, here is an attractive further restriction,
which should be considerably easier to handle:
*****Let f,g be multivariate functions on N obeying the inequality c|x| <=
f(x),g(x) <= d|x|, where c,d are rational constants > 1. There exist
infinite sets A properlycontainedin B properlycontainedin C containedin
N\gC such that a given Boolean equation in A,B,C,fA,fB,gB,gC holds.*****
THEOREM 4 (66%). There is an instance of ***** that is provably equivalent
to the 1-consistency of Mahlo cardinals of finite order, over ACA.
Conjectures 2 and 3 for ***** imply the 1-consistency of Mahlo cardinals of
finite order, over ACA.
Obviously, conjectures 1-3 for ***** are going to be a lot easier since we
have only to work with Boolean equations in 7 variables rather than in 9 variables.
Mathematics for decision making
Mathematics for decision making: an introduction
Frank J. Fleming, Roy Luke
Merrill, Mar 1, 1974 - Business & Economics - 321 pages
Sets and Logic 1
Sequences 27
Mathematics of Investment and Finance 61
8 other sections not shown
Homer Glen Algebra 1 Tutor
...I have a bachelor's in Mathematics, so I have used algebra for many years. I have knowledge of tips and tricks to simplify elementary algebra concepts. As an undergraduate math major, I
volunteered with a program that tutored local middle school students in math.
5 Subjects: including algebra 1, statistics, prealgebra, probability
...I will help you become good at it and help you appreciate it. I love algebra and want to make it easier and fun for those who need help. Algebra is like a game... it takes practice and a while
to learn, but once you get it... you start to like it.
13 Subjects: including algebra 1, chemistry, English, biology
I am an enthusiastic, caring, and knowledgeable educator with lots of experience and a love for helping kids and teens have success in school. I received a Master's Degree in School Counseling so
that I could better understand the many parts of the student that contribute to both struggles and succ...
15 Subjects: including algebra 1, reading, English, elementary (k-6th)
...I am available for an interview before any tutoring begins. Danti OTutor I have been a Wyzant math tutor since Sept 2009. I have 182 ratings, 179 of which are five stars and three are four stars.
18 Subjects: including algebra 1, geometry, ASVAB, GED
...I have helped friends and family with math and science for a long time. I believe in encouraging students as they make the small leaps towards bigger ones. As a young student back in the day, I
remember the early frustration but I also remember the great feeling when you say "Aha, so that's how it's done." This is what happens when you put the work in.
2 Subjects: including algebra 1, ACT Math
Generalized Fibonacci Polynomials
Yashwant K. Panwar^1, , B. Singh^2, V.K. Gupta^3
^1Mandsaur Institute of Technology, Mandsaur, India
^2School of Studies in Mathematics, Vikram University, Ujjain, India
^3Govt. Madhav Science College, Ujjain, India
In this study, we present generalized Fibonacci polynomials. We have used their Binet's formula and generating function to derive the identities. The proofs of the main theorems are based on special functions and simple algebra, and give several interesting properties involving them.
Keywords: generalized Fibonacci polynomials, Binet’s formula, generating function
Turkish Journal of Analysis and Number Theory, 2013 1 (1), pp 43-47.
DOI: 10.12691/tjant-1-1-9
Received August 22, 2013; Revised October 24, 2013; Accepted November 16, 2013
© 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
• Panwar, Yashwant K., B. Singh, and V.K. Gupta. "Generalized Fibonacci Polynomials." Turkish Journal of Analysis and Number Theory 1.1 (2013): 43-47.
• Panwar, Y. K. , Singh, B. , & Gupta, V. (2013). Generalized Fibonacci Polynomials. Turkish Journal of Analysis and Number Theory, 1(1), 43-47.
• Panwar, Yashwant K., B. Singh, and V.K. Gupta. "Generalized Fibonacci Polynomials." Turkish Journal of Analysis and Number Theory 1, no. 1 (2013): 43-47.
Import into BibTeX Import into EndNote Import into RefMan Import into RefWorks
1. Introduction
Fibonacci polynomials are of great importance in mathematics. Large classes of polynomials can be defined by Fibonacci-like recurrence relations and yield Fibonacci numbers [15]. Such polynomials, called the Fibonacci polynomials, were studied in 1883 by the Belgian mathematician Eugene Charles Catalan and the German mathematician E. Jacobsthal.
The polynomials F_n(x) studied by Catalan are defined by the recurrence F_n(x) = x F_{n-1}(x) + F_{n-2}(x) for n >= 3, with F_1(x) = 1 and F_2(x) = x.
The Fibonacci polynomials studied by Jacobsthal were defined by J_n(x) = J_{n-1}(x) + x J_{n-2}(x) for n >= 3, with J_1(x) = J_2(x) = 1.
The Pell polynomials P_n(x) are defined by P_n(x) = 2x P_{n-1}(x) + P_{n-2}(x) for n >= 2, with P_0(x) = 0 and P_1(x) = 1.
The Lucas polynomials l_n(x), originally studied in 1970 by Bicknell, are defined by l_n(x) = x l_{n-1}(x) + l_{n-2}(x) for n >= 2, with l_0(x) = 2 and l_1(x) = x.
It is well known that the Fibonacci polynomials and Lucas polynomials are closely related. Obviously, they have a deep relationship with the famous Fibonacci and Lucas sequences. Swamy [11] defined the Fibonacci polynomials and obtained some more identities for these polynomials. Hoggatt and Lind [17] make a similar "symbolic substitution" of certain sequences into the Fibonacci polynomials; they extend these results to the substitution of any recurrent sequence into any sequence of polynomials obeying a recurrence relation with polynomial coefficients. Since then many problems about the polynomials have been proposed in various issues of the Fibonacci Quarterly. Hoggatt, Philips and Leonard [16] have obtained some more identities involving Fibonacci polynomials and Lucas polynomials. A. Lupas [3] presents many interesting properties of Fibonacci and Lucas polynomials. C. Berg [4] defined Fibonacci numbers and orthogonal polynomials. S. Falcon and A. Plaza [13] defined the k-Fibonacci polynomials, the natural extension of the k-Fibonacci numbers; many of their properties admit a straightforward proof, and many relations for the derivatives of Fibonacci polynomials are proven. K. Kaygisiz and A. Sahin [10] present new generalizations of the Lucas numbers by matrix representation, using generalized Lucas polynomials. G. Y. Lee and M. Asci [8] consider the Pascal matrix and define a new generalization of Fibonacci polynomials called (p, q)-Fibonacci polynomials. They obtain combinatorial identities, and by using the Riordan method they get factorizations of the Pascal matrix involving (p, q)-Fibonacci polynomials. Many authors have studied Fibonacci polynomials. In this paper, we present a generalization of Fibonacci and Lucas polynomials by changing the initial terms, while the recurrence relation is preserved.
2. Generalized Fibonacci Polynomials
The generalized Fibonacci polynomials are defined by the classical recurrence
F_n(x) = x F_{n-1}(x) + F_{n-2}(x), n >= 2,
with initial terms depending on a parameter s. If s = 1, then we obtain the classical Fibonacci polynomial sequence.
It is well known that the Fibonacci polynomials and Lucas polynomials are closely related. The generalized Lucas polynomials are defined by the same recurrence, again with s-dependent initial terms. If s = 1, then we obtain the classical Lucas polynomial sequence.
In the 19th century, the French mathematician Binet devised two remarkable analytical formulas for the Fibonacci and Lucas numbers. In our case, Binet's formula allows us to express the generalized Fibonacci polynomials in terms of the roots of the characteristic equation t^2 = x t + 1 associated with the recurrence.
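As an illustration, in the classical case s = 1 the roots of the characteristic equation and the two Binet forms are:

\[
\alpha(x) = \frac{x + \sqrt{x^2 + 4}}{2}, \qquad \beta(x) = \frac{x - \sqrt{x^2 + 4}}{2},
\]

\[
F_n(x) = \frac{\alpha(x)^n - \beta(x)^n}{\alpha(x) - \beta(x)}, \qquad l_n(x) = \alpha(x)^n + \beta(x)^n.
\]

The generalized versions carry the parameter s through the initial terms.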
3. Properties of Generalized Fibonacci Polynomials
Theorem 1: (Binet's formula). The nth generalized Fibonacci polynomial is given in closed form in terms of α and β, where α and β are the roots of the characteristic equation (3).
Proof: We use the Principle of Mathematical Induction (PMI) on n. It is clear that the result is true for n = 0 and n = 1 by hypothesis. Assume that it is true for all i such that 0 <= i <= r + 1. Then, from the definition of the generalized Fibonacci polynomials and from equation (3.1), the result follows for r + 2.
Thus, the formula is true for any positive integer n.
Theorem 2: (Binet's formula). The nth generalized Lucas Polynomials is given by
Proposition 3: For any integer n ≥ 1,
Proof: Since
now, multiplying both sides of these equations by
Proposition 4: For any integer n ≥ 1,
Proof: By using Eq. (3.1) in the R.H.S. of Eq. (3.5) and taking into account that
Proposition 5: For any integer n,
Proof: From the Binet’s formula of generalized Fibonacci Polynomials
If n is even,
If n is odd,
Let us denote
Then the previous formula becomes:
4. Sums of Generalized Fibonacci Polynomials
In this section, we study the sums of generalized Fibonacci Polynomials. This enables us to give in a straightforward way several formulas for the sums of such Polynomials.
Lemma 6: For fixed integers p, q with
Proof: From the Binet’s formula of generalized Fibonacci and Lucas Polynomials,
then, the equation becomes,
Proposition 7: For fixed integers p, q with
Proof: Applying Binet’s formula of generalized Fibonacci Polynomials,
Corollary 7.1: Sum of odd generalized Fibonacci polynomials
If p=2m+1, then Eq.(3.9) is
For example
(1) If m=0 then p=1
(i) For q=0:
(2) If m=1 then p=3
(i) For q=0:
(ii) For q=1:
(iii) For q=2:
(3) If m=2 then p=5
(i) For q=0:
(ii) For q=1:
(iii) For q=2:
(iv) For q=3:
(v) For q=4:
Corollary 7.2: Sum of even generalized Fibonacci polynomials
If p=2m, then Eq.(3.9) is
For example
(1) If m=1 then p=2
(i) For q=0:
(ii) For q=1:
(iii) For q=2:
(2) If m=2 then p=4
(i) For q=0:
(ii) For q=1:
(iii) For q=2:
(iv) For q=3:
(v) For q=4:
(3) If m=3 then p=6
(i) For q=0:
(ii) For q=1:
(iii) For q=2:
(iv) For q=3:
Proposition 8: For fixed integers p, q with
Proof: Applying Binet's formula of the generalized Fibonacci polynomials, the proof is clear. For different values of p and q:
5. Confluent Hypergeometric Identities of Generalized Fibonacci Polynomials
A. Lupas [3] presents a guide of Fibonacci and Lucas polynomials and defines Fibonacci and Lucas polynomials in hypergeometric form. K. Dilcher [9] defined Fibonacci numbers in terms of hypergeometric functions. C. Berg [4] defined Fibonacci numbers and orthogonal polynomials.
In this section, we establish some properties of generalized Fibonacci polynomials in terms of the confluent hypergeometric function. Proofs of the theorems are based on special functions and simple algebra, and give several interesting identities involving them.
Theorem 9: If
Proof (i): Since the generating function of the generalized Fibonacci Polynomials is,
Proof (ii): Since the generating function of the generalized Lucas Polynomials is,
We can easily get the following recurrence relation by using (4.1) and (4.2)
Theorem 10: If
Generating functions are very helpful in finding relations for sequences of integers. Some authors found miscellaneous identities for the Fibonacci polynomials and Lucas polynomials by manipulating their generating functions. Our approach is rather different in this section.
Corollary 10.1:
Corollary 10.3:
Proposition 11: Prove that
Proof: Using the generating function, the proof is clear.
6. Conclusion
We have derived many fundamental properties in this paper. We describe sums of generalized Fibonacci polynomials. This enables us to give in a straightforward way several formulas for the sums of such polynomials. These identities can be used to develop new identities of polynomials. Also we describe some confluent hypergeometric identities of generalized Fibonacci and Lucas polynomials. In Theorem 10, c is an arbitrary constant of integration, and we give several interesting identities involving it.
[1] A. F. Horadam, “Extension of a synthesis for a class of polynomial sequences,” The Fibonacci Quarterly, vol. 34; 1966, no. 1, 68-74.
[2] Nalli and P. Haukkanen, “On generalized Fibonacci and Lucas polynomials,” Chaos, Solitons and Fractals, vol. 42; 2009, no. 5, 3179-3186.
[3] Alexandru Lupas, A Guide of Fibonacci and Lucas Polynomial, Octagon Mathematics Magazine, vol. 7(1); 1999, 2-12.
[4] Christian Berg, Fibonacci numbers and orthogonal polynomials, Arab Journal of Mathematical Sciences, vol.17; 2011, 75-88.
[5] E. Artin, Collected Papers, Ed. S. Lang and J. T. Tate, New York, Springer-Verlag, 1965.
[6] E. D. Rainville, Special Function, Macmillan, New York, 1960.
[7] G. S. Cheon, H. Kim, and L. W. Shapiro, “A generalization of Lucas polynomial sequence,” Discrete Applied Mathematics, vol. 157; 2009, no. 5, 920-927.
[8] G. Y. Lee and M. Asci, Some Properties of the (p, q)-Fibonacci and (p, q)-Lucas Polynomials, Journal of Applied Mathematics, Vol. 2012, Article ID 264842, 18 pages, 2012.
[9] Karl Dilcher, “Hypergeometric functions and Fibonacci numbers”, The Fibonacci Quarterly, vol. 38; 2000, no. 4, 342-363.
[10] K. Kaygisiz and A. Sahin, New Generalizations of Lucas Numbers, Gen. Math. Notes, Vol. 10; 2012, no. 1, 63-77.
[11] M. N. S. Swamy, “Problem B – 74”, The Fibonacci Quarterly, vol. 3; 1965, no. 3, 236.
[12] N. Robbins, Vieta's triangular array and a related family of polynomials, Internat. J. Math. & Math. Sci., Vol. 14; 1991, no. 2, 239-244.
[13] S. Falcon and A. Plaza, On k-Fibonacci sequences and polynomials and their derivatives, Chaos, Solitons and Fractals 39; 2009. 1005-1019.
[14] S. Vajda, Fibonacci & Lucas numbers, and the Golden Section. Theory and applications, Chichester: Ellis Horwood, 1989.
[15] T. Koshy, Fibonacci and Lucas Numbers with Applications, Toronto, New York, NY, USA, 2001.
[16] V. E. Hoggatt, Jr., Leonard, H. T. Jr. and Philips, J. W., Twenty four Master Identities, The Fibonacci Quarterly, Vol. 9; 1971, no. 1, 1-17.
[17] V. E. Hoggatt, Jr. and D. A. Lind, Symbolic Substitutions in to Fibonacci Polynomials, The Fibonacci Quarterly, Vol. 6; 1968, no. 5, 55-74.
MathFiction: Mad Destroyer (Fletcher Pratt)
a list compiled by Alex Kasman (College of Charleston)
Mad Destroyer (1930)
Fletcher Pratt
(click on names to see more mathematical fiction by the same author)
Contributed by Vijay Fafat
The story is about a mathematician/astronomer who has discovered an exact solution to the multi-body problem in gravitation i.e. a formula which can easily calculate the positions and velocities of N
bodies moving under mutual gravitation, for all N >= 2. Based on this formulation, he solves for the position of the asteroid, Eros, moving under the influence of Sun, Earth and Venus and predicts
that it will crash into the sun in 3 years with enough force that a small outer layer of the Sun will get blown off, destroying all life on Earth. Some melodrama follows and people now wait for the
predicted date of the denouement.
There is a very good description of the 3-body problem in layman's terms and the scientist acknowledges Karl Sundman's 3-body solution as well ("Sundman of Helsingfors"), with the note that Sundman's
solution is not very user-friendly for calculations in the (computer-less) real world. The author does make a mistake when his scientist claims that a collision between Eros and Earth can cause only
a few thousand people to die in the immediate vicinity of the collision but in general, a very nicely written story.
In addition to underestimating the damage caused by the collision of an asteroid with the Earth, as Vijay mentions, the author overestimates the consequences of a collision with the sun (which is
lucky for us since I suspect that things crash into the sun relatively often).
Some may be interested in the religious aspects of the story. In particular, the astronomer attributes his discovery of an exact solution of the n-body problem to God and interprets the imminent collision with Eros as "Judgment Day".
Published in Science Wonder Quarterly Spring 1930.
Works Similar to Mad Destroyer
According to my `secret formula', the following works of mathematical fiction are similar to this one:
1. N Day by Philip Latham
2. The Adventure of the Russian Grave by William Barton / Michael Capobianco
3. The Brink of Infinity by Stanley G. Weinbaum
4. The Mathematics of Magic by L. Sprague de Camp / Fletcher Pratt
5. The Gostak and the Doshes by Miles J. Breuer (M.D.)
6. Blowups Happen by Robert A. Heinlein
7. The Devouring Tide by John Russell Fearn (under the pseudonym Polton Cross)
8. The Star by Herbert George Wells
9. The Hyperboloid of Engineer Garin by Aleksei Nikolaevich Tolstoi
10. The Imaginary by Isaac Asimov
Genre Science Fiction,
Motif Future Prediction through Math, Religion,
Topic Mathematical Physics, Real Mathematics,
Medium Short Stories,
Texturing a triangle strip of varying width
I'm working on a project where I'd like to render a strip of variable width, and texture it in such a way that the texture changes width (stretches/shrinks) along with the strip. The first way I
could think of was to specify a triangle strip and set the texture coordinates at each vertex to (percent, 0) or (percent, 1), depending on whether the vertex is on the bottom or the top of the
strip, and where percent is a variable from 0.0 to 1.0 that represents how far along the strip the vertex is.
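For concreteness, the setup just described might look like this (a minimal sketch with hypothetical names; bottom[] and top[] hold the two edge polylines, each vertex a float[2]):

// Emit a strip of n segments, mapping the texture s-coordinate 0..1 along it.
glBegin(GL_TRIANGLE_STRIP);
for (int i = 0; i <= n; ++i) {
    float percent = (float)i / (float)n;      // how far along the strip
    glTexCoord2f(percent, 0.0f); glVertex2fv(bottom[i]);
    glTexCoord2f(percent, 1.0f); glVertex2fv(top[i]);
}
glEnd();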
Unfortunately, I've run into a snag! Because the triangles in the strip are interpolating their "varying"s independently, each triangle ends up interpolating the texture coordinates in such a way
that the texture warping is not continuous in each segment of the triangle strip. The attached image shows what I'm talking about. Within triangles (p0, p1, p3) and (p0, p3, p2), the texture is
interpolated correctly, but in the quad (p0, p1, p2, p3), it's incorrect.
Anyone have any thoughts on how to do this the right way?
David Hilbert (January 23, 1862 – February 14, 1943) was a German mathematician, recognized as one of the most influential and universal mathematicians of the 19th and early 20th centuries. He
invented or developed a broad range of fundamental ideas, in invariant theory, the axiomatization of geometry, and with the notion of Hilbert space, one of the foundations of functional analysis.
Hilbert adopted and warmly defended Georg Cantor's set theory and transfinite numbers. A famous example of his leadership in mathematics is his 1900 presentation of a collection of problems that set
the course for much of the mathematical research of the 20th century.
Hilbert and his students supplied significant portions of the mathematical infrastructure required for quantum mechanics and general relativity. He is also known as one of the founders of proof
theory, mathematical logic and the distinction between mathematics and metamathematics.
Hilbert, the first of two children and only son of Otto and Maria Therese (Erdtmann) Hilbert, was born either in Königsberg (according to Hilbert's own statement) or in Wehlau (today Znamensk, Kaliningrad Oblast) near Königsberg, where his father was occupied at the time of his birth, in the Province of Prussia. In the fall of 1872 he entered the Friedrichskolleg (the same school that Immanuel Kant had attended 140 years before), but after an unhappy duration he transferred (fall 1879) to and graduated from (spring 1880) the more science-oriented Wilhelm Gymnasium. Upon graduation he enrolled (autumn 1880) at the University of Königsberg, the "Albertina". In the spring of 1882 Hermann Minkowski (two years younger than Hilbert and also a native of Königsberg, but so talented he had graduated early from his gymnasium and gone to Berlin for three semesters) returned to Königsberg and entered the university. "Hilbert knew his luck when he saw it. In spite of his father's disapproval, he soon became friends with the shy, gifted Minkowski." In 1884 Adolf Hurwitz arrived from Göttingen as an Extraordinarius, i.e., an associate professor. An intense and fruitful scientific exchange between the three began, and Minkowski and Hilbert especially would exercise a reciprocal influence over each other at various times in their scientific careers. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen ("On the invariant properties of special binary forms, in particular the spherical harmonic functions").
Hilbert remained at the University of Königsberg as a professor from 1886 to 1895. In 1892, Hilbert married Käthe Jerosch (1864–1945), "the daughter of a Königsberg merchant, an outspoken young lady with an independence of mind that matched his own". While at Königsberg they had their one child, Franz Hilbert (1893–1969). In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Chairman of Mathematics at the University of Göttingen, at that time the best research center for mathematics in the world, where he remained for the rest of his life.
His son Franz would suffer his entire life from an (undiagnosed) mental illness, his inferior intellect a terrible disappointment to his father and this tragedy a matter of distress to the mathematicians and students at Göttingen. Sadly, Minkowski — Hilbert's "best and truest friend" — would die prematurely of a ruptured appendix in 1909.
The Göttingen school
Among the students of Hilbert there were Hermann Weyl, the chess champion Emanuel Lasker, Ernst Zermelo, and Carl Gustav Hempel. John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a social circle of some of the most important mathematicians of the 20th century, such as Emmy Noether and Alonzo Church.
Among his 69 Ph.D. students in Göttingen were many who later became famous mathematicians, including (with date of thesis): Otto Blumenthal (1898), Felix Bernstein (1901), Hermann Weyl (1908),
Richard Courant (1910), Erich Hecke (1910), Hugo Steinhaus (1911), Wilhelm Ackermann (1925). Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen, the leading mathematical journal of
the time.
Later years
Hilbert lived to see the Nazis purge many of the prominent faculty members at the University of Göttingen in 1933. Among those forced out were Hermann Weyl, who had taken Hilbert's chair when he retired in 1930, Emmy Noether and Edmund Landau. One of those who had to leave Germany was Paul Bernays, Hilbert's collaborator in mathematical logic, and co-author with him of the important book Die Grundlagen der Mathematik (which eventually appeared in two volumes, in 1934 and 1939). This was a sequel to the Hilbert–Ackermann book Principles of Theoretical Logic from 1928.
About a year later, he attended a banquet and was seated next to the new Minister of Education, Bernhard Rust. Rust asked, "How is mathematics in Göttingen now that it has been freed of the Jewish influence?" Hilbert replied, "Mathematics in Göttingen? There is really none any more."
By the time Hilbert died in 1943, the Nazis had nearly completely restructured the university, many of the former faculty being either Jewish or married to Jews. Hilbert's funeral was attended by
fewer than a dozen people, only two of whom were fellow academics, among them Arnold Sommerfeld, a theoretical physicist and also a son of the City of Königsberg. News of his death only became known to the wider world six months after he had died.
On his tombstone, at Göttingen, one can read his epitaph, the famous lines he had spoken at the end of his retirement address to the Society of German Scientists and Physicians in the fall of 1930:
Wir müssen wissen.
Wir werden wissen.
As translated into English the inscriptions read:
We must know.
We will know.
(Ironically, the day before Hilbert pronounced this phrase at the 1930 annual meeting of the Society of German Scientists and Physicians, Kurt Gödel—in a roundtable discussion during the Conference on Epistemology held jointly with the Society meetings—tentatively announced the first expression of his (now-famous) incompleteness theorem, the news of which would make Hilbert "somewhat angry".)
The finiteness theorem
Hilbert's first work on invariant functions led him to the demonstration in 1888 of his famous finiteness theorem. Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach. The attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated Hilbert's basis theorem: showing the existence of a finite set of generators for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, it was not a constructive proof — it did not display "an object" — but rather, it was an existence proof that relied on use of the Law of Excluded Middle in an infinite extension.
Hilbert sent his results to the Mathematische Annalen. Gordan, the house expert on the theory of invariants for the Mathematische Annalen, was not able to appreciate the revolutionary nature of
Hilbert's theorem and rejected the article, criticizing the exposition because it was insufficiently comprehensive. His comment was:
Das ist nicht Mathematik. Das ist Theologie.
(This is not Mathematics. This is Theology.)
Klein, on the other hand, recognized the importance of the work, and guaranteed that it would be published without any alterations. Encouraged by Klein and by the comments of Gordan, Hilbert in a
second article extended his method, providing estimations on the maximum degree of the minimum set of generators, and he sent it once more to the Annalen. After having read the manuscript, Klein
wrote to him, saying:
Without doubt this is the most important work on general algebra that the Annalen has ever published.
Later, after the usefulness of Hilbert's method was universally recognized, Gordan himself would say:
I have convinced myself that even theology has its merits.
For all his successes, the nature of his proof stirred up more trouble than Hilbert could imagine at the time. Although Kronecker had conceded, Hilbert would later respond to others' similar
criticisms that "many different constructions are subsumed under one fundamental idea" — in other words (to quote Reid): "Through a proof of existence, Hilbert had been able to obtain a
construction"; "the proof" (i.e. the symbols on the page) was "the object". Not all were convinced. While Kronecker would die soon after, his constructivist banner would be carried forward in full
cry by the young Brouwer and his developing intuitionist "school", much to Hilbert's torment in his later years. Indeed Hilbert would lose his "gifted pupil" Weyl to intuitionism — "Hilbert was
disturbed by his former student's fascination with the ideas of Brouwer, which aroused in Hilbert the memory of Kronecker". Brouwer the intuitionist in particular raged against the use of the Law of
Excluded Middle over infinite sets (as Hilbert had used it). Hilbert would respond:
" 'Taking the Principle of the Excluded Middle from the mathematician ... is the same as ... prohibiting the boxer the use of his fists.'
"The possible loss did not seem to bother Weyl.
Axiomatization of geometry
The text Grundlagen der Geometrie (tr.: Foundations of Geometry), published by Hilbert in 1899, proposes a formal set, Hilbert's axioms, substituting for the traditional axioms of Euclid. They avoid weaknesses identified in those of Euclid, whose works at the time were still used textbook-fashion. Independently and contemporaneously, a 19-year-old American student named Robert Lee Moore published an equivalent set of axioms. Some of the axioms coincide, while some of the axioms in Moore's system are theorems in Hilbert's and vice versa.
Hilbert's approach signaled the shift to the modern axiomatic method. Axioms are not taken as self-evident truths. Geometry may treat things, about which we have powerful intuitions, but it is not
necessary to assign any explicit meaning to the undefined concepts. The elements, such as point, line, plane, and others, could be substituted, as Hilbert says, by tables, chairs, glasses of beer and
other such objects. It is their defined relationships that are discussed.
Hilbert first enumerates the undefined concepts: point, line, plane, lying on (a relation between points and planes), betweenness, congruence of pairs of points, and congruence of angles. The axioms
unify both the plane geometry and solid geometry of Euclid in a single system.
The 23 Problems
He put forth a most influential list of 23 unsolved problems at the International Congress of Mathematicians in Paris in 1900. This is generally reckoned the most successful and deeply considered
compilation of open problems ever to be produced by an individual mathematician.
After re-working the foundations of classical geometry, Hilbert could have extrapolated to the rest of mathematics. His approach differed, however, from the later 'foundationalist' Russell-Whitehead
or 'encyclopedist' Nicolas Bourbaki, and from his contemporary Giuseppe Peano. The mathematical community as a whole could enlist in problems, which he had identified as crucial aspects of the areas
of mathematics he took to be key.
The problem set was launched as a talk "The Problems of Mathematics" presented during the course of the Second International Congress of Mathematicians held in Paris. Here is the introduction of the
speech that Hilbert gave:
Who among us would not be happy to lift the veil behind which is hidden the future; to gaze at the coming developments of our science and at the secrets of its development in the centuries to
come? What will be the ends toward which the spirit of future generations of mathematicians will tend? What methods, what new facts will the new century reveal in the vast and rich field of
mathematical thought?
He presented fewer than half the problems at the Congress, which were published in the acts of the Congress. In a subsequent publication, he extended the panorama, and arrived at the formulation of
the now-canonical 23 Problems of Hilbert. The full text is important, since the exegesis of the questions still can be a matter of inevitable debate, whenever it is asked how many have been solved.
Some of these were solved within a short time. Others have been discussed throughout the 20th century, with a few now taken to be unsuitably open-ended to come to closure. Some even continue to this
day to remain a challenge for mathematicians.
In an account that had become standard by the mid-century, Hilbert's problem set was also a kind of manifesto that opened the way for the development of the formalist school, one of three major schools of mathematics of the 20th century. According to the formalist, mathematics is a game devoid of meaning in which one plays with symbols devoid of meaning according to formal rules which are agreed upon in advance. It is therefore an autonomous activity of thought. There is, however, room to doubt whether Hilbert's own views were simplistically formalist in this sense.
Hilbert's program
In 1920 he proposed explicitly a research project (in metamathematics, as it was then termed) that became known as Hilbert's program. He wanted mathematics to be formulated on a solid and complete logical foundation. He believed that in principle this could be done, by showing that:
1. all of mathematics follows from a correctly-chosen finite system of axioms; and
2. that some such axiom system is provably consistent through some means such as the epsilon calculus.
He seems to have had both technical and philosophical reasons for formulating this proposal. It affirmed his dislike of what had become known as the ignorabimus, still an active issue in his time in
German thought, and traced back in that formulation to Emil du Bois-Reymond.
This program is still recognizable in the most popular philosophy of mathematics, where it is usually called formalism. For example, the Bourbaki group adopted a watered-down and selective version of
it as adequate to the requirements of their twin projects of (a) writing encyclopedic foundational works, and (b) supporting the axiomatic method as a research tool. This approach has been successful
and influential in relation with Hilbert's work in algebra and functional analysis, but has failed to engage in the same way with his interests in physics and logic.
Gödel's work
Hilbert and the talented mathematicians who worked with him in his enterprise were committed to the project. His attempt to support axiomatized mathematics with definitive principles, which could
banish theoretical uncertainties, was however to end in failure.
Gödel demonstrated that any non-contradictory formal system, which was comprehensive enough to include at least arithmetic, cannot demonstrate its completeness by way of its own axioms. In 1931 his
incompleteness theorem showed that Hilbert's grand plan was impossible as stated. The second point cannot in any reasonable way be combined with the first point, as long as the axiom system is
genuinely finitary.
Nevertheless, the subsequent achievements of proof theory at the very least clarified consistency as it relates to theories of central concern to mathematicians. Hilbert's work had started logic on
this course of clarification; the need to understand Gödel's work then led to the development of recursion theory and then mathematical logic as an autonomous discipline in the 1930s. The basis for
later theoretical computer science, in Alonzo Church and Alan Turing also grew directly out of this 'debate'.
Functional analysis
Around 1909, Hilbert dedicated himself to the study of differential and integral equations; his work had direct consequences for important parts of modern functional analysis. In order to carry out these studies, Hilbert introduced the concept of an infinite-dimensional Euclidean space, later called Hilbert space. His work in this part of analysis provided the basis for important contributions to the mathematics of physics in the next two decades, though from an unanticipated direction. Later on, Stefan Banach amplified the concept, defining Banach spaces. Hilbert space is the most important single idea in the area of functional analysis that grew up around it during the 20th century.
Until 1912, Hilbert was almost exclusively a "pure" mathematician. When planning a visit from Bonn, where he was immersed in studying physics, his fellow mathematician and friend Hermann Minkowski joked he had to spend 10 days in quarantine before being able to visit Hilbert. In fact, Minkowski seems responsible for most of Hilbert's physics investigations prior to 1912, including their joint seminar in the subject in 1905.
In 1912, three years after his friend's death, Hilbert turned his focus to the subject almost exclusively. He arranged to have a "physics tutor" for himself. He started studying kinetic gas theory
and moved on to elementary radiation theory and the molecular theory of matter. Even after the war started in 1914, he continued seminars and classes where the works of Einstein and others were
followed closely.
Hilbert invited Einstein to Göttingen to deliver a week of lectures in June-July 1915 on general relativity and his developing theory of gravity. The exchange of ideas led to the final form of the
field equations of General Relativity, namely the Einstein field equations and the Einstein-Hilbert action. In spite of the fact that Einstein and Hilbert never engaged in a public priority dispute,
there has been some dispute about the discovery of the field equations.
Additionally, Hilbert's work anticipated and assisted several advances in the mathematical formulation of quantum mechanics. His work was a key aspect of Hermann Weyl and John von Neumann's work on
the mathematical equivalence of Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave equation, and his namesake Hilbert space plays an important part in quantum theory. In 1926 von Neumann showed that if atomic states were understood as vectors in Hilbert space, then they would correspond with both Schrödinger's wave function theory and Heisenberg's matrices.
Throughout this immersion in physics, Hilbert worked on putting rigor into the mathematics of physics. While highly dependent on higher math, the physicist tended to be "sloppy" with it. To a "pure"
mathematician like Hilbert, this was both "ugly" and difficult to understand. As he began to understand the physics and how the physicists were using mathematics, he developed a coherent mathematical
theory for what he found, most importantly in the area of integral equations. When his colleague Richard Courant wrote the now classic Methods of Mathematical Physics including some of Hilbert's
ideas, he added Hilbert's name as author even though Hilbert had not directly contributed to the writing. Hilbert said "Physics is too hard for physicists", implying that the necessary mathematics
was generally beyond them; the Courant-Hilbert book made it easier for them.
Number theory
Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He disposed of Waring's problem in the wide sense. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area.
He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution is seen in the names of the Hilbert class field and the Hilbert symbol of local
class field theory. Results on them were mostly proved by 1930, after breakthrough work by Teiji Takagi that established him as Japan's first mathematician of international stature.
Hilbert did not work in the central areas of analytic number theory, but his name has become known for the Hilbert–Pólya conjecture, for reasons that are anecdotal.
Here's the question you clicked on:
Y= -16x^2+190+0 A= B= C= graph when done.
Westfield, NJ Geometry Tutor
Find a Westfield, NJ Geometry Tutor
...I was a tennis instructor for two consecutive summers at a pool club. I've been studying and practicing yoga now (specifically vinyasa style) for two years. I teach yoga classes to students
grade K-4 three times per week.
22 Subjects: including geometry, reading, algebra 1, ESL/ESOL
...I scored 790M/780W/760CR on my SATs; I am a National Merit Finalist for the PSAT, and I earned perfect 5s on: Physics C E&M, Calculus BC, Physics C Mech, Biology, Psychology, Physics B, and
English Lang. I am a recipient of College Board's National AP Scholar award, and I scored an 800 on the SA...
26 Subjects: including geometry, English, calculus, physics
...I am patient and cater to my students needs. I use diagrams and a systematic approach towards solving problems in my teaching, which allows my students to grasp the deeper concepts, rather
than just solve the problem. My passion is to work with students of all ages (mostly high school and colle...
39 Subjects: including geometry, chemistry, writing, physics
...My goal is to not only treat the symptom (single test performance) but “cure” their difficulties through motivation and reassessing their foundational understanding. I am able to do this
because of my interdisciplinary background and can work with students who are pursuing almost all career path...
34 Subjects: including geometry, chemistry, physics, calculus
...I've been playing the saxophone for 10+ years. I began studying in fifth grade, via the alto saxophone. By seventh grade, I was playing both the tenor and baritone sax and participating in an
advanced jazz ensemble.
11 Subjects: including geometry, reading, writing, algebra 1
Methods for Synthesis of Multiple-Input Translinear Element Networks
Abstract (Summary)
Translinear circuits are circuits in which the exponential relationship between the output current and input voltage of a circuit element is exploited to realize various algebraic or differential
equations. This thesis is concerned with a subclass of translinear circuits, in which the basic translinear element, called a multiple-input translinear element (MITE), has an output current that is
exponentially related to a weighted sum of its input voltages. MITE networks can be used for the implementation of the same class of functions as traditional translinear circuits. The implementation
of algebraic or (algebraic) differential equations using MITEs can be reduced to the implementation of the product-of-power-law (POPL) relationships, in which an output is given by the product of
inputs raised to different powers. Hence, the synthesis of POPL relationships, and their optimization with respect to the relevant cost functions, is very important in the theory of MITE networks. In
this thesis, different constraints on the topology of POPL networks that result in desirable system behavior are explored and different methods of synthesis, subject to these constraints, are
developed. The constraints are usually conditions on certain matrices of the network, which characterize the weights in the relevant MITEs. Some of these constraints are related to the uniqueness of
the operating point of the network and the stability of the network. Conditions that satisfy these constraints are developed in this work. The cost functions to be minimized are the number of MITEs
and the number of input gates in each MITE. A complete solution to POPL network synthesis is presented here that minimizes the number of MITEs first and then minimizes the number of input gates to
each MITE. A procedure for synthesizing POPL relationships optimally when the number of gates is minimal, i.e., 2, has also been developed here for the single-output case. A MITE structure that produces the maximum number of functions with minimal reconfigurability is developed for use in MITE field-programmable analog arrays. The extension of these constraints to the synthesis of linear filters is also explored, the constraint here being that the filter network should have a unique operating point in the presence of nonidealities. Synthesis examples presented here include nonlinear functions like the arctangent and the Gaussian function which find application in analog implementations of particle filters. Synthesis of dynamical systems is presented here using the examples of a Lorenz system and a sinusoidal oscillator. The procedures developed here provide a structured way to automate the synthesis of nonlinear algebraic functions and differential equations using MITEs.
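In symbols (a paraphrase of the abstract added for readability, with $I_s$ and $U_T$ as generic scaling constants rather than notation from the thesis): a MITE realizes
$I_{\mathrm{out}} = I_s \exp\!\left( \frac{\sum_i w_i V_i}{U_T} \right),$
so a product-of-power-law relationship
$y = \lambda \prod_i x_i^{p_i} \quad\Longleftrightarrow\quad \log y = \log \lambda + \sum_i p_i \log x_i$
becomes a linear relation among logarithms, which the exponential elements implement directly through their weights.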
Bibliographical Information:
Advisor:Anderson, David; Hasler, Paul; McClellan, James; Minch, Bradley; Habetler, Thomas
School:Georgia Institute of Technology
School Location:USA - Georgia
Source Type:Master's Thesis
Keywords:electrical and computer engineering
Date of Publication:08/24/2007
Please help, Finding f o g and g o f ?
Hello all, I need the answer to the following problem, I'd very much appreciate it!
Find a) $f \circ g$ and b) $g \circ f$ given $f(x) = \sqrt{x+4}$ and $g(x) = x^2 - 4$.
If you could give me the answer and briefly go over the steps in finding it, I'd be glad!
Re: Please help, Finding f o g and g o f ?
Do you understand what $f \circ g$ and $g \circ f$ mean? It is easy... just put the other function in the place of x.
I.e. for $g \circ f(x)$: take g(x) as the function and, instead of x, substitute the whole f(x) as it is. The same goes for $f \circ g$. Please revise the chapter on composite functions.
Re: Please help, Finding f o g and g o f ?
If, for example, $f(x)= 3x^2- x+ 6$ and $g(x)= x^3+ 2x$ then $f \circ g(x)= f(x^3+ 2x)= 3(x^3+ 2x)^2- (x^3+ 2x)+ 6$ and $g \circ f(x)= g(3x^2- x+ 6)= (3x^2- x+ 6)^3+ 2(3x^2- x+ 6)$.
Re: Please help, Finding f o g and g o f ?
bump. Are both f o g and g o f equal to x?
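For completeness, worked directly from the functions in the original post (this check is added, not quoted from the thread):
$f \circ g(x) = \sqrt{(x^2 - 4) + 4} = \sqrt{x^2} = |x|, \qquad g \circ f(x) = \left(\sqrt{x+4}\right)^2 - 4 = x \ \text{ for } x \ge -4.$
So $g \circ f(x) = x$ on its domain, while $f \circ g(x) = |x|$, which equals $x$ only for $x \ge 0$.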
All Implemented Interfaces:
Serializable, Cloneable
Direct Known Subclasses:
BinaryInvariant, DiffDummyInvariant, DummyInvariant, Equality, Joiner, TernaryInvariant, UnaryInvariant
public abstract class Invariant
extends Object
implements Serializable, Cloneable
Base implementation for Invariant objects. Intended to be subclassed but not to be directly instantiated. Rules/assumptions for invariants:
For each program point's set of VarInfos, there exists at most one invariant of its type. For example, between variables a and b at PptTopLevel T, there will not be two instances of invariant I(a, b).
Bensalem Calculus Tutor
Find a Bensalem Calculus Tutor
...There is a big difference between a student striving for 700s on the SAT and one hoping to reach the 500s. Likewise, a student struggling with Algebra I is a far cry from one going for an A in
honors pre-calculus. I am comfortable and experienced with all levels of students.
23 Subjects: including calculus, English, geometry, statistics
...This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. I am especially personable and I know I have the ability to inspire
students to have success beyond their expectations especially with the creative method I use for teachin...
16 Subjects: including calculus, Spanish, physics, algebra 1
...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just
before the “big” exam, for students who pull “all nighters”. One quick note about my cancellation policy...
14 Subjects: including calculus, physics, geometry, ASVAB
...My GPA is a 3.0, and I can tutor in secondary education and post secondary education. I specialize in Chemistry, Biology, and Mathematics. The thing that separates me from the other tutors is
that I will teach you the what, how, why, and when of everything science.
18 Subjects: including calculus, chemistry, biology, algebra 1
...I took World Religions course, a 3 credit college course, and passed with a B+ at Bucks County Community College. I completed and received an A grade in a 3 credit college course entitled, "The
Art of Public Speaking". I learned all of the key skills needed to successfully speak in front of lar...
18 Subjects: including calculus, English, writing, accounting
Plane curves
Space curves
Curvature of curves

This worksheet contains tools to create animated pictures of plane curves or space curves together with their Frenet coordinate system.

Tool to create animated pictures of plane curves together with their
- tangent vector,
- normal vector,
- "acceleration vector" (second derivative),
- osculating circle,
- curvature function.

Just place the cursor on the red word "restart" below and press enter. If you want to create a picture of your own curve you can change the definitions marked with "(can be modified)" below. To start the animations, right-click in the graphic and choose Animation > Play.

restart: with(plots): with(plottools):   # place the cursor on the red word "restart" and press enter
Name := "Lissajous-Curve":   # the name of the curve (can be modified to plot a different curve)
x(t) := cos(3*t):            # x-coordinate of the curve (can be modified to plot a different curve)
y(t) := sin(2*t):            # y-coordinate of the curve (can be modified to plot a different curve)
Length := 4.86*Pi:           # the (approximate) total length of the curve (can be modified to plot a longer curve)
n := 100:                    # the number of frames of the animated motion (can be modified)

Note: it is not necessary to give the curve parametrized by arc length, that is, such that it evolves with constant velocity. The algorithm below tries to create frames of the motion which simulate movement with constant velocity by increasing the parameter "t" in steps chosen so that the steps of the arc length approximately equal "Length/n". Since this is achieved only approximately, the specified value of "Length" will only be the approximate total length of the curve.

c(t) := <x(t), y(t)>:        # the curve as a function of t with values in R^2
dx(t) := diff(x(t), t):  dy(t) := diff(y(t), t):   # the derivatives with respect to "t"
dc(t) := <dx(t), dy(t)>:     # the tangent vector to the curve at time "t"
v(t) := simplify(sqrt(dx(t)^2 + dy(t)^2)):   # the velocity; it equals the length of the tangent vector "dc(t)"
T(t) := simplify(dc(t)/v(t)):                # the tangent unit vector, normalized to length 1
N(t) := simplify(<-dy(t), dx(t)>/v(t)):      # the normal unit vector
ddx(t) := diff(x(t), [t$2]):  ddy(t) := diff(y(t), [t$2]):   # the second derivatives with respect to "t"
kappa(t) := simplify((dx(t)*ddy(t) - ddx(t)*dy(t))/v(t)^3):  # the curvature at time "t"
K(t) := simplify(kappa(t)*N(t)):   # the "acceleration vector" (the second derivative of the curve with respect to the arc length)
R(t) := 1/kappa(t):  M(t) := c(t) + R(t)*N(t):   # the radius and center of the osculating circle

The algorithm to plot "n" frames containing the curve, its various vectors, the osculating circle, and the curvature function:

t[-1] := 0:  Deltat[-1] := 0:   # initial values
for i from 0 to n do            # creates the i-th frame
  t[i] := evalf(t[i-1] + Deltat[i-1]):   # increases the parameter "t" by the step "Deltat[i-1]"
  RedCurve[i] := plot([x(t), y(t), t = 0..t[i]], thickness = 5):
  Tangentvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(T(t), t = t[i])), .1, .2, .1, colour = green):
  Normalvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(N(t), t = t[i])), .1, .2, .1, colour = yellow):
  Curvature[i] := evalf(eval(kappa(t), t = t[i])):
  Curvaturefunction[i] := display(plot(kappa(t), t = 0..t[i], color = blue, thickness = 2)):
  Accelerationvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(K(t), t = t[i])), .03, .1, .1, colour = blue):
  Osculatingcircle[i] := circle(convert(evalf(eval(M(t), t = t[i])), list), evalf(eval(R(t), t = t[i])), colour = black, thickness = 0):
  CurveWithT[i] := display(RedCurve[i], Tangentvector[i], title = cat(Name, " with tangent vector"), titlefont = [TIMES, BOLD, 14]):
  CurveWithTandN[i] := display(RedCurve[i], Tangentvector[i], Normalvector[i], title = cat(Name, " with tangent vector (green) and normal vector (yellow)"), titlefont = [TIMES, BOLD, 14]):
  CurveWithTNandK[i] := display(RedCurve[i], Tangentvector[i], Accelerationvector[i], Normalvector[i], title = cat(Name, " with tangent vector (green), normal vector (yellow), and \"acceleration vector\" (blue)"), titlefont = [TIMES, BOLD, 14], view = [-2..2, -2..2], scaling = constrained):
  CurveWithTNKandCircle[i] := display(RedCurve[i], Tangentvector[i], Accelerationvector[i], Normalvector[i], Osculatingcircle[i], title = cat(Name, " with tangent vector (green), normal vector (yellow), \"acceleration vector\" (blue), and osculating circle"), titlefont = [TIMES, BOLD, 14], view = [-2..2, -2..2], scaling = constrained):
  Deltat[i] := evalf(eval(Length/(n*v(t)), t = t[i])):   # the increment of "t" such that the arc length approximately increases by "Length/n"
end do:

Now all "n" frames are created.

BlackCurve := plot([x(t), y(t), t = 0..t[n]], thickness = 1, color = black):   # the black curve shown in all frames

The frames are put together to movies:

display(seq(RedCurve[i], i = 0..n), insequence = true, labels = [x, y], scaling = constrained, title = Name, titlefont = [TIMES, BOLD, 14]);   # the curve evolving in time
display(seq(display(BlackCurve, CurveWithT[i]), i = 0..n), insequence = true, labels = [x, y], scaling = constrained);   # the curve with tangent unit vector
display(seq(display(BlackCurve, CurveWithTandN[i]), i = 0..n), insequence = true, labels = [x, y], scaling = constrained);   # with tangent and normal unit vectors
display(seq(display(BlackCurve, CurveWithTNandK[i]), i = 0..n), insequence = true, labels = [x, y], scaling = constrained);   # with tangent, normal and acceleration vectors
display(seq(display(BlackCurve, CurveWithTNKandCircle[i]), i = 0..n), insequence = true, labels = [x, y], scaling = constrained);   # with tangent, normal, acceleration vectors and osculating circle
display(Array(1..2, 1..1, [[display(seq(display(BlackCurve, CurveWithTNandK[i]), i = 0..n), insequence = true)], [display(seq(Curvaturefunction[i], i = 0..n), insequence = true, titlefont = [TIMES, BOLD, 14], labels = [" ", Curvature])]]));   # the curve with its vectors and, in a separate box, the graph of the curvature function

Tool to create animated pictures of space curves together with their
- tangent vector,
- normal vector,
- binormal vector,
- curvature function, and
- torsion function.

Just place the cursor on the red word "restart" and press enter. If you want to create a picture of your own curve you can change the definitions marked with "(can be modified)" below.

restart: with(plots): with(plottools): with(LinearAlgebra):   # place the cursor on the red word "restart" and press enter
Name := "Torus knot":                    # the name of the curve (can be modified to plot a different curve)
x(t) := 1/6*(cos(3*t) + 5)*cos(2*t):     # x-coordinate of the curve (can be modified)
y(t) := 1/6*(cos(3*t) + 5)*sin(2*t):     # y-coordinate of the curve (can be modified)
z(t) := 1/6*sin(3*t):                    # z-coordinate of the curve (can be modified)
Length := 3.5*Pi:                        # the (approximate) total length of the curve (can be modified)
n := 100:                                # the number of frames of the animated motion (can be modified)

The same note as above applies: the curve need not be parametrized by arc length; the parameter "t" is increased in steps chosen so that the arc length grows by approximately "Length/n" per frame.

c(t) := <x(t), y(t), z(t)>:   # the curve as a function of t with values in R^3
dx(t) := diff(x(t), t):  dy(t) := diff(y(t), t):  dz(t) := diff(z(t), t):   # the derivatives with respect to "t"
dc(t) := <dx(t), dy(t), dz(t)>:   # the tangent vector to the curve at time "t"
v(t) := simplify(sqrt(dx(t)^2 + dy(t)^2 + dz(t)^2)):   # the velocity; it equals the length of the tangent vector "dc(t)"
T(t) := simplify(dc(t)/v(t)):     # the tangent unit vector, normalized to length 1
ddx(t) := diff(x(t), [t$2]):  ddy(t) := diff(y(t), [t$2]):  ddz(t) := diff(z(t), [t$2]):
ddc(t) := <ddx(t), ddy(t), ddz(t)>:   # the second derivatives with respect to "t"
dc_times_ddc(t) := dc(t) &x ddc(t):   # the cross product vector
l(t) := sqrt(DotProduct(dc_times_ddc(t), dc_times_ddc(t), conjugate = false)):   # the length of this cross product vector
B(t) := simplify(dc_times_ddc(t)/l(t)):   # the binormal unit vector at time "t"
N(t) := simplify(B(t) &x T(t)):   # the normal unit vector
kappa(t) := simplify(l(t)/v(t)^3):   # the curvature at time "t"
K(t) := simplify(kappa(t)*N(t)):   # the "acceleration vector" (the second derivative of the curve with respect to the arc length)
dddx(t) := diff(x(t), [t$3]):  dddy(t) := diff(y(t), [t$3]):  dddz(t) := diff(z(t), [t$3]):
dddc(t) := <dddx(t), dddy(t), dddz(t)>:   # the third derivatives with respect to "t"
tau(t) := DotProduct(dc_times_ddc(t), dddc(t), conjugate = false)/l(t)^2:   # the torsion at time "t"

The algorithm to plot "n" frames containing the curve, its various vectors, the curvature and the torsion function:

t[-1] := 0:  Deltat[-1] := 0:   # initial values
for i from 0 to n do            # creates the i-th frame
  t[i] := evalf(t[i-1] + Deltat[i-1]):
  RedCurve[i] := spacecurve([x(t), y(t), z(t)], t = 0..t[i], thickness = 5, color = red):
  BlackCurve[i] := spacecurve([x(t), y(t), z(t)], t = 0..Length, thickness = 1, color = black):
  Tangentvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(T(t), t = t[i])), .1, .2, .1, cylindrical_arrow, colour = RGB(1, 165/255, 0)):
  Normalvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(N(t), t = t[i])), .1, .2, .1, cylindrical_arrow, colour = RGB(0, 128/255, 0)):
  Binormalvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(B(t), t = t[i])), .1, .2, .1, cylindrical_arrow, colour = blue):
  Curvature[i] := evalf(eval(kappa(t), t = t[i])):
  Curvaturefunction[i] := display(plot(kappa(t), t = 0..t[i], color = "Green", thickness = 2), labels = [t, Curvature]):
  Torsion[i] := evalf(eval(tau(t), t = t[i])):
  Torsionfunction[i] := display(plot(tau(t), t = 0..t[i], color = blue, thickness = 2), labels = [t, Torsion]):
  Accelerationvector[i] := arrow(evalf(eval(c(t), t = t[i])), evalf(eval(K(t), t = t[i])), .03, .1, .1, colour = blue):
  Curve[i] := display3d(RedCurve[i], title = Name, titlefont = [TIMES, BOLD, 14]):
  CurveWithT[i] := display3d(Tangentvector[i], Curve[i], title = cat(Name, " with tangent vector"), titlefont = [TIMES, BOLD, 14]):
  CurveWithTandN[i] := display3d(Tangentvector[i], Normalvector[i], Curve[i], title = cat(Name, " with tangent vector (brown) and normal vector (green)"), titlefont = [TIMES, BOLD, 14]):
  CurveWithTNandB[i] := display3d(Tangentvector[i], Binormalvector[i], Normalvector[i], Curve[i], title = cat(Name, " with tangent vector (brown), normal vector (green) and binormal vector (blue)"), titlefont = [TIMES, BOLD, 14], scaling = constrained):
  Deltat[i] := evalf(eval(Length/(n*v(t)), t = t[i])):   # the increment of "t" such that the arc length approximately increases by "Length/n"
end do:

Now all frames are created, and they are put together to movies:

display3d(seq(Curve[i], i = 0..n), insequence = true, labels = [x, y, z], scaling = constrained);   # the curve evolving in time
display3d(seq(CurveWithT[i], i = 0..n), insequence = true, labels = [x, y, z], scaling = constrained);   # the curve with tangent unit vector
display3d(seq(CurveWithTandN[i], i = 0..n), insequence = true, labels = [x, y, z], scaling = constrained);   # with tangent and normal unit vectors
display3d(seq(CurveWithTNandB[i], i = 0..n), insequence = true, labels = [x, y, z], scaling = constrained);   # with tangent, normal and binormal vectors
display3d(Array(1..2, 1..1, [[display(seq(display(CurveWithTNandB[i], labels = [x, y, z], scaling = constrained), i = 0..n), insequence = true)], [display(seq(display(Curvaturefunction[i], Torsionfunction[i]), i = 0..n), labelfont = [TIMES, BOLD, 14], labels = [" ", "Curvature\n (green)\n\n Torsion\n (blue)"], insequence = true)]]));   # the curve with its vectors and, in a separate box, the graphs of the curvature and torsion functions

Urs Hartl 2012. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-sa/3.0/deed.en). You are free: to share – to copy, distribute and transmit the work; to remix – to adapt the work. Under the following conditions: attribution – you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work); share alike – if you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.
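In standard notation, the quantities these scripts compute are, for a plane curve $c(t) = (x(t), y(t))$,
$v = \sqrt{\dot x^2 + \dot y^2}, \quad T = \frac{\dot c}{v}, \quad N = \frac{(-\dot y,\, \dot x)}{v}, \quad \kappa = \frac{\dot x \ddot y - \ddot x \dot y}{v^3}, \quad R = \frac{1}{\kappa}, \quad M = c + R N,$
and for a space curve $c(t) = (x(t), y(t), z(t))$,
$\kappa = \frac{|\dot c \times \ddot c|}{v^3}, \quad B = \frac{\dot c \times \ddot c}{|\dot c \times \ddot c|}, \quad N = B \times T, \quad \tau = \frac{(\dot c \times \ddot c) \cdot \dddot c}{|\dot c \times \ddot c|^2}.$
(This summary is added for readability; the symbols match the code above.)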
[aprssig] Position Ambiguity in APRS!
Andrew Rich vk4tec at people.net.au Tue Jan 8 21:15:00 UTC 2008
Nothing is going to change, is it?
Andrew Rich VK4TEC
vk4tec at people.net.au <mailto:vk4tec at people.net.au>
-----Original Message-----
From: aprssig-bounces at lists.tapr.org
[mailto:aprssig-bounces at lists.tapr.org]On Behalf Of Steve Noskowicz
Sent: Wednesday, 9 January 2008 3:40 AM
To: bruninga at usna.edu; TAPR APRS Mailing List
Subject: RE: [aprssig] Position Ambiguity in APRS!
RE: Steve's response...
Well Steve, I am an Engineer and understand your response (paraphrasing - "would be nice..."), but it didn't answer my question. Perhaps I should rephrase.
RE: Bob's response...
I understand your response as well (paraphrasing - "Precision vs. accuracy & The system is designed to..."), but I'll choose my words more carefully.
I have a D700. What does *it* do to the GPS Lat/Lon data when Pos-ambig is turned on? ...While completely ignoring what any application may do in its display of the transmitted data. To give examples of the type of answer I am seeking, I can think of these three possibilities:
1- Truncate some posit digits (coming from the GPS) before (converting to base 94 or whatever and) transmitting the data. Thus placing you at the (lower right - in the US) corner of the resulting spherical polygon.
2- Round the posit data to a more significant digit. Thus placing you at the corner of one of the adjacent spherical polygons.
3- Otherwise manipulate the posit data to add ambiguity, perhaps in a manner already described by Bob... which I'd have to think more about the implications of, but am not after that much detail.
Seems to my engineering and math sense that even any #3 results in something similar to #2 regardless of the encoding algorithm, because a limited precision (limited digits) in the tx data must "represent" the corner of one polygon or another. [[I understand the concept that sending the missing digits as spaces is much different in implication than omitting them, but the data without those digits still "represents" the polygon corner. You can't get around that.]]
This is what I am trying to understand at a basic level without the math, or with some understanding of the encoding algo.
If it takes a long explanation, you may save bits and your otherwise valuable time. I was hoping it was a simple thing and don't need to satisfy my curiosity that much. You and I both have spent way more typing time than I wanted to on this.
73, Steve
--- Robert Bruninga <bruninga at usna.edu> wrote:
> Welcome to the 15 year debate!
> >> Is, or is not, position ambiguity at the
> >> transmit end simply the
> >> truncation of lat/lon digits?
> It is not. It is NOT truncation. It is simply the transmission
> of the AVAILABLE DIGITS to the degree of precision desired or
> known. If you have only degrees, you transmit only degrees,
> which give a position to the nearest degree (60 miles). If you
> have only degrees and minutes, you transmit only degrees and
> minutes which is the position to the nearest minute (1 mile).
> If you have a position known only to the nearest tenth of a
> minute, then you only transmit the degrees, minutes and the
> tenth of a minute that you have (position known to the nearest
> tenth of a mile). This is not truncation. It is transmitting
> what you know, and NOT implying additional digits of precision
> that you do not know.
> That is all that APRS position ambiguity means. You transmit
> the position only with the number of digits that match the
> precision that you have. And you do NOT add precise decimal
> digits beyond your knowledge. The position field in APRS is a
> CHARACTER STRING that happens to have room for digits of
> precision down to hundredths of a minute. It is NOT a numeric
> field which many programmers incorrectly implemented.
> > You have to understand the way Bob's mind works.
> Yes, it is simple. If the position is known to be 38 degrees 58
> minutes, then you transmit ONLY "3858. N" It is absolutely
> WRONG to send "3858.00N". Any middle school science teacher can
> tell us that.
> > This tops my most-hated of Bob's excursions.
> > The engineering way to handle uncertainty with
> > position is as has been suggested, a precise
> > position representing the best guess, and an
> > altitude-like extension representing the
> > approximate radius of uncertainty.
> Absolutely wrong. That precise estimate implies a PRECISION
> that does not exist. Such simplifications by APRS clones
> unwilling to properly implement this simplest of concepts
> undermines the integrity of information from sender to receiver.
> If the sender does not know the precise position, then he should
> not under any circumstances send it as a precise position. He
> should transmit only what he knows so that the recepient cleary
> sees the same level of ambiguity.
> > Google does this, Garmin does this, Trimble does
> > this, but Bob? To save a few bytes in the protocol,
> > Bob reused bytes in the lat/lon.
> Not true. I did not reuse "bytes". What I refused to do was to
> put in higher digits of precision when those digits ARE NOT
> KNOWN. To do so would violate every principle of "precison" as
> taught in middle school.
> > His intention was not that this be interpreted as a
> > question mark in the lat/lon, a literal uncertainty
> > interpreted as a polygon, as an engineer would, but
> > rather simply as a magnitude of uncertainty.
> Partly right. Because the uncertainty is not a precise polygon;
> it is a lack of additional precision. It is an uncertainty of
> the number of digits of precision by the sender, and an EXACT
> transfer of that same uncertainty to the recipient. In that sense
> it conveys the "magnitude of uncertainty" from the sender to the
> recipient in an exact format that cannot be misinterpreted.
> > The problem is that this representation does
> > not fit the reality.
> It may not fit with the reality of some APRS implementations
> that took the simplistic approach of truncating digits, but it
> does transfer exactly from the sender to the receiver the
> knowledge of the ambiguity if displayed properly. If one
> doesn't have a digit of precision, then he should NOT stick in a
> ZERO. Stick in a SPACE character, just like he would write it
> on a piece of paper.
> > So, think of ambiguity as representing a circle,
> > taking the center of the polygon described by the
> > lowest and highest values of the missing digits,
> > and with the radius of the magnitude of the missing
> > digits.
> Yes, now we are talking about how to display it. This now is
> why it is so important to do it consistently across all APRS
> clients so that everyone gets the same visualization that the
> sender intended...
> What you describe above is what I intended for display but with
> one additional tweak as implemented in APRSdos. And that is to
> provide a SLIGHT random offset within that area of uncertainty so
> that if multiple APRS positions are reported in that same area,
> that they are not all stacked on top of each other so that only
> the top one appears.
> If they all use the same precise center of the area, then only
> the latest ICON shows on the map and only one CIRCLE of
> ambiguity shows. This can be very misleading to the casual
> viewer of the map. But in APRSdos, if there are 6 such stations
> reporting ambiguity in the same polygon, then each of their
> circles of ambiguity will each show, but slightly offset so that
> they all individually appear and so at a glance, one can see
> that there are 6 stations there.
> It is very simple. This is the definition of APRS ambiguity:
> 1) The APRS position field is a CHARACTER string
> 2) The sender only includes the digits he knows
> 3) On receipt a circle of ambiguity is displayed that represents
> the possible ambiguity due to the lack of precision
> 4) For display purposes, these circles are offset slightly so
> that multiple stations reporting the same ambiguous position do
> not all appear as a single display.
> In addition, the original APRSdos does the following:
> A) The SYMBOL is only shown as long as the size of the symbol
> overlaps the size of the circle. In this case the circle is
> hidden or not displayed. Example, viewing a .1 mile ambiguous
> station on a 100 mile map scale, you see the symbol and all
> looks normal.
> B) As one zooms in, and the circle becomes larger than the
> symbol, then the SYMBOL disappears and only the circle is
> displayed. This avoids the appearance of the symbol as an
> "exact location inside a circle". It is not. At this point,
> the circle is the best representation for that station, the
> symbol is not.
> C) Originally, APRSdos simply let the SIZE of the symbol expand
> so that it always covered the area of ambiguity as the map was
> zoomed. But this can clutter the map. My favorite example, is
> when I arrive in a city airport and I enter the estimated
> position of the city into my HT with a 10 mile ambiguity just to
> show where I am (without carrying a GPS). I do not want my
> SYMBOL to cover the entire city!
> So that is why I fell back to (B) above as the best way to
> convey the ambiguity to the recipient at high map zooms.
> Bob, WB4APR
73, Steve, K9DCI
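To make the scheme concrete, here is a small illustrative sketch in Python (my own, not from the thread or any real APRS implementation): unknown low-order digits of a "DDMM.mmN" latitude string are transmitted as spaces, never as padded zeros.

def blank_digits(lat: str, ambiguity: int) -> str:
    # Digit slots in "DDMM.mm"; index 4 is the decimal point.
    digit_positions = [0, 1, 2, 3, 5, 6]
    out = list(lat)
    for pos in digit_positions[len(digit_positions) - ambiguity:]:
        out[pos] = " "   # unknown digit -> space, not zero
    return "".join(out)

print(blank_digits("3858.00N", 2))   # -> "3858.  N", known to the nearest minute
print(blank_digits("3858.00N", 3))   # -> "385 .  N", known only to ~10 minutes

On receipt, the blanked digits define a rectangle of possible positions; per Bob's description, APRSdos draws a circle covering that area, with a slight random offset so that several stations reporting the same ambiguous position remain individually visible.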
Complex Analysis - Liouville's theorem & Taylor
Hey there, I really need some guidance on the following questions:
1. Let $f,g$ be entire functions such that there exists a real constant $M$ with $\operatorname{Re}(f(z)) \le M \cdot \operatorname{Re}(g(z))$ for every $z$ in $\mathbb{C}$.
Prove that there exist complex numbers $a,b$ such that $f(z) = a\,g(z) + b$ for every $z$ in $\mathbb{C}$.
2. Let $f(z)$ be analytic on the open unit disc with
$|f'(z)| \le \dfrac{1}{1 - |z|}$ for every $|z| < 1$.
Prove that the coefficients of the Taylor series $f(z) = \sum_n a_n z^n$ satisfy $|a_n| < e$ for every $n \ge 1$.
About 1: if we define $h(z) = f(z)/g(z)$ we'll get a constant function by Liouville's theorem. But I can't understand how to get the "b" in the expression...
About 2: I'm pretty sure we need to use Cauchy's inequality but I can't figure out how...
I will be delighted to get some guidance around here...
Thanks a lot
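A possible route, sketched here as an addition (note that $h = f/g$ need not be entire if $g$ has zeros, so the exponential trick below avoids division):
For 1: set $h(z) = e^{f(z) - M g(z)}$. Then $h$ is entire and $|h(z)| = e^{\operatorname{Re}f(z) - M\operatorname{Re}g(z)} \le 1$, so $h$ is a nonvanishing constant by Liouville's theorem; hence $(f - Mg)' = h'/h = 0$, giving $f = M g + b$, i.e. $a = M$.
For 2: apply Cauchy's inequality to $f'$ on the circle $|z| = r$. Since the $(n-1)$-st Taylor coefficient of $f'$ is $n a_n$, $n |a_n| \le \dfrac{\max_{|z|=r} |f'(z)|}{r^{n-1}} \le \dfrac{1}{(1-r)\, r^{n-1}}$. Choosing $r = 1 - \tfrac{1}{n}$ yields $|a_n| \le \left(1 + \tfrac{1}{n-1}\right)^{n-1} < e$ for $n \ge 2$, and $|a_1| = |f'(0)| \le 1 < e$.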
FOM: Feasibility and (in)determinateness of the standard numbers
Vladimir Sazonov sazonov at logic.botik.ru
Tue Mar 31 16:00:10 EST 1998
Martin Davis wrote:
> The view I expressed on fom is the only problem I have
> with the cumulative hierarchy is its length.
It seems that if this length increases then the "length" of the
"standard model" of arithmetic will increase too. The stronger
are axioms the more we will have examples of concrete Turing
machines which could be (feasibly) proved to halt and therefore
the more we will have corresponding "provably recursive/existing
natural numbers". (Examples of such comparatively small numbers
provably existing in PRA or even in much weaker systems of
arithmetic: 2^1000, 2^{2^1000}, etc.) Therefore the "length"
of the "standard model" seems to be rather indeterminate.
By the way, the following questions arise. Is feasibly provable
ordering m <_fp n between (feasibly) provably existing natural
numbers m and n a *linear* ordering? (More precisely, m <_fp n
means that there exists a proof of feasible length in a version
of arithmetic of the statement m < n.) Can we always provably
decide equality m=n between provably existing numbers? In
particular, is the position of the number 2^1000 fully
determinate among all provably existing numbers. In other
words, what is the "proper value" of the number 2^1000? "How
large" is it? (Compare: How large is the cardinal 2^{aleph_0}?)
Do these questions have any meaning?
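(For scale, an arithmetic aside that is not the poster's words: $2^{1000} = 10^{1000 \log_{10} 2} \approx 1.07 \times 10^{301}$, so writing the number down is feasible, while enumerating the $2^{1000}$ strings of $\{0,1\}^{1000}$ is not.)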
Torkel Franzen wrote:
> Vladimir Sazonov says:
> >Changing (or rejection)
> >of some of these rules may result in a different notion of natural
> >numbers with different understanding and intuition.
> This points up a general problem with the sort of revisionism you
> are arguing for. Your general ideas and aims are quite intelligible on
> the basis of our ordinary understanding of arithmetic. You suggest (in
> your paper) that this "ordinary understanding" is, on closer
> inspection, contrary to basic intuition and experience, and that it
> could be replaced, as stated above. However, you present these views
> using a logical apparatus steeped in the tradition that you describe
> as contrary to basic intuition and experience,
Yes, I really use some traditional proof theoretic finitary
considerations involving infeasible numbers like 2^1000 in the
ordinary way with the goal to find *any* reasonable guarantee
that some simple formal version of feasible arithmetic (FA,
called also FEAS) is *almost* (or feasibly) consistent. If you
see now that FA is indeed feasibly consistent then the goal was
achieved. This is somewhat analogous to using non-finitary
theory ZFC or induction up to epsilon_0 for proving the
consistency of "finitary" theory PA.
On the other hand, there exists *no* model of FA represented in
ZFC universe. (Here an illusion may arise of a contradiction
with Gödel's completeness theorem for predicate calculus. But it
is not the case!) Nevertheless, we have an *informal model* of
feasible numbers which is therefore very different from the
ordinary first order models. This is the place where the basic
intuition and experience does not work or is insufficient and
should be somewhat corrected or extended. Also note that the
formalization of FA has rather unusual features. Say,
implication is in general non-transitive, and even the feasible
number 1000 behaves as a kind of infinity. (Moreover, there is
yet another, even more strange peculiarity of FA).
> The view that the traditional conception can
> or should be dispensed with, on the other hand, is a kind of
> revisionary enthusiasm difficult to substantiate.
My "enthusiasm" has rather different character. Nothing that
was achieved by a heavy work "should be dispensed with".
However, what about the ordinary intention of mathematicians to
embed "everything" in a unique mathematical "universe" like that
for ZFC? I only say that feasibility concept taken
"foundationally" is not embeddable straightforwardly in the
ordinary mathematics as we know it. This seems to touch on some
illusions we have. If you, Torkel, can fully realize this
concept without changing anything in your views, I am very glad.
However, I have some doubts on this especially in connection
with the very unusual intended "model" for FA.
> >What makes the powerset 2^N of natural numbers (i.e. the set of
> >infinite binary strings) to be indeterminate *in contrast to*
> >the powerset 2^1000={0,1}^1000 of {1,2,...1000} which should be
> >determinate (according to the traditional view and *contrary* to
> >my intuition)?
> Answers to this question tend to boil down, one way or another, to
> the statement that 2^N is not generated by a rule.
Which rule then generates 2^1000 (as a set of binary strings)?
Of course, enumerating all these strings in the lexicographical order is irrelevant as an infeasible and highly non-realistic rule.
Vladimir Sazonov
Program Systems Institute, | Tel. +7-08535-98945 (Inst.),
Russian Acad. of Sci. | Fax. +7-08535-20566
Pereslavl-Zalessky, | e-mail: sazonov at logic.botik.ru
152140, RUSSIA | http://www.botik.ru/~logic/SAZONOV/
Milankovich vs the Ice Ages
guest post by Blake Pollard
Hi! My name is Blake S. Pollard. I am a physics graduate student working under Professor Baez at the University of California, Riverside. I studied Applied Physics as an undergraduate at Columbia
University. As an undergraduate my research was more on the environmental side; working as a researcher at the Water Center, a part of the Earth Institute at Columbia University, I developed methods
using time-series satellite data to keep track of irrigated agriculture over northwestern India for the past decade.
I am passionate about physics, but have the desire to apply my skills in more terrestrial settings. That is why I decided to come to UC Riverside and work with Professor Baez on some potentially more
practical cross-disciplinary problems. Before starting work on my PhD I spent a year surfing in Hawaii, where I also worked in experimental particle physics at the University of Hawaii at Manoa. My
current interests (besides passing my classes) lie in exploring potential applications of the analogy between information and entropy, as well as in understanding parallels between statistical,
stochastic, and quantum mechanics.
Glacial cycles are one essential feature of Earth's climate dynamics over timescales on the order of hundreds of kiloyears (kyr). It is often accepted as common knowledge that these glacial cycles are in some way forced by variations in the Earth's orbit. In particular many have argued that the approximate 100 kyr period of glacial cycles corresponds to variations in the Earth's eccentricity. As we saw in Professor Baez's earlier posts, while the variation of eccentricity does affect the total insolation arriving at Earth, this variation is small. Thus many have proposed the existence of a
nonlinear mechanism by which such small variations become amplified enough to drive the glacial cycles. Others have proposed that eccentricity is not primarily responsible for the 100 kyr period of
the glacial cycles.
Here is a brief summary of some time series analysis I performed in order to better understand the relationship between the Earth’s Ice Ages and the Milankovich cycles.
I used publicly available data on the Earth’s orbital parameters computed by André Berger (see below for all references). This data includes an estimate of the insolation derived from these
parameters, which is plotted below against the Earth’s temperature, as estimated using deuterium concentrations in an ice core from a site in the Antarctic called EPICA Dome C:
As you can see, it’s a complicated mess, even when you click to enlarge it! However, I’m going to focus on the orbital parameters themselves, which behave more simply. Below you can see graphs of
three important parameters:
• obliquity (tilt of the Earth’s axis),
• precession (direction the tilted axis is pointing),
• eccentricity (how much the Earth’s orbit deviates from being circular).
You can click on any of the graphs here to enlarge them:
Richard Muller and Gordon MacDonald have argued that another astronomical parameter is important: the angle between the plane Earth’s orbit and the ‘invariant plane’ of the solar system. This
invariant plane of the solar system depends on the angular momenta of the planets, but roughly coincides with the plane of Jupiter’s orbit, from what I understand. Here is a plot of the orbital plane
inclination for the past 800 kyr:
One can see from these plots, or from some spectral analysis, that the main periodicities of the orbital parameters are:
• Obliquity ~ 42 kyr
• Precession ~ 21 kyr
• Eccentricity ~100 kyr
• Orbital plane ~ 100 kyr
Of course the curves clearly are not simple sine waves with those frequencies. Fourier transforms give information regarding the relative power of different frequencies occurring in a time series,
but there is no information left regarding the time dependence of these frequencies as the time dependence is integrated out in the Fourier transform.
The Gabor transform is a generalization of the Fourier transform, sometimes referred to as the ‘windowed’ Fourier transform. For the Fourier transform:
$\displaystyle{ F(\omega) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t) e^{-i\omega t} \, dt}$
one may think of $e^{-i\omega t}$, the 'kernel function', as the guy acting as your basis element in both spaces. For the Gabor transform, instead of $e^{-i\omega t}$ one defines a family of functions,
$g_{(b,\omega)}(t) = e^{i\omega(t-b)}g(t-b)$
where $g \in L^{2}(\mathbb{R})$ is called the window function. Typical windows are square windows and triangular (Bartlett) windows, but the most common is the Gaussian:
$\displaystyle{ g(t)= e^{-kt^2} }$
which is used in the analysis below. The Gabor transform of a function $f(t)$ is then given by
$\displaystyle{ G_{f}(b,\omega) = \int_{-\infty}^\infty f(t) \overline{g(t-b)} e^{-i\omega(t-b)} \, dt }$
Note the output of a Gabor transform, like the Fourier transform, is a complex function. The modulus of this function indicates the strength of a particular frequency in the signal, while the phase
carries information about the… well, phase.
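For readers who want to poke at this without Rwave, here is a minimal direct-evaluation sketch in Python with NumPy (my own; the window constant k and the grids are illustrative choices, not values from the post):

import numpy as np

def gabor_modulus(f, t, freqs, centers, k=1e-3):
    # |G_f(b, w)| for samples f over times t, with Gaussian window g(t) = exp(-k t^2).
    dt = t[1] - t[0]
    G = np.empty((len(freqs), len(centers)), dtype=complex)
    for i, w in enumerate(freqs):
        for j, b in enumerate(centers):
            g = np.exp(-k * (t - b) ** 2)                    # window centered at b
            G[i, j] = np.sum(f * g * np.exp(-1j * w * (t - b))) * dt
    return np.abs(G)

t = np.arange(0.0, 800.0)                                    # 800 samples, 1 "kyr" apart
f = np.sin(2 * np.pi * t / 100.0)                            # the period-100 test signal
freqs = 2 * np.pi * np.linspace(0.001, 0.05, 50)             # angular frequencies to probe
mod = gabor_modulus(f, t, freqs, centers=t[::10])
print(mod.shape)   # (50, 80); the ridge of the modulus sits near .01 cycles per time unit

Running the same function on a sine wave whose frequency increases linearly reproduces the rising ridge shown below.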
For example the modulus of the Gabor transform of
$\displaystyle{ f(t)=\sin(\dfrac{2\pi t}{100}) }$
is shown below. For these I used the package Rwave, originally written in S by Rene Carmona and Bruno Torresani; R port by Brandon Whitcher.
You can see that the line centered at a frequency of .01 corresponds to the function’s period of 100 time units.
A Fourier transform would do okay for such a function, but consider now a sine wave whose frequency increases linearly. As you can see below, the Gabor transform of such a function shows the linear
increase of frequency with time:
The window parameter in both of the above Gabor transforms is 100 time units. Adjusting this parameter effects the vertical blurriness of the Gabor transform. For example here is the same plot as a
above, but with window parameters of 300, 200, 100, and 50 time units:
You can see as you make the window smaller the line gets sharper, but only to a point. When the window becomes approximately smaller than a given period of the signal the line starts to blur again.
This makes sense, because you can’t know the frequency of a signal precisely at a precise moment in time… just like you can’t precisely know both the momentum and position of a particle in quantum
mechanics! The math is related, in fact.
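The analogy can be made quantitative (a standard fact, added for reference): for any window, the temporal width $\Delta t$ and frequency width $\Delta \omega$ of the transform obey
$\Delta t \, \Delta \omega \ge \tfrac{1}{2},$
with equality exactly for the Gaussian window $g(t) = e^{-kt^2}$ used here; it is the same inequality that underlies the position-momentum uncertainty relation in quantum mechanics.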
Now let’s look at the Earth’s temperature over the past 800 kyr, estimated from the EPICA ice core deuterium concentrations:
When you look at this, first you notice spikes occurring about every 100 kyr. You can also see that the last 5 of these spikes appear to be bigger and more dramatic than the ones occurring before 500
kyr ago. Roughly speaking, each of these spikes corresponds to rapid warming of the Earth, after which occurs slightly less rapid cooling, and then a slow decrease in temperature until the next spike
occurs. These are the Earth’s glacial cycles.
At the bottom of the curve, where the temperature is about 4 °C cooler than the mean of this curve, glaciers are forming and extending down across the northern hemisphere. The relatively warm periods at the top of the spikes, about 10 °C hotter than the glacial periods, are called the interglacials. You can see that we are currently in the middle of an interglacial, so the Earth is relatively warm compared to the rest of the glacial cycles.
Now we’ll take a look at the windowed Fourier transform, or the Gabor transform, of this data. The window size for these plots is 300 kyr.
Zooming in a bit, one can see a few interesting features in this plot:
We see one line at a frequency of about .024 which, with a sampling rate of 1 kyr, corresponds to a period of about 42 kyr, close to the period of obliquity. We also see a few things going on around a frequency of .01, corresponding to a 100 kyr period.
The band at .024 appears to be relatively horizontal, indicating an approximately constant frequency. Around the 100 kyr periods there is more going on. At a slightly higher frequency, about .015,
there appears to be a band of slowly increasing frequency. Also, around .01 it’s hard to say what is really going on. It is possible that we see a combination of two frequency elements, one
increasing, one decreasing, but almost symmetric. This may just be an artifact of the Gabor transform or the window and frequency parameters.
The window size for the plots below is slightly smaller, about 250 kyr. If we put the temperature and obliquity Gabor Transforms side by side, we see this:
It’s clear the lines at .024 line up pretty well.
Doing the same with eccentricity:
Eccentricity does not line up well with temperature in this exercise though both have bright bands above and below .01 .
Now for temperature and orbital inclination:
One sees that the frequencies line up better for this than for eccentricity, but one has to keep in mind that there is a nonlinear transformation performed on the ‘raw’ orbital plane data to project
this down into the ‘invariant plane’ of the solar system. While this is physically motivated, it surely nudges the spectrum.
The temperature data clearly has a component with a period of approximately 42 kyr, matching well with obliquity. If you tilt your head a bit you can also see an indication of a fainter response at a frequency a bit above .04, corresponding roughly to a period just below 25 kyr, close to that of precession.
As far as the 100 kyr period goes, which is the periodicity of the glacial cycles, this analysis confirms much of what is known, namely that we can't say for sure. Eccentricity seems to line up well with a periodicity of approximately 100 kyr, but on closer inspection there seem to be some discrepancies if you try to understand the glacial cycles as being forced by variations in eccentricity. The orbital plane inclination's Gabor transform modulus matches the temperature's more closely than eccentricity's does.
A good next step would be to look at the relative phases of the orbital parameters versus the temperature, but that's all for now.
If you have any questions or comments or suggestions, please let me know!
The orbital data used above is due to André Berger et al and can be obtained here:
• Orbital variations and insolation database, NOAA/NCDC/WDC Paleoclimatology.
The temperature proxy is due to J. Jouzel et al, and it’s based on changes in deuterium concentrations from the EPICA Antarctic ice core dating back over 800 kyr. This data can be found here:
• EPICA Dome C – 800 kyr deuterium data and temperature estimates, NOAA Paleoclimatology.
Here are the papers by Muller and Macdonald that I mentioned:
• Richard Muller and Gordon MacDonald, Glacial cycles and astronomical forcing, Science 277 (1997), 215–218.
• Richard Muller and Gordon MacDonald, Spectrum of 100-kyr glacial cycle: orbital inclination, not eccentricity, PNAS 94 (1997), 8329–8334.
They also have a book:
• Richard Muller and Gordon MacDonald, Ice Ages and Astronomical Causes, Springer, Berlin, 2002.
You can also get files of the data I used here:
• Berger et al orbital parameter data, with explanatory text here.
• Jouzel et al EPICA Dome C temperature data, with explanatory text here.
35 Responses to Milankovich vs the Ice Ages
1. Interesting article and nice graphs.
One other thing I would love to see is how changing the window size affects the temperature transform. For example it would be great if you could make an animation showing the temperature transform graph while varying the window size from, say, 800 kyr to 20 kyr or so.
Another thing that I would like to know more about is the reliability of the starting temperature record; for one, why are there no error bars? How many ice cores is it based on? How much do reconstructions from different sites differ?
□ The temperature data Blake was using comes from a single ice core, called EPICA Dome C. The actual measured data is not temperature but the concentration of deuterium in the ice. The scientists claim their work gives a
High-resolution (55cm.) deuterium (dDice) profile from the EPICA Dome C Ice Core, Antarctica (75º 06′ S, 123º 21′ E), with an optimal accuracy of ± 0.5 ‰ (1 sigma), from the surface down
to 3259.7 m.
If you click the link you’ll see the deuterium concentration varies between 370 and 450 parts per thousand. And they seem to be measuring it to an accuracy of roughly 0.5 parts per
thousand—that’s what ‰ means.
However, I don’t know what the term ‘optimal accuracy’ means! Accuracy at most this? That would not be very helpful.
Luckily, there are plenty of ice cores, so we can compare the deuterium concentration in one to the deuterium concentration of others. Someone must already be doing this, so there must be
papers to read about this.
The much more tricky question is how accurately we can estimate the temperature from the deuterium concentration. I guess the only way we can check these estimates, except for the recent
past, is by 1) thinking hard about the physical processes involved and 2) comparing these estimates to estimates done using other methods. So, this is probably one of those stories where
scientists slowly bootstrap their way to the truth by a long and complicated line of work. There must be tons of papers about this too, but I bet it takes quite a while to reach the level of
expertise needed to understand them, much less assess them.
2. I tried to win an Innocentive challenge similar to your study some time ago, and I tried a different method.
Differential equations are the optimal approximator: the solutions of differential equations are the general mathematical series, so why use the series (Fourier, Taylor or Laplace) instead of the differential equation itself (from the point of view of mathematical physics)?
The differential equation is complex to obtain for discrete data, so I tried a Non-Linear Prediction Code that mixed (in your case) the inputs (obliquity, precession and eccentricity) and the output.
When you obtain the right discrete differential equation, then you can try to write the continuous differential equation: in this case you must approximate a continuous differential equation, with free parameters, by the discrete differential equation; then the optimal solution can be written in the continuous equation (you can use the optimal parameters obtained by minimizing with L-BFGS in the original differential equation).
□ Hi Domenico,
Sounds interesting, but I’m not sure I understand entirely what you tried to do. Did you try to find a discrete non-linear differential equation for insolation with obliquity, precession, and
eccentricity as the independent variables? How did that work? I was under the impression that the insolation is calculated from those parameters, using a particular formula, so maybe you
meant temperature as the output?
Thanks for your comment.
☆ I tried to do solar forecasting for an Innocentive challenge, but it is not simple to download NASA satellite data (there is no standard, and it is not perfectly open data, and this is a pity).
I tried to obtain differential equations from artificial data (data obtained from a real function computed by the program): my idea is that I can obtain the differential equations from real functions.
A differential equation is a surface in the derivative space, so I think that chaotic data cover the differential surface better; the solution is not unique: the derivative of a differential equation is a solution, and multiplication by an arbitrary function is a solution; the solution of an arbitrary differential equation is unique.
I try to solve the equation
$i_n = \sum_j O_{j}\, o_{n-j} + \sum_j P_{j}\, p_{n-j} + \sum_j E_{j}\, e_{n-j} + \sum_{s,j} OP_{sj}\, o_{n-s}\, p_{n-j} + \cdots$
I can send you my GNU Public License (free and public) C program and challenge solution, which I wrote some time ago (if you are interested).
○ I think that the right N-linear prediction code must use temperature as the output.
The influence of insolation on temperature is complex (changes in the Earth's meteorology), and the solar cycle is complex.
I think that a right equation can incorporate all the effects (for example using the correlation from obliquity, precession, eccentricity and orbital plane to obtain the solar cycle).
I think it may be possible to use the derivatives of the orbital parameters, so that the real differential equation can be obtained. It is simpler to optimize:
$0=F(T,\dot T, \ddot T, ...)$
The constant parameters of the differential equation implicitly contain all the variables of the system: this is a universal approximation (it contains all the possible series).
The solution of the differential equation can give the temperature over the years, and future extrapolation may be possible (I have never done it); the parameters of the equation are fewer than in an N-linear prediction code, but it is complex to obtain derivatives of real discrete data. If the temperature data are separated into a training set, then it is possible to verify the model on a test set.
○ I checked the program, and there was an error (excuse me: I had changed the order of a nested for(…) loop used to calculate the derivative of order j).
The differential equation for insolation is:
$0=9.967135\times 10^{-1}-6.790669\times 10^{-3} T+6.668373\times 10^{-3} \dot T-8.041466\times 10^{-2} \ddot T+1.541889\times 10^{-5} T^2-3.028208\times 10^{-5} T \dot T+3.652445\times 10^{-4} T \ddot T+1.548389\times 10^{-5} \dot T^2-3.608935\times 10^{-4} \dot T \ddot T+2.188199\times 10^{-3} \ddot T^2-1.166788\times 10^{-8} T^3+3.437282\times 10^{-8} T^2 \dot T-4.146608\times 10^{-7} T^2 \ddot T-3.513633\times 10^{-8} T \dot T^2+8.194270\times 10^{-7} T \dot T \ddot T-4.969360\times 10^{-6} T \ddot T^2+1.173496\times 10^{-8} \dot T^3-4.195800\times 10^{-7} \dot T^2 \ddot T+4.934515\times 10^{-6} \dot T \ddot T^2-2.005966\times 10^{-5} \ddot T^3$
The differential equation for temperature is:
$0=-5.855272\times 10^{-9}-3.439536\times 10^{-9} T-5.427249\times 10^{-6} \dot T-5.877310\times 10^{-5} \ddot T-7.013261\times 10^{-10} T^2-1.332483\times 10^{-6} T \dot T-1.456132\times 10^{-5} T \ddot T-3.985140\times 10^{-5} \dot T^2-2.451893\times 10^{-4} \dot T \ddot T+1.575887\times 10^{-3} \ddot T^2-4.339529\times 10^{-11} T^3-8.919583\times 10^{-8} T^2 \dot T-9.166997\times 10^{-7} T^2 \ddot T+6.526599\times 10^{-6} T \dot T^2+5.729854\times 10^{-4} T \dot T \ddot T+4.181654\times 10^{-3} T \ddot T^2+1.906309\times 10^{-3} \dot T^3+5.224223\times 10^{-2} \dot T^2 \ddot T+4.031408\times 10^{-1} \dot T \ddot T^2+9.136325\times 10^{-1} \ddot T^3$
Only the use of L-BFGS permits minimizing the error in a short time.
Sometimes minimization of the error function is not even necessary: one can visualize the trajectory in the derivative space $(y,\dot y, \ddot y)$; if the trajectory covers a surface (for example a plane or a cylinder), then that surface is the differential equation.
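A minimal sketch of this fitting recipe (my addition, not Domenico's GPL C program; SciPy's L-BFGS-B stands in for his minimizer, and the monomial basis of degree ≤ 3 gives exactly the 20 terms seen in the cubic equations above):

    import numpy as np
    from itertools import combinations_with_replacement
    from scipy.optimize import minimize

    def fit_surface(T, Td, Tdd, degree=3):
        # Columns: every monomial of total degree <= `degree` in (T, Td, Tdd);
        # the constant slot 1 in the product generates the lower degrees too.
        slots = [np.ones_like(T), T, Td, Tdd]
        X = np.stack([np.prod([slots[i] for i in combo], axis=0)
                      for combo in combinations_with_replacement(range(4), degree)],
                     axis=1)

        def err(w):
            # Scale-invariant residual, so w cannot shrink to zero to cheat.
            return np.mean((X @ w) ** 2) / np.sum(np.abs(w)) ** 2

        res = minimize(err, np.ones(X.shape[1]), method="L-BFGS-B")
        return res.x / np.abs(res.x).max()   # normalized coefficients of 0 = F(...)

Feeding in a sampled trajectory and its numerical derivatives returns one candidate coefficient vector; as noted above, the equation recovered this way is only determined up to rescaling.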
☆ In these days I tried to solve the temperature dynamics with a differential equation, and I obtained strange results: I need some days to check the program and the results.
I searched for the differential equation using only the temperature. I start (I always stay simple, slow and robust, to start) with the derivatives and the differential equation
$0=\sum_{i_1\geq i_2\geq \cdots \geq i_n} T^{(i_1)} \cdots T^{(i_n)}$
$T^{(0)}=1,\ T^{(1)}=T,\ T^{(2)}=\dot T,\ \dots$
If I use this definition, then I can change the order and degree of the differential equation from the command line.
I see that the acceleration of the temperature changes with time: it is lower in older times and higher in the current time (a human cause? Atmospheric change from microbiological …?)
The differential equation is (it must be verified):
$0=1.878403\times 10^{-10}-2.763633\times 10^{-12} T-1.808749\times 10^{-7} \dot T-3.276873\times 10^{-6} \ddot T-1.645263\times 10^{-11} T^2-9.999851\times 10^{-8} T \dot T-3.550997\times 10^{-6} T \ddot T-2.581885\times 10^{-7} \dot T^2-2.328727\times 10^{-5} \dot T \ddot T-1.105296\times 10^{-4} \ddot T^2-1.998163\times 10^{-12} T^3-1.099685\times 10^{-8} T^2 \dot T-3.609520\times 10^{-7} T^2 \ddot T+3.915039\times 10^{-8} T \dot T^2-2.930157\times 10^{-6} T \dot T \ddot T-2.179395\times 10^{-5} T \ddot T^2+4.196426\times 10^{-4} \dot T^3+1.718297\times 10^{-2} \dot T^2 \ddot T+2.270826\times 10^{-1} \dot T \ddot T^2+9.737238\times 10^{-1} \ddot T^3$
This is only a joke, because the temperature variation is not continuous (so it is not correct to use derivatives), but this differential equation can be viewed as a discrete equation (with variables $t_n, T_n$).
I am sure that a non-linear discrete equation can be written, but I don't know whether a solution in time is possible, and the data do not have the form $T_0+n\Delta.$
The discrete solution can be obtained using a $T(t)$ series function, with some free parameters, and minimizing the $F^2(T)$ equation, with the solution evaluated at the discrete times.
I think an interpolation of the temperature, at equal intervals, may be possible using the sparse data.
○ I tried some program changes.
I improved the calculation of the derivatives:
the error is of the order of $T^{(i+1)}_n$.
I modified the error function of the differential equation ($D$ is the degree):
$E = \sum_n \left|\frac{\sum_{j_1\geq j_2\geq \cdots \geq j_D} w_{j_1 \cdots j_D}\, T^{(j_1)}_n \cdots T^{(j_D)}_n}{\sum_{j_1\geq j_2\geq \cdots \geq j_D} |w_{j_1 \cdots j_D}|}\right|^{1/D}$
With this definition the error of $F(T,\dot T,\ddot T)$ is equal to the error of $F(T,\dot T,\ddot T)^k$; so the strategy is to try all the differential equations with growing order and degree, until I obtain a drastic reduction of the error (this gives the likely solution, not a certain one).
With this program I got these two differential equations (5 minutes for each run, with degree and order less than 4):
$0=6.707205\times 10^{-1}-1.522946\times 10^{-3} I-3.063433\times 10^{-4} \dot I-4.323044\times 10^{-2} \ddot I-3.597673\times 10^{-3} \dddot I-2.806221\times 10^{-1} \ddddot I$
$I(t\ \mathrm{kyr})=440.41+A\, e^{-0.00436839 t} \cos(0.233573 t)+B\, e^{-0.00204178 t} \cos(0.315336 t)+C\, e^{-0.00436839 t} \sin(0.233573 t)+D\, e^{-0.00204178 t} \sin(0.315336 t)$
$0=-3.669786\times 10^{-9}-6.951456\times 10^{-10} T-1.787467\times 10^{-6} \dot T-2.169652\times 10^{-3} \ddot T-1.633270\times 10^{-3} \dddot T-9.961953\times 10^{-1} \ddddot T$
$T(t\ \mathrm{yr})=-5.27916+E\, e^{-0.000412053 t} \cos(0.00038827 t)+F\, e^{-0.000407701 t} \cos(0.046656 t)+G\, e^{-0.000412053 t} \sin(0.00038827 t)+H\, e^{-0.000407701 t} \sin(0.046656 t)$
The insolation data are correct (the data function is continuous), while the graph of the temperature is not continuous.
I share everything (as always), even these partial results; they may be interesting to others (who can do better).
○ I increased the precision of the derivative (with iteration for higher orders):
$\dot f(t) = \frac{H^2\ f(t+h)+(h^2-H^2)\ f(t)-h^2\ f(t-H)}{H\ h\ (H+h)} + O(\dddot f(t))$
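In code, this three-point formula for unevenly spaced samples looks as follows (a sketch I'm adding, not from the original program; for equal spacings H = h it reduces to the usual central difference):

    import numpy as np

    def derivative_nonuniform(t, f):
        # First derivative of samples f on an unevenly spaced grid t,
        # using the three-point formula above with H = backward spacing,
        # h = forward spacing; one-sided differences at the two ends.
        df = np.empty_like(f)
        H = t[1:-1] - t[:-2]
        h = t[2:] - t[1:-1]
        df[1:-1] = (H**2 * f[2:] + (h**2 - H**2) * f[1:-1]
                    - h**2 * f[:-2]) / (H * h * (H + h))
        df[0] = (f[1] - f[0]) / (t[1] - t[0])
        df[-1] = (f[-1] - f[-2]) / (t[-1] - t[-2])
        return df

Iterating it (the derivative of the derivative) gives the higher-order terms $\ddot T$, $\dddot T$, … used in the equations below.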
With these definitions, and with an optimization of the program speed (more steps in the same time), I obtain (this is not the absolute minimum, but the first minimum: there is a little difference):
$0=6.741495\times 10^{-1}-1.530728\times 10^{-3} I -3.154860\times 10^{-4} \dot I-4.302107\times 10^{-2} \ddot I-3.739970\times 10^{-3} \dddot I-2.772432\times 10^{-1} \ddddot I$
$0=3.648229\times 10^{-9}+6.617805\times 10^{-10} T-1.099853\times 10^{-6} \dot T+2.183413\times 10^{-3} \ddot T-1.337230\times 10^{-3} \dddot T+9.964783\times 10^{-1} \ddddot T$
I verified, with Mathematica, that the solution grows exponentially for negative times (for the insolation).
This happens because the differential equation is linear.
The geometric solution of the differential equation should be limited in derivative and value (the warming and cooling of the Earth cannot be infinite); this can happen if the solution is a closed surface in the derivative space $(I,\dot I, \ddot I, \cdots)$, so that the trajectory is confined to the surface.
If this is true, then there must be three connected surfaces: the normal-temperature surface, the glacial period and global warming.
I think that the transition from normal temperature to glaciation can be studied in old data, to understand the geometry of the transition.
3. Hi Blake,
I'd love to hear how you produced the code! I am a math kid, not really a programmer. But Gabor transforms look really cool. But then I LOVE transforms.
□ Most of the hard work was done by Rene Carmona, Bruno Torresani, and Wen L. Hwang, who wrote the package Rwave (R is a free programming language commonly used in statistics). The package has
implementations of Gabor transforms, continuous and discrete wavelet transforms, all kinds of fun stuff. Their accompanying book is also very helpful, Practical Time-Frequency Analysis by
Carmona, Torresani, and Hwang.
4. Hi Blake. I enjoyed the post. But Gabor transforms are new to me, so I have a couple of questions about what it is you're actually plotting in those pretty Rwave plots. So, I'll tell you my guesses and, if you would, please correct me if they're wrong.
So, using your notation, it looks to me like you have $w$ on the vertical axis (labeled frequency), $b$ on the horizontal axis (labeled time), and $|G_f (w,b)|$ using some false color scale for the magnitudes. So far, so good?
Now, the thing I'm most puzzled by is the thing with time units you're calling the "window". You said your window function in the Gabor transform was a Gaussian $g(t)=e^{-kt^2}$, so I'm guessing the window is the standard deviation $1/\sqrt{2k}$?
Is that right? Thanks in advance. And I’m looking forward to the next post.
□ Hi Dan,
Yep, your guesses are right on. For the Gaussian window,
$\displaystyle{ g(t) = e^{-kt^2} }$
in the implementation I used, the window parameter is $\frac{1}{2k}$, so it's the standard deviation squared, or the variance.
Thanks for your comment.
☆ Thanks. So, does that mean when you say a plot was done with a window parameter of 300 time units, say, you really mean time units squared? Or are you converting to standard deviations
before reporting, so that $1/2k=300^2$?
○ Hi Dan,
I mean 300 time units. Sorry I took so long to reply, I had to dig up the C functions that the R library, Rwave, actually calls to make sure.
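For anyone who wants to reproduce such plots without Rwave, here is a minimal sketch of a Gabor transform under one common convention (my addition, not the library's code; the `var` argument plays the role of the window parameter $1/2k$ just discussed, i.e. the variance of the Gaussian window):

    import numpy as np

    def gabor_magnitude(f, times, freqs, var):
        # |G_f(w, b)| on a uniform time grid, with Gaussian window
        # g(t) = exp(-t**2 / (2 * var)); slow O(N^2) reference version.
        dt = times[1] - times[0]
        G = np.empty((len(freqs), len(times)), dtype=complex)
        for j, b in enumerate(times):
            window = np.exp(-(times - b) ** 2 / (2.0 * var))
            for i, w in enumerate(freqs):
                G[i, j] = np.sum(f * window * np.exp(-1j * w * times)) * dt
        return np.abs(G)

Plotting the returned magnitudes with frequency on the vertical axis and time on the horizontal axis gives the false-color picture described above; the library's compiled routines do the same job far more efficiently.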
5. Hi Blake, nice post!
Do you have a file with the eccentricity, obliquity, precession, inclination, and temperature data? I'd like to try a couple of things myself…
□ That data is all available at the links provided in the References of this blog article, but Blake probably has created a more nicely formatted file with that data… and if he emails it to me,
I’ll put it on my website and give everyone a link here.
☆ Nice, thanks! A matrix where each row contains the five values at a given time would be great, but I'll take any format.
□ The data is now available on my website. You can reach it by looking at the References section at the bottom of Blake's post.
If you have any questions, Giampiero, please ask! And if you succeed in doing any interesting analysis of this data, let us know! You can’t post pictures in comments to this blog, but if you
post a link to a picture I can turn it into a picture. Or, you could even write a blog article here if you want.
☆ Thanks, I got the data and performed some (mostly linear) system identification to see if one could find a linear differential equation that approximates the temperature well when the other four series are given as inputs (forcing terms in the equation).
The very preliminary answer seems to be "not really": the best system somehow reproduces the temperature baseline of the validation data (from obliquity and eccentricity alone) but is not able to really reproduce the peaks:
So it certainly looks like nonlinearity (if not even other input variables) does play an important role in the final temperature history. I might try something more on the nonlinear
identification front in the next few weeks.
Have people been successful in devising a model capable of reproducing past temperature successfully? And how successfully?
☆ That graph of yours is very interesting, Giampiero! The colored curve does match some important features of the observed temperature (as estimated from deuterium concentration), but it
clearly misses the big peaks.
Have people been successful in devising a model capable of reproducing past temperature successfully? And how successfully?
I’m not an expert on these attempts… I should become one!
For now, I recommend that you look at the very simple model of Didier Paillard, discussed in Part 10 of my 'Mathematics of the Environment' course. This is a highly nonlinear qualitative model where the Earth has just 3 states, and it changes state according to certain rules involving the insolation (a quantity computed from the orbital parameters). But if you read his paper—which I will send you—you'll see he has a similar, more detailed quantitative model.
I’m not saying this is the only model or the best one. It’s the one I happen to know.
○ Interesting, thanks. So I have read the whole "math of the environment" series, including Part 10, and then the article that you sent me. Here are a few sparse thoughts:
I would not say that the qualitative model has "3 states" (e.g. as in 3 different dimensions like position and velocity), but just one state and 3 different allowed values (regimes or modes) along that dimension (that is, i, g and G). This is just a language issue, but I thought it's better to state it clearly to avoid misunderstanding as to what a state means to different people.
One thing that would be interesting to add to this post would be the Gabor transform of the insolation, to see how it compares to obliquity, eccentricity and temperature. My guess is that it should contain frequencies of both obliquity and eccentricity and overlap well with temperature.
Having the insolation data, I could rerun the identification to see if I can reproduce the results in the graph from insolation alone, as the paper suggests might be possible. Yes, I have seen the insolation page on Azimuth, but I wish there were a formula that I could directly apply to the data.
I like very much the quantitative 2-state model in Paillard's paper; I wish he had described better (with an actual formula) the switching between i, g, and G modes. However, that model describes the ice volume, not the temperature, so I wonder if one can assume a directly proportional (or probably affine) relationship between ice volume and temperature. If so, it could be straightforward to see what kind of fitting you could do by adjusting that parameter.
One final comment is that it looks as if he used the whole 800 kyr of data to fit the model, and then validated the model using the same exact data, which is not the best of practices. It is true that since the model is very small that shouldn't be a big issue, but nevertheless I am left wondering, especially given the results of Fig. 4, where it looks like he played with radiative forcing as well.
○ Giampiero wrote:
I would not say that the qualitative model has “3 states” (e.g. as in 3 different dimensions like position and velocity), but just one state and 3 different allowed values
(regimes or modes) along that dimension (that is i, g and G).
I should give you a more substantial reply, but I’m in a rush, and trivial issues of language are fun to argue about, so I’ll only do that.
In physics, the state of a system is a complete description of the way it is at some given time. For example, if we have a particle on a line with position 3 and velocity -7, its
state is (3,-7). In traditional physics there’s usually an infinite set of states, since the set of real numbers is infinite. But there are also systems with finitely many states, and
computer science gives many examples, called ‘finite-state machines’.
In physics, if we describe the state of a system by a list of n numbers, we say the system has n degrees of freedom.
So, I was trying to say that Paillard’s model has 3 different states: i, g, and G, together with certain rules for hopping between these states.
This is perhaps oversimplified since these rules involve how long the system has been in the g (mild glacial) state. So, one can argue that the amount of time spent in the mild
glacial state should also be counted as part of the description of that state. Doing this makes sense because he essentially assumes that ice builds up linearly with time in this
state, if the insolation stays low enough. So, the amount of time is a proxy for the amount of ice.
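To make the rule set concrete, here is a toy sketch of such a 3-state machine (my addition, with hypothetical thresholds i0, i1 and duration g_max standing in for the values in Paillard's paper):

    def paillard_step(state, insolation, time_in_g,
                      i0=-0.75, i1=0.0, g_max=33.0):
        # One update of a toy 3-state glacial model: 'i' interglacial,
        # 'g' mild glacial, 'G' full glacial. Thresholds are placeholders.
        if state == 'i' and insolation < i0:
            return 'g', 0.0              # low insolation starts ice growth
        if state == 'g':
            if time_in_g > g_max:
                return 'G', 0.0          # enough time (a proxy for ice) in g
            return 'g', time_in_g + 1.0  # time in g stands in for ice volume
        if state == 'G' and insolation > i1:
            return 'i', 0.0              # high insolation deglaciates
        return state, time_in_g

The extra `time_in_g` variable is exactly the "amount of time spent in the mild glacial state" counted as part of the state in the paragraph above.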
If you have the time and desire to do any more calculations or modelling, please let me know, and I’ll be glad to help out: I like to think and I don’t like to program!
I really liked your ‘best linear prediction’ here:
and I would like to know exactly how it’s defined. I think we could have fun quantitatively measuring the amount of nonlinearity of the Earth’s climate system, and I have some ideas
for how to do that.
○ John wrote:
In physics, the state of a system is a complete description of the way it is at some given time.
Yes, assuming at least that the system is autonomous (no input), I am very comfortable with this definition, and perhaps it's the best way to define the state even when the system has inputs.
Traditionally in control engineering too the cardinality of the set of possible states is infinite (yes, I know, technically aleph … something), and so we informally (and incorrectly) refer to the "number of states" as the number of variables that are needed to define the state of a system. I guess that "degrees of freedom" could be confusing when talking to mechanical engineers, since a mechanical system that is said to have, for example, 2 DOF needs a vector of 4 numbers (2 positions and 2 velocities) to represent its state.
I really liked your ‘best linear prediction’ here and I would like to know exactly how it’s defined.
That was a model identified with an ARMAX structure (see this and this). Very roughly speaking, this is a discrete-time transfer function with some filtered additive noise. The transfer function (TF) has 2 poles, 2 zeros, and another zero that serves as a pure delay (this TF can be converted into a state-space discrete-time model with 1 output, 2 inputs and 2 "degrees of freedom"). The noise filter also has 2 zeros (and the same 2 poles as the TF).
I am sorry if this is incomprehensible, but right now I can't say much more; I'll try to say more later if you are interested.
I think we could have fun quantitatively measuring the amount of nonlinearity of the Earth’s climate system, and I have some ideas for how to do that.
This looks really interesting. I wonder how you would do it; maybe just subtracting the outputs of the linear and nonlinear models and having a look at the energy of the difference.
Perhaps I can try to do something just once in a while. I think, however, that the first step would be to agree on which input to use (just insolation, obliquity, eccentricity, or maybe all of the above?) and which output to measure (temperature or ice volume). Otherwise I don't think we can make an apples-to-apples comparison.
○ By the way the structure of the ARMAX model is here:
where $y$ is the temperature, $u$ is a vector containing obliquity and eccentricity, $e$ is white noise.
○ Giampiero,
the missed peaks in your graph remind me of a Kalman filter I used at work to predict electricity load: it was very bad at predicting the daily peak.
Alas, that job is gone and I haven't gone any deeper into time-series stochastics (some gargantuan Excel wizardry turned out sufficient to significantly improve the prognosis). And before the job I never felt interested – which I regret now. Any book recommendation beyond A.N. Shiryaev's Probability book?
Autoregressive models smell a bit oversimplified to me – but that's probably because I haven't got the knack yet. My intuition (possibly wrong) is: ARMAX is a sort of "small" Kalman filter.
○ Florifulgurator,
in my experience such missing peaks are often one of the telltale signs of unmodeled nonlinear behavior. You could perhaps have tried an Extended or Ensemble KF, but at the end of the day they are only as good as the (nonlinear) model of the system you have. If your model is not that great, they cannot work miracles.
Regarding your last comment, well, not really. A KF is a filter that you attach to a system in order to observe (reconstruct, and filter the noise out of) the state of the system at that particular time. It needs an internal model of the system to work (as well as access to the system's inputs and outputs).
An ARMAX model is just a model of the system, with no particular purpose other than trying to reproduce the system's output given its inputs.
The ARMAX system that I have identified is this (I'll copy and paste more info here below):
amx2221 =
Discrete-time ARMAX model: A(z)y(t) = B(z)u(t) + C(z)e(t)
A(z) = 1 - 1.873 z^-1 + 0.8778 z^-2
B1(z) = 19.87 z^-1 - 19.91 z^-2
B2(z) = 21.26 z^-1 - 21.68 z^-2
C(z) = 1 - 0.6272 z^-1 - 0.2794 z^-2
Name: amx2221
Sample time: 1 seconds
Polynomial orders: na=2 nb=[2 2] nc=2 nk=[1 1]
Number of free coefficients: 8
Use “polydata”, “getpvec”, “getcov” for parameters and their uncertainties.
Estimated using POLYEST on time domain data "0-400-12".
Fit to estimation data: 79% (prediction focus)
FPE: 0.3427, MSE: 0.3291
It is very simple indeed, just a few parameters, but often simple things work better and are anyway more useful than complicated ones. Indeed this outperforms more complicated ones.
Don't try to read too much into these equations: stuff like "A(z)y(t)" is really shorthand notation for a convolution, not a multiplication. And the sample time is not really one second (which is the default) but one kyr (this would matter when converting the model to a continuous-time one).
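Read as a difference equation, the printout above can be simulated directly; here is a sketch (my addition, transcribing the quoted polynomials; u1, u2 and e are the two inputs and the white-noise sequence, sampled at 1 kyr):

    import numpy as np

    def simulate_armax(u1, u2, e):
        # A(z) y = B1(z) u1 + B2(z) u2 + C(z) e, with z^-1 a one-sample delay.
        n = len(u1)
        y = np.zeros(n)
        for t in range(2, n):
            y[t] = (1.873 * y[t-1] - 0.8778 * y[t-2]             # -A tail
                    + 19.87 * u1[t-1] - 19.91 * u1[t-2]          # B1
                    + 21.26 * u2[t-1] - 21.68 * u2[t-2]          # B2
                    + e[t] - 0.6272 * e[t-1] - 0.2794 * e[t-2])  # C
        return y

Setting e to zeros gives the pure input-driven response of the kind compared with the temperature record above.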
6. These Gabor transform graphs are great, and the evidence for obliquity and some combination of inclination and eccentricity as drivers of temperature seems convincing. (If one didn't know Newton's laws, one could conversely conclude that temperature changes were driving changes in the Earth's orbital parameters, which would make human-caused temperature changes much more worrying.)
However, since the variations in the orbital parameters are presumably persistent and not localized in time, why use a time-local transform like the Gabor transform (which was new to me, thanks!) instead of one that samples your entire data set with equal weight? In these plots, I see horizontal lines that vary a little bit in intensity, but no salient features that really appear and disappear over time.
Temperature changes not related to our orbit should suddenly appear and disappear, though. How would the current temperature spike look in a Gabor plot made by scientists millions of years in our future? If the Permian-Triassic extinction event, for example, were caused by the exponential growth in CO2 production by a race of technological squids and the consequent warming, would we be able to see that local event on a Gabor temperature plot extending back 200 million years?
□ Over long enough time scales, the variations in orbital parameters aren't constant. Apart from anything else, the Earth-Moon distance is steadily increasing, and the Earth's spin is slowing. This alone would be enough to change the periods of the variations.
On a multi-million year time scale, the sensitivity of climate to orbital parameters also changes, thanks to continental drift. The Arctic may be especially sensitive at the moment, being an
almost landlocked sea. The Antarctic, an island continent, may be less sensitive.
Presumably, the Gabor transform filters out such effects.
□ David Lyon wrote:
However, since the variations in the orbital parameters are presumably persistent and not localized in time, why use a time-local transform like the Gabor transform (which was new to me, thanks!) instead of one that samples your entire data set with equal weight?
Your point is a good one. While there are certain slow changes in the cyclic variations of the Earth’s Milankovitch cycles, these wouldn’t be noticeable over the rather short (800,000-year)
time period covered by the EPICA Dome C ice core. So in this post, taking the Gabor transform of the Milankovitch cycles is essentially just a charismatic way to draw their Fourier transform!
The Gabor transform of the temperature, however, has the potential to be more interesting. Over the period covered here, the way it changes with time is very subtle:
But if Blake gets ahold of good data going back over a million years ago, he should be able to study a famous phenomenon: the way the dominant period of the glacial cycle shifted from 41 kyr
to 100 kyr about one million years ago!
The most famous data illustrating this is Lisiecki and Raymo’s collection of ocean sediment cores taken from 57 different locations. It gives this graph:
The data is publicly available:
• Lorraine E. Lisiecki and Maureen E. Raymo, A Pliocene-Pleistocene stack of 57 globally distributed benthic δ^18O records, Paleoceanography 20 (2005), PA1003. Data available on Lisiecki's website.
Unfortunately, these authors used Milankovitch cycles to ‘improve’ the dating of their sediment cores! This introduces a kind of circular reasoning if we try to compare their data to
Milankovitch cycles.
Right now Blake is trying to get ahold of some data going back over 1 million years that wasn’t ‘improved’ in this way. The Gabor transform of this could be very interesting.
7. When I was younger, I read many things, many of them by Isaac Asimov, and my favourites (as well as his) were his essays written for F&SF (which I read as books of 17 essays each). One was on the Milankovitch cycles: "Oblique the centric globe".
8. The thing to keep in mind here is that Milankovitch cycles have been around for far longer than the current Ice Age glaciations; therefore one cannot assume that Milankovitch cycles are the cause of glaciations. This is a case of confusing correlation with cause — a big no-no in statistics. The other thing is the abuse of the word "average". It is misleading to say that glaciations last 100,000 years on average and that Milankovitch cycles must therefore be causing them, since that ignores the probability distribution. Glaciations typically have lasted either 80,000 years or 120,000 years — and the average of those two numbers is 100,000 years — so just because a correlation exists, even a very good one, doesn't prove there is any relationship between it and reality. And if you really want to do the math, remember that the change in solar insolation due to Milankovitch cycles isn't enough to explain the change in temperature during glaciation events. The more rational explanation is that glaciations are synchronized to Milankovitch cycles, not caused by them.
9. The mechanism linking orbital cycles and the ice ages is simple: solar intensity, which varies with orbital forcing, correlates with the derivative of global ice volume. For details, see Roe G.
(2006), “In defense of Milankovitch”, Geophysical Research Letters, 33, L24703; doi:10.1029/2006GL027817.
I published a non-technical explanation of that in the Wall Street Journal (5 April 2011). The article is How scientific is climate science?; see the box entitled “An insupportable assumption”.
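Roe's test is easy to state in code; a sketch (my addition, with np.gradient as a simple stand-in for whatever differencing the paper actually uses):

    import numpy as np

    def forcing_vs_ice_derivative(insolation, ice_volume, dt=1.0):
        # Correlate the orbital forcing not with ice volume itself
        # but with its time derivative dV/dt, as Roe (2006) advocates.
        dVdt = np.gradient(ice_volume, dt)
        return np.corrcoef(insolation, dVdt)[0, 1]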
10. This talk presents a lot of what Blake Pollard and I have said about Milankovitch cycles, in a condensed way.
Plantersville, TX Math Tutor
Find a Plantersville, TX Math Tutor
...Electrical Engineering, M.S. Industrial Management, M.S. Physics, M.S.
54 Subjects: including statistics, Praxis, MCAT, public speaking
...I am also a certified Principal in the State of Texas and have worked as a Case Manager, Dept Chair as well as Team Leader. I specialize in increasing skill sets with students who struggle with
abstract and algebraic concepts. I am Certified EC-12 & Principal Certified in all grade levels. I wrot...
33 Subjects: including calculus, vocabulary, grammar, precalculus
...I am available to help tutor in pharmacology (related to nursing), all nursing courses at the undergraduate level, anatomy and physiology, and study skills. I am also available to help
individuals with organization and time management. In order to achieve the best possible outcomes with my tutoring sessions, I would tailor my tutoring methods to each individual's learning
11 Subjects: including prealgebra, biology, nursing, anatomy
I am a certified High School math teacher who enjoys working one on one or with a few students, challenging them to overcome their fears and struggles with math. My tutoring style is much like a
coach who encourages and supports his players but demands hard work and good thinking. I am most concer...
11 Subjects: including calculus, precalculus, statistics, probability
...Math has always been one of my favorite subjects, and I'm looking to tutor others in my spare time after work. I'm a very patient person when it comes to teaching others, and I am motivated
for others to learn the material. I'm a very motivated and driven individual with a positive outlook on life.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
parameter estimation
I'm working with the following conditional probability distribution
$\Pr(R|N)= \binom{L}{L-R}\sum_{r=1}^{R}\binom{R}{r} (-1)^{R-r}\left(\frac{r}{L}\right)^N$
where $L,R,N$ are discrete variables, $L,R,N>0$ and $R\leq L$.
What I need is to find the maximum-likelihood value of N that maximizes the probability of a given R. The approach I've tried is classical ML estimation, so I set
$\frac{\partial\Pr\left( R=R'|N\right)}{\partial N}=0$
that gives
$\binom{L}{L-R}\sum_{r=1}^{R}\binom{R}{r} (-1)^{R-r}\left(\frac{r}{L}\right)^N\ln\frac{r}{L} =0$
The problem is that now I'm not able to solve this equation for $N$.
Is there any other way to find an estimate of $N$ given a particular $R$ from the first conditional probability? I only need an estimate, so I can consider making some approximations… but I need a closed form; I cannot resort to numerical methods.
Thanks in advance!
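One closed-form route (my addition, not from the thread): this Pr(R|N) is the classical occupancy distribution, the chance that N balls dropped uniformly into L bins occupy exactly R of them. Its mean is E[R] = L(1 - (1 - 1/L)^N), and inverting the mean gives a method-of-moments estimate of N, a closed-form stand-in for the intractable MLE:

    import math

    def moment_estimate_N(R, L):
        # Solve E[R] = L * (1 - (1 - 1/L)**N) for N at the observed R.
        if not 0 < R < L:
            raise ValueError("need 0 < R < L for a finite estimate")
        return math.log(1 - R / L) / math.log(1 - 1 / L)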
Google Answers: Wealth Comparison Through the Years
Hi joearchivist,
The following is the result of my research for online databases that
calculate and compare the value of the dollar between today and in
past years.
-How Much Is That Worth Today?-
"Comparing the purchasing power of money in the United States (or
colonies) from 1665 to 2003.
To determine the value of an amount of money in one year compared to
another, enter the values in the appropriate places below. For
example, you may want to know: How much money would you need today to
have the same "purchasing power" of $500 in year 1970 If you entered
these values in the correct places, you will find that the answer is
You can make this computation among all the years between 1665 and 2003."
How much money in the year ---- has the same "purchasing power" as $
---- in the year ----?
I entered the following data as an example:
How much money in the year 2003 has the same "purchasing power" as
$1000 in the year 1890?
The result is:
$20189.36 in the year 2003 has the same "purchase power" as $1000 in the year 1890.
Economic History Services
-What is a Dollar Worth?-
Directions: Enter years as 4 digits (i.e. 1913) through 2004. Enter
dollar amount without commas or $ sign in box on first line. Click
Calculate button to compute dollar amount shown on second line.
If in ----(year)
I bought goods or services for $ ---- ,
then in ----(year)
the same goods or services would cost $ ----
*Limited to years from 1913 to 2004.
*Data from consumer price indexes for all major expenditure class items.
*An estimate for 2004 is based on the change in the CPI from first
quarter 2003 to first quarter 2004.
Federal Reserve Bank of Minneapolis: CPI Calculator
-The Inflation Calculator-
The following form adjusts any given amount of money for inflation,
according to the Consumer Price Index, from 1800 to 2003.
I entered the following as an example
Enter the amount of money: $1000
Enter the initial year (1800-2003): 1850
Enter the final year (1800-2003): 2003
The result:
What cost $1000 in 1850 would cost $21109.69 in 2003.
Also, if you were to buy exactly the same products in 2003 and 1850,
they would cost you $1000 and $47.37 respectively.
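All of these calculators share the same arithmetic: scale the amount by the ratio of the consumer price index (or a similar index) between the two years. A sketch (my addition; the example index ratio is back-solved from the $1000 to $21109.69 result quoted above):

    def adjust_for_inflation(amount, index_start, index_end):
        # Purchasing-power conversion via a price-index ratio.
        return amount * index_end / index_start

    # e.g. with index_end / index_start ≈ 21.10969 (1850 -> 2003):
    # adjust_for_inflation(1000, 1.0, 21.10969) ≈ 21109.69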
The Inflation Calculator
-Inflation Calculator-
For the years 1913 - 2004.
$ ----
in ---- (year)
has the same buying power as
$ ----
in ----(year).
Bureau of Labor Statistics: Consumer Price Indexes
Some directories you may find useful:
Current Value of Old Money
Duke University Library: Reference Resources - Calculators
Search criteria:
dollar any given year worth today
I hope the information provided is helpful. If you have any questions regarding my answer, please don't hesitate to ask before rating it.
Best regards,
Woodstock, GA ACT Tutor
Find a Woodstock, GA ACT Tutor
...Knowing how to tutor is more than mastering math or science, it’s about how to relate this material to other people. Analyzing people to discover their mental blocks, preferred learning
methods, and style of approach is what I do best. I’m also great for you procrastinators out there with test ...
25 Subjects: including ACT Math, chemistry, calculus, reading
...I have also had a great deal of scientific experience and have worked around the world on different physics projects. I have been to Germany, Poland, Canada and still do research locally at
Rutgers. I have actively been involved with RUPA-The Rutgers University Programming Association throughou...
41 Subjects: including ACT Math, reading, physics, writing
...I am very familiar with navigating in GUI based Ubuntu platforms as well as the more terminal style CentOS platform. I can provide beginner level support with Linux and performing basic tasks
such as installing packages and setting up a development environment. As a programmer, I work with logic in my development everyday.
24 Subjects: including ACT Math, calculus, geometry, statistics
...I love helping students reach their full potential! I have found that most of the time all a student needs is someone encouraging them and letting them know that they are SMART and that they
CAN do it! I cover Algebra I and II, Geometry, Chemistry, and SAT/ACT Math.
15 Subjects: including ACT Math, chemistry, geometry, biology
...I have taught phonics to grammar school students, high school students, college students, and students learning English as a second language. I have taught phonics in private grammar and high
schools, college level courses, adult education courses and seminars. I have taught English and phonics for Literacy Action in Atlanta, Georgia and Literacy Forsyth in Cumming, Georgia.
42 Subjects: including ACT Math, reading, English, ASVAB
Re: Re: st: Fitting probit - estat gof puzzling results
Re: Re: st: Fitting probit - estat gof puzzling results
From J Gonzalez <jgonzalez.1981@yahoo.com>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject Re: Re: st: Fitting probit - estat gof puzzling results
Date Thu, 1 Sep 2011 10:42:58 +0100 (BST)
Clyde Schechter, thank you very much for your guidance. It has been very helpful indeed.
Although, for my current project I think I will stop there in that model, your explanation raised a couple of questions to me.
You said models like this (that discriminate quite well but are not well calibrated), "may still be useful for understanding factors that promote or inhibit applying, even if they are not well calibrated--but such models would not be suitable for some other purposes". My questions are two-fold:
1) If the data generating process does not match with the model I am using, then the assumptions about variable's distribution might not hold (for example probability distribution of residuals). Therefore, every statistic that depends on such assumptions might also be unreliable, and hence, the simplest hypothesis testing might be untrustworthy (for example, if one variable coefficient is statistically different from 0). Am I right? If I am right why and/or how the model "may still be useful for understanding factors that promote or inhibit applying"?. I am pretty sure that you are right when said that, however, I cannot figure out how it would be justified from a theoretical point of view.
Well, as I said I am not an expert in this, so I apologize in advance if the question sounds quite silly, but everything I read about econometrics points out (or implies, or suggest) that if the assumption of the data generating process nature does not hold, the model is not reliable (not consistent, and sometimes even biased). That is probably because I am relying mostly on text books (Wooldridge, 2002) and hence, trying to test every testable assumption of the model, but it would be great if you can point me to literature where I can go deeper on this issue of the usefulness and properties of non-well-calibrated models.
2) What are the kind of purposes for which such a model might not be suitable? For example, an out-of-sample prediction of the probability of applying given the RHS variables?
Once again thank your for your help.
Best regards,
Jesús González
Wooldridge, J.M. (2002). Econometric Analysis of Cross Section and Panel Data. MIT Press, 2002.
----- Original Message -----
From: Clyde B Schechter <clyde.schechter@einstein.yu.edu>
To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Sent: Tuesday, 30 August 2011, 18:40
Subject: Re: Re: st: Fitting probit - estat gof puzzling results
So, after revising his model, Jesus Gonzalez gets these calibration results:
| Group | Prob | Obs_1 | Exp_1 | Obs_0 | Exp_0 | Total |
| 1 | 0.0968 | 225 | 222.5 | 4021 | 4023.5 | 4246 |
| 2 | 0.1928 | 635 | 607.4 | 3610 | 3637.6 | 4245 |
| 3 | 0.3265 | 1080 | 1083.3 | 3165 | 3161.7 | 4245 |
| 4 | 0.5803 | 1861 | 1873.9 | 2384 | 2371.1 | 4245 |
| 5 | 0.8053 | 3097 | 3020.4 | 1148 | 1224.6 | 4245 |
| 6 | 0.8871 | 3669 | 3610.0 | 576 | 635.0 | 4245 |
| 7 | 0.9342 | 3861 | 3873.4 | 384 | 371.6 | 4245 |
| 8 | 0.9665 | 4016 | 4038.1 | 229 | 206.9 | 4245 |
| 9 | 0.9899 | 4122 | 4155.9 | 123 | 89.1 | 4245 |
| 10 | 1.0000 | 4204 | 4229.5 | 41 | 15.5 | 4245 |
I would consider these eye-poppingly good. And the goal being to understand the factors that influence the decision to apply for the program, I think it would be difficult to meaningfully improve on this. I would still disregard the p-value: it will remain "on steroids" as long as you use this huge sample. Perhaps further tweaking of the model will produce slight improvements in fit, but I'd be surprised if what you learn from them will be worth the effort.
Remember, it is highly unlikely that the real data generating process here is in fact a probit model based on variables you have measured or even could measure in principle. This is one of those wrong models that I think Box would have called useful. As long as there is even a tiny difference between the real data generating process and your statistical model, you are likely to detect that difference in a sample this size when you test calibration. It is likely that any attempt to get your H-L chi square into non-significant territory will either fail, or will succeed at the price of fitting the noise in your data (e.g. a saturated model).
Finally, let me rant (mildly and briefly) about your lower level of concern for discrimination. Suppose the real data generating process were that participants apply to the program with probability p = some function of an unobserved variable, u, which his independent of all your observed variables. When you fit a model based on the x's, you will, with some noise, get a model that predicts, more or less, probability = p0 for all comers(where p0 is the marginal probability of applying to the program). That model will be almost perfectly calibrated: in each decile of predicted probability the observed and predicted probabilities will match up very closely, both being approximately p0: but the model is completely uninformative as to _which_ subjects are applying and which are not. You would need to look at the area under the ROC curve, which will be very close to 0.5 in this situation, to find that out from a summary statistic.
Now that is probably not a very realistic scenario, but I'm trying to make clear the point that even a perfectly calibrated model can fail to distinguish appliers from non-appliers in any useful way. If the discrimination is not good, the model is not useful for your purposes, no matter how well it is calibrated. (By contrast, models that discriminate well may still be useful for understanding factors that promote or inhibit applying, even if they are not well calibrated--but such models would not be suitable for some other purposes.)
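The arithmetic behind the decile table quoted above is simple to reproduce; a sketch (my addition, mimicking what -estat gof, group(10)- tabulates, not Stata's actual implementation):

    import numpy as np

    def calibration_table(p, y, groups=10):
        # Sort by predicted probability, split into deciles of risk, and
        # compare observed 1s with the sum of predicted probabilities.
        order = np.argsort(p)
        rows = []
        for cp, cy in zip(np.array_split(p[order], groups),
                          np.array_split(y[order], groups)):
            rows.append((cp.max(), cy.sum(), cp.sum(), len(cy)))
        return rows  # (upper prob, observed 1s, expected 1s, group size)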
Good luck with the rest of your project!
Clyde Schechter
Dept. of Family & Social Medicine
Albert Einstein College of Medicine
Bronx, NY, USA
Re: Re: Re: st: Tabulate summary statistics by percentiles and save output
Re: Re: Re: st: Tabulate summary statistics by percentiles and save output
From annoporci <annoporci@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: Re: Re: st: Tabulate summary statistics by percentiles and save output
Date Sun, 30 Dec 2012 05:35:21 +0800
I wish to tabulate some summary statistics for some percentiles and to
export the tables to files in tex format.
It turns out that my tabulations had serious problems, caused by a
misunderstanding of the Stata syntax, which Nick kindly pointed out. For
the record, I copy below the code which, I think, achieves the first of my
objectives, namely summary statistics for different percentiles. I'm still
working on exporting that in latex tables.
cap log close
set more off
cd c:\stata\
use ibm, clear
tsset date
local variables ibm // spx
/* Tabulate moments for different percentiles */
// summarize produces only a few selected percentiles
foreach var of varlist `variables' {
    quietly summarize `var', detail
    summarize `var' if inrange(`var',`=r(p1)',`=r(p10)'), detail
    quietly summarize `var', detail
    summarize `var' if inrange(`var',`=r(p90)',`=r(p100)'), detail
}
/* Tabulate moments for different percentiles */
// uses the percentiles computed from the previous subset of data
// used by summarize
// NOTE: most likely not what is intended
foreach var of varlist `variables' {
    quietly summarize `var', detail
    summarize `var' if inrange(`var',`=r(p1)',`=r(p10)'), detail
    summarize `var' if inrange(`var',`=r(p90)',`=r(p100)'), detail
}
/* Tabulate moments for different percentiles */
// Alternative approach using centile and tabstat
// if percentiles beyond those returned by summarize are needed
/* Compute and store percentiles */
foreach var of varlist `variables' {
    quietly centile `var', centile(1(1)100) normal
    forval i = 1(1)100 {
        scalar `var'_p`i' = r(c_`i')
    }
}
/* Compute and store first moments between pi (i=1..100) and p100 */
foreach var of varlist `variables' {
    forvalues i = 1(1)99 {
        quietly tabstat `var' if inrange(`var',`=`var'_p`i'',`=`var'_p100') ///
            , stat(count mean sd skewness kurtosis) save
        *return list
        tempname total
        matrix `total' = r(StatTotal)
        *matrix list `total'
        scalar `var'_ob_p`i'_p100 = `total'[1,1]
        scalar `var'_mu_p`i'_p100 = `total'[2,1]
        scalar `var'_sd_p`i'_p100 = `total'[3,1]
        scalar `var'_sk_p`i'_p100 = `total'[4,1]
        scalar `var'_kt_p`i'_p100 = `total'[5,1]
    }
}
scalar list
Remark: I saved my variables of interest as scalars, not sure if that's
the smart way.
Obviously, in practice, I do not intend to do so many computations for so
many percentiles, the above is merely an illustration of what's possible
with my current skill level. The reason for using -tabstat- with -centile-
is that -summarize,detail- returns only a selection of percentiles, not
enough for my purpose.
The code I wrote computes for p1-p100, then p2-p100, p3-p100, etc. which
is useful for my ultimate purpose. Other ways based on the same code are
obviously possible, e.g. p49-p51, p48-p52, p47-p53, etc..
I am posting this as a form of follow-up, for the record only, and with no guarantee; I have essentially no experience or competence in Stata and statistics. I hope this is not against the Statalist etiquette.
Patrick Toche.
Teaching Textbooks Inc Teaching Textbooks: Algebra 1, Textbook with Answer Key, Version 2.0
Designed for additional students using
Teaching Textbooks Algebra 1, Version 2.0
, this student workbook and answer booklet will allow students to complete the course in their own book. Perfect for co-ops or siblings! The student textbook contains 142 lessons and is 854 pages,
softcover, spiralbound; the answer key/test bank contains 19 tests and is 177 pages, softcover.
This kit does NOT contain the Teaching Textbooks Algebra 1 2.0 CD-ROM Set.
Park Row, TX Math Tutor
Find a Park Row, TX Math Tutor
...I plan on attending an institution where I can achieve a medical doctorate simultaneously with a degree in jurisprudence. Five years at UT have taught me how to take in complicated information, process it, and teach it to fellow students. In a more professional atmosphere, I have helped students achieve their academic goals through tutoring at the high school and college level.
38 Subjects: including calculus, prealgebra, ACT Math, algebra 1
...Those problems that appear complicated are just a series of simple concepts woven together. I deconstruct the problem for students so they can understand the simple problems woven in to what
"appears" to be a complicated problem. Typically, if math is complicated it's because the method of instruction is complicated.
16 Subjects: including algebra 1, algebra 2, calculus, ACT Math
...I even have the recommendations from all my previous teaching experiences highlighting my wonderful teaching skills.I can effectively tutor Academic, AP and Pre-AP students. I am sure my
students will be benefited by the way I teach and the knowledge I possess. I am a certified teacher with a Master's degree in Physics and a Bachelor's in Education.
15 Subjects: including algebra 2, SAT math, precalculus, linear algebra
I am a certified High School math teacher who enjoys working one on one or with a few students, challenging them to overcome their fears and struggles with math. My tutoring style is much like a
coach who encourages and supports his players but demands hard work and good thinking. I am most concer...
11 Subjects: including calculus, precalculus, statistics, probability
...I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of
your own home at a schedule convenient to you. I operate my business with the highest ethical standar...
35 Subjects: including calculus, ACT Math, discrete math, differential equations
Continuity of a function
Q: f is a function from the reals to the reals. f(x) = x for x rational, f(x) = 1 - x otherwise. Find the set of all a such that f is continuous at a.
A: I think f is continuous at x = 1/2. Is this correct? Are there any other points? I thought not, because if we choose any a not equal to one half, then we can always find some rational number between x and a, as we proceed to the limit. Is this correct reasoning?
Q: f is a function from the reals to the reals. f(x) = x for x rational, f(x) = 1 - x otherwise. Find the set of all a such that f is continuous at a.
A: I think f is continuous at x = 1/2. Is this correct? Are there any other points? I thought not, because if we choose any a not equal to one half, then we can always find some rational number between x and a, as we proceed to the limit. Is this correct reasoning?
Your reasoning isn't very clear. Yes, given a not equal to 1/2 we can choose some rational number between x and a, but that is true with any number replacing 1/2! What's special about 1/2?
And why do you mention only rational numbers? In order for a function to be continuous at a point, its limit must exist and be equal to the value of the function. But, given any x, there will be rational numbers as close as we please to x, and so values close to x itself. There will also be IRRATIONAL numbers arbitrarily close to x, so there will be values close to 1 - x. In order that the limit exist, we must have x = 1 - x. That's what is special about 1/2! I'm sure that's what you were thinking, but you need to state clearly why this would work with 1/2 only and include the irrational numbers.
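Written out, the ε–δ verification at a = 1/2 is one line (my addition, just making the above explicit in LaTeX):

    % Both branches satisfy |f(x) - 1/2| = |x - 1/2|:
    %   x rational:   |f(x) - 1/2| = |x - 1/2|
    %   x irrational: |f(x) - 1/2| = |(1-x) - 1/2| = |x - 1/2|
    % so delta = epsilon works at a = 1/2; at any other a the two
    % branches disagree in every neighbourhood, so the limit fails.
    \[
    \bigl|f(x) - f(\tfrac12)\bigr| = \bigl|x - \tfrac12\bigr|
    \quad\Longrightarrow\quad
    \delta = \varepsilon \ \text{suffices at } a = \tfrac12 .
    \]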
Thanks HallsofIvy, that makes it clearer. I had noticed 1/2 is the only number which satisfies x = 1 - x, which is why I went for that one!
I have another attempted answer to a question, maybe you could help we with this?
Q: Prove that f(x) tends to infinity as x tends to infinity iff f(x_n) tends to infinity for every sequence such that x_n tends to infinity:
IF: if f(x_n) tends to infinity for all (x_n) such that x_n tends to infinity, we have (x_n) increasing, and unbounded, so if we take x = x_i, y = x_j, then, for j >= i, y>=x.
Now, for j >= i, f(x_j) >= f(x_i), i.e. for y>=x, f(y)>=f(x), so we have f(x) increasing and unbounded, therefore f(x) tends to infinity as x tends to infinity.
ONLY IF: if f(x) tends to infinity as x tends to infinity, and we take a sequence (x_n) which tends to infinity, we have x_(n+1) >= x_n, so f(x_(n+1)) >= f(x_n), i.e. f(x_n) increasing, and
unbounded. So f(x_n) tends to infinity.
Is this clear/correct reasoning?
Peretek, Inc. - PFSA
PFSA (Personal Finance Scenario Analyzer) is a small, convenient application for iPhone that allows you to perform a number of financial calculations. These include compound interest, present value,
future value, loan terms, and of course tips!
We know, the name could be better.
Where can I get it?
iTunes Link: PFSA
Show me!
First, let's talk about Compound Interest.
In this example, I set the initial investment to 1000.00. The interest rate I plan to earn is 12%, and I plan to wait for 25 years before I run off to Switzerland. In this case, as you can see, I'll
have 19788.47 (for a profit of 18788.47).
Now, we need a little Present Value.
Here, I have a bond for 10000.00 that I can redeem in 5 years. I plan to earn 7.5%. The present value of that bond is 6965.59.
Next, let's consider Future Value.
I want to retire in 20 years. I plan (like Warren Buffett) to average 12% for my investments. When I retire, I want to have 1.2 million to help fund my plans for world domination. If I contribute monthly (and the interest is calculated at the same rate), I need to add 1213.03 each month for the next 20 years. Nice dreams.
Never sign for one until you know all the numbers for a Loan.
This is the canonical loan calculation. Given a loan of 175000.00 -- house or Ferrari, doesn't matter -- at an interest rate of 4.5%, with a plan to pay it off in 15 years (which of course is the 180 months you see here), one would have to pay a mere 1338.74 per month.
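The arithmetic behind these screens is standard time-value-of-money math. Here is a minimal Java sketch that reproduces the figures above (an illustration, not the app's actual code; the compound-interest screen evidently compounds monthly, while the present-value screen discounts annually):

Code java:
public class TvmCheck {
    public static void main(String[] args) {
        // Compound interest: 1000.00 at 12% nominal, compounded monthly, 25 years
        double fv = 1000.0 * Math.pow(1 + 0.12 / 12, 25 * 12);
        System.out.printf("Compound interest: %.2f%n", fv);      // 19788.47

        // Present value: 10000.00 due in 5 years, discounted at 7.5% annually
        double pv = 10000.0 / Math.pow(1.075, 5);
        System.out.printf("Present value: %.2f%n", pv);          // 6965.59

        // Future value: monthly deposit to reach 1,200,000 in 20 years at 12%
        double r = 0.12 / 12;
        int n = 20 * 12;
        double pmt = 1200000.0 * r / (Math.pow(1 + r, n) - 1);
        System.out.printf("Monthly contribution: %.2f%n", pmt);  // 1213.03

        // Loan payment: 175000.00 at 4.5%, 180 monthly payments
        double lr = 0.045 / 12;
        double m = 175000.0 * lr / (1 - Math.pow(1 + lr, -180));
        System.out.printf("Loan payment: %.2f%n", m);            // 1338.74
    }
}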
Don't be greedy. Tip your server.
You can't live without a tip calculator. Turn your iPhone into Seinfeld's dad's Willard. Whip it out and impress everyone. The current US standard tip is 15%, so that's the default. Change it as you
wish. In the example shown, we ate well and had some wine, so our bill is 130.00. Tipping the standard 15% gives us a total bill of 149.50.
It's Just an App!
Note the results of PFSA are for informational and educational purposes only. Don't depend on the answer you get here as being precise, but it is a close approximation!
Help Me!
Need help? E-mail us at support@peretek.com
|
{"url":"http://www.peretek.com/pfsa/","timestamp":"2014-04-20T21:16:35Z","content_type":null,"content_length":"6536","record_id":"<urn:uuid:3a47c134-4e9d-47e8-bb04-68cb16e3cda9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diagram Chasing
Model Categories, 2
February 26, 2010
Continuing my discussion of Model Categories (again, mostly through Hovey’s book, with my own reorganization and interpretations.) I’m having my notes follow the lectures I’ve been giving in seminar.
They should be readable, but questions are welcome.
Algebraic Coding Theory
February 24, 2010
I just found a survey paper I wrote for 6.UAP. Unfortunately, it’s just a draft, and I’ve lost my backup with the .tex file in it, so it’s missing pieces.
In a nutshell: the project was to do some reading on results about error-correcting codes. In particular, we can consider error-correcting codes as independent subsets of vertices of Hamming graphs, whose vertices are given by bit-strings, with two distinct vertices adjacent whenever the Hamming distance between these bit-strings is small. These basic concepts are introduced, and bounds on sizes
of independent sets of vertices are discussed. This was written for an audience with a basic understanding of linear programming and linear algebra, so it should be accessible to ambitious
undergraduates, for example.
Model Categories
January 25, 2010
Model categories abstract key map lifting and extension properties so that homotopy theory can be performed in categories other than Top.
I’m preparing a short introduction for a seminar I will be taking this term. Here are my notes as of the moment. (Does anyone know nice ways of converting LaTeX to wordpress? Doing things by hand can
be really annoying.)
Steenrod Squares
November 23, 2009
Steenrod squares recover “stable” information from the cup product structure that otherwise disappears. Recall that ${H^*(\Sigma X)}$ has no nontrivial cup products because it is the union of two contractible open sets. This means that we cannot determine information about $\Sigma X$ through its cohomology ring structure. However, we can determine information about the homotopy type of $\Sigma X$ through the cup product structure on $X$, using properties of Steenrod squares.
For each ${i \ge 0}$, there is a mod-2 cohomology operation ${{\mathrm {sq}}^i : H^{n} \rightarrow H^{n+i}}$ satisfying the following properties:
1. ${{\mathrm {sq}}^i \circ f^* = f^* \circ {\mathrm {sq}}^i}$ (naturality)
2. ${{\mathrm {sq}}^i(x+y) = {\mathrm {sq}}^i(x) + {\mathrm {sq}}^i(y)}$
3. ${{\mathrm {sq}}^n(x \smile y) = \sum_i {\mathrm {sq}}^i(x) \smile {\mathrm {sq}}^{n-i}(y)}$ (Cartan formula)
4. ${{\mathrm {sq}}^i \circ \Sigma = \Sigma \circ {\mathrm {sq}}^i}$ (stability)
5. $\displaystyle \mathrm{sq}^i(\alpha) = \alpha^2 \textrm{ if } i = |\alpha|$,
6. $\displaystyle \mathrm{sq}^i(\alpha) = 0 \textrm{ if } i > |\alpha|$,
7. ${{\mathrm {sq}}^0 = \mathrm{id}}$
8. ${{\mathrm {sq}}^1}$ is the Bockstein homomorphism for the short exact sequence of coefficients
$\displaystyle 0 \rightarrow {\mathbb Z}/2 \rightarrow {\mathbb Z}/4 \rightarrow {\mathbb Z}/2 \rightarrow 0.$
If ${I = (i_1,\dots,i_k)}$, we may write
$\displaystyle {\mathrm {sq}}^I := {\mathrm {sq}}^{i_1}{\mathrm {sq}}^{i_2} \dots {\mathrm {sq}}^{i_k}.$
If, for each ${r}$, ${i_r \ge 2 i_{r+1}}$ (counting ${i_{k+1} = 0}$), we call ${I}$ admissible. The excess of an admissible ${I}$ is the quantity
$\displaystyle e(I) = \sum_{r} i_r - 2i_{r+1} = i_1 - \sum_{r > 1} i_r.$
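For instance, as a quick check of the definitions: $I = (4,2,1)$ is admissible, since $4 \ge 2 \cdot 2$ and $2 \ge 2 \cdot 1$, and its excess is $e(I) = 4 - (2 + 1) = 1$.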
The following example shows how Steenrod squares can use the cup product structure in ${X}$ to tell us about the homotopy type of ${\Sigma X}$.
Proposition 1 Let ${X = S^1 \vee S^2}$. Then ${X}$ is not stably homotopic to ${P := \mathbb{C}P^2}$.
Proof: Take cohomology modulo 2. Let ${x \in H^1(X;{\mathbb Z}/2) = {\mathbb Z}/2}$ be a generator, and let ${y \in H^1(P;{\mathbb Z}/2)}$ be a generator. Then ${x^2 = 0}$ while ${y^2 \ne 0}$ is a generator of ${H^2(P;{\mathbb Z}/2)}$. Therefore cup products alone tell us that ${X}$ and ${P}$ are not homotopic. However, ${H^n(X;{\mathbb Z}) = H^n(P;{\mathbb Z})}$ for every ${n}$, so cohomology groups alone are not sufficient; unfortunately, the cohomology groups ${H^n(\Sigma^k X; {\mathbb Z}) = H^n(\Sigma^k P; {\mathbb Z})}$ agree, and the cup product structures of the suspensions' cohomology rings are both trivial, so neither distinguishes the suspensions.
However, Steenrod squares allow us to recover some information from the cup product; indeed ${{\mathrm {sq}}^1(\Sigma^k x) = \Sigma^k({\mathrm {sq}}^1 x) = \Sigma^k(x^2) = 0}$, while ${{\mathrm {sq}}^1(\Sigma^k y) = \Sigma^k(y^2) \ne 0}$. Therefore ${\Sigma^k X}$ and ${\Sigma^k P}$ are not stably homotopic. $\Box$
It turns out that Steenrod squares generate all mod-2 cohomology operations, which can be seen through the calculation of the cohomology of Eilenberg-MacLane spaces $K(\mathbb{Z}/2,n)$, which will be discussed in a later post. This calculation expresses the cohomology groups $H^m(K(\mathbb{Z}/2,n);\mathbb{Z}/2)$ in terms of $\mathrm{sq}^I$ over admissible sequences $I$ where $e(I) \le n$.
Fun Proof of Elementary Fact
October 25, 2009
While thinking about a number theory problem, I stumbled into a proof of a classical fact: the classification of primitive Pythagorean triples $(x,y,z)$ where $x^2 + y^2 = z^2$. Here is a proof using unique prime factorization in the Gaussian integers.
Claim: If $x^2 + y^2 = z^2$ where $(x,y,z) = (1)$, then there are integers c,d such that $x = c^2-d^2$, $y = 2cd$ (after possibly swapping $x$ and $y$).
Proof: Suppose $x^2 + y^2 = z^2$. Then we have $(x+iy)(x-iy) = z^2$. We know $(x+iy,x-iy) \supset (2x,2y) = (2) = (1+i)^2$. But if $1+i \mid x+iy$, then $2 \mid x^2 + y^2 = z^2$, so then $4 \mid x^2+y^2$ and we must have that $x,y$ are even, which is impossible for primitive triples (x,y,z). Therefore $(x+iy,x-iy) = (1)$, and so $(x+iy)$ and $(x-iy)$ must both be squares (up to units, which can be absorbed). So, we can write $x+iy = (c+di)^2 = (c^2-d^2) + (2cd)i$ for some integers c and d, as desired.
The above turns out to be pretty standard, but I still find it amusing and elegant.
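As a concrete check: taking $c = 2$, $d = 1$ gives $(c+di)^2 = (2+i)^2 = 3 + 4i$, recovering the familiar primitive triple $(3, 4, 5)$.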
Cup Products
August 20, 2009
I’ve always found cup products to be somewhat mysterious. The cup product on cohomology is a natural product in terms of maps of spaces, but it doesn’t mandate a canonical definition like you might expect. In fact, the cup product descends from a map on the cochain level that is quite difficult to reason with, and it is only upon passing to cohomology that it gains many of its nicer features.
The cup product on cohomology is the composition of a cohomology cross product, which is simple and easy to understand, and a diagonal approximation, which is easy to understand in low dimensions.
Definition: The algebraic cohomology cross product $\times^{\mathrm{alg}} : H^p(C^*) \otimes H^q(D^*) \to H^{p+q}((C_* \otimes D_*)^*)$ is defined by $[f] \otimes [g] \mapsto [\sigma \otimes \tau \mapsto f(\sigma)g(\tau)]$.
It is a simple exercise to show this is well-defined; it’s induced from the standard embedding of $C^* \otimes D^*$ into $(C_* \otimes D_*)^*$.
Definition: A diagonal approximation is a natural chain map $\tau : S_*(X) \to S_*(X) \otimes S_*(X)$ satisfying $\tau(\sigma) = \sigma \otimes \sigma$ for 0-simplices $\sigma$.
Since the functor $S_*(-)$ is free with models $\Delta^n$ and $S_*(-) \otimes S_*(-)$ is acyclic on these models, a diagonal approximation exists, and furthermore any two are chain homotopic. A standard choice of $\tau$ is the Alexander-Whitney diagonal approximation given by $\tau(\sigma) = \sum_{p+q=|\sigma|} { }_p\sigma \otimes \sigma_q$. We let ${ }_p\sigma$ denote the “front face” of $\sigma$ consisting of its first p vertices, and $\sigma_q$ denotes its back q vertices. This choice of $\tau$ is clearly a diagonal approximation, but what is its geometric significance? Not much as far as I can tell — the big advantage of the Alexander-Whitney map, in my opinion, is that it is a clean algebraic device, which is (relatively) easy to calculate, and as such provides for clean computations.
Oh, and before I forget:
Definition (Cup Product): The cup product $\smile : H^*(X) \otimes H^*(X) \to H^*(X)$ is the composition $\tau^* \circ \times^\mathrm{alg} : H^*(X) \otimes H^*(X) \to H^*((S_*X \otimes S_*X)^*) \to H^*(X)$, where $\tau$ is any diagonal approximation.
Since any two diagonal approximations are chain homotopic, all choices of $\tau$ give the same cup product. Using the acyclic models theorem, one can easily show associativity and graded
commutativity of $\smile$. However, without a choice of $\tau$, it’s not clear how to do any computations with this thing, you have to either use naturality or drop down to the cochain level and the
Alexander-Whitney map to know more.
Proposition: $(a \smile b) = (-1)^{|a||b|}(b \smile a)$ for $a,b \in H^*(X)$.
Proof: Consider $T : S_*(X) \otimes S_*(X) \to S_*(X) \otimes S_*(X)$ given by $T(\sigma \otimes \tau) = (-1)^{|\sigma||\tau|} (\tau \otimes \sigma)$. Then $T \circ \tau$ is a diagonal approximation; so by uniqueness of the cup product $a \smile b = (-1)^{|b||a|}(b \smile a)$.
Abstract nonsense is great because it simplifies arguments by exploiting symmetry. However, sometimes you need to get to the nitty-gritty just a little bit to do some actual computations. Try proving $a \smile b = (-1)^{|a||b|} b \smile a$ without using the acyclic models theorem, just using the Alexander-Whitney map directly — I doubt it would be a fun endeavor.
Acyclic Models, the Eilenberg-Zilber Theorem, and Excision
August 13, 2009
The acyclic models theorem, in the full generality that I stated in my previous post, is not usually the form in which it is used. More generally:
Corollary 1: If $F$ and $G$ are functors from a category $\mathcal{A}$ to the category of augmented chain complexes, such that $F$ and $G$ are free and acyclic on models $\mathcal{M}$, then there
exists a natural transformation $F \to G$ extending the identity map $\mathbb{Z} \to \mathbb{Z}$, which is unique up to natural chain homotopy.
Corollary 2: If $F$ and $G$ are functors to the category of (non-augmented) chain complexes which are free, and acyclic in dimensions above zero, then any natural chain map between them that
restricts to an isomorphism on $H_0(F) \to H_0(G)$ is a chain homotopy equivalence.
Corollaries 1 and 2 above are effectively restatements of the same powerful idea. The acyclic models theorem reduces equality arguments in homology to proving that appropriate chain complexes are
free and acyclic.
Theorem (Eilenberg-Zilber): Let $S(X)$ denote the singular chain complex of the space $X$. Then the functors $S(X \times Y)$ and $S(X) \otimes S(Y)$ from $\mathrm{Top}^2$ to $\partial \mathfrak{G}$ are naturally chain homotopy equivalent. As a result, $H_n(X \times Y) = H_n(S(X) \otimes S(Y))$.
Proof: By definition both functors $S(X \times Y)$ and $S(X) \otimes S(Y)$ are free with models $\{(\Delta^n,\Delta^m) \mid n,m \ge 0\}$ (in fact, $S(X \times Y)$ is free with models $\{(\Delta^n, \Delta^n)\}$). Since the space $\Delta^n \times \Delta^m$ is contractible, $S(\Delta^n \times \Delta^m)$ is acyclic in positive dimensions. Since $\Delta^n$ is itself contractible, $S(\Delta^n) \otimes S(\Delta^m)$ is chain homotopic to $S(*) \otimes S(*)$, which is acyclic in positive dimensions as well. Therefore both functors are free and acyclic.
Consider the (natural) map $S_0(X \times Y) \to (S(X) \otimes S(Y))_0 = S_0(X) \otimes S_0(Y)$ taking $\sigma \times \tau \mapsto \sigma \otimes \tau$. This map is invertible and so induces an
isomorphism on $H_0$. Therefore the two functors are chain homotopic. QED.
And it is, remarkably, that simple, although I spent quite a while convincing myself that I hadn’t missed any steps (and I still hope I haven’t). It just turns out that the acyclic models theorem is
incredibly powerful; especially in the proof of the excision theorem.
And, yes, I have done some weasely magic by assuming that homology is invariant over homotopic spaces; although that’s a standard result, the Eilenberg-Zilber theorem does yield a very simple proof
that homotopic maps induce isomorphisms on homology — one could hypothetically just compute the homology of simplices directly if one wanted to be a purist.
The excision theorem is proven in a similar way; at the core of excision is the claim that if $A$ and $B$ are subspaces of $X$ whose interiors cover $X$, then $S(A + B)$ and $S(X)$ are chain
homotopic, where $S(A+B)$ is the chain complex of simplices whose image lies completely in $A$ or completely in $B$.
Excision is typically proven through barycentric subdivision, with the geometric intuition being that every simplex in $X$ can be subdivided into smaller simplices, and eventually each simplex will
be small enough to be covered either by the interior of $A$ or the interior of $B$. However, you could check in Hatcher’s book to see that this argument is far less simple than you would like it to
be, taking up a few pages to hammer out all of the algebraic (and geometric!) details.
Instead, the acyclic models theorem opens up another avenue of attack: it is painfully clear that $S_0(A + B)$ and $S_0(X)$ are isomorphic, since the interiors of $A$ and $B$ form an open cover of
$X$. One can consider the category of spaces with open coverings, which has objects $(X, \mathcal{U})$, where $\mathcal{U}$ is an open cover of $X$, and whose maps respect these coverings in the
natural way.
Then we can associate to $(X,\mathcal{U})$ the chain complex $S(X, \mathcal{U})$ of simplices whose image is contained wholly in one of the open sets of $\mathcal{U}$, and we can also associate to it
the standard chain complex $S(X)$. One can show both are acyclic and free with models $(\square^n, \{\square^n\})$, where $\square^n$ is the $n$-cube, without too much trouble (I will omit the details, but it’s a basic Lebesgue number/compactness argument). Therefore, these functors are naturally chain homotopic, and excision is proved.
(Covering bases: the well-definedness of derived functors is not an acyclic models argument so much as it is nearly identical to the proof of acyclic models. Sorry about that. )
Next I will be moving into a brief look at cohomology, cup products, and the power of acyclic models in that context for proving things like associativity and commutativity.
1. R. Schon, Acyclic Models and Excision. Proceedings of the American Mathematical Society, 1976.
Acyclic Models
August 11, 2009
Acyclic Models is a powerful technique for constructing maps between chain complexes. This method can be used to show the equivalence of chain complexes, and thereby construct isomorphisms of
homology theories for spaces. Four elementary uses of acyclic models are:
• Excision
• Eilenberg-Zilber theorem
• Construction of Derived functors (for example, Ext and Tor)
• Commutativity of the Cup Product
The Setup
Let $\mathcal{C}$ be a category, and $\mathcal{M}$ be a collection of objects in $\mathcal{C}$, called model objects. Let $T : \mathcal{C} \to \mathrm{Ab}$ be a functor, where $\mathrm{Ab}$ is the category of abelian groups. If there are objects $g_M \in T(M)$ for $M \in \mathcal{M}$ such that, for every object $A$, the group $T(A)$ is freely generated by the images $T(f)(g_M)$ over maps $f : M \to A$, then $T$ is said to be free with models $\mathcal{M}$.
For example, the singular $n$-chains $C_n(X)$ on a space $X$ are free with a single model $\Delta^n$.
The Big Theorem
Let $K$ and $L$ be covariant functors on $\mathcal{C}$ with values in the category of chain complexes, and let $f : K \to L$ be a chain map defined in dimensions $< q$. Then if $K_n$ is free with
models $\mathcal{M}$ for all $n \ge q$ and $H_n(L(M)) = 0$ for all $n \ge q-1$ and all $M \in \mathcal{M}$, then $f$ extends to a chain map $f' : K \to L$; furthermore, $f'$ is unique up to a chain
homotopy $D$ satisfying $D_n = 0$ for every $n < q$.
In words: if $K$ is free with models and $L$ is acyclic on those models, then partially defined maps $K \to L$ can be extended uniquely up to chain homotopy. The interesting corollary is that if $K$
and $L$ are both free and acyclic on the same models, then they are chain equivalent, provided an isomorphism in dimension zero.
This can be particularly useful because the reduced singular chain complex of a space is free with models $\Delta^n$, and since simplices are contractible $\widetilde{H}_*(\Delta^n) = 0.$ Note that
the (unreduced) singular chain complex is not acyclic since $H_0(\Delta^n) = \mathbb{Z}$.
In my next post, I will sketch a proof of the excision theorem in singular homology, using the method of Acyclic Models.
1. Samuel Eilenberg and Saunders MacLane, “Acyclic models,” American Journal of Mathematics, vol. 75 (1953), pp. 189-199
August 10, 2009
I set this up to discuss ideas from mathematics that I find intriguing or am learning about at the moment. My focus is in algebraic topology, so notes will mostly be in that domain. WordPress is
nice for this because it has support for $\LaTeX$, which makes things awfully convenient for mathematical discourse.
|
{"url":"http://abeliangrapes.wordpress.com/","timestamp":"2014-04-17T18:24:35Z","content_type":null,"content_length":"71258","record_id":"<urn:uuid:8f3b17c4-3185-4865-9dc7-ab3befdfbef7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A function $f$ from $A$ to $B$ is injective if $x = y$ whenever $f(x) = f(y)$. An injective function is also called one-to-one or an injection; it is the same as a monomorphism in the category of sets.
A bijection is a function that is both injective and surjective.
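As a down-to-earth illustration (a sketch in Java, with hypothetical names, for functions on a finite domain): injectivity fails exactly when some output repeats.

Code java:
import java.util.HashSet;
import java.util.Set;
import java.util.function.IntUnaryOperator;

public class Injectivity {
    // True if f restricted to {0, ..., n-1} is injective,
    // i.e. no two distinct inputs share an output.
    static boolean isInjective(IntUnaryOperator f, int n) {
        Set<Integer> seen = new HashSet<>();
        for (int x = 0; x < n; x++) {
            if (!seen.add(f.applyAsInt(x))) return false; // repeated output
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isInjective(x -> 2 * x, 10));       // true
        System.out.println(isInjective(x -> x * x % 10, 10));  // false: 2 and 8 both map to 4
    }
}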
In constructive mathematics, a strongly extensional function between sets equipped with tight apartness relations is called strongly injective if $f(x) \ne f(y)$ whenever $x \ne y$ (which implies that the function is injective). This is the same as a regular monomorphism in the category of such sets and strongly extensional functions (while any merely injective function, if strongly extensional, is still a monomorphism). Some authors use ‘one-to-one’ for an injective function as defined above and reserve ‘injective’ for the stronger notion.
|
{"url":"http://www.ncatlab.org/nlab/show/injection","timestamp":"2014-04-18T05:31:53Z","content_type":null,"content_length":"18531","record_id":"<urn:uuid:6ac50eb5-af57-4d9b-aef3-1d32012d8ac2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Conductor Size & Voltage Drop Calculators
for Single and Three Phase AC
Available Cable Calculators
1ø V[D] Single Phase Voltage Drop
1ø CS Single Phase Conductor Size
3ø V[D] Three Phase Voltage Drop
3ø CS Three Phase Conductor Size
<< Cable Calculator Home
Three Phase Conductor Size
About the formula | Use the calculator | Use the walkthrough
About the Conductor Size Formula
In the following formula, we will use the letter "K" to represent Specific Resistance, which has a value of 10.8 when using copper conductors. "I" represents Amperage. "L" represents the Length of the run. "CMA" is the Circular Mil Area, or the cross section of the conductor measured in mils (.001 inch). We multiply K by the square root of 3 in three phase. For single phase, we would multiply K by 2.
The three-phase formula is: CMA = (K × √3 × I × L) / VD, where K × √3 ≈ 18.706 for copper.
Please note that this formula is for three-phase (3ø) rigs only; the single-phase formula is covered separately.
In a power distribution system, the Voltage Drop (VD) is the portion of the voltage lost during a particular run. It is important to know the VD of a run in order to ensure that we stay inside the
Allowable Voltage Drop (AVD), as defined by the National Electric Code (NEC).
The AVD is 3% for a Main Circuit (aka: Feeder Circuit) and 2% for a Branch Circuit for a total of 5%. If your VD is greater than the AVD for the circuit type in question, you are not in compliance
with the NEC standard. You will need to lower your VD, until you are compliant.
The NEC defines the following AVDs:
Voltage:               120v    208v    240v    480v
Feeder Circuit AVD:    3.6     6.24    7.2     14.4
Branch Circuit AVD:    2.4     4.16    4.8     9.6
Please note: Calculations from this page are set up as guidelines. You still have to make the decision as to how much cable to lay down in any rig. If you have any questions, contact us.
Conductor Size Calculator
Total CMA = (18.706 × Amps (I) per Leg × Length (L) of run in feet) / Allowable Voltage Drop
Enter the AVD directly or select a Circuit Type and Voltage. The result is matched against the standard line sizes:
Line Size    CMA
4/0          211,600
2/0          133,100
#2           66,360
#4           41,740
#6           26,240
#12          6,530
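The same computation is easy to script. Here is a minimal Java sketch (an illustration using the constants and gauge table above, not the site's actual code):

Code java:
public class ConductorSize {
    // Standard line sizes and their circular mil areas, from the table above
    static final String[] SIZES = {"#12", "#6", "#4", "#2", "2/0", "4/0"};
    static final double[] CMAS  = {6530, 26240, 41740, 66360, 133100, 211600};

    // Three-phase: CMA = (sqrt(3) * K * I * L) / AVD, with K = 10.8 for copper
    static String pickConductor(double amps, double lengthFt, double avd) {
        double cma = 18.706 * amps * lengthFt / avd;
        for (int i = 0; i < SIZES.length; i++) {
            if (CMAS[i] >= cma) return SIZES[i]; // smallest size that qualifies
        }
        return "larger than 4/0: shorten the run or parallel the conductors";
    }

    public static void main(String[] args) {
        // Example: 100 A per leg, 150 ft run, 208 V feeder circuit (AVD = 6.24)
        System.out.println(pickConductor(100, 150, 6.24)); // prints #2
    }
}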
Calculator Walkthrough
This section will walk you through the calculator while filling in the values for the calculator above. For a detailed explanation of this formula, please see the Formula Explanation.
Note that you can calculate the Total CMA by skipping directly to step 5 below.
1. Are we looking at a Main Run (Feeder) or Branch circuit?
2. What Voltage are we using?
3. What is our AVD?
(May be autocalculated from steps 1 and 2 above)
4. How many Amps (I) does the circuit draw (per leg)?
5. What is the Length (L) of the circuit in feet?
6. Answer: press "Calculate" below.
|
{"url":"http://cablecalculator.com/conductor_size_three_phase.php","timestamp":"2014-04-19T04:54:52Z","content_type":null,"content_length":"23238","record_id":"<urn:uuid:00ec5af2-e156-4ebc-9341-7de455a326df>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Vibonacci Numbers
The Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, . . . are one of the royal families of mathematics. Like other royalty, they have an ancient pedigree, everybody knows all about them, and no one takes
them very seriously anymore. The Fibonaccis and their relations have been thoroughly studied for centuries, so no one would expect much in the way of novelty or innovation to turn up among them.
Nevertheless, a whole new branch of the family has just sprouted up. What's more, the new cousins are a highly erratic bunch—sports in the royal lineage.
The pattern in the sequence of Fibonacci numbers is easy to see: Each term (except for the first two) is the sum of the two preceding terms. Expressed as a formula: f(n)=f(n–2)+f(n–1). The new
variation on the series changes this formula in only one detail. Instead of always adding two terms to produce the next term, you either add or subtract, depending on the flip of a coin at each stage
in the calculation. If the coin comes up heads, say, you add as usual, but if the result is tails, you subtract f(n–1) from f(n–2). In other words, the formula becomes f(n)=f(n–2)±f(n–1), where the
symbol "±" signifies that you choose either addition or subtraction randomly and with equal probability.
Here are a few short sequences generated by this random sum-or-difference algorithm:
1, 1, 0, 1, –1, 2, –3, –1, –2, 1, –3, 4, 1, 3, 4, 7, 11
1, 1, 2, –1, 3, 2, 1, 1, 0, 1, –1, 0, –1, 1, –2, –1, –1
1, 1, 2, –1, 3, 2, 1, 3, –2, 1, –3, –2, –5, 3, –8, –5, –13
1, 1, 0, 1, –1, 0, –1, 1, –2, 3, 1, 2, –1, 3, –4, 7, –11
1, 1, 0, 1, 1, 2, 3, 5, –2, 7, 5, 12, –7, 19, –26, 45, 19
Certain properties of the Fibonacci sequence continue to show through here, most notably the alternation of two odds and an even in all the series. But the steady growth of the Fibonacci numbers is
replaced by fluctuations of increasing amplitude. The way the numbers seem to vibrate between negative and positive values leads me to suggest the name Vibonacci series; I shall designate them by the
symbol v(n).
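A few lines of code suffice to experiment with the series. Here is a minimal Java sketch (my illustration, not from the original article) that generates one random run and estimates the growth rate as the nth root of |v(n)|:

Code java:
import java.util.Random;

public class VibonacciRun {
    public static void main(String[] args) {
        Random rng = new Random();
        int n = 5000;                  // doubles overflow near n = 5700; stay below
        double prev = 1.0, cur = 1.0;  // v(1), v(2)
        for (int i = 3; i <= n; i++) {
            // v(n) = v(n-2) + v(n-1) or v(n-2) - v(n-1), chosen by a fair coin
            double next = rng.nextBoolean() ? prev + cur : prev - cur;
            prev = cur;
            cur = next;
        }
        // Growth-rate estimate: the nth root of |v(n)|
        // (on the rare run where v(n) = 0 the estimate degenerates; just rerun)
        double c = Math.exp(Math.log(Math.abs(cur)) / n);
        System.out.println("estimated C = " + c); // typically near 1.13
    }
}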
Looking at the fluctuations, you might conclude that the randomness in the formula has wiped out all traces of order. And it is certainly true that the Vibonacci sequence is nondeterministic. Unlike
the conventional Fibonacci numbers, where the nth term has a single, definite value that needs to be calculated only once, the nth term of a Vibonacci series has a distribution of possible values; v
(n) is likely to be different every time you compute it. Nevertheless, a great deal of order persists in the randomized sequences. In particular, the absolute value of v(n) grows exponentially as n
increases. You can measure the growth rate in simple computer experiments. What's more remarkable, the value of the number that determines the growth rate has been pinned down in a mathematical
Breeding Like Rabbits
The Fibonacci numbers were introduced to the world 800 years ago by Leonardo of Pisa, who also made another major contribution to mathematics: It was he who brought Indo-Arabic numerals into European
culture. "Fibonacci" was apparently Leonardo's nickname, a shortening of Filius Bonacci, or son of Bonacci. Of course Leonardo did not call his sequence the Fibonacci numbers; the name was
popularized by the 19th-century French mathematician Edouard Lucas.
The whole matter began with a contrived problem about the breeding of rabbits. Suppose a pair of rabbits breeds once a month and always produces a single pair of offspring, which breeds the following
month. Each pair breeds twice and then dies. Starting with a single pair of rabbits under these assumptions, how many pairs will be living after n months? The answer is f(n).
Leonardo's fanciful problem is not of any great interest to population biologists, but the Fibonacci numbers are famous for turning up in many other contexts. Their patterns appear in seashells and
sunflowers and pinecones; they count the ways of tiling a checkerboard with dominos; they are present in Pascal's Triangle if you know where to look for them. The Fibonacci numbers also played an
essential role in the solution to Hilbert's Tenth Problem. They even have their own journal, the Fibonacci Quarterly, published since 1963.
One of the most important properties of the Fibonacci numbers was first noted in the 17th century by Johannes Kepler: The ratios of successive Fibonacci numbers, f(n–1)/f(n–2), form a new sequence,
beginning 1, 2, 1.5, 1.666, . . . , 1.6, 1.625. This series converges on a value of 1.618033989 . . . , known as Φ, or the golden ratio. Over the years the golden ratio has acquired something of a
cult following, which I would not want to encourage, and yet it truly is a remarkable number. It has the curious property that if you take its inverse (that is, 1/Φ) and then add 1, you recover the
original number; in other words Φ is a solution to the equation 1/x=x–1. This equation can be rearranged as x^2–x–1=0, for which the quadratic formula gives the solutions (1+√5)/2 and (1–√5)/2. The
first of these numbers is Φ; the second is 1–Φ, or –0.618033989....
Powers of Φ provide close approximations to the Fibonacci numbers. Since Φ is the limiting value of the ratio between successive Fibonacci numbers, multiplying any member of the series by Φ yields an approximation to the next member. Looking at the approximation process through the other end of the telescope, the nth root of the nth Fibonacci number approximates Φ. A more complex formula, (Φ^n – (1–Φ)^n)/√5, gives the exact value of f(n); this strange congeries of irrationals always reduces to an integer, as long as n itself is an integer.
The Fibonacci theme has many variations. Edouard Lucas, the mathematician who named the Fibonacci numbers, has a related series named after him: The Lucas numbers are defined by the same recurrence
formula but start with the integers 2, 1 instead of 1, 1. The first few members of the Lucas series are 2, 1, 3, 4, 7, 11, 18, 29, 47, 76. Remarkably, the ratio of successive Lucas numbers also
converges to Φ; indeed, it turns out that you can start a Fibonacci-like series with any pair of numbers, and the limit of the growth rate will always be Φ.
Another variation has come to be known as the Tribonacci series. Instead of adding the previous two terms at each step, you add the previous three. A convenient way to get such a sequence started is
to pretend that the initial 1 is preceded by an indefinite list of 0s. With this convention, the Tribonacci sequence begins 1, 1, 2, 4, 7, 13, 24, 44, 81, 149. In this case the growth rate is not Φ;
instead the ratio of successive terms approaches 1.83929 . . . . The analogous Tetrabonacci series, where each term is the sum of the previous four, begins 1, 1, 2, 4, 8, 15, 29, 56, 108, and the ratio
of terms converges to 1.92756 . . . . Clearly this process can be generalized to k-bonacci series. You might want to think about what happens in the limiting case where each term is the sum of all
the previous terms. (The answer is given below.)
Rabbit Cannibalism
If the Fibonacci series describes rabbit breeding, what does the Vibonacci series describe? A cute answer might be the breeding of cannibalistic rabbits—animals that sometimes reproduce normally but
at other times consume their own young or their own parents. But the inventor of the series had nothing so whimsical in mind. He was working on a problem in numerical analysis, the branch of
mathematics concerned with large-scale computations.
The inventor is Divakar Viswanath, a young mathematician and computer scientist who earned his Ph.D. last year at Cornell University. He has spent the past academic year at the Mathematical Sciences
Research Institute in Berkeley; in the fall he will take up a position in the departments of mathematics and computer science at the University of Chicago. His paper on random Fibonacci sequences
will be published in Mathematics of Computation.
The most symmetric version of the Vibonacci series assigns randomly chosen signs to both of the preceding terms in the sequence; that is, the recurrence formula is v(n)=±v(n–2)±v(n–1). But if you are
interested mainly in the absolute value of each term—ignoring the sign—the version with just a single random sign yields the same result. The main question in the study of the series is how fast the
absolute value of v(n) grows as n increases. In other words, the aim is to find for the Vibonacci numbers a constant C that plays the same role as Φ does for the Fibonacci numbers. This hypothetical
growth rate C is defined as the nth root of |v(n)|, where the notation |x| signifies the absolute value of x.
It is not immediately obvious that |v(n)| should be expected to grow at all in the long run. With equal numbers of random additions and subtractions, you might guess that the series would hover
around some fixed average, so that the nth root of |v(n)| would converge to a value of 1, signifying no growth. Or, conversely, the Vibonacci numbers might bounce around so chaotically that the
growth rate would never converge on any stable value; the limit of the nth root of |v(n)| might simply not exist.
The question of whether a limiting growth rate exists was settled 40 years ago by Harry Furstenberg and Harry Kesten, then both at Princeton University. For a broad class of random processes,
including the one I'm calling the Vibonacci series, they showed that a limit does exist, given a few mild assumptions. Three years later Furstenberg proved that the growth rate is "almost surely"
greater than 1. The "almost surely" disclaimer is needed because of the probabilistic nature of the system. Some sequences do fail to grow, and you cannot absolutely exclude the possibility of
stumbling onto one. For example, a carefully chosen pattern of alternating plus and minus signs generates the cyclic Vibonacci series 1, 1, 0, 1, 1, 0, . . . . But such exceptional sequences are
unlikely; indeed, in the limit as n goes to infinity, they have probability zero. They exist, but you have no chance of ever finding them. Thus Furstenberg's "almost surely" result is actually quite
a strong statement. It implies not merely that almost all Vibonacci sequences grow but that any individual sequence grows with probability 1, if it is allowed to continue long enough.
Unfortunately, apart from showing that C must exist and be greater than 1, Furstenberg's theorem gives no information about the magnitude of C. The value 1 is merely a lower bound. There is a
complementary upper bound: The value of C cannot be greater than 1.618 . . ., since that is the growth rate of the ordinary Fibonacci series, with all additions and no subtractions.
Numerical experiments provide estimates of C. For small n, it's easy to enumerate all possible n-step Vibonacci seqences and calculate their growth rates by taking the nth root of the final term.
Since each of these series is equally likely, the arithmetic average is an estimate of C. For example, there are four Vibonacci series for n = 4, namely 1, 1, 0, –1; 1, 1, 0, 1; 1, 1, 2, 1; and 1, 1,
2, 3. Thus the final terms are –1, 1, 1, 3, and the average of the fourth roots of their absolute values is about 1.08. For n=20 the corresponding estimate of C is about 1.18. But tracing out the
entire tree of Vibonacci sequences becomes impractical for large n. At n=20 there are already half a million branches to be tabulated.
Random sampling gives approximations to C for much larger values of n. Figure 1 shows the outcome of a single computer run generating Vibonacci numbers up to v(10^6). There is no question that this
particular sequence is growing exponentially; it attains heights of greater than 10^50,000. The value of C calculated from the series exhibits distinctive fluctuations, which appear to diminish in
amplitude as n increases, but which also tend toward longer wavelengths at higher n. Convergence is slow. The growth rate appears to be somewhere near 1.13, but from these data it would be difficult
to estimate C with greater precision.
C = 1.13198824 . . .
Viswanath's approach to determining C is indirect, and indeed it takes a detour through some areas of the mathematical landscape that might seem to have no connection with Fibonacci-like series. But
by transposing the problem into another realm, where more powerful tools can be brought to bear, he is able not merely to produce an empirical estimate of C but to prove that the value of the
constant must lie in a certain narrow interval. The proof is not a simple one, and here I shall present only a hasty sketch; the details are in Viswanath's paper.
The first step is to recast the Vibonacci process in terms of matrices. The ordinary Fibonacci series can be defined by the matrix equation:
    | 0 1 |   | f(n–2) |   | f(n–1) |
    | 1 1 | × | f(n–1) | = | f(n)   |
Applying the rules for matrix multiplication confirms that the equation has the expected behavior. In the product on the left side of the equation the first row is given by the sum 0 x f(n–2)+1 x f
(n–1), which of course is just f(n–1); the second row of the product is 1 x f(n–2)+1 x f(n–1), which is the definition of f(n).
Repeating the matrix multiplication generates successive terms of the Fibonacci sequence. For example, given the initial terms 1, 1, three matrix multiplications produce the terms 3, 5:
    | 0 1 |^3   | 1 |   | 3 |
    | 1 1 |   × | 1 | = | 5 |
Continuing in the same way generates every Fibonacci number in turn, and only those numbers.
The Vibonacci sequence can also be redefined as a matrix product, the only difference being that there are two matrices to choose from:
    | 0 1 |       | 0  1 |
    | 1 1 |  and  | 1 –1 |
At each step in generating the series, one of these matrices is selected at random with probability 1/2. Whenever the second matrix happens to be picked, the minus sign in the lower right corner has
the effect of subtracting v(n–1) from v(n–2), rather than adding the two terms, just as in the more direct definition of the Vibonacci series.
Why bother with this elaborate reformulation of the problem if it merely reproduces the same result? Because the study of products of random matrices offers a handy toolkit of useful methods. It
allows the Vibonacci process to be viewed in a geometrical context. Suppose that any two adjacent Vibonacci numbers, v(n–1) and v(n–2), represent the coordinates of a point in the x,y plane. Drawing
a line from the origin at 0,0 to this point defines a direction in the plane, specified by an angle θ or a slope m. The slope is simply y/x, so it is given directly by the coordinates of the point.
Now multiply the pair of coordinates by one of the 2 x 2 Vibonacci matrices. What happens to the point? It is mapped into a new point—with coordinates v(n) and v(n–1)—which defines a new direction
from the origin. Specifically, multiplying by one of the Vibonacci matrices causes a rotation to a new slope of either (1+1/m) or (1/m–1).
These transformations are easier to understand through a brief example. Starting with a slope of m=1 (equivalent to an angle of 45 degrees), the m→(1+1/m) transformation yields a new slope of 2
(about 63 degrees); applying the m→(1/m–1) transformation rotates the line to a slope of –1/2 (about 333 degrees). If you continue to iterate this process, always taking the last value of m and
replacing it with a random choice of either (1+1/m) or (1/m–1), you create a kind of random walk through the space of possible slopes. The progression of slopes seems to have nothing in common with
the Vibonacci series, and yet the connection through products of random matrices shows they are really just two manifestations of the same process. (And the connection isn't as mysterious as it might
seem. Note that repeating the transformation m→(1+1/m) yields a series of numbers that converges to Φ for any starting m. Hidden in the mapping m→(1+1/m) is the equation 1/x=x–1 that defines Φ.)
The slope m in these formulas can take on any value along the real number line, from negative infinity to positive infinity, but not all slopes are equally likely. The key to understanding the random
walk—and also to calculating the growth rate of the Vibonacci series—is identifying the probability distribution that determines the likelihood of every possible slope. Viswanath's main contribution
was finding a way to estimate this distribution to any desired degree of accuracy.
The probability distribution is a peculiar one—not at all like the smooth Gaussian curve that describes so many random processes. Instead it has multiple spiky peaks and deep canyons, and if you look
closer, you find that the spikes have spicules, and the canyons are creased by smaller canyons. Thus the distribution appears to be a fractal landscape that cannot be described by any continuous
function. Viswanath sidestepped this problem by constructing an ingenious discrete partitioning of the real number line. The structure is called the Stern-Brocot tree, after the mathematician Moritz Stern and the watchmaker Achille Brocot, who discovered it more than a century before Viswanath did.
The basic idea of the tree is to divide the set of real numbers into progressively finer—but not necessarily equal—intervals. Writing zero as 0/1 and representing infinity by the notation 1/0, all
positive real numbers lie in the interval [0/1, 1/0]. This half of the number line is broken down into the subintervals [0/1, 1/1] and [1/1, 1/0], extending from zero to 1 and from 1 to infinity
respectively. The left-hand subnode then splits into [0/1, 1/2] and [1/2, 1/1], and the right-hand subnode into [1/1, 2/1] and [2/1, 1/0]. The general rule is that an interval [a/b, c/d] splits into
[a/b, (a+c)/(b+d)] and [(a+c)/(b+d), c/d]. All the numbers in the negative half of the real line are classified in the same way, and the two halves are joined by a special root node labeled [–1/0, 1/0].
The Stern-Brocot tree has a place in this story because there is a direct correspondence between the random walk among slopes m described above and paths traced out through the branches of the tree.
Any such path can be written as a sequence of left and right commands, giving directions for how to get from the root of the tree to a specific interior node. For example, from the [0/1, 1/0] node, a path of right, left, left brings you to the [1/1, 3/2] node. Suppose the current value of the slope m lies somewhere within this interval. After a transition from m to 1/m, the new slope must lie in the [2/3, 1/1] node, which is the mirror image of the original node; to reach it, you follow the opposite sequence of turns, left, right, right. Mapping the transformations m → (1 + 1/m) and m → (1/m – 1) into tree paths is only a little more complicated. Through this mapping, the intervals of the Stern-Brocot tree yield up an approximation to the probability distribution of the Vibonacci slopes.
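The splitting rule is easy to turn into code. Here is a minimal Java sketch (an illustration under the conventions above, identifying a node by the mediant of its interval) that descends the tree to find the path to a given positive rational:

Code java:
public class SternBrocot {
    // Returns the left/right path from the root interval [0/1, 1/0]
    // to the node whose mediant equals p/q (p, q positive).
    static String pathTo(long p, long q) {
        long a = 0, b = 1, c = 1, d = 0;   // current interval [a/b, c/d]
        StringBuilder path = new StringBuilder();
        while (true) {
            long mn = a + c, md = b + d;   // mediant (a+c)/(b+d) splits the interval
            long cmp = p * md - q * mn;    // sign of p/q minus the mediant
            if (cmp == 0) return path.toString();
            if (cmp < 0) { c = mn; d = md; path.append('L'); } // descend left
            else         { a = mn; b = md; path.append('R'); } // descend right
        }
    }

    public static void main(String[] args) {
        System.out.println(pathTo(3, 2)); // "RL": 3/2 is the mediant of [1/1, 2/1]
    }
}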
Viswanath tabulated all paths through the Stern-Brocot tree to a depth of 28 levels, where the tree has more than 50 million nodes. In this way he was able to calculate the value of C to eight
decimal places. His result is C = 1.13198824 . . . . It has the same "almost surely" status as Furstenberg's proof; that is, every instance of the Vibonacci series will grow at this rate with
probability 1 if it is continued long enough.
An unusual feature of Viswanath's proof is that it relies on floating-point computer arithmetic. Computer-aided proofs are no longer a novelty in mathematics, but few of them adopt floating-point
methods because the results are not exact. Viswanath includes an error analysis showing that any arithmetic inaccuracies are much smaller than the uncertainty introduced by truncating the
Stern-Brocot tree at a finite depth.
Variations and Generalizations
All the well-known variations on the Fibonacci series also have analogues in the randomized world of the Vibonacci numbers.
It was noted above that the Fibonacci recurrence has the same growth rate starting from any two initial terms. Does the Vibonacci series share this property? Numerical experiments quickly affirm that
it does.
A randomized variant of the Tribonacci series defines each term as the sum of the previous three terms with randomly chosen signs. Not surprisingly, the absolute value of the three-term series grows
faster than that of the two-term Vibonacci recurrence. Experiments suggest that the random Tribonacci growth constant is about 1.22. For the random-sign analogue of the Tetrabonacci series, with four
terms included in each sum, the growth rate is roughly 1.27. It appears that the growth rate continues increasing slowly as more terms are included.
What happens in the limiting case, where all earlier terms are gathered up into the sum? In the pure Fibonacci version, without randomness, the first few terms of this series are 1, 1, 2, 4, 8, 16,
32, . . . . These are the successive powers of 2, and so the exponential growth rate is exactly 2. The random analogue of this series is actually the problem that got Viswanath started on his
investigation of the Vibonacci numbers. Working with Lloyd N. Trefethen, his dissertation supervisor at Cornell, he was studying the series r(n)=±r(1)±r(2)± . . . ±r(n–1), where each term is
calculated by giving random signs to all the preceding terms before taking the sum. Finding the asymptotic growth rate of this sequence would settle an outstanding question in the theory of random
matrices, proposed by Trefethen. Viswanath was unable to find a rigorous solution and so turned to the two-term Vibonacci sum as a simpler model. Numerical experiments suggest that the growth rate
for the random sum of all terms is about 1.32.
Another way of generalizing the Vibonacci process is to consider what happens when the probabilities of choosing plus or minus are not equal. Intuition suggests that any bias in the probabilities,
favoring either plus or minus, ought to cause faster growth in the absolute value of v(n). In other words, the sequence with equal probabilities should have the minimum growth rate, with C increasing
toward Φ in the extreme cases of all-plus or all-minus sequences. Experiments support these inferences, but the exact behavior of the series with skewed probabilities is unknown. The Stern-Brocot
proof works only when plus and minus are chosen with equal probability.
Perhaps the most interesting Vibonacci variation has been studied by Mark Embree and Trefethen, who is now at the University of Oxford. Instead of skewing the probabilities, they scale one of
the two terms in the random sum by an adjustable factor, which they denote β. That is, the recurrence relation is v(n)=v(n–1)±βv(n–2). If you try a value of β=1/2 in this equation, you will
immediately see a dramatic change. The sequence no longer grows exponentially; on the contrary, it rapidly dwindles away. In other words, the exponential growth rate is less than 1; numerical
evidence gives a value of about 0.929.
Setting β = 1 recovers the original Vibonacci sequence, where of course the growth rate is known to be 1.13198824 . . . . If the series decays for β = 1/2 and grows for β = 1, there must be some
intermediate value of β where it is "neutrally buoyant," neither rising nor falling on average. Embree and Trefethen have searched for this point of equilibrium, designated β*, and they find the
closest approximation at β*=0.70258.... Computer runs at this setting develop large and erratic fluctuations, but they seem not to veer off into unbounded growth or decay.
Searching for the value of β that minimizes the growth rate (or in other words maximizes the decay rate) turned up more surprises. The minimum cannot be at β=0, because that is another neutral point,
where v(n)=1 for all n. Embree and Trefethen found the minimum at β=0.36747.... In the course of the search they discovered that the curve recording the variation of C as a function of β is not a
smooth one. The dips and humps in the curve appear to have a fractal structure, similar to itself at all scales of magnification.
A final question: What is the meaning of numbers such as C = 1.13198824... and β* = 0.70258...? Where do they come from? The number Φ is given by a simple analytic expression, (1+√5)/2. Is there any
similar formula for C or for β*? Probably not. Embree and Trefethen point out that C is 0.4 percent greater than the fourth root of Φ; it is even closer to four-fifths of √2, but these numerical
coincidences are surely meaningless. Numbers are so plentiful that you can always find relations among them if you look hard enough, but C and β* will probably have to stand on their own as new
constants of nature, or of mathematics. Embree and Trefethen suggest calling 1.13198824... Viswanath's constant.
© Brian Hayes
|
{"url":"http://www.americanscientist.org/issues/id.3340,y.0,no.,content.true,page.2,css.print/issue.aspx","timestamp":"2014-04-17T21:25:37Z","content_type":null,"content_length":"134737","record_id":"<urn:uuid:8857ef9e-9828-4eb5-afdb-52446571565c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Richland Hills ACT Tutor
Find a North Richland Hills ACT Tutor
...I achieved a 5 on the AP Calculus BC exam and a 4 on the AP Statistics exam. During my time in college, I have completed various courses in math including Multivariable Calculus, Linear
Algebra, Differential Equations, Theoretical Concepts of Calculus, Abstract Algebra, Mathematical Analysis, Pr...
7 Subjects: including ACT Math, algebra 1, algebra 2, SAT math
...I understand more than anyone that science is not the easiest field to study especially when it is new. Please do contact me if you are struggling in school or just need some extra help. I will
be more than happy to give you or your child the studying help you need.
28 Subjects: including ACT Math, English, physics, biology
...I have had several students increase their average by one to two letter grades with tutoring. I have over ten years experience teaching high school Geometry. I have helped many student increase
their average by more than one letter grade.
15 Subjects: including ACT Math, chemistry, calculus, geometry
...Every student is a different person and learns in a different way, whether its visual, auditory, or kinesthetic. I ask students how THEY want/prefer to learn and work from there. I like making
science relevant to students' daily lives and prefer to have discussions about science rather than a m...
28 Subjects: including ACT Math, reading, chemistry, English
...I am certified to teach in math. I have taught for seven years. I have also taught at elementary and junior high schools.
37 Subjects: including ACT Math, Spanish, calculus, reading
|
{"url":"http://www.purplemath.com/north_richland_hills_act_tutors.php","timestamp":"2014-04-16T21:59:18Z","content_type":null,"content_length":"23951","record_id":"<urn:uuid:0842eab3-0d01-4d8e-a1b3-f58b33db71f6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Testing Hypothesis - How do we do this?
Suppose you are considering two normal populations with unknown means μ1 and μ2 respectively. Variances are known to be σ1² and σ2² respectively. Show how you would test the hypothesis H0 : μ1 − μ2 = 0 against the two-sided alternative.
Re: Testing Hypothesis - How do we do this?
Hey homalina.
Hint: what is the distribution of (X_bar - Y_bar) - (mu1 - mu2) going to be? You already know the variances and that they come from a normal distribution, and any linear combination of normal
distributions gives back a normal.
With this information, what is the distribution of (X_bar - Y_Bar) and how do you standardize it? (Another hint: you assume mu1 - mu2 = 0).
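Worked out along the hint (a standard sketch, assuming independent samples of sizes n1 and n2):

$$\bar X - \bar Y \sim N\!\left(\mu_1 - \mu_2,\ \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\right), \qquad Z = \frac{\bar X - \bar Y}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} \sim N(0,1) \ \text{under } H_0,$$

and one rejects $H_0$ at level $\alpha$ when $|Z| > z_{\alpha/2}$.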
|
{"url":"http://mathhelpforum.com/advanced-statistics/202953-testing-hypothesis-how-do-we-do.html","timestamp":"2014-04-18T23:57:12Z","content_type":null,"content_length":"31889","record_id":"<urn:uuid:c9d8464a-a9ce-4bd8-95b4-532dfeba6f52>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dynamic Earth - Dating rocks
Dating rocks relies on radioactive decay. The underlying principle is that the probability of an individual radioactive atom breaking down (to create a daughter atom) is constant. Different radioactive decay systems have different probabilities, and these are expressed as their DECAY CONSTANT. For a given parent-to-daughter decay system (e.g. potassium-40 decays to argon-40) and its unique decay constant, the number of daughter atoms created depends on the amount of time elapsed and the original number of parent atoms. This can be tracked graphically.
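In symbols (a standard sketch of the decay arithmetic, with N parent atoms, D daughter atoms, and decay constant λ):

$$N(t) = N_0 e^{-\lambda t}, \qquad D(t) = N_0 - N(t) = N(t)\,(e^{\lambda t} - 1), \qquad t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{N}\right).$$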
In practice the determination of ages uses ratios between different isotopes, measured with great precision in modern mass spectrometers. The results can be interpreted graphically on something called an isochron plot. [Figure: isochron plots for the rubidium-strontium system applied to old rocks from Greenland and to chondritic meteorites.]
In practice great care is necessary in applying isotopic methods to date rocks. A key assumption is that a sample has remained a closed system, so that the number of parent and daughter atoms can be fully audited; diffusion of isotopes into or out of minerals can violate this. Note, however, that these problems also work to our advantage. We can use the leaky nature of rocks and minerals to isotopic diffusion to estimate the cooling history of rocks - which is very important in tracking the passage of rocks to the surface as their overburden is eroded.
|
{"url":"http://www.see.leeds.ac.uk/structure/dynamicearth/dating/index.htm","timestamp":"2014-04-16T16:32:06Z","content_type":null,"content_length":"3630","record_id":"<urn:uuid:327e7a61-a669-4627-bcb0-9c683255f787>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RSA encryption based from a Sample Alice and Bob's key.
September 21st, 2011, 07:30 AM
RSA encryption based from a Sample Alice and Bob's key.
I followed the step-by-step procedure for encrypting and deciphering a message, based on the basic RSA steps from this article: The RSA public key cryptographic system (in Technology > Encryption @ iusmentis.com)
Code :
public class RSA {
    public static void main(String[] args) {
        int p = 5,
            q = 11,
            e = 7,
            d = 23;
        int encrypted = encrypt(2, e, p, q);
        System.out.println("Encrypted :" + encrypted);
        int deciphered = decipher(encrypted, d, p, q);
        System.out.println("Deciphered" + deciphered);
    }

    public static int encrypt(int sampleChar, int e, int p, int q) {
        // c = m^e mod pq, computed with floating-point pow
        return (int) (Math.pow(sampleChar, e) % (p * q));
    }

    public static int decipher(int encryptedSampleChar, int d, int p, int q) {
        // m = c^d mod pq, computed the same way
        return (int) (Math.pow(encryptedSampleChar, d) % (p * q));
    }
}
an original message, which is 2, is encrypted by [2 raised to e, modulo (p * q)], resulting in 18
but in the stage of deciphering, [18 raised to d, modulo (p * q)] results in 37, instead of 2
I directly assigned 23 as the value of d, to lessen some code, because it's already given in the article sample
I just don't get it right; why do I get 37 instead of 2 when I try to revert the process (deciphering)?
September 21st, 2011, 09:58 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
int encrypted = encrypt(2, e, p, q); // here value is 2
int encrypted = encrypt('2', e, p, q); // here it is '2' or 50
September 21st, 2011, 07:04 PM
Re: RSA encryption based from a Sample Alice and Bob's key.
Let me edit my post. I'm sorry, I put single quotes around the 2.
September 21st, 2011, 07:09 PM
Re: RSA encryption based from a Sample Alice and Bob's key.
I just don't get it right. I'm following the formula, but I end up with a confusing value.
September 21st, 2011, 07:23 PM
Re: RSA encryption based from a Sample Alice and Bob's key.
That's the way it goes sometimes.
Double check your code.
September 21st, 2011, 07:36 PM
Re: RSA encryption based from a Sample Alice and Bob's key.
I did everything I could. I really don't get the math :(
September 21st, 2011, 07:43 PM
Re: RSA encryption based from a Sample Alice and Bob's key.
the formula to encrypt a message is
m raised to e, mod pq
where <m> is the original message (character)
the formula to decipher is
c raised to d, mod pq
where <c> is the encrypted message (character)
I followed these steps. I'm really stuck :(
September 21st, 2011, 07:55 PM
Re: RSA encryption based from a Sample Alice and Bob's key.
You'll have to find a math guy for help on this one.
September 22nd, 2011, 06:36 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
You'll have to find a math guy for help on this one
So basically, there's no easy way for this one? :( I just really want to know how to get the equation right...
September 22nd, 2011, 06:50 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
18^23 mod 55 = 18^1 *18^2 * 18^4 * 18^16 mod 55 = 18*49*36*26 mod 55 = 825552 = 15010*55 + 2 = 2 mod 55.
I followed this formula, but I get the value 37 instead of 2 from 18^23 mod 55
September 22nd, 2011, 07:21 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
So the website is wrong.
September 22nd, 2011, 07:21 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
I read your link and it seems the person who wrote it may not have been paying attention during the last example:
18^23 mod 55 = 18^1 * 18^2 * 18^4 * 18^16 mod 55 = 18*49*36*26 mod 55 = 825552
Does that look right to you?
How does your code work if you use the values in the WP article?
RSA - Wikipedia, the free encyclopedia
edit: D'oh! Too slow again
September 22nd, 2011, 07:25 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Something that you might want to bear in mind is that you might be using quite large numbers and a technique that depends on integer values with Math.pow(). You might get lucky as long as your
numbers are very small, but still...
September 22nd, 2011, 07:32 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Very good Sean. Using BigDecimal does it.
September 22nd, 2011, 07:36 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Using BigDecimal does it.
There's one slightly even better choice for large integers!
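For the record, that better choice is java.math.BigInteger, which even has modular exponentiation built in. A minimal rework of the original program along those lines (a sketch, not posted in the thread; names follow the original code):

import java.math.BigInteger;

public class RSABig {
    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(5 * 11);      // p * q = 55
        BigInteger e = BigInteger.valueOf(7);
        BigInteger d = BigInteger.valueOf(23);
        BigInteger message = BigInteger.valueOf(2);

        // modPow computes the result exactly, without ever materializing 18^23
        BigInteger encrypted = message.modPow(e, n);     // 2^7 mod 55 = 18
        BigInteger deciphered = encrypted.modPow(d, n);  // 18^23 mod 55 = 2
        System.out.println("Encrypted: " + encrypted);
        System.out.println("Deciphered: " + deciphered);
    }
}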
September 22nd, 2011, 07:40 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Yes, now I notice this part. How was 18 raised to 23 divided into 4 parts,
18^1 * 18^2 * 18^4 * 18^16,
i.e. how was the exponent 23 divided into 4 parts? And I was also thinking that I MIGHT be calculating a number so large that it generates an "inexact" value?
September 22nd, 2011, 07:49 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
While reading the WP article, I think I should ask this to get to the bottom of the problem. I somehow resolved my issue by using
18^1 * 18^2 * 18^4 * 18^16 % 55
explicitly. My question is: by what logic is the exponent d, in this case 23, divided into 4 parts?
September 22nd, 2011, 07:49 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
That part is right; it's the next bit that's completely wrong. I only looked at it because I wondered if you were getting integer overflow. 18 is a bit more than 2^4, and (a^b)^c = a^(b*c), so something around 2^92 (4 times 23) will blow an int away completely, and probably a double too.
Code java:
package com.javaprogrammingforums.domyhomework;

public class RSAIntBad {
    public static void main(String[] args) {
        // Compare Math.pow's int-cast result with the exact BigInteger value;
        // they diverge once 18^i no longer fits in the available precision.
        for (int i = 0; i <= 23; i++) {
            System.out.println("18^" + i + " is " + (int) Math.pow(18, i)
                    + " should be " + new java.math.BigInteger("18").pow(i));
        }
    }
}
September 22nd, 2011, 07:51 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
a^(b+c) = a^b * a^c
September 22nd, 2011, 07:54 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Now we're getting into higher math.
September 22nd, 2011, 08:06 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Well yeah, I have two ways to deal with the problem: studying the use of BigDecimal/BigInteger, and trying to logically understand
18^1 * 18^2 * 18^4 * 18^16 via a^(b+c) = a^b * a^c.
Just excuse my slowness with some equations.
September 22nd, 2011, 08:11 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
1+2+4+16 = 23
September 22nd, 2011, 08:13 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
1+2+4+16 = 23
Yes, I do get that one, but my concern is: how am I going to divide the exponent into parts, and into how many parts, if it is, for example, 30 or 17 or 40 or 28, etc.?
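For what it's worth, the parts are just the powers of two in the exponent's binary representation: 23 = 10111 in binary = 16+4+2+1, and likewise 30 = 16+8+4+2, 17 = 16+1, 28 = 16+8+4. A minimal square-and-multiply sketch in Java (illustrative, not from the thread) that does this splitting automatically while reducing mod m at every step:

// Computes (base^exp) mod m without ever building the huge power.
// Safe as long as m*m fits in a long; BigInteger handles the general case.
public static long modPow(long base, long exp, long m) {
    long result = 1;
    long square = base % m;        // base^1, then base^2, base^4, base^8, ...
    while (exp > 0) {
        if ((exp & 1) == 1) {      // this power of two appears in exp's binary form
            result = (result * square) % m;
        }
        square = (square * square) % m;
        exp >>= 1;                 // move to the next binary digit
    }
    return result;
}

Calling modPow(18, 23, 55) multiplies exactly the factors 18^1, 18^2, 18^4 and 18^16 (mod 55) and returns 2.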
September 22nd, 2011, 08:24 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
The maths is quite useless to you as a programmer. I keep meaning to write a Number class that is based on factorisation (so it never actually does any maths, just remembers lists of factors until you explicitly ask for a typed result), which would take advantage of this sort of thing, but it would be an intellectual curiosity.
<phone rings>you got the job, pending references checking out</phone rings> Woohoo! Rain in the desert! I'm not wet yet, but I am ready to drink!
Back to the plot. The only reason it's interesting right now is that the author of that page seems to have mangled his maths and somehow came up with the answer that was required to make the
example work, when as you have observed, it does not.
I'm not sure how BigInteger works internally, but it does avoid any issues with breaking down the problem into more manageable pieces by always guaranteeing to be 'big enough'. You're only
looking at a toy example at the moment, but from looking at what RSA does, you'll be computing some meaninglessly large numbers when you use that code in anger. int and double, even though 4
billion and [whatever it is that double offers] sound large, are pitifully small in comparison.
September 22nd, 2011, 08:38 AM
Re: RSA encryption based from a Sample Alice and Bob's key.
Maybe I should end the discussion here. I just noticed that this will be a bottomless pit, out of a curiosity that became an obsession starting from that basic RSA, and I never knew there were classes like BigInteger/BigDecimal that solve what is otherwise an impractical approach with simple integer arithmetic. Thanks again.
Englewood, CO Algebra 2 Tutor
Find an Englewood, CO Algebra 2 Tutor
...I was employed by independent contract in Austin for 4+ years tutoring one-on-one. I just moved to the Denver area. I can do almost any statistics course (social, psych, business, some econ)
and am also good to go on basic math, algebra, pre-calc, and in a desperate situation calculus.
7 Subjects: including algebra 2, statistics, geometry, SAT math
...Quattro, Lotus 1-2-3, and now Excel. I've used Excel on the job extensively for over six years. Pivot tables, vlookups, filtering and sorting, IF statements, the usual.
12 Subjects: including algebra 2, chemistry, precalculus, calculus
...I have experience working with special needs students, as well as being the parent of one. This gives me the ability to understand the frustration a parent and child feel when the lessons don't come easily. Having this experience has also helped me learn to look outside the box for learning methods.
7 Subjects: including algebra 2, reading, geometry, writing
...It was required to complete the Advanced Mathematics classes in the first two years of the college in my major. I did very well in this class. I just tested myself using the midterm exams from
University of Toronto, and I got 90% of the questions correct.
27 Subjects: including algebra 2, calculus, physics, geometry
...When helping students in trigonometry, I prefer to focus on how to derive the various identities rather than memorize. This helps take away some of the test anxiety and fear of forgetting.
When helping prepare students for taking the SAT, I like to spend time discussing test taking strategies in addition to brushing up on math skills.
10 Subjects: including algebra 2, geometry, ASVAB, algebra 1
Arithmetic using doubles
double Nag = 25;
Nag = Nag + Nag * (1/5);
System.out.println(Nag);
Why does Nag equal 25?????
1/5 is integer math resulting in 0, because plain numbers in code (without decimals) are integers. To get doubles, use 1/5.0 to "declare" one of the numbers a double and force double arithmetic rather than integer arithmetic.
... or simply write Nag/5, or if you want to make it look fancy: Nag*0.2. Kind regards, Jos
thank you guys - I was having a lot of trouble with this problem for some time!
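To make the fix concrete, a minimal runnable demo (illustrative; not posted in the thread):

public class IntDivisionDemo {
    public static void main(String[] args) {
        double nag = 25;
        System.out.println(nag + nag * (1 / 5));   // 25.0 -- 1/5 is integer division and yields 0
        System.out.println(nag + nag * (1 / 5.0)); // 30.0 -- 5.0 forces floating-point division
        System.out.println(nag + nag / 5);         // 30.0 -- nag is a double, so / is floating-point
    }
}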
Dictionary of Terms
Absolute value: The absolute value of a number is the distance from the number to zero on the real number line. The absolute value of a is denoted by |a|.
Adjacent side (to an angle): The leg of a right triangle that touches the given acute angle.
Arc length: The measure of an arc is called the arc length.
Arc: An arc is the portion of the circumference of a circle intercepted by a central angle.
Area: A measure of the space bounded by a 2-dimensional object.
Asymptote: An asymptote is a straight line on a graph that a curve approaches, but never meets. Asymptotes are classified as horizontal, vertical or oblique. Oblique means the line is neither
horizontal nor vertical.
Base: The repeated factor in an exponential expression.
Base of a logarithm: This is the value of a in the following definition. If a and n are positive integers, then log[a]n=k if and only if a^k=n.
Base of an exponential expression: This is the value a in the notation a^n; it is the repeated factor.
Binomial: A binomial is a polynomial with exactly two terms. Example: 3x + 2.
Cartesian product: The Cartesian product of two sets A and B is denoted A × B and is the set of ordered pairs where the first coordinate is from A and the second is from B.
Central angle: An angle whose vertex is at the center of a circle.
Change of base formula: log[a]c = log[b]c / log[b]a, where a, b, c > 0 (and a, b ≠ 1).
Circumference: Circumference is the perimeter of a circle.
Coefficient: The coefficient of a term is the constant or numerical portion. Example: In the term 5x^2, the coefficient is 5.
Common denominator: A common denominator of two fractions is a common multiple of the two denominators; the least common denominator is the least common multiple.
Complex number: A complex number is a number of the form a+bi, where a and b are real numbers and i = √(-1).
Composite number: A composite number is a whole number with more than two factors.
Continuous function: A continuous function is one whose graph has no holes and no jumps. The graph can be drawn without lifting the pencil off the paper.
Coordinate: One of the two numbers used to designate a point in the Cartesian graphing system. For example, 3 is the first coordinate of the point (3,5).
Coordinate axes: The two mutually perpendicular axes which split the Cartesian plane into fourths. A point in the plane is given coordinates based on the distance of the point from the axes.
Coterminal angles: These are angles which share a terminal side when in standard position, such as 30° and 390°.
Critical point: The point at which a graph changes direction between up and down.
Degree: In a polynomial function, the degree is the highest non-zero power of the independent variable in the function. For example, the polynomial
f (x)=3x^4-5x^2+7x+10 has degree 4.
Denominator: The denominator is the integer on the bottom of a fraction.
Dependent variable: The dependent variable is the variable whose value depends on the other variable. For example, in the function y = x^2, the y variable is the dependent variable, since its value is determined by the value given to the x variable. (To be more specific, if x = 2, then we know that y = 4. However, if y = 4, then it could be that x = 2 or that x = -2; we can't be sure which.)
Difference: The result of a subtraction problem is called the difference.
Divisible by: Let x and y be two whole numbers, where y does not equal 0. We say y is divisible by x, if x divides y.
Divisor of: Let x and y be two whole numbers, where y does not equal 0. We say x is a divisor of y, if x divides y.
Domain: The domain of a function is the set of values taken by the independent variable.
e: A real number quantity defined by e = lim (1 + 1/n)^n as n → ∞. It is frequently used as the base in functions modeling continuous exponential growth or decay.
Element: An element is a member of a set.
Empty set: The empty set is the set without any elements. It is written as ∅ or { }.
Equation: An equation is two expressions connected by an equals sign. Example: 2x + 1 = 7.
Exponent: A positive integer exponent (like n in an) tells how many times the base appears as a factor.
Exponential function: This is a function of the form f(x)=b(a^x), where a>0 and b is a real number. The quantity a is called the base of the function; b is the initial value.
Expression: An expression is a value that is represented by a combination of numbers and variables using mathematical operations such as addition, multiplication, roots, etc. A single variable or number is also considered an expression. Example: 3x^2 - 5.
Factor: Let x and y be two whole numbers, where y does not equal 0. We say x is a factor of y, if x divides y.
Fraction: A fraction is a ratio of integers, a/b, where b ≠ 0.
Function: A function is a relation which assigns a unique value to the dependent variable (usually the y variable) for each value of the independent variable (usually the x variable). For example, the equation y = x^2 is a function, since it assigns a unique y value to each x value. The equation x = y^2 is not a function, since it assigns two y values to one x value; for instance, when x = 4, y = 2 and y = -2.
Greatest common factor OR Greatest common divisor: Let x and y be two nonzero whole numbers. The greatest common factor of x and y is the largest whole number that is a factor of both x and y. This
is written as gcf(x, y) or GCF(x, y). Some books use greatest common divisor instead of factor. This is written as gcd(x, y) or GCD(x, y).
Hypotenuse: The longest side of a right triangle; always opposite the right angle.
Improper fraction: An improper fraction is one where the numerator is larger than the denominator.
Independent variable: The independent variable is the variable whose value determines the value taken by the dependent variable. For example, in the function y = x^2, the x variable is independent, since it determines the value taken by the y variable. (To be more specific, if x = 2, then we know that y = 4. However, if y = 4, then it could be that x = 2 or that x = -2; we can't be sure which.)
Integer: The integers are the natural numbers, zero and the negatives of the natural numbers. Z = {...-3, -2, -1, 0, 1, 2, 3...}
Intercept: An intercept is where a graph intersects one of the two axes. The x-intercept(s) is where a graph intersects the x-axis, and the y-intercept(s) is where a graph intersects the y-axis. If
an intercept is (a,0), we often just say that the x-intercept is a. Similarly, if a y-intercept is (0,b), we often say that the y-intercept is b.
Intersection: The intersection of sets A and B, written A ∩ B, is the set of all elements that belong to both A and B.
Irrational number: An irrational number is a number that cannot be written as a ratio of integers. An irrational number has a non-repeating and non-terminating decimal expansion.
Leading coefficient: The leading coefficient is the coefficient that multiplies the highest non-zero power of the independent variable in a polynomial function. For example, the leading coefficient of f(x) = 3x^4 - 5x^2 + 7x + 10 is 3.
Least common multiple: Let x and y be two nonzero whole numbers. The least common multiple of x and y is the smallest whole number that is a multiple of both x and y. This is written as lcm(x, y) or LCM(x, y).
Logarithmic function: This is a function of the form g(x)=b(log[a]x), where a>0 and b is a real number.
Major axis of ellipse: This is the longer axis of an ellipse.
Minor axis of ellipse: This is the shorter axis of an ellipse.
Mixed number: A mixed number refers to an improper fraction which is written with an integer and a fractional part.
Monomial: A monomial is a polynomial with a single term. Examples: 7, x^3, -4xy^2.
Multiple: Let x and y be two whole numbers where y does not equal 0. We say y is a multiple of x, if x divides y.
Natural number: The natural numbers are the counting numbers. N = {1, 2, 3, 4, 5, 6...}
Numerator: The numerator is the integer on the top of a fraction.
One-to-one function: A function is one-to-one if each element in the range corresponds to exactly one element in the domain.
Opposite side (to an angle): The leg of a right triangle that does not touch the given acute angle.
Origin: The origin is the point where the x- and y-axes cross each other, and is assigned the coordinates (0,0).
Parabola: A parabola is the graph associated with a quadratic function, i.e. a function of the form f(x) = ax^2 + bx + c with a ≠ 0.
Perimeter: The distance around the exterior of a 2-dimensional object is the perimeter.
Point: A point is a specific place on a graph given by specific coordinates, e.g. (3,5).
Polynomial: A polynomial is any function of the form
f(x) = a[n]x^n + a[n-1]x^(n-1) + … + a[1]x + a[0].
Prime factorization: Every composite number can be written as a product of prime numbers. This is called the prime factorization of a number.
Prime number: A prime number is a whole number with exactly two factors.
Product: The result of a multiplication problem is called the product.
Proper subset: Set B is a proper subset of set A if and only if every element in B is an element of A and A contains at least one element that is not in B. This is denoted by B ⊂ A.
Proportion: A proportion is an equation in which two ratios are set equal to each other.
Quadrantal angle: This is an angle whose terminal side is coincident with a coordinate axis, such as 90° or 180°.
Quadrant: The coordinate axes divide the Cartesian plane into four sections; these are called the quadrants of the plane and numbered I, II, III, IV. In quadrant I, both the x and y coordinates are
positive; the numbering continues in a counterclockwise fashion.
Quadratic: Quadratic is another way of saying a polynomial of degree 2. Example: x^2 - 3x + 2.
Quotient: The result of a division problem is called the quotient.
Radian: A radian is a unit of angle measure; a central angle of 1 radian intercepts an arc of measure 1 unit on a unit circle.
Range: The range of a function is the set of values taken by the dependent variable.
Ratio: A ratio is an ordered pair of numbers a and b, written a:b where b ≠ 0.
Rational number: If a number can be written as a ratio of integers, then it is called a rational number. A rational number has either a repeating or finite decimal expansion. Q is used to denote the
set of rational numbers.
Real number: The set of real numbers is the union of the set of rational numbers and the set of irrational numbers. R is used to denote the set of real numbers.
Reciprocal: The reciprocal of a number a, where a ≠ 0, is 1/a.
Reciprocal function: This is a function of the form f(x) = 1/x.
Reference angle: If the terminal side of an angle in standard position makes an acute angle with the x-axis, that acute angle is called the reference angle for the original angle.
Reference triangle: A reference triangle is a right triangle containing a reference angle as one of its angles; used to compute trigonometric function values of an angle in standard position.
Root: A root of a polynomial f is a value r such that f(r) = 0. Example: Let f(x) = x^2 - 4; then 2 and -2 are roots of f.
Set: A set is a collection of objects.
Slope: The slope is the measure of the steepness of a straight line; the change in its y coordinates divided by the change in its x coordinates. Its equation is normally represented as m = (y[2] - y[1]) / (x[2] - x[1]).
Straight angle: A straight angle measures 180°.
Subset: Set B is a subset of set A if and only if every element of B is also an element of A. This is denoted B ⊆ A.
Sum: The result of an addition problem is called the sum.
Surface area: A measure of the space on the exterior of a 3-dimensional object.
Symmetry: A graph is said to have "_____ symmetry" if it would remain unchanged under some translation, rotation, or reflection.
Trigonometric identities: These are equalities involving trigonometric functions; often used to simplify equations.
Union: The union of sets A and B, written A ∪ B, is the set of all elements that belong to A or to B (or to both).
Unit circle: The unit circle is a circle of radius 1 with center at the origin.
Vertex: The vertex is the unique critical point on a parabola.
Volume: A measure of the amount of space contained within a 3-dimensional object.
Whole number: The whole numbers are the natural numbers and zero. {0, 1, 2, 3, 4, 5...}
Putting Einstein to the Test
“The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and
time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” -Hermann Minkowski
When it comes to gravity, you probably think you understand it pretty well.
Everything with mass (or energy) attracts everything else with mass-or-energy, explaining everything from falling terrestrial objects to the orbits of the planets to the formation of the largest
structures in the cosmos.
And yet, this picture is only an approximation of what we know to be a more fundamental truth. The idea that objects feel gravitational forces and accelerate in response to them falls well within the
realm of our common experience, and it’s very tempting to describe all gravitational interactions in those terms. This is what our best understanding of reality was for centuries, thanks to the work
of Isaac Newton.
But if we did that, we’d miss out on some very, very important subtleties of Einstein’s relativity. In particular, the biggest revolution that came along with Einstein’s work was the idea that
instead of space and time being independent, fixed entities, they were actually an inseparable combination — spacetime — whose shape itself determines the trajectory of all objects, both massive and
massless, that lie within it.
In addition, the shape (or curvature) of spacetime is determined by the presence and distribution of all the matter and energy that exist in that spacetime itself! When we have an idealized system,
like a very heavy mass that’s orbited at a large distance by a much smaller mass, Newton’s gravity — and Newton’s picture of forces and acceleration — are an excellent approximation.
But even excellent approximations have their limits.
One of the remarkable conclusions you arrive at in Newtonian gravity is that any tiny mass that orbits a much larger one will revolve around it in a perfect ellipse, returning along the same exact
path each and every revolution. When Kepler discovered that the planets did, in fact, make ellipses around the Sun, this was an unexplained phenomenon until Newton’s law of gravity came along. But,
like I said, even though it’s a very good one for our Solar System, this is only an approximation.
In reality, all of the planets fail to make a closed ellipse in their orbit around the Sun, missing by just a tiny amount. Interestingly enough, the closer you are to the largest mass, the more you
miss your last orbit by. This is because spacetime is actually curved more severely the closer you are to a larger mass, and where the spacetime curvature is more severe, that’s where the most
interesting, non-Newtonian predictions of General Relativity come into play.
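For reference (a standard result, not spelled out in the post): the general-relativistic advance of the perihelion per orbit is Δφ = 6πGM / (c²·a·(1−e²)), where M is the central mass, a the semi-major axis and e the eccentricity, so the extra shift indeed grows as the orbit tightens.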
One of the more spectacular predictions of General Relativity — in sharp contrast to Newtonian gravity — is that not only do these orbits fail to close, but over long enough timescales, they
actually decay. That’s right, if you wait long enough, the planets will all eventually spiral inwards towards the center of our Solar System, where they’ll be gobbled up (or, for a less scary
phrasing, where they’ll merge) with the mass at our center.
Don’t be alarmed by this; it’ll take some 10^150 years for this to happen, far longer than the lifetime of any star in the Universe. But that’s only because all the planets are so far away from the
Sun, relative to its paltry mass. But this means if we can find a system where a mass orbits much, much closer to the dominant mass in its system, we should be able to test this relativistic
prediction, and see whether, in fact, the orbit does decay, and whether it decays at the rate predicted by Einstein’s theory or not.
To practically test this, a Sun-like star will simply not do, for the simple reason that it’s too big! But if we had an object that was as massive as the Sun, but maybe only the physical size of a
mountain, we’d be in business. Luckily for us, not only the Universe but our own galactic neighborhood is littered with these objects: neutron stars!
These objects are the leftover cores of supermassive stars that have exploded in a Type II supernova, but are not quite massive enough to collapse down into black holes. One of the most massive
neutron stars known is PSR J0348+0432, which weighs in at about twice the mass of the Sun, but is only maybe 10 kilometers (6 miles) in radius. For a neutron star, it’s remarkable for three reasons:
1. It’s a pulsar, which means that, as it rotates, it sends out radio emissions in two beams. While it’s conceivable that all neutron stars are pulsars, we’re fortunate enough to have one of these
beams point directly at us, which is very rare. 25-times-per-second, we receive a very regular pulse from this neutron star, which is observable with a good radio telescope.
2. It’s in a binary system, which means that there’s another mass orbiting it. This is a very special case for Einstein’s relativity, as we’ll not only have precessing elliptical orbits, but also
orbital decay and — if we can someday detect it — gravitational radiation.
3. And finally, that other mass is a white dwarf star, a very small object about the mass of the Sun but the physical size of Earth, that’s so close to the neutron star that it completes an orbit
every 2.5 hours, meaning that the entire orbit of the white dwarf around this neutron star would fit inside of our Sun!
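A back-of-the-envelope check of that last claim, using Kepler's third law a³ = G(M₁+M₂)T²/(4π²) with the masses and period quoted above (a sketch; the constants are standard values):

public class OrbitSize {
    public static void main(String[] args) {
        double G = 6.674e-11;        // m^3 kg^-1 s^-2
        double mSun = 1.989e30;      // kg
        double m1 = 2.0 * mSun;      // neutron star
        double m2 = 1.0 * mSun;      // white dwarf (approximate)
        double T = 2.5 * 3600;       // orbital period in seconds
        double a = Math.cbrt(G * (m1 + m2) * T * T / (4 * Math.PI * Math.PI));
        double rWd = a * m1 / (m1 + m2);  // white dwarf's distance from the barycenter
        System.out.printf("separation: %.0f km%n", a / 1000);        // ~935,000 km
        System.out.printf("WD orbit radius: %.0f km%n", rWd / 1000); // ~623,000 km
        // The white dwarf's orbital diameter (~1.25 million km) is indeed
        // smaller than the Sun's diameter (~1.39 million km).
    }
}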
Here’s an artist’s impression of what this would look like.
Wow. Now, make no mistake, the decay of gravitational orbits has been well-observed for decades, even resulting in a Nobel Prize.
But there’s never been a system found where gravitational decay was occurring this fast, or where we’ve been able to study spacetime that’s curved this strongly. In other words, this is new territory
for relativity, and one of the strongest tests ever performed! Want to know what we found?
The orbital period of the binary changes by a cumulative eight microseconds-per-year, in exact agreement with Einstein’s predictions! This is really remarkable, because many of the serious competing
alternatives to Einstein’s relativity have much larger predictions in regions of strongly curved spacetime; this observation rules them out!
So if you’ve been wondering what Einstein and General Relativity have done for you lately, here’s your answer: in the most extreme conditions ever tested, where the curvature of spacetime is stronger
than any system we’ve ever tested before, General Relativity’s predictions exactly matched the effects we painstakingly observed.
Challenge Einstein at your own peril, folks, because nature — at every turn we’ve been able to test — obeys General Relativity every single time, including in this new way!
1. #1 Eric Lund May 1, 2013
Coincidentally, today’s XKCD was on the topic of proving Einstein wrong, but about something more mundane than this.
2. #2 SCHWAR_A May 1, 2013
Two masses orbiting each other change their positions all the time (take both masses the same, for simplicity).
Due to the runtime of gravitation, one of the masses "feels" the gravity of the other from an angle which is a bit less than 90° to its own velocity direction, and the "felt" distance is a bit smaller.
The mass will accelerate and lower its orbit to counterbalance.
Thus no energy needs to be radiated from this system.
What is wrong with this idea?
3. #3 Sina May 2, 2013
thanks ethan
4. #4 Wow May 2, 2013
Ethan, did this:
bring on the idea for this thread???
5. #5 Wow May 2, 2013
SCHWAR_A, a dipole will radiate if its charge is made to oscillate.
The masses are “charges” for gravity.
Therefore, like with electric fields, the rotating system will radiate gravity waves.
Your scenario falls down in that conservation of momentum AND energy must BOTH be satisfied. The acceleration of the two objects cannot take up all of the energy released from the gravitational potential, so the excess energy (or lost momentum) must be radiated away (or taken up by a third mass) to balance the equation.
6. #6 BenHead
New York
May 2, 2013
I’m kind of glad this wasn’t about an experiment at LHC I just read about that’s seeking to test whether antimatter experiences anti-gravity. I was kind of dumbfounded by that…. Does ANY serious
theory predict that? Do ANY serious scientists expect to find it so?
8. #8 OKThen
Thanks, I did not know, 2?'s
May 2, 2013
Very nice observations and explanation.
I did not know of this prediction of general relativity.
Can someone clarify, is this phenomenon related to frame dragging or something else?
Here we have a neutron star of mass N and a white dwarf of mass W that are orbiting at a radius R. Is there a simple equation (derived from general relativity with the appropriate assumptions)
that gives the change of radius R as a function of
dR = f(N, W, R) dt, or something like this? What is this equation?
Thanks anyone.
9. #9 CB May 2, 2013
@ BenHead: The expected result of that experiment would be that antimatter experiences gravity like normal. There are all sorts of problems if this wasn’t the case. Like, Conservation of Energy
and Einstein Equivalence (inertial mass = gravitational mass) type problems.
But it’s never been measured, so we don’t know for sure. That’s one of the things I love about science — always seeking to verify if reality conforms to our expectations.
P.S. It’s the ALPHA project that is doing the instrument, which is part of CERN but not part of the LHC.
10. #10 Sinisa Lazarek May 2, 2013
you can find some of the formulas here
pages 16,17,18
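A note on both questions above: the effect at work is gravitational radiation rather than frame dragging. For a roughly circular orbit, the standard general-relativistic result (Peters 1964), written with the neutron-star mass N, white-dwarf mass W and separation R from the question, is
dR/dt = -(64/5) · G³·N·W·(N+W) / (c⁵·R³)
so the decay accelerates sharply as R shrinks.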
11. #11 SCHWAR_A May 2, 2013
@Wow (#5):
Does this mean that accelerating a mass, like shooting a bullet to a metal target, will generate gravitational waves? Both times, at firing and hitting moment?
I do not understand, why there is “excess energy” and why “acceleration [...] cannot be done to the same level”. I have calculated the scenario and found that the part of transformed energy is
complete: the result contains the exact formula for gravitational redshift. Could you please explain a bit more specific (perhaps with formulae)? Thanks a lot in advance.
12. #12 Vdgg
May 3, 2013
So when the white dwarf 'merges' with the neutron star, it should result in a type IIA supernova.
And if we can predict this decay, we should be able to predict the time of the supernova... Hmm, not sure if my thoughts are true.
13. #13 Wow May 3, 2013
As much as you get an antenna from sending electrons once down a line and then stopping.
I.e. naff all to consider.
The “Excess energy” is that the rotational momentum goes as velocity times radius of motion, and kinetic energy goes as velocity squared. Therefore it isn’t possible to conserve both at the same
time without either gaining momentum or losing energy when lowering to a different orbit.
It doesn’t have anything to do with gravitational redshift (at least as far as I’m aware). Just the two classical mechanical equations for rotational momentum and kinetic energy. One has a v in
there once, the other has a v in there twice.
14. #14 SCHWAR_A May 3, 2013
@Wow (#13):
OK, I assume we could never detect gravitational effects generated by home-made acceleration.
In our binary system above, if energy stays constant, the rotational momentum of the decaying orbit decreases, but the spins of both masses increase by the same factor. Rotational momentum is
transported within the system.
Assuming both masses always face the same side to each other: while their orbit decays they both gain exactly that amount of rotational momentum needed to maintain this feature.
Following this the complete system conserves energy _and_ rotational momentum.
What do you think?
15. #15 CB May 3, 2013
@ SCHWAR_A:
We can easily rule that scenario out by virtue of this being a pulsar with a rotational period of 40 ms, but an orbital period of 2.5 hrs.
I’m not sure it’d work out that the increase in rotational angular energy would match the decrease in orbital angular energy — the size of the moment of inertia for the orbital energy seems like
it would just be too much higher. But I’d have to do the math and I have other math I should be doing. >_<
16. #16 Robert P May 3, 2013
But it doesn’t obey at the nano scale. But for everything else E’s your man.
17. #17 CB May 3, 2013
Anyway, I think it has the exact same problem: the angular momentum goes as w, and the energy as w^2.
18. #18 Wow May 3, 2013
In our binary system above, if energy stays constant.
That’s where you have your problem.
The energy cannot stay constant within the binary system AND get closer together.
The “if” doesn’t happen.
Might as well say “If the Flying Spaghetti Monster can travel faster than light, then it can make changes before its presence can be seen!”.
19. #19 SCHWAR_A May 5, 2013
@Wow (#5):
“The masses are “charges” for gravity. Therefore, like with electric fields, the rotating system will radiate gravity waves.”
The big difference is that the electron's spin can't change, and thus it "needs" to radiate away the excess energy.
With gravitational "charges" this is different. Their parts all have their own gravitational reaction to their specific positions within the current field.
@CB (#15):
Think about convection within stars: what would it look like if the atoms did not gain rotational momentum during descent?
If you sit on a slowly rotating turntable with a globe in your outstretched hand: if the globe does not rotate relative to you, it will also not start rotating when you pull it towards you. This is known as spin-orbit interaction…
20. #20 Wow May 5, 2013
The big difference is that the electron’s spin can’t change and thus “needs” to radiate away the excess energy.
The spin of an electron has no relevance here, Schwar.
Not a sausage.
Bugger all.
21. #21 CB May 6, 2013
In convection the falling material does so because it is losing energy. It then rises again after gaining energy from the hotter interior of the star. That’s the whole point of convection.
22. #22 SCHWAR_A May 7, 2013
@CB (#15):
“…I’m not sure it’d work out … But I’d have to do the math and I have other math I should be doing.”
OK, here is the math:
We start with
G·M·m/r = m·v² = m·ω²·r² = ω·L_orb
where m is the mass orbiting around M with distance r, speed v, angular velocity ω and rotational momentum L_orb.
Now we decrease the distance a bit by the factor
1 – Δr/r
Therefore the speed increases with
(1 + Δv/v)² = 1 / (1 – Δr/r)
The angular velocity is affected by
(1 + Δω/ω)² = (1 + Δv/v)² / (1 – Δr/r)² = 1 / (1 – Δr/r)³
which is KEPLER’s 3rd law.
The rotational momentum of the orbit finally changes with
1 – ΔL_orb/L_orb = (1 + Δv/v)² / (1 + Δω/ω) =
= √(1 – Δr/r)³ / (1 – Δr/r) = √(1 – Δr/r)
This is the excess, which must be counterbalanced by the spin of m for conservation.
We assume “face-to-face” spinning of m, i.e. one turn of spin per turn of orbit:
ω_spin = ω_orb = ω
The rotational momenta are
L_orb = m·ω·r²
L_spin = m·ω·R²
with R some constant radius of m related to its rotational momentum.
We eliminate ω and get
L_spin = R²/r² · L_orb
Changing L_spin means increasing its angular velocity by (1 + Δω_spin/ω),
changing L_orb means decreasing its angular velocity by (1 – Δω_orb/ω) due to
L_spin·(1 – ΔL_spin/L_spin) = R²/r² · L_orb · √(1 – Δr/r) / (1 – Δr/r)² =
= R²/r² · L_orb·(1 + Δω_orb/ω)
and with the changes extracted
ΔL_spin = R²/r² · L_orb · Δω_orb/ω
which is
m·Δω_spin·R² = R²/r² · m·Δω_orb·r²
and thus
Δω_spin = Δω_orb
The increase in rotational angular energy (spin) matches the decrease in orbital angular energy.
23. #23 Wow May 7, 2013
“This is the excess, which must be counterbalanced by the spin of m for conservation.”
And this is moved across HOW?
Look up why, despite an axial rotation being possible for a diatomic molecule, there is no rotational state for the diatom along its longitudinal axis: no asymmetry to measure the rotation with == no axial rotation.
24. #24 SCHWAR_A May 7, 2013
@Wow (#23):
“And this is moved across HOW?”
Please look at my post #2.
Those parts of m, which are farther away from M “feel” an angle that is a little bit smaller than the angle of those parts nearer to M. This effect causes an acceleration of m’s spin at the same
time as m’s orbit decays.
“…diatomic molecule…”
As far as I know these are more like gyroscopes spinning around their long axis, and thus react with inertia to changes of their current rotation. You may watch typical turntable experiments with a handheld gyroscope.
By the way, what is the rotational momentum of such a molecule? I assume ħ/2, and thus it could not handle changes in angular momentum. Is this correct?
25. #25 eric May 7, 2013
We assume "face-to-face" spinning of m, i.e. one turn of spin per turn of orbit:
This assumption is empirically false. See again CB @15.
So what you have done here is, at best, shown that for one special, theoretical type of two-body case, (spin + orbital angular energy) is conserved. But that does not prove that that quantity is always conserved as a general rule, and your case doesn't seem to apply to the system given in the post, or to any other real-world system I can think of, with the possible exception of a geosynchronous satellite always facing down/inwards.
26. #26 CB May 7, 2013
SCHWAR_A, I do believe you made a math error somewhere. My forum-fu is weak, so forgive the bad formatting. For starters, I’m using u for mu and w for omega. Let:
r = initial orbital radius
f = ratio of new orbital radius to initial.
R = planetary radius.
u = G(M+m) = (w^2)(r^3)
w0 = initial angular velocity = (u/r^3)^(1/2)
L_r0 = initial oRbital angular momentum = w0 * r^2
L_s0 = initial planetary “spin” angular momentum = w0 * R^2 = (R^2 / r^2) * L_r0
w1 = final ang. v. = (u/(fr)^3)^(1/2) = f^(-3/2) * (u/r^3)^(1/2) = f^(-3/2) * w0
L_r1 = final orbital ang. m. = w1 * (fr)^2 = f^(-3/2)*w0 * (fr)^2 = f^(1/2) * r^2*w0 = f^(1/2) * L_r0
L_s1 = final planet ang. m. = w1 * R^2 = f^(-3/2)*w0 * R^2 = f^(-3/2)*L_s0 = f^(-3/2) * (R^2 / r^2) * L_r0
Your supposition is that momentum is conserved between the initial and final states, as in:
L_r0 + L_s0 = L_r1 + L_s1
L_s0 = (R^2 / r^2) * L_r0
L_r1 = f^(1/2) * L_r0
L_s1 = f^(-3/2) * (R^2 / r^2) * L_r0
L_r0 + (R^2 /r ^2) * L_r0 = f^(1/2) * L_r0 + f^(-3/2) * (R^2 / r^2) * L_r0
1 + (R^2 / r^2) = f^(1/2) + f^(-3/2) * (R^2 / r^2)
This equation has at most two real roots, one of which is 1.
Momentum is not conserved when changing orbits and maintaining tidal lock.
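To make the mismatch concrete with hypothetical numbers: take (R^2/r^2) = 0.01 and f = 0.9. The left side is 1 + 0.01 = 1.01, while the right side is f^(1/2) + f^(-3/2)*(0.01) ≈ 0.9487 + 0.0117 ≈ 0.9604, so equality indeed fails for any f other than 1.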
27. #27 CB May 7, 2013
I forgot a factor of "m" in all the angular momentum equations, and similarly w*R^2 assumes a spherical shell. Doesn't change anything; the m would just get divided out, and (R^2/r^2) would just
become some other constant.
28. #28 CB May 7, 2013
Now what if you allowed the planet to get out of tidal lock as the orbital radius reduced? After all, if you did the math on the gravitational force that you believe produces the torque to speed
up the planet’s rotation as it spirals inward, it might end up that it accelerates it faster than necessary to keep tidal lock.
The problem then is that by matching the momentum by changing the angular velocity, you will not be able to match the energy because of the square factor.
Either energy or momentum must change in the orbital system — GR predicts that energy would be emitted, solving the conundrum.
29. #29 CB May 7, 2013
my_last_post =~ s/Spherical shell/ring/;
30. #30 Wow May 7, 2013
“By the way, what is the rotational momentum of such molecule? I assume ħ/2 and thus could not handle changes in angular momentum. Is this correct?”
You’re talking about rotation around the normal axis to the long axis.
I’m not talking about that.
I’m talking about the rotation AROUND THE LONG AXIS, not the axis at right angles to that.
31. #31 SCHWAR_A May 8, 2013
@eric (#25):
You may vary R as you like in my above calculation to achieve any ratio between ω_orb and ω_spin. R doesn't matter for the principle: Δω_spin = Δω_orb. Therefore my calculation is not just a specific case; this case is just easier to understand.
32. #32 SCHWAR_A May 8, 2013
@CB (#26):
It is quite easy: write the special characters in a word document and just copy and paste them. Or, copy and paste from this list:
I have put it into a text-document and copy from it…
33. #33 Wow May 8, 2013
You can’t vary R as you want. You have to solve all equations at the same time if you don’t want radiation of energy via gravity waves to happen.
34. #34 CB May 8, 2013
Varying R (or really, what we should just be using, the moment of inertia of the planet about its rotational axis) doesn’t change the ratio of the angular velocities by itself. Because in general
there is no relationship between orbital angular velocity and rotational angular velocity, so these would just be two independent measured values. The only reason there was a relationship was
because you proposed one of the special cases where there is.
35. #35 eric May 8, 2013
Schwar: your "R" is the radius of your object ("with R some constant radius of m"); it can't vary at all unless your object is deforming or losing mass. If either of those things is happening, then rotational energy is not all going to spin (or vice versa), and you're still wrong.
But I think you are still missing the conceptual point. You cannot assume face to face spinning of the two bodies, because that isn’t what is actually happening. To the extent that your model
depends on that assumption to work, it is not describing any real phenomena.
36. #36 SCHWAR_A May 8, 2013
Thanks to all.
I unfortunately wrote “with R some constant radius of m”. R is constant, but not really the radius, which is the boundary of visible matter. R is just a parameter describing the spinning mass of
m: L = m·ω·R².
Instead I should have written L = m·ω_spin·x·R_m², with x the ratio of ω and ω_spin. With this, R² = x·R_m² is constant in my calculations. Both Δω are affected by this ratio in the same way, so the increase of the angular velocities is the same. If for example m spins with 2ω and orbits around M with ω, its spin increases to 2ω+Δω and its orbit to ω+Δω.
And yes, I can see that the sum of rotational momentum seems not conserved. Thus my supposition from #14 “Following this the complete system conserves energy _and_ rotational momentum” must be
denied so far.
Again thanks a lot for this discussion.
37. #37 SCHWAR_A May 8, 2013
I scrutinized the excess of rotational momentum from CB’s calculation L_s1: it is exactly related to the increase of the kinetic energy of the spinning m.
With this we would have an energy transfer from orbit decay into increasing kinetic energy of m’s spinning…
38. #38 Wow May 8, 2013
With no mechanism that would let that happen…
39. #39 CB May 8, 2013
But it still doesn’t work! You can’t conserve both energy and momentum. Let’s start with the same scenario, a planet tidally locked to the star it orbits. But let’s let the final angular velocity
of the planet vary as necessary to conserve momentum. Let:
q = ratio of initial orbital moment of inertia to moment of inertia of the planet about its rotational axis. This replaces (R^2/r^2) in my previous equations.
s = ratio of the final angular velocity of the planet's rotation to its initial angular velocity (=ω0).
L_s1 = s*L_s0 = s*q*L_r0
So going back to my penultimate equation but changing the value of L_s1, we have:
L_r0 + q*L_r0 = f^(1/2)*L_r0 + s*q*L_r0
Solving for s to see what is needed to preserve momentum:
s = (1 + q – f^(1/2)) / q
Now let’s look at conservation of energy:
ω0*L_r0 + ω0*L_s0 = ω1*L_r1 + s*ω0 * L_s1
ω0*L_r0 + ω0*q*L_r0 = (ω0*f^(-3/2))*(f^(1/2)*L_r0) + s*ω0*s*q*L_r0
ω0*L_r0 + ω0*q*L_r0 = ω0*f^(-1)*L_r0 + s^(2)*ω0*q*L_r0
divide by ω0*L_r0:
1 + q = (1/f) + q*s ^2
s = √( (1 + q – (1/f)) / q)
√( (1 + q – (1/f)) / q) = (1 + q – f^(1/2)) / q
Guess when this equation holds true? Go on, guess. That’s right! When f =1! And therefore s is also 1! Only when nothing changes are both energy and momentum conserved.
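A quick numerical check of that conclusion (hypothetical values; a sketch rather than part of the thread):

public class ConservationCheck {
    public static void main(String[] args) {
        double q = 1.0;   // moment-of-inertia ratio (hypothetical)
        double f = 0.9;   // orbit shrinks to 90% of its original radius
        double sMomentum = (1 + q - Math.sqrt(f)) / q;       // spin ratio forced by momentum conservation
        double sEnergy   = Math.sqrt((1 + q - 1.0 / f) / q); // spin ratio forced by energy conservation
        System.out.println("s (momentum): " + sMomentum);    // ~1.051
        System.out.println("s (energy):   " + sEnergy);      // ~0.943
        // The two disagree whenever f != 1, so both conservation laws cannot hold
        // while the orbit decays unless energy leaves the system.
    }
}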
40. #40 CB May 8, 2013
This is why the Conservation Laws are so powerful. Having to obey both Conservation of Energy and Conservation of Momentum constrains possible scenarios greatly. Just by doing this math I can
show that it is impossible for an orbit to decay by any mechanism unless that mechanism causes the system to lose energy somehow.
Friction (or collisions in general) are one such mechanism. Energy is transferred to the objects collided with, allowing orbital decay. In the absence of that, however, we should never see an
orbit decay. Unless, of course, there are gravity waves.
41. #41 SCHWAR_A May 9, 2013
OK so far: in our system both energies increase when momentum is conserved. This excess has to be radiated away.
I just noticed that we never considered both masses in the system, m and M. Mustn’t we handle both the same, but contrary balanced?
I.e. E_M = E_m and L_M = L_m, which means in our binary system:
E_m – E_M = 0
L_m – L_M = 0
which is valid all the time…
What do you think?
42. #42 Wow May 9, 2013
You can simplify by changing to the CoM of one of the bodies. No need to work from an inertial frame where BOTH bodies masses are moving.
43. #43 SCHWAR_A May 9, 2013
OK, but then we would never apply forces to M itself. But it is part of the system, isn’t it? M has its own orbit around the barycenter and its own rotational momentum, both influenced by m.
Thus I think we cannot just reduce M to a point in the barycenter with no other attribute than having a mass.
M and m have to balance: actio = reactio.
If we would like to detect gravitational waves we are in line with the frame of the barycenter, not in that of M. In the frame of the barycenter everything is balanced.
Any idea?
44. #44 Sinisa Lazarek May 9, 2013
“Thus I think we cannot just reduce M to a point in the barycenter with no other attribute than having a mass.”
guess that’s why it’s called an approximation
45. #45 Wow May 9, 2013
And the error of that approximation depends on any dipole moment of the individual mass.
For dense heavy stars like white dwarfs and neutron stars, that’s so close to zero as to make no odds for any calculation necessary. The density makes any deviation at the macro scale from a
perfect sphere so energetically precluded as to be impossible to maintain.
46. #46 SCHWAR_A May 9, 2013
What “approximation” do you mean?
Both masses of PSR J0348+0432 are about the same (~2x and ~1x solar mass) and their barycenter is far outside both bodies somewhere between (~550000 km / ~280000 km).
Just looking at the runtimes, we find a factor of ~3 relative to a system with an adjusted mass placed at the barycenter instead.
47. #47 Sinisa Lazarek May 9, 2013
Meaning one object is much, much smaller than the first; thus it's approximated as a point mass.
48. #48 SCHWAR_A May 9, 2013
@Sinisa Lazarek:
…then you didn’t get my idea, sorry.
The basic idea is that gravity needs time, and while in our binary system the sending M is moving, the receiving m is moving also, which will reduce the "felt" distance of one mass to the other.
This small delta in distance causes acceleration. This principle shall be valid for both bodies.
In this scenario the extent of any body doesn't matter; only their distance and orbital speeds are of interest.
49. #49 Sinisa Lazarek May 9, 2013
But I don't understand why all of this is dealing with orbits and Kepler etc., when it's not about that. It's about relativity.
The fact that you have two pretty significant masses (of order a solar mass) orbiting one another at huge speeds and small distance means you're dealing with spacetime curvature and high energies. It's not about Kepler, it's about Einstein. Since they both have similar mass, they curve spacetime to a similar degree; just one is much smaller and orbits. As it curves spacetime it "drags" it, sending ripples through spacetime. That's the energy that's being lost, and that's what causes the orbit to decay.
50. #50 Sinisa Lazarek May 9, 2013
For what is IMO a good explanation, check the wiki article on grav. waves, subsection "Power radiated by orbiting bodies".
51. #51 CB May 9, 2013
@ SCHWAR_A
“I just noticed that we never considered both masses in the system, m and M. Mustn’t we handle both the same, but contrary balanced?
What do you think?”
I think you’d have the SAME DANG PROBLEM. You’ll have four equations with four variables, the two orbit ratios and the two spin ratios, and once you solve you’ll find that the spin shows up in
one place regular, and another place squared, because that’s how the term shows up in momentum and energy. So you’ll have an equation with at most two real roots, one of which is 1.
52. #52 CB May 9, 2013
@ Sinisa Lazarek
Their whole thing is trying to explain orbital decay without General Relativity. Instead they're using some kind of relativistic Newtonian gravity: the same equation as Newton, but with the interaction limited to c, so the apparent location of the central body would 'lag' behind its actual location.
Frankly I’m willing to bet that they messed up the math showing this would cause orbital decay. Intuitively it seems that it would just move the barycenter’s effective location.
53. #53 SCHWAR_A May 9, 2013
@CB (#51):
Thanks, I just realized my mistake in #41: I considered the special case of equal masses and equal orbital radius…
@Sinisa Lazarek (#49) & CB (#52):
CB is right: GR predicts the decay of orbits. Also, the idea above predicts the decay of orbits. Certainly we do not "really" need another method, but for me it is interesting that simply regarding the runtime of gravity yields the same result at the Keplerian/Newtonian level…
The shortened "runtime-related" distance is like Lorentz contraction, which is the same as an increased mass using the non-contracted distance. The result: acceleration of the orbital velocity.
The barycenter will actually get an orbit, with opposite current positions for both masses at the same time.
54. #54 Sinisa Lazarek May 9, 2013
I for one would be thrilled to find out old Albert got it wrong at some point, but it ain't the case here. As counterintuitive as it is at the beginning, the more you dwell on spacetime and curves, the more sense it starts to make. I don't mean the math, but logically. Once you view spacetime as an entity, it's pretty easy to understand grav. waves. Everything interacts with everything else, and all of that interacts with spacetime. So no "forever" is possible; everything that has energy has mass, and thus curves spacetime, and thus there is grav. friction and something gets radiated away.
55. #55 SCHWAR_A May 9, 2013
@Sinisa Lazarek (#54):
I also think Albert Einstein is right, but he did not tell us all about his way towards the simple result. Before developing his curved-spacetime metric he certainly also considered the runtime of gravity. But you can't calculate with it as easily as with curved spacetime, which quasi prefetches all results as seen from one mass's frame.
But: what if you move several masses within spacetime? Easy to calculate? Maybe in special cases it's easier to use the runtime of gravitation together with Kepler/Newton…
Thanks a lot for this nice discussion, also to CB!
56. #56 CB May 9, 2013
I highly recommend “Relativity: The Special and General Theory” by Albert Einstein. It’s pretty accessible for such a topic.
In the section on General Relativity he shows how, starting from the General Principle of Relativity (that the laws of physics have the same formulation in ALL reference frames, including accelerating ones), one must conclude that gravity and other forms of acceleration are indistinguishable.
He then uses a thought experiment very similar to the famous train thought experiment from SR, only now one observer is standing on the outside of a rotating ring. Taking into account the observer's centripetal acceleration (which Einstein literally refers to as a "gravitational field", having already explained why he is justified in doing so), he shows that to maintain a consistent picture of events, the space between him and some other reference frame cannot be Euclidean. The gravitational field must, necessarily from the postulates, mean that the spacetime metric is changed.
It’s a great book that really helped illuminate the motivation behind relativity for me.
I don’t know that this was the actual progression of his thought when coming up with the idea, but it makes a lot of sense that when trying to extend his Special Theory of Relativity to the
general case, he’d start with extending the Special Principle of Relativity that was the foundation of SR to the General Principle and move on from there in much the same way he did for SR.
Anyway, if he did try just adding non-instantaneous gravity, he must have found it didn't work. Certainly one would not expect it by itself to produce all the predictions of GR. And I do believe it doesn't produce any of the same predictions, and your calculation that it results in orbital decay is wrong. The slightly different angle would remain constant, making it simply appear as though the central body were orbiting a slightly different barycenter than you otherwise get, and the acceleration being higher than in the instantaneous case would only mean that orbital velocity is slightly higher than in the Newton formulation. Not that it would spiral inward.
Finally, your formulation in #41 is wrong in all cases. Yes, two identical objects orbiting each other around a common barycenter would have equal energies. But that's not how Conservation of Energy works. For that you need to know that the SUM of energies is a constant, as in:
E_m + E_M = C, where C is some constant.
In your special case, that would just be 2E_m = C, and you'd have to show that it remains true as the orbit changes, along with 2L_m = D also holding true, and that will not happen.
57. #57 CB May 9, 2013
My other motivation for saying your calculation must be wrong is that 1) your result violates conservation laws, and 2) light-speed gravity would appear to be rotationally and translationally symmetric (the result is the same regardless of the specific orientation with respect to the rest of the universe, and of the specific date and location), which by Noether's Theorem means it should obey both the energy and angular-momentum conservation laws.
58. #58 eric May 9, 2013
GR predicts the decay of orbits. Also, the idea above predicts the decay of orbits. Certainly, we do not “really” need another method, but for me it is interesting that simply regarding
runtime of gravity yields the same result at Keplerian/Newtonian level…
Schwar, it seems you’ve gone from a radical hypothesis that predicts something very surprising (i.e., that orbital decay with no emitted energy is possible) to a radical hypothesis that predicts
exactly the same thing as GR. But only for this case – your idea does not have the breadth of application or wide variety of independent predictions that GR does.
In some respects this makes your hypothesis safer, in that it now seems to be not immediately falsifiable (I'll let others discuss possible math problems). But in other respects, it makes your idea much, much less valuable, because really the only thing it had going for it from a 'competing hypothesis' perspective was a novel and eyebrow-raising prediction. If it's just "I have an alternative to GR… that predicts absolutely nothing different from it," well, physicists see a lot of that. It doesn't generally impress.
59. #59 CB May 9, 2013
“Schwar, it seems you’ve gone from a radical hypothesis that predicts something very surprising (i.e., that orbital decay with no emitted energy is possible) to a radical hypothesis that predicts
exactly the same thing as GR. ”
Nope, it’s a very different prediction from GR: orbital decay without gravity waves, violating conservation laws.
That their hypothesis *should* obey conservation laws leads me to believe that they simply did the math wrong, and the prediction of orbital decay is not there.
60. #60 SCHWAR_A May 10, 2013
My entry to this discussion indeed was the now refuted (thanks to CB) idea of no-GWs.
Then further analysis of this idea led me to the illustrative parallels with SR and GR. If the idea is actually in parallel, it cannot predict different things… (thus it may even be useless!)
“…orbital decay…”
Please look at this figure.
A shorter “felt” distance 2r’ yields the acceleration of m. At the same time its actual barycenter B’ is orbiting B. Because in this way the aberration is Δφ>0 for r>0, m will be attracted a bit more than with Newton alone (Δφ=0). Consequently m spirals into M. Vice versa for M related to m.
61. #61 CB May 10, 2013
“Then further analysis of this idea led me to the illustrative parallels with SR and GR. If the idea is actually in parallel, it cannot predict different things…”
What? No. Only when two things are mathematically IDENTICAL are they incapable of predicting different things. Merely being “parallel” in one particular aspect (light speed propagation of
gravity) does not ensure this at all.
“m will be attracted a bit more than with Newton alone (Δφ=0). Consequently m spirals into M. ”
Simply experiencing more gravity than Newton’s Law would not result in orbital decay. It would instead result in the orbital velocity being higher than Newton would predict via F=mrω^2, the equation used to derive ω in the first place. It’s like if G were slightly bigger — so what? It’s not like G is fine-tuned to the only value that allows stable orbits.
Your link is broken, but I do believe so is your math. Like I said, I believe it is impossible for your math to be right (did you do math or just draw a diagram?) because your premise is
symmetrical but your result does not follow conservation laws. Noether says that’s a no-no.
If you’re going to try to calculate it again, I recommend ditching the Δ syntax. Using needlessly complicated expressions like (1 – Δr/r) instead of just the ratio f is my guess for part of why
your previous calculations of momentum/energy were wrong. And in particular for showing that your idea results in orbital decay, the “aberration” from Newton is irrelevant. Just calculate the
result of your idea directly and show that it causes decay.
But keep in mind that doing so means you have almost certainly made a mistake.
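(For anyone who wants to try CB's suggestion of calculating the result directly, a minimal numerical sketch might look like the following. This is an added illustration, not code from the thread; the toy units, the linear extrapolation to the retarded source position, and the crude integrator are all simplifying assumptions.)

import numpy as np

G, c = 1.0, 10.0                            # toy units; a finite c makes the aberration visible
m = np.array([1.0, 1.0])                    # two equal masses
x = np.array([[-0.5, 0.0], [0.5, 0.0]])     # initial positions
v = np.array([[0.0, -0.707], [0.0, 0.707]]) # approximately circular Newtonian orbit
dt, steps = 1e-3, 100000

def accel(x, v):
    a = np.zeros_like(x)
    for i, j in ((0, 1), (1, 0)):
        r = np.linalg.norm(x[j] - x[i])
        src = x[j] - v[j] * (r / c)         # source at its linearly extrapolated retarded position
        d = src - x[i]
        a[i] = G * m[j] * d / np.linalg.norm(d)**3
    return a

sep = []
for n in range(steps):
    v = v + 0.5 * dt * accel(x, v)          # crude kick-drift-kick stepping
    x = x + dt * v
    v = v + 0.5 * dt * accel(x, v)
    if n % 1000 == 0:
        sep.append(np.linalg.norm(x[0] - x[1]))

print(sep[0], sep[-1])                      # does the separation drift systematically?

Tracking the total energy and angular momentum over such a run then gives exactly the Noether-based consistency check CB describes.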
62. #62 eric May 10, 2013
Schwar, very gracious answer, thank you. Most of the time, internet posters (occasionally myself included, mea culpa) are profoundly lacking in the ability to take constructive criticism. Thumbs
up for doing so.
63. #63 SCHWAR_A May 10, 2013
@ eric (#62):
Thank you for the flowers :-))
Sorry, the trick with the href=”data:image/jpg;base64,…” no longer seems to be working…
* Do you have an idea how to include an image in an answer here?
* Is there a way to preview a post?
@CB (#61):
Thanks for all the hints.
The picture contains a circle around the barycenter B with both masses M and m in opposite positions, and additionally the previous position of M on its way, connected to m with speed of light c,
while m is orbiting with speed v. This demonstrates the aberration sin φ = v/c.
It should illustrate the contraction of the radius of m by Δr from r to r’.
“Simply experiencing more gravity than Newton’s Law would not result in orbital decay.”
I think it is not just simply more gravity. It is the rotating barycenter. Mustn’t that cause quadrupole radiation, i.e. gravitational waves, causing orbital decay?
64. #64 CB May 10, 2013
“My entry to this discussion indeed was the now refuted (thanks to CB) idea of no-GWs.”
Which is intimately linked to the idea of orbital decay occurring in the first place.
The thing you may not appreciate is that Conservation Laws work both ways. Yes you can work backwards from Conservation of Energy/Momentum which is very useful. But you can also work forward from
your rules of mechanics and arrive at a result that conserves both quantities. Newton’s Laws of Motion and Gravity by their very formulation conserve energy and momentum — Newton’s 3rd Law is
essentially another way of stating momentum conservation. The Conservation Laws were not *additions* to Newton’s Laws; they were just a new way of looking at Newton’s Laws, which already had the Conservation Laws built in.
Noether’s Theorem was a way of proving that physical laws with certain features, like Newton’s Laws, Maxwell’s Laws, or General Relativity, would naturally have conserved quantities like energy
and momentum.
So in GR, it’s not that it predicts orbital decay, and Oh Noes that would violate conservation of energy but Whew there’s this gravity wave thing that saves us. It’s that the exact same mechanism
that causes orbital decay also causes gravity waves, and by the nature of that mechanism the gravity waves’ energy exactly matches that lost in the orbit. Not because you work backward from CoE
and say that this must be true (though this is often useful to do), but because that’s what GR says would happen in the first place.
Your idea has no mechanism for losing energy. It should not predict losing energy. You must have messed up your math somewhere.
65. #65 CB May 10, 2013
” It is the rotating barycenter. Mustn’t that cause quadrupole radiation, i.e. gravitational waves, causing orbital decay?”
No, why would it? In GR, a normal orbit with a fixed barycenter radiates gravity waves. All that you need is non-spherical and non-cylindrically symmetric rotation.
66. #66 SCHWAR_A May 11, 2013
@CB (#65):
“In GR, a normal orbit with a fixed barycenter radiates gravity waves.”
… because GR warps the Euclidean space in a way that we can use the virtual mean fixed barycenter as seen from an outside frame. A very clever mathematical trick. It spares us from having to consider aberration and thus propagation times…
“Your idea has no mechanism for losing energy. It should not predict losing energy. You must have messed up your math somewhere.”
Thanks to your help I’m working on that… (please hold the line ;-))…
67. #67 John Duffield May 12, 2013
Schwar: I’d say it’s important to start at the beginning with all this – with electron spin. We can make an electron (and a positron) from light in pair production. The electron has its magnetic
moment, and the Einstein–de Haas effect demonstrates that “spin angular momentum is indeed of the same nature as the angular momentum of rotating bodies as conceived in classical mechanics”. Then
when you annihilate the electron with a positron you get light. So start with a simple electron model consisting of light going round and round, then simplify it further to light going round a
square path. Light bends in a gravitational field, so draw the horizontals curving down a little, and your pen doesn’t end up where it started. Hence the electron falls down. It acquires kinetic
energy, not from some magical source, but from itself, because “the coordinate speed of light varies in a non-inertial reference frame”. It’s actually the speed of light that is reducing
vertically. See the Einstein quotes below. When you stop the electron with say an electromagnetic field, the kinetic energy is radiated away as a photon. That’s not to say gravitational waves
don’t exist, but instead that they’re more like photons than people appreciate. So trying to detect gravitational waves with LIGO might be like a Flatlander trying to measure stretch with a
rubber ruler.
1911: If we call the velocity of light at the origin of co-ordinates c₀, then the velocity of light c at a place with the gravitation potential Φ will be given by the relation c = c₀(1 + Φ/c²).
1912: On the other hand I am of the view that the principle of the constancy of the velocity of light can be maintained only insofar as one restricts oneself to spatio-temporal regions of
constant gravitational potential.
1913: I arrived at the result that the velocity of light is not to be regarded as independent of the gravitational potential. Thus the principle of the constancy of the velocity of light is
incompatible with the equivalence hypothesis.
1915: the writer of these lines is of the opinion that the theory of relativity is still in need of generalization, in the sense that the principle of the constancy of the velocity of light is to
be abandoned.
1916: In the second place our result shows that, according to the general theory of relativity, the law of the constancy of the velocity of light in vacuo, which constitutes one of the two
fundamental assumptions in the special theory of relativity and to which we have already frequently referred, cannot claim any unlimited validity. A curvature of rays of light can only take place
when the velocity of propagation of light varies with position. Now we might think that as a consequence of this, the special theory of relativity and with it the whole theory of relativity would
be laid in the dust. But in reality this is not the case. We can only conclude that the special theory of relativity cannot claim an unlimited domain of validity; its results hold only so long as
we are able to disregard the influences of gravitational fields on the phenomena (e.g. of light).
The word “velocity” is a problem in the English translations. The word he used in German was “Geschwindigkeit”, which translates to speed; he referred to c, and to one of the two fundamental assumptions. That’s the special relativity postulate, which is the constant speed of light. It isn’t constant, and Einstein said it. Repeatedly.
68. #68 SCHWAR_A May 13, 2013
Very interesting, thanks.
Where do you think the problem is in translating “Geschwindigkeit” to “velocity” or “speed”? For me both are the same…
Is the “Lichtgeschwindigkeit” actually higher in gravitational fields, or is the potential in c = c₀(1 + Φ/c²) negative?
69. #69 Wow May 13, 2013
“In GR, a normal orbit with a fixed barycenter radiates gravity waves.”
… because GR warps the Euclidean space in a way that we can use the virtual mean fixed barycenter as seen from an outside frame.
Merely ascertaining the gravitational potential at a distance from the orbiting body will show you that you will get changes in the potential at that point.
Therefore giving and taking away energy from that object.
Therefore making it move, just like it was a duck on the ocean. Lots of ducks bobbing up and down, with NAFF ALL to do with “warping Euclidean space”.
70. #70 Wow May 13, 2013
That’s the special relativity postulate, which is the constant speed of light. It isn’t constant, and Einstein said it. Repeatedly.
It isn’t constant in a non-inertial frame.
Results 1 - 10 of 12
- Biometrika , 1996
"... this paper, we will only consider undirected graphical models. For details of Bayesian model selection for directed graphical models see Madigan et al (1995). An (undirected) graphical model is
determined by a set of conditional independence constraints of the form `fl 1 is independent of fl 2 condi ..."
Cited by 55 (8 self)
this paper, we will only consider undirected graphical models. For details of Bayesian model selection for directed graphical models see Madigan et al (1995). An (undirected) graphical model is determined by a set of conditional independence constraints of the form ‘γ_1 is independent of γ_2 conditional on all other γ_i ∈ C’. Graphical models are so called because they can each be represented as a graph with vertex set C and an edge between each pair γ_1 and γ_2 unless γ_1 and γ_2 are conditionally independent as described above. Darroch, Lauritzen and Speed (1980) show that each graphical log-linear model is hierarchical, with generators given by the cliques (complete subgraphs) of the graph. The total number of possible graphical models is clearly given by 2 (
- Artificial Intelligence , 1999
"... In this paper we discuss and present in detail the implementation of Gibbs variable selection as defined by Dellaportas et al. (2000, 2002) using the BUGS software (Spiegelhalter et al.,
1996a,b,c). The specification of the likelihood, prior and pseudo-prior distributions of the parameters as well a ..."
Cited by 9 (0 self)
In this paper we discuss and present in detail the implementation of Gibbs variable selection as defined by Dellaportas et al. (2000, 2002) using the BUGS software (Spiegelhalter et al., 1996a,b,c).
The specification of the likelihood, prior and pseudo-prior distributions of the parameters as well as the prior term and model probabilities are described in detail. Guidance is also provided for
the calculation of the posterior probabilities within the BUGS environment when the number of models is limited. We illustrate the application of this methodology in a variety of problems including
linear regression, log-linear and binomial response models.
- JOURNAL OF STATISTICAL PLANNING AND INFERENCE , 2006
"... ..."
, 1997
"... this article we approximate the rate of convergence of the Gibbs sampler by a normal approximation of the target distribution. Based on this approximation, we consider many implementational
issues for the Gibbs sampler, e.g., updating strategy, parameterization and blocking. We give theoretical resu ..."
Cited by 3 (3 self)
this article we approximate the rate of convergence of the Gibbs sampler by a normal approximation of the target distribution. Based on this approximation, we consider many implementational issues
for the Gibbs sampler, e.g., updating strategy, parameterization and blocking. We give theoretical results to justify our approximation and illustrate our methods in a number of realistic examples.
Key words: Correlation Structure; Gaussian distribution; Generalized linear models; Gibbs sampler; Markov chain Monte Carlo; Parameterization; Random scan; Rates of convergence.
, 2009
"... In Bayesian analysis of multi-way contingency tables, the selection of a prior distribution for either the log-linear parameters or the cell probabilities parameters is a major challenge. In
this paper, we define a flexible family of conjugate priors for the wide class of discrete hierarchical log-l ..."
Cited by 2 (1 self)
In Bayesian analysis of multi-way contingency tables, the selection of a prior distribution for either the log-linear parameters or the cell probabilities parameters is a major challenge. In this
paper, we define a flexible family of conjugate priors for the wide class of discrete hierarchical log-linear models, which includes the class of graphical models. These priors are defined as the
Diaconis–Ylvisaker conjugate priors on the log-linear parameters subject to “baseline constraints” under multinomial sampling. We also derive the induced prior on the cell probabilities and show that
the induced prior is a generalization of the hyper Dirichlet prior. We show that this prior has several desirable properties and illustrate its usefulness by identifying the most probable
decomposable, graphical and hierarchical log-linear models for a six-way contingency table.
"... We develop computational strategies for extended maximum likelihood estimation, as defined in Rinaldo (2006), for general classes of log-linear models of widespred use, under Poisson and
product-multinomial sampling schemes. We derive numerically efficient procedures for generating and manipulating ..."
Add to MetaCart
We develop computational strategies for extended maximum likelihood estimation, as defined in Rinaldo (2006), for general classes of log-linear models of widespread use, under Poisson and product-multinomial sampling schemes. We derive numerically efficient procedures for generating and manipulating design matrices and we propose various algorithms for computing the extended maximum likelihood estimates of the expectations of the cell counts. These algorithms make it possible to identify the set of estimable cell means for any given observable table and can be used for modifying traditional goodness-of-fit tests to accommodate a nonexistent MLE. We describe and take advantage of the connections between extended maximum likelihood
, 2008
"... We consider the specification of prior distributions for Bayesian model comparison, focusing on regression-type models. We propose a particular joint specification of the prior distribution
across models so that sensitivity of posterior model probabilities to the dispersion of prior distributions fo ..."
Add to MetaCart
We consider the specification of prior distributions for Bayesian model comparison, focusing on regression-type models. We propose a particular joint specification of the prior distribution across
models so that sensitivity of posterior model probabilities to the dispersion of prior distributions for the parameters of individual models (Lindley’s paradox) is diminished. We illustrate the
behavior of inferential and predictive posterior quantities in linear and log-linear regressions under our proposed prior densities with a series of simulated and real data examples.
, 2006
"... We develop computational strategies for extended maximum likelihood estimation, as defined in Rinaldo (2006), for general classes of log-linear models of widespred use, under Poisson and
product-multinomial sampling schemes. We derive numerically efficient procedures for generating and manipulating ..."
Add to MetaCart
We develop computational strategies for extended maximum likelihood estimation, as defined in Rinaldo (2006), for general classes of log-linear models of widespread use, under Poisson and product-multinomial sampling schemes. We derive numerically efficient procedures for generating and manipulating design matrices and we propose various algorithms for computing the extended maximum likelihood estimates of the expectations of the cell counts. These algorithms make it possible to identify the set of estimable cell means for any given observable table and can be used for modifying traditional goodness-of-fit tests to accommodate a nonexistent MLE. We describe and take advantage of the connections between extended maximum likelihood
, 2008
"... A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of
the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal co ..."
Add to MetaCart
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the
control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC
simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
the encyclopedic entry of commutativity
In mathematics, commutativity is the ability to change the order of something without changing the end result. It is a fundamental property in most branches of mathematics and many proofs depend on
it. The commutativity of simple operations was for many years implicitly assumed and the property was not given a name or attributed until the 19th century when mathematicians began to formalize the
theory of mathematics.
Common uses
The commutative property (or commutative law) is a property associated with binary operations and functions. Similarly, if the commutative property holds for a pair of elements under a certain binary
operation then it is said that the two elements commute under that operation.
In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of math, such as analysis and linear algebra the
commutativity of well known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs.
Mathematical definitions
The term "commutative" is used in several related senses.
1. A binary operation ∗ on a set S is said to be commutative if:
$\forall x, y \in S: x * y = y * x$
- An operation that does not satisfy the above property is called noncommutative.
2. One says that x commutes with y under ∗ if:
$x * y = y * x$
3. A binary function f:A×A → B is said to be commutative if:
$\forall x, y \in A: f(x, y) = f(y, x)$
History and etymology
Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have
assumed the commutative property of multiplication in his book Elements. Formal uses of the commutative property arose in the late 18th and early 19th century when mathematicians began to work on a
theory of functions. Today the commutative property is a well known and basic property used in most branches of mathematics. Simple versions of the commutative property are usually taught in
beginning mathematics courses.
The first use of the actual term commutative was in a memoir by Francois Servois in 1814, which used the word commutatives when describing functions that have what is now called the commutative
property. The word is a combination of the French word commuter meaning "to substitute or switch" and the suffix -ative meaning "tending to" so the word literally means "tending to substitute or
switch." The term then appeared in English in Philosophical Transactions of the Royal Society in 1844.
Related properties
The associative property is closely related to the commutative property. The associative property states that the order in which operations are performed does not affect the final result. In
contrast, the commutative property states that the order of the terms does not affect the final result.
Symmetry can be directly linked to commutativity. When a commutative operator is written as a binary function, the resulting function is symmetric across the line y = x. As an example, if we let a function f represent addition (a commutative operation) so that f(x,y) = x + y, then f is a symmetric function.
Commutative operations in everyday life
• Putting your shoes on resembles a commutative operation since it doesn't matter if you put the left or right shoe on first, the end result (having both shoes on), is the same.
• When making change we take advantage of the commutativity of addition. It doesn't matter what order we put the change in, it always adds to the same total.
Commutative operations in math
Two well-known examples of commutative binary operations are:
$y + z = z + y \quad \forall y, z \in \mathbb{R}$
For example 4 + 5 = 5 + 4, since both expressions equal 9.
$y z = z y \quad \forall y, z \in \mathbb{R}$
For example, 3 × 5 = 5 × 3, since both expressions equal 15.
Noncommutative operations in everyday life
• Washing and drying your clothes resembles a noncommutative operation: if you dry first and then wash, you get a significantly different result than if you wash first and then dry.
• The Rubik's Cube is noncommutative. For example, twisting the front face clockwise, the top face clockwise and the front face counterclockwise (FUF') does not yield the same result as twisting
the front face clockwise, then counterclockwise and finally twisting the top clockwise (FF'U). The twists do not commute. This is studied in group theory.
Noncommutative operations in math
Some noncommutative binary operations are:
• subtraction is noncommutative since $0 - 1 \neq 1 - 0$
• division is noncommutative since $1/2 \neq 2/1$
• matrix multiplication is noncommutative since
begin{bmatrix} 0 & 2 0 & 1 end{bmatrix} = begin{bmatrix} 1 & 1 0 & 1 end{bmatrix} cdot begin{bmatrix} 0 & 1 0 & 1 end{bmatrix} neq begin{bmatrix} 0 & 1 0 & 1 end{bmatrix} cdot begin{bmatrix} 1 & 1 0
& 1 end{bmatrix} = begin{bmatrix} 0 & 1 0 & 1 end{bmatrix}
Mathematical structures and commutativity
• An abelian group is a group whose group operation is commutative.
• A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is by definition always commutative.)
• In a field both addition and multiplication are commutative.
Number of results: 7,769
Saturday, February 26, 2011 at 11:19am by Bidyut
4.8 hours
Saturday, February 26, 2011 at 11:18am by Zach
Thursday, March 19, 2009 at 8:40pm by keyshia
I get [(x^2+1)√(x^2-4) + x^2]/(x^2 + 1) which matches your answer.
Thursday, December 4, 2008 at 10:14pm by Reiny
Whoops, sorry, forgot the 3^8
Thursday, March 19, 2009 at 8:40pm by Damon
for what values of k are the roots of 3x^2-6x+k=0 equal?
Saturday, February 26, 2011 at 11:19am by jackie
Oh, I see. Solve for P first then substitute it in. I guess I forgot that concept. Thanks!
Tuesday, February 3, 2009 at 9:41am by Vincent
Horizontal axis means of form: (y-k)^2 = 4p(x-h) if the vertex is at the origin (h,k) = (0,0) y^2 = 4 p x 36 = 4 p (4) p = (9/4) so y^2 = 9 x
Tuesday, February 3, 2009 at 9:41am by Damon
If (2f)(5) is supposed to mean 2 f(5), then the answer is 2 * (25 +1) = 52. Why did you bother to define g(x) if it isn't used in the question?
Monday, November 24, 2008 at 11:58pm by drwls
algebra2/trig (?)
As it is, it means: (2x)+[4/(x^2-9)] or do you mean (2x+4)/(x^2-9)?
Saturday, February 26, 2011 at 11:21am by MathMate
Evaluate the indicated function for f(x)=x2+1 and g(x)=x-4 Note: x2+1 means x squared plus one. (2f)(5)
Monday, November 24, 2008 at 11:58pm by Vincent
the length of a rectangle is 2x+4/(x^2-9) and its width is 3/(x-3). express the perimeter of the rectangle as a single fraction in simplest form.
Saturday, February 26, 2011 at 11:21am by jackie
Find an equation of the parabola with vertex at the origin. Passes through the point (4,6); horizontal axis. I know it has to be y^2=4px. But I don't know what to do with (4,6). Please show step by
step. Thanks.
Tuesday, February 3, 2009 at 9:41am by Vincent
Hi, I can't figure out what I'm supposed to do on this problem. Could someone help? Thanks. Find the coefficient of a of the term in the expansion of the binomial. Binomial (x^2+3)^12 Term ax^8
Thursday, March 19, 2009 at 8:40pm by Mike
1) looks like you are dealing with the cubes so term(n) = n^3 2) did you notice that each term is one less than a perfect square? e.g. 3 = 2^2 - 1 --term(1) 8 = 3^2 -1 -- term(2) 15 = 4^2 -1 24 = 5^2
- 1 35 = 6^2 - 1 so term(n) = (n+1)^2 -1
Saturday, April 28, 2012 at 7:55pm by Reiny
Algebra2/ Trig
During a baseball season, a company pledges a donation to a charity of $5000 plus $100 for every home run hit by the local team. Does it make more sense to represent this situation using a sequence
or a series? Explain your reasoning.
Monday, April 18, 2011 at 2:20pm by Naseeb
Write the next term of sequence,and then write the rule for the nth term. 1)1,8,27,64,....125 the nth is n 2) 3,8,15, 24,.....35 the nth is Can you help me and correct ? thank you
Saturday, April 28, 2012 at 7:55pm by JOSEPH
tina can paint a room in 8 hours, but when she and her friend emily work together, they can complete the job in 3 hours. how long would it take emily to paint the room alone?
Saturday, February 26, 2011 at 11:18am by jackie
ab-bc+ad-dc/ab+bc+ad+dc Simplify and state the domain
Wednesday, December 1, 2010 at 8:37pm by James Caulfield
ab-bc+ad-dc/ab+bc+ad+dc Simplify and state the domain
Wednesday, December 1, 2010 at 8:53pm by James Caulfield
For the roots of the quadratic equation ax^2+bx+c=0 to be equal, the discriminant Δ must equal zero, where Δ=b^2-4ac...(1) For the given equation 3x^2-6x+k=0, a=3, b=-6, c=k. Substitute a, b, and c in (1), equate to zero and solve for k. Post your answer for checking...
Saturday, February 26, 2011 at 11:19am by MathMate
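(For checking: substituting gives Δ = (-6)^2 - 4(3)(k) = 36 - 12k, so Δ = 0 requires k = 3. This worked step is an addition, not part of MathMate's answer.)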
Find all the zeros of the function and write the polynomial as a product of linear factors. g(x)=3x^3-4x^2+8x+8 On my graphing calculator it says that it is -2/3. But when I do it by hand using
synthetic division, I don't get a zero. I'm going crazy here... I've tried -2/3 and...
Tuesday, January 6, 2009 at 12:34am by Vincent
sqr(x^2-4)+(x^2)/x^2+1 I got ((x^2+1)*sqr(x^2-4)+x^2)/x^2+1 But my teacher gave (sqr(x^2-4)+x^2)/x^2+1 I did the problem over again but I can't figure out why my teacher got the answer. Can someone
confirm that my teacher is right or am I right? sqr=square root ^=to the power ...
Thursday, December 4, 2008 at 10:14pm by Vincent
you probably meant ... (ab-bc+ad-dc)/(ab+bc+ad+dc) = (b(a-c) + d(a-c))/(b(a+c) + d(a+c)) = (a-c)(b+d)/((a+c)(b+d)) = (a-c)/(a+c), a ≠ -c, b ≠ -d
Wednesday, December 1, 2010 at 8:53pm by Reiny
the General term(r+1) = C(12,r)(x^2)^(12-r)(3^r) = C(12,r)(x^(24-2r))(3^r) so ax^8 = C(12,r)(x^(24-2r))(3^r) then 24-2r = 8, r = 8. It must be the 9th term and it must be C(12,8)x^8(3^8) = 495(6561)x^8
so a = 3247695
Thursday, March 19, 2009 at 8:40pm by Reiny
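(A quick way to verify that arithmetic, added as an illustration; Python 3.8+ is assumed:)

from math import comb

print(comb(12, 8) * 3**8)   # 3247695, the coefficient a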
if x= -2/3 then one of the factors is 3x+2. I did algebraic long division and got x^2 - 2x + 4 with no remainder. so g(x)=3x^3-4x^2+8x+8 = (3x+2)(x^2-2x+4) so x= -2/3 or x = 1 ± √-3, which are complex numbers. you will not be able to express it as a product of only linear ...
Tuesday, January 6, 2009 at 12:34am by Reiny
Review the method at http://www.sparknotes.com/math/algebra2/polynomials/section3.rhtml You aren't doing it right
Sunday, January 18, 2009 at 10:22am by drwls
Trig 30
x/2 = 2pi/3 yields a unique solution of x = 4pi/3 Does this answer come from some trig equation? Your subject title suggests that. In that case find the period of the trig function, then add k
(period) to 4pi/3, k an integer.
Friday, March 19, 2010 at 7:59pm by Reiny
It wants the number in from of the x^8 term but in (p+q)^12 = p^12 + 12 p^11 q + 66 p^10 q^2 +220p^9q^3+ 495 p^8q^4 ...... we want the term with p^4 q^8 (which is the same as the term for p^8q^4)
because it is x^2 and not x If you either use a binomial expansion table or ...
Thursday, March 19, 2009 at 8:40pm by Damon
Sunday, November 6, 2011 at 6:12am by Ms. Sue
could someone please refresh my memory of the basic trig functions? ex) cos^2 + sin^2 = 1? That is a trig identity. http://www.sosmath.com/trig/Trig5/trig5/trig5.html
Thursday, April 12, 2007 at 10:18pm by raychael
please show me again in trig way
Saturday, May 17, 2008 at 12:40pm by Doni -- i didn't get this one, can you please show me the trig way a little bit more clearly
Algebra Help PLEASE
Since this is not my area of expertise, I searched Google under the key words "polynomial binomial division" to get these possible sources: http://www.sparknotes.com/math/algebra2/polynomials/
section2.rhtml http://www.sparknotes.com/math/algebra2/polynomials/section3.rhtml ...
Friday, June 6, 2008 at 4:27pm by PsyDAG
Subtract 2 pi from 7pi/3. The trig functions of 7pi/3 will be the same values as the trig functions of pi/3, which is 60 degrees. The cosine of pi/3 is 1/2. That makes the secant 2. The tangent is sqrt3. You should be able to look up or figure out the other functions of that ...
Tuesday, January 12, 2010 at 9:50pm by drwls
Monday, August 27, 2007 at 9:04pm by DrBob222
(y-6)/(y+4) = 2/7 ... help
Sunday, June 27, 2010 at 7:04pm by donna
(q+5)/3 + (q-2)/2 = 7/3
Sunday, June 27, 2010 at 7:44pm by me
Wednesday, January 26, 2011 at 6:03pm by janelle
no its (x+5)(x-3)
Wednesday, January 26, 2011 at 5:41pm by janelle
3/5 + 4/5 = 9/5 = 1 4/5
Sunday, March 13, 2011 at 5:16pm by PsyDAG
Wednesday, August 29, 2012 at 2:40pm by Steve
Thank you ! !
Wednesday, August 29, 2012 at 2:40pm by Unknown
If f = (1, 2), (2, 3), (3, 4), (4, 5), g = (1, -2), (3, -3), (5, -5), and h = (1, 0), (2, 1), (3, 2), Help? 1. (f)/(h) 2. g o f o h
Monday, September 17, 2012 at 6:05am by Unknown
Monday, October 8, 2012 at 5:56pm by Allysiah
Monday, October 15, 2012 at 11:54am by Anonymous
Help? f(x)=5 ; x=3,-2,-1,0,1,2,3
Friday, November 30, 2012 at 1:26pm by Unknown
i don't know
Thursday, September 27, 2012 at 11:51am by thamara
Tuesday, December 10, 2013 at 12:20pm by Steve
Tuesday, December 10, 2013 at 9:40am by bowershe
Math - Solving Trig Equations
Start by recalling the most important identity. My math teacher calls this "the #1 Identity." sin^2(x) + cos^2(x) = 1 We want to simplify our trig equation by writing everything in terms of sine.
Let's solve the #1 Identity for cos^2(x) because we have that in our trig ...
Wednesday, November 21, 2007 at 6:29pm by Michael
Thanks! So how did u get the 2500?
Tuesday, September 22, 2009 at 7:51pm by :)help please
solve: 8/(t + 5) = (t - 3)/(t + 5) + 1/3
Tuesday, December 22, 2009 at 11:32pm by aisha
I got it, .5
Friday, May 28, 2010 at 9:15am by jeff
Sunday, June 27, 2010 at 3:52pm by ann
P(Z ≤ -.64)
Sunday, June 27, 2010 at 5:08pm by felicia
it's supposed to be y=
Sunday, June 27, 2010 at 7:04pm by donna
(y-6)/(y+4) = 2/7
Sunday, June 27, 2010 at 7:04pm by donna
Sunday, June 27, 2010 at 7:33pm by donna
the 2 is supposed to be below q-2
Sunday, June 27, 2010 at 7:44pm by me
Where are the graphs?
Sunday, November 21, 2010 at 3:04pm by Henry
h = P - 2w.
Sunday, November 21, 2010 at 3:05pm by Henry
how do you factor this? x(x-3)+5(x-3)
Wednesday, January 26, 2011 at 5:41pm by janelle
Wednesday, January 26, 2011 at 5:19pm by helper
Correct, (x - 3)(x + 5)
Wednesday, January 26, 2011 at 5:41pm by helper
Sunday, March 20, 2011 at 7:35pm by angel
and your question is?
Sunday, March 20, 2011 at 7:35pm by bobpursley
Tuesday, May 17, 2011 at 9:16pm by marilyn
algebra2-help : (
Monday, November 9, 2009 at 7:40pm by kamran
Help ? 1. x-3x^1/2+2=0
Wednesday, August 29, 2012 at 11:20am by Unknown
Yes. it Was 10x ..
Monday, September 17, 2012 at 5:56am by Unknown
i got 4
Monday, October 8, 2012 at 5:56pm by Allysiah
Monday, October 8, 2012 at 6:04pm by Allysiah
Tuesday, October 12, 2010 at 6:57pm by Anonymous
Thursday, November 29, 2012 at 3:40pm by Steve
So it's -14?
Thursday, November 29, 2012 at 3:41pm by Victoria
I'd expect so.
Thursday, November 29, 2012 at 3:41pm by Steve
Thursday, November 29, 2012 at 9:26pm by PhysicsPro
srry :(
Tuesday, December 10, 2013 at 9:40am by Sam
Find all of the solutions between 0 and 2pi: 2sin(x)^2 = 2 + cos(x)
Thursday, November 13, 2008 at 9:16pm by Trig
its in the first chapter of my trig course. i know its review frm geometry and alg
Friday, November 14, 2008 at 11:25am by y912f
simplify to a constant or basic trig function.. 1+tan(x)/1+cot(x)
Sunday, March 14, 2010 at 1:14pm by hilde
This subject is not trig. You will have to use your own graphic calculator.
Sunday, May 15, 2011 at 12:10am by drwls
Find (if possible) the trig function of the quadrant angle.
Monday, June 6, 2011 at 3:59pm by Master chief
first of all, this is not trig, secondly your expression makes no sense.
Monday, January 7, 2013 at 10:03pm by Reiny
what is the exact value of 7pi for all trig func.?
Monday, February 1, 2010 at 10:23pm by claire
write a product of two trig functions that equals 1.
Sunday, November 6, 2011 at 9:38pm by Dani
any calculus text will list the trig functions and their derivatives.
Thursday, January 19, 2012 at 8:45am by Steve
Solve the following trig equation 2cos(x-π/6)=1
Tuesday, January 31, 2012 at 2:51am by Matt
I figured out that tan(theta)=-1 is 3pi/4+n(pi)
Thursday, March 22, 2012 at 12:28pm by TRig
Find one numerical value for cot x/cos x = 5
Monday, November 12, 2012 at 7:31am by TRIG HELP
csc thea= -2, quadrant 4. what are the six trig functions?
Monday, May 6, 2013 at 4:17pm by Shay
What kind of reflection does the trig function y = cos(4x/3 minus 1) have?
Monday, April 21, 2008 at 8:53pm by Priya
What kind of reflection does the trig function y = cos(4x/3 minus 1) have?
Tuesday, April 22, 2008 at 12:03am by Chaitanya
how do i do this? give the signs of the six trig functions for each angle: 74 degrees?
Wednesday, December 3, 2008 at 6:59pm by Tay
express the trig. ratios sinA,secA n tanA in terms of cotA.
Saturday, May 7, 2011 at 5:49am by Anonymous
How do you solve cos(2arcsin 1/4) using inverse trig. functions??!! PLEASE HELP ME!
Monday, May 9, 2011 at 11:20am by Tor
find the 5 trig ratios when sin theta = -5/13 and lies in quadrant 3
Wednesday, September 14, 2011 at 10:38pm by karen
graphing trig functions
I see the equation of a straight line, not a trig function
Sunday, May 20, 2012 at 10:15pm by Reiny
surely you have learned the basic definitions of the trig functions, tanØ = y/x = 4/3
Sunday, January 20, 2013 at 5:17pm by Reiny
What kind of reflections are the following trig functions? y = 3cos(x-1) y = sin(-3x+3) y = -2sin(x)-4
Sunday, April 20, 2008 at 10:27pm by Joshua
Life on the lattice
I promised there were going to be some interesting posts, and I feel this is one of them. I want to talk about harnessing the power of evolution for the extraction of excited state masses from
lattice QCD simulations.
OK, this sounds just outright crazy, right? Biology couldn't possibly have an impact on subnuclear physics (other than maybe by restricting the kinds of ideas our minds can conceive by the nature of
our brains, which could of course well mean that the ultimate theory, if it exists, is unthinkable for a human being, but that is a rather pessimistic view; I am also talking about QCD here). Well,
biology doesn't have any impact on what is after all a much more fundamental discipline, obviously, but Darwin's great insight has applications far beyond the scope of mere biology. This insight,
which I will roughly paraphrase as "
starting from a set of entities which are subject to random mutations and from which those least adapted to some external constraints are likely to be removed and displaced by new entities derived
from and similar to those not so removed, one will after a large enough time end up with a set of entities that are close to optimally adapted to the external constraints
", is of course the basis of the very active field of computer science known as evolutionary algorithms. And optimisation is at the core of extracting results from lattice simulations.
What people measure in lattice simulations are correlators of various lattice operators at different (euclidean) times, and these can be expanded in an eigenbasis of the Hamiltonian as
$C(t)=\left\langle O(t)O(0)\right\rangle = \sum_n c_n e^{-E_n t}$
(for periodic boundary conditions in the time direction the exponential becomes a cosh instead, but let's just ignore that for now), where the c_n measure the overlap between the eigenstates of the operator and those of the Hamiltonian, and the E_n are the energies of the Hamiltonian's eigenstates. Of course only states that have quantum numbers compatible with those of the operator O will contribute (since otherwise c_n = 0).
In order to extract the energies E_n from a measurement of the correlator <O(t)O(0)>, one needs to fit the measured data with a sum of exponentials, i.e. one has to solve a non-linear least-squares fitting problem. Now, there are of course a number of algorithms (such as Levenberg-Marquardt) that are excellent at solving this kind of problem, so why look any further? Unfortunately, there are a number of things that an algorithm such as Levenberg-Marquardt requires as input that are unknown in a typical lattice QCD data analysis situation: How many exponentials should the fitting ansatz use (obviously we can't fit all the infinitely many states)? Which range of times should be fitted (and which should be disregarded as dominated by noise or by the disregarded higher states)? A number of Bayesian techniques designed to deal with this problem have sprung up over time (such as constrained fitting), and some of those deserve a post of their own at some point.
From the evolutionary point of view, one can simply allow evolution to find the optimal values for difficult-to-optimise parameters like the fitting range and number of states to fit. Basically, one
sets up an ecosystem consisting of organisms that encode a fitting function complete with the range over which it attempts to fit the data. The fitness of each organism is taken to be proportional to
minus its χ²/(d.o.f.); this will tend to drive the evolution both towards increased fitting ranges and lower numbers of exponentials (to increase the number of degrees of freedom), but this tendency is counteracted by the worsening of χ². The idea is that if one subjects these organisms to a regimen of mutation, cross-breeding and selection, evolution will ultimately lead to an equilibrium where the competing demands for small χ² and large number of degrees of freedom balance in an optimal fashion.
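To make this concrete, here is a rough sketch of what such a scheme could look like. I should stress that this is only an illustration under a number of simplifying assumptions (Python with numpy/scipy; an organism that encodes just the triple of number of exponentials, start and end of the fit window; mutation-only breeding; naive starting guesses for the fit parameters), not actual analysis code:

import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *p):
    # p = (c_1, E_1, ..., c_n, E_n): a sum of n decaying exponentials
    n = len(p) // 2
    return sum(p[2*i] * np.exp(-p[2*i + 1] * t) for i in range(n))

def fitness(org, t, C, dC):
    # organism org = (n_exp, t_min, t_max); fitness = minus chi^2 per degree of freedom
    n_exp, t_min, t_max = org
    sel = (t >= t_min) & (t <= t_max)
    dof = int(sel.sum()) - 2 * n_exp
    if dof <= 0:
        return -np.inf
    try:
        popt, _ = curve_fit(multi_exp, t[sel], C[sel],
                            p0=[1.0, 0.5] * n_exp, sigma=dC[sel])
    except RuntimeError:              # fit failed to converge: a very unfit organism
        return -np.inf
    chi2 = np.sum(((C[sel] - multi_exp(t[sel], *popt)) / dC[sel]) ** 2)
    return -chi2 / dof

def mutate(org, t_top, rng):
    # randomly perturb one of the organism's three genes
    n_exp, lo, hi = org
    gene, step = int(rng.integers(3)), int(rng.choice([-1, 1]))
    if gene == 0:
        n_exp = max(1, n_exp + step)
    elif gene == 1:
        lo = int(np.clip(lo + step, 0, hi - 1))
    else:
        hi = int(np.clip(hi + step, lo + 1, t_top))
    return (n_exp, lo, hi)

def evolve(t, C, dC, pop_size=20, generations=100, seed=1):
    rng = np.random.default_rng(seed)
    t_top = len(t) - 1                # assumes t = 0, 1, ..., t_top lattice time slices
    pop = [(1 + int(rng.integers(3)), 0, t_top) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, t, C, dC), reverse=True)
        parents = pop[:pop_size // 2]             # selection: keep the fitter half
        children = [mutate(parents[int(rng.integers(len(parents)))], t_top, rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return pop[0]                     # fittest organism: (n_exp, t_min, t_max)

The equilibrium population then tells you not only the best-fit energies (by rerunning the fit for the winning organism) but also which fit window and how many states the data can actually support.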
After Rob Petry here in Regina brought up this idea, I have been toying around with it for a while, and so far I am cautiously optimistic that this may lead somewhere: for the synthetic data sets
that I let this method look at, it did pretty well in identifying the right number of exponentials to use when there was a clear-cut answer (such as when only finitely many were present to start with). So the general method is sound; it remains to be seen how well it does on actual lattice data.
Berichte der Arbeitsgruppe Technomathematik (AGTM Report)
14 search hits
Tensor Spherical Harmonics and Tensor Spherical Splines (1993)
Willi Freeden T. Gervens Michael Schreiner
In this paper, we deal with the problem of spherical interpolation of discretely given data of tensorial type. To this end, spherical tensor fields are investigated and a decomposition formula is
described. Tensor spherical harmonics are introduced as eigenfunctions of a tensorial analogon to the Beltrami operator and discussed in detail. Based on these preliminaries, a spline
interpolation process is described and error estimates are presented. Furthermore, some relations between the spline basis functions and the theory of radial basis functions are developed.
On a Kinetic Model for Shallow Water Waves (1993)
Jens Struckmeier
The system of shallow water waves is one of the classical examples of nonlinear, two-dimensional conservation laws. The paper investigates a simple kinetic equation depending on a parameter e which, in the limit e to 0, leads to the system of shallow water waves. The corresponding equilibrium distribution function has a compact support which depends on the eigenvalues of the hyperbolic system. It is shown that this kind of kinetic approach is restricted to a special class of nonlinear conservation laws. The kinetic model is used to develop a simple particle method for the numerical solution of shallow water waves. The particle method can be implemented in a straightforward way and produces sufficiently accurate results in test examples.
Nonorthogonal Expansions on the Sphere (1993)
Willi Freeden Michael Schreiner
Discrete families of functions with the property that every function in a certain space can be represented by its formal Fourier series expansion are developed on the sphere. A Fourier series
type expansion is obviously true if the family is an orthonormal basis of a Hilbert space, but it also can hold in situations where the family is not orthogonal and is overcomplete. Furthermore,
all functions in our approach are axisymmetric (depending only on the spherical distance) so that they can be used adequately in (rotation) invariant pseudodifferential equations on the sphere. Three types of frames are considered: (i) Abel-Poisson frames, (ii) Gauss-Weierstrass frames, and (iii) frames consisting of locally supported kernel functions. Abel-Poisson frames form families of harmonic functions and provide us with powerful
approximation tools in potential theory. Gauss-Weierstrass frames are intimately related to the diffusion equation on the sphere and play an important role in multiscale descriptions of image
processing on the sphere. The third class enables us to discuss spherical Fourier expansions by means of axisymmetric finite elements.
Multivariate First-Order Integer-Valued Autoregressions (1993)
Jürgen Franke T. Subba Rao
Modelling and Numerical Simulation of Collisions (1993)
Helmut Neunzert
In these lectures we will mainly treat a billiard game. Our particles will be hard spheres. Not always: we will also touch on cases where particles have interior energies due to rotation or vibration, which they exchange in a collision, and we will talk about chemical reactions happening during a collision. But many essential aspects occur already in the billiard case, which will therefore be paradigmatic. I do not know enough about semiconductors to handle collisions there - the Boltzmann case is certainly different but may give some idea even for the other cases.
Generalized Weighted Spline Approximation on the Sphere (1993)
Willi Freeden R. Franke
Spline functions that interpolate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating
function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A pointwise
convergence theorem containing explicit constants yields a useable error bound.
Fast Generation of Low-Discrepancy Sequences (1993)
Jens Struckmeier
The paper presents a fast implementation of a constructive method to generate a special class of low-discrepancy sequences which are based on von Neumann-Kakutani transformations. Such sequences can be used in various simulation codes where it is necessary to generate a certain number of uniformly distributed random numbers on the unit interval. From a theoretical point of view the uniformity of a sequence is measured in terms of the discrepancy, which is a special distance between a finite set of points and the uniform distribution on the unit interval. Numerical results are given on the cost efficiency of different generators on different hardware architectures as well as on the corresponding uniformity of the sequences. As an example for the efficient use of low-discrepancy sequences in a complex simulation code, results are presented for the simulation of a hypersonic rarefied gas flow.
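(As a small illustration of the construction involved: the orbit of the binary von Neumann-Kakutani transformation, started at 0, is the base-2 van der Corput sequence, which a few lines of code can generate. The following Python sketch is an added illustration, not code from the report.)

def van_der_corput(n, base=2):
    # n-th element of the base-b van der Corput sequence in [0, 1):
    # reflect the base-b digits of n about the radix point
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

print([van_der_corput(n) for n in range(8)])
# [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]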
Exact Solutions of Discrete Kinetic Models and Stationary Problems for the Plane Broadwell Model (1993)
A.V. Bobylev
Domain Decomposition: Linking Kinetic and Aerodynamic Descriptions (1993)
Reinhard Illner Helmut Neunzert
We discuss how kinetic and aerodynamic descriptions of a gas can be matched at some prescribed boundary. The boundary (matching) conditions arise from the requirement that the relevant moments (p,u,...) of the particle density function be continuous at the boundary, and from the requirement that the closure relation, by which the aerodynamic equations (holding on one side of the boundary) arise from the kinetic equation (holding on the other side), be satisfied at the boundary. We do a case study involving the Knudsen gas equation on one side and a system involving the Burgers equation on the other side in section 2, and a discussion of the coupling of the full Boltzmann equation with the compressible Navier-Stokes equations in section 3.
Construction of Particlesets to Simulate Rarefied Gases (1993)
Michael Hack
In this paper a new method is introduced to construct asymptotically f-distributed sequences of points in R^d. The algorithm is based on a transformation proposed by E. Hlawka and R. Mück. For the numerical tests a new procedure to evaluate the f-discrepancy in two dimensions is proposed.
Re: st: Rename problem: r(110) already defined
From "Friedrich Huebler" <fhuebler@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Rename problem: r(110) already defined
Date Wed, 13 Feb 2008 14:20:32 -0500
Thanks to everyone who responded to my -rename- problem. I added an IF
command to the -foreach- loop and now have a solution that (a) ignores
variable names that are already lowercase and (b) stops execution of
the do-file if the names of two variables in the data differ only with
regard to upper or lower case.
* r(varlist) is assumed to have been filled by an earlier command
* (e.g. -ds-); the line that sets it is not shown in this message.
local vars "`r(varlist)'"
foreach v of local vars {
    local newname = lower("`v'")
    * rename only when the lowercased name differs; if two variables
    * differ only by case, -rename- fails and the do-file stops
    if "`v'" != "`newname'" rename `v' `newname'
}
The lines above can be replaced by a single command if -renvars- is installed.
. renvars, lower
Thanks again.
One thing to be aware of is that you'll need to convert your image to
floating point if you want the difference image to be correct. If you
don't, it will stay as uint8 and it will clip values to 0 and you'll
have an incorrect image. Run this demo and you'll see what I mean:
% function test()
% Change the current folder to the folder of this m-file.
% (The line of code below is from Brett Shoelson of The Mathworks.)
cd(fileparts(which(mfilename)));
clc; % Clear command window.
clear; % Delete all variables.
close all; % Close all figure windows except those created by imtool.
imtool close all; % Close all figure windows created by imtool.
workspace; % Make sure the workspace panel is showing.
fontSize = 20;
% Read in standard MATLAB gray scale demo image.
grayImage = imread('cameraman.tif');
subplot(2, 2, 1);
imshow(grayImage, []);
title('Original Grayscale Image', 'FontSize', fontSize);
set(gcf, 'Position', get(0,'Screensize')); % Maximize figure.
% Calculate the mean gray level of the entire image.
meanGrayLevel = mean(grayImage(:))
% Calculate the difference with the result being a uint8 image.
uint8SubtractedImage = grayImage - meanGrayLevel;
subplot(2, 2, 2);
imshow(uint8SubtractedImage, []);
title('Difference uint8 Image', 'FontSize', fontSize);
% Calculate the mean after subtraction.
% It should be zero, but it won't be.
meanGrayLeveluint8 = mean(uint8SubtractedImage(:))
% Calculate the difference with the result being a uint8 image.
dblSubtractedImage = double(grayImage) - meanGrayLevel;
subplot(2, 2, 3);
imshow(dblSubtractedImage, []);
title('Difference double Image', 'FontSize', fontSize);
% Calculate the mean after subtraction. It will be zero.
meanGrayLevelDouble = mean(dblSubtractedImage(:))
% Display results.
message = sprintf(['The mean of the original uint8 image = %.2f\n' ...
    'The mean of the uint8 difference image = %.2f\n' ...
    'The mean of the double difference image = %.2f'], ...
    meanGrayLevel, meanGrayLeveluint8, meanGrayLevelDouble);
Ultimate Metal Forum - View Single Post - 48/2(9+3) = ???
Originally Posted by
Quote me any serious book that adds one more rule to the standard.
DM Division and Multiplication
AS Addition and Subtraction
Should it be
MD Multiplication and Division (but Multiplication first if there is no * sign)
AS Addition and Subtraction
AB means and will always mean just one thing: A*B
There is another rule that is not connected to PEMDAS but does affect it; they mention both rules in high school but for whatever reason never tell you that they are interconnected. That only happens in high school when they are cramming calculus down your throat.
Like I have said many times so far, you HAVE to simplify expressions; this was something you should have learned very well in any basic algebra class around the same time you learned PEMDAS. Simplifying expressions always comes before you can do anything else. The P in PEMDAS means that you have to simplify what is in the parentheses; simplify means get rid of the parentheses entirely before you continue on, and in our given equation the step to get rid of the parentheses looks something like this:
= 48/2(9+3)
= 48/2(12)
= 48/24
= 2
Originally Posted by
AB means A*B, but A(B) means (A*B).
The "2" in the original equation is not a value unto itself, it's just a quantity of (9+3). That's it. You don't divide 48 by 2, you divide it by two QUANTITIES of (9+3).
Exactly. You can use a quantity of an expression to divide another number, because that would mean you would go from having 2 quantities of (9+3) to 12 quantities of (9+3).
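For what it's worth, programming languages that force the multiplication to be written explicitly evaluate division and multiplication left to right, so the two readings genuinely come apart. A quick check (Python used here just as a neutral calculator):

print(48 / 2 * (9 + 3))    # parsed as (48/2)*(9+3) -> 288.0
print(48 / (2 * (9 + 3)))  # the "2(9+3) is one quantity" reading -> 2.0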
|
|
Manorhaven, NY Algebra Tutor
Find a Manorhaven, NY Algebra Tutor
...In addition to a comprehensive review of all the math concepts tested on the SAT, I teach students how to understand the test makers' logic, and how to approach every question systematically. I
also cover important strategies such as: when and how to guess; which questions, if any, to skip; and ...
18 Subjects: including algebra 1, algebra 2, geometry, GRE
...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond.
26 Subjects: including algebra 2, geometry, algebra 1, physics
...I have used these lessons to help individual students with their school math work and their SAT and ACT math work for the past 4 years as a private tutor.Algebra 1, or Integrated Algebra is the
study of solving equations for an unknown value. It also includes some coordinate geometry, statistics...
10 Subjects: including algebra 2, algebra 1, geometry, SAT math
...Feel free to contact me with any question.I have a Ph.D in Biology and I am a certified tutor in various subjects, Math, Biology, Physics, Chemistry, and French. I tutored students at various
levels and skills and I was thrilled to see how quickly they improved their grades. I am very patient and adapt to the student personality to help him/her achieve their goal.
18 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I have a great deal of experience with the college admissions process, and I am very familiar with the SAT, ACT and Advanced Placement examinations. I would love to help other students get into
the colleges of their dreams via consulting on applications, essays, etc. In terms of my personal exp...
43 Subjects: including algebra 1, algebra 2, English, writing
|
|
Optimization Online - A First-Order Smoothed Penalty Method for Compressed Sensing
A First-Order Smoothed Penalty Method for Compressed Sensing
Necdet Serhat Aybat
Garud Iyengar
Abstract: We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem min{||x||_1 : Ax = b}. SPA is efficient as long as the matrix-vector products Ax and A^T y can be
computed efficiently; in particular, A need not be an orthogonal projection matrix. SPA converges to the target signal by solving a sequence of penalized optimization sub-problems, and each
sub-problem is solved using Nesterov's optimal algorithm for simple sets [13, 14]. We show that the SPA iterates x_k are eps-feasible, i.e. ||A x_k - b||_2 <= eps, and eps-optimal, i.e.
| ||x_k||_1 - ||x*||_1 | <= eps, after O(eps^(-3/2)) iterations. We also bound the sub-optimality | ||x_k||_1 - ||x*||_1 | for any iterate x_k; thus, the user can stop the algorithm at any iteration k
with a guarantee on the sub-optimality. SPA can work with an L_1, L_2 or L_infinity penalty on the infeasibility, and can easily be extended to solve the relaxed recovery problem min{||x||_1 : ||Ax - b||_2 <= eps}.
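For the flavor of the approach, here is a minimal sketch (not the authors' SPA; the function names, constants, and the plain proximal-gradient inner loop are illustrative assumptions) of solving a sequence of penalized sub-problems with a first-order method, using only the products Ax and A^T y:

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def penalized_recover(A, b, n_outer=8, n_inner=300, mu=1.0):
    # Sketch: penalize the constraint Ax = b and tighten the weight mu
    # between rounds; the real SPA uses Nesterov's optimal method and
    # carries the iteration bounds quoted in the abstract.
    x = np.zeros(A.shape[1])
    lip = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of A^T(Ax - b)
    for _ in range(n_outer):
        step = 1.0 / (mu * lip)
        for _ in range(n_inner):
            grad = mu * (A.T @ (A @ x - b))   # gradient of (mu/2)||Ax - b||_2^2
            x = soft_threshold(x - step * grad, step)  # prox step for ||x||_1
        mu *= 10.0                          # tighten the penalty each round
    return x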
Keywords: Penalty method, First order method, Compressed sensing, Nesterov's method, Smooth approximations of nonsmooth functions, L1 minimization
Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization )
Category 2: Applications -- Science and Engineering (Biomedical Applications )
Citation: IEOR Department, Columbia University, April 2009
Download: [PDF]
Entry Submitted: 06/23/2009
Entry Accepted: 06/23/2009
Entry Last Modified: 03/24/2010
Modify/Update this entry
|
|
Wright State Calculus Instruction
• This site only deals with the Calculus I, II, III sequence, MTH 2300, 2310, 2320.
• Be sure to also visit the Calculus Laboratory Home Page!
• Links labelled (PDF) require Acrobat Reader.
• (09/06/13) Sample semester common finals are posted below
• (08/27/12) We are finally on semesters!
General Information
• Syllabi for Calculus I-III
• Departmental Policies concerning Calculus Courses
• Stewart,Calculus: Concepts and Contexts , 4th Edition (Brooks/Cole) is the text used in Calculus I-III
• Help for MTH 2300 (and MTH 2240 and MTH 2280) is available through the Math Learning Center
Sample Common Finals for MTH 2300
Sample Common Finals for MTH 2310
This page last changed on September 6, 2013.
Send comments and suggestions to Richard Mercer.
|
|
LaTeX in Freeplane
From Freeplane - free mind mapping and knowledge management software
Freeplane 1.2.x supports LaTeX formulae in boxes underneath nodes while Freeplane 1.3.x deprecates those boxes and adds LaTeX directly to node contents. Please see the relevant subsections below.
Thanks to the excellent JLaTeXMath!
LaTeX Text+Formulae in Freeplane 1.3.x
• LaTeX text is displayed inline in node content (as opposed to underneath nodes in 1.2.x)
• You can tell Freeplane to treat a node as LaTeX text by either:
□ using a "\latex " ("\latex" + <space or newline>) prefix
□ View->Properties panel, then Core text->Format->LaTeX
• By default the LaTeX interpreter is in text mode, so you need to use $...$ for (inline) formulae
• Automatic linebreaks are supported
• The editor supports LaTeX syntax highlighting
\latex my formula: $x_2=\frac{1}{2}$
Common/global LaTeX Macros
Freeplane has a textbox in Preferences->Plugins->LaTeX that allows you to enter code (usually macros) that will be inserted into every LaTeX node before the actual node content. Be aware, though, that
using this means your map will only be readable by someone else if he/she also includes the macros in his/her config!
"Unparsed LaTeX" (LaTeX for Export)
JLaTeXMath, the component used by Freeplane for rendering LaTeX, is focused on math and thus does not support, e.g., itemize or enumerate, among other things. However, some people want to export
complete LaTeX documents, including code not supported by JLaTeXMath, and if you try to use unsupported LaTeX in LaTeX nodes (Format=LaTeX or node prefix "\latex"), it will be exported correctly but you
will get ugly error boxes in Freeplane.
In order to solve this, we have added Format="Unparsed LaTeX" (translation may be different) and the node prefix "\unparsedlatex" (for symmetry, will not be translated). Nodes designated like this
will use LaTeX syntax highlighting and will be exported correctly, but will not be rendered with JLaTeXMath.
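For example, a node holding a list intended only for export might look like this (a sketch; the itemize will export fine but will not render inside Freeplane):

\unparsedlatex \begin{itemize} \item first point \item second point \end{itemize}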
Including LaTeX content from file system
If your node matches the pattern
then Freeplane will include the given file at the given position in the LaTeX export. Note that the export will fail if the document cannot be read.
Caveat: The file must be a well-formed XML document, so you must have a root tag and escape <, > and & (as &lt;, &gt; and &amp;), like this:
1 &amp; 2 \\
3 &amp; 4 \\
Combination of LaTeX and Groovy formulas
Here's how to format formula results as LaTeX:
1. Set node format to LaTeX.
2. Let your formula generate LaTeX code.
Example (copy 'n paste it into a map)
="\\LaTeX: \$\\sum_{children} = ${children.sum(0)}\$"
Note that the LaTeX symbol '\' has to be doubled in a double-quoted string and that a $ has to be escaped with a single '\' to prevent Groovy from interpreting it as the prefix of a variable.
Known Problems
• Array environments are maximized on the maximum node width
• align=right and align=center does not work well
• The syntax highlighting editor has problems with some unicode/chinese characters. If you experience this, turn off the editor in prefs->Plugins->LaTeX (Freeplane will then use the normal editor
for LaTeX).
Export solutions
There are many XSLT scripts out there; here is one from Igor Gartzia Olaizola that integrates well with Freeplane and also allows exporting to LaTeX Beamer presentations: https://sites.google.com/site/freemind2beamer/. The source code is on github.
Please give us feedback on the 1.3.x solution:
• How do you like the ways to treat a node as LaTeX?
• What do you think about the way we deprecate the 1.2.x formulae?
• Are you missing a feature in JLaTeXMath or JLaTeXMath integration?
• How do you like the syntax highlighting?
LaTeX Formulae in Freeplane 1.2.x
This type of LaTeX formulae in Freeplane is deprecated in Freeplane 1.3.x (the formula boxes will still be displayed and can be edited but you can't add new boxes)! Please see the 1.3.x section above
if you're using Freeplane 1.3.x.
• you can add a LaTeX formula to a node by running Edit->Node Extensions->Add LaTeX formula...
• you can edit a LaTeX formula related to a node by running Edit->Node Extensions->Edit LaTeX formula...
• you can remove a LaTeX formula by selecting Edit->Node Extensions->Remove LaTeX formula OR by using Edit->Node Extensions->Edit LaTeX formula... and specifying an empty text.
• by default the LaTeX interpreter is in math mode
• does not support automatic linebreaks
|
|
Recursion question?
03-20-2012, 04:31 AM
Recursion question?
I'm really not even sure what this question is asking. This is on a practice test for my exam on Recursion. Any help would be great....direction to take, etc
Write a recursive method
public String changeBase( int number, int base)
to change a base 10 number to another base between 2 and 9, returning the result as a string
of digits. If number is 0, return an empty string. Otherwise, append number modulo base onto
the result of a recursive call to changeBase of number divided by base and the new base. For
example, to convert 20 to base 8, calling changeBase( 20, 8) causes (20 % 8) to be appended
onto changeBase( 2, 8).
Thank you for your time!!
03-20-2012, 05:06 AM
Re: Recursion question?
03-23-2012, 05:55 PM
Re: Recursion question?
Nice homework dump. What is your question?
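For later readers: the recursion the problem statement describes is only a few lines. A sketch in Python (deliberately not the requested Java, so the shape is visible without handing in the homework):

def change_base(number, base):
    # 0 maps to the empty string; otherwise append number % base
    # onto the result of the recursive call on number // base.
    if number == 0:
        return ""
    return change_base(number // base, base) + str(number % base)

# change_base(20, 8) -> "24": (20 % 8) = 4 is appended onto change_base(2, 8) = "2"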
|
|
Anne Katz, owner of Katz Sport Shop, loans $8,000 to Shelley Slater to help her open an art shop. Shelley plans to repay Anne at the end of 8 years with interest compounded semiannually at 8 percent.
At the end of 8 years, Anne will receive?
In school we have all come across this formula: Amount = Principal * (1 + Rate of Interest/100) ^ Time. Here the interest is compounded semi-annually, so the number of
time periods doubles and the rate of interest per period is halved. That gives $8,000 * (1 + 0.04)^16, so Shelley will have to repay Anne $14,983.85 at the end of the eighth year.
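A quick sanity check of that arithmetic (Python used just as a calculator):

principal = 8000.0
rate_per_period = 0.08 / 2      # 8% annual, compounded semiannually
periods = 8 * 2                 # 8 years, two periods per year
amount = principal * (1 + rate_per_period) ** periods
print(round(amount, 2))         # 14983.85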
|
|
Test that the viewer can handle the gradientTransform and the patternTransform attribute on gradients and patterns respectively.
From top-down the appearance of objects is as follows.
The top rectangle has a linear gradient whose coordinate system has been scaled down by a half, so the gradient, travelling from left to right (from blue to red to lime), should only occupy the left
half of the rectangle.
The next rectangle has a radial gradient that has been translated to the center and skewed in the positive X direction by 45 degrees. Therefore the gradient should appear elliptical and rotated around
the center.
The last row contains a rectangle with a pattern on the fill. The transformation on the pattern moves the coordinate system to the top left of the rectangle, then scales it by a factor of 2, and then
skews it in the X direction by 45 degrees. The pattern consists of a 2 by 2 array of colored rectangles.
The rendered picture should match the reference image exactly, except for possible variations in the labelling text (per CSS2 rules).
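For reference, the first case exercises markup of roughly this shape (a sketch, not the actual test content):

<linearGradient id="grad" gradientTransform="scale(0.5)">
  <stop offset="0" stop-color="blue"/>
  <stop offset="0.5" stop-color="red"/>
  <stop offset="1" stop-color="lime"/>
</linearGradient>
<rect width="400" height="100" fill="url(#grad)"/>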
|
|
Haskell Code by HsColour
{-# OPTIONS -fglasgow-exts -fno-implicit-prelude #-}
{- |
Copyright : (c) Henning Thielemann 2008
License : GPL
Maintainer : synthesizer@henning-thielemann.de
Stability : provisional
Portability : requires multi-parameter type classes
Comb filters, useful for emphasis of tones with harmonics
and for repeated echos.
We cannot generalize this to "Synthesizer.Generic.Signal"
since we need control over the chunk size.
-}
module Synthesizer.Storable.Filter.Recursive.Comb where
import qualified Synthesizer.Storable.Signal as Sig
import qualified Synthesizer.Plain.Filter.Recursive.FirstOrder as Filt1
import qualified Synthesizer.Generic.Signal as SigG
import qualified Synthesizer.Generic.SampledValue as Sample
-- import qualified Synthesizer.Storable.Filter.Delay as Delay
import Foreign.Storable (Storable)
import qualified Algebra.Module as Module
-- import qualified Algebra.Field as Field
import qualified Algebra.Ring as Ring
import qualified Algebra.Additive as Additive
import Algebra.Module((*>))
import qualified Prelude as P
import PreludeBase
import NumericPrelude
{- |
The simplest version of the Karplus-Strong algorithm,
which is suitable for simulating a plucked string.
It is similar to the 'runProc' function.
-}
{-# INLINE karplusStrong #-}
karplusStrong ::
(Ring.C a, Module.C a v, Sample.C v) =>
Filt1.Parameter a -> Sig.T v -> Sig.T v
karplusStrong c wave =
Sig.delayLoop (SigG.modifyStatic Filt1.lowpassModifier c) wave
{- |
Infinitely many equi-delayed, exponentially decaying echoes.
The echoes are clipped to the input length.
We think it is easier (and simpler to do efficiently)
to pad the input with zeros or whatever
instead of cutting the result according to the input length.
-}
{-# INLINE run #-}
run :: (Module.C a v, Storable v) =>
Int -> a -> Sig.T v -> Sig.T v
run time gain =
Sig.delayLoopOverlap time (amplify gain)
{- |
Echoes of different delays.
The chunk size must be smaller than all of the delay times.
-}
{-# INLINE runMulti #-}
runMulti :: (Ring.C a, Module.C a v, Storable v) =>
[Int] -> a -> Sig.T v -> Sig.T v
runMulti times gain x =
let y = foldl
(Sig.zipWith (+)) x
(map (flip (Sig.delay Sig.defaultChunkSize zero) (amplify gain y)) times)
-- (map (flip Delay.staticPos (gain *> y)) times)
in y
{- | Echoes can be piped through an arbitrary signal processor. -}
{-# INLINE runProc #-}
runProc :: (Additive.C v, Storable v) =>
Int -> (Sig.T v -> Sig.T v) -> Sig.T v -> Sig.T v
runProc = Sig.delayLoopOverlap
{-# INLINE amplify #-}
amplify :: (Storable v, Module.C a v) =>
a -> Sig.T v -> Sig.T v
amplify gain = Sig.map (gain *>)
|
|
Patent US8112496 - Efficient algorithm for finding candidate objects for remote differential compression
This application is a continuation of prior U.S. application Ser. No. 10/948,980, filed on Sep. 24, 2004, now U.S. Pat. No. 7,613,787 which is hereby incorporated by reference.
The proliferation of networks such as intranets, extranets, and the internet has led to a large growth in the number of users that share information across wide networks. A maximum data transfer
rate is associated with each physical network based on the bandwidth associated with the transmission medium as well as other infrastructure related limitations. As a result of limited network
bandwidth, users can experience long delays in retrieving and transferring large amounts of data across the network.
Data compression techniques have become a popular way to transfer large amounts of data across a network with limited bandwidth. Data compression can be generally characterized as either lossless or
lossy. Lossless compression involves the transformation of a data set such that an exact reproduction of the data set can be retrieved by applying a decompression transformation. Lossless compression
is most often used to compact data, when an exact replica is required.
In the case where the recipient of a data object already has a previous, or older, version of that object, a lossless compression approach called Remote Differential Compression (RDC) may be used to
determine and only transfer the differences between the new and the old versions of the object. Since an RDC transfer only involves communicating the observed differences between the new and old
versions (for instance, in the case of files, file modification or last access dates, file attributes, or small changes to the file contents), the total amount of data transferred can be greatly
reduced. RDC can be combined with another lossless compression algorithm to further reduce the network traffic. The benefits of RDC are most significant in the case where large objects need to be
communicated frequently back and forth between computing devices and it is difficult or infeasible to maintain old copies of these objects, so that local differential algorithms cannot be used.
Briefly stated, the present invention is related to a method and system for finding candidate objects for remote differential compression. Objects are updated between two or more computing devices
using remote differential compression (RDC) techniques such that required data transfers are minimized. In one aspect, an algorithm provides enhanced efficiencies by allowing the sender to
communicate a small amount of meta-data to the receiver, and the receiver to use this meta-data to locate a set of objects that are similar to the object that needs to be transferred from the sender.
Once this set of similar objects has been found, the receiver may reuse any parts of these objects as needed during the RDC algorithm.
A more complete appreciation of the present invention and its improvements can be obtained by reference to the accompanying drawings, which are briefly summarized below, to the following detailed
description of illustrative embodiments of the invention, and to the appended claims.
Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following drawings.
FIG. 1 is a diagram illustrating an operating environment;
FIG. 2 is a diagram illustrating an example computing device;
FIGS. 3A and 3B are diagrams illustrating an example RDC procedure;
FIGS. 4A and 4B are diagrams illustrating process flows for the interaction between a local device and a remote device during an example RDC procedure;
FIGS. 5A and 5B are diagrams illustrating process flows for recursive remote differential compression of the signature and chunk length lists in an example interaction during an RDC procedure;
FIG. 6 is a diagram that graphically illustrates an example of recursive compression in an example RDC sequence;
FIG. 7 is a diagram illustrating the interaction of a client and server application using an example RDC procedure;
FIG. 8 is a diagram illustrating a process flow for an example chunking procedure;
FIG. 9 is a diagram of example instruction code for an example chunking procedure;
FIGS. 10 and 11 are diagrams of example instruction code for another example chunking procedure;
FIG. 12 illustrates an RDC algorithm modified to find and use candidate objects;
FIGS. 13 and 14 show a process and an example of a trait computation;
FIGS. 15 and 16 may be used when selecting the parameters for b and t;
FIG. 17 illustrates data structures that make up a compact representation of: an Object Map and a set of Trait Tables; and
FIG. 18 illustrates a process for computing similar traits, in accordance with aspects of the present invention.
Various embodiments of the present invention will be described in detail with reference to the drawings, where like reference numerals represent like parts and assemblies throughout the several
views. Reference to various embodiments does not limit the scope of the invention, which is limited only by the scope of the claims attached hereto. Additionally, any examples set forth in this
specification are not intended to be limiting and merely set forth some of the many possible embodiments for the claimed invention.
The present invention is described in the context of local and remote computing devices (or “devices”, for short) that have one or more commonly associated objects stored thereon. The terms “local”
and “remote” refer to one instance of the method. However, the same device may play both a “local” and a “remote” role in different instances. Remote Differential Compression (RDC) methods are used
to efficiently update the commonly associated objects over a network with limited-bandwidth. When a device having a new copy of an object needs to update a device having an older copy of the same
object, or of a similar object, the RDC method is employed to only transmit the differences between the objects over the network. An example described RDC method uses (1) a recursive approach for the
transmission of the RDC metadata, to reduce the amount of metadata transferred for large objects, and (2) a local maximum-based chunking method to increase the precision associated with the object
differencing such that bandwidth utilization is minimized. Some example applications that benefit from the described RDC methods include: peer-to-peer replication services, file-transfer protocols
such as SMB, virtual servers that transfer large images, email servers, cellular phone and PDA synchronization, database server replication, to name just a few.
Operating Environment
FIG. 1 is a diagram illustrating an example operating environment for the present invention. As illustrated in the figure, devices are arranged to communicate over a network. These devices may be
general purpose computing devices, special purpose computing devices, or any other appropriate devices that are connected to a network. The network 102 may correspond to any connectivity topology
including, but not limited to: a direct wired connection (e.g., parallel port, serial port, USB, IEEE 1394, etc), a wireless connection (e.g., IR port, Bluetooth port, etc.), a wired network, a
wireless network, a local area network, a wide area network, an ultra-wide area network, an internet, an intranet, and an extranet.
In an example interaction between device A (100) and device B (101), different versions of an object are locally stored on the two devices: object O[A] on 100 and object O[B] on 101. At some point,
device A (100) decides to update its copy of object O[A] with the copy (object O[B]) stored on device B (101), and sends a request to device B (101) to initiate the RDC method. In an alternate
embodiment, the RDC method could be initiated by device B (101).
Device A (100) and device B (101) both process their locally stored object and divide the associated data into a variable number of chunks in a data-dependent fashion (e.g., chunks 1-n for object O
[B], and chunks 1-k for object O[A], respectively). A set of signatures such as strong hashes (SHA) for the chunks are computed locally by both the devices. The devices both compile separate lists of
the signatures. During the next step of the RDC method, device B (101) transmits its computed list of signatures and chunk lengths 1-n to device A (100) over the network 102. Device A (100) evaluates
this list of signatures by comparing each received signature to its own generated signature list 1-k. Mismatches in the signature lists indicate one or more differences in the objects that require
correction. Device A (100) transmits a request for device B (101) to send the chunks that have been identified by the mismatches in the signature lists. Device B (101) subsequently compresses and
transmits the requested chunks, which are then reassembled by device A (100) after reception and decompression are accomplished. Device A (100) reassembles the received chunks together with its own
matching chunks to obtain a local copy of object O[B].
Example Computing Device
FIG. 2 is a block diagram of an example computing device that is arranged in accordance with the present invention. In a basic configuration, computing device 200 typically includes at least one
processing unit (202) and system memory (204). Depending on the exact configuration and type of computing device, system memory 204 may be volatile (such as RAM), non-volatile (such as ROM, flash
memory, etc.) or some combination of the two. System memory 204 typically includes an operating system (205); one or more program modules (206); and may include program data (207). This basic
configuration is illustrated in FIG. 2 by those components within dashed line 208.
Computing device 200 may also have additional features or functionality. For example, computing device 200 may also include additional data storage devices (removable and/or non-removable) such as,
for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 2 by removable storage 209 and non-removable storage 210. Computer storage media may include
volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program
modules or other data. System memory 204, removable storage 209 and non-removable storage 210 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM,
ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic
storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 200. Any such computer storage media may be part of device 200.
Computing device 200 may also have input device(s) 212 such as keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 214 such as a display, speakers, printer, etc. may
also be included. All these devices are known in the art and need not be discussed at length here.
Computing device 200 also contains communications connection(s) 216 that allow the device to communicate with other computing devices 218, such as over a network. Communications connection(s) 216 is
an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier
wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a
manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media
such as acoustic, RF, microwave, satellite, infrared and other wireless media. The term computer readable media as used herein includes both storage media and communication media.
Various procedures and interfaces may be implemented in one or more application programs that reside in system memory 204. In one example, the application program is a remote differential compression
algorithm that schedules file synchronization between the computing device (e.g., a client) and another remotely located computing device (e.g., a server). In another example, the application program
is a compression/decompression procedure that is provided in system memory 204 for compression and decompressing data. In still another example, the application program is a decryption procedure that
is provided in system memory 204 of a client device.
Remote Differential Compression (RDC)
FIGS. 3A and 3B are diagrams illustrating an example RDC procedure according to at least one aspect of the present invention. The number of chunks in particular can vary for each instance depending
on the actual objects O[A ]and O[B].
Referring to FIG. 3A, the basic RDC protocol is negotiated between two computing devices (device A and device B). The RDC protocol assumes implicitly that the devices A and B have two different
instances (or versions) of the same object or resource, which are identified by object instances (or versions) O[A ]and O[B], respectively. For the example illustrated in this figure, device A has an
old version of the resource O[A], while device B has a version O[B ]with a slight (or incremental) difference in the content (or data) associated with the resource.
The protocol for transferring the updated object O[B ]from device B to device A is described below. A similar protocol may be used to transfer an object from device A to device B, and that the
transfer can be initiated at the behest of either device A or device B without significantly changing the protocol described below.
□ 1. Device A sends device B a request to transfer Object O[B ]using the RDC protocol. In an alternate embodiment, device B initiates the transfer; in this case, the protocol skips step 1 and
starts at step 2 below.
□ 2. Device A partitions Object O[A ]into chunks 1-k, and computes a signature Sig[Ai ]and a length (or size in bytes) Len[Ai ]for each chunk 1 . . . k of Object O[A]. The partitioning into
chunks will be described in detail below. Device A stores the list of signatures and chunk lengths ((Sig[A1], Len[A1]) (Sig[Ak], Len[Ak])).
□ 3. Device B partitions Object O[B ]into chunks 1-n, and computes a signature Sig[Bi ]and a length Len[Bi ]for each chunk 1 . . . n of Object O[B]. The partitioning algorithm used in step 3
must match the one in step 2 above.
□ 4. Device B sends a list of its computed chunk signatures and chunk lengths ((Sig[B1], Len[B1]) . . . (Sig[Bn], Len[Bn])) that are associated with Object O[B ]to device A. The chunk length
information may be subsequently used by device A to request a particular set of chunks by identifying them with their start offset and their length. Because of the sequential nature of the
list, it is possible to compute the starting offset in bytes of each chunk Bi by adding up the lengths of all preceding chunks in the list.
☆ In another embodiment, the list of chunk signatures and chunk lengths is compactly encoded and further compressed using a lossless compression algorithm before being sent to device A.
□ 5. Upon receipt of this data, device A compares the received signature list against the signatures Sig[A1 ]. . . Sig[Ak ]that it computed for Object O[A ]in step 2, which is associated with
the old version of the content.
□ 6. Device A sends a request to device B for all the chunks whose signatures received in step 4 from device B failed to match any of the signatures computed by device A in step 2. For each
requested chunk Bi, the request comprises the chunk start offset computed by device A in step 4 and the chunk length.
□ 7. Device B sends the content associated with all the requested chunks to device A. The content sent by device B may be further compressed using a lossless compression algorithm before being
sent to device A.
□ 8. Device A reconstructs a local copy of Object O[B ]by using the chunks received in step 7 from device B, as well as its own chunks of Object O[A ]that matched signatures sent by device B in
step 4. The order in which the local and remote chunks are rearranged on device A is determined by the list of chunk signatures received by device A in step 4.
The partitioning steps 2 and 3 may occur in a data-dependent fashion that uses a fingerprinting function that is computed at every byte position in the associated object (O[A ]and O[B],
respectively). For a given position, the fingerprinting function is computed using a small data window surrounding that position in the object; the value of the fingerprinting function depends on all
the bytes of the object included in that window. The fingerprinting function can be any appropriate function, such as, for example, a hash function or a Rabin polynomial.
Chunk boundaries are determined at positions in the Object for which the fingerprinting function computes to a value that satisfies a chosen condition. The chunk signatures may be computed using a
cryptographically secure hash function (SHA), or some other hash function such as a collision-resistant hash function.
The signature and chunk length list sent in step 4 provides a basis for reconstructing the object using both the original chunks and the identified updated or new chunks. The chunks that are
requested in step 6 are identified by their offset and lengths. The object is reconstructed on device A by using local and remote chunks whose signatures match the ones received by device A in step
4, in the same order.
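As a concrete illustration of step 8, the reconstruction is essentially a table lookup over the ordered signature list. A minimal sketch (in Python; the data-structure names are assumptions for illustration, not the patent's):

def reconstruct(remote_sig_list, local_chunks, received_chunks):
    # remote_sig_list: ordered (signature, length) pairs from device B (step 4).
    # local_chunks: signature -> bytes for device A's own matching chunks.
    # received_chunks: signature -> bytes for chunks sent by device B (step 7).
    parts = []
    for sig, _length in remote_sig_list:
        chunk = local_chunks.get(sig)
        parts.append(chunk if chunk is not None else received_chunks[sig])
    return b"".join(parts)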
After the reconstruction step is completed by device A, Object O[A ]can be deleted and replaced by the copy of Object O[B ]that was reconstructed on device A. In other embodiments, device A may keep
Object O[A ]around for potential “reuse” of chunks during future RDC transfers.
For large objects, the basic RDC protocol instance illustrated in FIG. 3A incurs a significant fixed overhead in Step 4, even if Object O[A ]and Object O[B ]are very close, or identical. Given an
average chunk size C, the amount of information transmitted over the network in Step 4 is proportional to the size of Object O[B], specifically it is proportional to the size of Object O[B ]divided
by C, which is the number of chunks of Object B, and thus of (chunk signature, chunk length) pairs transmitted in step 4.
For example, referring to FIG. 6, a large image (e.g., a virtual hard disk image used by a virtual machine monitor such as, for example, Microsoft Virtual Server) may result in an Object (O[B]) with
a size of 9.1 GB. For an average chunk size C equal to 3 KB, the 9 GB object may result in 3 million chunks being generated for Object O[B], with 42 MB of associated signature and chunk length
information that needs to be sent over the network in Step 4. Since the 42 MB of signature information must be sent over the network even when the differences between Object O[A ]and Object O[B ](and
thus the amount of data that needs to be sent in Step 7) are very small, the fixed overhead cost of the protocol is excessively high.
This fixed overhead cost can be significantly reduced by using a recursive application of the RDC protocol instead of the signature information transfer in step 4. Referring to FIG. 3B, additional
steps 4.2-4.8 are described as follows below that replace step 4 of the basic RDC algorithm. Steps 4.2-4.8 correspond to a recursive application of steps 2-8 of the basic RDC protocol described
above. The recursive application can be further applied to step 4.4 below, and so on, up to any desired recursion depth.
□ 4.2. Device A performs a recursive chunking of its signature and chunk length list ((Sig[A1], Len[A1]) . . . (Sig[Ak], Len[Ak])) into recursive signature chunks, obtaining another list of
recursive signatures and recursive chunk lengths ((RSig[A1], RLen[A1]) . . . (RSig[As], RLen[As])), where s<<k.
□ 4.3. Device B recursively chunks up the list of signatures and chunk lengths ((Sig[B1], Len[B1]) . . . (Sig[Bn], Len[Bn])) to produce a list of recursive signatures and recursive chunk
lengths ((RSig[B1], RLen[B1]) . . . (RSig[Br], RLen[Br])), where r<<n.
□ 4.4. Device B sends an ordered list of recursive signatures and recursive chunk lengths ((RSig[B1], RLen[B1]) . . . (RSig[Br], RLen[Br])) to device A. The list of recursive chunk signatures
and recursive chunk lengths is compactly encoded and may be further compressed using a lossless compression algorithm before being sent to device A.
□ 4.5. Device A compares the recursive signatures received from device B with its own list of recursive signatures computed in Step 4.2.
□ 4.6. Device A sends a request to device B for every distinct recursive signature chunk (with recursive signature RSig[Bk]) for which device A does not have a matching recursive signature in its
set (RSig[A1] . . . RSig[As]).
□ 4.7. Device B sends device A the requested recursive signature chunks. The requested recursive signature chunks may be further compressed using a lossless compression algorithm before being
sent to device A.
□ 4.8. Device A reconstructs the list of signatures and chunk information ((Sig[B1], Len[B1]) . . . (Sig[Bn], Len[Bn])) using the locally matching recursive signature chunks, and the recursive
chunks received from device B in Step 4.7.
After step 4.8 above is completed, execution continues at step 5 of the basic RDC protocol described above, which is illustrated in FIG. 3A.
As a result of the recursive chunking operations, the number of recursive signatures associated with the objects is reduced by a factor equal to the average chunk size C, yielding a significantly
smaller number of recursive signatures (s<<k for object O[A] and r<<n for object O[B], respectively). In one embodiment, the same chunking parameters could be used for chunking the signatures as for
chunking the original objects O[A] and O[B]. In an alternate embodiment, other chunking parameters may be used for the recursive steps.
For very large objects the above recursive steps can be applied k times, where k≧1. For an average chunk size of C, recursive chunking may reduce the size of the signature traffic over the network
(steps 4.2 through 4.8) by a factor approximately corresponding to C^k. Since C is relatively large, a recursion depth of greater than one may only be necessary for very large objects.
In one embodiment, the number of recursive steps may be dynamically determined by considering parameters that include one or more of the following: the expected average chunk size, the size of the
objects O[A ]and/or O[B], the data format of the objects O[A ]and/or O[B], the latency and bandwidth characteristics of the network connecting device A and device B.
The fingerprinting function used in step 2 is matched to the fingerprinting function that is used in step 3. Similarly, the fingerprinting function used in step 4.2 is matched to the fingerprinting
function that is used in step 4.3. The fingerprinting function from steps 2-3 can optionally be matched to the fingerprinting function from steps 4.2-4.3.
As described previously, each fingerprinting function uses a small data window that surrounds a position in the object; where the value associated with the fingerprinting function depends on all the
bytes of the object that are included inside the data window. The size of the data window can be dynamically adjusted based on one or more criteria. Furthermore, the chunking procedure uses the value
of the fingerprinting function and one or more additional chunking parameters to determine the chunk boundaries in steps 2-3 and 4.2-4.3 above.
By dynamically changing the window size and the chunking parameters, the chunk boundaries are adjusted such that any necessary data transfers are accomplished with minimal consumption of the
available bandwidth.
Example criteria for adjusting the window size and the chunking parameters include: a data type associated with the object, environmental constraints, a usage model, the latency and bandwidth
characteristics of the network connecting device A and device B, and any other appropriate model for determining average data transfer block sizes. Example data types include word processing files,
database images, spreadsheets, presentation slide shows, and graphic images. An example usage model may be where the average number of bytes required in a typical data transfer is monitored.
Changes to a single element within an application program can result in a number of changes to the associated datum and/or file. Since most application programs have an associated file type, the file
type is one possible criteria that is worthy of consideration in adjusting the window size and the chunking parameters. In one example, the modification of a single character in a word processing
document results in approximately 100 bytes being changed in the associated file. In another example, the modification of a single element in a database application results in 1000 bytes being
changed in the database index file. For each example, the appropriate window size and chunking parameters may be different such that the chunking procedure has an appropriate granularity that is
optimized based on the particular application.
Example Process Flow
FIGS. 4A and 4B are diagrams illustrating process flows for the interaction between a local device (e.g., device A) and a remote device (e.g., device B) during an example RDC procedure that is
arranged in accordance with at least one aspect of the present invention. The left hand side of FIG. 4A illustrates steps 400-413 that are operated on the local device A, while the right hand side of
FIG. 4A illustrates steps 450-456 that are operated on the remote device B.
As illustrated in FIG. 4A, the interaction starts by device A requesting an RDC transfer of object O[B ]in step 400, and device B receiving this request in step 450. Following this, both the local
device A and remote device B independently compute fingerprints in steps 401 and 451, divide their respective objects into chunks in steps 402 and 452, and compute signatures (e.g., SHA) for each
chunk in steps 403 and 453, respectively.
In step 454, device B sends the signature and chunk length list computed in steps 452 and 453 to device A, which receives this information in step 404.
In step 405, the local device A initializes the list of requested chunks to the empty list, and initializes the tracking offset for the remote chunks to 0. In step 406, the next (signature, chunk
length) pair (Sig[Bi], Len[Bi]) is selected for consideration from the list received in step 404. In step 407, device A checks whether the signature Sig[Bi ]selected in step 406 matches any of the
signatures it computed during step 403. If it matches, execution continues at step 409. If it doesn't match, the tracking remote chunk offset and the length in bytes Len[Bi ]are added to the request
list in step 408. At step 409, the tracking offset is incremented by the length of the current chunk Len[Bi].
In step 410, the local device A tests whether all (signature, chunk length) pairs received in step 404 have been processed. If not, execution continues at step 406. Otherwise, the chunk request list
is suitably encoded in a compact fashion, compressed, and sent to the remote device B at step 411.
The remote device B receives the compressed list of chunks at step 455, decompresses it, then compresses and sends back the chunk data at step 456.
The local device receives and decompresses the requested chunk data at step 412. Using the local copy of the object O[A] and the received chunk data, the local device reassembles a local copy of O[B]
at step 413.
FIG. 4B illustrates a detailed example for step 413 from FIG. 4A. Processing continues at step 414, where the local device A initializes the reconstructed object to empty.
In step 415, the next (signature, chunk length) pair (Sig[Bi], Len[Bi]) is selected for consideration from the list received in step 404. In step 416, device A checks whether the signature Sig[Bi ]
selected in step 417 matches any of the signatures it computed during step 403.
If it matches, execution continues at step 417, where the corresponding local chunk is appended to the reconstructed object. If it doesn't match, the received and decompressed remote chunk is
appended to the reconstructed object in step 418.
In step 419, the local device A tests whether all (signature, chunk length) pairs received in step 404 have been processed. If not, execution continues at step 415. Otherwise, the reconstructed
object is used to replace the old copy of the object O[A ]on device A in step 420.
Example Recursive Signature Transfer Process Flow
FIGS. 5A and 5B are diagrams illustrating process flows for recursive transfer of the signature and chunk length list in an example RDC procedure that is arranged according to at least one aspect of
the present invention. The below described procedure may be applied to both the local and remote devices that are attempting to update commonly associated objects.
The left hand side of FIG. 5A illustrates steps 501-513 that are operated on the local device A, while the right hand side of FIG. 5A illustrates steps 551-556 that are operated on the remote device
B. Steps 501-513 replace step 404 in FIG. 4A while steps 551-556 replace step 454 in FIG. 4A.
In steps 501 and 551, both the local device A and the remote device B independently compute recursive fingerprints of their signature and chunk length lists ((Sig[A1],Len[A1]), . . . (Sig[Ak],Len[Ak]))
and ((Sig[B1],Len[B1]), . . . (Sig[Bn],Len[Bn])), respectively, which had been computed in steps 402/403 and 452/453, respectively. In steps 502 and 552 the devices divide their respective signature and
chunk length lists into recursive chunks, and in steps 503 and 553 compute recursive signatures (e.g., SHA) for each recursive chunk, respectively.
In step 554, device B sends the recursive signature and chunk length list computed in steps 552 and 553 to device A, which receives this information in step 504.
In step 505, the local device A initializes the list of requested recursive chunks to the empty list, and initializes the tracking remote recursive offset for the remote recursive chunks to 0. In
step 506, the next (recursive signature, recursive chunk length) pair (RSig[Bi], RLen[Bi]) is selected for consideration from the list received in step 504. In step 507, device A checks whether the
recursive signature RSig[Bi ]selected in step 506 matches any of the recursive signatures it computed during step 503. If it matches, execution continues at step 509. If it doesn't match, the
tracking remote recursive chunk offset and the length in bytes RLen[Bi ]are added to the request list in step 508. At step 509, the tracking remote recursive offset is incremented by the length of
the current recursive chunk RLen[Bi].
In step 510, the local device A tests whether all (recursive signature, recursive chunk length) pairs received in step 504 have been processed. If not, execution continues at step 506. Otherwise, the
recursive chunk request list is compactly encoded, compressed, and sent to the remote device B at step 511.
The remote device B receives the compressed list of recursive chunks at step 555, decompresses the list, then compresses and sends back the recursive chunk data at step 556.
The local device receives and decompresses the requested recursive chunk data at step 512. Using the local copy of the signature and chunk length list ((Sig[A1],Len[A1]), . . . (Sig[Ak],Len[Ak])) and the
received recursive chunk data, the local device reassembles a local copy of the signature and chunk length list ((Sig[B1],Len[B1]), . . . (Sig[Bn],Len[Bn])) at step 513. Execution then continues at
step 405 in FIG. 4A.
FIG. 5B illustrates a detailed example for step 513 from FIG. 5A. Processing continues at step 514, where the local device A initializes the list of remote signatures and chunk lengths, SIGCL, to the
empty list.
In step 515, the next (recursive signature, recursive chunk length) pair (RSig[Bi], RLen[Bi]) is selected for consideration from the list received in step 504. In step 516, device A checks whether
the recursive signature RSig[Bi ]selected in step 515 matches any of the recursive signatures it computed during step 503.
If it matches, execution continues at step 517, where device A appends the corresponding local recursive chunk to SIGCL. If it doesn't match, the remote received recursive chunk is appended to SIGCL
at step 518.
In step 519, the local device A tests whether all (recursive signature, recursive chunk length) pairs received in step 504 have been processed. If not, execution continues at step 515. Otherwise, the
local copy of the signature and chunk length list ((Sig[B1],Len[B1]), . . . (Sig[Bn],Len[Bn])) is set to the value of SIGCL in step 520. Execution then continues back to step 405 in FIG. 4A.
The recursive signature and chunk length list may optionally be evaluated to determine if additional recursive remote differential compression is necessary to minimize bandwidth utilization as
previously described. The recursive signature and chunk length list can be recursively compressed using the described chunking procedure by replacing steps 504 and 554 with another instance of the
RDC procedure, and so on, until the desired compression level is achieved. After the recursive signature list is sufficiently compressed, the recursive signature list is returned for transmission
between the remote and local devices as previously described.
FIG. 6 is a diagram that graphically illustrates an example of recursive compression in an example RDC sequence that is arranged in accordance with an example embodiment. For the example illustrated
in FIG. 6, the original object is 9.1 GB of data. A signature and chunk length list is compiled using a chunking procedure, where the signature and chunk length list results in 3 million chunks (or a
size of 42 MB). After a first recursive step, the signature list is divided into 33 thousand chunks and reduced to a recursive signature and recursive chunk length list with size 33 KB. By
recursively compressing the signature list, bandwidth utilization for transferring the signature list is thus dramatically reduced, from 42 MB to about 395 KB.
Example Object Updating
FIG. 7 is a diagram illustrating the interaction of a client and server application using an example RDC procedure that is arranged according to at least one aspect of the present invention. The
original file on both the server and the client contained text “The quick fox jumped over the lazy brown dog. The dog was so lazy that he didn't notice the fox jumping over him.”
At a subsequent time, the file on the server is updated to: “The quick fox jumped over the lazy brown dog. The brown dog was so lazy that he didn't notice the fox jumping over him.”
As described previously, the client periodically requests the file to be updated. The client and server both chunk the object (the text) into chunks as illustrated. On the client, the chunks are:
“The quick fox jumped”, “over the lazy brown dog.”, “The dog was so lazy that he didn't notice”, and “the fox jumping over him.”; the client signature list is generated as: SHA[11], SHA[12], SHA[13],
and SHA[14]. On the server, the chunks are: “The quick fox jumped”, “over the lazy brown dog.”, “The brown dog was”, “so lazy that he didn't notice”, and “the fox jumping over him.”; the server
signature list is generated as: SHA[21], SHA[22], SHA[23], SHA[24], and SHA[25].
The server transmits the signature list (SHA[21]-SHA[25]) using a recursive signature compression technique as previously described. The client recognizes that the locally stored signature list (SHA
[11]-SHA[14]) does not match the received signature list (SHA[21]-SHA[25]), and requests the missing chunks 3 and 4 from the server. The server compresses and transmits chunks 3 and 4 (“The brown dog
was”, and “so lazy that he didn't notice”). The client receives the compressed chunks, decompresses them, and updates the file as illustrated in FIG. 7.
Chunking Analysis
The effectiveness of the basic RDC procedure described above may be increased by optimizing the chunking procedures that are used to chunk the object data and/or chunk the signature and chunk length lists.
The basic RDC procedure has a network communication overhead cost that is identified by the sum of:
(S1) |Signatures and chunk lengths from B| = |O[B]| * |SigLen| / C, where |O[B]| is the size in bytes of Object O[B], SigLen is the size in bytes of a (signature, chunk length) pair, and C is the expected
average chunk size in bytes; and
(S2) Σ chunk_length, where (signature, chunk_length) ∈ Signatures from B
and signature ∉ Signatures from A
The communication cost thus benefits from a large average chunk size and a large intersection between the remote and local chunks. The choice of how objects are cut into chunks determines the quality
of the protocol. The local and remote device must agree, without prior communication, on where to cut an object. The following describes and analyzes various methods for finding cuts.
The following characteristics are assumed to be known for the cutting algorithm:
1. Slack: The number of bytes required for chunks to reconcile between file differences. Consider sequences s1, s2, and s3, and form the two sequences s1 s3 and s2 s3 by concatenation. Generate the chunks
for those two sequences, Chunks1 and Chunks2. If Chunks1′ and Chunks2′ are the sums of the chunk lengths from Chunks1 and Chunks2, respectively, until the first common suffix is reached, the slack in
bytes is given by the following formula:
slack = Chunks1′ − |s1| = Chunks2′ − |s2|
2. Average Chunk Size C:
When Objects O[A] and O[B] have S segments in common with average size K, the number of chunks that can be obtained locally on the client is given by:
and (S2) above rewrites to:
Thus, a chunking algorithm that minimizes slack will minimize the number of bytes sent over the wire. It is therefore advantageous to use chunking algorithms that minimize the expected slack.
Fingerprinting Functions
All chunking algorithms use a fingerprinting function, or hash, that depends on a small window, that is, a limited sequence of bytes. The execution time of the hash algorithms used for chunking is
independent of the hash window size when those algorithms are amenable to finite differencing (strength reduction) optimizations. Thus, for a hash window of size k it should be easy (require only
a constant number of steps) to compute the hash #[b[1], . . . , b[k−1], b[k]] using only b[0], b[k], and #[b[0], b[1], . . . , b[k−1]]. Various hashing functions can be employed, such as hash functions
using Rabin polynomials, as well as other hash functions that appear computationally more efficient based on tables of pre-computed random numbers.
In one example, a 32 bit Adler hash based on the rolling checksum can be used as the hashing function for fingerprinting. This procedure provides a reasonably good random hash function by using a
fixed table with 256 entries, each a precomputed 16 bit random number. The table is used to convert fingerprinted bytes into a random 16 bit number. The 32 bit hash is split into two 16 bit numbers
sum1 and sum2, which are updated given the procedure:
sum1 += table[b[k]] − table[b[0]]
sum2 += sum1 − k * table[b[0]]
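As an illustration, here is a minimal Python sketch of such a rolling checksum. The 48-byte window, the seeded random table, and the helper names are illustrative assumptions, not taken from any particular implementation:

import random

K = 48                                                 # hash window size (assumed)
random.seed(0)
TABLE = [random.getrandbits(16) for _ in range(256)]   # fixed table of random 16 bit numbers

def roll_init(window):
    """Compute (sum1, sum2) over an initial window of K bytes."""
    sum1 = sum2 = 0
    for byte in window:
        sum1 = (sum1 + TABLE[byte]) & 0xFFFF
        sum2 = (sum2 + sum1) & 0xFFFF
    return sum1, sum2

def roll_update(sum1, sum2, old, new):
    """Slide the window one byte in O(1), per the update rules above."""
    sum1 = (sum1 + TABLE[new] - TABLE[old]) & 0xFFFF
    sum2 = (sum2 + sum1 - K * TABLE[old]) & 0xFFFF
    return sum1, sum2

def rolling_hashes(data):
    """Yield the 32 bit hash for every window position in data (bytes)."""
    sum1, sum2 = roll_init(data[:K])
    yield (sum2 << 16) | sum1
    for i in range(K, len(data)):
        sum1, sum2 = roll_update(sum1, sum2, data[i - K], data[i])
        yield (sum2 << 16) | sum1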
In another example, a 64 bit random hash with cyclic shifting may be used as the hashing function for fingerprinting. The period of a cyclic shift is bounded by the size of the hash value. Thus,
using a 64 bit hash value sets the period of the hash to 64. The procedure for updating the hash is given as:
hash = hash ^ ((table[b[0]] << l) | (table[b[0]] >> u)) ^ table[b[k]];
where l = k % 64 and u = 64 − l
In still another example, other shifting methods may be employed to provide fingerprinting. Straightforward cyclic shifting produces a period of limited length, and is bounded by the size of the
hash value. Other permutations have longer periods. For instance, the permutation given by the cycles (1 2 3 0) (5 6 7 8 9 10 11 12 13 14 4) (16 17 18 19 20 21 15) (23 24 25 26 22) (28 29 27) (31 30)
has a period of length 4*3*5*7*11=4620. The single application of this example permutation can be computed using a right shift followed by operations that patch up the positions at the beginning of
each interval.
Analysis of Previous Art for Chunking at Pre-Determined Patterns
Previous chunking methods are determined by computing a fingerprinting hash with a pre-determined window size k (=48), and identifying cut points based on whether a subset of the hash bits match a
pre-determined pattern. With random hash values, this pattern may as well be 0, and the relevant subset may as well be a prefix of the hash. In basic instructions, this translates to a predicate of
the form:
CutPoint(hash)≡0==(hash & ((1<<c)−1)),
where c is the number of bits that are to be matched against.
Since the probability for a match given a random hash function is 2^−c, an average chunk size C=2^c results. However, neither the minimal, nor the maximal chunk size is determined by this procedure.
If a minimal chunk length of m is imposed, then the average chunk size is:
C = m + 2^c
A rough estimate of the expected slack is obtained by considering streams s[1]s[3] and s[2]s[3]. Cut points in s[1] and s[2] may appear at arbitrary places. Since the average chunk length is C = m + 2^c,
about (2^c/C)^2 of the last cut-points in s[1] and s[2] will be beyond distance m. They will contribute slack of around 2^c. The remaining 1 − (2^c/C)^2 contribute slack of length about C. The
expected slack will then be around (2^c/C)^2 * (2^c/C) + (1 − (2^c/C)^2) * 1 = (2^c/C)^3 + 1 − (2^c/C)^2, which has its global minimum at m = 2^(c−1), with a value of about 23/27 = 0.85. A more precise
analysis gives a somewhat lower estimate for the remaining 1 − (2^c/C)^2 fraction, but will also need to compensate for cuts within distance m inside s[3], which contributes to a higher estimate.
Thus, the expected slack for the prior art is approximately 0.85*C.
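In code, the whole pre-determined-pattern scheme is a few lines. A hedged Python sketch, reusing the hypothetical rolling_hashes helper from the earlier sketch (c and m are free parameters):

def chunk_at_pattern(data, c, m):
    """Cut where the low c bits of the rolling hash are all zero,
    skipping candidates closer than m bytes to the previous cut."""
    mask = (1 << c) - 1
    cuts, last = [], 0
    for pos, h in enumerate(rolling_hashes(data)):
        offset = pos + K - 1            # offset of the window's last byte
        if offset - last >= m and (h & mask) == 0:
            cuts.append(offset)
            last = offset
    return cuts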
Chunking at Filters (New Art)
Chunking at filters is based on fixing a filter, which is a sequence of patterns of length m, and matching the sequence of fingerprinting hashes against the filter. When the filter does not allow a
sequence of hashes to match both a prefix and a suffix of the filter it can be inferred that the minimal distance between any two matches must be at least m. An example filter may be obtained from
the CutPoint predicate used in the previous art, by setting the first m−1 patterns to
0!=(hash & ((1<<c)−1))
and the last pattern to:
0==(hash & ((1<<c)−1)).
The probability of matching this filter is given by (1−p)^(m−1) * p, where p is 2^−c. One may compute that the expected chunk length is given by the inverse of the probability of matching a filter (it
is required that the filter not allow a sequence to match both a prefix and suffix), thus the expected length of the example filter is (1−p)^(−(m−1)) * p^(−1). This length is minimized by setting
p := 1/m, and it turns out to be around e*m. The average slack hovers around 0.8, as can be verified by those skilled in the art. An alternative embodiment of this method uses a pattern that works directly
with the raw input and does not use rolling hashes.
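The rolling-hash variant of the example filter can be sketched the same way (same assumed helpers as before); the cut fires only when m−1 non-matching hashes are followed by a matching one, which is what enforces the minimal gap of m:

def chunk_at_filter(data, c, m):
    """Cut where the hash matches the pattern after m-1 consecutive misses."""
    mask = (1 << c) - 1
    cuts, misses = [], 0                # misses = consecutive non-matching hashes
    for pos, h in enumerate(rolling_hashes(data)):
        if (h & mask) == 0:
            if misses >= m - 1:
                cuts.append(pos + K - 1)
            misses = 0
        else:
            misses += 1
    return cuts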
Chunking at Local Maxima (New Art)
Chunking at Local Maxima is based on choosing as cut points positions that are maximal within a bounded horizon. In the following, we shall use h for the value of the horizon. We say that the hash at
position offset is an h-local maximum if the hash values at offsets offset−h, . . . , offset−1, as well as offset+1, . . . , offset+h, are all smaller than the hash value at offset. In other words, all
positions h steps to the left and h steps to the right have lesser hash values. Those skilled in the art will recognize that local maxima may be replaced by local minima or any other metric-based
comparison (such as “closest to the median hash value”).
The set of local maxima for an object of size n may be computed in time bounded by 2·n operations, such that the cost of computing the set of local maxima is close to or the same as the cost of
computing the cut-points based on independent chunking. Chunks generated using local maxima always have a minimal size corresponding to h, with an average size of approximately 2h+1. A CutPoint
procedure is illustrated in FIGS. 8 and 9, and is described as follows below:
□ 1. Allocate an array M of length h whose entries are initialized with the record {isMax=false, hash=0, offset=0}. The first field in each entry (isMax) indicates whether a candidate can be a
local maximum. The second field (hash) indicates the hash value associated with that entry, and is initialized to 0 (or alternatively, to a maximal possible hash value). The last field
(offset) in the entry indicates the absolute offset in bytes of the candidate into the fingerprinted object.
□ 2. Initialize offsets min and max into the array M to 0. These variables point to the first and last elements of the array that are currently being used.
□ 3. CutPoint(hash, offset) starts at step 800 in FIG. 8 and is invoked at each offset of the object to update M and return a result indicating whether a particular offset is a cutpoint.
☆ The procedure starts by setting result=false at step 801. At step 803, the procedure checks whether M[max].offset+h+1=offset. If this condition is true, execution continues at step 804
where the following assignments are performed: result is set to M[max].isMax, and max is set to (max−1) % h. Execution then continues at step 805. If the condition at step 803 is false,
execution continues at step 805. At step 805, the procedure checks whether M[min].hash>hash. If the condition is true, execution continues at step 806, where min is set to (min−1) % h.
Execution then continues at step 807, where M[min] is set to {isMax=false, hash=hash, offset=offset}, and then at step 811, where the computed result is returned.
☆ If the condition at step 805 is false, execution continues to step 808, where the procedure checks whether M[min].hash=hash. If this condition is true, execution continues at step 807.
☆ If the condition at step 808 is false, execution continues at step 809, where the procedure checks whether min=max. If this condition is true, execution continues at step 810, where
M[min] is set to {isMax=true, hash=hash, offset=offset}. Execution then continues at step 811, where the computed result is returned.
☆ If the condition at step 809 is false, min is set to (min+1) % h and execution continues back at step 805.
□ 4. When CutPoint(hash, offset) returns true, it will be the case that the offset at position offset−h−1 is a new cut-point.
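The h-local-maximum rule itself can be stated much more directly than the circular-buffer procedure of FIGS. 8 and 9. A naive but readable Python sketch (quadratic in the worst case, unlike the amortized-linear procedure above):

def local_max_cuts(hashes, h):
    """Return the offsets whose hash is strictly greater than every
    hash within h positions on either side (the h-local maxima)."""
    cuts = []
    for p in range(h, len(hashes) - h):
        neighborhood = hashes[p - h:p] + hashes[p + 1:p + h + 1]
        if all(hashes[p] > x for x in neighborhood):
            cuts.append(p)
    return cuts

Since two h-local maxima within distance h would each have to exceed the other, no two cut-points can be closer than h, which is the built-in minimal chunk size noted above.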
Analysis of Local Maximum Procedure
An object with n bytes is processed by calling CutPoint n times such that at most n entries are inserted for a given object. One entry is removed each time the loop starting at step 805 is repeated
such that there are no more than n entries to delete. Thus, the processing loop may be entered once for every entry and the combined number of repetitions may be at most n. This implies that the
average number of steps within the loop at each call to CutPoint is slightly less than 2, and the number of steps to compute cut points is independent of h.
Since the hash values from the elements form a descending chain between min and max, we will see that the average distance between min and max (|min−max| % h) is given by the natural logarithm of h.
Offsets not included between two adjacent entries in M have hash values that are less than or equal to the two entries. The average length of such chains is given by the recurrence equation
f(n) = 1 + (1/n) * Σ[k<n] f(k). The average length of the longest descending chain on an interval of length n is 1 greater than the average length of the longest descending chain starting from the
position of the largest element, where the largest element may be found at arbitrary positions with a probability of 1/n. The recurrence relation has as its solution the harmonic number
H[n] = 1 + 1/2 + 1/3 + 1/4 + . . . + 1/n, which can be validated by substituting H[n] into the equation and performing induction on n. H[n] is proportional to the natural logarithm of n. Thus, although
array M is allocated with size h, only a small fraction of size ln(h) is ever used at any one time.
Computing min and max with modulus h permits arbitrary growth of the used intervals of M, as long as the distance between the numbers remains within h.
The choice of initial values for M implies that cut-points may be generated within the first h offsets. The algorithm can be adapted to avoid cut-points at these first h offsets.
The expected size of the chunks generated by this procedure is around 2h+1. We obtain this number from the probability that a given position is a cut-point. Suppose the hash has m different possible
values. Then the probability is determined by:
Σ[0≦k<m] (1/m) (k/m)^(2h)
Approximating using integration, ∫[0≦x<m] (1/m)(x/m)^(2h) dx = 1/(2h+1) indicates the probability when m is sufficiently large.
The probability can be computed more precisely by first simplifying the sum to:
(1/m)^(2h+1) Σ[0≦k<m] k^(2h),
which using Bernoulli numbers B[k] expands to:
(1/(2h+1)) Σ[0≦k≦2h] C(2h+1, k) B[k] m^(−k)
The only odd Bernoulli number that is non-zero is B[1], which has a corresponding value of −½. The even Bernoulli numbers satisfy the equation:
H[∞]^(2n) = (−1)^(n−1) 2^(2n−1) π^(2n) B[2n] / (2n)!
The left hand side represents the infinite sum 1 + (½)^(2n) + (⅓)^(2n) + . . . , which for even moderate values of n is very close to 1.
When m is much larger than h, all of the terms except for the first can be ignored, as we saw by integration. They are given by a constant between 0 and 1 multiplied by a term proportional to
h^(k−1)/m^k. The first term (where B[0]=1) simplifies to 1/(2h+1); the second term is −1/(2m), the third is h/(6m^2).
For a rough estimate of the expected slack, consider streams s[1]s[3] and s[2]s[3]. The last cut points inside s[1] and s[2] may appear at arbitrary places. Since the average chunk length is about
2h+1, about ¼ of the last cut-points will be within distance h in both s[1] and s[2]. They will contribute slack of around ⅞h. In another ½ of the cases, one cut-point will be within
distance h and the other beyond distance h. These contribute slack of around ¾h. The remaining ¼ of the last cut-points in s[1] and s[2] will be at distance larger than h. The expected slack
will therefore be around ¼*⅞ + ½*¾ + ¼*¼ = 0.66.
Thus, the expected slack for our independent chunking approach is 0.66*C, which is an improvement over the prior art (0.85*C).
There is an alternate way of identifying cut-points that requires executing on average fewer instructions, while using space at most proportional to h, or on average ln h. The procedure above inserts
entries for every position 0 . . . n−1 in a stream of length n. The basic idea in the alternate procedure is to only update when encountering elements of an ascending chain within intervals of length
h. We observed that there will on average be only ln h such updates per interval. Furthermore, by comparing the local maxima in two consecutive intervals of length h, one can determine whether each of
the two local maxima may also be an h-local maximum. There is one peculiarity with the alternate procedure; it requires computing the ascending chains by traversing the stream in blocks of size h,
each block being traversed in the reverse direction.
In the alternate procedure (see FIGS. 10 and 11), we assume for simplicity that a stream of hashes is given as a sequence. The subroutine CutPoint gets called for each subsequence of length h
(expanded to “horizon” in the Figures). It returns zero or one offsets which are determined to be cut-points. Only ln(h) of the calls to Insert will pass the first test.
Insertion into A is achieved by testing the hash value at the offset against the largest entry in A so far.
The loop that updates both A[k] and B[l].isMax can be optimized such that on average only one test is performed in the loop body. The case B[l].hash <= A[k].hash and B[l].isMax is handled in two loops:
the first checks the hash value against B[l].hash until it is not less, the second updates A[k]. The other case can be handled using a loop that only updates A[k], followed by an update to B[l].isMax.
Each call to CutPoint requires on average ln h memory writes to A and, with loop hoisting, h + ln h comparisons related to finding maxima. The last update to A[k].isMax may be performed by binary search
or by traversing B starting from index 0 in on average at most log ln h steps. Each call to CutPoint also requires re-computing the rolling hash at the last position in the window being updated. This
takes as many steps as the size of the rolling hash window.
Observed Benefits of the Improved Chunking Algorithms
The minimal chunk size is built into both the local maxima and the filter methods described above. The conventional implementations require that the minimal chunk size be supplied separately with an
extra parameter.
The local maximum (or mathematical-function) based methods produce a measurably better slack estimate, which translates to further compression over the network. The filter method also produces better
slack performance than the conventional methods.
Both of the new methods have a locality property of cut points. All cut points inside s3 that are beyond the horizon will be cut points for both streams s1s3 and s2s3. (In other words, consider stream
s1s3: if p is a position ≧ |s1| + horizon and p is a cut point in s1s3, then it is also a cut point in s2s3. The same property holds in the other direction, symmetrically: if p is a cut point in s2s3,
then it is also a cut point in s1s3.) This is not the case for the conventional methods, where the requirement that cuts be beyond some minimal chunk size may interfere adversely.
Alternative Mathematical Functions
Although the above-described chunking procedures describe a means for locating cut-points using a local maxima calculation, the present invention is not so limited. Any mathematical function can be
arranged to examine potential cut-points. Each potential cut-point is evaluated by evaluating hash values that are located within the horizon window about a considered cut-point. The evaluation of
the hash values is accomplished by the mathematical function, which may include at least one of: locating a maximum value within the horizon, locating a minimum value within the horizon, evaluating a
difference between hash values, evaluating a difference of hash values and comparing the result against an arbitrary constant, as well as some other mathematical or statistical function.
The particular mathematical function described previously for local maxima is the binary predicate “>”. For the case where p is an offset in the object, p is chosen as a cut-point if hash[p] > hash[k]
for all k such that p − horizon ≦ k < p or p < k ≦ p + horizon. However, the binary predicate > can be replaced with any other mathematical function without deviating from the spirit of the invention.
Finding Candidate Objects for Remote Differential Compression
The effectiveness of the basic RDC procedure described above may be increased by finding candidate objects on the receiver, for signature and chunk reuse during steps 4 and 8 of the RDC algorithm,
respectively. The algorithm helps Device A identify a small subset of objects, denoted O[A1], O[A2], . . . , O[An], that are similar to the object O[B] that needs to be transferred from Device B
using the RDC algorithm. O[A1], O[A2], . . . , O[An] are part of the objects that are already stored on Device A.
The similarity between two objects O[B] and O[A] is measured in terms of the number of distinct chunks that the two objects share, divided by the total number of distinct chunks in the two objects
combined. Thus, if Chunks(O[B]) and Chunks(O[A]) are the sets of chunks computed for O[B] and O[A] by the RDC algorithm, respectively, then, using the notation |X| to denote the cardinality, or
number of elements, of set X:
Similarity(O[B], O[A]) = |{c[B] | c[B] ∈ Chunks(O[B]) ∧ ∃c[A] ∈ Chunks(O[A]) · c[B] = c[A]}| / |Chunks(O[B]) ∪ Chunks(O[A])|
As a proxy for chunk equality, the equality on the signatures of the chunks is used. This is highly accurate if the signatures are computed using a cryptographically secure hash function (such as
SHA-1 or MD5), given that the probability of a hash collision is extremely low. Thus, if Signatures(O[B]) and Signatures(O[A]) are the sets of chunk signatures computed for O[B] and O[A] in the
chunking portion of the RDC algorithm, then:
Similarity(O[B], O[A]) ≅ |{Sig[B] | Sig[B] ∈ Signatures(O[B]) ∧ ∃Sig[A] ∈ Signatures(O[A]) · Sig[B] = Sig[A]}| / |Signatures(O[B]) ∪ Signatures(O[A])|
Given an object O[B] and the set of objects Objects[A] that are stored on Device A, the members of Objects[A] that have a degree of similarity with O[B] exceeding a given threshold s are
identified. A typical value may be s=0.5 (50% similarity), i.e. we are interested in objects that have at least half of their chunks in common with O[B]. The value for s, however, may be set at
any value that makes sense for the application. For example, s could be set between 0.01 and 1.0 (1% similar to 100% similar). This set of objects is defined as:
Similar(O[B], Objects[A], s) = {O[A] | O[A] ∈ Objects[A] ∧ Similarity(O[B], O[A]) ≧ s}
The set of objects O[A1], O[A2], . . . , O[An] is computed as a subset of Similar(O[B], Objects[A], s) by taking the best n matches.
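As a sketch, the similarity test reduces to set operations on the signature lists (plain Python; signatures may be any hashable values):

def similarity(sigs_b, sigs_a):
    """Approximate object similarity as the Jaccard index of the
    two sets of chunk signatures."""
    set_b, set_a = set(sigs_b), set(sigs_a)
    if not set_b and not set_a:
        return 0.0
    return len(set_b & set_a) / len(set_b | set_a)

def similar(objects_a, sigs_b, s):
    """Return the stored objects whose similarity to O[B] is at least s.
    objects_a maps object references to their signature lists."""
    return [obj for obj, sigs in objects_a.items()
            if similarity(sigs_b, sigs) >= s]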
The basic RDC algorithm described above is modified as follows to identify and use the set of similar objects O[A1], O[A2], . . . , O[An].
FIG. 12 illustrates an RDC algorithm modified to find and use candidate objects, in accordance with aspects of the invention. The protocol for finding and using candidate objects on Device A and for
transferring the updated object O[B] from Device B to Device A is described. A similar protocol may be used to transfer an object from Device A to Device B, and the transfer can be initiated at the
behest of either Device A or Device B without significantly changing the protocol described below.
□ 1. Device A sends Device B a request to transfer Object O[B] using the RDC protocol.
□ 1.5. Device B sends Device A a set of traits of Object O[B], Traits(O[B]). Generally, the traits are a compact representation of the characteristics relating to Object O[B]. As will be
described later, Device B may cache the traits for O[B] so that it does not need to recompute them prior to sending them to Device A.
□ 1.6. Device A uses Traits(O[B]) to identify O[A1], O[A2], . . . , O[An], a subset of the objects that it already stores, that are similar to Object O[B]. This determination is made in a
probabilistic manner.
□ 2. Device A partitions the identified Objects O[A1], O[A2], . . . , O[An ]into chunks. The partitioning occurs in a data-dependent fashion, by using a fingerprinting function that is computed
at every byte position of the objects. A chunk boundary is determined at positions for which the fingerprinting function satisfies a given condition. Following the partitioning into chunks,
Device A computes a signature Sig[Aik] for each chunk k of each Object O[Ai].
□ 3. Using a similar approach as in step 2, Device B partitions Object O[B] into chunks, and computes the signatures Sig[Bj] for each of the chunks. The partitioning algorithm used in step 3
must match the one in step 2 above.
□ 4. Device B sends the list of chunk signatures (Sig[B1] . . . Sig[Bn]) to Device A. This list provides the basis for Device A being able to reconstruct Object O[B]. In addition to the chunk
signatures Sig[Bi], information will be sent about the offset and length of each chunk in Object O[B].
□ 5. As Device A receives the chunk signatures from Device B, it compares the received signatures against the set of signatures (Sig[A11], . . . , Sig[A1m], . . . , Sig[An1], . . . , Sig[Anm]) that it
has computed in step 2. As part of this comparison, Device A records every distinct signature value it received from Device B that does not match one of its own signatures Sig[Aik] computed
on the chunks of Objects O[A1], O[A2], . . . , O[An].
□ 6. Device A sends a request to Device B for all the chunks whose signatures were received in the previous step from Device B, but which did not have a matching signature on Device A. The
chunks are requested by offset and length in Object O[B], based on corresponding information that was sent in Step 4.
□ 7. Device B sends the content associated with all the requested chunks to Device A.
□ 8. Device A reconstructs Object O[B] by using the chunks received in step 7 from Device B, as well as its own chunks of Objects O[A1], O[A2], . . . , O[An] that matched signatures sent by
Device B in step 4. After this reconstruction step is complete, Device A may now add the reconstructed copy of Object O[B] to its already stored objects.
To minimize network traffic and CPU overhead, Traits(O[B]) should be very small, and the determination of the set of similar objects O[A1], O[A2], . . . , O[An] should be performed with very few
operations on Device A.
Computing the Set of Traits for an Object
The set of traits for an object O, Traits(O), is computed based on the chunk signatures computed for O, as described in steps 2 and 3 of the RDC algorithm, respectively.
FIGS. 13 and 14 show a process and an example of a trait computation, in accordance with aspects of the invention.
The algorithm for identifying similar objects has four main parameters (q, b, t, x) that are summarized below.
q Shingle size
b Number of bits per trait
t Number of traits per object
x Minimum number of matching traits
The following steps are used to compute the traits for object O, Traits(O).
□ 1. At block 1310, the chunk signatures of O, Sig[1] . . . Sig[n], are grouped together into overlapping shingles of size q, where every shingle comprises q chunk signatures, with the exception
of the last q−1 shingles, which will contain fewer than q signatures. Other groupings (discontiguous subsets, disjoint subsets, etc.) are possible, but it is practically useful that inserting
an extra signature causes all of the previously considered subsets to still be considered.
□ 2. At block 1320, for each shingle 1 . . . n, a shingle signature Shingle[1] . . . Shingle[n] is computed by concatenating the q chunk signatures forming the shingle. For the case where q=1,
Shingle[1]=Sig[1], . . . , Shingle[n]=Sig[n].
□ 3. At block 1330, the shingle set {Shingle[1] . . . Shingle[n]} is mapped into t image sets through the application of t hash functions H[1] . . . H[t]. This generates t image sets, each
containing n elements:
IS[1] = {H[1](Shingle[1]), H[1](Shingle[2]), . . . , H[1](Shingle[n])}
. . .
IS[t] = {H[t](Shingle[1]), H[t](Shingle[2]), . . . , H[t](Shingle[n])}
□ 4. At block 1340, the pre-traits PT[1] . . . PT[t] are computed by taking the minimum element of each image set:
PT[1] = min(IS[1])
. . .
PT[t] = min(IS[t])
☆ Other deterministic mathematical functions may also be used to compute the pre-traits. For example, the pre-traits PT[1] . . . PT[t] may be computed by taking the maximum element of each
image set:
PT[1] = max(IS[1])
. . .
PT[t] = max(IS[t])
☆ Mathematically, any mapping carrying values into a well-ordered set will suffice, max and min on bounded integers being two simple realizations.
□ 5. At block 1350, the traits T[1] . . . T[t] are computed by selecting b bits out of each pre-trait PT[1] . . . PT[t]. To preserve independence of the samples, it is better to choose
non-overlapping slices of bits, 0 . . . b−1 for the first, b . . . 2b−1 for the second, etc., if the pre-traits are sufficiently long:
T[1] = select[0 . . . b−1](PT[1])
. . .
T[t] = select[(t−1)b . . . tb−1](PT[t])
☆ Any deterministic function may be used to create traits that are smaller in size than the pre-traits. For instance, a hash function could be applied to each of the pre-traits so long as
the size of the result is smaller than the pre-trait; if the total number of bits needed (tb) exceeds the size of a pre-trait, some hash functions should be used to expand the number of
bits before selecting subsets.
The number of traits t and the trait size b are chosen so that only a small total number of bits (t*b) is needed to represent the traits for an object. This is advantageous if the traits are
precomputed and cached by Device A, as will be described below. According to one embodiment, some typical combinations of (b,t) parameters that have been found to work well are e.g. (4,24) and
(6,16), for a total of 96 bits per object. Any other combinations may also be used. For purposes of explanation, the i^th trait of object A will be denoted by T[i](A).
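The whole trait pipeline is compact enough to sketch in Python. The salted-SHA-1 stand-ins for H[1] . . . H[t] and the helper names are illustrative assumptions, not the functions used by any shipping implementation:

import hashlib

def compute_traits(signatures, q=1, b=6, t=16):
    """Blocks 1310-1350: shingle, hash t ways, keep each minimum,
    then slice b bits out of each pre-trait."""
    # Block 1310: overlapping shingles of q consecutive chunk signatures.
    shingles = [b"".join(signatures[i:i + q]) for i in range(len(signatures))]
    traits = []
    for i in range(t):
        # Blocks 1320-1340: the i-th hash function is SHA-1 salted with the
        # trait index; the pre-trait is the minimum image over all shingles.
        pre_trait = min(
            int.from_bytes(hashlib.sha1(bytes([i]) + s).digest()[:8], "big")
            for s in shingles)
        # Block 1350: select a b-bit slice (wrapping within the 64 bit
        # pre-trait here; a longer hash would give truly disjoint slices).
        shift = (i * b) % (64 - b)
        traits.append((pre_trait >> shift) & ((1 << b) - 1))
    return traits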
Efficiently Selecting the Pre-traits
To efficiently select the pre-traits PT[1] . . . PT[t], the following approach is used, allowing partial evaluation of the shingles, and thus reducing the computational requirements for selecting the
pre-traits. Logically, each H[i] is divided into two parts, High[i] and Low[i]. Since only the minimum element of each image set is selected, High[i] is computed for every chunk signature, while
Low[i] is computed only for those chunk signatures which achieve the minimum value achieved so far for High[i]. If the High values are drawn from a smaller space, this may save computation. If,
further, several High values are bundled together, significant computation may be saved. Suppose, for instance, that each High value is 8 bits long. Eight of these can be packed into a long integer;
at the cost of computing a single 8-byte hash from a signature, that value can be chopped into eight independent one-byte slices. If only the High value were needed, this would reduce computational
costs by a factor of eight. However, on average one time in 256 a corresponding Low value needs to be computed and compared to other Low values corresponding to equal High values.
Finding Similar Objects Using the Sets of Traits
The algorithm approximates the set of objects similar to a given object O[B] by computing the set of objects having similar traits to O[B]:
TraitSimilarity(O[B], O[A]) = |{i | T[i](A) = T[i](B)}|
SimilarTraits(O[B], Objects[A], x) = {O[A] | O[A] ∈ Objects[A] ∧ TraitSimilarity(O[B], O[A]) ≧ x}
Other computations from which these values might be derived would work just as well.
To select the n most similar objects to a given object O[B], SimilarTraits(O[B], Objects[A], x) is computed and the n best matching objects out of that set are taken. If the size of
SimilarTraits(O[B], Objects[A], x) is smaller than n, the entire set is taken. The resulting set of objects forms a potential set of objects O[A1], O[A2], . . . , O[An] identified in step 1.6 of the
modified RDC algorithm illustrated in FIG. 12.
According to the embodiments, objects may be chosen guided by similarity, but trying also to increase diversity in the set of objects by choosing objects similar to the target, but dissimilar from
one another, or by making other choices from the set of objects with similar traits.
According to one embodiment, the following combinations of parameters (q,b,t,x) may be used: (q=1,b=4,t=24,x=9) and (q=1,b=6,t=16,x=5).
FIGS. 15 and 16 may be used when selecting the parameters for b and t, in accordance with aspects of the present invention. The curves for the probability of detecting matches and for false positives
are shown first for (b=4, t=24) in FIG. 15, and then for (b=6, t=16) in FIG. 16. Both sets of similarity curves (1510 and 1610) allow the probabilistic detection of similar objects with true
similarity in the range of 0-100%. According to one embodiment, the false positive rate illustrated in displays 1520 and 1620 drops to an acceptable level at roughly 10 of 24 matching traits
(providing 40 bits of true match), and at 6 of 16 (36 bits of match); the difference in the required number of bits is primarily due to the reduced number of combinations drawing from a smaller set.
The advantage of the larger set is increased recall: fewer useful matches will escape attention; the cost is the increased rate of falsely detected matches. To improve both precision and recall, the
total number of bits may be increased. Switching to (b=5, t=24), for instance, would dramatically improve precision, at the cost of increasing memory consumption for object traits.
A Compact Representation for the Sets of Traits
It is advantageous for both Device A and Device B to cache the sets of traits for all of their stored objects so that they don't have to recompute their traits every time they execute steps 1.6 and
1.5, respectively, of the modified RDC algorithm (see FIG. 12 and related discussion). To speed up the RDC computation, the trait information may be stored in Device A's and Device B's main memory.
The representation described below uses on the order of t+p memory bytes per object, where t is the number of traits and p is the number of bytes required to store a reference or a pointer to the
object. Examples of references are file paths, file identifiers, or object identifiers. For typical values of t and p, this approach can support one million objects using less than 50 MB of main
memory. If a device stores more objects, it may use a heuristic to prune the number of objects that are involved in the similarity computation. For instance, very small objects may be eliminated a
priori because they cannot contribute many chunks in steps 4 and 8 of the RDC algorithm illustrated in FIG. 12.
FIG. 17 illustrates data structures that make up a compact representation of: an ObjectMap and a set of t Trait Tables, in accordance with aspects of the invention.
Initially, short identifiers, or object IDs, are assigned to all of the objects. According to one embodiment, these identifiers are consecutive non-negative 4-byte integers, thus allowing the
representation of up to 4 billion objects.
A data structure (ObjectMap) maintains the mapping from object IDs to object references. It does not matter in which order objects stored on a device get assigned object IDs. Initially, this
assignment can be done by simply scanning through the device's list of stored objects. If an object gets deleted, its corresponding entry in ObjectMap is marked as a dead entry (by using a reserved
value for the object reference). If an object is modified, its corresponding entry in ObjectMap is marked as a dead entry, and the object gets assigned the next higher unused object ID.
When the ObjectMap becomes too sparse (something that can be easily determined by keeping track of the total size and the number of dead entries), both the ObjectMap and the Trait Tables are
discarded and rebuilt from scratch.
The Trait Tables form a two-level index that maps from a trait number (1 to t) and a trait value (0 to 2^b−1) to a TraitSet, the set of object IDs for the objects having that particular trait. A
TraitSet is represented as an array with some unused entries at the end for storing new objects. An index IX[i,k] keeps track of the first unused entry in each TraitSet array to allow for appends.
Within a TraitSet, a particular set of objects is stored in ascending order of object IDs. Because the space of object IDs is kept dense, consecutive entries in the TraitSets can be expected to be
“close” to each other in the object ID space: on average, two consecutive entries should differ by about 2^b (but by at least 1). If the values of t and b are chosen so that 2^b << 255, then
consecutive entries can be encoded using on average only one unsigned byte representing the difference between the two object IDs, as shown in FIG. 17. An escape mechanism is provided by using the
0x00 byte to indicate that a full 4-byte object ID follows next, for the rare cases where the two consecutive object IDs differ by more than 255.
According to a different embodiment, if an object ID difference is smaller than 256 then it can be represented as a single byte, otherwise the value zero is reserved to indicate that subsequent bytes
represent the delta minus 256, say, by using a 7 in 8 representation. Then, for b=6, 98% of deltas will fit in one byte, 99.7% fit in two bytes, and all but twice in a billion into three bytes. It
has been found that this scheme uses on average 1.02 bytes per object, compared to 1.08 bytes per object for the scheme shown in FIG. 17.
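A sketch of the FIG. 17 variant of this encoding, with the 0x00 escape (the function names are illustrative):

def encode_traitset(object_ids):
    """Delta-encode an ascending list of object IDs: one byte per gap
    of 1..255, or a 0x00 escape followed by the full 4-byte ID."""
    out, prev = bytearray(), 0
    for oid in object_ids:
        delta = oid - prev
        if 1 <= delta <= 255:
            out.append(delta)
        else:
            out.append(0x00)
            out += oid.to_bytes(4, "big")
        prev = oid
    return bytes(out)

def decode_traitset(data):
    """Inverse of encode_traitset."""
    ids, prev, i = [], 0, 0
    while i < len(data):
        if data[i] == 0x00:
            prev = int.from_bytes(data[i + 1:i + 5], "big")
            i += 5
        else:
            prev += data[i]
            i += 1
        ids.append(prev)
    return ids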
Entries in the Trait Tables corresponding to dead object IDs can be left in the Trait Tables. New entries are appended at the end (using indices IX[1,0] . . . IX[t, 2^b − 1]).
Finding Similar Objects using the Compact Representation
FIG. 18 illustrates a process for finding objects with similar traits, in accordance with aspects of the invention. According to one embodiment, to compute SimilarTraits(O[B], Objects[A], x), the
steps are similar to a merge sort algorithm. The algorithm uses (t−x+1) object buckets, OB[x] . . . OB[t], that are used to store objects belonging to Objects[A] that match at least x and up to and
including t traits of O[B], respectively.
□ 1. At block 1810, select the t TraitSets corresponding to the t traits of O[B]: TS[1] . . . TS[t]. Initialize OB[x] . . . OB[t] to empty. Initialize indices P[1] . . . P[t] to point to the first
element of TS[1] . . . TS[t], respectively. TS[k][P[k]] is the notation for the object ID pointed to by P[k].
□ 2. At decision block 1820, if all of P[1] . . . P[t] point past the last element of their TraitSet arrays TS[1] . . . TS[t], respectively, then go to step 6 (block 1860).
□ 3. At block 1830, the MinP set is selected which is the set of indices pointing to the minimum object ID, as follows:
MinP = {P[k] | ∀j ∈ [1,t] · TS[j][P[j]] ≧ TS[k][P[k]]}
☆ Let MinID be the minimum object ID pointed to by all the indices in MinP.
□ 4. At block 1840, let k = |MinP|, which corresponds to the number of matching traits. If k ≧ x and ObjectMap(MinID) is not a dead entry, then append MinID to OB[k].
□ 5. Advance every index P[k] in MinP to the next object ID in its respective TraitSet array TS[k]. Go to step 2 (block 1820).
□ 6. At block 1860, select the similar objects by first selecting objects from OB[t], then from OB[t−1], etc., until the desired number of similar objects has been selected or no more objects
are left in OB[x]. The object IDs produced by the above steps can be easily mapped into object references by using the ObjectMap.
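A compact Python rendering of this merge, operating on plain sorted lists (a heap-based multiway merge would serve equally well; object_map is assumed to map IDs to references, with None marking dead entries):

def similar_traits(trait_sets, x, object_map):
    """Bucket object IDs by how many of the t TraitSets they appear in.
    trait_sets: t ascending object-ID lists, one per trait of O[B].
    Returns {k: [IDs matching exactly k traits]} for k >= x."""
    t = len(trait_sets)
    pos = [0] * t                       # the indices P[1] .. P[t], 0-based
    buckets = {k: [] for k in range(x, t + 1)}
    while any(p < len(ts) for p, ts in zip(pos, trait_sets)):
        # Smallest object ID currently pointed at, and who points at it.
        min_id = min(ts[p] for p, ts in zip(pos, trait_sets) if p < len(ts))
        min_p = [j for j in range(t)
                 if pos[j] < len(trait_sets[j]) and trait_sets[j][pos[j]] == min_id]
        k = len(min_p)                  # number of matching traits
        if k >= x and object_map.get(min_id) is not None:
            buckets[k].append(min_id)
        for j in min_p:                 # advance only the minimal indices
            pos[j] += 1
    return buckets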
The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without
departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
Help required with differentiation and integration
June 1st 2009, 04:13 AM #1
Hello, I've got a maths exam next week and need help on differentiation as I'm finding it hard to get my head around. I have a few examples of theory questions; if someone could walk me through
how to get the solution(s) that would be great, or just the solutions and I could work my way back through the method. Thanks
I have to differentiate the following:
f (x) = 2 exp (0.4x – 5)
g (x) = ln (2-3x)
I (t) = Pb/R (0.2 – 3e^at)
y (t) = 3 ln (0.5t – 0.4)
I also have to evaluate definite integrals but would need to scan that examples in, i think the above is more than enough to ask for help with.
Those look like fairly basic problems. How about showing what you have done and where you got stuck so we can see where you need help?
these are the solutions I got, are they correct, or on the right track at least?
f' (x) = 0.8 exp (0.4x – 5)
g' (x) = -3/(2-3x)
I' (t) = Pb/R (– 3ae^at)
y' (t) = 1.5/(0.5t – 0.4)
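For what it's worth, a quick symbolic check of those four answers (a sketch using sympy; P, b, R and a are treated as constants):

import sympy as sp

x, t, a, P, b, R = sp.symbols('x t a P b R')

problems = {
    "f": (2 * sp.exp(0.4 * x - 5), 0.8 * sp.exp(0.4 * x - 5), x),
    "g": (sp.log(2 - 3 * x), -3 / (2 - 3 * x), x),
    "I": (P * b / R * (0.2 - 3 * sp.exp(a * t)), P * b / R * (-3 * a * sp.exp(a * t)), t),
    "y": (3 * sp.log(0.5 * t - 0.4), 1.5 / (0.5 * t - 0.4), t),
}

for name, (expr, claimed, var) in problems.items():
    ok = sp.simplify(sp.diff(expr, var) - claimed) == 0
    print(name, "correct" if ok else "check again")

All four print "correct": the solutions above are right.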
Sheldon Ross,
University of Southern California
Gambler ruin problems and pricing a barrier option under a jump diffusion model
Suppose there are r gamblers, with gambler i initially having a fortune n(i). In our first model we suppose that at each stage two of the gamblers are chosen to play a game, equally likely to be won
by either player, with the winner of the game receiving 1 from the loser. Any gambler whose fortune becomes 0 leaves, and this continues until there is only a single gambler left. We are interested
in the mean number of games that involve both players i and j. In our second model we suppose that all remaining players contribute 1 to a pot, which is equally likely to be won by each of them.
The problem here is to determine the expected number of games played until one player has all the funds.
If time permits, we will also discuss how to efficiently simulate the expected return from an up and in (or up and out) barrier call option under the assumption that the price of the security follows
a geometric Brownian motion with random jumps.
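A Monte Carlo sketch of the first model (a quick illustration in Python, not code from the talk):

import random

def games_between(fortunes, i, j, trials=10_000):
    """Estimate the mean number of games that pit player i against
    player j before only one gambler remains."""
    total = 0
    for _ in range(trials):
        money = dict(enumerate(fortunes))
        while len(money) > 1:
            a, c = random.sample(list(money), 2)       # two remaining players
            winner, loser = (a, c) if random.random() < 0.5 else (c, a)
            money[winner] += 1
            money[loser] -= 1
            if {a, c} == {i, j}:
                total += 1
            if money[loser] == 0:
                del money[loser]                       # ruined gambler leaves
    return total / trials

# Example: three gamblers starting with fortunes 1, 2 and 3.
print(games_between([1, 2, 3], i=0, j=1))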
Partial signatures
From HaskellWiki
"The regular (full) signature of a function specifies the type of the function and -- if the type includes constrained type variables -- enumerates all of the typeclass constraints. The list of the
constraints may be quite large. Partial signatures help when:
• we wish to add an extra constraint to the type of the function but we do not wish to explicitly write the type of the function and enumerate all of the typeclass constraints,
• we wish to specify the type of the function and perhaps some of the constraints -- and let the typechecker figure out the rest of them.
Contrary to popular belief, both of the above are easily possible, in Haskell98."
Public Function Payment( _
ByVal vRate As Variant _
, ByVal vNPer As Variant _
, ByVal vPV As Variant _
, Optional ByVal vFV As Variant _
, Optional ByVal vType As Variant _
) As Variant
Calculate the Payment for an annuity based on fixed, periodic payments and a fixed interest rate.
Example: How much would you have to pay into a savings account monthly in order for that savings account to be worth $50,000 after 20 years, assuming that the savings account pays 5.25% annual
percentage rate (APR) compounded monthly? At least $118.18.
Payment(0.0525 / 12, 20 * 12, 0, 50000) = -118.172083172565
Example: How much would your monthly payments be for a four-year loan on a car that costs $20,000, assuming the loan has an annual percentage rate (APR) of 7%? Approximately $478.92.
Payment(0.07 / 12, 4 * 12, 20000) = -478.924893248856
See the PaymentVerify Subroutine for more examples of this Function.
See also:
InterestRate Function
NumberPeriods Function
PresentValue Function
FutureValue Function
PaymentType Function
Pmt Function (Visual Basic)
PMT Function (Microsoft Excel)
Summary: An annuity is a series of fixed payments (all payments are the same amount) made over time. An annuity can be a loan (such as a car loan or a mortgage loan) or an investment (such as a
savings account or a certificate of deposit).
vRate: Interest rate per period, expressed as a decimal number. The vRate and vNPer arguments must be expressed in corresponding units. If vRate is a monthly interest rate, then the number of periods
(vNPer) must be expressed in months. For a mortgage loan at 6% annual percentage rate (APR) with monthly payments, vRate would be 0.06 / 12 or 0.005. Function will return Null if vRate is Null or
cannot be interpreted as a number.
vNPer: Number of periods. The vRate and vNPer arguments must be expressed in corresponding units. If vRate is a monthly interest rate, then the number of periods (vNPer) must be expressed in months.
For a 30-year mortgage loan with monthly payments, vNPer would be 30 * 12 or 360. Function will return Null if vNPer is Null or cannot be interpreted as a number.
vPV: Present value (lump sum) of the series of future payments. Cash paid out is represented by negative numbers and cash received by positive numbers. Function will return Null if vPV is Null or
cannot be interpreted as a number.
vFV: Optional future value (cash balance) left after the final payment. Cash paid out is represented by negative numbers and cash received by positive numbers. The future value of a loan will usually
be 0 (zero). vFV defaults to 0 (zero) if it is missing or Null or cannot be interpreted as a number.
vType: Optional argument that specifies when payments are due. Set to 0 (zero) if payments are due at the end of the period, and set to 1 (one) if payments are due at the beginning of the period.
vType defaults to 0 (zero), meaning that payments are due at the end of the period, if it is missing or Null or cannot be interpreted as a number. Function returns Null if vType is not 0 (zero) nor 1 (one).
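The underlying annuity algebra is standard; the following Python sketch (not Entisoft's code) implements the same computation and reproduces both examples above:

def payment(rate, nper, pv, fv=0.0, ptype=0):
    """Periodic payment pmt solving the annuity equation
    pv*(1+rate)**nper + pmt*(1+rate*ptype)*((1+rate)**nper - 1)/rate + fv = 0."""
    if rate == 0:
        return -(pv + fv) / nper
    growth = (1 + rate) ** nper
    return -(pv * growth + fv) * rate / ((1 + rate * ptype) * (growth - 1))

print(payment(0.0525 / 12, 20 * 12, 0, 50000))   # -118.1720831...
print(payment(0.07 / 12, 4 * 12, 20000))         # -478.9248932...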
v2.0 Addition: This function is new to this version of Entisoft Tools.
Copyright 1996-1999 Entisoft
Entisoft Tools is a trademark of Entisoft.
Filters and Active Feedback
Analog Filter Design
Linear Filters and Active Feedback
Technically, a (real) pulsatance is a rate of phase change per unit of time. It's expressed in angular units (radians or degrees) per unit of time (second). Pulsatance is also commonly called
angular frequency. On the other hand, frequency is what pulsatance becomes when phases are expressed in cycles (one cycle is a phase change of 360° or 2π radians).
The modern convention is to express a pulsatance (preferably denoted by the symbol ω) in radians per second (rad/s) and the corresponding frequency (preferably denoted by the symbol ν) in hertz
(Hz, or "cycle per second").
ω = 2πν
In electrical engineering, the letter "i" is often used to denote a current intensity. It's thus unavailable as a name for the unit vector along the imaginary axis of the complex plane. So, the
letter "j" is used instead for that purpose. The square of that imaginary number is -1. It's to the number 1 (unity) what a step sideways to the left is to a step forward. Refrain from calling
this "the" square root of -1.
j^2 = −1
The value at time t of a pure sinewave signal of frequency ν and/or of pulsatance ω = 2πν can be conveniently represented as the real part of the following expression, where s is equal to the
imaginary pulsatance jω.
|A| exp ( jθ + s t ) = |A| cos ( θ + ωt ) + j (...)
In this, |A| is a positive real number and θ is called the phase of the signal. The number A = |A| exp(jθ) is called the complex amplitude of the signal and the value of the signal at time t is
therefore simply the real part of:
A exp ( s t )
Therefore, the complex amplitude of the signal's derivative is A s. Likewise, the complex amplitude of the second derivative is A s^2, etc.
A slight generalization can be made by considering that the above remains true even if s is a complex number with a nonzero real part σ.
s = σ + jω = −p + jω
σ is called the damping constant. A negative value of σ (a positive p) translates into a signal which is a damped sinewave, like e^(−pt) cos(ωt).
This observation may be construed as the basis for Oliver Heaviside's operational calculus, which characterizes circuits by their reaction to nonoscillatory decaying signals (p>0, ω=0). This
approach rests on the so-called Laplace transform (and its inverse). From a mathematical standpoint, such an analysis (which may be quite convenient) is just as sound as the more "physical" one
based on sinewave signals, involving the Fourier transform (and its own inverse). Either approach yields results applicable to any signal whatsoever.
A dipole is defined as a current-conserving two-terminal device (the total electric charge inside the device does not change). Each terminal may also be referred to as an electrode or a
pin. One of them is (somewhat arbitrarily) called "input", the other is the "output" terminal. Whatever current enters the input goes out the output; this quantity is the current (i) through the
dipole. The difference between the tension (voltage) of the input electrode and the output tension is called the voltage (u) across the dipole.
A dipole for which u is proportional to i is called a linear dipole. The coefficient of proportionality between u and i is the impedance (Z):
u = Z i
In this, Z is a complex number which may depend on the operating complex pulsatance (s) defined above. For example, Z may be equal to s multiplied by a (real) constant L when the voltage is
proportional to the derivative of the current (such is the case for a perfect inductor of inductance L, as discussed below).
Operating at a given imaginary pulsatance s = jω, the dipole's resistance is defined as the real part of its impedance Z (the aforementioned perfect inductor has zero resistance). The imaginary
part of an impedance is called reactance.
A nonzero reactance at (imaginary) pulsatance s indicates that the current and the voltage are out of phase at the corresponding operating frequency.
A linear dipole whose impedance is a real number R which does not depend on the frequency of the signal is called a pure resistor of resistance R.
Practical resistors are never quite ideal, because any conducting element has a nonzero inductance which may become noticeable at very high frequencies. Also, there may be a tiny dependence of R
on the amplitude of the signal (Ohm's law is a very good practical approximation, but it's not a strict law of nature).
For completeness, we may also mention that resistance may vary greatly with temperature, so that a high current (which heats up the resistor) may give the apparence of a change in resistance with
the amplitude of "large signals".
Ideally, a capacitor (or electrical condenser) is a two-terminal device which stores opposite charges (q and -q) on two opposing armatures, connected to each terminal. That charge (q) is
proportional to the voltage (U) across the terminals and the coefficient of proportionality is the condenser's capacity (C).
We discuss elsewhere the physical basis for that relation and how the capacity (C) can be computed from geometric parameters and/or from the characteristics of the dielectric material separating
the conducting plates (armatures).
Strictly speaking, the charge on each armature is proportional to its absolute voltage (with respect to an "infinitely distant" ground) so there may be a bias in the actual charges stored on
each armature. However, the only important practical quantities are variations in the charges (i.e., currents) and/or differences in voltage, so the above fiction is an adequate description.
The quality factor Q of a system reacting to a periodic excitation is the ratio of its maximum energy to the average energy it dissipates (per radian of phase change).
Q = ωL / R
I first heard about the following approach to elementary analog electronic design in the late 1970's at Ecole Polytechnique (X). It was a novelty at the time.
In an electronic circuit, a dipole is defined as a two-terminal component; whatever current enters one terminal goes out the other.
Normally, such a dipole is characterized by how the current through it varies with time as a function of the voltage across it (or vice-versa). The characteristic of an ordinary dipole thus
imposes one constraint between current and voltage...
However, two types of extraordinary dipoles may be considered which greatly simplify the design of some active systems which could not otherwise be modelized by dipoles alone... One such beast is
called a nullator (symbol -o-) and imposes two constraints: zero current, zero voltage. On the other hand, a so-called norator dipole (symbol -∞-) imposes no constraints at all: any current, any
voltage. Neither of those can be realized by itself but they can appear in complementary pairs which make the total number of constraints just right (i.e., one constraint per dipole connecting
two nodes).
For example, a short-circuit (zero voltage, any current) can be considered to consist of a nullator and a norator in parallel; an open circuit (zero current, any voltage) consists of a nullator and
a norator in series. Less trivially, a properly polarized high-gain transistor is approximately equivalent to a norator from collector (C) to emitter (E) and a nullator from base (B) to emitter (E).
A nearly perfect embodiment of a useful nullator-norator combination is the popular type of subsystem known as an operational amplifier. The gain of an operational amplifier is normally so large
that some feedback must somehow occur which forces the two high-impedance inputs of the amplifier to be at nearly the same voltage (or else the output "saturates" at either the lowest or the
highest value allowed). The amplifier's inputs may thus be construed as the two extremities of a nearly perfect nullator. Conversely, the amplifier's output can be viewed as one extremity of a
norator connected to the system's ground.
In practice, of course, the circuit will only be stable with the proper choice of amplifier inputs for the extremeties of the nullator ("inverting" vs. "non-inverting" input). Nevertheless, the
nullator-norator approach allows a quick preliminary design before final stability issues are addressed.
At left is the standard first-order passive RC low-pass attenuator. At zero output current, the input voltage u is to R+Z what the output v is to Z. So, u/v is 1+R/Z = 1+R(G+jωC).
The ratio v/u = H(s), expressed as a function of the complex pulsatance (s), is called the transfer function. In this case, it's equal to 1 / (1+RG+RC s). Introducing the DC attenuation
A = 1 / (1+RG) and the circuit's characteristic pulsatance ω[0] = 1/(A RC) = (1+RG)/RC, we obtain:
H = A / (1 + j x)
The normalized variable is x = ω / ω[0] = 2πν RC / (1+RG).
The normalized gain (in dB) of the first-order low-pass filter is obtained by plotting 20 log(|H|/A) as a function of x, using a logarithmic scale for x. This diagram is called a
Bode plot and is commonly used to chart the frequency response of any filter.
The above shape is the main reason why bandwidth is usually defined as the range of frequencies for which the signal's amplitude is attenuated by no more than a factor of √2 (−3 dB) from a
reference gain (corresponding to low-frequency signals and/or DC in the case of a low-pass filter). As the power is the square of the amplitude, such an attenuation means that the power is
divided by 2, so the above is best called "half-power bandwidth".
This definition gives directly the "corner frequency" of any low-pass Butterworth filter, including the above first-order lowpass, which is the simplest Butterworth filter... The relation
isn't so simple in other cases.
The second-order passive RLC low-pass filter at left is like its first-order counterpart, except that the resistor R becomes the impedance R+jωL. Therefore, u/v is 1+(R+jωL)(G+jωC):
u / v = (1+RG) + jω (RC+LG) − ω^2 LC
We may cast this in a normalized form:
v / u = A / [ 1 + λ j ω/ω[0] − (ω/ω[0])^2 ]
A = 1 / (1+RG)
2πν[0] = ω[0] = √( (1+RG) / LC )
A = 1 / (1+RG) is the low-frequency attenuation, used as the 0 dB reference level in the normalized Bode amplitude plot, which charts the variations of the gain |v/u| in decibels against
the ratio of the pulsatance ω to the nominal pulsatance (ω[0]) on a logarithmic horizontal scale.
So normalized, the response of a second-order lowpass filter is characterized by the so-called damping λ. For the above actual circuit, it's useful to express λ by introducing the characteristic
resistance R[0] = √(L/C):
λ = ω[0] (RC+LG) / (1+RG) = ( R/R[0] + R[0]G ) / (1+RG)^½
For the common case where G = 0, this means that λ is simply R/R[0].
In the normalized lowpass transfer function 1 / ( 1 + λ s + s^2 ), different values of the damping λ make the corresponding second-order filter a member of one of the general families discussed
elsewhere on this page:
λ = 0 Perfect (ideal) resonator, no damping. R = 0 and G = 0.
λ = 1 Natural Chebyshev filter, with 1.25 dB ripple.
λ = √2 Butterworth filter (= 0 dB Chebyshev filter).
λ = √3 Bessel filter.
λ = 2 Linkwitz-Riley filter: two cascaded identical first-order filters.
λ > 2 Two first-order filters with distinct corner frequencies
(whose geometric mean is 1 and whose sum is λ).
To clarify some of the technical literature pertaining to Chebyshev filters, it's important to distinguish the "corner" frequency (compatible with the above "nominal" frequency) from what's
best called the "cutoff" frequency... The cutoff frequency of a lowpass "equiripple" Chebyshev filter is defined as the highest frequency for which the gain is equal to one of the bandpass
minima (all such minima are equal in a Chebyshev filter). The cutoff frequency coincides with the corner ("nominal") frequency only in the case of a "natural" Chebyshev filter (like the 1.25
dB second-order Chebyshev filter plotted above). For high-ripple Chebyshev filters, the cutoff frequency is higher than the corner frequency. For low-ripple Chebyshev filters, it's lower (and
the term "cutoff frequency" is not recommended in that case).
(2007-06-16) The Sallen-Key lowpass filter
Active second-order filters and/or resonators without inductors.
When active components are used for signal processing, the DC gain of a lowpass filter should be kept close to unity. A larger gain would impose limitations on the input amplitudes (in order to
prevent saturation of the output signal) whereas a much smaller gain would worsen the signal-to-noise ratio (SNR or S/N).
This second-order active lowpass filter of unity gain was among the designs introduced in 1955 by R.P. Sallen and E. L. Key (Lincoln Labs of MIT).
"A Practical Method of Designing Active Filters" by R.P. Sallen and E.L. Key.
IRE Transactions on Circuit Theory CT-2, 74 -85 (1955)
It can be used as a building block (along with a first-order stage) to realize all the lowpass filters described on this page, without using any inductor.
2πν₀ = ω₀ = 1 / RC
λ = ( x + 1/x ) y
v = u / [ 1 + λ jω/ω₀ - (ω/ω₀)² ]
The value of λ in a normalized second-order factor 1/(1 + λs + s²) may thus be obtained from any convenient combination of the parameters x and y.
For example, with equal resistors (x=1) we have λ = 2y, and a second-order Butterworth filter (λ=√2) is obtained for y = 1/√2 (i.e., C₁ = 2 C₀).
In practice, capacitors may only be available in a few standard values. Picking coarse values for the capacitors, we may use the following formula to compute precise matching values for the two
resistors R₋ and R₊.
R± = R ( z ± √(z²-1) )   where   R = 1 / [ ω₀ √(C₀C₁) ]   and   z = (λ/2) √(C₁/C₀)
We just have to choose capacitor values so that z > 1.
The voltage response does not depend on which resistor goes where, but you may want to make the input impedance larger (and/or reduce the power involved) by placing the larger resistance R₊ on
the input side.
Numerically, when z is large, the above expression yields a mediocre way to compute R₋ with ordinary floating-point arithmetic (because subtracting nearly equal quantities entails a
great loss of precision). Instead, we compute R₊ first (full precision is retained when quantities of like signs are added) then obtain R₋ from the following formula (no
precision is lost in multiplications or divisions).
R₋ = R² / R₊
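A minimal Python sketch of this numerically robust procedure (the component values in the example are hypothetical):

    import math

    def sallen_key_resistors(w0, C0, C1, lam):
        """Matching resistor pair for a unity-gain Sallen-Key lowpass.

        Computes R+ first (a well-conditioned sum of like signs), then
        R- = R^2 / R+, avoiding the cancellation in R*(z - sqrt(z^2 - 1)).
        """
        R = 1.0 / (w0 * math.sqrt(C0 * C1))
        z = 0.5 * lam * math.sqrt(C1 / C0)
        if z < 1.0:
            raise ValueError("capacitor ratio too small: need z >= 1")
        R_plus = R * (z + math.sqrt(z * z - 1.0))
        R_minus = R * R / R_plus
        return R_minus, R_plus

    # Hypothetical example: a 1 kHz Butterworth stage with C0 = 10 nF, C1 = 33 nF.
    w0 = 2 * math.pi * 1000.0
    print(sallen_key_resistors(w0, 10e-9, 33e-9, math.sqrt(2)))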
(2007-06-09) Low-pass Butterworth filters
The lowpass filters with the flattest low-frequency responses.
Such filters are named after the British radio engineer Stephen Butterworth (1885-1958) who first described them in 1930.
"On the Theory of Filter Amplifiers" (1930) by Stephen Butterworth
Experimental Wireless and the Wireless Engineer, vol. 7, pp. 536-541.
Little is known about the life of Stephen Butterworth (MSc, OBE). He served in the British National Physical Laboratory (NPL) and joined the Admiralty scientific staff in 1921. He
retired from the Admiralty Research Laboratory in 1945 and passed away in 1958.
The normalized transfer function of an order-n lowpass Butterworth filter is of the form 1/Bₙ(s), where Bₙ is a Butterworth polynomial of order n.
│ n    │ Normalized Butterworth Polynomial Bₙ(s)                              │
│ 0    │ 1                                                                    │
│ 1    │ 1 + s                                                                │
│ 2    │ 1 + s √2 + s²                                                        │
│ 3    │ ( 1 + s ) ( 1 + s + s² )                                             │
│ 4    │ ( 1 + s √(2-√2) + s² ) ( 1 + s √(2+√2) + s² )                        │
│ 5    │ ( 1 + s ) ( 1 + s (√5-1)/2 + s² ) ( 1 + s (√5+1)/2 + s² )            │
│ 6    │ ( 1 + s (√6-√2)/2 + s² ) ( 1 + s √2 + s² ) ( 1 + s (√6+√2)/2 + s² )  │
│ 2m   │ ∏ (k = 1 .. m)  [ 1 + 2 s sin π(2k-1)/2n + s² ]                      │
│ 2m+1 │ ( 1 + s )  ∏ (k = 1 .. m)  [ 1 + 2 s sin π(2k-1)/2n + s² ]           │
For any n, |Bₙ(j)| = √2, so the attenuation of a Butterworth filter at its corner frequency is always -3 dB (well, -3.0103 dB, to be more precise).
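This follows from the maximally-flat property |Bₙ(jω)|² = 1 + ω²ⁿ, which also makes the attenuation easy to compute at any frequency; a minimal Python check:

    import math

    def butterworth_gain_db(w, n):
        """Gain in dB of an order-n Butterworth lowpass: |B_n(jw)|^2 = 1 + w^(2n)."""
        return -10.0 * math.log10(1.0 + w ** (2 * n))

    for n in (1, 2, 4, 8):
        print(n, butterworth_gain_db(1.0, n))   # always -3.0103 dB at the corner
        print(n, butterworth_gain_db(2.0, n))   # one octave up: roll-off steepens with order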
(2007-06-20) Linkwitz-Riley crossover filter
2 cascaded lowpass Butterworth filters and 2 cascaded highpass filters.
Cascading two identical lowpass Butterworth filters of order n gives a lowpass filter of order 2n with a 6 dB attenuation at the corner frequency.
This is particularly useful in combination with a similar highpass filter tuned to the same frequency... Since both output amplitudes are halved at that crossover frequency, their sum remains at
the 0 dB level.
Such a feature is desirable in the design of audio systems, where low frequencies are directed to one loudspeaker and high frequencies to another. Modern professional active audio crossovers are
often based on a fourth-order Linkwitz-Riley design (LR-4). With digital signal processing (DSP), Linkwitz-Riley crossovers of order 8 are available (LR-8).
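A minimal Python check of the LR-4 behavior at the crossover frequency (each cascaded second-order Butterworth section is 6 dB down there, and the in-phase sum of the two outputs stays at 0 dB):

    import math

    def lr4_outputs(w):
        """LR-4 crossover: squared second-order Butterworth lowpass and highpass."""
        s = 1j * w
        b2 = 1 + math.sqrt(2) * s + s * s      # Butterworth polynomial B2(s)
        low = (1 / b2) ** 2                    # cascaded (squared) lowpass section
        high = (s * s / b2) ** 2               # cascaded (squared) highpass section
        return low, high

    low, high = lr4_outputs(1.0)               # at the normalized crossover frequency
    print(20 * math.log10(abs(low)))           # -> -6.02 dB
    print(20 * math.log10(abs(high)))          # -> -6.02 dB
    print(abs(low + high))                     # -> 1.0: the summed response stays at 0 dB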
The basic idea was credited to Russ Riley in a paper published by Siegfried Linkwitz in 1976 (both Linkwitz and Riley were HP R&D engineers).
□ "Active Crossover Networks for Non-coincident Drivers"
Siegfried H. Linkwitz, J. Audio Eng. Soc., vol. 24, pp. 2-8 (1976).
□ Linkwitz-Riley Crossovers: A Primer by Dennis Bohn (Rane, 2005).
Linkwitz-Riley active crossovers were first made commercially available by Sundholm and Rane in 1983. Nowadays, this may well be the most popular design for professional audio crossovers.
The basic properties of Chebyshev polynomials can be put to good use in filter design, by explicitly allowing ripples of amplitude ε in the frequency response.
T₀(x) = 1
T₁(x) = x        Tₙ₊₂(x) = 2x Tₙ₊₁(x) - Tₙ(x)
T₂(x) = -1 + 2x²
T₃(x) = -3x + 4x³
T₄(x) = 1 - 8x² + 8x⁴
T₅(x) = 5x - 20x³ + 16x⁵
T₆(x) = -1 + 18x² - 48x⁴ + 32x⁶
T₇(x) = -7x + 56x³ - 112x⁵ + 64x⁷
T₈(x) = 1 - 32x² + 160x⁴ - 256x⁶ + 128x⁸
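The tabulated polynomials follow directly from the recurrence; a minimal Python sketch returning coefficients in ascending powers of x:

    def chebyshev(n):
        """Coefficients of T_n (ascending powers of x), via T_{n+2} = 2x*T_{n+1} - T_n."""
        t_prev, t_curr = [1], [0, 1]                    # T_0 = 1, T_1 = x
        if n == 0:
            return t_prev
        for _ in range(n - 1):
            shifted = [0] + [2 * c for c in t_curr]     # 2x * T_{k+1}
            padded = t_prev + [0] * (len(shifted) - len(t_prev))
            t_prev, t_curr = t_curr, [a - b for a, b in zip(shifted, padded)]
        return t_curr

    print(chebyshev(6))   # -> [-1, 0, 18, 0, -48, 0, 32], matching the table above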
The parametrization of Cauer filters is general enough to include Butterworth filters and both types of Chebyshev filters.
Those filters are named after the German scientist Wilhelm Cauer (1900-1945). They're also called elliptic filters, complete Chebyshev filters or Zolotarev filters to honor the work of Egor
Zolotarev (1847-1878) whose results were applied to filter theory by Wilhelm Cauer in 1933.
The Optimum "L" filter, or Legendre filter, was introduced in 1958 by Athanasios Papoulis (1921-2002). Among all filters with a monotonic frequency response, the Legendre filter has the maximal
roll-off rate. Its features are thus intermediate between the slow roll-off of a Butterworth filter (which is monotonic with unimodal derivatives) and the faster roll-off of a (non-monotonic)
Chebyshev filter.
The Gegenbauer polynomials are a generalization of the Legendre polynomials (which correspond to the special case λ = ½). They are named after Leopold Gegenbauer (1849-1903).
For a given value of λ, the Gegenbauer polynomials are recursively defined:
□ C₀(x) = 1
□ C₁(x) = 2λx
□ Cₙ(x) = (1/n) [ (2n+2λ-2) x Cₙ₋₁(x) - (n+2λ-2) Cₙ₋₂(x) ]
The generating function of those Gegenbauer polynomials is:
( 1 - 2xt + t² )^(-λ) = ∑ₙ Cₙ(x) tⁿ
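A minimal Python sketch of this recurrence, using exact rational arithmetic; setting λ = ½ reproduces the Legendre polynomials, as stated above:

    from fractions import Fraction

    def gegenbauer(n, lam, x):
        """Evaluate C_n(x) by the three-term recurrence given above."""
        if n == 0:
            return Fraction(1)
        c_prev, c_curr = Fraction(1), 2 * lam * x
        for k in range(2, n + 1):
            c_prev, c_curr = c_curr, ((2 * k + 2 * lam - 2) * x * c_curr
                                      - (k + 2 * lam - 2) * c_prev) / k
        return c_curr

    # Sanity check: lam = 1/2 gives the Legendre polynomial P_2(x) = (3x^2 - 1)/2.
    lam, x = Fraction(1, 2), Fraction(3, 10)
    print(gegenbauer(2, lam, x))    # -> -73/200, i.e. (3*(3/10)^2 - 1)/2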
Pochhammer symbols are used below for orders beyond n = 5
│ n │ Ultraspherical Gegenbauer Polynomial Cₙ(x) │
│ 0 │ 1 │
│ 1 │ 2λx │
│ 2 │ -λ + 2λ(λ+1) x² │
│ 3 │ -2λ(λ+1) x + (4/3) λ(λ+1)(λ+2) x³ │
│ 4 │ (1/2) λ(λ+1) - 2λ(λ+1)(λ+2) x² + (2/3) λ(λ+1)(λ+2)(λ+3) x⁴ │
│ 5 │ λ(λ+1)(λ+2) x - (4/3) λ(λ+1)(λ+2)(λ+3) x³ + (4/15) λ(λ+1)(λ+2)(λ+3)(λ+4) x⁵  =  (λ)₃ x - (4/3) (λ)₄ x³ + (4/15) (λ)₅ x⁵ │
│ 6 │ -(1/6) (λ)₃ + (λ)₄ x² - (2/3) (λ)₅ x⁴ + (4/45) (λ)₆ x⁶ │
│ 7 │ -(1/3) (λ)₄ x + (2/3) (λ)₅ x³ - (4/15) (λ)₆ x⁵ + (8/315) (λ)₇ x⁷ │
│ 8 │ (1/24) (λ)₄ - (1/3) (λ)₅ x² + (1/3) (λ)₆ x⁴ - (4/45) (λ)₇ x⁶ + (2/315) (λ)₈ x⁸ │
(2007-06-10) Bode phase plot. Bayard-Bode relations.
The correlation between phase delay and attenuation slope
If G = |G| exp(jφ) is the complex gain of a discrete low-pass filter, the following approximate relation holds far from its corner frequencies, because it holds far from the corner frequency
of every elementary such filter (the transfer function of a higher-order filter is the product of transfer functions of order 1 or 2):
φ ≈ (π/2) d( Log |G| ) / d( Log ω )
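A quick numerical check of this approximation on a first-order lowpass (a minimal Python sketch; far above the corner, the log-log slope tends to -1 and the phase to -π/2):

    import math

    w0 = 1.0
    def log_gain(w):                       # log |G| for a first-order lowpass 1/(1 + jw/w0)
        return -0.5 * math.log(1 + (w / w0) ** 2)

    w = 100.0 * w0                         # far above the corner frequency
    h = 1e-6
    slope = (log_gain(w * (1 + h)) - log_gain(w)) / math.log(1 + h)   # d log|G| / d log w
    phase = -math.atan(w / w0)             # exact phase of 1/(1 + jw/w0)
    print(slope * math.pi / 2, phase)      # both close to -pi/2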
The Bayard-Bode relations were developed in 1936 by Marcel Bayard (1895-1956, X1919-S).
(2007-06-10) Group delay and Bessel-Thomson filters
Optimizing phase linearity and group delay to preserve signal shape.
The class of orthogonal polynomials named after the German mathematician and astronomer (Friedrich) Wilhelm Bessel (1784-1846) was only introduced in 1948 by H.L. Krall and O. Frink. The filters
themselves were first presented by W.E. Thomson in 1949 and are best called Bessel-Thomson filters (BT for short).
"Delay Networks Having Maximally Flat Frequency Characteristics"
W.E. Thomson. Proc. IEE, part 3, vol. 96, pp. 487-490 (Nov. 1949).
The group delay of a filter whose gain is G = |G| exp(jφ) is defined to be:
τ_g = - dφ / dω
The transfer function is qₙ(0)/qₙ(s), where qₙ is the n-th reverse Bessel polynomial, as tabulated below:
q₀(s) = 1
q₁(s) = 1 + s        qₙ = (2n-1) qₙ₋₁ + s² qₙ₋₂
q₂(s) = 3 + 3s + s²
q₃(s) = 15 + 15s + 6s² + s³
q₄(s) = 105 + 105s + 45s² + 10s³ + s⁴
q₅(s) = 945 + 945s + 420s² + 105s³ + 15s⁴ + s⁵
q₆(s) = 10395 + 10395s + 4725s² + 1260s³ + 210s⁴ + 21s⁵ + s⁶
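The table follows directly from the recurrence; a minimal Python sketch that reproduces it (coefficients in ascending powers of s):

    def reverse_bessel(n):
        """Coefficients of q_n, via q_n = (2n-1) q_{n-1} + s^2 q_{n-2}."""
        q_prev, q_curr = [1], [1, 1]               # q_0 = 1, q_1 = 1 + s
        if n == 0:
            return q_prev
        for k in range(2, n + 1):
            term1 = [(2 * k - 1) * c for c in q_curr]
            term2 = [0, 0] + q_prev                # multiply q_{k-2} by s^2
            size = max(len(term1), len(term2))
            term1 += [0] * (size - len(term1))
            term2 += [0] * (size - len(term2))
            q_prev, q_curr = q_curr, [a + b for a, b in zip(term1, term2)]
        return q_curr

    print(reverse_bessel(4))    # -> [105, 105, 45, 10, 1], matching the table above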
(2007-06-13) Linear Phase Equiripple Filters
Ripples allow better group delay flatness than with Bessel filters.
These filters are to Bessel filters with respect to group delay what Chebyshev filters are to Butterworth filters with respect to amplitude gain. In either case, better pass-band flatness of the
frequency response for the desired property is achieved by allowing some ripples, foregoing the strict monotonicity featured in Butterworth filters (for amplitude gain) or Bessel filters (for
group delay).
(2007-06-14) DSL filters (ADSL over POTS)
Allowing POTS below 3400 Hz and blocking digital data above 25 kHz.
"Plain Old Telephone Service" (POTS) requires only the voiceband (300 Hz to 3400 Hz) corresponding to the spoken human voice. PCM digitalized voice corresponds to the 0-4 kHz range (8 kHz sampling rate).
This is strictly for standard telephony (voice). By contrast, "CD quality" digital audio involves a 44.1 kHz sampling rate, corresponding to an upper audio limit of 22.05 kHz. The "audio
range" is most often quoted as going from 20 Hz to 20 kHz, although you've certainly not heard a 20 kHz tone since you were an infant (and never will again, if you ever did)... The highest
vocal note in classical repertoire is G7 (3136 Hz). The last key on an 88-key grand piano is at 4186 Hz.
The final "twisted pair" which goes to the telephone subscriber is able to carry a much broader signal, up to 1.1 MHz or more. ADSL service makes use of that entire 0-1104 kHz band by dividing it
into 256 channels, each 4.3125 kHz wide.
Those channels are numbered from 0 to 255. The lowest one is the voiceband reserved for POTS. Next are 5 silent channels which provide a wide gap (from 4 kHz to 25 kHz) so a simple so-called "DSL
filter" can safely block the digital frequencies (above 25.875 kHz) for POTS devices (telephone and/or FAX).
The remaining 250 channels, from 25.875 kHz to 1104 kHz, are used specifically for digital service. With ADSL, there's typically much more traffic downstream (downloading) than upstream
(uploading). Only a small portion of the bandwidth is allocated to upstream traffic (normally, the 26 channels from 25.875 kHz to 138 kHz, but this can be increased to 276 kHz per "Annex M" of
the ADSL2 standard). This explains the "A" for "asymmetric" in the ADSL acronym; such an Asymmetric Digital Subscriber Line is nominally 8.92 times faster one way (223+1 download channels) than
the other (25+1 upload channels). In practice, a 4 to 1 ratio seems more common nowadays.
A third-order lowpass filter with a nominal corner frequency of 3243.375 Hz will produce an attenuation at 25.875 kHz roughly equal to the cube of the frequency ratio (1/8). This means an
amplitude ratio of less than 0.002 (-54 dB).
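A back-of-the-envelope check of that estimate in Python:

    import math

    f0 = 3243.375           # nominal corner frequency of the third-order lowpass (Hz)
    f = 25875.0             # lowest ADSL data frequency (Hz)
    ratio = (f0 / f) ** 3   # third-order roll-off: amplitude falls as the cube of f0/f
    print(ratio, 20 * math.log10(ratio))   # -> about 0.002, i.e. roughly -54 dB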
The characteristic impedance of a telephone line is 600 Ω.
Order-4 DSL filter by Ben Kamen.
MathFiction: The Fairytale of the Completely Symmetrical Butterfly (Dietmar Dath)
I have long thought that Emmy Noether deserved to be the heroine of a work of mathematical fiction. I had even begun writing a story of my own to fill this gap. But, have no fear, since Dietmar Dath
has admirably contributed this piece of "magical realism".
For the most part, this is a biography of Emmy Noether from her childhood, through her education, her rise to prominence in the mathematical circles of Germany of the early 20th century, and finally
her "exile" in the United States during the Nazi years. It does not shy away from serious issues like sexism and anti-semitism. It does not actually discuss her mathematical results in detail,
despite including a detailed definition of the mathematical terms ring and ideal. In particular, Noether's Theorem -- an important result in mathematical physics connecting conservation laws (part of
analysis) and symmetries (part of algebra) -- is only hinted at by the talking butterfly.
Oh, did I forget to mention the talking butterfly that knows about supersymmetric theories of particle physics when Emmy was a child? That's the magical part. It is what makes this into a work of
fiction, which I like, and its appearance also helps to soften the sadness of her tragic death. However, to the extent that the "hints" from the butterfly seem to lessen Noether's achievement I
cannot help feeling that this is a bit unfair.
That's my only complaint. Otherwise, this is a beautifully written piece. Best of all, it saves y'all from having to read my attempt at incorporating Emmy Noether into a story!
Printed from http://tektonics.org/piwrong.php
Is the Bible wrong about pi?
Video version!
1 Kings 7:23 And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about.
(see also 2 Chron. 4:2)
Some critics say that the measurements given for the circular bath do not give a proper value for pi. There are a couple of answers to this, one of which we give a link to below, and which is better
than the one I have here. The more common answer is that these verses give an estimate of pi that is rounded to the nearest full digit.
Objection: The fact is that 30 cubits is not the correct answer. If you say that 31.4 is not the correct answer either, then I will allege that your 31.4159265 figure is incorrect as well. Following
the stream of logic you have set in motion, there is no correct answer, because every answer involves rounding. Any answer would be automatically false.
Of course there is a certain category error here, since the value of pi is (so we are told by the mathematicians) one of those things that we can never provide the "correct" answer for -- it goes
on and on. So the 1 Kings writer would have either had to estimate or else he would still be writing today.
You are assuming the answer involves rounding without proving as much. The answer is wrong until you can prove it results from rounding. You can't allege it's the result of rounding until I prove
it's not.
Despite this, it is well-known and accepted that ancient estimates of distance, length, etc. were not always given down to the levels of our modern measurements (though see below). Thus it is the
critic's burden to show that rounding is not involved, if anything, since rounding was the norm.
If guesswork is going to be admissible, then many biblical contradictions could be explained away by mere conjecture and theorizing. Nearly every numerical contradiction in the OT, for example, could
be lightly dismissed by simple reference to the "rounding" defense.
As we have just noted, however, the "pi" category is of a rather different nature, and rounding was standard procedure in the ancient world. Therefore, this "slippery slope" warning is without
substance. I expect accuracy to at least two orders of magnitude, which the ancients understood and depended on themselves, so my demand isn't unreasonable.
The ancients did measure pi more precisely in some cases -- but this is found in places like the Rhind Papyrus, a book of mathematical equations. The Kings and Chronicles writers were evidently
literate, but there is no evidence that they were mathematicians. We would rightly expect accuracy of greater order from specialists in mathematics like the writer of the Rhind Papyrus, and from
Babylonian astrologers. But such an expectation is unreasonable from a non-mathematician.
Put it this way: If we ask how many gallons of fuel a rocket contains, we expect a detailed answer like "4,942,827.78 gallons" from a NASA engineer, if he is involved in a technical discussion with
other engineers. If he's talking to the press, and he is savvy, he'll say "4.9 million gallons" rather than bewilder the scientifically inert with more detail. Your average hobbyist (or even a
reporter) will say "5 million gallons".
Are any of them incorrect? No, because there is a semantic contract that correlates the level of precision with the level of expertise. Unless the Bible authors were mathematicians on the level of
Archimedes (one of the other few ancients to go this far in looking at pi), then it is unreasonable to expect precision to that level from them.
For more info, here are some interesting sites:
For another answer, from my friends at CMI, see here.
Here's an interesting result. Skeptic Sam Gibson wrote in to a person styled "Dr. Math" on this issue, and the results of the correspondence are here. On his own site, Sam argues that rounding off pi
equates with "rounding off" books of the Bible and taking, say, Romans 7 out of Romans. That's an apples and oranges equation, as his encounter with Dr. Math shows.
A helpful reader has also made this point:
The Hebrew Rabbi and writer of the earliest known Hebrew geometry textbook (Mishnat ha-Middot), Nehemiah, states, "Now it is written: And he made the molten sea of ten cubits from brim to brim,
round in compass, and yet its circumference is thirty cubits, for it is written: And a line of thirty cubits did compass it round about. What is the meaning of the verse, And a line of thirty
cubits, and so forth? Nehemiah says: Since the people of the world say that the circumference of a circle contains three times and one seventh of the thread, take off that one seventh for the
thickness of the walls of the sea on the two brims, then there remain, Thirty cubits did compass it round about."
And another has added:
...maybe the circumference actually measured 30.31213 cubits and the diameter measured 9.64866 cubits. The ancients who perhaps hated fractions as much as school kids do today, simply gave the
numbers as 30 and 10 respectively. Why should they not? We do know that they were not into the precision fetish that we are today. Also, if the Bible had to go into as much detail as Skeptics
want, I hardly would be carrying it to church. I'd need a wheelbarrow.
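For what it's worth, the reader's hypothetical figures do check out numerically; a quick Python verification (numbers taken from the comment above):

    circumference, diameter = 30.31213, 9.64866    # the reader's hypothetical measurements
    print(circumference / diameter)                # -> 3.14159..., consistent with pi
    print(round(circumference), round(diameter))   # -> 30 and 10, the figures in 1 Kings 7:23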
How to make a circle skirt: full, 3/4, 1/2, 1/4
I am excited to have finally finished my 'how to make a circle skirt' tutorial! YAY! Follow the link below to a downloadable PDF, otherwise just follow the instructions in the images below. Feedback
or comments are always welcome, let us know if you try it out!
82 comments:
1. Great post!
2. thank you so much for this tutorial, you helped me alot! THANKS!
By the way, I've just found your blog and I have to say that I'm really happy that I did it :) your style is so chic! and your photos also.. so I just wonder.. if we could follow each other.. But
even if your answer is “no” I’ll be still your reader. With love, http://chocarome.blogspot.com/
3. really cool post!
New post!
Dress up for armageddon
4. Rarely you can find something so useful like this on blogs... Well done and thanks for sharing!
5. Thank you so much! I have never had the patience to sit down and figure all this out, so I appreciate this even more!!!
6. I saw a skirt online that I wanted. It was too expensive so I searched "flippy skirt" which is what they are called sometimes. Then I realised... hey! This is just a circle skirt!! So I want to
make one like the one I saw. It's made out of a wintery tartan material. Next challenge? I think so.
7. I'm currently making my very first circle skirt with Casey of Elegant Musings and her Sew-Along, so I'm very grateful to see directions written up here as well. Thanks! I can't wait until I have
my very first circle skirt to wear!
8. Awesome tutorial! We love this post! We're very into skirts and we love that many people are learning to sew their own. You get your own creativity out and it's more fun when you make it by yourself.
Much love from the SABO SKIRT girls!
shop: www.saboskirt.com
blog: www.saboskirt.blogspot.com
9. love your tables! WOW! such wonderful tutorial and such hard work! (says me in total admiration:) )
10. Thank you for a great tutorial but I just wanted to check why you minus 1cm for seam allowance... Wouldn't you add it?
11. hi Jo, thanks for your question. By subtracting the radius you are actually adding more fabric. I know it feels weird....I always have to think about it when I am making a skirt. If you draw a
circle which symbolises your waist, then draw a smaller circle inside that, you will see that the smaller circle gives you more fabric to work with. I hope this makes sense and that it is
helpful. thanks again for the question
12. thanks for the tutorial, one thing i could not figure out is the n in the formula. for ex, if the waist is 90 cm how to go about the calculation of r and what is the value of n please
13. Hi Laitha! The symbol is actually a pi symbol. If your calculator doesn't have a pi button just use 3.14. So if you are making a full circle skirt the formula is:
r = 90/(2 x 3.14) = 14.32cm
Sorry if the symbol was confusing.
Hope that helps:) let us know how you go.
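For anyone who would rather let a computer do this arithmetic, here is a minimal Python sketch of the same calculation (the 1 cm seam-allowance subtraction follows the convention used later in these comments; all numbers are examples):

    import math

    def waist_radius(circumference_cm, fraction=1.0, seam_allowance_cm=1.0):
        """Radius to mark for the waist circle of a circle skirt.

        fraction is the portion of a full circle used by the pattern
        (1 for a full circle, 0.75, 0.5, 0.25, ...). Subtracting the seam
        allowance effectively adds fabric, as explained in the comments.
        """
        return circumference_cm / (fraction * 2 * math.pi) - seam_allowance_cm

    for frac in (1.0, 0.75, 0.5, 0.25):
        print(frac, round(waist_radius(90, frac), 2))   # 90 cm waist, as in the example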
14. Thanks for the tutorial, Paola from Italy
15. This is a great tutorial. I have a dress pattern which would really benefit from a full skirt rather than the one it's designed with so I'm going to use this.
Can I just check does this method allow for any ease on the finished garment, or should I add that to the waist measurement?
Ruthie x
16. You should not need to add more for ease, I don't. But if you are unsure you can always leave a little extra seam allowance:) Thanks for your question, I hope that I answered it. Let us know how
you go:)
17. Thanks for posting this tutorial. This is going on my "To Sew" list.
18. Hi there,
I'm not sure i've missed something, but how much material should I buy for this? I can't find it anywhere..
1. Hi, it depends on which skirt you are making. For the 1/4 skirt you might only need one metre. Whereas if you are making the full circle or 3/4 circle skirt you would be safe with 2 metres. It
also depends on the length that you want. Just also make sure you check the bolt length of the fabric (150 cm or 115 cm bolt length). Generally go with 2 metres:) I hope that my answer is not
too complicated, let me know if you need further clarification. Good luck, let us know how you go!
19. DO you have any other patterns available? I'm a first time sewer and this is the easiest instructions I've ever found!
1. Hi, thanks for your kind comment. I am constantly getting tutorials ready to post... I just need more hours in the day:) Hopefully you will find one of our upcoming tutorials useful. Thanks
again for your comment
20. Hey!
This is my first browse on your blog and I am already inspired to dust off the machine and get sewing!!
Thanks for simplifying the circle skirt, it's a great tutorial !!
1. Thanks heaps for your positive comment! We really appreciate it:)
2. Hi! Even though this is an old post, I am hoping for an answer. I am making the 1/4 skirt, but I am confused with how many pattern pieces I will need? It says that it is not needed to fold
the fabric, but I can't get it to make sense with just the one piece for an entire skirt?
3. You do just need to cut out one of your pattern pieces, not on a fold. You sew the two edges together and it creates a perfect ALine skirt.
21. HI!
I've started sewing a dress that I designed with a 3/4 fishtail circle skirt, but all the material keeps bunching in the back, do you have any suggestions on how to "even out" the bunching in the back?
1. Hi, sorry for this delayed response. I only just read your message. I am not sure why the material is bunching at the back. Perhaps the measurement for the waist was a little too much. Is it
bunching at the waist? or is it bunching somewhere else. What fabric have you used? I hope my answer is not too too late. If it is bunching at the waist you can just reduce the fullness of
the skirt, by pulling it in a little.... let me know where it is bunching.
Great that you have designed your own dress!
22. Wow. Thank you so much for posting this. Great great help. Especially for those just beginning to sew.
23. thank you for sharing this awesome tutorial. I am new at sewing and this looks like a project I would like to try.
I have a question, sorry if it is silly, as I said I have no experience sewing:
Would it be possible to use different fractions of the circle, such as 1/5, or 1/8? And if yes, how should the fabric be folded?
Thank you!
1. Not a silly question at all.
If you want a 1/5 circle skirt the equation would be r=(5xc)/(2 pi). If you are making this skirt you do not need to fold the fabric at all. If you choose to make a skirt which usually
requires folding and you can't get your head around how to fold it, just use your formula to cut out the waist circle. Look at the tutorial. On the 5th page of the tutorial, you will see that
I have shown 3 ways to make the skirt from the formula. Use the third 'waist circumference'. You just lay your fabric out and place the circle in the middle and measure it out from there.
.... I hope that makes sense. Let me know if you need further clarification. Thanks for your question:)
I am not sure what a 1/8 would look like, it would be very tight... I think... worth trying I suppose.
2. Thank you so much Mahaila!
24. Thank you so much. This was my first ever thing to make. Made high waist half circle mini skirt, still need to put the zipper in :) Didn't think that I'm able to make my own pattern for the skirt
but with your tutorial it was so easy. Thank you very very much for bringing many new skirts into my life :D
This is an addiction now :)
1. I am so glad that you found it useful, thanks for letting me know! I really appreciated the feedback:)
25. Great post! this is very detailed notes. Thank you very much!
1. Thanks, for the fantastic feedback. I hope all went well with your skirt:)
26. Wow! Just what I was looking for. Thanks for all the detail and pictures!
Hope I can be as successful with the skirts as you. :-))
1. Thanks for your lovely comment. I hope you went well with the skirt! Thanks again for the feedback.
27. What a great post, full of clear and detailed information. I never knew how to make a skirt until now just because I never found so good instructions. Great job.
I will link you in my blog one of these days if you don't mind. I will let you know when. Thanks again.
1. Thanks for this positive feedback. It is always great to hear:) Thanks for the link too!
28. step 1 - I have cut 2 circles
step 2 - tomorrow I am going to attach them both to a single waistband, add zip (unless I can think of something easier)and then hem with bias binding...
step 3 - add poodle applique (yet to be sourced!)
step 4 - watch daughter in starring role in primary school revue!
thankyou for such a straightforward pattern and instructions
1. Thanks for your lovely comment. I am glad that you found it easy to use. I love step 4! It must have been great seeing her up on stage:)
29. Hi,
Great tutorial. This is going to be first sewing project.
I am just confused about the zip; if it's a circle skirt (i.e. no openings)then will I need to cut throught the fabric so that the zip can open?
Thank you!
1. Thanks for your question. You will see that the circle is not a continuous. You will have to sew the opening together and you can add the zipper there. Cut your pattern out and you will see
the opening.
I hope that this makes sense:)
30. Amazing tutorial!..very well explained it and great technique!! thank you so much!...and never ever erase it, very please!!!
1. Thanks for your feedback! I am glad that you like tutorial:)
31. How do I know what the width and length of the waistband should be? I really am enjoying this pattern!
1. I'm sorry, I did figure out that the length of the waistband is the waist measurement plus the seam allowance, but what is the width of the waistband?
2. Hi, Thanks for your question. The width of the waistband depends on the look that you want. I usually make mine about 3 to 4 cm wide (or just over an inch wide). If you want the waistband to
be 3 cm wide, the width of the fabric piece that you cut out will be 3cm plus 3cm (so that you can fold it) plus the seam allowance (another 2cm). So approx. 8cm wide.
Thanks again for your question:) and good luck!
32. Thank you for your 'table of circle skirt dimensions.' I've referred to it when making skirts for my daughter and myself several times.
1. Thanks so much for your feedback. I am glad that you found the table useful!
33. This was really easy! Thanks for the tutorial. Plus I'm a math teacher and I can show the kids how people use pi in real life!
1. Your feedback is much appreciated! I am glad that both you and the kids found it useful:)
34. I was hoping to whip one together with some fabric I have for church today...but found that I only ended up with 1 1/3yd of 54" fabric. (I thought I get my fabric in 2 - 3 yd increments so
someone is on my unhappy list-maybe me) So I am assuming if I do the equation for 1/8 circle skirt and cut out 8 sections it would come together to make the skirt. The fabric is a weave, fairly
heavy..not dense and thick just slightly heavy and unpatterned (solid color) would it mess with the nap at all?
1. oh, and the skirt needs to be long..like 34 inches there in lays my problem. If it were to be short skirt there would be no issue.
2. Nope this won't work. Hmm. I'm stumped.
35. I've just cut out a full circle skirt using this tutorial, and it did indeed come out as one continuous circle, so I will now need to cut it open to put the zip in. Have I done something wrong? I
placed the fabric on the folds as shown :S
1. Hi, thanks for your question, if your fabric is big enough you can cut it out without having an opening. Though you will have to cut an opening in it... just so that you can put the zip in.
Good Luck!
36. I'm not sure if this question has been asked: In the US, fabric width is either 45-inches or 60-inches. This would obviously make a large difference in the size of the skirt -- and throw off the
math. What were the dimensions of your *starting* fabric, before you folded and started marking/cutting?? Ex: 2 yards of 45" or 1 yard of 60"?
1. Looking at my conversion chart - US 2 yards and 45 inches = 1.8 metres at 114.3 cm
US 1 yard - 0.91 = 152.4 cm. These measurements are pretty close to metric. We use 114 or 150 at 1 or 2 metre. So really it depends on how long you want your skirt and how much volume you
want. e.g. if I want a long, full circle skirt, I need to measure from my waist to the desired length. This should encompass the bolt length. If I want full circle skirt I need to get 2 yards
or more. If I want a short 1/4 skirt, perhaps I only need 1 yard at 0.91. I hope this makes sense.
37. Hi! This looks wonderful-- I have some fabric I've been meaning to make into a skirt forever, and I just never had the pattern for it. One question, though: if I were going to do an elastic
waistband rather than a hook and eye, would I need to change the dimensions of the skirt at all? Thank you!
1. It depends on how you are using the elastic. I assume you would use the elastic instead of the non elastic waist band. You would need to make the skirt with a stretch fabric as well,
otherwise you won't be able to get the skirt on or off. You would need to add more volume to the waist measurement. How much depends on the desired result.
38. This tutorial is helping me a lot. Thanks! Plus, I hate maths, so I was thrilled when your example waist size (to find the radius) was my waist size - the whole formula was written out for me!
Thanks again.
1. I am glad it was so helpful!
39. hello.... i really like your tutorial and can't wait to try it out, but my question is, how about if i like to have the waistband on the hip area (so when i wear the skirt, the waistband will be
on my hip). should i use my hip measurement for the circumference instead of the waist measurement? thank you very much,,,, ^^
1. Yes, my 1/4 skirt (the green and navy blue one above) is on the hip. You just need to use your hip measurement as opposed to the waist measurement
40. I LOVE your tutorial. Especially the hidden zipper trick! I'm wearing the cutest sage green linen circle skirt right now thanks to you! I'm going to paint some gears on the corner (where the
poodle would go) for a steampunk look. I cannot wait to make more and give them out and presents for my friends and their little girls!
Thank you thank you THANK YOU!
1. Sounds like you are being really creative with it! GREAT! So happy to have helped:) The feedback is very much appreciated!
41. I have a question... What is your seam allowance when sewing up the sides? And how did you factor it into the pattern?
I'm so confused because if you make the pattern based off of a 26inch waist, wouldn't the seam allowance come out of that 26inches? Shouldn't you add the seam allowance to the waist circumference
and then find the radius based off of that?
Please tell me what I'm doing wrong!
1. You need to subtract 1 cm or 5/8 inch when calculating the radius. For a full circle skirt the equation is c/(2 x 3.14). For a 26 inch waist the equation is 26/(2 x 3.14) = 4.14 inches (4 1/8
inches), minus (-) 5/8 inch seam allowance = 3 1/2 inch radius. This is your radius. I hope this helps:)
42. I've just made two circle skirts based on your calculations, and though they've come out nicely both of them have come out consistently larger at the waist and therefore sit lower than I'd
wanted. I've double checked my calculations to make sure that what I'm doing matches your instructions, but can't see what's gone wrong. All I can guess is that the waist shape being an oval
rather than circle has meant that in the maths some extra fabric has been added in. Is this a problem you've encountered before? Can you shed any light on what I'm doing wrong?
1. The calculations should always make some sort of circle. Not oval. Whole circle, 3/4 circle, 1/2 circle, 1/4 circle. What type of fabric are you using? Sometimes the grainline can affect the
end result. Have you made it with a muslin fabric to check the fit? Perhaps make a muslin, and check the fit, then subtract or add to your radius measurement. This is about all I can suggest
so far. Let me know how you go:) Thanks for the question
43. The information is so lovely and so useful, so thank you very much. Be sure i will use all of it, keeping it in my mind. Have a good luck. http://guncelyazar.net
1. Thanks for the feedback! Much appreciated!
44. Hi Fickle Sense! I'm so glad I found this through a Pinterest pin by the Aussie Curves blog! I was trying to download your PDF with the link, but it took me to the Adobe website although my Adobe
is all up to date. Is the PDF still available? I would love a copy... if you're willing to email it to me, please send it to lauragabriele.e@gmail.com Thank you so much!
45. Oh my I must be stupid, I do not understand what's going on on step 3!
1. In step 3, You cut a strip of tape or use a tape measure with the radius measurement. Pin the end of the tape to the corner of your paper and mark the paper with the radius measurement.
Slowly move the tape measure around until you have your entire radius marked out. The last picture shows what it should look like by the end. I hope this helps:)
46. Hello I use a 2cm seam allowance in all my sewing... Should I minus 2 cm instead of 1 ?
1. Yes!
47. Hi! Even though this is an old post, I am hoping for an answer. I am making the 1/4 skirt, but I am confused with how many pattern pieces I will need? It says that it is not needed to fold the
fabric, but I can't get it to make sense with just the one piece for an entire skirt?
1. You just need one piece. It does work, I have made the 1/4 skirt many times. Just give it a go on a scrap piece if you feel hesitant
Thanks for your question
2. hi,
i love it that you added the tutorial for 3/4 and 1/2. I want to make a dress with a circle skirt bottom but one that is not so full. i'm thinking of using the 1/2 circle pattern. please help
me understand how it will fit at the waist if i am using only half of the circumference.
3. Hi, it is still the same circumference for all skirts. The different equations give the dimensions for each type of skirt. Just use the equation for the particular skirt you want to make.
Give it a go and experiment if you are not sure.
Computer Scientists 'Prove' God Exists
Two scientists have formalized a theorem regarding the existence of God penned by mathematician Kurt Gödel. But the God angle is somewhat of a red herring -- the real step forward is the example it
sets of how computers can make scientific progress simpler.
As headlines go, it's certainly an eye-catching one. "Scientists Prove Existence of God," German daily Die Welt wrote last week.
But unsurprisingly, there is a rather significant caveat to that claim. In fact, what the researchers in question say they have actually proven is a theorem put forward by renowned Austrian
mathematician Kurt Gödel -- and the real news isn't about a Supreme Being, but rather what can now be achieved in scientific fields using superior technology.
When Gödel died in 1978, he left behind a tantalizing theory based on principles of modal logic -- that a higher being must exist. The details of the mathematics involved in Gödel's ontological proof
are complicated, but in essence the Austrian was arguing that, by definition, God is that for which no greater can be conceived. And while God exists in the understanding of the concept, we could
conceive of him as greater if he existed in reality. Therefore, he must exist.
Even at the time, the argument was not exactly a new one. For centuries, many have tried to use this kind of abstract reasoning to prove the possibility or necessity of the existence of God. But the
mathematical model composed by Gödel proposed a proof of the idea. Its theorems and axioms -- assumptions which cannot be proven -- can be expressed as mathematical equations. And that means they can
be proven.
Proving God's Existence with a MacBook
That is where Christoph Benzmüller of Berlin's Free University and his colleague, Bruno Woltzenlogel Paleo of the Technical University in Vienna, come in. Using an ordinary MacBook computer, they
have shown that Gödel's proof was correct -- at least on a mathematical level -- by way of higher modal logic. Their initial submission on the arXiv.org research article server is called
"Formalization, Mechanization and Automation of Gödel's Proof of God's Existence."
The fact that formalizing such complicated theorems can be left to computers opens up all kinds of possibilities, Benzmüller told SPIEGEL ONLINE. "It's totally amazing that from this argument led by
Gödel, all this stuff can be proven automatically in a few seconds or even less on a standard notebook," he said.
The name Gödel may not mean much to some, but among scientists he enjoys a reputation similar to the likes of Albert Einstein -- who was a close friend. Born in 1906 in what was then Austria-Hungary
and is now the Czech city of Brno, Gödel later studied in Vienna before moving to the United States after World War II broke out to work at Princeton, where Einstein was also based. The first version
of this ontological proof is from notes dated around 1941, but it was not until the early 1970s, when Gödel feared that he might die, that it first became public.
Now Benzmüller hopes that using such a headline-friendly example can help draw attention to the method. "I didn't know it would create such a huge public interest but (Gödel's ontological proof) was
definitely a better example than something inaccessible in mathematics or artificial intelligence," the scientist added. "It's a very small, crisp thing, because we are just dealing with six axioms
in a little theorem. … There might be other things that use similar logic. Can we develop computer systems to check each single step and make sure they are now right?"
'An Ambitious Expressive Logic'
The scientists, who have been working together since the beginning of the year, believe their work could have many practical applications in areas such as artificial intelligence and the verification
of software and hardware.
Benzmüller also pointed out that there are many scientists working on similar subject areas. He himself was inspired to tackle the topic by a book entitled "Types, Tableaus and Gödel's God," by
Melvin Fitting.
The use of computers to reduce the burden on mathematicians is not new, even if it is not welcomed by all in the field. American mathematician Doron Zeilberger has been listing the name Shalosh B.
Ekhad on his scientific papers since the 1980s. According to the New York-based Simons Foundation, the name is actually a pseudonym for the computers he uses to help prove theorems in seconds that
previously required page after page of mathematical reasoning. Zeilberger says he gave the computer a human-sounding name "to make a statement that computers should get credit where credit is due."
"human-centric bigotry" on the part of mathematicians, he says, has limited progress.
Ultimately, the formalization of Gödel's ontological proof is unlikely to win over many atheists, nor is it likely to comfort true believers, who might argue the idea of a higher power is one that
defies logic by definition. For mathematicians looking for ways to break new ground, however, the news could represent an answer to their prayers.
3. Simple method for predicting earthquake damage
• A precise and simple method was developed for carrying out seismic hazard analysis.
• This method allows simple calculation of the probability of structural damage and of the areas where operational safety is affected.
Decisions on where to apply seismic retrofit to railway lines, in which order of priority, and with what level of reinforcement require a seismic hazard evaluation method focused on
structural damage and operational safety.
The present research aims to yield an improved and more easily applicable method for evaluating the probability of seismic damage to railway lines. The method begins with the evaluation of seismic
intensities (maximum ground acceleration or PGA, and maximum ground velocity or PGV) and corresponding occurrence probabilities at each location alongside the target line, taking into account fault
properties and amplification characteristics of surface ground. A fragility curve combining the structural properties (natural period T, yield seismic coefficient Khy), seismic intensity and damage
(Fig. 1) is then proposed, based on non-linear dynamic analysis. A fragility curve evaluating the seismic intensity and train running safety is also proposed.
By incorporating the strength and occurrence probability of an expected earthquake into these fragility curves, it is possible using only four parameters (PGA, PGV, T, Khy) to distinguish structures
with high probability of suffering earthquake damage as well as the area where operational safety will be badly affected, without complicated earthquake response analyses. Furthermore, a simple
regression expression estimating the structural properties (T and Khy) only from the height of structure is proposed. This method is an economical solution for characterizing large numbers of
structures, even if they have not been examined in detail.
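The abstract does not reproduce its fragility-curve formula; purely as an illustrative sketch, a lognormal fragility curve (a common parametric form in seismic engineering) mapping PGA to a damage probability might look like this in Python, with all parameter values hypothetical:

    import math

    def damage_probability(pga, median_pga, beta):
        """Lognormal fragility curve: P(damage | PGA).

        median_pga is the intensity with 50% damage probability and beta the
        lognormal standard deviation; both would be fitted per structure class
        (e.g. from the natural period T and yield seismic coefficient Khy).
        """
        z = math.log(pga / median_pga) / beta
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

    print(damage_probability(400.0, 600.0, 0.5))   # hypothetical PGA values (gal)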
The above method was applied to hypothetical Japanese high speed lines. Results confirmed locations subject to significant seismic motion (Fig. 2, 0-10 km radius zone) and vulnerable spots
which did not suffer major seismic motion but had structures with weaker resistance (Fig. 2, within a radius of 15, 26, 45, 55 km) and where there was consequently a high risk of earthquake damage.
Title page for 86425018
References
Alexander, C. O. and C. T. Leigh, “On The Covariance Matrices Used in Value at Risk Models”, The Journal of Derivatives, Spring 1997, pp. 50-62.
Aussenegg, Wolfgang and Stefan Pichler, “Empirical Evaluation of Simple Models to Calculate Value-at-Risk of Fixed Income Instruments”, 1997, Working Paper.
Basle Committee on Banking Supervision, “Supervisory Framework for the Useof “Backtesting” in Conjunction with the Internal Models Approach to Market Risk Capital Requirements”, January 1996.
Basle Committee on Banking Supervision, “Amendment to The Capital Accord to Incorporate Market Risks”, January 1996.
Basle Committee on Banking Supervision, Technical Committee of the International Organization of Securities Commission (“IOSCO”), “Survey of Disclosures about Trading and Derivatives Activities of
Banks and Securities Firms”, November 1996, Join Report.
Beder, Tanya Styblo, ”VAR: Seductive but Dangerous”, Financial Analysts Journal, (September-October), 1995, pp.12-24.
Boudoukh, Jacob, Matthew Richardson and Robert F. Whitelaw , “Investigation of A Class of Volatility Estimators”, The Journal of Derivatives, Spring 1997, pp. 63-71.
Brown, Stephen J. and Philip H. Dybvig, “The Empirical Implications of the Cox, Ingersoll, Ross Theory of the Term Structure of Interest Rates”, Journal of Finance, Vol. 41, No. 3, July 1986, pp. 617-30.
Bulter, J. S. and Barry Schachter, “Improving Value-at-Risk Estimates by Combining Kernel Estimation with Historical Simulation”, 1996, Working Paper.
Christoffersen, P. F., “Evaluating Interval Forcasts”, Manuscript, Department of Economics, University of Pennsylvania, 1995.
Crnkovic C., Drachman J., “Quality Control”, Risk, Vol. 9, No. 9, September 1996, pp. 138-42.
Dimson, E. and P. R. Marsh, “Capital Requirements for Securities Firms”, Journal of Finance, Vol. 50, No. 3, pp. 821-51.
Duffie, Darrell and Jun Pan, “An Overview of Value at Risk”, The Journal of Derivatives, Spring 1997, pp. 7-49.
Estrella, Arturo, Darryll Hendricks, John Kambhu, Soo Shin, and Stefan Walter, “The Price Risk of Options: Measurement and Capital Requirements”, FRBNY Quarterly Review, Summer-Fall1994, pp.27-43.
Fong, Gifford and Oldrich A. Vasicek, “A Multidimensional Framework for Risk Analysis”, Financial Analysts Journal, July/August 1997, pp. 51-57.
Grundy, Brauce D. and Zvi Wiener , “The Analysis of VAR , Deltas and State Prices: A New Approach”, 1996, Working Paper.
Guerra, Jose Ismael Gonzalez and Karl Peter Rubach Cata, “Market Risk Measurement in the Mexican Financial Market”, 1997, Working Paper.
Harvey, Campbell R., “The Real Term Structure and Consumption Growth,” Journal of Financial Economics 22(1988), pp. 305-33.
Hendricks, Darryll, “Evaluation of Value-at-Risk Models Using Historical Data”, FRBNY Economic Policy Review, 1996, pp. 39-70.
Jackson, P., Maude D. J. and W. Perraudin, “Bank Capital and Value at Risk”, The Journal of Derivatives 4/3, Spring 1997, pp. 73-90.
Jorion, Philippe, “Value at Risk: The New Benchmark for Controlling Market Risk”, IRWIN, 1997.
Linsmeier, Thomas J. and Neil D. Pearson, “Risk Measurement: An Introduction to Value at Risk”, 1996, Working Paper.
Lopez, J. A., “Regulatory Evaluation of Value-at-Risk Models”, Working Paper, September 1996.
Merton, R. C. and A. F. Perold, 1993, “Theory of Risk Capital in Financial Firms”, Journal of Applied Corporate Finance, Vol. 6, No.3, pp. 16-32.
Singh, Manoj K., “Value at Risk Using Principal Components Analysis”, The Journal of Portfolio Management, Fall 1997, pp.101-112.
|
{"url":"http://thesis.lib.ncu.edu.tw/ETD-db/ETD-search/view_etd?URN=86425018","timestamp":"2014-04-18T00:13:40Z","content_type":null,"content_length":"11601","record_id":"<urn:uuid:e1e5fc7d-058b-44b8-987c-a4079dc6f88e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SPSSX-L archives -- July 2003 (#102), LISTSERV at the University of Georgia
Date: Wed, 9 Jul 2003 08:54:39 -0400
Reply-To: Mark Davenport <madavenp@OFFICE.UNCG.EDU>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Mark Davenport <madavenp@OFFICE.UNCG.EDU>
Subject: Re: Mitchell Meeusen's logistic question
Comments: To: Mitchell.Meeusen@mercer.com
Content-Type: text/plain; charset=US-ASCII
Logistic regression would appear to be the logical choice--assuming that
you are trying to determine the probability of someone being in the No
(or Yes) category given particular values on a set of predictors. There
is no reason why you can't use multiple predictors.
Go to Analyze--Regression--Binary Logistic
Enter your IVs as covariates. My understanding is that Base SPSS has
the Logistic procedure so you should not have any problems. I would
strongly suggest you look at a good book on logistic regression before
you start. Interpreting logistic regression results can be tricky. Try
Hosmer and Lemeshow's Applied Logistic Regression.
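For readers working outside SPSS, the same kind of model is easy to sketch in Python; a minimal example using statsmodels on synthetic data (all variable names and values are hypothetical, not from the original question):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 3))                    # three hypothetical predictors
    logit = 0.5 + 1.0 * X[:, 0] - 0.7 * X[:, 1]    # true log-odds (third predictor is noise)
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 0/1 flag response

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(model.summary())    # coefficients are log-odds; exponentiate for odds ratios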
Mark A. Davenport Ph.D.
Asst to the Vice Chancellor for Student Affairs/Research and
The University of North Carolina at Greensboro
149 Mossman Bldg.
Greensboro, NC 27402-6170
'An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem' -- J. W. Tukey
>>> "Meeusen, Mitchell" <Mitchell.Meeusen@MERCER.COM> 7/8/2003 4:45:21
PM >>>
I've installed the base system. Here's the question I'm trying to
> I'm trying to run a regression, with five or six predictors on a
> response variable. The response variable is a flag variable, 1 for
yes, 0
> for no. It is believed that logistic regression is the best way to
> this. Is this correct?
> Now, given that I'm trying to run a logistic regression, I have
> independent variables, but there's only space for one of them. Is
there a
> way to run a logistic regression on a flagged dependent variable
> multiple independent variables?
Fwd: Interpolation problem
15 Oct 17:49 2010
Re: Fwd: Interpolation problem
Moreland, Kenneth <kmorel <at> sandia.gov>
2010-10-15 15:49:12 GMT
If I can, I would like to expand the conversation a little bit because I don’t think the technical details are telling the whole story.
The root of the problem is that the interpolation of a scalar field based on the four corners of a square (or any quadrilateral) is in general ill-defined. We usually mean for the field to be linearly
interpolated between the points. That is, find some function of the form f(x,y) = a*x + b*y + c that gives the correct scalar for all four points. But, in fact, four points over-constrain the problem
and no such linear function exists.
So really, the differences you are seeing are the differences between how the internal VTK functions resolve the interpolation and how your rendering hardware is doing it. I haven’t looked at the
source code, but I’m assuming that VTK is using something like bilinear interpolation. Bilinear interpolation works by first interpolating the scalar on opposite edges and then interpolating again
between these edges to get a point in the interior. Bilinear interpolation has several advantages: it can be done independently of neighboring polygons and still be C0 continuous, it is easy to
implement, it is smooth in the interior, and it gives an interpolation that intuitively makes sense. Note, however, that bilinear interpolation is not linear. This is evident by your plot (which is a parabola).
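To make the difference concrete, here is a minimal Python sketch of bilinear interpolation on the unit square, using the worst-case corner values described in this thread (equal scalars on opposite corners). Along the diagonal the bilinear value traces the parabola (1-t)² + t², whereas a triangle split along that same diagonal would render a constant:

    def bilerp(f00, f10, f01, f11, x, y):
        """Bilinear interpolation on the unit square (f00 at (0,0), f11 at (1,1))."""
        bottom = f00 * (1 - x) + f10 * x    # first pass: interpolate along two opposite edges
        top = f01 * (1 - x) + f11 * x
        return bottom * (1 - y) + top * y   # second pass: interpolate between those edges

    # Worst case from the thread: equal scalars on opposite corners.
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(t, bilerp(1, 0, 0, 1, t, t))  # -> 1.0, 0.625, 0.5, 0.625, 1.0 along the diagonal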
Your graphics driver is much more concerned with speed. As such, it takes your quadrilateral and breaks it into triangles. This circumvents the whole interpolation problem because the three vertices
of the triangle perfectly constrain the linear function. It is also easy for the graphics hardware to compute the interior of the triangle. Of course, the result is not as, shall we say, pleasant as
bilinear interpolation. It is not smooth: There is a C1 discontinuity at the line where the quadrilateral was split into triangles. Also, this splitting is arbitrary. The split could just as easily
been made in the opposite direction. In that case, you would see a red line go from upper left to lower right instead of that blue line from lower left to upper right.
So the rendering is probably not what was intended when defining the scalar value on a quadrilateral. However, ParaView allows it because correcting the problem would make the rendering prohibitively
slow. Furthermore, it is rarely even noticeable. The square in this example is worst case. Not only do no linear functions fit, they are not even close. Thus, the different ways to resolve the issue
are dramatically different. In a practical application, this does not occur. The scalar values tend to more closely fit a linear function. If a quadrilateral like this occurred in a real data set, it
might be indicative of a meshing problem. Furthermore, real meshes have lots of facets. If this square was a small part of a much bigger surface, differences in interpolation are less meaningful.
In short, the interpolation your graphics hardware performs is sufficient for qualitative analysis (getting an overview of behavior), which is all it's really good for anyway. When you do quantitative
analysis (showing actual numbers in the data) such as in your plot, the more accurate interpolation models of VTK are used.
On 10/14/10 9:06 AM, "Andy Bauer" <andy.bauer <at> kitware.com> wrote:
2010/10/14 小縣信也 <so0208jp <at> gmail.com>
Hi Andy,
Thank you for replying.
Do you mean that the rendering image doesn't reflect the result of
interpolation ?
If so, what is the most common usage?
In what situation is the interpolation used ?
If your grid uses triangles then the image should match the interpolation for the typical node/point based interpolation.
2010/10/13 Andy Bauer <andy.bauer <at> kitware.com>:
> I think this is a rendering issue and not an interpolation issue. From the
> 2d plot you can see that it's properly interpolating the values. I think
> the quadrilateral is getting rendered as 2 triangles in which case the
> diagonal values appear to be constant since the 2 end points are at the same
> value.
> 2010/10/12 小縣信也 <so0208jp <at> gmail.com>
>> Hello
>> I'm sending the following e-mail again ,because nobody answered it.
>> Does anyone have information on my problem?
>> Shinya
>> ---------- Forwarded message ----------
>> From: 小縣信也 <so0208jp <at> gmail.com>
>> Date: 2010/10/7
>> Subject: Interpolation problem
>> To: paraview <at> paraview.org
>> Hello, paraview users
>> I drew the file “Sample_inter.vtk” in ParaView (ref: attached file).
>> I chose “Gouraud” in the Interpolation option.
>> The contour picture doesn’t seem to be interpolated from the 4 points.
>> However, the graph made by PlotOverLine shows the gradation of the
>> 4 points’ scalars.
>> Why are they different?
>> Does anyone know about this problem?
>> How can I make the contour picture interpolated by Gouraud?
>> I look forward to your reply to my inquiry.
>> Shinya Ogata
|
{"url":"http://permalink.gmane.org/gmane.comp.science.paraview.user/9886","timestamp":"2014-04-18T00:14:36Z","content_type":null,"content_length":"21899","record_id":"<urn:uuid:ebacb4a6-a27f-48d0-9651-2d23adc85e7c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
scipy.linalg.solve_sylvester(a, b, q)
Computes a solution (X) to the Sylvester equation (AX + XB = Q).
New in version 0.11.0.
Parameters:
    a : (M, M) array_like
        Leading matrix of the Sylvester equation
    b : (N, N) array_like
        Trailing matrix of the Sylvester equation
    q : (M, N) array_like
        Right-hand side
Returns:
    x : (M, N) ndarray
        The solution to the Sylvester equation.
Raises:
    LinAlgError
        If solution was not found
Notes
Computes a solution to the Sylvester matrix equation via the Bartels-Stewart algorithm. The A and B matrices first undergo Schur decompositions. The resulting matrices are used to construct an alternative Sylvester equation (RY + YS^T = F) where the R and S matrices are in quasi-triangular form (or, when R, S or F are complex, triangular form). The simplified equation is then solved using *TRSYL from LAPACK directly.
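A brief usage sketch (the matrices below are arbitrary examples, chosen so that A and -B share no eigenvalues and a unique solution exists):

>>> import numpy as np
>>> from scipy.linalg import solve_sylvester
>>> a = np.array([[4.0, 1.0], [0.0, 3.0]])          # (M, M), eigenvalues 4 and 3
>>> b = np.array([[2.0, 0.0, 0.0],
...               [1.0, 2.0, 0.0],
...               [0.0, 1.0, 5.0]])                 # (N, N); -b has eigenvalues -2, -2, -5
>>> q = np.ones((2, 3))                             # (M, N) right-hand side
>>> x = solve_sylvester(a, b, q)
>>> np.allclose(a @ x + x @ b, q)                   # X indeed satisfies AX + XB = Q
True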
|
{"url":"http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_sylvester.html","timestamp":"2014-04-20T16:00:19Z","content_type":null,"content_length":"8276","record_id":"<urn:uuid:2591765a-bf34-4ce3-a37c-adf902095c99>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math 0280 - Introduction to Matrices and Linear Algebra
Student Guidelines and Syllabus
About the course
The principal topics of the course include vectors, matrices, determinants, linear transformations, eigenvalues and eigenvectors, and selected applications.
Prerequisites
Math 0220 or equivalent, with a grade of C or better.
Textbook
The text for this course is Linear Algebra, A Modern Introduction, Third Edition by David Poole.
Course Objectives
Students who complete Math 0280 are expected to have mastered the fundamental ideas of linear algebra and to be able to apply these ideas to a variety of practical problems. More specifically, in
Math 0280 you will be expected to:
- explore and learn the core concepts associated with systems of linear equations, manipulation of matrices, linear transformations, orthogonality, and eigenvalues/eigenvectors;
- begin to think abstractly about certain of these topics;
- understand how these ideas can be used to solve problems and compute things.
Homework/quizzes/written assignments
Each week, you will be assigned some problems to write up and hand in. These assignments will be graded and returned. In addition, you will be provided with a list of practice problems to do, even
though they will not be handed in and graded. At the instructor's discretion there may be quizzes or written assignments.
Grading
Your course grade will be determined as follows:
• Two midterm exams: 40% (20% each)
• Final exam: 40%
• Written assignments/quizzes/homework assignments: 20%
Some sections may deviate slightly from this formula. Any variations will be announced by your instructor at the beginning of the term.
Calculators Policy
Calculators are NOT allowed on the quizzes, midterm examinations and the final exam.
Final Exam Policy
All sections will take a departmental final exam at a time and place to be scheduled by the registrar. You MUST attend the final exam.
Final Grade Policy
Your course grade will not exceed your final exam grade by more than one letter grade.
Exam Dates
See the class schedule for the dates of the two midterm exams and the final. The room of the final exam will be announced by your instructor.
Getting Help
Walk in tutoring is available in the Math Assistance Center (MAC) in Room 215 of the O'Hara Student Center. See http://www.mathematics.pitt.edu/about/math-assistance-center
Office Hours
Your instructor will announce the office hours.
Disability Resource Services
If you have a disability for which you are or may be requesting an accommodation, you are encouraged to contact both your instructor and the Office of Disability Resources and Services, 216 William
Pitt Union (412) 624-7890 as early as possible in the term. See http://www.studentaffairs.pitt.edu/drsabout
Academic Integrity
Cheating/plagiarism will not be tolerated. Students suspected of violating the University of Pittsburgh Policy on Academic Integrity will incur a minimum sanction of a zero score for the quiz, exam
or paper in question. Additional sanctions may be imposed, depending on the severity of the infraction.
On homework, you may work with other students or use library resources, but each student must write up his or her solutions independently. Copying solutions from other students will be considered
cheating, and handled accordingly.
|
{"url":"http://www.pitt.edu/~sysoeva/math280/280syllabus.html","timestamp":"2014-04-21T10:16:59Z","content_type":null,"content_length":"4181","record_id":"<urn:uuid:5bcee53d-584f-4e91-957f-7c59f7d3fd56>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about World’s Fair on (Roughly) Daily
Posts Tagged ‘World’s Fair’
Existentialist Star Wars (in French!)
Star Wars with a French Existentialist twist. Almost all the subtitles (except for little things like “Despair!” and “I die!” and a few others) are actually quotes from Jean-Paul Sartre. And obviously this will make no sense if you understand French. If you do know it, hit yourself in the head repeatedly before watching this. And then hit yourself repeatedly when you’re done.
More from creator OneMinuteGalactica here (Do be sure to check out “Luke Skywalker- Worst Scout Ever“)
As we steep in ennui, we might recall that it was on this date in 1889 that the Eiffel Tower opened to the public. The spire, now iconic of Paris, was designed by Gustave Eiffel (who also created
the armature for France’s largest gift to the U.S., the Statue of Liberty) and served as the entrance arch to the 1889 World’s Fair.
Gary Foshee, a collector and designer of puzzles from Issaquah, near Seattle, walked to the lectern to present his talk. It consisted of the following three sentences: “I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”
The event was the Gathering for Gardner [see here], a convention held every two years in Atlanta, Georgia, uniting mathematicians, magicians and puzzle enthusiasts. The audience was silent as
they pondered the question.
“The first thing you think is ‘What has Tuesday got to do with it?’” said Foshee, deadpan. “Well, it has everything to do with it.” And then he stepped down from the stage.
Read the full story of the conclave– held in honor of the remarkable Martin Gardner, who passed away last year, and in the spirit of his legendary “Mathematical Games” column in Scientific American–
in New Scientist… and find the answer to Gary’s puzzle there– or after the smiling professor below.
“I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”… readers may hear a Bayesian echo of the Monty Hall Problem on which (R)D has mused before:
The first thing to remember about probability questions is that everyone finds them mind-bending, even mathematicians. The next step is to try to answer a similar but simpler question so that we
can isolate what the question is really asking.
So, consider this preliminary question: “I have two children. One of them is a boy. What is the probability I have two boys?”
This is a much easier question: The way Foshee meant it is, of all the families with one boy and exactly one other child, what proportion of those families have two boys?
To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So we
can eliminate the GG, leaving us with just three options: BG, GB and BB. One out of these three scenarios is BB, so the probability of the two boys is 1/3.
Now we can repeat this technique for the original question. Let’s list the equally likely possibilities of children, together with the days of the week they are born in. Let’s call a boy born on
a Tuesday a BTu. Our possible situations are:
* When the first child is a BTu and the second is a girl born on any day of the week: there are seven different possibilities.
* When the first child is a girl born on any day of the week and the second is a BTu: again, there are seven different possibilities.
* When the first child is a BTu and the second is a boy born on any day of the week: again there are seven different possibilities.
* Finally, there is the situation in which the first child is a boy born on any day of the week and the second child is a BTu – and this is where it gets interesting.
There are seven different possibilities here too, but one of them – when both boys are born on a Tuesday – has already been counted when we considered the first to be a BTu and the second on any
day of the week. So, since we are counting equally likely possibilities, we can only find an extra six possibilities here.
Summing up the totals, there are 7 + 7 + 7 + 6 = 27 different equally likely combinations of children with specified gender and birth day, and 13 of these combinations are two boys. So the answer
is 13/27, which is very different from 1/3.
It seems remarkable that the probability of having two boys changes from 1/3 to 13/27 when the birth day of one boy is stated – yet it does, and it’s quite a generous difference at that. In fact, if you repeat the question but specify a trait rarer than 1/7 (the chance of being born on a Tuesday), the probability gets even closer to 1/2.
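A quick Monte Carlo sketch (in Python; a hypothetical check, not part of Foshee’s talk) confirms the count: among simulated two-child families containing at least one Tuesday-born boy, the fraction with two boys hovers around 13/27 ≈ 0.481:

import random

def child():
    # (gender, weekday); let 0 stand for Tuesday
    return (random.choice("BG"), random.randrange(7))

families = both_boys = 0
for _ in range(1000000):
    kids = [child(), child()]
    if any(k == ("B", 0) for k in kids):           # has a boy born on a Tuesday
        families += 1
        both_boys += all(g == "B" for g, d in kids)

print(both_boys / families)   # ~0.481, i.e. 13/27; drop the day condition and it falls to ~1/3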
[See UPDATE, below]
As we remember, with Laplace, that “the theory of probabilities is at bottom nothing but common sense reduced to calculus,” we might ask ourselves what the odds are that on this date in 1964 the
World’s Largest Cheese would be manufactured for display in the Wisconsin Pavilion at the 1964-65 World’s Fair. The 14 1/2′ x 6 1/2′ x 5 1/2′, 17-ton cheddar original– the product of 170,000 quarts
of milk from 16,000 cows– was cut and eaten in 1965; but a replica was created and put on display near Neillsville, Wisconsin… next to Chatty Belle, the World’s Largest Talking Cow.
UPDATE: reader Jeff Jordan writes with a critique of the reasoning used above to solve Gary Foshee’s puzzle:
For some reason, mathematicians and non-mathematicians alike develop blind
spots about probability problems when they think they already know the
answer, and are trying to convince others of its correctness. While I agree
with most of your analysis, it has one such blind spot. I’m going move
through a progression of variations on another famous conundrum, trying to
isolate these blind spots and eventually get the point you overlooked.
Bertrand’s Box Paradox: Three identical boxes each have two coins inside:
one has two gold coins, one has two silver coins, and one has a silver coin
and a gold coin. You open one and pull out a coin at random, without seeing
the other. It is gold. What is the probability the other coin is the same?
A first approach is to say there were three possible boxes you could pick,
but the information you have rules one out. That leaves two that are still
possible. Since you were equally likely to pick either one before picking a
coin, the probability that this box is GG is 1/2. A second approach is that
there were six coins that were equally likely, and three were gold. But two
of them would have come out of the GG box. Since all three were equally
likely, the probability that this box is GG is 2/3.
This appears to be a true paradox because the “same” theoretical approach -
counting equally likely cases – gives different answers. The resolution of
that paradox – and the first blind spot – is that this is an incorrect
theoretical approach to solving the problem. You never want to merely count
cases, you want to sum the probabilities that each case would produce the
observed result. Counting only works when each case that remains possible
has the same chance of producing the observed result. That is true when you
count the coins, but not when you count the boxes. The probability of
producing a gold coin from the GG box is 1, from the SS box is 0, and from
the GS box is 1/2. The correct answer is 1/(1+0+1/2)=2/3. (A second blind
spot is that you don’t “throw out” the impossible cases, you assign them a
probability of zero. That may seem like a trivial distinction, but it helps
to understand what probabilities other than 1 or 0 mean.)
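[A small Python simulation – a hypothetical sketch, not part of the letter – bears out the 2/3 by weighting each box by its chance of producing the observed gold coin, exactly as described above; by the equivalence described next, the same check applies to the Monty Hall form:

import random

boxes = [("G", "G"), ("S", "S"), ("G", "S")]
draws = both_gold = 0
for _ in range(1000000):
    box = list(random.choice(boxes))
    random.shuffle(box)              # pull one coin at random
    if box[0] == "G":                # observation: the drawn coin is gold
        draws += 1
        both_gold += (box[1] == "G")

print(both_gold / draws)             # ~0.667 = 1/(1 + 0 + 1/2)
]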
This problem is mathematically equivalent to the original Monty Hall
Problem: You pick Door #1 hoping for the prize, but before opening it the
host opens Door #3 to show that it is empty. Given the chance, what is the
probability you win by switching to door #2? Let D1, D2, and D3 represent
where the prize is. Assuming the host won’t open your door, and knows where
the prize is so he always opens an empty door, then the probability D2 would
produce the observed result is 1, that D3 would is 0, and that D1 is …
well, let’s say it is 1/2. Just like before, the probability D2 now has the
prize is 1/(1+0+1/2)=2/3.
Why did I waffle about the value of P(D1)? There was a physical difference
with the boxes that produced the explicit result P(GS)=1/2. But here the
difference is logical (based on the location of the prize) and implicit. Do
we really know the host would choose randomly? In fact, if the host always
opens Door #3 if he can, then P(D1)=1 and the answer is 1/(1+0+1)=1/2. Or if
he always opens Door #2 if he can, P(D1)=0 and the answer is 1/(1+0+0)=1.
But if we observe that the host opened Door #2 and assume those same biases,
the results reverse.
To answer the question, we must assume a value for P(D1). Assuming anything
other than P(D1)=1/2 implies a bias on the part of the host, and a different
answer if he opens Door #2. So all we can assume is P(D1)=1/2, and the
answer is again 2/3. That is also the answer if we average the results over
many games with the same host (and a consistent bias, whatever it is). The
answer most “experts” give is really that average, and it is a blind spot
that they are not using all the information they have in the individual game.
We can make the Box Paradox equivalent to this one by making the random
selection implicit. Someone looks in the chosen box, and picks out a gold
coin. The probability is 2/3 that there is another gold coin if that person
picks randomly, 1/2 if that person always prefers a gold coin, and 1 if that
person always prefers a silver one. Without knowing the preference, we can
only assume this person is unbiased and answer 2/3. Over many experiments,
it will also average out to 2/3 regardless of the bias. And this person
doesn’t even have to show the coin. If we assume he is truthful (and we can
only assume that), the answers are the same if he just says “One coin is gold.”
Finally, make a few minor changes to the Box Paradox. Change “silver” to
“bronze.” Let the coins be minted in different years, so that the year
embossed on them is never the same for any two. Add a fourth box so that one
box has an older bronze coin with a younger gold coin, and one has a younger
bronze coin with an older gold coin. Now we can call the boxes BB, BG, GB,
and GG based on this ordering. When our someone says “One coin is bronze,”
we can only assume he is unbiased in picking what kind of coin to name, and
the best answer is 1/(1+1/2+1/2+0)=1/2. If there is a bias, it could be
1/(1+1+1+0)=1/3 or 1/(1+0+0+0)=1, but we can’t assume that. Gee, this sounds
oddly familiar, except for the answer. :)
The answer to all of Gary Foshee’s questions is 1/2. His blind spot is that
he doesn’t define events, he counts cases. An event a set of outcomes, not
an outcome itself. The sample space is the set of all possible outcomes. An
event X must be defined by some property such that every outcome in X has
that property, *and* every outcome with the property is in X. The event he
should use as a condition is not “this family includes a boy (born on a
Tuesday)”, it is “The father of this family chooses to tell you one of up to
two facts in the form ‘my family includes a [gender] (born on a [day]).’”
Since most fathers of two will have two different facts of that form to
choose from, Gary Foshee should have assigned a probability to each, not
merely counted the families that fit the description. The answer is then
(1+12P)/(1+26P), where P is the probability he would tell us “one is a boy
born on a Tuesday” when only one of his two children fit that description.
The only value we can assume for P is 1/2, making the answer
(1+6)/(1+13)=1/2. Not P=1 and (1+12)/(1+26)=13/27.
And the blind spot that almost all experts share, is that this means the
answer to most expressions of the simpler Two Child Problem is also 1/2. It
can be different, but only if the problem statement makes two or three
points explicit:
1) Whatever process led to your knowledge of one child’s gender had access
to both children’s genders (and days of birth).
2) That process was predisposed to mention boys over girls (and Tuesdays
over any other day).
3) That process would never mention facts about both children.
When Gary Foshee tells you about one of his kids, #2 is not satisfied. He
probably had a choice of two facts to tell you, and we can’t assume he was
biased towards “boy born on Tuesday.” Just like Monty Hall’s being able to
choose two doors changes the answer from 1/2 to 2/3, Gary Foshee’s being
able to choose two facts changes the answer from 13/27 to 1/2. It is only
13/27 if he was forced to mention that fact, which is why that answer is wrong.
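[To see Jordan’s point numerically, here is a hypothetical Python sketch of his reporting model, in which the father describes one of his two children at random (P = 1/2):

import random

def child():
    return (random.choice("BG"), random.randrange(7))   # 0 stands for Tuesday

reports = both_boys = 0
for _ in range(1000000):
    kids = [child(), child()]
    fact = random.choice(kids)         # father volunteers one child's (gender, day)
    if fact == ("B", 0):               # he happens to say "a boy born on a Tuesday"
        reports += 1
        both_boys += all(g == "B" for g, d in kids)

print(both_boys / reports)             # ~0.5, matching (1+6)/(1+13); forcing the father
                                       # to mention the fact whenever it is true
                                       # recovers 13/27 instead
]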
Other readers are invited to contribute their thoughts.
Ohio Edison decided to remove a 275-foot smokestack at its Mad River Power Plant in Springfield, Ohio. But in the event, on November 10, the mammoth chimney fell the wrong way – toward spectators and surrounding buildings. There were no injuries reported, but power was interrupted to more than 8,000 customers by the errant stack. (Via msnbc.com)
As we stand well out of the way, we might recall that it was on this date in 1930 that Henry W. Jeffries invented the Rotolactor. Housed in the Lactorium of the Walker Gordon Laboratory Company,
Inc., at Plainsboro, N.J., it was a 50-stall revolving platform that enabled the milking of 1,680 cows in seven hours by rotating them into position with the milking machines. A spiffy version of the Rotolactor, displayed at the 1939 New York World’s Fair in the Borden building as part of the “Dairy World of Tomorrow,” was one of the most popular attractions in the Fair’s Food Zone.
From PC World, a list of “The Most Dangerous Jobs in Technology“… It won’t surprise readers to see “fixing undersea internet cables” or “communications-tower climbing” on the list. But items like “mining ‘conflict minerals’” and “unregulated e-waste recycling” are reminders of facets of the technology industry of which we too rarely think. Consider, for example, “internet content moderation”:
Think of the most disgusting things you’ve stumbled across online. Now imagine viewing the stuff that nightmares are made of–hate crimes, torture, child abuse–in living color, from 9 to 5 every
day. That’s the work of Internet content moderators, who get paid to filter out that kind of material so you don’t have to see it pop up on a social network or photo-sharing site. Demand for the
work is growing, especially as more Web-based services enable users to post pictures instantly from their mobile devices.
“Obviously it’s not the job for everyone,” says Stacey Springer, vice president of operations at Caleris. The West Des Moines, Iowa, company’s 55 content moderation employees scan up to 7 million
images every day for some 80 different clients. “Some people might take it personally if they have a child and see images of children that might be sensitive to them, or if they see animal cruelty.”
Caleris content reviewers receive free counseling as well as benefits including health insurance, but for some the psychological scars don’t heal easily.
Contemplate the full list here.
As we think twice about replacing that iPhone, we might recall that it was on this date in 1888 that the first baby – Edith Eleanor McLean, who weighed 2 lb 7 oz at her premature birth – was placed in a “hatching cradle” – or, as we now call it, an “incubator.” Designed by Drs. Allan M. Thomas and William C. Deming, it became a public curiosity before it settled into regular use in neonatal care.
One of the most popular attractions at the 1904 World’s Fair, for example, was an “exhibit” of 14 metal-framed glass incubators, attended by nurses caring for real endangered infants from orphanages
and poor families (whose care was funded by exhibit admission fees).
The World’s Fair in 1904 included “incubator babies” as one of the main attractions on the Pike. Source: neonatology.com
|
{"url":"http://roughlydaily.com/tag/worlds-fair/","timestamp":"2014-04-19T09:29:35Z","content_type":null,"content_length":"76881","record_id":"<urn:uuid:76669cdf-bfb5-4535-b09c-ff8b44f3afc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
|
R for Categorical Data
When categorical data appear in textbooks, it is usually already summarized in tables or graphs. Hence, you usually do not need technology to do homework problems with categorical data. However, this
leaves one underprepared for dealing with real data, so this page is for those who need to do that. We will use an example dataset small enough so you can do the calculations by hand and compare your
results to the computer. Imagine a survey question with answer choices Agree, Disagree or Undecided. Suppose 25 people give these responses:
Where's the Mode?
Most software will not report the mode. That's because the mode is rarely useful for measurements. To find it when you do need it, you have to treat the data as categorical. For categorical data, the
modal category is the one with the most observations (if there is such a category). You can see by counting that there are more A's on the list above than D's or U's, so A is the modal category. This
is the shortest summary for categorical data, analogous to just giving the mean or median for measurements. When we find the modal category for a group of measurements, it is called the mode. It is
useful only when the measurements resemble categorical data in having values that are repeated over and over. An example might be number of children in a family. Here you might see 0, 1, 2... over
and over. For more typical measurements, such as these
1.66597, 1.91566, 2.53406, 2.88043, 2.93449, 3.08816, 1.73520, 3.21908, 3.77892, 3.98208
the mode is not useful because there is none. No value is repeated.
If you need the mode, make a frequency table for the data and find the category with the most observations.
Using R for Categorical Data
Run R. Use quotation marks to enter the data as text.
> survey = c("A","A","D","U","D","D","A","U","A","D","A","D","D","A","U","A","U","D","D","A","A","A","U","D","A")
> survey
[1] "A" "A" "D" "U" "D" "D" "A" "U" "A" "D" "A" "D" "D" "A" "U" "A" "U" "D" "D"
[20] "A" "A" "A" "U" "D" "A"
> table(survey)
survey
 A  D  U 
11  9  5 
The modal category is "A" (agree).
Graphics have to be made from the numbers in such a table as the one above rather than the letters in the variable.
> barplot(table(survey))
> pie(table(survey))
Notice that it is obvious from the bar chart that A is the modal category. It takes sharp eyes to see this in the pie chart. The summaries above are in order of decreasing statistical quality. A
table gives the most and most precise information in the least amount of space; a pie chart gives the least.
|
{"url":"http://courses.statistics.com/software/R/R4cat.htm","timestamp":"2014-04-17T06:41:15Z","content_type":null,"content_length":"4408","record_id":"<urn:uuid:650f0505-0e19-448b-82a2-5b4bf9b535f2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
|
$\infty$-Stacks
The notion of $\infty$-stack, equivalently that of $(\infty,1)$-sheaf, is the $\infty$-categorification of the notion of sheaf, equivalently that of stack.
Where a sheaf is a presheaf with values in Set that satisfies the sheaf condition, an ∞-category-valued (pseudo)presheaf is an $\infty$-stack if it “satisfies descent” in that its assignment to a
space $X$ is equivalent to its descent data for any cover or hypercover $Y^\bullet \to X$: if the canonical morphism
$\mathbf{A}(X) \to Desc(Y^\bullet, \mathbf{A})$
is an equivalence. This is the descent condition.
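Schematically, the descent object may be presented as the homotopy limit, over the simplex category, of the values of $\mathbf{A}$ on the pieces of the (hyper)cover (a sketch of the standard totalization formula, writing $Y^n$ for the degree-$n$ component of $Y^\bullet$):
$Desc(Y^\bullet, \mathbf{A}) \;\simeq\; \lim_{[n] \in \Delta} \mathbf{A}(Y^n) \,.$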
One important motivation for $\infty$-stacks is that they generalize the notion of Grothendieck topos from 1-categorical to higher categorical context.
This is a central motivation for considering higher stacks. They may also be thought of as internal ∞-groupoids in a sheaf topos.
A well-developed theory exists for $\infty$-stacks that are sheaves with values in ∞-groupoids. Given that ordinary sheaves may be thought of as sheaves of 0-categories and that $\infty$-groupoid-valued sheaves may be thought of as sheaves of (∞,0)-categories, these may be called (∞,1)-sheaves. In the case that these $\infty$-groupoids have vanishing homotopy groups above some degree $n$, these are sometimes also called sheaves of n-types.
The currently most complete picture of (∞,1)-sheaves appears in Jacob Lurie's Higher Topos Theory, but is based on a long development by other authors, some of which is indicated in the list of references below.
With the general machinery of (∞,1)-category theory in place, the definition of the (∞,1)-category of ∞-stacks is literally the same as that of a category of sheaves: it is a reflective sub-(∞,1)-category
$\infty Stacks(C) \simeq Sh_\infty(C) \stackrel{\stackrel{\bar{(\cdot)}}{\leftarrow}}{\to} PSh_\infty(C)$
of the (∞,1)-category of (∞,1)-presheaves with values in ∞Grpd, such that the left adjoint (∞,1)-functor $\bar {(\cdot)}$ – the ∞-stackification operation – is left exact.
One of the main theorems of Higher Topos Theory says that the old model structures on simplicial presheaves are presentations of precisely these (∞,1)-categories of ∞-stacks.
This allows one to regard various old technical results in a new conceptual light and provides powerful tools for actually handling $\infty$-stacks.
In particular, this implies that the old definition of abelian sheaf cohomology is secretly the computation of ∞-stackification for $\infty$-stacks that are in the image of the Dold-Kan embedding of chain complexes of sheaves into simplicial sheaves.
Derived $\infty$-stacks
Notice that an $\infty$-stack is a (∞,1)-presheaf for which not only the codomain is an (∞,1)-category, but where also the domain, the site, may be an (∞,1)-category.
To emphasize that one considers $\infty$-stacks on higher categorical sites one speaks of derived stacks.
Higher $\infty$-stacks
The above concerns $\infty$-stacks with values in ∞-groupoids, i.e, (∞,0)-categories. More generally there should be notions of $\infty$-stacks with values in (n,r)-categories. These are expected to
be modeled by the model structure on homotopical presheaves with values in the category of Theta spaces.
Quasicoherent $\infty$-stacks
An archetypical class of examples of $\infty$-stacks are quasicoherent ∞-stacks of modules, being the categorification of the notion of quasicoherent sheaf. By their nature these are really $(\
infty,1)$-stacks in that they take values not in ∞-groupoids but in (∞,1)-categories, but often only their ∞-groupoidal core is considered.
Affine $\infty$-stacks
For the site $C = Alg_k^{op}$ with a suitable topology, a Quillen adjunction
$\mathcal{O} : sPSh(C)_{loc} \stackrel{\leftarrow}{\to} [\Delta^{op},Alg_k] \simeq dgAlg_k^{+} : Spec$
is presented, where $\mathcal{O}$ sends an $\infty$-stack to its global dg-algebra of functions and $Spec$ constructs the simplicial presheaf “represented” degreewise by a simplicial algebra (under the monoidal Dold-Kan correspondence these are equivalent to dg-algebras).
An $\infty$-stack in the image of $Spec : dgAlg_k^+ \to sPSh(C)$ is an affine $\infty$-stack. The image of an arbitrary $\infty$-stack under the composite
$Aff : sPSh(C) \stackrel{\mathcal{O}}{\to} dgAlg_k^+ \stackrel{Spec}{\to} sPSh(C)$
is its affinization.
This notion was considered in the full (∞,1)-category picture in
where it is also generalized to derived stacks, i.e. to the (∞,1)-site $dgAlg_k^-$ of cochain dg-algebras in non-positive degree, where the pair of adjoint (∞,1)-functors is
$\mathcal{O} : Sh_{(\infty,1)}((dgAlg_k^-)^{op}) \stackrel{\leftarrow}{\to} [\Delta^{op},dgAlg_k^-] \simeq dgAlg_k : Spec$
with $\mathcal{O}$ taking values in unbounded dg-algebras.
In detail, $\mathcal{O}$ acts as follows: every ∞-stack $X$ may be written as a colimit of representables $Spec A_i$ with $A_i \in dgAlg_k^-$
$X \simeq \lim_{\to^i} Y(Spec A_i) \,,$
where $Y : (dgAlg^-)^{op} \to \mathbf{H}$ is the (∞,1)-Yoneda embedding.
The functor $\mathcal{O}$ takes any such colimit-description, and simply reinterprets the colimit in $dgAlg^{op}$, i.e. the limit in $dgAlg$:
$\mathcal{O}(X) = \lim_{\leftarrow^i} A_i \,.$
The study of $\infty$-stacks is known in parts as the study of nonabelian cohomology. See there for further references.
The search for $\infty$-stacks probably began with Alexander Grothendieck in Pursuing Stacks.
The notion of $\infty$-stacks can be set up in various notions of $\infty$-categories. Andre Joyal, Jardine, Bertrand Toen and others have developed the theory of $\infty$-stacks in the context of
simplicial presheaves and also in Segal categories.
• Bertrand Toën, Gabriele Vezzosi; Homotopical algebraic geometry I: Topos theory, Adv. Math. 193 (2005), no. 2, 257–372; Homotopical algebraic geometry II: Geometric stacks and applications.
• Bertrand Toën, Gabriele Vezzosi; Segal topoi and stacks over Segal categories, math.AG/0212330.
• Bertrand Toën; Higher and derived stacks: a global overview (arXiv).
This concerns $\infty$-stacks with values in ∞-groupoids, i.e. $(\infty,0)$-categories. More generally descent conditions for $n$-stacks and $(\infty,n)$-stacks with values in (∞,n)-categories have
been earlier discussed in
All this has been embedded into a coherent global theory in the setting of quasicategories in Lurie's Higher Topos Theory.
|
{"url":"http://ncatlab.org/nlab/show/infinity-stack","timestamp":"2014-04-18T11:08:50Z","content_type":null,"content_length":"64485","record_id":"<urn:uuid:2879c094-393e-417b-a7cc-f26a25212731>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Scatter plots and the different types of correlation
Scatter plots
Scatter plots are used to show the relationship between two sets of data by writing them as ordered pairs.
To illustrate, let us pretend that you have a business that sells notebooks:
Day 1, you sell 10 notebooks
Day 2, you sell 5 notebooks
Day 3, you sell 15 notebooks
Day 4, you sell 10 notebooks
Day 5, you sell 20 notebooks
Day 6, you sell 15 notebooks
Day 7, you sell 30 notebooks
Day 8, you sell 15 notebooks
Day 9, you sell 25 notebooks
Day 10, you sell 15 notebooks
You can display this situation with ordered pairs as shown below:
(1,10), (2,5), (3,15), (4,10), (5,20), (6,15), (7,30), (8,15), (9, 25), and (10, 15)
Then we can put the ordered pairs on the coordinate system. The resulting graph is called a scatter plot or scatter graph.
Notice how the points are scattered around and everything is located in the first quadrant
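Here is a short sketch (using Python's matplotlib, one common tool; any graphing utility works) that reproduces the notebook-sales scatter plot from the ordered pairs above:

import matplotlib.pyplot as plt

days = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
notebooks = [10, 5, 15, 10, 20, 15, 30, 15, 25, 15]

plt.scatter(days, notebooks)          # one point per ordered pair (day, sales)
plt.xlabel("Day")
plt.ylabel("Notebooks sold")
plt.show()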
Two sets of data can form 3 types of relationships
When y increases as x increases, the two sets of data have a positive correlation
Basically, when you closely examine the graph, you will see that the graph has a tendency to go upward
When y decreases as x increases, the two sets of data have a negative correlation
Basically, when you closely examine the graph, you will see that the graph has a tendency to go downward
When x and y are not related, we say that the two sets of data have no correlation
|
{"url":"http://www.basic-mathematics.com/scatter-plots.html","timestamp":"2014-04-18T10:34:13Z","content_type":null,"content_length":"34910","record_id":"<urn:uuid:b3067372-a6c0-47f7-bacf-e0048d4e1ab0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sunnyside, NY ACT Tutor
Find a Sunnyside, NY ACT Tutor
...Students will learn the 9 elementary argument forms (the rules of inference) and the 10 logical equivalencies (rules of replacement) and how to use these forms to construct argument proofs.
They will also learn how to construct and work with truth tables and truth trees. I try to teach not just...
34 Subjects: including ACT Math, English, reading, writing
...The next levels of algebraic thinking – linear algebra and abstract algebra – were also mainstays in my studies. I believe that studying higher levels of the subjects gives you a real understanding of them that you cannot get from only studying the basics. I think it's sort of akin to a writer wh...
22 Subjects: including ACT Math, calculus, geometry, algebra 2
...I helped many students get into their dream schools or honors classes. I have two master's degrees (physics and math) and a very deep understanding of physics and math concepts. I have my own way of presenting difficult concepts in the easiest way, to make sure even lower-level students can understand.
12 Subjects: including ACT Math, calculus, physics, algebra 2
...I am also a math textbook author and editor. I love mathematics and I love to help people learn mathematics. Alg.
9 Subjects: including ACT Math, geometry, algebra 2, algebra 1
...As of now, I am tutoring junior high students for the SHSAT and a sophomore for the PSAT. I am patient with my students and help them build strong basic skills, which will help them solve complicated problems. I have helped students prepare for the integrated algebra and geometry Regents exams. One of my students ...
15 Subjects: including ACT Math, calculus, geometry, ESL/ESOL
Related Sunnyside, NY Tutors
Sunnyside, NY Accounting Tutors
Sunnyside, NY ACT Tutors
Sunnyside, NY Algebra Tutors
Sunnyside, NY Algebra 2 Tutors
Sunnyside, NY Calculus Tutors
Sunnyside, NY Geometry Tutors
Sunnyside, NY Math Tutors
Sunnyside, NY Prealgebra Tutors
Sunnyside, NY Precalculus Tutors
Sunnyside, NY SAT Tutors
Sunnyside, NY SAT Math Tutors
Sunnyside, NY Science Tutors
Sunnyside, NY Statistics Tutors
Sunnyside, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Sunnyside_NY_ACT_tutors.php","timestamp":"2014-04-16T05:03:42Z","content_type":null,"content_length":"23667","record_id":"<urn:uuid:41116b7b-315d-4aaf-b9f2-a406d700fcde>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Content of the AAA_readme file at ftp.cs.wisc.edu/Approx
Items by: Carl de Boor, Amos Ron, Thomas Hangelbroek, Tom Hogan, Olga Holtz, Youngmi Hur, Michael Johnson, Scott Kersey, Zuowei Shen, Shayne Waldron,...
Items on: approximation orders, box splines and exponential box splines, dimensions of kernels of linear operators, frames, multivariate polynomial interpolation, multivariate splines, numerical
analysis, polynomial ideals, commutative algebra and approximation theory, quasi-interpolation, radial basis function approximation, refinable functions, scattered data approximation, shift-invariant
spaces, subdivision, surveys, univariate splines, wavelets, Weyl-Heisenberg systems, linear algebra, \TeX, splinebib, file mailing, ...
Items from: 2010--14, 2005--09, 2000--04, 1994--99, 1990--94, before 1990
These are the files that can be obtained by anonymous ftp from ftp.cs.wisc.edu/Approx. The mathematical papers are in PostScript and in PDF; the former are also available as compress(ed) files, as indicated by the suffix .Z, to be uncompress(ed) before use, as well as (shorter) gzip(ed) files, as indicated by the suffix .gz, to be extracted by gzip -d.
If you have trouble because of file contamination, specify binary as your first command in ftp.
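For those who prefer to script the retrieval, here is a minimal sketch using Python's standard library (the file name is just an example taken from the listing below):

from ftplib import FTP
import gzip, shutil

with FTP("ftp.cs.wisc.edu") as ftp:
    ftp.login()                                          # anonymous login
    ftp.cwd("Approx")
    with open("dvdsurvey.ps.gz", "wb") as f:
        ftp.retrbinary("RETR dvdsurvey.ps.gz", f.write)  # binary transfer avoids contamination

with gzip.open("dvdsurvey.ps.gz", "rb") as src, open("dvdsurvey.ps", "wb") as dst:
    shutil.copyfileobj(src, dst)                         # the gzip -d step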
The files are in order of increasing age.
Elastic splines I: Existence;
Albert Borb\'ely & Michael J. Johnson;
january 2014
to appear in
The role of inner summaries in the fast evaluation of thin-plate splines;
Michael J. Johnson
november 2011
to appear in
\AiCM; 39(1); 2013; 1--25;
Matlab and C programs for the fast evaluation of thin-plate splines;
Michael J. Johnson
november 2011
Minimal degree univariate piecewise polynomials with prescribed Sobolev regularity;
Amal Al-Rashdan, Michael J. Johnson
october 2011
\JAT; 164(1); 2012; 1--5;
Compactly supported piecewise polyharmonic radial functions with prescribed regularity;
Michael J. Johnson
october 2011
\CA 35(2); 2012; 201--223;
re_shadrin.: ps ps.Z ps.gz pdf
On the (bi)infinite case of Shadrin's theorem concerning the $L_\infty$-boundedness of the $L_2$-spline projector;
Carl de Boor
july 2011
has appeared in the Subbotin 75th anniversary volume
Trudy Instituta Matematiki i Mekhaniki UrO RAN; 17(3); 2011; 25--29;
stahl.: ps ps.Z ps.gz pdf
On Radon's recipe for choosing correct sites for multivariate polynomial interpolation;
Dominik Stahl and Carl de Boor
june 2011
\JAT; 163(12); 2011; 1854--1858;
liron.: ps ps.Z ps.gz pdf
External zonotopal algebra;
Nan Li and Amos Ron
april 2011
hangron.: ps ps.Z ps.gz pdf
Nonlinear Approximation Using Gaussian Kernels;
Thomas Hangelbroek and Amos Ron
appeared in:
JFA; xx(x); 2010; xxx--xxx;
hrx.: ps ps.Z ps.gz pdf
Hierarchical zonotopal spaces;
Olga Holtz, Zhiqiang Xu and Amos Ron
october 2009
to appear in
TAMS; xx(x); 201x; xxx-xxx;
dahmen.: ps ps.Z ps.gz pdf
The way things were in multivariate splines: A personal view;
Carl de Boor
april 2009
in Multiscale, Nonlinear and Adaptive Approximation, R. DeVore and A. Kunoth
(eds.), Springer Verlag (Berlin-Heidelberg, Germany); 2009; 19--37;
A symmetric collocation method with fast evaluation;
Michael Johnson
appeared in
IMAJNA; 29(3); 2009; 773--789;
Scattered data reconstruction by regularization in B-spline and associated wavelet spaces;
Michael Johnson, Zuowei Shen, Yugong XU
appeared in
\JAT; 159(2); 2009; 197--223;
angpl.: ps ps.Z ps.gz pdf
Multivariate polynomial interpolation: Aitken-Neville sets and generalized principal lattices;
Carl de Boor
april 2008
updated 1may08, 8jul08, 10aug08
\JAT; 161(1); 2009; 411--420;
ard.: ps ps.Z ps.gz pdf
Approximation using scattered shifts of a multivariate function;
Ronald DeVore and Amos Ron
feb 2008
TAMS; xx(x); 2010; xxx-xxx;
shekht.: ps ps.Z ps.gz pdf
On the pointwise limits of bivariate Lagrange projectors;
Carl de Boor and Boris Shekhtman
november 2007
\LAA; 429(1); 2008; 311--325;
zonotopes.: ps ps.Z ps.gz pdf
Zonotopal algebra;
Olga Holtz and Amos Ron
september 2007
boxsubdv.: ps ps.Z ps.gz pdf
Box splines revisited: convergence and acceleration methods for the subdivision and the cascade algorithms;
Carl de Boor and Amos Ron
december 2006
\JAT; 150(1); 2008; 1--23;
TPSerror.: ps ps.Z ps.gz pdf
Error estimates for thin plate spline approximation in the disc;
Thomas Hangelbroek
december 2006
apframes.: ps ps.Z ps.gz pdf
Time frequency representations of almost-periodic functions;
Yeon Hyang Kim and Amos Ron
decenber 2006
\CA; xx; 2008; xxx--xxx;
gcn.: ps ps.Z ps.gz pdf
Multivariate polynomial interpolation: conjectures concerning GC-sets;
Carl de Boor
september 2006
proc. 1st Dolomites Workshop: 2006:
07dec06 update
\NA; 45; 2007; 113--125;
A note on the limited stability of surface spline interpolation;
Michael Johnson
has appeared
\JAT; 141(2); 2006; 182--188;
laglimit.: ps ps.Z ps.gz pdf
What are the limits of Lagrange projectors?
Carl de Boor
december 2005
(Constructive Theory of Functions, Varna 2005), B. Bojanov (ed.),
Marin Drinov Acad.\ Publ.\ House (Sofia); 2006; 51--63;
lcamp.: ps ps.Z ps.gz pdf
L-CAMP: Extremely local high-performance wavelet representations in high spatial dimension;
Youngmi Hur and Amos Ron
november 2005
IEEE Trans.\ Info.\ Theory, {\bf 54 (5)} (2008), 2196--2209.
pcf.: ps ps.Z ps.gz pdf
New constructions of piecewise-constant wavelets;
Youngmi Hur and Amos Ron
may 2005
ETNA, Special Volume on Constructive Function Theory {\bf 25}
(2006), 138--157.
huron.: ps ps.Z ps.gz pdf
CAPlets: wavelet representations without wavelets;
Youngmi Hur and Amos Ron
march 2005
MvsD.: ps ps.Z ps.gz pdf
Ideal interpolation: Mourrain's condition vs $D$-invariance;
Carl de Boor
jan 2005
(Banach Center Publications Vol.~72: Approximation and Probability),
Tadeusz Figiel and Anna Kamont (eds.), IMPAN (Warszawa, Poland); 2006; 49--55;
texasxi.: ps ps.Z ps.gz pdf
Ideal interpolation;
Carl de Boor
dec 2004
\TexasXI; 59--91;
dec05: three footnotes added
dvdsurvey.: ps ps.Z ps.gz pdf
Divided differences;
Carl de Boor
nov 2004
Surveys in Approximation Theory; 1; 2005; 46--69;
monomint.: ps ps.Z ps.gz pdf
Interpolation from spaces spanned by monomials
Carl de Boor
aug 2004
\AiCM; 26(1-3); 2007; 63--70;
efficdvd.: ps ps.Z ps.gz pdf
An efficient definition of the divided difference
Carl de Boor
aug 2004
in (Approximation Theory: A Volume Dedicated to Borislav Bojanov),
D. K. Dimitrov, G. Nikolov, and R. Uluchev (eds.), Marin Drinov Academic
Publ. House (Sofia); 2004; 58--63;
olgamos.: ps ps.Z ps.gz pdf
Approximation order of shift-invariant subspaces of $W_s^2(R^d)$
Olga Holtz and Amos Ron
May 2004
\JAT; 132; 2005; 97--148;
radpol.: ps ps.Z ps.gz pdf
On interpolation by radial polynomials
Carl de Boor
dec 2003
as of 21jul04/23feb04
\AiCM; 24; 2006; 143--153;
asympterr.: ps ps.Z ps.gz pdf
An asymptotic expansion for the error in a linear map that reproduces polynomials of a certain order
Carl de Boor
apr 2003
\JAT; 134; 2005; 171--174;
chakpop.: ps ps.Z ps.gz pdf
The B-spline recurrence relations of Chakalov and of Popoviciu
Carl de Boor and Allan Pinkus
mar 2003
\JAT; 124(1); 2003; 115--123;
gsi.: ps ps.Z ps.gz pdf
Generalized shift-invariant systems
Amos Ron and Zuowei Shen
jan 2003
\CA; 22; 2005; 1--45;
rbf_err_anal.: ps ps.Z ps.gz pdf
An error analysis for radial basis function interpolation
Michael J. Johnson
jan 2003
has appeared
\NM; 98; 2004; 675--694;
floater.: ps ps.Z ps.gz pdf
A divided difference expansion of a divided difference
Carl de Boor
nov 2002
\JAT; 122(1); 2003; 10--12;
leibniz.: ps ps.Z ps.gz pdf
A Leibniz formula for multivariate divided differences
Carl de Boor
apr 2002
\SJNA; 41(3); 2003; 856--868;
Lp_order_SSI.: ps ps.Z ps.gz pdf
The $L_p$ approximation order of surface spline interpolation for $1\le p \le 2$
Michael J. Johnson
apr 2002
\CA; 20(2); 2004; 133--167;
dim1.: ps ps.Z ps.gz pdf
The Wavelet Dimension Function is The Trace Function of A Shift-Invariant System
Amos Ron, Zuowei Shen
jun 2001
\PAMS; 131(5); 2003;1385--1398;
dhrs.: ps ps.Z ps.gz pdf
Framelets: MRA-based constructions of wavelet frames
Ingrid Daubechies, Bin Han, Amos Ron, Zuowei Shen
feb 2001
\ACHA; 14; 2003; 1--46;
cagdhand.: ps ps.Z ps.gz pdf
Spline Basics
Carl de Boor
dec 2000
Chapter 6 in \Cagdhand; see (cagd.snu.ac.kr/main.html)
inverse_basis.: ps ps.Z ps.gz pdf
What is the inverse of a basis?
Carl de Boor
sep 2000
BIT; 41(5); 2001; 880--890;
(the printed version contains a random 7 right in the middle of (4.2), the most
important formula of the paper, and an extra p in the first display on page 889)
interpolatepsi.: ps ps.Z ps.gz pdf
Scattered data interpolation from principal shift-invariant spaces
Michael J. Johnson
feb 2000
\JAT; 113; 2001; 172--188;
SIsurvey.: ps ps.Z ps.gz pdf
Introduction to Shift-Invariant Spaces I: Linear Independence
Amos Ron
dec 1999
to appear in
(Multivariate Approximation and Applications),
A. Pinkus, D. Leviatan, N. Dyn, and D. Levin (eds.),
Cambridge University Press (Cambridge); 200x; xxx--xxx;
rst.: ps ps.Z ps.gz pdf
Computing the Sobolev regularity of refinable functions by the Arnoldi Method
Amos Ron, Zuowei Shen, Kim-Chuan Toh
oct 1999
\SJMAA; 23; 2001; 57--76;
polintflats.: ps ps.Z ps.gz pdf
Polynomial interpolation to data on flats in $\Rd$
Carl de Boor, Nira Dyn, Amos Ron
aug 1999
\JAT; 105; 2000; 313--343;
mixed.: ps ps.Z ps.gz pdf
On mixed interpolating-smoothing splines and the $\nu$-spline
Scott Kersey
jun 1999
ssiorder.: ps ps.Z ps.gz pdf
The $L_2$-approximation order of surface spline interpolation
Michael J. Johnson
jun 1999
Math. Comp. 70, 719--737 (2001)
jordannf.: ps ps.Z ps.gz pdf
On Ptak's derivation of the Jordan normal form;
Carl de Boor
may 1999
\LAA; 310; 2000; 9--10;
exists.: ps ps.Z ps.gz pdf
Best near-interpolation by curves: existence and convergence
Scott Kersey
apr 1999
optimality.: ps ps.Z ps.gz pdf
Best near-interpolation by curves: optimality conditions
Scott Kersey
apr 1999
mp.: ps ps.Z ps.gz pdf
Computational aspects of multivariate polynomial interpolation: Indexing the coefficients
Carl de Boor
jan/feb 1999
may99 incorporated referee's comments
\AiCM; 12; 2000; 289--301;
compact.: ps ps.Z ps.gz pdf
On the error in surface spline interpolation of a compactly supported function
Michael J. Johnson
sep 1998, dec 1998
Kuwait J. Sci. Eng. 28, 37--54 (2001)
overcome.: ps ps.Z ps.gz pdf
Overcoming the boundary effects in surface spline interpolation
Michael J. Johnson
nov 1998
IMA J. Numer. Anal. 20, 405--422 (2000)
smooth.: ps ps.Z ps.gz pdf
Calculation of the smoothing spline with weighted roughness measure
Carl de Boor
sep 1998
Math.\ Models Methods Appl.\ Sci.; 11(1); 2001; 33--41;
encoan.: ps ps.Z ps.gz pdf
Multivariate Hermite interpolation (talk at Guernavaca, 13apr99)
Carl de Boor
apr 1999
updated 08feb01 to conform to present-day notation
updated 20apr08 to correct some misprints
texas9.: ps ps.Z ps.gz pdf
Wavelets and their associated operators
Amos Ron
July 1998
\TexasIXc; 283--317;
ger.: ps ps.Z ps.gz pdf
A new factorization technique of the matrix mask of univariate refinable functions
Gerlind Plonka and Amos Ron
June 1998
has appeared in \NM; 87(3); 2001; 555--595;
hoganjia.: ps ps.Z ps.gz pdf
Dependency relations among the shifts of a multivariate refinable distribution
Thomas A. Hogan and Rong-Qing Jia
March 1998
to appear in CA
nec_note.: ps ps.Z ps.gz pdf
A note on matrix refinement equations
Thomas A. Hogan
February 1998
\SJMA; 29; 1998; 849--854
improved.: ps ps.Z ps.gz pdf
An improved order of approximation for thin-plate spline interpolation in the unit disc
Michael Johnson
February 1998
has appeared in
\NM; 84(3); 2000; 451--474;
hardinhogan.: ps ps.Z ps.gz pdf
Refinable subspaces of a refinable space
Douglas P. Hardin and Thomas A. Hogan
February 1998
to appear in PAMS
hk.: ps ps.Z ps.gz pdf
Construction of Compactly Supported Affine Frames in $L_2(\Rd)$
Amos Ron and Zuowei Shen
December 1997
(Advances in Wavelets), K. S. Lau (ed.), Springer-Verlag (New York); 1998;
surfspli.: ps ps.Z ps.gz pdf
A bound on the approximation order of surface splines
Michael Johnson
October 1997
\CA; 14; 1998; 429--438;
A selfunwrapping wrapper containing MATLAB 5 programs for the construction and evaluation of the least interpolant to data in any number of dimensions
Carl de Boor
as of February 1999
dframe.: ps ps.Z ps.gz pdf
Affine system in $L_2(\Rd)$ II: dual systems
Amos Ron and Zuowei Shen
July 1997
Appeared in:
J. Fourier Analysis and Appl., Special Issue on Frames {\bf 3} (1997), 617-637.
reg.: ps ps.Z ps.gz pdf
The Sobolev regularity of refinable functions
Amos Ron and Zuowei Shen
March 1997
J. Approx. Theory, {\bf 106(2)} (2000), 185--225.
chamonix.: ps ps.Z ps.gz pdf
The error in polynomial tensor-product, and in Chung-Yao, interpolation
Carl de Boor
February 1997
Surface Fitting and Multiresolution Methods
A. Le M\'ehaut\'e, C. Rabut, and L. L. Schumaker (eds),
Vanderbilt University Press (Nashville TN), 35--50.
6apr09: added missing reference to Waldron'98a
splerr.: ps ps.Z ps.gz pdf
On the Meir/Sharma/Hall/Meyer analysis of the spline interpolation error
Carl de Boor
December 1996
Appeared in:
\Powellfest; 47--58;
bmr.: ps ps.Z ps.gz pdf
Asymptotically Optimal Approximation and Numerical Solutions of Differential Equations
Martin D. Buhmann, Charles A. Micchelli, Amos Ron
October 1996
\Powellfest; 59--82;
cg.: ps ps.Z ps.gz pdf
Tight compactly supported wavelet frames of arbitrarily high smoothness
Karlheinz Gr\"ochenig, Amos Ron
September 1996
\PAMS; 126; 1998; 1101--1107;
BDR4.: ps ps.Z ps.gz pdf
Approximation orders of FSI spaces in $L_2(\Rd)$
Carl de Boor, Ron DeVore, and Amos Ron
March 1996
additional references added June-July 1996
referees' comments incorporated Aug/Sep 1996
a minor but useful variation of main result and add'l refs added feb97
An earlier version was, unfortunately, printed as
\CA; 14; 1998; 411--429;
The correct version is
\CA; 14; 1998; 631--652;
tight.: ps ps.Z ps.gz pdf
Compactly supported tight affine spline frames in $L_2(\Rd)$
Amos Ron and Zuowei Shen
February 1996
\MC; 65(216); 1998; 1513--1530;
multiw.: ps ps.Z ps.gz pdf
Stability and independence of the shifts of finitely many refinable functions
Thomas A. Hogan
January 1996
revised February 1997
\JFAA; 3; 1997; 757--774;
affine.: ps ps.Z ps.gz pdf
Affine systems in $L_2(\Rd)$: the analysis of the analysis operator
Amos Ron, Zuowei Shen
December 1995
J. Functional Analysis {\bf 148} (1997), 408-447
zerocount.: ps ps.Z ps.gz pdf
The multiplicity of a spline zero
Carl de Boor
December 1995
January 96 (reflect referee's comments)
appeared in
\AoNM; 4; 1997; 229--238;
bennett.: ps ps.Z ps.gz pdf
On determining the foot of the continental slope
Carl de Boor
November 1995
revised feb'96
ker2.: ps ps.Z ps.gz pdf
On ascertaining inductively the dimension of the joint kernel of certain commuting linear operators. II
Carl de Boor, Amos Ron, Zuowei Shen
May 1995
Adv. in Math. {\bf 123} (1996), 223--242.
cdr.: ps ps.Z ps.gz pdf
How smooth is the smoothest function in a given refinable space?
Albert Cohen, Ingrid Daubechies, Amos Ron
May 1995
Applied and Computational Harmonic Analysis {\bf 3} (1996), 87--89.
perturb.: ps ps.Z ps.gz pdf
Approximation in $L_p(\Rd)$ from spaces spanned by the perturbed integer translates of a radial basis function
Michael J. Johnson
May 1995
\JAT; 107(2); 2000; 163--203;
sauerxu.: ps ps.Z ps.gz pdf
On the Sauer-Xu formula for the error in multivariate polynomial interpolation;
Carl de Boor
March 1995
\MC; 65; 1996; 1231--1234;
frame2.: ps ps.Z ps.gz pdf
Gramian analysis of affine bases and affine frames
Amos Ron and Zuowei Shen
March 1995
\TexasVIIIw; 375--382;
multdvdf.: ps ps.Z ps.gz pdf
A multivariate divided difference
Carl de Boor
March 1995
\TexasVIIIa; 87--96;
% 25mar02: supplied missing g in (4.1) and completed refs [17] and [18].
smoothwav.: ps ps.Z ps.gz pdf
Smooth refinable functions provide good approximation orders
Amos Ron
February 1995
SIAM J. Math. Anal. {\bf 28} (1997), 731--748.
stabindep_texasviii.: ps ps.Z ps.gz pdf
Stability and independence of the shifts of a multivariate refinable function
Thomas A. Hogan
February 1995
\TexasVIIIw; 159--166;
stabindep.: ps ps.Z ps.gz pdf
Stability and independence for multivariate refinable distributions
Thomas A. Hogan
February 1995
revised August 1997
appeared in
\JAT; 98(2); 1999; 248--270;
upbound.: ps ps.Z ps.gz pdf
An upper bound on the approximation power of principal shift-invariant spaces
Michael J. Johnson
December 1994
appeared in:
\CA; 13(2); 1997; 155--176;
lowbound.: ps ps.Z ps.gz pdf
On the approximation power of principal shift-invariant subspaces of $L_p(R^d)$
Michael J. Johnson
December 1994
\JAT; 91(3); 1997; 279--319;
wh.: ps ps.Z ps.gz pdf
Weyl-Heisenberg frames and Riesz bases in $L_2(\Rd)$
Amos Ron and Zuowei Shen
October 1994
Duke Math. J. {\bf 89} (1997), 237-282.
symmetries.: ps ps.Z ps.gz pdf
Symmetries of linear functionals
Shayne Waldron
October 1994
\TexasVIIIa; 541--550;
hardy.: ps ps.Z ps.gz pdf
A multivariate form of Hardy's inequality and $L_p$-error bounds for multivariate Lagrange interpolation schemes
Shayne Waldron
August 1994
\SJMA; 28(1); 1997; 233--258;
lift.: ps ps.Z ps.gz pdf
Integral error formul{\ae} for the scale of mean value interpolations which includes Kergin and Hakopian interpolation
Shayne Waldron
July 1994
\NM; 77(1); 1997; 105--122;
hermite.: ps ps.Z ps.gz pdf
$L_p$-error bounds for Hermite interpolation and the associated Wirtinger inequalities
Shayne Waldron
May 1994
\CA; 13(4); 1997; 461--479;
extremising.: ps ps.Z ps.gz pdf
Extremising the $L_p$-norm of a monic polynomial with roots in a given interval and Hermite interpolation
Shayne Waldron
May 1994
polintelim.: ps ps.Z ps.gz pdf
Gauss elimination by segments and multivariate polynomial interpolation
Carl de Boor
March 1994
in (Approximation and Computation), R.V.M. Zahar (ed.),
ISNM 119, Birkh\"auser Verlag (Basel-Boston-Berlin); 1994; 1--22;
sphere.: ps ps.Z ps.gz pdf
Strictly positive definite functions on spheres
Amos Ron and Xingping Sun
February 1994
\MC; 65(216); 1996; 1513--1530;
frame1.: ps ps.Z ps.gz pdf
Frames and stable bases for shift-invariant subspaces of $L_2(\Rd)$
Amos Ron and Zuowei Shen
February 1994
appeared in Canad. Math. J.
\CJM; 47(5); 1995; 1051--1094;
pscattered.: ps ps.Z ps.gz pdf
$L^p$-approximation orders with scattered centres
Martin D. Buhmann and Amos Ron
January 1994
\ChamonixIIb; 93--112;
scattered.: ps ps.Z ps.gz pdf
Radial basis function approximation: from gridded centers to scattered centers
Nira Dyn and Amos Ron
November 1993
appeared in Proc. London Math. Soc. (1995)
\PLMS; 71(3); 1995; 76--108;
approxloc.: ps ps.Z ps.gz pdf
Approximation orders of and approximation maps from local principal shift-invariant spaces
Amos Ron
May 1993
\JAT; 81(1); 1995; 38--65;
boxeval.: ps ps.Z ps.gz pdf
On the evaluation of box splines,
Carl de Boor
March 1993
has appeared in Numer.Algorithms; 5; 1993; 5--23;
an alternative approach, particularly good for multiple directions, can be
found in Leif Kobbelt's paper
Stable evaluation of box splines;
\NA; 14(4); 1997; 377--382;
% available at
% http://www.mpi-sb.mpg.de/~kobbelt/papers/boxeval.ps.gz
% with the corresponding set of m-files at
% http://www.mpi-sb.mpg.de/~kobbelt/papers/boxeval.tgz
multpp.: ps ps.Z ps.gz pdf
Multivariate piecewise polynomials,
Carl de Boor
October 1992
has appeared in Acta Numerica; 2; 1993; 65--109;
wav2.: ps ps.Z ps.gz pdf
Multiresolution analysis by infinitely differentiable compactly supported functions
Nira Dyn, Amos Ron
September 1992
ACHA (1995)
stablemask.: ps ps.Z ps.gz pdf
Characterizations of linear independence and stability of the shifts of a univariate refinable function in terms of its refinement mask
Amos Ron
September 1992
sct1.: ps ps.Z ps.gz pdf
Negative observations concerning approximations from spaces generated by scattered shifts of functions vanishing at $\infty$
Amos Ron
September 1992
has appeared in J. Approx. Theory; 78(3); 1994; 364--372;
aowoquasi.: ps ps.Z ps.gz pdf
Approximation order without quasi-interpolants
Carl de Boor
August 1992
has appeared in \TexasVII; 1--18;
ker.: ps ps.Z ps.gz pdf
On ascertaining inductively the dimension of the joint kernel of certain commuting linear operators
Carl de Boor, Amos Ron, Zuowei Shen
June 1992
updated Apr 96 to reflect copy editor's changes
has appeared in Adv. Applied Math; 17; 1996; 209--250;
aoradial.: ps ps.Z ps.gz pdf
The $L_2$-Approximation Orders of Principal Shift-Invariant Spaces Generated by a Radial Basis Function
Amos Ron
March 1992
has appeared in \Nmatnion; 245--268;
aobivar.: ps ps.Z ps.gz pdf
A sharp upper bound on the approximation order of smooth bivariate pp functions
Carl de Boor and Rong-Qing Jia
March 1992
has appeared in J.Approx.Theory; 72(1); 1993; 24--33;
wavelet.: ps ps.Z ps.gz pdf
On the construction of multivariate (pre)wavelets
Carl de Boor, Ronald A. DeVore, Amos Ron
February 1992
has appeared in Constr.Approx.; 9; 1993; 123--166;
several.: ps ps.Z ps.gz pdf
The structure of finitely generated shift-invariant spaces in $L_2(\RR^d)$
Carl de Boor, Ronald A. DeVore, Amos Ron
February 1992
has appeared in J. of Functional Analysis 119(1); 1994; 37--78;
% minor correction in intro 01feb01
polinterr.: ps ps.Z ps.gz pdf
On the error in multivariate polynomial interpolation
Carl de Boor
has appeared in Applied Numerical Mathematics; 10; 1992; 297--305;
l2shift.: ps ps.Z ps.gz pdf
Approximation from shift-invariant subspaces of $L_2(\RR^d)$
Carl de Boor, Ronald A. DeVore, Amos Ron
July 1991
has appeared in Trans.Amer.Math.Soc. 341; 1994; 787--806;
% note, this file has the name `ell-2-shift', not `one-two-shift'.
% misprint corrected: 19oct94, 17nov97
aoinfty.: ps ps.Z ps.gz pdf
Fourier analysis of the approximation power of principal shift-invariant spaces
Carl de Boor, Amos Ron
July 1991
has appeared in Constr.Approx.; 8; 1992; 427--462;
quasiaprx.: ps ps.Z ps.gz pdf
Quasiinterpolants and approximation power of multivariate splines
Carl de Boor
July 1990
has appeared in
(Computations of curves and surfaces), Dahmen, Gasca, Micchelli
(eds.), Kluwer (Dordrecht, Netherlands); 1990; 313--345;
leastsol.: ps ps.Z ps.gz pdf
The least solution for the polynomial interpolation problem;
Carl de Boor, Amos Ron
June 1990
has appeared in Math.Zeitschrift; 210; 1992; 347--378;
compleast.: ps ps.Z ps.gz pdf
Computational aspects of polynomial interpolation in several variables
Carl de Boor, Amos Ron
March 1990
has appeared in Math.Comp.; 58; 1992; 705--727;
empty.: ps ps.Z ps.gz pdf
An empty exercise
Carl de Boor
March 1990
has appeared in
ACM SIGNUM Newsletter; 25(4); 1990; 2--6;
polintconte.: ps ps.Z ps.gz pdf
Polynomial interpolation in several variables
Carl de Boor
March 1990
has appeared in
(Studies in Computer Science {(in Honor of Samuel D. Conte)}),
R. DeMillo and J. R. Rice (eds.), Plenum Press (New York); 1994; 87--119;
djlr.: ps ps.Z ps.gz pdf
On multivariate approximation by integer translates of a basis function
N. Dyn, I.R.H. Jackson, D. Levin and A. Ron
Nov. 1989
has appeared in
Israel Journal of Mathematics {\bf 78} (1992), 95--130.
studia.: ps ps.Z ps.gz pdf
A characterization of the approximation order of multivariate spline spaces
A. Ron
November 1989
has appeared in
Studia Mathematica {\bf 98(1)} (1991), 73--90.
modest.: ps ps.Z ps.gz pdf
An alternative approach to (the teaching of) rank and dimension
Carl de Boor
February 1990
has appeared in
\LAA; 146; 1991; 221--229;
quasi.: ps ps.Z ps.gz pdf
The exponentials in the span of the multiinteger translates of a compactly supported function: quasiinterpolation and approximation order
Carl de Boor and Amos Ron
November 1989
has appeared in
J. London Math. Soc. (2); 45; 1992; 519--535;
polideal.: ps ps.Z ps.gz pdf
Polynomial ideals and multivariate splines
Carl de Boor, Amos Ron
June 1989
has appeared in
(Multivariate Approximation Theory IV, ISNM 90),
C. Chui, W. Schempp, and K. Zeller (eds.),
Birk\-h\"auser Verlag (Basel); 1989; 31--40;
boxtiling.: ps ps.Z ps.gz pdf
Carl de Boor, K. H"ollig
Box-spline tilings
May 1989
has appeared in \AMMo; 98; 1991; 793--802;
twopolspaces.: ps ps.Z ps.gz pdf
On two polynomial spaces associated with a box spline
Carl de Boor, Nira Dyn \& Amos Ron
April 1989
has appeared in
\PJM; 147; 1991; 249--267;
% (may'99: corrected and updated the references)
newideal.: ps ps.Z ps.gz pdf
On polynomial ideals of finite codimension with applications to box spline theory;
Carl de Boor, Amos Ron
December 1988
has appeared in
\JMAA; 158; 1991; 168--193;
% (de.01: corrected (3.8), changed pi to Pi, updated one reference)
multiint.: ps ps.Z ps.gz pdf
On multivariate polynomial interpolation
Carl de Boor, Amos Ron
November 1988
has appeared in Constr. Approx.; 6; 1990; 287--302;
% (13nov98: corrected two misprints and changed the symbol for polynomial
% space from \pi to \Pi)
limorg.: ps ps.Z ps.gz pdf
The limit at the origin of a smooth function space;
Carl de Boor, Amos Ron
November 1988
has appeared in \TexasVI; 93--96;
cornercut.: ps ps.Z ps.gz pdf
Local corner cutting and the smoothness of the limiting curve;
Carl de Boor
November 1988
has appeared in \CAGD; 7; 1990; 389--397;
chebspline.: ps ps.Z ps.gz pdf
The exact condition of the B-spline basis may be hard to determine
Carl de Boor
July 1988
has appeared in \JAT; 60; 1990; 344--359;
csd1.: ps ps.Z ps.gz pdf
A necessary and sufficient condition for the linear independence of the integer translates of a compactly supported distribution
Amos Ron
has appeared in:
Constructive Approximation {\bf 5}(1989), 297--308.
ebs4.: ps ps.Z ps.gz pdf
Local approximation by certain spaces of multivariate exponential-polynomials, approximation order of exponential box splines and related interpolation problems
Nira Dyn and Amos Ron
January 1988
has appeared in:
\TAMS; 319; 1990; 381--403
whatisspline.: ps ps.Z ps.gz pdf
What is a multivariate spline?
Carl de Boor
August 1987
has appeared
(Proc.\ First Intern.\ Conf.\ Industr.\ Applied Math., Paris 1987), J.\
McKenna and R. Temam (eds.), SIAM (Philadelphia PA); 1988; 90--101;
bsplloccond.: ps ps.Z ps.gz pdf
The condition of the B-spline basis for polynomials
Carl de Boor
April 1987
has appeared as
\SJNA; 25(1); 1988; 148--152;
but the journal's final computer-aided processing messed up the paper through
the omission of various words (that happened to lie beyond column 80 on a
line in the TeX file).
bsplbasic.: ps ps.Z ps.gz pdf
B-spline basics
Carl de Boor
MRC 2952, 1986
in (Fundamental Developments of Computer-Aided Geometric Modeling),
Les Piegl (ed.), Academic Press (London) 1993; 27--49;
% Corrected (in Section 12) on 04 mar 96.
% Scaling of figures adjusted and misprints corrected on 03 jun 96
% A misprint corrected (and adjusted to current tex-macros) on 06 jun 96
% A misprint corrected on 12feb98
% A reference updated 27apr09 but the change to (2.4b) made then rescinded on 30mar10
birmingham.: ps ps.Z ps.gz pdf
Multivariate Approximation
Carl de Boor
MRC 2950, August 1986
in (State of the Art in Numerical Analysis),
A. Iserles and M. Powell (eds.),
Institute Mathematics Applications (Essex); 1987; 87--109;
% 10mar09: some references, some notation (for pol.spaces, intervals, etc)
% updated; unfortunately, the picture files are lost.
BBform.: ps ps.Z ps.gz pdf
$B$--form basics;
Carl de Boor
in (Geometric Modeling: Algorithms and New Trends),
G. E. Farin (ed.),
SIAM Publications (Philadelphia); 1987; 131--148;
% 30nov09 supplied the missing Figure 3, and corrected a typo in (3.3) and in
% last display before Section 6.
polshift.: ps ps.Z ps.gz pdf
The polynomials in the linear span of integer translates of a compactly supported function
Carl de Boor
has appeared in: \CA; 3; 1987; 199--208;
% may99: updated the references, corrected various typos, and made the changes:
% \Pi --> \tilde\Pi, \pi -->\Pi, scale-invariant --> dilation-invariant,
% scale (S_h) --> ladder (S_h)
Convergence of cardinal series;
Carl de Boor, Klaus H\"ollig, Sherman Riemenschneider
August 1985
has appeared: \PAMS; 98(3); 1986; 457--460;
notaknot.: ps ps.Z ps.gz pdf
Convergence of cubic spline interpolation with the not-a-knot condition
Carl de Boor
MRC TSR 2876, October 1985
Partitions of unity and approximation
Carl de Boor and Ronald DeVore
May 1984
has appeared in \PAMS; 93(4); 1985; 705--709;
A geometric proof of total positivity for spline interpolation
Carl de Boor and Ron DeVore
March 1984
has appeared in \MC; 45(172); 1985; 497--504;
contrapp.: ps ps.Z ps.gz pdf
Controlled approximation and a characterization of the local approximation order
Carl de Boor and R.-Q. Jia
has appeared in Proc.\ AMS; 95(4); 1985; 547--553;
Approximation order from bivariate $C^1$-cubics: A counterexample
Carl de Boor and Klaus H"ollig
has appeared as
\PAMS; 87(4); 1983; 649--655;
Approximation order from smooth bivariate splines
C. de Boor, R. DeVore, K. H\"ollig
has appeared in \TexasIV; 353--357;
Approximation by smooth multivariate splines
Carl de Boor and Ron DeVore
December 1981
has appeared in \TAMS; 276(2); 1983; 775--788;
Inverses of infinite sign regular matrices
C. de Boor, S. Friedland, and A. Pinkus
November 1980
has appeared: \TAMS; 274(1); 1982; 59--68;
Recurrence relations for multivariate B-splines
Carl de Boor and Klaus H\"ollig
May 1981
has appeared: \PAMS; 85(3); 1982; 397--400;
The inverse of a totally positive bi-infinite band matrix
C. de Boor
November 1980
has appeared: \TAMS; 274(1); 1982; 45--58;
Collocation approximation to eigenvalues of an ordinary differential equation: Numerical illustrations;
Carl de Boor and Blair Swartz
summer 1980
has appeared in \MC; 36(153); 1981; 1--19;
Local piecewise polynomial projection methods for an O.D.E.\ which give high-order convergence at knots
Carl de Boor and Blair Swartz
summer 1980
has appeared in \MC; 36; 1981; 21--33;
maxnormbound.: ps ps.Z ps.gz pdf
On a max-norm bound for the least-squares spline approximant
Carl de Boor
has appeared in
(Approximation and Function Spaces),
Z. Ciesielski (ed.),
North Holland (Amsterdam); 1981; 163--175;
Collocation approximation to eigenvalues of an ordinary differential equation: The principle of the thing;
Carl de Boor and Blair Swartz
summer 1980
has appeared in \MC; 35(151); 1980; 679--694;
C. de Boor, R. DeVore, K. H\"ollig
Mixed norm $n$-widths
March 1979
has appeared in \PAMS; 80(4); 1980; 577--583;
agee.: ps ps.Z ps.gz pdf
How does Agee's smoothing method work?
Carl de Boor
has appeared in
(Proceedings of the 1979 Army Numerical Analysis and Computers Conference),
xxx (ed.), ARO Rept.\ 79-3, Army Research Office (Triangle Park NC); 1979;
The Numerically Stable Reconstruction of a Jacobi Matrix from Spectral Data;
Carl de Boor and G. H. Golub
apr 1977
\LAA; 21; 1978; 245--260;
Comments on the comparison of global methods for linear two-point boundary
value problems;
Carl de Boor and Blair Swartz
summer 1977
has appeared in \MC; 31(140); 1977; 916--921;
survey76.: ps ps.Z ps.gz pdf
Splines as linear combinations of B-splines. A Survey
Carl de Boor
has appeared in \TexasII; 1--47;
% corrected version (with updated references) 19sep97
% left off an extraneous label (6.1) 01aug03
oddbiinf.: ps ps.Z ps.gz pdf
Odd-degree spline interpolation at a biinfinite knot sequence
Carl de Boor
TeX-version of MRC TSR #1666, August 1976
has appeared in \BonnI; 30--53;
l2inlinfconj.: ps ps.Z ps.gz pdf
A bound on the $L_\infty$-norm of $L_2$-approximation by splines in terms of a global mesh ratio;
Carl de Boor
has appeared:
\MC; 30(136); 1976; 765--771;
loclinfl.: ps ps.Z ps.gz pdf
On local linear functionals which vanish at all $B$-splines but one;
Carl de Boor
has appeared in (Theory of Approximation with Applications),
A. G. Law and N. B. Sahney (eds.),
Academic Press (New York); 1976; 120--145;
budanfourier.: ps ps.Z ps.gz pdf
Cardinal interpolation and spline functions VIII: The Budan Fourier theorem for splines and applications;
Carl de Boor and I. J. Schoenberg
feb 1975
has appeared in (Lecture Notes in Mathematics 501), K. B\"ohmer (ed),
Springer-Verlag (Berlin); 1976; 1--77;
% corrected jul2000 a la KohlerNikolov95a
smallderiv.: ps ps.Z ps.gz pdf
A smooth and local interpolant with ``small'' $k$-th derivative;
Carl de Boor
has appeared in
(Numerical Solutions of Boundary Value Problems for Ordinary Differential Equations),
A. Aziz (ed.), Academic Press (New York); 1975;
howsmall.: ps ps.Z ps.gz pdf
How small can one make the derivatives of an interpolating function?;
Carl de Boor
has appeared as \JAT; 13; 1975; 105--116;
% corrected version (with updated references) 22oct99
A remark concerning perfect splines
Carl de Boor
has appeared as \BAMS; 80(4); 1974; 724--727;
splbound.: ps ps.Z ps.gz pdf
On bounding spline interpolation;
Carl de Boor
\JAT; 14(3); 1975; 191--203;
% added a footnote pointing out that Jia disproved the conjecture, in %Jia88d
quasiint.: ps ps.Z ps.gz pdf
The quasi-interpolant as a tool in elementary polynomial spline theory;
Carl de Boor
has appeared in \TexasI; 269--276;
goodappr.: ps ps.Z ps.gz pdf
Good approximation by splines with variable knots;
Carl de Boor
has appeared in \EdmontonI; 57--72;
Carl de Boor and James W. Daniel
Splines with nonnegative $B$-spline coefficients
april 1973
has appeared: \MC; 28(126); 1974; 565--568;
Subroutine Package for Calculating with B-splines;
Carl de Boor
August 1971
report LA-4728-MS Los Alamos scientific laboratory,
An extended version has appeared as
``Package for calculating with B-splines'', \SJNA; 14; 1977; 441--472;
gamma.: ps ps.Z ps.gz pdf
On the approximation by $\gamma$-polynomials;
Carl de Boor
has appeared in \MadisonII; 157--183;
unifappr.: ps ps.Z ps.gz pdf
On uniform approximation by splines
Carl de Boor
has appeared as \JAT; 1; 1968; 219--235;
4oct08 various misprints have been corrected
tr20.: ps ps.Z ps.gz pdf
Least Squares Cubic Spline Approximation I -- Fixed Knots
Carl de Boor, John R. Rice
CSD TR 20 April 1968
tr21.: ps ps.Z ps.gz pdf
Least Squares Cubic Spline Approximation II -- Variable Knots
Carl de Boor, John R. Rice
CSD TR 21 April 1968
% error in NUBAS corrected 3oct01
On local spline approximation by moments;
Carl de Boor
Sep 1966
has appeared in
\JMM; 17; 1968; 729--735;
deboorphd.: ps ps.Z ps.gz pdf
The method of projections as applied to the numerical solution of two point boundary value problems using cubic splines
Carl de Boor Ph.D. thesis, Univ.\ Michigan
August 1966
% corrected version (with updated references) 4apr96
lynch.: ps ps.Z ps.gz pdf
On splines and their minimum properties;
Carl de Boor and Robert E. Lynch
has appeared in
\JMM; 15; 1966; 953--969;
Nonlinear interpolation by splines, pseudosplines and elastica;
Garrett Birkhoff, Hermann Burchard, Donald Thomas;
february 1965
VERY LONG FILE, 4.8Mb
GMR 468, General Motors Research Laboratories (Warren MI); 1965;
Piecewise polynomial interpolation and approximation;
Garrett Birkhoff and Carl R. de Boor;
VERY LONG FILE, 8.1Mb
has appeared in
\Generalmotors; 164--190;
Bicubic spline interpolation;
Carl de Boor
VERY LONG FILE, 4.8Mb
has appeared in
\JMP (J.Mathematics and Physics); 41(3); 1962; 212--218;
viva_vi.: ps ps.Z ps.gz pdf
Viva vi!, (a brief introduction to vi)
Carl de Boor
version: 03aug03
intro.: ps ps.Z ps.gz pdf
TeXnicalities (an informal introduction to TeX use) (the odd pages)
Carl de Boor
version: 06feb99
ttintro.: ps ps.Z ps.gz pdf
TeXnicalities (an informal introduction to TeX use) (the even pages)
Carl de Boor
version: 06feb99
A file of plain TeX macros useful for writing papers and books in plain TeX (including automatic sequencing of formal items and items in the bibliography, and the exact placement of items).
Carl de Boor
version: 31may04
A file of plain TeX macros useful for handling the typesetting of programs and program-related material in plain TeX.
Carl de Boor
version: 25feb98
A file of plain TeX macros useful for generating (the draft of) an index.
Carl de Boor
version: 27jun98
TeX macros of use with the spline bibliography.
Carl de Boor
version: 05jun04
TeX macros of use with the spline bibliography.
Carl de Boor
version: 02may04
TeX macros of use with the spline bibliography.
Carl de Boor
version: 27may04
a self-unwrapping wrapper containing files for simplifying the safe mailing of files via email.
Carl de Boor
version: mar 96
isobib.: ps ps.Z ps.gz pdf
List of publications of I. J. Schoenberg
(Carl de Boor)
updated 10mar09
Triangle Inequality Theorem
The triangle inequality theorem states that any side of a triangle is always shorter than the sum of the other two sides.
Try this: Adjust the triangle by dragging the points A, B, or C. Notice how the longest side is always shorter than the sum of the other two.
In the figure above, drag the point C up towards the line AB. As it gets closer you can see that the line AB is always shorter than the sum of AC and BC. It gets close, but never quite makes it until
C is actually on the line AB and the figure is no longer a triangle.
The shortest distance between two points is a straight line. The distance from A to B will always be longer if you have to 'detour' via C.
To illustrate this topic, we have picked one side in the figure above, but this property of triangles is always true no matter which side you initially pick. Reshape the triangle above and convince
yourself that this is so.
The Converse
A triangle cannot be constructed from three line segments if any of them is longer than the sum of the other two.
For more on this see Triangle inequality theorem converse.
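A minimal Python sketch of this check (illustrative only; the function name can_form_triangle is not from the page):

    def can_form_triangle(a: float, b: float, c: float) -> bool:
        """Return True if segments of lengths a, b, c can form a triangle.

        By the converse of the triangle inequality theorem, construction
        fails exactly when one segment is as long as, or longer than, the
        sum of the other two (equality gives a degenerate flat figure).
        """
        sides = sorted([a, b, c])
        return sides[0] > 0 and sides[0] + sides[1] > sides[2]

    # (3, 4, 5) forms a triangle; (1, 2, 4) does not, since 4 > 1 + 2.
    assert can_form_triangle(3, 4, 5)
    assert not can_form_triangle(1, 2, 4)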
Related triangle topics
Perimeter / Area
Triangle types
Triangle centers
Congruence and Similarity
Solving triangles
Triangle quizzes and exercises
|
{"url":"http://www.mathopenref.com/triangleinequality.html","timestamp":"2014-04-16T18:56:12Z","content_type":null,"content_length":"12050","record_id":"<urn:uuid:bb76ad29-67f7-4f8c-ab2d-5efee65b1199>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An object's motion is described by the equation d = 4 sin(πt). The displacement d is measured in meters; the time t is measured in seconds. 1. What is the object's position at t = 0? Be sure to include appropriate units. 2. What is the object's maximum displacement from its resting position? Be sure to include appropriate units. 3. How much time is required for one oscillation? 4. What is the frequency of this motion? 5. What will the height of the object be at t = 1.75 seconds?
1) At t = 0, d = 0 m.
2) The maximum displacement is just the maximum value of 4 sin(πt), which is 4 m.
3) One oscillation means the total angle advances by 2π: at t = 0 the angle is 0, and at t = 2 the angle is 2π, hence the period is 2 s.
4) Frequency = 1/t = 1/2 = 0.5 Hz.
5) Taking d itself as the height, d(1.75) = 4 sin(1.75π) = -2√2 ≈ -2.83 m, i.e., about 2.83 m below the resting position. (Strictly, this depends on what path the object is moving along — circular or straight — and on whether displacement is measured with respect to a point or a line.)
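The answers above can be checked numerically. A quick Python sketch (illustrative only, not from the original thread), modeling d(t) = 4 sin(πt) with d in meters:

    import math

    AMPLITUDE = 4.0   # meters; also the maximum displacement
    OMEGA = math.pi   # angular frequency in rad/s

    def displacement(t: float) -> float:
        return AMPLITUDE * math.sin(OMEGA * t)

    period = 2 * math.pi / OMEGA  # one oscillation takes 2 s
    frequency = 1 / period        # 0.5 Hz

    print(displacement(0))        # 0.0 m           (question 1)
    print(AMPLITUDE)              # 4 m             (question 2)
    print(period, frequency)      # 2.0 s, 0.5 Hz   (questions 3 and 4)
    print(displacement(1.75))     # about -2.83 m   (question 5)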
biggest cube problem (given set of bricks)
Input: a set of bricks, each one made of 1x1x1 cubes glued together face to face, like Tetris pieces.
Problem: find a way of putting those pieces together to make a solid that contains the biggest full cubical subsolid.
For example, the solid below contains a 2x2x2 cube.
Is there any smart algorithm I could use for this kind of problem? The only solution that comes to mind is ordinary brute force. I'm looking for ideas for an exact algorithm as well as an approximation algorithm.
Shouldn't we expect it to be NP-hard to know whether they can be assembled to have a subsolid of a certain size? – Joel David Hamkins Oct 15 '11 at 10:46
To add support for Joel's hunch, "Tetris is Hard, Even to Approximate": arxiv.org/abs/cs/0210020. Their proofs use reductions from 3-partition. – Joseph O'Rourke Oct 15 '11 at 14:11
I edited to add the image, and cleaned up some grammar. I am not clear, however, on the intended meaning of "full", whether it means a cubical subsolid, or merely rectangular. – Joel David Hamkins Oct 15 '11 at 19:01
@Joel: I read it as meaning (i) the subsolid should be a perfect cube, and not just a large rectangle, and (ii) the subsolid should be convex, rather than say having some empty interior. If this is not the correct reading, I hope @mn will revise the question. – Theo Johnson-Freyd Oct 16 '11 at 1:05
@Theo Johnson-Freyd: that is exactly what I meant – m n Oct 16 '11 at 14:01
2 Answers
In general the problem should be NP-hard. While this may not be a reduction, I am thinking of trying to pack n^2 many 1 by 1 by n bins with 1 by 1 by k bricks, where k actually means many bricks of different sizes; if I am right, bin packing can be reduced to your problem.
If the bricks are of few types, it may be possible to construct quickly tiling solutions that allow one to get nice approximations. For example, 3 of the 7 blocks used to build a Soma cube each tile the 2x2x2 cube, so given any count of those 3 kinds of blocks, you can likely come within O(1) of the maximal cubical volume achievable using little more than arithmetic; if you can generate dissections of small cubes or prisms with your input bricks, you can then quickly decide which of many rectangular prisms are nicely buildable with the given tile set, and this can be used to approximate the maximal rectangular volume, or maximal cubical volume, as desired. Note that this does not contradict the above (idea for a) reduction because k in this instance is bounded from above by some small number.
Gerhard "Ask Me About System Design" Paseman, 2011.10.15
In the example using 3 kinds of blocks, it is easy to construct cubes of side length 2n with sufficient even quantities of the blocks, and figures which are "2n+1 cubes with three bumpy
sides" given sufficient quantities of all the blocks, so this particular problem has most cases nicely solvable in linear or near linear time, depending on how long it takes to compute a
cube root. Gerhard "Ask Me About System Design" Paseman, 2011.10.15 – Gerhard Paseman Oct 16 '11 at 5:09
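As a rough illustration of the "little more than arithmetic" bound mentioned above, here is a hedged Python sketch (not the poster's code): the side of any full cube assembled from the given bricks can never exceed the cube root of their total volume.

    def cube_side_upper_bound(brick_volumes):
        """Upper bound on the side of the largest cubical subsolid.

        brick_volumes: iterable giving the number of unit cubes in each
        brick. Any achievable k x k x k cube satisfies k**3 <= total.
        """
        total = sum(brick_volumes)
        k = 0
        while (k + 1) ** 3 <= total:
            k += 1
        return k

    # Seven Soma-like pieces totalling 27 unit cubes bound the side at 3.
    print(cube_side_upper_bound([3, 4, 4, 4, 4, 4, 4]))  # -> 3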
Look at dimension $n=2$. Then this reduces to finding a clique or independent set of given size $k$, which is NP-complete when the cubes (squares) are arbitrarily placed on $\mathbb{Z}^{2}$. Hence, this is NP-complete for higher dimensions in the general case. It would be interesting to look at semidefinite relaxations of this problem (say, call it a higher-dimensional clique or independent set problem).
Having said that, your problem is specific - in the sense that the cubes are not disjoint and the number of cubes in higher rows is lower than the number of cubes in lower rows - for $n=3$, this specific case is most likely $O(N^{3})$, as it looks like scanning each row from the bottom may suffice ($N$ is the maximum of rows in all dimensions).
Chapter 5
Part 1: For the problem in the Teacher's Edition, page 108
Provide students with the Problem Worksheet (PDF file).
To find the average speed of the Concorde, students need to divide 17,400 miles by 12. To find how many miles Amelia Earhart could have traveled in 12 hours, students need to find her average speed
per hour by dividing 2,205 by 15. Then take the answer, 147, and multiply it by 12 to find how many miles Amelia Earhart could have traveled in 12 hours.
The average speed of the Concorde was about 1,450 miles per hour (mi/h).
Amelia Earhart could have traveled about 1,764 miles in 12 hours.
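A quick check of this arithmetic in Python (illustration only; the variable names are ours):

    concorde_speed = 17400 / 12            # 1450.0 mi/h
    earhart_speed = 2205 / 15              # 147.0 mi/h
    earhart_12_hours = earhart_speed * 12  # 1764.0 miles
    print(concorde_speed, earhart_speed, earhart_12_hours)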
Part 2: Be an Investigator
A good time to do this investigation is after Lesson 5 on division with greater numbers.
Introducing the Investigation
Introduce the investigation by reading aloud the assignment at the top of the first page of the Description of Investigation and Student Report (PDF file), by having one of your students read aloud
the assignment, or by having the students read the assignment individually.
Ask, How would you find the average speed in miles per hour (mi/h) of one of these flights? (divide the distance by the amount of time)
Ask, Which average speed is not in terms of mi/h? (the Wright Brothers' average speed)
Doing the Investigation
Have students share both their solution and how they found it.
Answers for the Data Sheet
Date | Name | Flight | Distance | Length of Time | Average Speed
December 17, 1903 | Wright Brothers | First flight | 120 feet | 12 seconds | about 7 mi/h
May 17, 1913 | Domingo Rosillo | First flight from Florida to Cuba | 90 miles | about 3 hours | about 30 mi/h
May 20, 1927 | Charles Lindbergh | New York to Paris | 3,610 miles | about 34 hours | about 106 mi/h
May 20, 1932 | Amelia Earhart | First solo flight across the Atlantic by a woman | 2,026 miles | about 15 hours | about 135 mi/h
Student Report
The student report gives students an opportunity to show the results of the work they have done calculating the average speed of some famous flights.
Extending the Investigation
You might want to have the students use a calculator to calculate the average speed of the Wright Brothers' flight in miles per hour. This would make it easier to compare their speed with the other flights.
To find the average speed per hour rather than per second, you would have to determine the number of seconds in an hour. There are 60 seconds in one minute and 60 minutes in an hour, so there are 60 × 60 or 3,600 seconds in an hour. 10 ft/s is multiplied by 3,600 to get 36,000 feet per hour, which is then divided by 5,280 (the number of feet in a mile) to give about 7 mi/h.
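The conversion can also be shown in Python (a sketch for the teacher; the names are ours):

    FEET_PER_MILE = 5280
    SECONDS_PER_HOUR = 60 * 60  # 3,600

    def feet_per_second_to_miles_per_hour(speed_fps: float) -> float:
        return speed_fps * SECONDS_PER_HOUR / FEET_PER_MILE

    # The Wright Brothers: 120 feet in 12 seconds = 10 ft/s.
    print(feet_per_second_to_miles_per_hour(120 / 12))  # about 6.8, i.e., about 7 mi/h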
X-rays spotted bouncing off relativistic matter spiraling into black hole
Gas is moving nearly at light-speed as the space it occupies is dragged around.
by Matthew Francis - Feb 27, 2013 6:37 pm UTC
The space near black holes is one of the most extreme environments in the Universe. The bodies' strong gravity and rotation combine to create rapidly spinning disks of matter that can emit huge
amounts of light at very high energies. However, the exact mechanism by which this light is produced is uncertain, largely because high-resolution observations of black holes are hard to do. Despite
their outsized influence, black holes are physically small: even a black hole a billion times the mass of the Sun occupies less volume than the Solar System.
A new X-ray observation of the region surrounding the supermassive black hole in the Great Barred Spiral Galaxy may have answered one of the big questions. G. Risaliti and colleagues found the
distinct signature of X-rays reflecting off gas orbiting the black hole at nearly the speed of light. The detailed information the astronomers gleaned allowed them to rule out some explanations for
the bright X-ray emission, bringing us closer to an understanding of the extreme environment near these gravitational engines.
Despite the stereotype of black holes "sucking" matter in, they attract it via gravity. That means stars, gas, and other things can fall into orbits around black holes, which may be stable for long
periods of time. Gas often forms accretion disks and jets that release huge amounts of energy in the form of light. This energy can include X-ray emissions. So despite their name, black holes can be
very luminous objects.
Nearer the boundary of a rotating black hole—its event horizon—the strength of gravity is such that the space matter occupies can be also dragged around the black hole. This effect is called "frame
dragging," and is predicted by Einstein's general theory of relativity. The region in which frame dragging becomes significant, however, is very close to the black hole's event horizon, which is
relatively small, especially when imaged from Earth. As a result, astronomers could not be sure whether ordinary orbital effects or relativistic frame-dragging is more important for producing the
intense X-ray emissions.
Astronomers paid particularly close attention to the supermassive black hole at the center of the Great Barred Spiral Galaxy (also known by its catalog number NGC 1365) when a cloud of gas
momentarily eclipsed it. That rare event allowed them to get a good size estimate for the accretion disk that surrounds the black hole. The current study followed up by monitoring fluctuations in the
X-ray emissions, using the orbiting XMM-Newton and NuSTAR X-ray telescopes.
In particular, the researchers looked at emission from neutral and partly ionized iron atoms in the gas. Prior observations showed that the emission lines were broadened, which can be caused by
several different phenomena. Researchers considered two primary hypotheses: absorption by other gas along the line of sight between the black hole and us, or very fast motion of the gas itself.
The new data strongly supported the latter option. In this scheme, the observed X-ray light reflected off the inner edge of the accretion disk, where the gas is moving at very close to the speed of
light. According to the models, this scattering occured well within the frame-dragging region near the black hole. The inner edge of the accretion disk may be close to or at the minimum stable
distance from the black hole. Closer than that distance, and matter can no longer orbit in a circular path—it will tend to spiral in.
The authors argued that any explanation of the X-ray emission that fails to account for the general-relativistic effects just won't work. Previous observations estimated that the black hole in the
Great Barred Spiral Galaxy is spinning nearly as fast as possible; whether other black holes will have similar properties remains an open question.
Nature, 2013. DOI: 10.1038/nature11938 (About DOIs).
Promoted Comments
• LawOfEntropyArs Praetorian
Veritas super omens wrote:
I still don't get it. I thought a black hole was a singularity. A point mass. That would infer no diameter. How can something with no diameter have a spin. It seems there would be no way to tell
where the "start" of a rotation is.
Same way the electron, as a point particle, has spin. So long as you're massive, nothing prevents your wave function from carrying angular momentum, and when a black hole has angular momentum,
space is dragged along for the ride. Actually, any rotating mass does this (it's been measured around the Earth) but it's most extreme around a black hole.
49 Reader Comments
1. g0m3r619Ars Tribunus Militum
Relativistic matter has Porn blocker? WHO KNEW?!
2. asreenuSmack-Fu Master, in training
Very interesting article. I always thought that the energy emitted from a black hole was relatively small and couldn't be detected easily. I hope they will be able to crack the mystery of what happens to matter and space when it enters a black hole and how it gets reshaped.
3. Tyler X. DurdenArs Praefectus
even a black hole a billion times the mass of the Sun occupies less volume than the Solar System.
Umm, given that the solar system is roughly 2,000 sun diameters, and thus constitutes roughly 8,000,000,000 times the volume of the sun, I certainly would hope so.
4. atlcomputechArs Centurion
That sucks...
5. Tyler X. DurdenArs Praefectus
asreenu wrote:
Very interesting article. I always thought that the energy emitted from a black hole was relatively small and couldn't be detected easily.
Still true, it is still an indirect observation they are talking about. It is the mess around the black hole that is reflecting and generating EM, assumed to be gaseous matter that the black hole
is in the process of accumulating.
For black holes that are not accumulating large amounts of matter like this, we have to rely on gravity more directly, such as gravitational lensing, which is trickier because you need a suitable distant directly observable object behind it relative to our position.
6. Control GroupArs Legatus Legioniset Subscriptor
TFA wrote:
Previous observations estimated that the black hole in the Great Barred Spiral Galaxy is spinning nearly as fast as possible; whether other black holes will have similar properties remains an
open question.
Quick question from the ignorant: what does "nearly as fast as possible" mean? What's the limit on black hole rotational speed?
7. deas187Ars Scholae Palatinae
I imagine this as being like looking into two mirrors that are parallel and facing each other
8. Wheels Of ConfusionArs Legatus Legionis
I hear Professor Challenger has an interesting theory regarding the spreading of the emission lines.
9. redtomatoArs Praetorian
BBC news has an excellent writeup of this:
"The results suggest a black hole more than 3 million km across, whose outermost edge is moving at a speed near that of light."
Mind: boggled.
Head: asplode.
10. wyrmholeArs Scholae Palatinae
Tyler X. Durden wrote:
Umm, given that the solar system is roughly 2,000 sun diameters, and thus constitutes roughly 8,000,000,000 billion times the volume of the sun, I certainly would hope so.
A black hole of 10^9 solar masses is much less dense than the sun if you use the Schwarzschild radius to describe the volume of the black hole.
~18 kg/m^3 for the black hole vs 1408 kg/m^3 for the sun.
Whereas if you had a black hole of 10^-9 solar masses, then that would be ridiculously more dense than the sun at 1.8*10^37 kg/m^3
Edit: Okay, I can't make my wolfram alpha links work. Here are the inputs I used:
"black hole event horizon radius with 10^9 solar masses"
"10^9 solar masses / ( 4/3 * pi(2.953*10^12 m)^3)"
"1 solar mass / solar volume"
""black hole event horizon radius with 10^-9 solar masses"
"10^-9 solar masses / ( 4/3 * pi(2.953*10^-6 m)^3)"
Last edited by wyrmhole on Wed Feb 27, 2013 1:46 pm
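The same comparison can be reproduced in Python (a sketch mirroring the Wolfram Alpha inputs above; not part of the original comment):

    import math

    G = 6.674e-11      # m^3 kg^-1 s^-2
    C = 2.998e8        # m/s
    M_SUN = 1.989e30   # kg

    def schwarzschild_density(mass_kg: float) -> float:
        """Mean density inside the Schwarzschild radius r = 2GM/c^2."""
        r = 2 * G * mass_kg / C**2
        volume = 4 / 3 * math.pi * r**3
        return mass_kg / volume

    print(schwarzschild_density(1e9 * M_SUN))   # roughly 18 kg/m^3
    print(schwarzschild_density(1e-9 * M_SUN))  # roughly 1.8e37 kg/m^3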
11. nowimnothingSmack-Fu Master, in training
redtomato wrote:
BBC news has an excellent writeup of this:
"The results suggest a black hole more than 3 million km across, whose outermost edge is moving at a speed near that of light."
Mind: boggled.
Head: asplode.
That is cool. From a non-physicist, would it be possible to utilize some of that speed like we would a gravitational slingshot?
12. LawOfEntropyArs Praetorian
Control Group wrote:
Quick question from the ignorant: what does "nearly as fast as possible" mean? What's the limit on black hole rotational speed?
A black hole rotating so fast that its angular momentum (multiplied by the fundamental constants necessary to convert it to a distance) is larger than its mass squared (converted to a distance in
a similar way) has no event horizon. We call this a naked singularity, and while it doesn't seem like GR itself allows a proof that naked singularities don't exist, simulations suggest and
empirical evidence supports the idea that it's impossible for such a black hole to form. Disallowing naked singularities limits the rate at which black holes of a given mass can spin. Most black
holes are right up against this limit, as a result of collapsing from much larger moderately spinning objects to their tiny sizes.
More info: http://en.wikipedia.org/wiki/Kerr_metri ... _solutions
13. CervusArs Centurion
Trying to think of what the Frame Dragging effect would be doing to the space-time around the black hole with that mass and that speed.
14. LawOfEntropyArs Praetorian
Cervus wrote:
Trying to think of what the Frame Dragging effect would be doing to the space-time around the black hole with that mass and that speed.
The black hole in this Wiki diagram is relatively pedestrian (in terms of its rotation rate) next to the one in the article.
http://en.wikipedia.org/wiki/File:Parti ... k_hole.svg
15. ChuckstarArs Tribunus Angusticlaviuset Subscriptor
LawOfEntropy wrote:
Control Group wrote:
Quick question from the ignorant: what does "nearly as fast as possible" mean? What's the limit on black hole rotational speed?
A black hole rotating so fast that its angular momentum (multiplied by the fundamental constants necessary to convert it to a distance) is larger than its mass squared (converted to a distance in
a similar way) has no event horizon. We call this a naked singularity, and while it doesn't seem like GR itself allows a proof that naked singularities don't exist, simulations suggest and
empirical evidence supports the idea that it's impossible for such a black hole to form. Disallowing naked singularities limits the rate at which black holes of a given mass can spin. Most black
holes are right up against this limit, as a result of collapsing from much larger moderately spinning objects to their tiny sizes.
More info: http://en.wikipedia.org/wiki/Kerr_metri ... _solutions
Even allowing naked singularities, no point on the event horizon would be able to exceed the speed of light, so depending on the radius, you could determine a maximum spin rate.
16. FortunatusSmack-Fu Master, in training
Great writing. Clearly explained.
17. LawOfEntropyArs Praetorian
Chuckstar wrote:
Even allowing naked singularities, no point on the event horizon would be able to exceed the speed of light, so depending on the radius, you could determine a maximum spin rate.
The event horizon isn't a physical object. The speed of light limit is local, so the space there can move as fast as it likes, dragging matter around from the perspective of an external observer
at as large an apparent speed as it wants, so long as no matter on the event horizon locally exceeds c. Except for the angular momentum limit.
18. ringobobArs Praefectus
LawOfEntropy wrote:
The event horizon isn't a physical object. The speed of light limit is local, so the space there can move as fast as it likes, dragging matter around from the perspective of an external observer
at as large an apparent speed as it wants, so long as no matter on the event horizon locally exceeds c. Except for the angular momentum limit.
I've not studied Relativity in too much depth, but this gives me a context to understand something I've always had a problem with re: the idea that the speed of light is a hard limit. Thanks!
19. gallahadSmack-Fu Master, in training
nowimnothing wrote:
redtomato wrote:
BBC news has an excellent writeup of this:
"The results suggest a black hole more than 3 million km across, whose outermost edge is moving at a speed near that of light."
Mind: boggled.
Head: asplode.
That is cool. From a non-physicist, would it be possible to utilize some of that speed like we would a gravitational slingshot?
The boost in speed from a gravitational slingshot actually comes from the velocity of the gravitational source, not the force of gravity itself. Kind of like how if you're in a car and throw a
ball, the ball's full velocity is a combination of the car's velocity and your throwing velocity. So, yes, you could, if the black hole is moving, but not really any differently than you would
get from a planet going the same way.
20. archtopArs Centurion
Control Group wrote:
TFA wrote:
Previous observations estimated that the black hole in the Great Barred Spiral Galaxy is spinning nearly as fast as possible; whether other black holes will have similar properties remains an
open question.
Quick question from the ignorant: what does "nearly as fast as possible" mean? What's the limit on black hole rotational speed?
I wondered the same thing, and after googling thoroughly, couldn't find anything anywhere. All I could find was that the event horizon was moving at "nearly" or "85%" of the speed of light.
But why no rotations-per-minute (or similar) figure for this well-characterized black hole? It's a figure commonly given for neutron stars, for example.
21. LawOfEntropyArs Praetorian
gallahad wrote:
The boost in speed from a gravitational sling shot actually comes from the velocity of the gravitational source, not the force of gravity itself. Kind of like how if you're in a car and throw a
ball, the ball's full velocity is a combination of the car's velocity and your throwing velocity. So, yes, you could, if the black hole is moving, but not really any differently than you would
get from a planet going the same way.
This is true, but black holes, because of frame dragging and their extreme rotational speeds, allow for a better mechanism. You can steal rotational energy from the rotating hole and accelerate
to relativistic speeds for "free".
22. ZassounotsukushiWise, Aged Ars Veteran
archtop wrote:
Control Group wrote:
TFA wrote:
Previous observations estimated that the black hole in the Great Barred Spiral Galaxy is spinning nearly as fast as possible; whether other black holes will have similar properties remains an
open question.
Quick question from the ignorant: what does "nearly as fast as possible" mean? What's the limit on black hole rotational speed?
I wondered the same thing, and after googling thoroughly, couldn't find anything anywhere. All I could find was that the event horizon was moving at "nearly" or "85%" of the speed of light.
But why no rotations-per-minute (or similar) figure for this well-characterized black hole? It's a figure commonly given for neutron stars, for example.
You ask good questions and we have answers for you! The question was addressed somewhat generally here:
http://physics.stackexchange.com/questi ... -radiation
So let me run you through the basics. A black hole is entropic death. It is a state of maximum entropy, so don't expect to pull it apart or get anything back out - physics will prevent it.
Ostensibly, you could spin it SO fast that its gravity could no longer hold stuff on the equator inside the event horizon. Obviously there must be a physical limit to how fast it can spin. Not
only spin, but charge of the black hole could cause stuff to be ejected. Imagine that the black hole has a very large negative or positive charge. On Earth, if something has too strong of a
charge it will discharge into the atmosphere, as you know. The same can happen in space if you have a voltage too ridiculously high. Like charges repel like, so if the black hole has a strong
enough charge, it would eject matter, and we cannot allow this. Another important detail is that Coulomb's constant is much greater than the gravitational constant, so compared to the total mass, it wouldn't even take that much (relative) charge to do this.
So, we know that the black hole can't spin too fast or have too much of an electrical charge. What if we try to force it? It will force things right back. Basically, if the black hole spins too
fast, then it won't accept any more spin (I say "spin" instead of angular momentum here). So if you take a wheel, spin it, and throw it into a black hole it won't take it. What will happen? Isn't
that the exciting part!? No actually, physicists have a pretty good handle on this. It will just throw stuff away that gets near to it.
On this point, yes, the rotational energy of a black hole could, in fact, be harvested with relative ease, like other people have asked. It's entirely thinkable that a black hole could be used as
a slingshot of sorts. If it is rotating close to the theoretical maximum, it will give up that extra rotation without much convincing needed.
23. wyrmholeArs Scholae Palatinae
ringobob wrote:
LawOfEntropy wrote:
The event horizon isn't a physical object. The speed of light limit is local, so the space there can move as fast as it likes, dragging matter around from the perspective of an external observer
at as large an apparent speed as it wants, so long as no matter on the event horizon locally exceeds c. Except for the angular momentum limit.
I've not studied Relativity in too much depth, but this gives me a context to understand something I've always had a problem with re: the idea that the speed of light is a hard limit. Thanks!
There's still an issue here with exceeding the speed of light and causality.
If you exceed the speed of light with respect to any external observer, then that observer will see you arrive at your destination before you left. "Seeing" here implying that some information
from you can reach them. According to that observer, effect will precede cause and either causality is broken, or the relativity principle is broken, and either way Relativity is broken.
In this sense, the speed of light is a global limit.
However there's only a causality problem if there's a potential for interaction. So for instance universal expansion implies that extremely distant objects are moving away from us faster than
light, but by the same token no information can reach us from them so there's no possibility of causality violation. Same with something within the event horizon moving faster than light with
respect to an outside observer -- no information can escape, so no causality violation.
Hypothetical FTL "cheats" like wormholes where you move FTL and then can hypothetically interact with the rest of the universe run into this problem. It's unknown how the universe deals with it.
24. AreWeThereYetiArs Scholae Palatinaeet Subscriptor
LawOfEntropy wrote:
Chuckstar wrote:
Even allowing naked singularities, no point on the event horizon would be able to exceed the speed of light, so depending on the radius, you could determine a maximum spin rate.
The event horizon isn't a physical object. ...
Unless there is a firewall :-). They may not exist, but the arguments going on about the idea of black hole firewalls are some of the most interesting in physics right now, to me. http://
More on topic, the possibility of apparent light-speed violation because of stretching of the underlying spacetime is something people usually forget about. I wonder how fast the frames are getting dragged right near the event horizon, and percentage-wise how much that increases the apparent speed of light relative to our frame.
25. Veritas super omensArs Scholae Palatinae
I still don't get it. I thought a black hole was a singularity. A point mass. That would infer no diameter. How can something with no diameter have a spin. It seems there would be no way to tell
where the "start" of a rotation is.
26. LawOfEntropyArs Praetorian
Veritas super omens wrote:
I still don't get it. I thought a black hole was a singularity. A point mass. That would infer no diameter. How can something with no diameter have a spin. It seems there would be no way to tell
where the "start" of a rotation is.
Same way the electron, as a point particle, has spin. So long as you're massive, nothing prevents your wave function from carrying angular momentum, and when a black hole has angular momentum,
space is dragged along for the ride. Actually, any rotating mass does this (it's been measured around the Earth) but it's most extreme around a black hole.
27. AreWeThereYetiArs Scholae Palatinaeet Subscriptor
Veritas super omens wrote:
I still don't get it. I thought a black hole was a singularity. A point mass. That would infer no diameter. How can something with no diameter have a spin. It seems there would be no way to tell
where the "start" of a rotation is.
I do not think spin means what you think it means. No disrespect intended, spin turns out to be a lot more mysterious and strange than people's intuition. One of the many strange conceptual
problems that physics faced in the last century had to do with exactly this issue. Things like electrons can be shown to have angular momentum, and people pictured this as a little spinning ball.
But people kept trying to measure the diameter of the electron, and came up with a value for an upper limit that would require the surface of the spinning electron at that diameter to be moving
faster than the speed of light. So clearly it can't be spinning the way that we think of a ball as spinning.
And as it turns out, ALL elementary particles (in the standard model) are dimensionless points, as far as is known. In other words, there isn't anything BUT dimensionless points. All other
non-elementary particles are simply clusters of those dimensionless points.
So how does a dimensionless point spin? No one knows what that means intuitively, but they do it, because we can measure the fact that they have angular momentum. The best way to think about it
is that quantum spin is more of a symmetry property of objects, not some property that involves "motion", and it can behave differently than our intuitions about spin. For example, electrons have
a spin of 1/2, which means they must rotate twice on their axis before they come back to their original wavefunction state.
Now not only is that really bizarre, since everything else in our experience comes back to the same orientation after 1 rotation, but it is even more bizarre since it also means that if YOU go
AROUND the electron once, that is equivalent to it rotating once, and so if you walk around an electron and then look at it, it isn't the same, it has a negated wavefunction, and you have to walk
around it twice before it looks the same as it did at the beginning. This is a mysterious feature of the way spacetime works, and we just have to accept it, and accept that spin is something
trickier than we think it is.
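The 720-degree oddity described above can be made concrete with a small numerical sketch (ours, using NumPy; not part of the original comment): rotating a spin-1/2 state by 2π about the z-axis negates it, and only a 4π rotation restores it.

    import numpy as np

    SIGMA_Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def rotate_spinor(psi: np.ndarray, theta: float) -> np.ndarray:
        """Apply exp(-i * theta * sigma_z / 2) to a two-component spinor."""
        rot = (np.cos(theta / 2) * np.eye(2)
               - 1j * np.sin(theta / 2) * SIGMA_Z)
        return rot @ psi

    up = np.array([1, 0], dtype=complex)
    print(rotate_spinor(up, 2 * np.pi))  # [-1, 0]: negated after one full turn
    print(rotate_spinor(up, 4 * np.pi))  # [ 1, 0]: restored after two turns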
28. Veritas super omensArs Scholae Palatinae
I think I'm starting to get it. "spin" is a manifestation of the interaction of matter with spacetime and HOW it manifests is varied depending on local conditions.
29. Veritas super omensArs Scholae Palatinae
I knew that electrons had "spin" but was under the impression that it wasn't spin like in a top but was an arbitrary physics name like "charm". I didn't realize that it was related to angular momentum. How do you measure the angular momentum of an electron?
30. AlhazredArs Scholae Palatinae
LawOfEntropy wrote:
Chuckstar wrote:
Even allowing naked singularities, no point on the event horizon would be able to exceed the speed of light, so depending on the radius, you could determine a maximum spin rate.
The event horizon isn't a physical object. The speed of light limit is local, so the space there can move as fast as it likes, dragging matter around from the perspective of an external observer
at as large an apparent speed as it wants, so long as no matter on the event horizon locally exceeds c. Except for the angular momentum limit.
Right, angular momentum needs to be conserved from every frame of reference. Remember, from the frame of reference of a particle falling into the black hole there IS no distinct event horizon either; each view of the situation is radically different, so different that the two cease to communicate (the apparent difference in velocity reaches c from both frames). Exactly which frames are dragging and where the angular momentum is are also entirely relative. Complete solutions for what happens at the singularity itself of course don't exist.
31. weary_scientist (Smack-Fu Master, in training)
"Despite their outsized influence, black holes are physically small: even a black hole a billion times the mass of the Sun occupies less volume than the Solar System."
This is not a good comparison...
Volume of the Sun ~ 1.41 x 10^18 km^3
Volume of the Solar System (only out to Neptune's orbit) ~ 3.81 x 10^29 km^3
Therefore 2.7 x 10^11 Suns would fit within the Solar System... 270 billion Suns.
So even assuming that your black hole is only half the Solar System, this would put it below 1% the density of the Sun...
Perhaps you should stick to telling us how many elephants the black hole weighs and how many 747s in diameter it is across. /troll
32. zelannii (Ars Praefectus)
Control Group wrote:
TFA wrote:
Previous observations estimated that the black hole in the Great Barred Spiral Galaxy is spinning nearly as fast as possible; whether other black holes will have similar properties remains an
open question.
Quick question from the ignorant: what does "nearly as fast as possible" mean? What's the limit on black hole rotational speed?
Well, in theory, though the center can spin very fast, at a point, instead of spinning the matter near it, once that matter starts approaching light-speed velocities itself, that matter can itself exert a "resistive" force on the spinning. Essentially, since the matter CAN'T go faster, spinning faster is thus impossible, as doing so would create conditions otherwise not possible; thus, given the buoyancy of that system, as more matter is introduced and the mass of the spinning plate increases, the black hole stops accelerating its own spin.
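For reference, the quantitative version of "as fast as possible" is the Kerr bound from general relativity, stated here as a textbook sketch rather than anything claimed by the thread itself:

$$J \le \frac{GM^2}{c} \qquad\text{i.e.}\qquad a_* \equiv \frac{cJ}{GM^2} \le 1$$

A maximally rotating (extremal Kerr) black hole has dimensionless spin parameter $a_* = 1$; the measurement discussed in the article places the NGC 1365 black hole near this limit.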
33. zelannii (Ars Praefectus)
weary_scientist wrote:
"Despite their outsized influence, black holes are physically small: even a black hole a billion times the mass of the Sun occupies less volume than the Solar System."
This is not a good comparison...
Volume of the Sun ~ 1.41 x 10^18 km^3
Volume of the Solar System (only out to Neptune's orbit) ~ 3.81 x 10^29 km^3
Therefore 2.7 x 10^11 Suns would fit within the Solar System... 270 billion Suns.
So even assuming that your black hole is only half the Solar System, this would put it below 1% the density of the Sun...
Perhaps you should stick to telling us how many elephants the black hole weighs and how many 747s in diameter it is across. /troll
The size of the "black hole" is not the same as the size of the point mass at its center. The "black hole" is the area inside the event horizon, but only a tiny fraction of that space is mass. In fact, the mass contained inside the singularity is compressed to a ball somewhat the size of a common sun but weighing several billion times as much. This creates an event horizon several light-days across, but that is not the size of the object inside it creating that phenomenon.
34. zelannii (Ars Praefectus)
AreWeThereYeti wrote:
Veritas super omens wrote:
I still don't get it. I thought a black hole was a singularity. A point mass. That would imply no diameter. How can something with no diameter have a spin? It seems there would be no way to tell where the "start" of a rotation is.
I do not think spin means what you think it means. No disrespect intended; spin turns out to be a lot more mysterious and strange than people's intuition suggests. One of the many strange conceptual problems that physics faced in the last century had to do with exactly this issue. Things like electrons can be shown to have angular momentum, and people pictured this as a little spinning ball.
But people kept trying to measure the diameter of the electron, and came up with a value for an upper limit that would require the surface of the spinning electron at that diameter to be moving
faster than the speed of light. So clearly it can't be spinning the way that we think of a ball as spinning.
And as it turns out, ALL elementary particles (in the standard model) are dimensionless points, as far as is known. In other words, there isn't anything BUT dimensionless points. All other
non-elementary particles are simply clusters of those dimensionless points.
So how does a dimensionless point spin? No one knows what that means intuitively, but they do it, because we can measure the fact that they have angular momentum. The best way to think about it
is that quantum spin is more of a symmetry property of objects, not some property that involves "motion", and it can behave differently than our intuitions about spin. For example, electrons have
a spin of 1/2, which means they must rotate twice on their axis before they come back to their original wavefunction state.
Now not only is that really bizarre, since everything else in our experience comes back to the same orientation after 1 rotation, but it is even more bizarre since it also means that if YOU go
AROUND the electron once, that is equivalent to it rotating once, and so if you walk around an electron and then look at it, it isn't the same, it has a negated wavefunction, and you have to walk
around it twice before it looks the same as it did at the beginning. This is a mysterious feature of the way spacetime works, and we just have to accept it, and accept that spin is something
trickier than we think it is.
Correct. The problem is that people think of an electron as a ball at all, when in reality it's a collection of smaller parts orbiting each other, just as electrons orbit protons and neutrons. The idea is that the pattern of those orbits creates a "wobble" that has angular momentum. The spin is 1/2 because the parts do not orbit in a plane around a center, but through and back again, each part moving in reaction to the others moving, and it takes 2 "spins" to reset all the parts to an originating (or equal in some other way) state. In fact, it's likely each of those parts has its own angular momentum internally, and they're likely made of yet smaller bits we're just starting to comprehend.
35. zelannii (Ars Praefectus)
Want to really blow people's minds? It's possible that inside black holes, matter exists at negative kelvin temperatures: a potential and kinetic energy so high that additional energy cannot be added, and in such a state, particles that would otherwise attract (and in fact cannot, because they create negative pressure) would instead collapse in upon themselves or repel each other. This may be the fundamental force behind black hole matter/energy ejection phenomena, and why the universe is expanding although gravity should be making it contract.
36. kbarb (Smack-Fu Master, in training)
There's another good write-up of NGC 1365, the Great Barred Spiral Galaxy, and its black hole's spin over at Bad Astronomy:
Superfast Spinning Black Hole Tearing Up Space at Nearly the Speed of Light
37. AreWeThereYeti (Ars Scholae Palatinae et Subscriptor)
zelannii wrote:
weary_scientist wrote:
"Despite their outsized influence, black holes are physically small: even a black hole a billion times the mass of the Sun occupies less volume than the Solar System."
This is not a good comparison...
Volume of the Sun ~ 1.41 x 10^18 km^3
Volume of the Solar System (only out to Neptune's orbit) ~ 3.81 x 10^29 km^3
Therefore 2.7 x 10^11 Suns would fit within the Solar System... 270 billion Suns.
So even assuming that your black hole is only half the Solar System, this would put it below 1% the density of the Sun...
Perhaps you should stick to telling us how many elephants the black hole weighs and how many 747s in diameter it is across. /troll
The size of the "black hole" is not the same as the size of the point mass at its center. The "black hole" is the area inside the event horizon, but only a tiny fraction of that space is mass. In fact, the mass contained inside the singularity is compressed to a ball somewhat the size of a common sun but weighing several billion times as much. This creates an event horizon several light-days across, but that is not the size of the object inside it creating that phenomenon.
No. The singularity at the center of a black hole is a dimensionless point, as far as is known.
38. AreWeThereYeti (Ars Scholae Palatinae et Subscriptor)
zelannii wrote:
AreWeThereYeti wrote:
Veritas super omens wrote:
I still don't get it. I thought a black hole was a singularity. A point mass. That would imply no diameter. How can something with no diameter have a spin? It seems there would be no way to tell where the "start" of a rotation is.
I do not think spin means what you think it means. No disrespect intended; spin turns out to be a lot more mysterious and strange than people's intuition suggests. One of the many strange conceptual problems that physics faced in the last century had to do with exactly this issue. Things like electrons can be shown to have angular momentum, and people pictured this as a little spinning ball.
But people kept trying to measure the diameter of the electron, and came up with a value for an upper limit that would require the surface of the spinning electron at that diameter to be moving
faster than the speed of light. So clearly it can't be spinning the way that we think of a ball as spinning.
And as it turns out, ALL elementary particles (in the standard model) are dimensionless points, as far as is known. In other words, there isn't anything BUT dimensionless points. All other
non-elementary particles are simply clusters of those dimensionless points.
So how does a dimensionless point spin? No one knows what that means intuitively, but they do it, because we can measure the fact that they have angular momentum. The best way to think about it
is that quantum spin is more of a symmetry property of objects, not some property that involves "motion", and it can behave differently than our intuitions about spin. For example, electrons have
a spin of 1/2, which means they must rotate twice on their axis before they come back to their original wavefunction state.
Now not only is that really bizarre, since everything else in our experience comes back to the same orientation after 1 rotation, but it is even more bizarre since it also means that if YOU go
AROUND the electron once, that is equivalent to it rotating once, and so if you walk around an electron and then look at it, it isn't the same, it has a negated wavefunction, and you have to walk
around it twice before it looks the same as it did at the beginning. This is a mysterious feature of the way spacetime works, and we just have to accept it, and accept that spin is something
trickier than we think it is.
Correct. The problem is that people think of an electron as a ball at all, when in reality it's a collection of smaller parts orbiting each other, just as electrons orbit protons and neutrons. The idea is that the pattern of those orbits creates a "wobble" that has angular momentum. The spin is 1/2 because the parts do not orbit in a plane around a center, but through and back again, each part moving in reaction to the others moving, and it takes 2 "spins" to reset all the parts to an originating (or equal in some other way) state. In fact, it's likely each of those parts has its own angular momentum internally, and they're likely made of yet smaller bits we're just starting to comprehend.
No. The electron is not a collection of smaller parts. It is an elementary particle that cannot be decomposed into parts, and is a dimensionless point, in the standard model.
39. Wheels Of Confusion (Ars Legatus Legionis)
AreWeThereYeti wrote:
No. The electron is not a collection of smaller parts. It is an elementary particle that cannot be decomposed into parts, and is a dimensionless point, in the standard model.
Unless they're made of preons.
For which $n$ is there only one group of order $n$?
Let $f(n)$ denote the number of (isomorphism classes of) groups of order $n$. A couple easy facts:
1. If $n$ is not squarefree, then there are multiple abelian groups of order $n$.
2. If $n \geq 4$ is even, then the dihedral group of order $n$ is non-cyclic.
Thus, if $f(n) = 1$, then $n$ is a squarefree odd number (assuming $n \geq 3$). But the converse is false, since $f(21) = 2$.
Is there a good characterization of $n$ such that $f(n) = 1$? Also, what's the asymptotic density of $\{n: f(n) = 1\}$?
(In case people want some data and known results, oeis.org/A000001) – Andres Caicedo Nov 13 '13 at 4:32
Presumably the density of $n$ with $f(n)=1$ is zero, because there are lots of semidirect products. – Lucia Nov 13 '13 at 4:40
Since we always have the cyclic group of order $n$, then $f(n)=1$ if and only if $n$ is a cyclic number. The cyclic numbers are well known: they are the square-free integers $n=p_1\cdots p_r$, where $p_1\lt p_2\lt\cdots\lt p_r$, in which $p_i$ does not divide any of $p_j-1$ for all $j\neq i$. See e.g. Pete Clark's answer here and references cited there. – Arturo Magidin Nov 13 '13
@DanielHast: The explicit description I give is equivalent to Gerry Myerson's below ($\gcd(n,\varphi(n))=1$). – Arturo Magidin Nov 13 '13 at 6:31
I'd rather have thought this question would be a good candidate for migration to Math.SE ... . – Stefan Kohl Nov 13 '13 at 14:04
4 Answers
$f(n)=1$ if and only if $\gcd(n,\phi(n))=1$, where $\phi$ is the Euler phi-function. These $n$ are tabulated at http://oeis.org/A003277
The result is found in Tibor Szele, Über die endlichen Ordnungszahlen, zu denen nur eine Gruppe gehört, Comment. Math. Helv. 20 (1947) 265–267, MR0021934 (9,131b).
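To make the criterion easy to play with, here is a minimal sketch (my own illustration, not part of the thread; C++ is used to match the programming material later in this compilation) that prints the cyclic numbers up to 100 by testing $\gcd(n,\phi(n))=1$:

```cpp
#include <iostream>
#include <numeric>  // std::gcd (C++17)

// Euler's totient via trial-division factorization; fine for small n.
unsigned long long phi(unsigned long long n) {
    unsigned long long result = n;
    for (unsigned long long p = 2; p * p <= n; ++p) {
        if (n % p == 0) {
            while (n % p == 0) n /= p;
            result -= result / p;
        }
    }
    if (n > 1) result -= result / n;
    return result;
}

int main() {
    // n is "cyclic" (exactly one group of order n) iff gcd(n, phi(n)) = 1.
    for (unsigned long long n = 1; n <= 100; ++n)
        if (std::gcd(n, phi(n)) == 1)
            std::cout << n << ' ';
    std::cout << '\n';  // 1 2 3 5 7 11 13 15 17 19 23 29 31 33 ... (OEIS A003277)
    return 0;
}
```

Note how the list skips $n=21$: $\phi(21)=12$ shares the factor 3 with 21, matching the observation in the question that $f(21)=2$.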
Let $G(x)$ denote the number of $n \leq x$ such that there is exactly $1$ isomorphism class of groups of order $n$. Then: $$G(x) \sim e^{-\gamma}\frac{x}{\log\log\log(x)}$$ where $\gamma$ is Euler's constant. This is a result of Erdos, Murty and Murty. Their paper also contains other interesting results on the distribution of values of the group order function.
Thanks — that's an interesting paper. To clarify, does $\gamma$ denote the Euler–Mascheroni constant? – Daniel Hast Nov 13 '13 at 5:08
@Daniel, Yes. It arises (essentially) through the use of Mertens' formula. – Mark Lewko Nov 13 '13 at 5:18
The original question has already been answered, so I thought I would provide a slightly more general version.
The short paper http://www.math.ku.dk/~olsson/manus/three-group-numbers.pdf describes those orders for which there are precisely 1, 2 or 3 groups of the given order.
If $p$ is the smallest prime dividing $n$, then if $n=pm$ and $p|\phi(m)$ then there exists a semidirect product of the cyclic group of order $p$ and the cyclic group of order $m$. So $f(n)$ is not $1$ for such $n$. Now given a prime $p$, most values of $m$ that are odd and coprime to $p$ will have $\phi(m)$ being a multiple of $p$ (all we need is some prime factor of $m$ to be $1\pmod p$, and usually $m$ will have some such factors). Since most numbers won't be coprime to all small primes, this will give a proof that the density of numbers with $f(n)=1$ is zero.
Note: Mark Lewko posted the interesting reference to Erdos, Murty & Murty while I was writing the answer above. Comparing our answers, one can see that the numbers with $f(n)=1$ are closely related to the numbers $n$ having no prime factor below $\log \log n$.
Space Shuttle - 1 (Program "shuttle")
Our theme exercise for this quarter is based on the NASA Space Shuttle. In spite of its age, the Space Shuttle has continued until now to be the prime element of the US Space Transportation System
for space research and applications. It can carry payloads of up to 29,000 kg, and is capable of launching deep space missions into their initial low Earth orbit. It is also able to retrieve
satellites from Earth orbit and repair and redeploy them, or bring them back to Earth for refurbishment and reuse. It was hoped that the Shuttle would continue to play a major role in the International Space Station program; however, since the latest accident, in which the Columbia was destroyed, the entire Shuttle program is currently under review. This exercise sequence is dedicated to the memory of the seven astronauts who perished in the tragic accident of the Columbia on Feb 1, 2003.
We see from the following photograph that the Space Shuttle consists of four major subsystems: the two Solid Fuel Rocket Boosters, the External Liquid Fuel Tank, and the Orbiter.
There are a number of phases in the cycle of a complete mission. These include the 'launch' phase in which the full power of the Space Shuttle Vehicle is used, including the two solid fuel booster
rockets and the three main engines of the orbiter. During this phase the Shuttle flight is almost completely vertically upwards. This phase ends after an elapsed time of about 2 minutes, when the two
booster rockets have reached the 'Brennschluss' (burnout) condition and separate from the Shuttle. In the second phase the orbiter continues to use the liquid fuel in the external tank (liquid
hydrogen, with liquid oxygen as the oxidizer) for about 6.5 more minutes, discards the external fuel tank and enters the 'orbit insertion' phase. With the various orbital operations, deorbit and
return to Earth, a mission can last up to fifteen days. Our exercise sequence will consider the initial 'launch' phase, i.e. the first 2 minutes of the Shuttle mission, and then follow the empty
booster shells until they fall to earth (booster splashdown).
We will be concerned with evaluating the upward velocity and height of the Shuttle throughout the 'launch' phase. It is believed that during this phase of extreme acceleration a piece of the
protective insulation broke off, causing the Columbia accident. We first derive the upward velocity equation. From Newton's second law, the upward acceleration of the Shuttle is given by:
$$\frac{dV}{dt} = \frac{F}{M}$$
where F is the total upward force (N)
M is the total mass of the Shuttle (kg)
V is the upward velocity (m/s)
t is the elapsed time (s).
The upward force is comprised of the thrust from both the booster rockets and the main orbiter engines, modified by the gravitational force, thus:
$$F = T_b + T_o - Mg$$
where Tb is the booster thrust (N)
To is the orbiter thrust (N)
g is the acceleration due to gravity (m/s/s)
The thrust is given by:
$$T = q\,v_f$$
where vf is the velocity of the exhaust fuel gases (m/s)
q is the rate at which fuel mass is ejected (kg/s)
At this stage we encounter a problem. The booster rockets and the orbiter engines operate under very different exhausting fuel velocities, hence their respective influence on the Shuttle motion need
to be evaluated separately. In order to simplify the analysis we assume (tongue in cheek) that we can linearly superpose the contribution of each engine system to the resulting Shuttle velocity.
Combining the above equations we obtain:
$$\frac{dV}{dt} = \frac{q_b v_{fb}}{M_t - q_b t} + \frac{q_o v_{fo}}{M_t - q_o t} - g$$
where the subscript b refers to the boosters and o refers to the orbiter main engines.
This equation can be rearranged in the form of a variables separable differential equation, thus:
$$dV = \left(\frac{q_b v_{fb}}{M_t - q_b t} + \frac{q_o v_{fo}}{M_t - q_o t} - g_m\right) dt$$
Each term in this equation can be separately integrated over the elapsed time, thus finally:
$$V = v_{fb} \ln\left(\frac{M_t}{M_t - q_b t}\right) + v_{fo} \ln\left(\frac{M_t}{M_t - q_o t}\right) - g_m t$$
where Mt is the total initial mass of the Space Shuttle Vehicle (including the solid booster fuel) (kg)
qb is the ejection rate of the exhausting fuel in the booster (kg/s)
qo is the ejection rate of the exhausting fuel in the orbiter (kg/s)
gm is the effective mean value of gravitational acceleration (m/s/s)
log( ) is the natural logarithm function; in the C++ language it is available in the math library (<cmath>).
The values of these parameters for a typical space mission are given in the figure above.
Digression: We normally treat g as a constant, having a value of 9.807 m/s^2. However this value varies with altitude according to the Law of Universal Gravitation, thus:
$$g_h = g_s \left(\frac{R_s}{R_s + h}\right)^2$$
where gs is the value of g at Earth's surface (9.807 m/s/s )
Rs is the radius of Earth (6.38 x 10^6 m)
h is the altitude above Earth (m)
gh is the value of g at altitude h ( m/s/s )
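As a quick sketch of this digression (the function and variable names here are my own, not the exercise's):

```cpp
#include <cmath>

// Gravitational acceleration (m/s^2) at altitude h (m) above Earth's surface,
// from the inverse-square law: gh = gs * (Rs / (Rs + h))^2.
double g_at_altitude(double h) {
    const double gs = 9.807;   // g at Earth's surface (m/s^2)
    const double Rs = 6.38e6;  // radius of Earth (m)
    return gs * std::pow(Rs / (Rs + h), 2.0);
}
```

At the roughly 45 km altitude reached around booster burnout, this reduces g by only about 1.4%, which is why the derivation above can get away with an effective mean value gm.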
In this exercise sequence we wish to consider both the velocity and height of the shuttle as a function of elapsed time. In particular we would like to determine the elapsed time to reach a specified
height. There is no easy explicit solution to this highly nonlinear problem, hence we will spend the entire quarter over six exercises developing a computer approach to its solution.
1. Write a program that will define a class Shuttle based on the equation given above to evaluate the upward velocity as a function of elapsed time from liftoff. The various parameters used in the
equation should be declared as private or public variables, as shown in the structure diagram below. We will use the values given in the figure, except for the total mass at liftoff (Mt). Choose a value of the total mass at liftoff (Mt) of 2abcdef kilograms, where abcdef are the 6 digits of your Oak id account. Thus, for an Oak id iu123456, the mass used should be 2,123,456 kg.
The user first enters the chosen total mass at liftoff. The main program then uses this value to construct an object of the class 'Shuttle', assigning and displaying the values of all the private and
public variables. Subsequently the main function should get a value of elapsed time from the keyboard, invoke a class function find_velo to evaluate the upward velocity, and display both on the
screen. The function find_velo should have only one argument, being the elapsed time in seconds. A flow diagram of the program is shown below:
The source code of your program should be in your home directory and named shuttle.cpp.
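Since the figure with the official parameter values is not reproduced here, the sketch below uses placeholder numbers of roughly the right magnitude for a real Shuttle; treat them as assumptions and substitute the figure's values. The class name Shuttle and the function name find_velo come from the exercise itself; the rest is one possible structure:

```cpp
#include <cmath>
#include <iostream>

class Shuttle {
private:
    double Mt;            // total mass at liftoff (kg), user-supplied
    // Illustrative placeholder values -- the exercise's figure gives the real ones.
    double vfb = 2600.0;  // booster exhaust velocity (m/s), assumed
    double vfo = 4400.0;  // orbiter exhaust velocity (m/s), assumed
    double qb  = 8400.0;  // booster fuel ejection rate (kg/s), assumed
    double qo  = 1400.0;  // orbiter fuel ejection rate (kg/s), assumed
    double gm  = 9.7;     // effective mean gravitational acceleration (m/s^2), assumed
public:
    explicit Shuttle(double total_mass) : Mt(total_mass) {
        std::cout << "Mt=" << Mt << " vfb=" << vfb << " vfo=" << vfo
                  << " qb=" << qb << " qo=" << qo << " gm=" << gm << '\n';
    }
    // Upward velocity (m/s) at elapsed time t (s), from the equation derived
    // above; valid only while fuel remains in both engine systems.
    double find_velo(double t) const {
        return vfb * std::log(Mt / (Mt - qb * t))
             + vfo * std::log(Mt / (Mt - qo * t))
             - gm * t;
    }
};

int main() {
    double Mt, t;
    std::cout << "Enter total mass at liftoff (kg): ";
    std::cin >> Mt;
    Shuttle shuttle(Mt);
    std::cout << "Enter elapsed time (s): ";
    std::cin >> t;
    std::cout << "t=" << t << " s, V=" << shuttle.find_velo(t) << " m/s\n";
    return 0;
}
```

Running the program once per time point gives the data needed for the velocity vs time graph in part 2.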
2. You will need to execute the program as many times as required to obtain enough points in order to plot a suitable velocity vs time graph. In order to evaluate the height at any value of elapsed
time t, we need to evaluate the relevant area under the velocity curve, since:
Probably the easiest method of evaluating an area is through counting the elemental squares enclosed by that area and multiplying the result by the width (seconds) and height (meters/second) of an
elemental square. This approach will be demonstrated in class.
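For checking the hand count of squares, a numerical version of the same area evaluation is sketched below; this previews the automation promised in the later exercises rather than being required here. It approximates the integral with the trapezoidal rule and accepts any velocity function:

```cpp
#include <functional>

// Trapezoidal-rule approximation of h(t), the area under the velocity curve
// from 0 to t. 'velocity' can wrap Shuttle::find_velo from the sketch above.
double find_height(const std::function<double(double)>& velocity,
                   double t, int steps = 1000) {
    double dt = t / steps;
    double area = 0.0;
    for (int i = 0; i < steps; ++i) {
        double t0 = i * dt;
        area += 0.5 * (velocity(t0) + velocity(t0 + dt)) * dt;  // one strip
    }
    return area;  // height in meters
}
```

For example, given a Shuttle object s, find_height([&s](double u) { return s.find_velo(u); }, 120.0) estimates the height near booster burnout.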
Once you have drawn the graph of height vs elapsed time, you should draw a horizontal line representing the height attained when the booster rockets have used up all their fuel (this will vary, depending on your chosen liftoff mass). You will first need to evaluate the elapsed time for all the fuel to be used up in the booster rockets.
Notice that in this exercise we use the computer in a very unsophisticated role - as a mere calculator. However, we have to start somewhere. In the coming exercises we will successively (and
joyfully) relinquish all of the manual processes above to the computer, including drawing the graphs and evaluating the integral (area under the curve) to determine the cumulative height of the shuttle. Obviously, we will always retain the creative manual process of designing and writing the program.