content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
How to estimate the number of recursive calls
December 10th, 2012, 08:23 PM #1
Junior Member
Join Date
Dec 2012
How to estimate the number of recursive calls that would be used by the code
public static double binomial(int N, int k, double p)
if ((N == 0) || (k < 0)) return 1.0;
return (1.0 - p)*binomial(N-1, k) + p*binomial(N-1, k-1);
to compute binomial(100, 50)???
I am looking for the formula that gives me the exact number of calls
I sent an email to writer of the book that I found this problem in it and he answered me:
Hint: calculate the number of calls for some small values of N and k and consider functions that involve N! (N factorial) and k! (k factorial).
but still nothing and I am confused
Re: How to estimate the number of recursive calls
So, which small values for N and k did you try?
Cheers, D Drmmr
Please put [code][/code] tags around your code to preserve indentation and make it more readable.
As long as man ascribes to himself what is merely a posibility, he will not work for the attainment of it. - P. D. Ouspensky
Re: How to estimate the number of recursive calls
So, which small values for N and k did you try?
I wrote a program that gives me the numbes but for (100,50) it takes too long
I am sure that there is a formul
Re: How to estimate the number of recursive calls
I wrote a program that gives me the numbes but for (100,50) it takes too long
I am sure that there is a formul
Of course, and the question is about finding that formula, rather than the specific answer.
Let's take the following approach: write a function f that expresses the number of recursive calls in terms of n and k.
Forgetting the check on the value of n and k for a minute, it should be obvious that f(n, k) = f(n - 1, k) + f(n - 1, k - 1).
You can expand this further in steps by substituting this formula for the elements on the right hand side. I.e. the second step yields f(n, k) = f(n - 2, k) + 2 * f(n - 2, k - 1) + f(n - 2, k -
Repeat this for a few steps and then have a look at this: http://en.wikipedia.org/wiki/Pascal's_triangle
That should help you to find a more general formula for f(n, k). Once you have that, you can consider how to introduce the if condition in the function.
Cheers, D Drmmr
Please put [code][/code] tags around your code to preserve indentation and make it more readable.
As long as man ascribes to himself what is merely a posibility, he will not work for the attainment of it. - P. D. Ouspensky
Re: How to estimate the number of recursive calls
I tried before all of this and I think no one knows the answer in the world and wanna give up because it takes my time too much and spent enough time for this problem.
number of recursive calls :
f(n, k) = f(n - 1, k) + f(n - 1, k - 1) + 1
Re: How to estimate the number of recursive calls
I tried before all of this and I think no one knows the answer in the world and wanna give up because it takes my time too much and spent enough time for this problem.
I think I have a solution.
Each call to f gives rise to two new calls to f recursively. One can think of the call pattern as an imaginary binary tree. Each call to f represents a node and the first f call in f goes to the
left child node and the second to the right child node.
If k is equal to n or bigger then the recursion will be limited by n only. The imaginary binary tree will be complete and have a depth of n. The number of nodes, or rather f calls, will be 2^
(n+1) - 1. Say n=1 for example that will give 3 f calls and if n=2 there are 7.
Now if k is made smaller than n it will start limiting the recursion. The imaginary binary tree will no longer be complete. Certain branches will be ignored because k becomes negative in some
call paths and that will stop the recursion before it reaches the maximum depth determined by n. This happens each time the second call to f in f has been called k+1 times along a call path (the
first call to f in f leaves k unchanged).
The question becomes: How many nodes, that is f calls, in the imaginary binary tree won't be visited because of k? If those are subtracted from the total number of nodes of the complete binary
tree we have the number of actual f calls. An analysis is somewhat involved. If a node (f call) in the complete binary tree is on a path below more than k+1 right side child nodes (calls to the
second f in f) then it will never be reached (because of the k limit) and should be subtracted from the total.
The result is simple and somewhat surprising. The number of never performed f calls can be found in Pascal's triangle! They're in a subtriangle defined by n and k. One simply add up all binominal
coefficients of that subtriangle and double the sum.
Pascal's triangle
1: 1 1
2: 1 2 1
3: 1 3 3 1
4: 1 4 6 4 1
5: 1 5 10 10 5 1
6: 1 6 15 20 15 6 1
7: ...
The subtriangle for f(n,k) is from k+1 to n-1. To show how the subtriangle is extracted is easiest with an example.
If n=4 and k=1 the subtriangle is from 2 to 3 and looks like this:
Subtriangle (from k+1=2 to n-1=3)
2: 1
3: 1 3
All coefficients are added and the sum is doubled. That's the number of f calls that should be removed from the maximum total (which is 2^(n+1) - 1 according to the formula above). So we have f
(4,1) = (2^(4+1) - 1) - 2*(1 + 1+3) = (32 - 1) - 2*5 = 21.
If instead n=6 and k=2 one gets a subtriangle from 3 to 5 like:
Subtriangle (from k+1=3 to n-1=5)
3: 1
4: 1 4
5: 1 5 10
And f(6,2) = (2^(6+1) - 1) - 2*(1 + 1+4 + 1+5+10) = (128-1) - 2*22 = 83.
Since Pascal's triangle can be calculated in quadratic time an f(n,k) algorithm based on that will be O(N^2). On the other hand binominal coefficients quickly become very large so f(100, 50) is
hardly tractable with standard hardware supported arithmetics.
Last edited by nuzzle; December 13th, 2012 at 06:54 AM.
I think I have a solution.
Each call to f gives rise to two new calls to f recursively. One can think of it as a binary tree. Each call to f represents a node and the first f call in f goes to the left and the second to
the right.
If k is equal to n or bigger then the recursion will be limited by n only. The imaginary binary tree will be complete and have a depth of n. The number of nodes, or rather f calls, will be 2^
(n+1) - 1. Say n=1 for example that will give 3 f calls.
Now if k is made smaller than n it will start limiting the recursion. The imaginary binary tree will no longer be complete. Certain branches will be ignore because k has become smaller than 0 for
certain paths and the recursion will stop until it reaches the maximum depth sey by n. This happens each time the second call to f in f has been called k+1 times because then k becomes negative
(the first call to f in f leaves k unchanged)
The question now becomes: How many nodes, that is f calls, in the imaginary binary tree won't be visited because of k? If those are subtracted from the total number of nodes of the complete
binary tree we have the number of actual f calls.
An analysis is somewhat involved. If a node is reached after more than k+1 "right child" calls to f then it should be subtracted from the total. But the result is simple and somewhat surprising.
The number of not performed f calls can found in Pascal's triangle. One simply cuts out a subtriangle and add up all coefficients.
Pascal's triangle
1: 1 1
2: 1 2 1
3: 1 3 3 1
4: 1 4 6 4 1
5: 1 5 10 10 5 1
6: 1 6 15 20 15 6 1
Now if n=4 and k=1 you cut out this subtriangle from k+1 to n-1.
Subtriangle (cut from k+1=2 to n-1=3)
2: 1
3: 1 3
You add up the numbers and double the sum. That's the number of nodes (f calls) that should be removed from the total which is 2^(n+1) - 1 according to the formula above. So we have f(4,1) = (2^
(4+1) - 1) - 2*(1 + 1+3) = (32 - 1) - 2*5 = 21.
If instead n=6 and k=2 you get,
Subtriangle (cut from k+1=3 to n-1=5)
3: 1
4: 1 4
5: 1 5 10
And f(6,2) = (2^(6+1) - 1) - 2*(1 + 1+4 + 1+5+10) = (128-1) - 2*22 = 83.
Since Pascal's triangle can be calculated in quadratic time the algorithm is O(N^2). On the other hand binominal coefficients quickly become very large so f(100, 50) is hardly tractable with
standard arithmetic.
thank for your answer
but there is only one thing remains
I sent an mail to the writer of the book that I found this problem and he answered me
Hint: calculate the number of calls for some small values of N and k and consider functions that involve N! (N factorial) and k! (k factorial).
where is the factorial in this formula?
Re: How to estimate the number of recursive calls
where is the factorial in this formula?
Pascal's triangle consists of binominal coefficients (n|k) defined as n! / (k! * (n-k)!) so there are lots of factorials in the solution. I wouldn't say that hinting at factorials is such a great
help really but maybe the author has some other solution approach in mind where they play a more prominent role.
In my solution binominals appear because they have a combinatorial interpretation. (n|k) tells how many ways there are to order k of n elements. There may be a deeper reason why they appear as a
subtriangle of Pascal's triangle or maybe it's just coincidental. I couldn't say but it's interesting.
Last edited by nuzzle; December 12th, 2012 at 06:05 AM.
Re: How to estimate the number of recursive calls
The entries in Pascal's triangle can be expressed in terms of factorials ( https://en.wikipedia.org/wiki/Pascal...e#Combinations ). By nuzzle's analysis, you are summing over entries in the
triangle. Presumably there is some analytical solution.
Best Regards,
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: How to estimate the number of recursive calls
To follow up: possibly of interest, though perhaps at an excessive level of detail: http://mathworld.wolfram.com/BinomialSums.html
Best Regards,
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: How to estimate the number of recursive calls
Presumably there is some analytical solution.
Maybe the sum of binominals I have in my solution can be reduced to factorials but computationally it may be better to keep the binominals. It's because of the relationship with Pascal's triangle
in this case. Factorials becomes awfully big very quickly and require lots of multiplications to compute. Binominals on the other hand stay smaller and if the recurrent Pascal's triangle
algorithm is used only additions are needed. For example the factorial 20! is 2432902008176640000 while the binominal (20|10) is just 184756.
I looks like the OP is using Java and it sports a standard class for arbitrary-precision integers called BigInteger. Using BigInteger in combination with the very efficient recurrent Pascal's
triangle algorithm would make it possible I think to find the exact solution to even something as big as f(100,50) in reasonable time.
Last edited by nuzzle; December 13th, 2012 at 06:56 AM.
Re: How to estimate the number of recursive calls
Maybe the sum of binominals I have in my solution can be reduced to factorials but computationally it may be better to keep the binominals. It's because of the relationship with Pascal's triangle
in this case. Factorials becomes awfully big very quickly and require lots of multiplications to compute. Binominals on the other hand stay smaller and if the recurrent Pascal's triangle
algorithm is used only additions are needed. For example the factorial 20! is 2432902008176640000 while the binominal (20|10) is just 184756.
I looks like the OP is using Java and it sports a standard class for arbitrary-precision integers called BigInteger. Using BigInteger in combination with the very efficient recurrent Pascal's
triangle algorithm would make it possible I think to find the exact solution to even something as big as f(100,50) in reasonable time.
I know what you mean but our purpose is to find a formula not a better way to calculate binomial
we can say in formula 50! and don't need to calculate it because it is formula
Re: How to estimate the number of recursive calls
I know what you mean but our purpose is to find a formula not a better way to calculate binomial
we can say in formula 50! and don't need to calculate it because it is formula
Well, in the outset you were desperate for even a solution to this problem and now you've been presented with both an analytical formula and an algorithm for its efficient calculation.
I'm sure you can take it from here, can't you?
Well, in the outset you were desperate for even a solution to this problem and now you've been presented with both an analytical formula and an algorithm for its efficient calculation.
I'm sure you can take it from here, can't you?
yeah I think I can do
many thank to you
Re: How to estimate the number of recursive calls
This is the biggest integer I've ever calculated,
f(2000, 1000) =
I took just a second on an ordinary desktop. Quite amazing.
The code is written in Java using BigInteger and the recurrent Pascal's triangle algorithm.
import java.math.BigInteger;
import java.util.ArrayList;
public class Main {
// the rows of Pascal's triangle one by one
static class PascalRows extends ArrayList<BigInteger> {
{ add(BigInteger.ONE);}
public void nextRow() {
BigInteger f = get(0);
for (int i=1,N=size(); i<N; i++) {
final BigInteger s = get(i);
set(i, f.add(s));
f = s;
// sum of all binominals of a subtriangle of Pascal's triangle
static BigInteger subTriangle(int n, int k) {
BigInteger sum = BigInteger.ZERO;
final PascalRows pascal = new PascalRows();
for (int i=0; i<k; i++) pascal.nextRow();
for (int i=0,N=n-k-1; i<N; i++) {
for (int j=0; j<=i; j++) sum = sum.add(pascal.get(j));
return sum;
// f(n,k) = 2 * (2^n - subTriangle(n,k)) - 1
static BigInteger f(int n, int k) {
return BigInteger.ONE.shiftLeft(n).subtract(subTriangle(n,k)).shiftLeft(1).subtract(BigInteger.ONE);
public static void main(String[] args) {
System.out.println("Hello World!");
This was a nostalgic trip down memory lane. I haven't used Java or Eclipse for years.
BigInteger is something of a fossil. It's the only surviving reminder of a proud initiative called Java Grande. It was active at the peak of the Java hype age when there still was no limit to
what Java could and should be used for. So much effort and now it's just a faint memory in the senile minds of a few elders who still remember Java in its hayday. So sad.
But BigInteger still is nice and handy. I wish Boost offered an arbitrary precision arithmetics package (at least I haven't seen any).
Last edited by nuzzle; December 18th, 2012 at 03:46 AM.
December 11th, 2012, 04:14 AM #2
Senior Member
Join Date
Jul 2005
December 11th, 2012, 06:13 AM #3
Junior Member
Join Date
Dec 2012
December 11th, 2012, 08:03 AM #4
Senior Member
Join Date
Jul 2005
December 11th, 2012, 08:11 AM #5
Junior Member
Join Date
Dec 2012
December 11th, 2012, 05:39 PM #6
Elite Member
Join Date
May 2009
December 11th, 2012, 06:11 PM #7
Junior Member
Join Date
Dec 2012
December 12th, 2012, 01:26 AM #8
Elite Member
Join Date
May 2009
December 12th, 2012, 01:27 AM #9
Join Date
Feb 2011
United States
December 12th, 2012, 02:56 AM #10
Join Date
Feb 2011
United States
December 13th, 2012, 01:06 AM #11
Elite Member
Join Date
May 2009
December 13th, 2012, 10:24 AM #12
Junior Member
Join Date
Dec 2012
December 13th, 2012, 04:14 PM #13
Elite Member
Join Date
May 2009
December 13th, 2012, 04:54 PM #14
Junior Member
Join Date
Dec 2012
December 16th, 2012, 12:24 AM #15
Elite Member
Join Date
May 2009
|
{"url":"http://forums.codeguru.com/showthread.php?530637-How-to-estimate-the-number-of-recursive-calls&p=2096181","timestamp":"2014-04-25T04:12:25Z","content_type":null,"content_length":"175246","record_id":"<urn:uuid:3d009a78-ae1b-4251-b286-3dc832d8204e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Array of ranges using OFFSET formula - with examples - E90E50fx
In this post we would like to illustrate the strange behaviour of OFFSET function described by Dick Kusleika in the
Array with offset article
The feature we are focusing is: “...when an array is used as the 2nd argument of OFFSET... an array of RANGES is returned.” The statement may be generalized: arrays could be used in the rows, cols,
height or width parameter too.
For example this formula uses array for the rows parameter:
=OFFSET( A1:A10 , {0;1;2} , ,3 )
if array-entered, results these 3-element ranges:
While this one uses array for height:
=OFFSET( A1:A10 , , , {2;3;4} )
and the result is:
Unfortunately Excel is not capable to display array of ranges, and only a few formula is able to operate correctly by them. One of these formulas is SUBTOTAL - using one of it’s 11 different
functions, it is able to evaluate the ranges separately and gives the expected result.
Darts challenge example
The first example will illustrate how OFFSET function could be applied for the
Darts challenge
posted by us a few months ago.
The base question was how to calculate the total square of the sum of k-element groups. Knowing the above mentioned fact about OFFSET the question is answered. We only need the k-element arrays and
SUBTOTAL them with parameter 9. than square, sum and finished. The only little trick is that we need “circular” k-element groups, so for example in case k=3 we should have a group where we add the
very last element of the range and the first two, and another group where we add the last two elements and the very first. In general, this “circulation” applies for the last k-1 elements of the
This is why two subtotals are used in our solution. One will summarize k-element groups without circulation, so will not add the elements from the beginning of the list. The second subtotal will
create additional groups from the beginning of the list - only for the elements where “circulation” needed. Adding this two array-results together to have the sum of the k-element groups.
E voilá, the formula:
=SUM( (SUBTOTAL(9,OFFSET(rng,MAX(ROW(rng))-ROW(rng),,k)) + IF( ROW(rng)-MIN(ROW(rng))<k-1 , SUBTOTAL(9,OFFSET(rng,,,k-ROW(INDIRECT("1:"&k)))) ) )^2)
First subtotal with parameter 9 to sum k-element groups. The second (rows) parameter of OFFSET is an array, it will go through the range from the bottom to the top and create array of k-element
Second subtotal with pRank formula examplearameter 9 to sum the elements needed for the circular ranges for the last k-1 rows - nested into an IF function. The array is in the 3rd (height) parameter
of OFFSET, this will create the additional groups needed for the “circulation”.
You can check it out in the example file.
Rank formula example
Another function capable to deal with the array of ranges is COUNTIF. This formula is not available via the parameters of subtotal, but fortunately it could work together beautifully with offset.
Sometimes it is a problem that RANK function does not take into account ties, so duplicate numbers will have the same rank - similarly as in the Olympic Games: in case of tie on the first place, two
gold medals are distributed, but no silver, so rank 2 will not appear.
In Excel 2010 MS introduced two new versions of Rank: RANK.AVG uses the average rank in case of duplicated values, RANK.EQ returns the top rank of the set of the duplicated numbers (as it is in the
old RANK).
None of these functions solve the case when you want to take the ties into account in a way looking the position of the number within the range.
The task is very easy, only need to add a correction to rank: count how many times the number appears in the list before actual cell.
In the below example the formula in B12 will be:
=RANK(B2,n_data,1)+COUNTIF( $B$2:B2 , B2 )-1
Copying across the columns it will nicely do the task.
However many times we need the result in array, so this copy-formula is not a solution. Using the features of OFFSET we can create an array formula which will result an array of the adjusted rank
numbers. The formula is array-entered to the range B14:G14
=RANK(n_data,n_data,1) + COUNTIF( OFFSET(n_data,,,,COLUMN(n_data)-MIN(COLUMN(n_data))+1), OFFSET(n_data,,COLUMN(n_data)-MIN(COLUMN(n_data)),,1) )-1
In this formula we use the same COUNTIF as above.
The first offset creates the array of ranges as the $B$2:B2 part works when copying the first formula across the columns. Array is used in the width parameter of OFFSET.
The second offset picks up the values one by one from the range similarly as B2 goes through the range. Array is used in the column parameter of OFFSET and creates one element ranges.
You can learn more from the
example file
where you can also find the “descending” version of the modified rank formula.
The FrankensTeam
István Orosz
|
{"url":"https://sites.google.com/site/e90e50fx/home/arrayofrangesusingoffsetformula-withexamples","timestamp":"2014-04-16T16:18:31Z","content_type":null,"content_length":"62560","record_id":"<urn:uuid:77f78682-6d3a-4bcf-b472-429fd804e9d6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
tetris-like combinatorial problem
Thank you Matterwave
only a brute force counting is apparent to me at this point
I started like that, using element size as rule. I reduced all possible locations of the element a) as 2 (border and middle):
XaXX aXXX
XaXX aXXX
XaXX aXXX
XaXX aXXX
These by 4 possible rotations, and by 2 possible mirror-like turns (not counting diagonal symmetries, but not sure if I should):
e.g. rotation of 1 by 90° to the right:
Then, in the remaining space of these two (X's), I found 6 possible arrangements of the element b) also by 4 rotations, and by 2 mirror turns. E.g.:
Then, using the elements c), was able to find 116 possible arrangements of the combination c) - d). This gives me 116 x 48 x 16 = 89,088 combinations... but now the problem is to be sure that 116 is
the right number, or to eliminate similarities between groups (e.g. in the group of elements a) some symmetries are similar to some rotations).
Any help/comment is more than welcomed... thanks
|
{"url":"http://www.physicsforums.com/showthread.php?p=3872920","timestamp":"2014-04-19T15:16:09Z","content_type":null,"content_length":"47874","record_id":"<urn:uuid:676a17ba-cc72-448c-8855-f26ea53aff49>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
6.02 Quiz #2 Review Problems
Problem . Consider an LTI system characterized by the unit-sample response h[n].
A. Given an expression for the frequency response of the system H(e^jΩ) in terms of h[n].
H(e^jΩ) = &Sigma[m] h[m]e^-jΩm
B. If h[0]=1, h[1]=0, h[2]=1, and h[n]=0 for all other n, what is H(e^jΩ)?
H(e^jΩ) = 1 + e^-2jΩ
C. Let h[n] be defined as in part B and x[n] = cos(Ω[1])n). Is there a value of &Omega[1] such that y[n]=0 for all n and 0 ≤ &Omega[1] ≤ π?
Ω[1] = π/2
D. Find the maximum magnitude of y[n] if x[n] = cos(πn/4). Maximum magnitude of y[n] is sqrt(2).
E. Find the maximum magnitude of y[n] if x[n] = cos(-πn/2). Maximum magnitude of y[n] is 0.
Problem . Suppose we want to design a filter whose frequency response is zero at Ω[1] = 2π/3 and Ω[2] = 3π/4.
A. Make a qualitative sketch of the filter's frequency response. Would you characterize the filter as low-pass or high-pass? This is a low-pass filter.
B. Let h[n] be the unit-sample response of the filter. For what value of N is h[n] = 0 for n ≥ N? 5. To achieve the desired frequency response we need a response that has zeros at e^j2Ω/3, e^-j2Ω/3,
e^j3Ω/4, and e^-j3Ω/4, i.e., a response of the form
H(e^jΩ) = h[0] + h[1]z + h[2]z^2 + h[3]z^3 + h[4]z^4
C. Derive the unit-sample response of the filter. Don't worry about scaling the response (yet). We use two single-zero filters in series, each with a unit-sample response of the form h =
[1,-2cosΩ,1]. The overall unit-sample response can be calculated by convolving the two single-zero responses.
h[0] = 1
h[1] = 1 + sqrt(2)
h[2] = 2 + sqrt(2)
h[3] = 1 + sqrt(2)
h[4] = 1
h[n] = 0 for all other n
D. What is the appropriate scale factor for the unit-sample response above that will ensure that H(e^jΩ) ≤ 1. The maximum magnitude of the filter response occurs at Ω=0. We want to scale the
response so that H(e^j0) = 1. In this case z = e^j0 = 1 so
H(e^j0) = 1 + (1+sqrt(2)) + (2+sqrt(2)) + (1+sqrt(2)) + 1 = 6 + 3*sqrt(2)
The appropriate scale factor for the h[n] is the reciprocal of this value.
E. To improve the quality of the filter, we want to add a pole at Ω = π/2. Choose an appropriate value for r, the magnitude of the pole, and give the difference equation for the combined 2-zero,
1-pole filter. The difference equation for the combined system is
a0*y[n] + a1*y[n-1] + a2*y[n-2] = h[0]*x[n] + h[1]*x[n-1] + h[2]*x[n-2] + h[3]*x[n-3] + h[4]*x[n-4]
where the h[n] were derived in part C and, with r = .9,
a[0] = 1/r^2 = 1/.81
a[1] = -(2/r)cosΩ = 0
a[2] = 1
Problem . In this problem we explore the form of modulation/demodulation for a class of signals known as bandpass signals. Consider a signal x[n] that repeats periodically with period N=8001 and has
purely real Fourier coefficients X[k] shown below.
We wish to transmit the signal x[n] over a communication channel. The channel acts as a high-pass filter with frequency response C[k] that is 0 for k ≤ 1001 and is 1 elsewhere as shown below:
We would like to use synchronous modulation/demodulation in a way that does not require the receiver to have its own local oscillator that would have to maintain phase synchronization with the
transmitter. The transmitter uses the modulation system depicted below (Ω[1] = 2π/8001):
A. Sketch Z[k] and R[k], the Fourier coefficients of the signals z[n] and r[n], respectively. Clearly label your sketches. When modulating a signal that has purely real Fourier coefficients by a
sine wave which has purely imaginary coefficients, we get a signal that has purely imaginary coefficients. The resulting spectrum will have two copies of X[k]: one scaled by jN/2 centered around
k=-1500, and one scaled by -jN/2 centered around k=+1500. The addition of the sine wave itself to the signal adds two non-zero coeffients at k=-1500 and k=1500:
You can get R[k] by applying the high-pass filter C[k] to Z[k]. Note that R[k] has both (1) part of the original signal, X[k], and (2) the carrier signal, sin(1500Ω[1]n).
B. The receiver/demodulator that we would like to use has the following structure:
Where H1[k] is an ideal high-pass filter shown below:
Using ideal low-pass, high-pass, and/or band-pass filters, sketch the frequency responses H2[k] and H3[k], as well as the index of the cutoff frequency, L, for H1(f) so that y[n]=x[n]. Also
sketch V[k], the Fourier coefficients of the intermediate signal v[n]. The idea here is to use H1[k] and H2[k] to separate the signal from the carrier contained in R[k]. Given H1[k] is a
high-pass filter, we can use it to remove the carrier part and retain the signal part, as shown below. Choose L in the range 1501-1999; let's use L=1750.
To get the carrier part, we can need to remove the signal part. Thus we can use either a band-pass filter or a low-pass filter for H2[k]. Let's use a band-pass filter centered at the carrier
frequency, k=±1500, shown below:
Now that the signal and carrier parts have been separated, we can use the carrier part to demodulate the signal part back to baseband by multiplying the carrier and signal parts together. Recall
that this is the same as frequency-shifting the signal part by ±1500 and scaling by ±1/(2j). The result, v[n], has the Fourier coefficients V[k] shown below:
This is almost the same as the original signal's Fourier coefficients X[k], except that (1) the amplitude is scaled by 1/4, and (2) there are some additional high frequency components . We can
use the filter H3[k] to correct for these things. A low-pass filter with a gain of 4 and a cutoff at k=±1000 will do the job:
Problem . Consider a binary convolutional code specified by the generators (1011, 1101, 1111).
A. What are the values of
a. constraint length of the code
b. rate of the code
c. number of states at each time step of the trellis
d. number of branches transitioning into each state
e. number of branches transitioning out of each state
f. number of expected parity bits on each branch
a. 4
b. 1/3
c. 2^4-1 = 8
d. 2
e. 2
f. 3
A 10000-bit message is encoded with the above code and transmitted over a noisy channel. During Viterbi decoding at the receiver, the state 010 had the lowest path metric (a value of 621) in the
final time step, and the survivor path from that state was traced back to recover the original message.
B. What is the likely number of bit errors that are corrected by the decoder? How many errors are likely left uncorrected in the decoded message? 621 errors were likely corrected by the decoder to
produce the final decoded message. We cannot infer the number of uncorrected errors still left in the message absent more information (like, say, the original transmitted bits).
C. If you are told that the decoded message had no uncorrected errors, can you guess the approximate number of bit errors that would have occured had the 10000 bit message been transmitted without
any coding on the same channel? 3*10000 bits were transmitted over the channel and the received message had 621 bit errors (all corrected by the convolutional code). Therefore, if the 10000-bit
message would have been transmitted without coding, it would have had approximately 621/3 = 207 errors.
D. From knowing the final state of the trellis (010, as given above), can you infer what the last bit of the original message was? What about the last-but-one bit? The last 4 bits? The state gives
the last 3 three bits of the original message. In general, for a convolutional code with a constraint length k, the state indicates the final k-1 bits of the original message. To determine more
bits we would need to know the states along the most-likely path as we trace back through the trellis.
Consider a transition branch between two states on the trellis that has 000 as the expected set of parity bits. Assume that 0V and 1V are used as the signaling voltages to transmit a 0 and 1
respectively, and 0.5V is used as the digitization threshold.
E. Assuming hard decision decoding, which of the two set of received voltages will be considered more likely to correspond to the expected parity bits on the transition: (0V, 0.501V, 0.501V) or (0V,
0V, 0.9V)? What if one is using soft decision decoding? With hard decision decoding: (0V, 0.501V, 0.501V) -> 011 -> hamming distance of 2 from expected parity bits. (0V, 0V, 0.9V) -> 001 ->
hamming distance of 1. Therefore, (0V, 0V, 0.9V) is considered more likely.
With soft decision decoding, (0V, 0.501V, 0.501V) will have a branch metric of approximately 0.5. (0V, 0V, 0.9V) will have a metric of approximmately 0.8. Therefore, (0V, 0.501V, 0.501V) will be
considered more likely.
Problem . Indicate whether each of the statements below is true or false, and a brief reason why you think so.
A. If the number states in the trellis of a convolutional code is S, then the number of survivor paths at any point of time is S. True. There is one survivor per state.
The path metric of a state s1 in the trellis indicates the number of residual uncorrected errors left along the trellis path from the start state to s1. False. It indicates the number of likely
corrected errors.
B. Among the survivor paths left at any point during the decoding, no two can be leaving the same state at any stage of the trellis. False. In fact, the survivor paths will likely merge at a certain
stage in the past, at which point all of then will emerge from the same state.
C. Among the survivor paths left at any point during the decoding, no two can be entering the same state at any stage of the trellis. True. When two paths merge at any state, only one of them will
ever be chosen as a survivor path.
D. For a given state machine of a convolutional code, a particular input message bit stream always produces the same output parity bits. False. The same input stream with different start states will
produce different output parity bits.
Problem . Consider an LTI system A that has the following difference equation relationship between its input samples x[n] and output samples y[n]:
y[n] = (1/4) * (x[n] - 2x[n-1] + x[n-2])
A. What is the unit sample response of the system?
h[0] = 1/4
h[1] = -1/2
h[2] = 1/4
h[n] = 0 for all other n
B. Write an expression for the frequency response of this system as a polynomial in z = e^-jΩ. Using the unit sample response, one can derive the frequency response as H(z) = (1/4) * (z^2 - 2z + 1).
C. For which frequencies does this system serve as a notch filter? In other words, for what values of Ω is the frequency response of the system equal to zero? The frequency response of the system is
zero at z = 1, or e^-jΩ = 1, or Ω = 0. Thus this system eliminates the zero frequency (DC component) in any signal.
D. What is response of the system to the input x[n] = 1^n? Ans: 1^n is equivalent to the complex exponential e^jΩn for Ω=0. Because the frequency response of the system is zero at Ω=0, the output of
the system is y[n]=0 for all n. One can also verify this by using flip-and-slide to convolve 1^n with the unit sample response.
Now consider a second LTI system B whose unit sample response is given by
h[n] = (1/2)^n for n ≥ 0
h[n] = 0 for n < 0
E. Express the relationship between the inputs x[n] and outputs y[n] of the system in the form of a difference equation. From the standard convolution equation we get
y[n] = Σ[m] h[m] x[n-m]
= 1*x[n] + (1/2)*x[n-1 + (1/4)*x[n-2] + ...
= x[n] + (1/2)*y[n-1]
Thus we have y[n] - 0.5*y[n-1] = x[n]
Note that because the unit sample response is infinite, we cannot write a finite difference equation using x[n] alone, as we did for system A above, and have to use a feedback term y[n-1].
F. Write an expression for the frequency response of this system as a polynomial in z = e^-jΩ. style="display:none"> Substitute x[n] = e^jΩn and y[n] = H(e^jΩ)e^jΩn into the difference equation
above, divide by e^jΩn on both sides, and rearrange (as we did in class), to get the following expression:
H(z) = 1/(1-0.5*z) where z = e^-jΩ
G. What is the frequency response of this system at Ω = 0, Ω = π/2, and Ω = π.
Ω = 0 => z = 1 => H(z) = 2
Ω = π/2 => z = -j => H(z) = 1/(1+0.5j)
Ω = π => z = -1 => H(z) = 2/3
H. For which frequencies does this system serve as a notch filter? In other words, for what values of Ω is the frequency response of the system equal to zero? The frequency response of this system
is nonzero for all frequencies.
Now suppose systems A and B are connected in series, i.e., the output of system A is fed as input to system B.
I. What is the difference equation that relates the input of the combined system x[n] with its output y[n]? y[n] - 0.5*y[n-1] = (1/4) * (x[n] - 2x[n-1] + x[n-2])
J. What is the unit sample response of the combined system? The unit sample response of the combined system is obtained by convolving the individual unit sample responses (using, say, flip and
slide). Note that the unit sample response of the combined system is not finite because the difference equation has feedback terms.
h[n] = 0 for n < 0
h[0] = 1/4
h[1] = -3/8
h[n] = (1/4)*(1/2)^n - (1/2)*(1/2)^n-1 + (1/4)*(1/2)^n-2 for n ≥ 2
= (1/2)^n+2
K. Write down the frequency response of the combined system in terms of a polynomial in z = e^-jΩ. H(z) = (1/4)*(z^2 - 2z + 1) / (1 - 0.5*z)
L. What is the response of the combined system to the input x[n] = (-1)^n?
(-1)^n = e^jΩn for Ω = π
At Ω = π, z = -1, H(z) = 2/3. Therefore, output y[n] = (2/3)*(-1)^n.
Problem . Suppose management has decided to use 20-bit data blocks in the company's new (20,k,3) error correcting code. What's the maximum value of k that will permit the code to be used for single
bit error correction, i.e., that will achieve a minimum Hamming distance of 3 between code words? n and k must satisfy the constraint that n ≤ 2^n-k-1. If n=20 then a little trial-and-error search
finds k=15. Problem . Consider the following (n,k,d) block code:
D0 D1 D2 D3 D4 | P0
D5 D6 D7 D8 D9 | P1
D10 D11 D12 D13 D14 | P2
P3 P4 P5 P6 P7 | P8
where D0-D14 are data bits, P0-P2 are row parity bits, P3-P7 are column parity bits and P8 is the parity of all the other bits in the message. The transmitted code word will be:
D0 D1 D2 ... D13 D14 P0 P1 ... P6 P7 P8
A. Please give the values for n, k, d for the code above. n=24, k=15, d=4.
To see why d is 4, consider two non-identical 15-bit messages, A and B:
□ If hamming(A,B)=1, one of the data bits has flipped. That means that three of the parity bits will be different: the row and column parity for the flipped data bit, and the overall parity
bit. So there's a total of 4 bits that change when the message changes by 1 bit. In this case the Hamming distance between the original and new code words is 4.
□ If hamming(A,B)=2, two of the data bits have flipped. In the worst case, if the two bits are in the same row, two of the column parity bits will be different; if they are in the same column,
two of the row parity bits will be different. Again at least of 4 bits change and again the Hamming distance between the original and new code words will be 4.
□ If hamming(A,B)=3, three of the data bits have flipped. In the worst case, the bits are positioned so that two of them are in the same row (so the row parity doesn't change) and two of them
are in the same column (so the column parity doesn't). But the overall parity bit will be different, but that's still a total of 4 changed bits, so the two resulting code words have a Hamming
distance of 4.
□ If hamming(A,B)≥4, the two code words will of course have a Hamming distance of at least 4.
Conclusion: Code words produced from different 15-bit messages have a Hamming distance of at least 4.
B. If D0 D1 D2 ... D13 D14 = 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, please compute P0 through P8. P0 through P8 = 0 0 0 1 0 0 1 0 0
C. Now we receive the four following code words:
M1: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 0 0 0 1 1 0 1 0 0
M2: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 0 0 0 1 1 0 1 0 1
M3: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 1 0 0 1 1 0 1 0 0
M4: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 1 0 0 1 1 0 1 0 1
For each of received code words, indicate the number of errors. If there are errors, indicate if they are correctable, and if they are, what the correction should be.
0 1 0 1 0 | 0 | 0
0 1 0 0 1 | 0 | 0
1 0 0 0 1 | 0 | 0
1 1 0 1 0 | 0
The syndrome bits are shown in red. There are two errors detected, one in P4 and one in P8. The P8 error indicates that there's a single-bit error in the other message bits; and since there's
only one other error -- the parity bit P4 -- it must be that parity bit itself that had the error. So the correct data bits are:
M1: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1
0 1 0 1 0 | 0 | 0
0 1 0 0 1 | 0 | 0
1 0 0 0 1 | 0 | 0
1 1 0 1 0 | 1
The syndrome bits are non-zero but P8 is okay, which indicates an uncorrectable multi-bit error.
0 1 0 1 0 | 1 | 1
0 1 0 0 1 | 0 | 0
1 0 0 0 1 | 0 | 0
1 1 0 1 0 | 0
Again the syndrome bits are non-zero but P8 is okay, which indicates an uncorrectable multi-bit error.
0 1 0 1 0 | 1 | 1
0 1 0 0 1 | 0 | 0
1 0 0 0 1 | 0 | 0
1 1 0 1 0 | 1
Here the error in P8 indicates a correctable single-bit error, and the row and column parity bit indicate the data bit which got flipped (D1). Flipping it gives us the correct data bits:
M4: 0 0 0 1 0, 0 1 0 0 1, 1 0 0 0 1
Problem . Consider two convolutional coding schemes - I and II. The generator polynomials for the two schemes are
Scheme I: G0 = 1101, G1 = 1110
Scheme II: G0 = 110101, G1 = 111011
Notation is follows: if the generator polynomial is, say, 1101, then the corresponding parity bit for message bit n is
x[n] xor x[n-1] xor x[n-3]
where x[n] is the message sequence.
A. Indicate TRUE or FALSE
a. Code rate of Scheme I is 1/4.
b. Constraint length of Scheme II is 4.
c. Code rate of Scheme II is equal to code rate of Scheme I.
d. Constraint length of Scheme I is 4.
The code rate (r) and constraint length (k) for the two schemes are
I: r = 1/2, k = 4
II: r = 1/2, k = 6
a. false
b. false
c. true
d. true
B. How many states will there be in the state diagram for Scheme I? For Scheme II? Number of states is given by 2^k-1 where k = constraint length. Following the convention of state machines as
outlined in Lecture 10, number of states in Scheme I is 8 and in Scheme II, 32.
C. Which code will lead to a lower bit error rate? Why? Scheme II is likely to lead to a lower bit error rate. Both codes have the same code rate but different constraint lengths. So Scheme II
encodes more history and since it is less likely that 6 trailing bits will be in error vs. 4 trailing bits, II is stronger.
D. Alyssa P. Hacker suggests a modification to Scheme I which involves adding a third generator polynomial G2 = 1001. What is the code rate r of Alyssa's coding scheme? What about constraint length
k? Alyssa claims that her scheme is stronger than Scheme I. Based on your computations for r and k, is her statement true? For Alyssa's scheme r = 1/3, k = 4. Alyssa's code has a lower code rate
(more redundancy), and given then she's sending additional information, the modified scheme I is stronger in the sense that more information leads to better error detection and correction.
Problem . Consider a convolution code that uses two generator polynomials: G0 = 111 and G1 = 110. You are given a particular snapshot of the decoding trellis used to determine the most likely
sequence of states visited by the transmitter while transmitting a particular message:
A. Complete the Viterbi step, i.e., fill in the question marks in the matrix, assuming a hard branch metric based on the Hamming distance between expected an received parity where the received
voltages are digitized using a 0.5 V threshold. The digitized received parity bits are 1 and 0.
For state 0:
PM[0,n] = min(PM[0,n-1]+BM(00,10), PM[1,n-1]+BM(10,10)) = min(1+1,0+0) = 0
Predecessor[0,n] = 1
For state 1:
PM[1,n] = min(PM[2,n-1]+BM(11,10), PM[3,n-1]+BM(01,10)) = min(2+1,3+2) = 3
Predecessor[1,n] = 2
For state 2:
PM[2,n] = min(PM[0,n-1]+BM(11,10), PM[1,n-1]+BM(01,10)) = min(1+1,0+2) = 2
Predecessor[2,n] = 0 or 1
For state 3:
PM[1,n] = min(PM[2,n-1]+BM(00,10), PM[3,n-1]+BM(10,10)) = min(2+1,3+0) = 3
Predecessor[1,n] = 2 or 3
B. Complete the Viterbi step, i.e., fill in the question marks in the matrix, assuming a soft branch metric based on the square of the Euclidean distance between expected an received parity
voltages. Note that your branch and path metrics will not necessarily be integers.
For state 0:
PM[0,n] = min(PM[0,n-1]+BM([0,0],[0.6,0.4]), PM[1,n-1]+BM([1,0],[0.6,0.4])) = min(1+0.52,0+.32) = .32
Predecessor[0,n] = 1
For state 1:
PM[1,n] = min(PM[2,n-1]+BM([1,1],[0.6,0.4]), PM[3,n-1]+BM([0,1],[0.6,0.4])) = min(2+0.52,3+0.72) = 2.52
Predecessor[1,n] = 2
For state 2:
PM[2,n] = min(PM[0,n-1]+BM([1,1],[0.6,0.4]), PM[1,n-1]+BM([0,1],[0.6,0.4])) = min(1+0.52,0+0.72) = 0.72
Predecessor[2,n] = 1
For state 3:
PM[1,n] = min(PM[2,n-1]+BM([0,0],[0.6,0.4]), PM[3,n-1]+BM([1,0],[0.6,0.4])) = min(2+0.52,3+.32) = 2.52
Predecessor[1,n] = 2
C. Does the soft metric give a different answer than the hard metric? Base your response in terms of the relative ordering of the states in the second column and the survivor paths. The soft metric
certainly gives different path metrics, but the relative ordering of the likelihood of each state remains unchanged. Using the soft metric, the choice of survivor path leading to states 2 and 3
has firmed up (with the hard metric either of the survivor paths for each of states 2 and 3 could have been chosen).
Problem . Single-sideband (SSB) modulation is a modulation technique designed to minimize the amount of footprint used to transmit an amplitude modulated signal. Here's one way to implement an SSB
A. Starting with a band-limited signal s[n], modulate it with two carriers, one phase shifted by π/2 from the other. The modulation frequency is chosen to be B/2, i.e., in the middle of the
frequency range of the signal to be transmitted. Sketch the real and imaginary parts of the Fourier coefficients for the signals at points A and B. The figure below shows the Fourier coefficients
for the signal s[n].
B. The modulated signal is now passed through a low-pass filter with a cutoff frequency of B/2. Sketch the real and imaginary parts of the Fourier coefficients for the signals at points C and D.
C. The signal is modulated once again to shift it up to the desired transmission frequency. Sketch the real and imaginary parts of the Fourier coefficients for the signals at points E and F.
D. Finally the two signals are summed to produce the signal to be sent over the air. Sketch the real and imaginary parts of the Fourier coefficients for the signal at point G.
Problem . We learned in lecture that if we modulate a signal with a cosine of a particular frequency and then demodulate with a sine of the same frequency and pass the result through a low-pass
filter, we get nothing! We can use this effect to our advantage -- here's a modulation/demodulation scheme that sends two independent signals over a single channel using the same frequency band:
A. The Fourier coefficients for signals a[n] and b[n] are show below. Sketch the real and imaginary parts of the Fourier coefficients for the signal at point A.
B. Sketch the real and imaginary parts of Fourier coefficients for signal at point B, right after we demodulate the combined signal using a cosine. The result is passed through a low-pass filter
with a cutoff of M; compare the signal at point D to the two input signals and summarize your findings. The signal at point D is a scaled replica of a[n].
C. Sketch the real and imaginary parts of Fourier coefficients for signal at point C, right after we demodulate the combined signal using a sine. The result is passed through a low-pass filter with
a cutoff of M; compare the signal at point E to the two input signals and summarize your findings. The signal at point E is a scaled replica of b[n].
Problem . The diagram for a broken IQ transmitter is shown below. Assume N=1024 and M=64.
The IQ receiver was designed assuming the transmitter was working correctly:
A. The broken transmitter sent the symbols I=1 and Q=1. However, the receiver received the symbols I[RX]=0.617 and Q[RX]=0.924. What is the value of D?
x[n]=I[n]cos(MΩ[1]n) + Q[n]sin(MΩ[1]n - MΩ[1]D)
Recall the identity sin(a-b) = sin(a)cos(b) - cos(a)sin(b):
x[n] = I[n]cos(MΩ[1]n) + Q[n] * [cos(MΩ[1]D)sin(MΩ[1]n) - sin(MΩ[1]D)cos(MΩ[1]n)]
x[n] =[I[n] - Q[n]sin(MΩ[1]D)] * cos(MΩ[1]n) + Q[n]cos(MΩ[1]D) sin(MΩ[1]n)
The receiver demodulates x[n] back to baseband, so that
I[RX][n]= I[n] - Q[n]sin(MΩ[1]D)
Q[RX][n] = Q[n]cos(MΩ[1]D)
You can use either of the above formulas to solve for D: D = 1.
B. The broken transmitter sent the symbols I=1 and Q=1. However, the receiver received the symbols I[RX]=0 and Q[RX]=0. What is the value of D? D = 4.
Problem . Consider a convolution code with two generator polynomials: G0=101 and G1=110.
A. What is code rate r and constraint length k for this code? We send two parity bits for each message bit, so the code rate r is 1/2. Three message bits are involved in the computation of the
parity bits, so the constraint length k is 3.
B. Draw the state transition diagram for a transmitter that uses this convolutional code. The states should be labeled with the binary string x[n-1]...x[n-k+1] and the arcs labeled with x[n]/p[0]p
[1] where x[n] is the next message bit and p[0] and p[1] are the two parity bits computed from G0 and G1 respectively.
The figure below is a snapshot of the decoding trellis showing a particular state of a maximum likelihood decoder implemented using the Viterbi algorithm. The labels in the boxes show the path
metrics computed for each state after receiving the incoming parity bits at time t. The labels on the arcs show the expected parity bits for each transition; the actual received bits at each time are
shown above the trellis.
C. Fill in the path metrics in the empty boxes in the diagram above (corresponding to the Viterbi calculations for times 6 and 7).
D. Based on the updated trellis, what is the most-likely final state of the transmitter? How many errors were detected along the most-likely path to the most-likely final state? The most-likely
final state is 01, the state with the smallest path metric. The path metric tells us the total number of errors along the most-likely path leading to the state. In this example there were 3
errors altogether.
E. What's the most-likely path through the trellis (i.e., what's the most-likely sequence of states for the transmitter)? What's the decoded message? The most-likely path has been highlighted in red
below. The decoded message can be read off from the state transitions along the most-likely path: 1000110.
F. Based on your choice of the most-likely path through the trellis, at what times did the errors occur? The path metric is incremented for each error along the path. Looking at the most-likely path
we can see that there were single-bit errors at times 1, 3 and 5.
|
{"url":"http://web.mit.edu/6.02/www/s2009/handouts/quiz2_review.html","timestamp":"2014-04-21T09:54:48Z","content_type":null,"content_length":"36283","record_id":"<urn:uuid:9009ad32-422a-4c15-9066-d6cd36cdf054>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Modeling problem -- optimization
June 1st 2009, 12:34 PM #1
[SOLVED] Modeling problem -- optimization
A storage bin is to be constructed by removing a sector with central angle $\theta$ from a cicular piece of tin of radius 10 ft and folding the remainder of the tin to form a cone, as shown. What
is the maximum volume of a storage bin formed in this fashion.
I have two questions: is my attempt at modeling the situation correct, and if so, where did I go wrong in solving the model -- I got a negative answer for the angle that produces the mazimum
Let circumference of the material be c1, the circumference of the cone base be c2, and the arclength of the material removed be s.
$c_{1} = 2\pi(10) = 20\pi$
$s = 10\theta$
$c_{2} = 20\pi - 10\theta$
$2\pi R = 20\pi - 10\theta$
$R = 10 - \frac{5\theta}{\pi}$
$H = \sqrt{10^{2} - \left(\frac{5\theta}{\pi}\right)^{2}} = \sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}}$
$V = \frac{1}{3}\pi R^{2} H$
$V = \frac{\pi}{3}\left(10 - \frac{5\theta}{\pi}\right)^{2}\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}}$
$V = \frac{\pi}{3}\left(100 - \frac{100\theta}{\pi} + \frac{25\theta^{2}}{\pi^{2}}\right)\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}}\longleftarrow\mbox{H ere is the model -- right so far?}$
$\frac{dV}{d\theta} = \frac{\pi}{3}\left[\left(-\frac{100}{\pi} + \frac{50\theta}{\pi^{2}}\right)\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}} + \left(100 - \frac{100\theta}{\pi} + \frac{25\theta^
{2}}{\pi^{2}}\right)\frac{-25\theta}{\pi^{2}\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}}}\right]$
$0 = \left(-\frac{100}{\pi} + \frac{50\theta}{\pi^{2}}\right)\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}} + \left(100 - \frac{100\theta}{\pi} + \frac{25\theta^{2}}{\pi^{2}}\right)\frac{-25\theta}{\
pi^{2}\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}}}$
$\left(\frac{100}{\pi} - \frac{50\theta}{\pi^{2}}\right)\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}} = \left(100 - \frac{100\theta}{\pi} + \frac{25\theta^{2}}{\pi^{2}}\right)\frac{-25\theta}{\pi^{2}
\sqrt{100 - \frac{25\theta^{2}}{\pi^{2}}}}$
$\pi^{2}\left(\frac{100}{\pi} - \frac{50\theta}{\pi^{2}}\right)\left(100 - \frac{25\theta^{2}}{\pi^{2}}\right) = (-25\theta)\left(100 - \frac{100\theta}{\pi} + 25\theta^{2}\right)$
$10000\pi - \theta\frac{5000}{\pi} - \theta^{2}\frac{2500}{\pi^{3}}+ \theta^{3}\frac{1250}{\pi^{2}} = -2500\theta + \theta^{2}\frac{2500}{\pi} - 6250\theta^{3}$
$8\pi^{4} - 4\pi^{2}\theta - 2\theta^{2}+ \pi\theta^{3} = -2\pi^{3}\theta + 2\pi^{2}\theta^{2} - 5\pi^{3}\theta^{3}$
$0 = \theta^{3}(\pi + 5\pi^{3}) - \theta^{2}(2 + 2\pi^{2}) + \theta(-4\pi^{2} + 2\pi^{3}) + 8\pi^{4}$
The problem is that there is no solution to this in on $0 \leq \theta \leq 2\pi$ (at least according to my calculator) -- a long way to go for a very disapointing answer. Where did I go wrong?
Simple Substitution
The first error I see is on line 6. By your formulation, $R = 10 - \frac{5\theta}{\pi}$ . By Pythagoras, $H^2=10^2-R^2=100-(10 - \frac{5\theta}{\pi})^2$ . Looks like a case of simple oversight.
Try correcting this and looking at the model again.
Oh, bleeping bleep!!!! I can't belive... oh for the love of Mike. I had notes on this problem on three separate sheets of paper, and somehow I copied it like that onto my final draft, so to
speak, before I solved it. I looked over and over the tricky parts of this -- but not the simple parts at the top. It just goes to show the value of a fresh set of eyes when you are so immersed
in something. Thanks so much, Media Man. I hope that was the only problem. Not looking forward to redoing all the rest of it, but you gotta do what you gotta do.
Edit: Got it! The maximum volume is about $403 ft^{3}$. Much easier to solve without the error. Thanks again.
Last edited by sinewave85; June 5th 2009 at 11:34 AM.
June 1st 2009, 04:18 PM #2
Senior Member
Apr 2009
Atlanta, GA
June 1st 2009, 06:27 PM #3
|
{"url":"http://mathhelpforum.com/calculus/91433-solved-modeling-problem-optimization.html","timestamp":"2014-04-18T14:52:00Z","content_type":null,"content_length":"44447","record_id":"<urn:uuid:ea100604-c7f8-47d0-bd58-69891a83fd38>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The chaotic-representation property for a class of normal martingales
Belton, Alexander C. R. and Attal, Stéphane (2007) The chaotic-representation property for a class of normal martingales. Probability Theory and Related Fields, 139 (3-4). pp. 543-562. ISSN 0178-8051
Full text not available from this repository.
Suppose Z=(Zt)t ³ 0Z=(Zt)t0 is a normal martingale which satisfies the structure equation d[Z]t = (a(t)+b(t)Zt-) dZt + dtd[Z]t=((t)+(t)Zt−)dZt+dt . By adapting and extending techniques due to
Parthasarathy and to Kurtz, it is shown that, if α is locally bounded and β has values in the interval [-2,0], the process Z is unique in law, possesses the chaotic-representation property and is
strongly Markovian (in an appropriate sense). If also β is bounded away from the endpoints 0 and 2 on every compact subinterval of [0,∞] then Z is shown to have locally bounded trajectories, a
variation on a result of Russo and Vallois.
Item Type: Article
Journal or Publication Title: Probability Theory and Related Fields
Additional Information: RAE_import_type : Journal article RAE_uoa_type : Pure Mathematics
Uncontrolled Keywords: Azéma martingale - Chaotic-representation property - Normal martingale - Predictable-representation property - Structure equation
Subjects: Q Science > QA Mathematics
Departments: Faculty of Science and Technology > Mathematics and Statistics
ID Code: 2397
Deposited By: ep_importer
Deposited On: 01 Apr 2008 09:40
Refereed?: Yes
Published?: Published
Last Modified: 09 Oct 2013 13:12
Identification Number:
URI: http://eprints.lancs.ac.uk/id/eprint/2397
Actions (login required)
|
{"url":"http://eprints.lancs.ac.uk/2397/","timestamp":"2014-04-20T03:54:15Z","content_type":null,"content_length":"15705","record_id":"<urn:uuid:e5ddc64c-ea9d-406a-b0ed-ea38967425a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: Expert Opinion - An alternative to rndbin
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: Expert Opinion - An alternative to rndbin
From tiago.pereira@incor.usp.br
To statalist@hsphsun2.harvard.edu
Subject st: Expert Opinion - An alternative to rndbin
Date Sun, 23 Sep 2007 22:17:26 -0300 (BRT)
Dear Statalisters and Stata experts,
I would like to know your expert opinion about a faster approach to
generate binomial random numbers using Stata. I wish to know if the
approach below is valid and if it makes some sense.
Well, for example, assume we have _N=1. Then, we have 25000 observations
and p= 0.25, which is the probability of the event. -rndbin- is quite
rndbin 25000 0.25 1
qui count if xb==1
However, I have to run -rndbin- millions of times, say, 10^15 times, count
the number of events (xb=1) and then summarize it to get a new variable.
That approach takes a lot of time and even using Mata functions this is
Taking statistical aspects of the binomial distribution into account, may
I approximate that calculation using the following approach?
p = 0.25
observations = 25000
sd_of_p_hat = standard deviation of the p_hat
gene sd_of_p_hat= sqrt(((p)*(1-p))/(observations))
generate z = invnorm(uniform())
replace p = (z)*(sd_of_p_hat)+(p)
gene number_of_events= round(p*observations)
The latter approach is really faster (2-3 seconds for 100000 studies,
whereas -rndbin- are likely to take some hours, at least in my PC) and it
is likely to be unbiased for pīs between 0.3 and 0.7, the range I have to
work with in Human Genetics.
I will again grateful for any help and comments.
Best regards,
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2007-09/msg00735.html","timestamp":"2014-04-18T05:44:06Z","content_type":null,"content_length":"6211","record_id":"<urn:uuid:7c596fde-2ea1-43ee-ace2-2c48763370dd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stochastic Spatial Simulator
1. What is an Interacting Particle System?
2. About the main window.
3. About Implementation.
Model List:
1. Cyclic Resource-Species Model
2. Epidemic Model
3. Contact Process
4. Multi-type Contact Process
5. General Rock-Scissor-Paper Model
6. Voter Model (Linear and Threshold)
7. Greenberg-Hastings Model
File I/O and Time Series Record:
1. File I/O
2. Spatial Window and Time Series Record
References and Acknowledgements:
1. References
2. Acknowledgements
1. "What is an Interacting Particle System?"
An interacting particle system is an individual-based stochastic spatial model with discrete space (sites) and continuous time. Mathematically, it forms a continuous-time Markov process. The
action takes place on a 2-dimensional rectangular lattice (or grid) of sites, with a number of options for the grid size. Each site in the lattice can be in a number of different states (colors),
depending on the specific model. The state of a given site can change to other states at rates that depend in general on the configuration of states at ``neighboring" sites. This happens in
continuous time and very quickly, so when you're watching the simulation you will typically see sites changing all over the lattice. These changes are asynchronous due to the continuous time
nature of this Markov process. The way to think of this is that every site has associated with it an (exponential) alarm clock with rate that depends on the state at that site and the states at
neighboring sites. The site whose alarm rings first makes the appropriate change and all neighboring sites recalculate their rates. All the alarm clocks now start over and we wait for the next
one to ring. (We remark that the behavior of synchronously updated cellular automata can be similar in some respects but very different in others. For example, updating all the sites at once can
lead to very rigid behavior that produces patterns not typically seen in biological populations.) Below we describe the ingredients used in specifying the above rates.
Neighborhood: The neighborhood of a site x is, in general, of the form N(x)={y: |y-x| <= r}. In other words, we take as neighbors of site x all sites y in the lattice that are within distance r
(and not equal to x) Thus, the positive integer r will represent the range of interaction. The distance used in |y-x| will usually be given by an l-p norm. We treat three different types of
neighborhoods in the simulator. Four neighbors refers to the four nearest neighbors (l-infinity norm, range 1); eight neighbors refers to the eight nearest neighbors in the 3x3 box about x
(l-infinity norm, range 1); twenty four neighbors refers to the sites in the 5x5 box about x (l-infinity norm, range 2).
Rates: There are two basic types of rates that will allow you to build most models of interest: (1) Contact: b(i,j) means i to j at rate b(i,j) * fj(x), where fj(x)=nj(x)/N, with N denoting the
number of neighbors and nj(x) the neighbors of x in state j. More specifically, if site x is in state i, then site x changes its state to j at rate b(i,j) times the fraction of neighbors in state
j. Thus b(i,j) represents the maximum possible rate from i to j. (2) Spontaneous: d(i,j) means i to j at a constant rate d(i,j). More specifically, if site x is in state i, then, independent of
the states of the neighbors of x, site x changes its state to j at the indicated rate.
The b and d are to make us think of birth rate (which requires a parent to be nearby) and death rate. The exact interpretation will change from model to model. Just remember that rates labeled
with a b require contact from nearby sites. For example, in an epidemic model, contact birth would correspond to infection of a susceptible by contact with an infective neighbor.
The above contact rates increase linearly with the number of neighbors in state j. A variation of linear contact births is obtained by replacing nj(x) by the indicator function 1(nj(x) >= T).
This is a nonlinear form of contact births called threshold contact. This option is put in force for all contact births by clicking the Threshold box in the parameters. Thus, b(i,j) would be
taken to mean i->j at rate b(i,j) provided nj(x) is at least T, the threshold; if nj(x)is less than T, then the corresponding rate is 0. Note that we use the number nj(x) of neighbors in state j
instead of the fraction of such so that our thresholds have integer values.
the main window.
On the top left of the main window, there are two boxes. From the first box, one can select a model. From the second box, one can specify how big the lattice size is. Obviously, the bigger the
lattice size is, the more information one gets, and the slower the simulation runs. In the middle part of the left panel, there is also a File I/O box designed particularly to study the Cyclic
Resource-Species Model. There is also a Time Series Record box, also specifically for the Cyclic Resource-Species Model. One can specify size and position of the spatial window by clicking
Spatial Wnd button. By clicking the Parameters button, one can set up parameters for corresponding models specified in the Model list. The big "movie" screen in the middle of the main window
shows the model animation powered by OpenGL. By clicking the Phase Portrait button, one can watch phase portraits between any two species specified in the Parameter Dialog Box. This is another
feature that designed exclusively for the Cyclic Resource-Species Model. The bottom part of the main window is another movie section showing the frequency curve for each species in the
simulation. For the Cyclic Model, the frequency curve is based on a spatial window. In the parameter dialog box, one can specify the spatial window size and position on the lattice. For all other
models, the frequency curves are based on the whole lattice. On the right side panel, the legend shows the correspondence between colors and states.
the implementation
The models in SSS were developed using Visual C++. The graphical interface employs OpenGL, the premier environment for developing portable, interactive 2D and 3D graphics applications. To run SSS
at reasonable speeds with lattice size 250x250 and above, a Pentium III 866 with 256M RAM is recommended. Higher configuration PCs are desirable. SSS has been tested on Windows 2000 and Windows
XP. Other operating systems in the Windows family (e.g., Windows 98 and Window NT) should also work, but we have not tested them.
Model Overview:
1. Cyclic Resource Species Model
States {0,1,2, ... 2K-1} with even numbered states indicating species and odd numbered states indicating resources. Species rates are b(2i-1,2i)=b[2i], d(2i,2i+1)=d[2i].
Paramter settings: In the parameter dialog box: (1) A group of edit boxes enables the user to specify different rates: d(i,j)'s are death rates for species, and b(i,j)'s are contact birth rates
for species. (2) One can choose a number of states from a box. Notice that the number of states can only be an even number; for convenience, we only implement up to 10 states. (3) On another box,
one can set up the neighborhood size. We have local dependence on rates of change and this neighborhood size specifies the range of the local interactions. (4) By choosing threshold model and
specify number of threshold, changing rate of a site turn to be a step function, instead of linear function by default. (4) Sometime, all the living things will die out either due to random
effect, or "bad" parameter settings. One would like to know if introducing species onto those leftover resources, whether the process can re-establish or not. By seeding, we randomly put
different living things onto resources at a specifying density.
2. Epidemic model
This is a stochastic spatial version of the classical SIR models of mathematical epidemiology. States {0,1,2} with 0 = removed, 1 =susceptible, 2 = infected. So the flip rates are d(0,1)=a, b
(1,2)= b, and d(2,0)=d. Here, a represents the rate of a spontaneous regrowth at a dead site; this might correspond to a birth or the arrival of an immigrant from outside the population. The
infection rate b for the disease is and the infection spreads by contact. Finally, d is the death rate for infected individuals.
Parameter settings: This simple parameter dialog enables one to specify the infection rate from susceptible to infective, death rate of infective and rebirth rate of removed. One can also choose
two different initial conditions: with few infectives at the center or in a randomly mixed population.
3. Contact Process
States {0,1} with 0 =vacant, 1 = occupied. There is at most one particle per site; particles die at constant rate d and give birth at rate l with the location of the offspring chosen at random
from the neighboring sites; if the selected neighboring site is already occupied, the birth is suppressed. Thus this model has b(0,1)= l and d(1,0)=d, (Remember that b(0,1)= l means a site in
state 0 changes its state to 1 at rate l n1(x) due to a birth from a neighboring occupied site.) Setting d=0 gives Richardson's growth model.
Parameter settings: By theorem 2.14 in Chapter III of Liggett, there is a critical value lc, so that if l<lc, the basic contact process is ergodic. While if l>lc, the basic contact process is not
ergodic. It has been shown that at l=lc, the basic contact process is also ergodic. To find the exact value of lc is still an open problem.
4. Multi-type Contact Process
States {0,1,2} with 0 = vacant, 1 = species 1, 2 = species 2. The two species compete for vacant sites and die as in the contact process. Rates are b(0,1)= l1,, b(0,1)= l2, and d(1,0)= d1,b(0,1)=
Parameter settings: In the parameter dialog box for this model, one can setup contact birth rates and instantaneous death rates for different types.
5. General Rock-Scissor-Paper Model
States {0,1,2,...k-1} with 0 = species 0, ..., k-1 = species k-1. The k species form a cyclic (nontransitive) competitive system with 1 replacing 0, 2 replacing 1,..., k-1 replacing k-2, and 0
replacing k-1. Rates are b(i-1,i)=l, where the indices are understood cyclically. The special case of three species K=3 gives the usual Rock-Scissors-Paper model. See Bohannan (reference 16) for
such a simulation of an experimental system of three strains of bacteria.
Parameter settings: In the parameter dialog box for this model, one can set up a number of species (3 to 8) in the loop and contact birth rates of corresponding species. One can also change the
interaction neighborhood size. By choosing the threshold option and specifying the threshold, one changes the basic contact process to a threshold contact process. Threshold contact births can
sometimes help in the development of interesting spatial patterns.
6. Voter Model
States {0,1,2..., K-1} with 0 = species 0, 1 = species 1, and so on. This can be thought of as a model of K competing species in a spatial setting, with no species having a competitive advantage.
Rates are b(i,j)=1 and b(j,i)=1, for all i,j<k. A threshold voter model is one with nonlinear rates obtained by replacing fi(x) with the indicator function 1(ni(x)bT), where T represents the
Parameter settings: In the parameter dialog box for this model, one can set up a number of species (up to 10), neighborhood size (4, 8 and 24). By choosing the threshold option and specifying the
threshold, one change the linear voter process to a threshold voter process. Threshold voter model seems tending to produce more interesting patterns.
7. Greenberg-Hastings Model
States{0,1,2..., K-1} with 0 denoting an excited state and the others representing states that are, successively, less and less excited. In particular, the state K-1 is fully rested and capable
of excitation in the presence of a sufficient number of excited states. The rates are b(k-1,0)=l, via threshold, and d(i,i+1)=1 for i=0,1...,K-2.
Parameter settings: We fix the number of states at 8. The first rate parameter is the rate of contact birth. Other rates are for instantaneous death. To help development of interesting patterns,
we set up the contact birth via threshold. One can modify the intensity of the threshold by adjusting the neighborhood size and threshold numbers.
File I/O and Time Series Data
1. File I/O
The functionality of file I/O is designed particularly to study the cyclic model. By clicking the Save button on the main window, one can save a snapshot as a file together with its parameter
settings. To run the simulation later starting from this configuration, click the Load button and choose the corresponding file. The file has an extension .ips. The format of the file is as
1^st line: lattice size and comments
2^nd line: lattice size and comments
3^rd line: number of states and comments
4^th line: neighborhood size and comments
5^th line: 10 rates corresponding to rates in the Cyclic Model Parameter Dialog.
6^th line: threshold parameters. First number either 1 (Yes) or 0 (No), the second number specify the threshold.
7^th line and beyond: specifies the configuration of states.
2. Spatial Window and Time Series Data
This functionality is to output time series record into a file with an extension .tsr. The time series is frequency of some species in a spatial window. Species and spatial window can be setup by
clicking Spatial Wnd button in main window. By studying time series record, one can identify at what land scale interesting things happen.
Time series record file consists of three columns, first column specifies the time step, second column is species one density and third column is species two density. For more information about
land scale, see [4] in references.
For books dealing with the mathematics of interacting particle systems, we recommend [7, 8, 13]. Connections between particle systems and differential equations can be found in [1, 2, 5, 9, 10].
Biological applications appear in [1, 5]; chemistry and physics in [15, 1].
(1) Dieckmann, U., Law, R., Metz, J.A.J. (Eds.), 2000. The Geometry of Ecological Interactions: Simplifying Spatial Complexity. Cambridge University Press, Cambridge.
(2) Krone, S.M., 2003. Spatial models: stochastic and deterministic.Math. Comp. Mod. In press.
(3) Krone, S.M., Neuhauser, C., 2000. A spatial model of range-dependent succession. J. Appl. Probab. 37, 1044-1060.
(4) Rand, D.A, Wilson, H.B., 1995. Using spatio-temporal chaos and intermediate-scale determinism to quantify spatially extended ecosystems. Proc. Roy. Soc. Lond. B 343, 111-117.
(5) R. Durrett and S. Levin, Stochastic spatial models: a user's guide to ecological applications, Phil. Trans. R. Soc. Of London B, Biological Sciences 343 (1994), 329--350.
(6) T. Cox and R. Durrett, Limit theorems for the spread of epidemics and forest fires, Stoch. Proc. Appl. 30 (1988),171--191.
(7) R. Durrett, Lecture Notes on Particle Systems and Percolation(Wadsworth, Belmont, CA, 1989).
(8) R. Durrett, Ten Lectures on Particle Systems, Lecture Notes in Mathematics 1608 (Springer, Berlin, 1995).
(9) R. Durrett and C. Neuhauser, Particle systems and reaction diffusion equations, Ann. Probab.22 (1994), 289--333.
(10) C. Kipnis and C. Landim, Scaling Limits of Interacting Particle Systems (Springer, Berlin, 1999).
(11) S. Krone, The two-stage contact process, Ann. Appl. Probab. 9 (1999), 331--351.
(12) S. Krone and C. Neuhauser, A spatial model of range-dependent succession, J. Appl. Probab. 37 (2000),1044-1060.
(13) T. Liggett, Interacting Particle Systems (Springer, New York,1985).
(14) C. Neuhauser, Ergodic theorems for the multitype contact process, Probab. Theory Relat. Fields 91 (1992), 467--506.
(15) H. Spohn, Large Scale Dynamics of Interacting Particles (Springer, Berlin, 1991).
(16) B.Kerr, M.Riley, M.Feldman and B.J.M Bohannan. 2002. Local dispersal promotes biodiversity in a real life game of rock-paper-scissors. Nature 418:171-174.
Although most of the C++ code and the graphical interface for this simulator were written independently, we were inspired by the pioneering efforts of Ted Cox and Rick Durrett who created the
Unix-based simulator S3. Guan and Krone were supported in part by NSF grant EPS-00-80935 and NIH grant P20 RR016448. Guan would like to thank the helpful websites http://www.opengl.org and http:/
/nehe.gamedev.net for online documentation and tutorials on OpenGL. The random number generator comes from http://www.agner.org. Guan would also like to thank his friend Xin Yan who patiently
answer his coding questions.
|
{"url":"http://www.webpages.uidaho.edu/~krone/winsss/sss-tutorial.html","timestamp":"2014-04-21T12:17:31Z","content_type":null,"content_length":"64971","record_id":"<urn:uuid:16dcec56-6b2b-4e56-83fa-2cb70d80d876>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Big Data Generalized Linear Models with Revolution R Enterprise
R''s glm function for generalized linear modeling is very powerful and flexible: it supports all of the standard model types (binomial/logistic, Gamma, Poisson, etc.) and in fact you can fit any
distribution in the exponential family (with the family argument). But if you want to use it on a data set with millions or rows, and especially with more than a couple of dozen variables (or even
just a few categorical variables with many levels), this is a big computational task that quickly grows in time as the data gets larger, or even exhaust the available memory.
The rxGlm function included in the RevoScaleR package in Revolution R Enterprise 6 has the same capabilities as R's glm, but is designed to work with big data, and to speed up the computation using
the power of multiple processors and nodes in a distributed grid. In the analysis of census data in the video below, fitting a Tweedie model on 5M observations and 265 variables takes around 25
seconds on a laptop. A similar analysis, using 14 million observations on a 5-node Windows HPC Server cluster takes just 20 seconds.
This demonstration was part of last week's webinar on Revolution R Enterprise 6. If you're not familiar with Revolution R Enterprise, the first 10 minutes is an overview of the differences from
open-source R, and the remaining 20 minutes describes the new features in version 6. Follow the link below to check out the replay.
Revolution Analytics webinars: 100% R and More: Plus What's New in Revolution R Enterprise 6.0
|
{"url":"http://www.inside-r.org/blogs/2012/06/28/big-data-generalized-linear-models-revolution-r-enterprise","timestamp":"2014-04-20T07:13:54Z","content_type":null,"content_length":"14627","record_id":"<urn:uuid:e0f8c796-74e9-4d1b-8f59-7df9696f9283>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about gromov on Quomodocumque
This common and unfortunate fact of the lack of adequate presentation of basic ideas and motivations of almost any mathematical theory is probably due to the binary nature of mathematical
Either you have no inkling of an idea, or, once you have understood it, the very idea appears so embarrassingly obvious that you feel reluctant to say it aloud…
(Gromov, “Stability and Pinching”)
|
{"url":"http://quomodocumque.wordpress.com/tag/gromov/","timestamp":"2014-04-17T03:49:29Z","content_type":null,"content_length":"42705","record_id":"<urn:uuid:b3145bba-78d9-42f5-9dcf-cf83afd00f80>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
6.4 The Standard Library (2)--Mathematical Functions
Every implementation of Modula-2, and of every computer language that is to be used in a scientific or academic environment, must also provide a number of standard mathematical functions. A name and
contents of the module that contains these were suggested by Wirth--he called it MathLib0. However, in some versions, vendors called it either MathLib or MathLib1, and in a few the procedure names
started with an uppercase letter. The ISO standard version is called RealMath. It makes available the two real constants:
pi = 3.1415926535897932384626433832795028841972;
exp1 = 2.7182818284590452353602874713526624977572;
to as many decimal places as the implementation allows. In addition, the following sections detail the basic functions that are exported by RealMath, with their parameter lists, some comments and
examples. (A few have been used before .)
6.4.1 Square Root
PROCEDURE sqrt (x : REAL) : REAL;
This function procedure returns the square root of a positive real number. Naturally, there will be an error generated if one attempts to take the square root of a negative number.
Given two sides of a right triangle, compute the hypotenuse.
Ask the user for two numbers, the sides' lengths
Read each into a real variable
Compute the hypotenuse using Pythagoras' Theorem
hyp = sqrt (a * a + b * b)
Print out the result
Data Table:
Two reals to hold the side lengths, one for the hypotenuse
WriteString, WriteLn, ReadReal, WriteReal, sqrt
MODULE Pythagoras;
(* Written by R.J. Sutcliffe *)
(* using ISO Modula-2 *)
(* to illustrate RealMath.sqrt *)
(* last revision 1993 03 01 *)
FROM STextIO IMPORT
WriteString, WriteLn, SkipLine, ReadChar;
FROM SRealIO IMPORT
ReadReal, WriteFixed;
FROM SIOResult IMPORT
ReadResult, ReadResults;
FROM RealMath IMPORT
side1, side2, side3 : REAL;
key : CHAR;
PROCEDURE GetReal (VAR numToGet : REAL);
tempResult : ReadResults;
WriteString ("Please type in a real number ===> ");
ReadReal (numToGet);
tempResult := ReadResult ();
SkipLine; (* swallow line marker *)
UNTIL tempResult = allRight;
END GetReal;
BEGIN (* main *)
WriteString ("What is the first side length? ");
GetReal (side1);
WriteString ("What is the second side length? ");
GetReal (side2);
side3 := sqrt (side1 * side1 + side2 * side2);
WriteString ("The hypotenuse is ");
WriteFixed (side3, 2, 0);
WriteString (" units long.");
WriteString ("Press any key to continue");
ReadChar (key);
END Pythagoras.
Results of a run:
What is the first side length?
Please type in a real number ===> 20.0
What is the second side length?
Please type in a real number ===> 21.0
The hypotenuse is 29.00 units long.
6.4.2 Exponential and Logarithmic Functions
It was observed in section 4.9 that an amount A placed at compound interest rate for time years would grow to A(1 + rate)^time. Notice that if the interest is compounded n times a year, the amount
will be higher than if it is compounded annually, even though the interest rate at each compounding period must be divided by n. For instance, the first example in that section found that $1000 at 6%
compounded for 10 years would grow to 1790.85. If the interest is computed monthly, the rate at each application of interest is .06/12, or .005, and the number of applications becomes 10*12 or 120.
Modifying the formula above yields A(1 + rate/n)^n*time and in this case the $1000 grows to $1819.40. If the compounding is done daily, the result is 1822.20, not much of a difference from monthly
compounding. One might ask whether continuing to increase the number of compounding periods indefinitely would yield an indefinitely large (infinite) amount, or if there is some limit beyond which
the amount will not grow.
The latter turns out to be the case. To see that this is so, consider a principal amount of $1.00 placed at 100% for a year, and increase the number of compounding periods indefinitely. In
mathematical terms, this computes:
As the number of periods grows and the interest rate per period shrinks, this converges to a definite limit at about 2.7182818. (It has a non-repeating, non-terminating decimal representation.) This
number, denoted e arises naturally in a variety of situations in mathematics, and mathematical libraries provide the exponential function to compute y = e^x. The inverse function, to compute the
exponent, or logarithm of x given the number y is the logarithm function x = ln(y). These were mentioned in section 4.5 in connection with writing the function procedure APowerB and are found in
RealMath as:
PROCEDURE exp (numb : REAL) : REAL;
PROCEDURE ln (numb : REAL) : REAL;
In addition, RealMath exports the related function procedure:
PROCEDURE power (base, exponent: REAL): REAL;
NOTE: Some non-standard mathematical library modules also export some or all of the following related function procedures:
For Base 10 Logarithms:
PROCEDURE log (numb : REAL) : REAL;
PROCEDURE TenToX (numb : REAL) : REAL;
For other power and magnitude operations:
PROCEDURE ipower (numb1 : REAL; numb2 : INTEGER) : REAL;
(* Both return numb1 to the numb2 power *)
PROCEDURE Magnitude (numb : REAL) : INTEGER;
(* returns the order of magnitude of numb, namely the largest integer less than or equal to the scale factor or log10 of numb *)
One application of the logarithmic and exponential functions is to compute radioactive (and other) decay processes. Under normal conditions, a quantity of radioactive material decays over time
according to the formula
A = A[0] e^kt
where A[0] is the amount of the substance present at time zero, A is the amount at the time being examined, t is the elapsed time in appropriate units, and k is a constant that is a property of the
In the standard literature, one often finds the constant k expressed indirectly as the half-life, that is, the time it would take for half of any given quantity of the substance to decay.
A lab is gathering data from experiments done on radioactive samples and determines experimentally the amount of a radioactive substance present at time zero and also at some subsequent time. Write a
program to calculate the half-life of the substance from this data. (This is often one way of identifying an unknown radioactive material.)
The formula A = A[0] e^kt may be rewritten as
and, upon taking natural logarithms on both sides and solving for k, one obtains
In the case where half of the material is supposed to have decayed, the right hand side of the latter formula becomes
or, solving for t,
With these variations on the initial formula, all the tools are at hand to write the code to do the computation.
MODULE HalfLife;
(* Written by R.J. Sutcliffe *)
(* using ISO Modula-2 *)
(* to illustrate RealMath.ln *)
(* last revision 1994 08 30 *)
FROM STextIO IMPORT
WriteString, WriteLn, ReadChar, SkipLine;
FROM SRealIO IMPORT
ReadReal, WriteFixed;
FROM SIOResult IMPORT
ReadResult, ReadResults;
FROM RealMath IMPORT
PROCEDURE GetReal (VAR numToGet : REAL);
tempResult : ReadResults;
WriteString ("Please type in a real number ===> ");
ReadReal (numToGet);
tempResult := ReadResult ();
SkipLine; (* swallow line marker *)
UNTIL tempResult = allRight;
END GetReal;
initialAmount, laterAmount, timePassed,
constant, halfLife : REAL;
key: CHAR;
WriteString ("What was the initial amount? ");
GetReal (initialAmount);
WriteString ("How much time elapsed til the second reading? ");
GetReal (timePassed);
WriteString ("And, how much material was left then? ");
GetReal (laterAmount);
constant := ln (laterAmount / initialAmount) / timePassed;
halfLife := ln ( 0.5) / constant;
WriteString ("The half life of this material is ");
WriteFixed (halfLife, 6, 10);
WriteString ("Press a key to continue ==>");
ReadChar (key);
END HalfLife.
Trial Run:
What was the initial amount? Please type in a real number ===> 100.0
How much time elapsed til the second reading? Please type in a real number ===> 10.0
And, how much material was left then? Please type in a real number ===> 25.0
The half life of this material is 5.000000
Press a key to continue ==>
Naturally, the units for the half life will be the same as those of the elapsed time given by the user.
Exponential growth works in much the same way, except that the constant k is positive instead of negative.
6.4.3 Trigonometric Functions
It is discovered in elementary geometry that in two similar triangles, the sides are in proportion.
In particular, for right triangles, one may relate these fixed ratios to a particular acute angle, such as angle B in figure 6.8. When this is done, and the right triangle labelled as in figure 6.9,
the trigonometric ratios are defined as follows:
The symbols sin, cos, and tan are themselves abbreviations for sine, cosine, and tangent, respectively. Auxiliary to these three are their inverse functions arcsin, arccos, and arctan for producing
an angle given one of the fixed ratios. Wirth suggested that MathLib0 provide only three of these functions; the minimum necessary for work in trigonometry, but the ISO library RealMath supplies all
six. They are:
PROCEDURE sin (x : REAL) : REAL;
PROCEDURE cos (x : REAL) : REAL;
PROCEDURE tan (x : REAL) : REAL;
PROCEDURE arcsin (x : REAL) : REAL;
PROCEDURE arccos (x : REAL) : REAL;
PROCEDURE arctan (x : REAL) : REAL;
NOTES: 1. The first three require the angle to be in radians, and return the sine, cosine, and tangent, respectively, of the angle supplied. The last three takes the sine, cosine, and tangent of an
angle and returns the principal value of the angle measure (in radians). For arcsine and arctangent, the values returned are in the range -
2. Many non-standard implementations are much less generous in their supply of trigonometric functions than this, and may omit as many as three of these.
3. While angles larger than 2
Here is another useful procedure:
PROCEDURE degToRad (x : REAL) : REAL;
(* converts degrees to radians *)
RETURN (x * pi / 180.0);
END degToRad;
Basic trigonometric identities and formulas can be employed to extend the scope of the available mathematical functions.
Write a module that computes the area of any triangle given two adjacent sides and an included angle.
Consider the triangle ABC in figure 6.10 below. If the base b (here AC) and the altitude h (here BD) is known, the area of the triangle is given by
S = .5bh
However, since BD is the side opposite angle C, in the right triangle BCD, and sin (C) = BD/BC,
h = BC sin (C) = a sin (C).
That is, provided the given data consists of two sides a and b of the triangle and an included angle C, its area can be computed by using the formula
s = .5ab sin (C)
which reduces to the original formula when C is a right angle, because the sine of 90° is one.
MODULE TriArea;
(* Written by R.J. Sutcliffe *)
(* using ISO Modula-2 *)
(* to illustrate RealMath.sin *)
(* last revision 1993 03 01 *)
FROM STextIO IMPORT
WriteString, WriteLn, ReadChar, SkipLine;
FROM SRealIO IMPORT
ReadReal, WriteFixed;
FROM SIOResult IMPORT
ReadResult, ReadResults;
FROM RealMath IMPORT
sin, pi;
PROCEDURE GetReal (VAR numToGet : REAL);
(* prompts for a real number, reads it, loops until a correct one is typed, swallows the end-of-line state and returns the number read *)
tempResult : ReadResults;
WriteString ("Please type in a real number ===> ");
ReadReal (numToGet);
tempResult := ReadResult ();
SkipLine; (* swallow line marker *)
UNTIL tempResult = allRight;
END GetReal;
PROCEDURE degToRad (x : REAL) : REAL;
(* converts degrees to radians *)
RETURN (x * pi / 180.0);
END degToRad;
angleC, sideA, sideB, area: REAL;
key : CHAR;
BEGIN (* main *)
(* obtain triangle data *)
WriteString ("What is the first side length? ");
GetReal (sideA);
WriteString ("What is the second side length? ");
GetReal (sideB);
WriteString ("Now, what is the included angle ");
WriteString ("in degrees? ");
GetReal (angleC);
(* do calculation *)
angleC := degToRad (angleC);
area := 0.5 * sideA * sideB * sin (angleC);
(* inform user of result *)
WriteString ("The area is ");
WriteFixed (area, 5, 0);
WriteString (" square units ");
WriteString ("Press a key to continue ==>");
ReadChar (key);
END TriArea.
First run:
What is the first side length? Please type in a real number ===> 10.0
What is the second side length? Please type in a real number ===> 8.0
Now, what is the included angle in degrees? Please type in a real number ===> 30.0
The area is 20.00000 square units
Second run:
What is the first side length? Please type in a real number ===> 10.0
What is the second side length? Please type in a real number ===> 20.0
Now, what is the included angle in degrees? Please type in a real number ===> 90.0
The area is 100.0000 square units
NOTE: Some nonstandard versions of the math library module use atan instead of arctan and may not export asin, or acos. Others provide the hyperbolic functions sinh, cosh and tanh.
6.4.4 Conversions
RealMath also offers the useful conversion function:
PROCEDURE round (x: REAL): INTEGER;
which rounds off a real to the nearest integer.
Two function procedures that may be in the traditionally named MathLib0 that are of more specialized interest are the following integer/real conversions.
PROCEDURE real (m : INTEGER) : REAL;
PROCEDURE entier (x : REAL) : INTEGER;
The first of these is essentially the same as FLOAT except that it only operates on the type INTEGER (and assignment compatible cardinals.) This is important in older versions of Modula-2 where FLOAT
works only on CARDINAL (not on either one as in the ISO standard.)
The second one is sometimes called the greatest integer function. It takes a real argument, and returns the greatest integer less than or equal to the real. Note that this is not the same as TRUNC
even in those versions where both can return integers. Compare the following:
entier (5.7) produces 5 and TRUNC (5.7) also produces 5, but
entier (-4.3) produces -5 while TRUNC (-4.3) yields -4
That is, for positive numbers, the result is the same, but for negative ones, it will be different, because in those cases entier gives the nearest integer less than the argument and TRUNC simply
"hacks off" the decimal fractional portion. Notice that an order of magnitude function would be written using entier rather than TRUNC.
PROCEDURE Magnitude (num : REAL) : INTEGER;
(* uses non-ISO functions *)
RETURN (entier (ln (num) / ln (10.0) ))
END Magnitude;
This procedure returns -6 when given 4.5E-6 and 2 when given 3.8E2, having computed the base ten logarithms as approximately -5.346 and 2.579 respectively. Notice that a base ten logarithm of a
number (or one in any other base) is computed by dividing the natural logarithm of the number by the natural logarithm of the base, for if
x = log[10]y then 10^y = x,
so that, taking natural logarithms on both sides yields
y ln(10) = ln (x)
and therefore
y = ln(10)/ln(x)
as used in the procedure.
6.4.5 Summary of RealMath
DEFINITION MODULE RealMath;
pi = 3.1415926535897932384626433832795028841972;
exp1 = 2.7182818284590452353602874713526624977572;
PROCEDURE sqrt (x: REAL): REAL;
PROCEDURE exp (x: REAL): REAL;
PROCEDURE ln (x: REAL): REAL;
PROCEDURE sin (x: REAL): REAL;
PROCEDURE cos (x: REAL): REAL;
PROCEDURE tan (x: REAL): REAL;
PROCEDURE arcsin (x: REAL): REAL;
PROCEDURE arccos (x: REAL): REAL;
PROCEDURE arctan (x: REAL): REAL;
PROCEDURE power (base, exponent: REAL): REAL;
PROCEDURE round (x: REAL): INTEGER;
END RealMath.
6.4.6 Other Mathematical functions
A wide variety of other function procedures and error handling may be provided in some auxiliary modules associated with RealMath, or, in non-standard versions, added to MathLib0.
The ISO standard libraries, and some non-standard versions as well, include a second module that is identical to RealMath but that acts on and returns long types.
DEFINITION MODULE LongMath;
pi = 3.1415926535897932384626433832795028841972;
exp1 = 2.7182818284590452353602874713526624977572;
PROCEDURE sqrt (x: LONGREAL): LONGREAL;
PROCEDURE exp (x: LONGREAL): LONGREAL;
PROCEDURE ln (x: LONGREAL): LONGREAL;
PROCEDURE sin (x: LONGREAL): LONGREAL;
PROCEDURE cos (x: LONGREAL): LONGREAL;
PROCEDURE tan (x: LONGREAL): LONGREAL;
PROCEDURE arcsin (x: LONGREAL): LONGREAL;
PROCEDURE arccos (x: LONGREAL): LONGREAL;
PROCEDURE arctan (x: LONGREAL): LONGREAL;
PROCEDURE power (base, exponent: LONGREAL): LONGREAL;
PROCEDURE round (x: LONGREAL): INTEGER;
END LongMath.
This second module (along with the built-in type LONGREAL itself) are provided because many systems have two or more real types of different precisions. ISO Modula-2 defines the precision of LONGREAL
to be equal to or greater than that of REAL. Thus, if there is only one underlying type in the actual system being used, the programmer may use either or both of the Modula-2 logical types to refer
to this actual type. Both RealMath and LongMath also include an error enquiry function not listed here but the use of this will be postponed to a later chapter.
It should be noted that the name and the contents of both modules in non-standard versions, can vary widely from one implementation to another. For further information, see the Appendix on standard
module definitions or consult the manuals that are available with the system.
|
{"url":"http://www.arjay.bc.ca/Modula-2/Text/Ch6/Ch6.4.html","timestamp":"2014-04-20T10:47:19Z","content_type":null,"content_length":"25526","record_id":"<urn:uuid:6d095bcd-4d7c-4654-9693-5ac89fd6af0e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tamarac, FL Algebra 2 Tutor
Find a Tamarac, FL Algebra 2 Tutor
...For me the prospect of education and educating far outweigh the Friday night party potential. You may assume me a nerd if that tickles your fancy. I will not object.
17 Subjects: including algebra 2, English, chemistry, Spanish
...I have a yoga certification and have been teaching yoga since 2003. I have worked in churches, wellness centers, gyms and yoga studios. I have been helping students for the SAT Math for the
last two years.
16 Subjects: including algebra 2, chemistry, Spanish, geometry
...My methods are straight-forward. I work with the student, parent and teacher to find out what what specific subjects the student needs to improve understanding. I explain the methods to the
student and show examples of calculations, then I ask the student to explain the subject in their own words as if they were teaching it to me.
27 Subjects: including algebra 2, chemistry, physics, calculus
...At SOLLC I did manage software through entire life cycle. I took linear algebra 1, and linear algebra 2 (two semester course) at the Warsaw University. In addition, I have taken a class on Lie
algebras - which is generalization of the linear class.
27 Subjects: including algebra 2, chemistry, physics, calculus
...I have taught Mathematics and Physics in High School and College for over 10 years, I have also taught Spanish privately in recent years. I have a degree in Physics and post-degree studies in
Geophysics and Information Systems. My background as a physicist, my experience as a teacher in college...
10 Subjects: including algebra 2, Spanish, physics, geometry
Related Tamarac, FL Tutors
Tamarac, FL Accounting Tutors
Tamarac, FL ACT Tutors
Tamarac, FL Algebra Tutors
Tamarac, FL Algebra 2 Tutors
Tamarac, FL Calculus Tutors
Tamarac, FL Geometry Tutors
Tamarac, FL Math Tutors
Tamarac, FL Prealgebra Tutors
Tamarac, FL Precalculus Tutors
Tamarac, FL SAT Tutors
Tamarac, FL SAT Math Tutors
Tamarac, FL Science Tutors
Tamarac, FL Statistics Tutors
Tamarac, FL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Boca Raton algebra 2 Tutors
Coconut Creek, FL algebra 2 Tutors
Coral Springs, FL algebra 2 Tutors
Davie, FL algebra 2 Tutors
Deerfield Beach algebra 2 Tutors
Fort Lauderdale algebra 2 Tutors
Lauderdale Lakes, FL algebra 2 Tutors
Lauderhill, FL algebra 2 Tutors
Margate, FL algebra 2 Tutors
North Lauderdale, FL algebra 2 Tutors
Oakland Park, FL algebra 2 Tutors
Parkland, FL algebra 2 Tutors
Plantation, FL algebra 2 Tutors
Pompano Beach algebra 2 Tutors
Sunrise, FL algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Tamarac_FL_Algebra_2_tutors.php","timestamp":"2014-04-17T21:41:10Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:723bfdbf-ab80-4c68-a5fa-8650e8a534be>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
M.: Scattered data interpolation and applications: A tutorial and survey. In: Geometric modelling: methods and their applications
Results 1 - 10 of 31
, 1994
"... We present an approach to modeling with truly mutable yet completely controllable free-form surfaces of arbitrary topology. Surfaces may be pinned down at points and along curves, cut up and
smoothly welded back together, and faired and reshaped in the large. This style of control is formulated as a ..."
Cited by 153 (0 self)
Add to MetaCart
We present an approach to modeling with truly mutable yet completely controllable free-form surfaces of arbitrary topology. Surfaces may be pinned down at points and along curves, cut up and smoothly
welded back together, and faired and reshaped in the large. This style of control is formulated as a constrained shape optimization, with minimization of squared principal curvatures yielding
graceful shapes that are free of the parameterization worries accompanying many patch-based approaches. Triangulated point sets are used to approximate these smooth variational surfaces, bridging the
gap between patch-based and particle-based representations. Automatic refinement, mesh smoothing, and re-triangulation maintain a good computational mesh as the surface shape evolves, and give sample
points and surface features much of the freedom to slide around in the surface that oriented particles enjoy. The resulting surface triangulations are constructed and maintained in real time. 1
Introduction ...
- IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS , 1997
"... This paper describes a fast algorithm for scattered data interpolation and approximation. Multilevel B-splines are introduced to compute a C²-continuous surface through a set of irregularly
spaced points. The algorithm makes use of a coarse-tofine hierarchy of control lattices to generate a sequen ..."
Cited by 106 (9 self)
Add to MetaCart
This paper describes a fast algorithm for scattered data interpolation and approximation. Multilevel B-splines are introduced to compute a C²-continuous surface through a set of irregularly spaced
points. The algorithm makes use of a coarse-tofine hierarchy of control lattices to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. Large
performance gains are realized by using B-spline refinement to reduce the sum of these functions into one equivalent B-spline function. Experimental results demonstrate that high-fidelity
reconstruction is possible from a selected set of sparse and irregular samples.
- J. Comput. Appl. Math
"... . Spaces of polynomial splines defined on planar triangulations are very useful tools for fitting scattered data in the plane. Recently, [4, 5], using homogeneous polynomials, we have developed
analogous spline spaces defined on triangulations on the sphere and on sphere-like surfaces. Using these s ..."
Cited by 48 (11 self)
Add to MetaCart
. Spaces of polynomial splines defined on planar triangulations are very useful tools for fitting scattered data in the plane. Recently, [4, 5], using homogeneous polynomials, we have developed
analogous spline spaces defined on triangulations on the sphere and on sphere-like surfaces. Using these spaces, it is possible to construct analogs of many of the classical interpolation and fitting
methods. Here we examine some of the more interesting ones in detail. For interpolation, we discuss macro-element methods and minimal energy splines, and for fitting, we consider discrete least
squares and penalized least squares. 1. Introduction Let S be the unit sphere or a sphere-like surface (see Sect. 2 below) in IR 3 . In addition, suppose that we are given a set of scattered points
located on S, along with real numbers associated with each of these points. The problem of interest in this paper is to find a function defined on S which either interpolates or approximates these
data. This pr...
, 2002
"... Numerous problems in electronic imaging systems involve the need to interpolate from irregularly spaced data. One example is the calibration of color input/output devices with respect to a
common intermediate objective color space, such as XYZ or L*a*b*. In the present report we survey some of the m ..."
Cited by 47 (0 self)
Add to MetaCart
Numerous problems in electronic imaging systems involve the need to interpolate from irregularly spaced data. One example is the calibration of color input/output devices with respect to a common
intermediate objective color space, such as XYZ or L*a*b*. In the present report we survey some of the most important methods of scattered data interpolation in two-dimensional and in
three-dimensional spaces. We review both single-valued cases, where the underlying function has the form f:R #R, and multivalued cases, where the underlying function is f:R . The main methods we
review include linear triangular (or tetrahedral) interpolation, cubic triangular (Clough--Tocher) interpolation, triangle based blending interpolation, inverse distance weighted methods, radial
basis function methods, and natural neighbor interpolation methods. We also review one method of scattered data fitting, as an illustration to the basic differences between scattered data
interpolation and scattered data fitting.
, 1999
"... We present a method for the hierarchical representation of vector fields. Our approach is based on iterative refinement using clustering and principal component analysis. The input to our
algorithm is a discrete set of points with associated vectors. The algorithm generates a top-down segmentation o ..."
Cited by 41 (4 self)
Add to MetaCart
We present a method for the hierarchical representation of vector fields. Our approach is based on iterative refinement using clustering and principal component analysis. The input to our algorithm
is a discrete set of points with associated vectors. The algorithm generates a top-down segmentation of the discrete field by splitting clusters of points. We measure the error of the various
approximation levels by measuring the discrepancy between streamlines generated by the original discrete field and its approximations based on much smaller discrete data sets. Our method assumes no
particular structure of the field, nor does it require any topological connectivity information. It is possible to generate multiresolution representations of vector fields using this approach.
Keywords: vector field visualization; Hardy's multiquadric method; binary-space partitioning; data simplification. 1 Introduction The rapid increase in the power of computer systems coupled with the
improving precis...
- in Mathematical Methods for Curves and Surfaces II , 1998
"... . We discuss several approaches to the problem of interpolating or approximating data given at scattered points lying on the surface of the sphere. These include methods based on spherical
harmonics, tensorproduct spaces on a rectangular map of the sphere, functions defined over spherical triangulat ..."
Cited by 34 (5 self)
Add to MetaCart
. We discuss several approaches to the problem of interpolating or approximating data given at scattered points lying on the surface of the sphere. These include methods based on spherical harmonics,
tensorproduct spaces on a rectangular map of the sphere, functions defined over spherical triangulations, spherical splines, spherical radial basis functions, and some associated multi-resolution
methods. In addition, we briefly discuss sphere-like surfaces, visualization, and methods for more general surfaces. The paper includes a total of 206 references. x1. Introduction Let S be the unit
sphere in IR 3 , and suppose that fv i g n i=1 is a set of scattered points lying on S. In this paper we are interested in the following problem: Problem 1. Given real numbers fr i g n i=1 , find a
(smooth) function s defined on S which interpolates the data in the sense that s(v i ) = r i ; i = 1; : : : ; n; (1) or approximates it in the sense that s(v i ) ß r i ; i = 1; : : : ; n: (2) Data
- In Shape Modeling International (SMI , 2004
"... Figure 1: LS-mesh: a mesh constructed from a given connectivity graph and a sparse set of control points with geometry. In this example the connectivity is taken from the camel mesh. In (a) the
LS-mesh is constructed with 100 control points and in (c) with 2000 control points. The connectivity graph ..."
Cited by 30 (4 self)
Add to MetaCart
Figure 1: LS-mesh: a mesh constructed from a given connectivity graph and a sparse set of control points with geometry. In this example the connectivity is taken from the camel mesh. In (a) the
LS-mesh is constructed with 100 control points and in (c) with 2000 control points. The connectivity graph contains 39074 vertices (without any geometric information). (b) and (d) show close-ups on
the head; the control points are marked by red balls. In this paper we introduce Least-squares Meshes: meshes with a prescribed connectivity that approximate a set of control points in a
least-squares sense. The given mesh consists of a planar graph with arbitrary connectivity and a sparse set of control points with geometry. The geometry of the mesh is reconstructed by solving a
sparse linear system. The linear system not only defines a surface that approximates the given control points, but it also distributes the vertices over the surface in a fair way. That is, each
vertex lies as close as possible to the center of gravity of its immediate neighbors. The Least-squares Meshes (LS-meshes) are a visually smooth and fair approximation of the given control points. We
show that the connectivity of the mesh contains geometric information that affects the shape of the reconstructed surface. Finally, we discuss the applicability of LS-meshes to approximation of given
surfaces, smooth completion, mesh editing and progressive transmission.
- IEEE Transactions on Visualization and Computer Graphics , 2004
"... We present a new method for topological segmentation in steady three-dimensional vector fields. Depending on desired properties, the algorithm replaces the original vector field by a derived
segmented data set, which is utilized to produce separating surfaces in the vector field. We define the conce ..."
Cited by 27 (5 self)
Add to MetaCart
We present a new method for topological segmentation in steady three-dimensional vector fields. Depending on desired properties, the algorithm replaces the original vector field by a derived
segmented data set, which is utilized to produce separating surfaces in the vector field. We define the concept of a segmented data set, develop methods that produce the segmented data by sampling
the vector field with streamlines, and describe algorithms that generate the separating surfaces. This method is applied to generate local separatrices in the field, defined by a movable boundary
region placed in the field. The resulting partitions can be visualized using standard techniques for a visualization of a vector field at a higher level of abstraction. 1.
- In Proceedings of EG/IEEE TCVG Symposium on Visualization VisSym ’04 (2004), Deussen O., Hansen C., Keim D.„ Saupe D., (Eds
"... Figure 1: RBF reconstruction of unstructured CFD data. (a) Volume rendering of 1,943,383 tetrahedral shock data set using 2,932 RBF functions. (b) Volume rendering of a 156,642 tetrahedral oil
reservoir data set using 222 RBF functions organized in a hierarchy of 49 cells. While interactive visualiz ..."
Cited by 20 (3 self)
Add to MetaCart
Figure 1: RBF reconstruction of unstructured CFD data. (a) Volume rendering of 1,943,383 tetrahedral shock data set using 2,932 RBF functions. (b) Volume rendering of a 156,642 tetrahedral oil
reservoir data set using 222 RBF functions organized in a hierarchy of 49 cells. While interactive visualization of rectilinear gridded volume data sets can now be accomplished using texture mapping
hardware on commodity PCs, interactive rendering and exploration of large scattered or unstructured data sets is still a challenging problem. We have developed a new approach that allows the
interactive rendering and navigation of procedurally-encoded 3D scalar fields by reconstructing these fields on PC class graphics processing units. Since the radial basis functions (RBFs) we use for
encoding can provide a compact representation of volumetric scalar fields, the large grid/mesh traditionally needed for rendering is no longer required and ceases to be a data transfer and
computational bottleneck during rendering. Our new approach will interactively render RBF encoded data obtained from arbitrary volume data sets, including both structured volume models and
unstructured scattered volume models. This procedural reconstruction of large data sets is flexible, extensible, and can take advantage of the Moore’s Law cubed increase in performance of graphics
- Algorithmica , 1997
"... Creating a computer model from an existing part is a common problem in Reverse Engineering. The part might be scanned with a device like the laser range scanner, or points might be measured on
its surface with a mechanical probe. Sometimes, not only the spatial location of points, but also some asso ..."
Cited by 17 (0 self)
Add to MetaCart
Creating a computer model from an existing part is a common problem in Reverse Engineering. The part might be scanned with a device like the laser range scanner, or points might be measured on its
surface with a mechanical probe. Sometimes, not only the spatial location of points, but also some associated physical property can be measured. The problem of automatically reconstructing from this
data a topologically consistent and geometrically accurate model of the object and of the sampled scalar field is the subject of this paper. The algorithm proposed in this paper can deal with
connected,orientable manifolds of unrestricted topological type, given a sufficiently dense and uniform sampling of the object’s surface. It is capable of automatically reconstructing both the model
and a scalar field over its surface. It uses Delaunay triangulations, Voronoi diagrams and alpha-shapes for efficiency of computation and theoretical soundness. It generates a representation of the
surface and the field based on Bernstein-Bézier polynomial implicit patches (A-patches), that are guaranteed to be smooth and single-sheeted.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=358645","timestamp":"2014-04-18T08:31:30Z","content_type":null,"content_length":"41513","record_id":"<urn:uuid:77702033-d6da-4f3a-a3f9-2c93f0bc1234>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lesson: Negative Exponents
986 Views
34 Downloads
1 Favorites
Lesson Objective
We will be able to compute using negative exponents.
Lesson Plan
Do Now: (5 minutes)
1) Holly spent $13.76 on a birthday present for her mom. She also spent $3.25 on a snack for herself. If she now has $7.74, how much money did she have initially?
2) Evaluate 2ab³ if a = 3 and b = -2
3) √289
Direct Instruction: (20 minutes)
Ask students to simplify the following problem in their composition book:
Ex. 3²
If 3² means 3 x 3, what do you think 3^1 means?
What about 3^0 ?
Any base to the 0 power is equal to 1.
Mathematicians write this rule as a^0 = 1.
We can now simplify any exponent problem that has 0 or a whole number as the exponent. Now let’s think about what happens if we a negative number for the exponent.
When we learning how to compute with integers we came up with lots of different words that the negative sign could mean. What were some of them? (Have students brainstorm until they get “opposite”. )
If I’m facing in one direction and I am told to face the opposite direction, I am going to flip the direction that I am standing in. What is a word that we use to describe a number “flipping”?
So if “-“ means opposite and opposite means “flip” and when we flip a number we get the reciprocal, what do you think we need to do when we see this?
Ex. 8^-2
Ex. 7^-3
Guided Practice: (10 minutes)
Let's take a look at how mathematicians discovered this rule. Have students fill out the following tables as a class. (see file labeled 7.NSO-C.16)
Assessment: (10 minutes)
negative exponents work-out
Lesson Resources
7 NSO C 16 Classwork 404
Negative Exponents work out Assessment 538
|
{"url":"http://betterlesson.com/lesson/8205/negative-exponents","timestamp":"2014-04-17T07:05:24Z","content_type":null,"content_length":"42772","record_id":"<urn:uuid:6a3659d1-7ad2-4af1-8fd7-e0b0aa872d44>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Village Green, PA Math Tutor
Find a Village Green, PA Math Tutor
...I have taught students in grades 2-12 in a variety of settings - urban classrooms, after-school programs, summer enrichment, and summer schools. I work with students to develop strong
conceptual understanding and high math fluency through creative math games. Having worked with a diverse popula...
9 Subjects: including SAT math, algebra 1, algebra 2, geometry
...I have experience in both single and multiple variable calculus. I have experience in both derivatives and integration. I have taken several courses in geometry and have experience with shapes
and angles.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...I am a recent Business School Alum and a Business Intelligence Architect with several years of experience in Financial Decision Making and with diversified experience in Decision Analysis,
Operation Research, Planning and Forecasting, Bayesian Analysis, Balance Sheet and Financial Statement Analy...
18 Subjects: including precalculus, algebra 2, calculus, differential equations
...I obtained my International Baccalaureate Diploma in July 2012 at Central High School of Philadelphia. I am well-versed in IB (as well as AP) Biology, Theory of Knowledge, English Literature
and Composition, Writing craft, and 20th Century History. I have also obtained 'A' grades in Spanish lan...
18 Subjects: including algebra 2, algebra 1, SAT math, English
...There is an awesome reward when watching a struggling student as he begins to understand what he needs to do and how everything fits together. I previously taught Algebra I, II, III, Geometry,
Trigonometry, Precalculus, Calculus, Intro to Statistics, and SAT review in a public school. I have tu...
12 Subjects: including statistics, discrete math, linear algebra, algebra 1
|
{"url":"http://www.purplemath.com/Village_Green_PA_Math_tutors.php","timestamp":"2014-04-18T16:30:35Z","content_type":null,"content_length":"24193","record_id":"<urn:uuid:9383ffe0-cad6-4c99-9734-c12704f1f38f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Next: TWO SUBGOAL CREATING ARCHITECTURES Up: PLANNING SIMPLE TRAJECTORIES USING Previous: A TYPICAL TASK
Our approach is based on three modules.
The first module is a `program executer' does `know' appropriate action sequences (otherwise our method will not provide additional efficiency).
The second module is the evaluator costs (
Schmidhuber, 1991a]) as well as any other mapping whose output is differentiable with respect to the input.
The third module is the module of interest: the adaptive subgoal generator S.
Not all environments, however, allow to achieve (2). See section 5.
Next: TWO SUBGOAL CREATING ARCHITECTURES Up: PLANNING SIMPLE TRAJECTORIES USING Previous: A TYPICAL TASK Juergen Schmidhuber 2003-03-14
Back to Subgoal learning - Hierarchical Learning
Pages with Subgoal learning pictures
|
{"url":"http://www.idsia.ch/~juergen/subgoalsab/node3.html","timestamp":"2014-04-17T13:15:20Z","content_type":null,"content_length":"8938","record_id":"<urn:uuid:ab8fd686-7808-46b7-b61a-dd21d4646da4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How to find Horizontal Asymptote of: (2x+1)/(x-2)
Best Response
You've already chosen the best response.
it's when the function = zero in the numerator... so it's when 2x+1 = 0 for vertical asymptote it's when the function is undefined so when x-2 = 0
Best Response
You've already chosen the best response.
you have the correct way to find the vertical asymptote ryan, but not the horizontal
Best Response
You've already chosen the best response.
Yes, I know the Vertical, I was inquiring about the Horizontal.
Best Response
You've already chosen the best response.
since the degrees are equal, simply divide the leading coefficients to get 2/1 = 2 So the horizontal asymptote is y = 2
Best Response
You've already chosen the best response.
If the degree of the denominator is larger, then the horizontal asymptote is simply y = 0
Best Response
You've already chosen the best response.
If the degree of the numerator is larger, then you have to use polynomial long division (but at this point, you won't have a horizontal asymptote)
Best Response
You've already chosen the best response.
So, if equal degrees, you just divide it out. If denominator degree is greater, the y asymptote is simply 0. and if the numerator degree is greater, you use long division and have a slant
asymptote. Is that correct?
Best Response
You've already chosen the best response.
you are exactly correct !! Abe!!
Best Response
You've already chosen the best response.
that is correct, either a slant asymptote or something a bit more complicated
Best Response
You've already chosen the best response.
Thank you very much Jim and Star.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4e54797e0b8b25209fb86ecf","timestamp":"2014-04-16T19:48:48Z","content_type":null,"content_length":"49667","record_id":"<urn:uuid:2660481e-ff22-41c6-8677-c4543825591d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra (Book with CD-ROM and BCA Tutorial and Info Trac, Passode for Web Access) 8th Edition | 9780534400682 | eCampus.com
College Algebra (Book with CD-ROM and BCA Tutorial and Info Trac, Passode for Web Access)
by Gustafson, R. David
List Price: [S:$265.95:S]
In Stock Usually Ships in 24 Hours.
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 8th edition with a publication date of 6/30/2003.
What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
Clearly written and comprehensive, the eighth edition of Gustafson's popular book provides in-depth and precise coverage, incorporated into a framework of tested teaching strategy. David Gustafson
combines carefully selected pedagogical features and patient explanation to give students a book that preserves the integrity of mathematics, yet does not discourage them with material that is
confusing or too rigorous. Long respected for its ability to help students quickly master difficult problems, this book also helps them develop the skills they'll need in future courses and in
everyday life.
Table of Contents
0. A REVIEW OF BASIC ALGEBRA
Sets of Real Numbers
Integer Exponents and Scientific Notation
Rational Exponents and Radicals
Factoring Polynomials
Algebraic Fractions
Applications of Linear Equations
Quadratic Equations
Applications of Quadratic Equations
Complex Numbers
Polyomial and Radical Equations
Absolute Value
The Rectangular Coordinate System
The Slope of a Nonvertical Line
Writing Equations of Lines
Graphs of Equations
Proportion and Variation
3. FUNCTIONS
Functions and Function Notation
Quadratic Functions
Polynomial and Other Functions
Translating and Stretching Graphs
Rational Functions
Operations on Functions
Inverse Functions
Exponential Functions and Their Graphs
Applications of Exponential Functions
Logarithmic Functions and Their Graphs
Applications of Logarithmic Functions
Properties of Logarithms
Exponential and Logarithmic Equations
The Remainder and Factor Theorems; Synthetic Division
Descartes' Rule of Signs and Bounds on Roots
Rational Roots of Polynomial Equations
Irrational Roots of Polynomial Equations
6. LINEAR SYSTEMS
Systems of Linear Equations
Gaussian Eliminations and Matrix Methods
Matrix Algebra
Matrix Inversion
Partial Fractions
Graphs of Linear Inequalities
Linear Programming
The Circle and the Parabola
The Ellipse
The Hyperbola
Solving Simultaneous Second-Degree Equations
The Binomial Theorem
Sequences, Series, and Summation Notation
Arithmetic Sequences
Geometric Sequences
Mathematical Induction
Permutations and Combinations
Computation of Compound Probabilities
Odds and Mathematical Expectation
9. THE MATHEMATICS OF FINANCE
Compound Interest
Annuities and Future Value
Present Value of an Annuity; Amortization
Appendix 1: A Proof of the Binomial Theorem
Appendix 2: Tables
Table A
Powers and Roots
Table B
Base-10 Logarithms
Table C
Base-e Logarithms
Appendix 3: Answers to Selected Exercises
Index of Applications
|
{"url":"http://www.ecampus.com/college-algebra-book-cdrom-bca-tutorial/bk/9780534400682","timestamp":"2014-04-17T16:10:12Z","content_type":null,"content_length":"59085","record_id":"<urn:uuid:d31dfc75-8ce1-46cd-b76f-02d8e05f9fc5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Iņigo Quilez - fractals, computer graphics, mathematics, demoscene and more
A nice way to render fractals like Julia o Mandelbrot sets of polynomails is to use the distance from the current pixel to the boundary of the set. This avoids the usual
aliasing problem of rendering fractals, where details are just to small to be visible through the sampling of the image. Without going into details in this article (go to the The Mandelbrot set
math section for that), the distance to the Mandelbrot set can be computed through it's Green function G(c) (or Hubbard-Douady potential), which is a continuous function. rendered with the
Therefore, according to the usual way to approximate distances to isosurfaces in continuous functions, we can estimate the distance to the fractal surface as distance formula.
Click to enlarge
since the derivative G' is we have that
Basically this means than during our regular iteration loop we need to keep track of both Zn as usual and of its derivative Z'n. If we are rendering the standard Mandelbrot set
then simple derivation rules give
and for a Julia set,
float calcDistance( float a, float b )
Complex c( a, b );
Complex z( 0.0f, 0.0f );
Complex dz( 0.0f, 0.0f );
float m2;
for( int i=0; i<1024; i++ )
dz = 2.0f*z*dz + 1.0f;
z = z*z + c;
m2 = Complex::ModuloSquared(z);
if( m2>1e20f )
// distance estimation: G/|G'|
return sqrtf( m2/Complex::ModuloSquared(dz) )*0.5f*logf(m2);
The estimated distance can now be used to do coloring. The video on the right uses the distance to index into a color palette, while the images below simply remap the distance estimation to a
grayscale value:
This is a realtime version of the algorithm, for reference:
|
{"url":"http://iquilezles.org/www/articles/distancefractals/distancefractals.htm","timestamp":"2014-04-18T15:39:27Z","content_type":null,"content_length":"5181","record_id":"<urn:uuid:5ce54fee-b6b1-46e7-a694-79f550e20500>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lewisville, TX Calculus Tutor
Find a Lewisville, TX Calculus Tutor
...I would consider myself able to help with Calculus I (A/B) .... not sure about anything higher .... no one has ever asked for it! I spent the last 6 years of my classroom teaching career
teaching regular Geometry. I know Geometry.
12 Subjects: including calculus, geometry, ASVAB, GRE
...I do not have experience with Microsoft ACCESS. I can help with query strategies and with syntax of most query statements. I am a certified tutor in all math topics covered by the ASVAB.
15 Subjects: including calculus, chemistry, physics, geometry
...Physics is not like Calculus: you can't just get by with knowing a few techniques of integration and differentiation; you really have to be well grounded in the most basic concepts, you have
to start from the very root of it. I tend to have a lot of Physics students who are good in Math but terr...
41 Subjects: including calculus, chemistry, statistics, geometry
...I am specialized in tutoring students for grade-level math, Algebra 1 and 2, geometry, PSAT, SAT, ACT, SAT MATH LEVEL 1&2, Pre-AP Calculus\AP-Calculus and Physics.I am a Texas state certified
teacher (math 4-12), I provide complete Algebra1 course to needed students as acceleration (credit by exa...
20 Subjects: including calculus, physics, statistics, geometry
...I am patient, encouraging and patient with students who are having difficulty in the physical sciences and mathematical concepts. I am creative in developing learning strategies to help
students understand the concepts efficiently.I am a physicist with both a bachelor and master’s degree in physics. As a physicist, we use math as a tool to model or interpret our data.
25 Subjects: including calculus, chemistry, SAT math, statistics
Related Lewisville, TX Tutors
Lewisville, TX Accounting Tutors
Lewisville, TX ACT Tutors
Lewisville, TX Algebra Tutors
Lewisville, TX Algebra 2 Tutors
Lewisville, TX Calculus Tutors
Lewisville, TX Geometry Tutors
Lewisville, TX Math Tutors
Lewisville, TX Prealgebra Tutors
Lewisville, TX Precalculus Tutors
Lewisville, TX SAT Tutors
Lewisville, TX SAT Math Tutors
Lewisville, TX Science Tutors
Lewisville, TX Statistics Tutors
Lewisville, TX Trigonometry Tutors
Nearby Cities With calculus Tutor
Carrollton, TX calculus Tutors
Coppell calculus Tutors
Copper Canyon, TX calculus Tutors
Denton, TX calculus Tutors
Flower Mound calculus Tutors
Frisco, TX calculus Tutors
Grapevine, TX calculus Tutors
Highland Village, TX calculus Tutors
Irving, TX calculus Tutors
Keller, TX calculus Tutors
N Richland Hills, TX calculus Tutors
N Richlnd Hls, TX calculus Tutors
North Richland Hills calculus Tutors
Plano, TX calculus Tutors
The Colony calculus Tutors
|
{"url":"http://www.purplemath.com/Lewisville_TX_Calculus_tutors.php","timestamp":"2014-04-16T22:04:22Z","content_type":null,"content_length":"24141","record_id":"<urn:uuid:f727d130-df00-44d3-90ef-450abd78deec>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trigonometry and Complex Exponentials
Amazingly, trig functions can also be expressed back in terms of the complex exponential. Then everything involving trig functions can be transformed into something involving the exponential
function. This is very surprising.
In order to easily obtain trig identities like , let's write and as complex exponentials. From the definitions we have
Adding these two equations and dividing by 2 yields a formula for , and subtracting and dividing by gives a formula for :
We can now derive trig identities. For example,
I'm unimpressed, given that you can get this much more directly using
and equating imaginary parts. But there are more interesting examples.
Next we verify that (4.4.1) implies that . We have
The equality just appears as a follow-your-nose algebraic calculation.
Example 4.4.4
Compute as a sum of sines and cosines with no powers.
We use (
William Stein 2006-03-15
|
{"url":"http://modular.math.washington.edu/20b/notes/html/node30.html","timestamp":"2014-04-20T06:13:36Z","content_type":null,"content_length":"13455","record_id":"<urn:uuid:9bcfa0c8-005a-413a-95f2-9fd2999325a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newtons cradle
Well, suppose this does happen and set up equations of conservation of kinetic energy and conservation of momentum. We have the situation in which one ball moves with velocity v before the collision,
with two at rest, then two balls move with velocity v/2 after the collision, with the first at rest. Try setting up the equations.
|
{"url":"http://www.physicsforums.com/showthread.php?t=149882","timestamp":"2014-04-16T19:08:36Z","content_type":null,"content_length":"30652","record_id":"<urn:uuid:b3e51f9e-fc90-424e-8f96-f8ef9b7de747>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Loan Calculator
Leave the field blank that you wish to solve for.
Try the
Loan Calculator
gadget! Add it to your iGoogle page, blog, or website.
fixed-rate loan or mortgage. The calculator estimates the Total Interest paid over the course of the loan.
The payment includes Principal+Interest, but not Insurance and Property taxes. It does not take into account rounding, so the estimate may be off be a few dollars. Note also that the "bi-weekly"
payment option is not the same as "accelerated bi-weekly" payments, in which an extra payment is made.
Solving for the Rate requires iteration, and there may be zero or more solutions. When the solver fails to converge, you'll get an error such as "Could not solve for Rate". The rate is rounded to the
6th decimal place.
Extra Payments: To estimate the effect of making extra payments, follow these 2 steps ...
Step 1: Calculate the normal payment. For example, a 150,000 loan at 5% for 30 yrs = $805.23 per month. Write down the total interest, which is about $139,884
Step 2: Clear the Term field and add $100 to the monthly payment so that the Payment field is $905.23. Press Calc and you'll see that the loan is paid off after 23.5 years (the calculated Term) and
the total interest is about $105,279, saving you about $34,600 in interest. In this calculator, we have defined the "Term" as the number of years to pay off the loan.
Loan Amount - This is the amount that you have borrowed.
Annual Rate % - This is the annual interest rate quoted by the lender.
Term of Loan - The total number of years it will take to pay off the loan.
Payment - This is the PI (Principal + Interest) amount that you'll pay each period.
Payment Frequency - Used to specify the number of payments made per year (Monthly = 12 payments per year, Semi-Monthly = 24 per year, Bi-Weekly = 26 per year).
Note: Values returned by this calculator may not be exact, due to rounding or truncation errors. Also, this calculator assumes the compounding period is the same as the payment frequency.
Disclaimer: Your financial situation is unique, and circumstances vary, so don't depend on this Mortgage calculator to make your home financing decisions. Please consult a professional. This
calculator is for informational use only and does not constitute tax or financial advice.
|
{"url":"http://www.calcnexus.com/loan-calculator.php","timestamp":"2014-04-18T08:48:48Z","content_type":null,"content_length":"9045","record_id":"<urn:uuid:c9a3e2b8-30cb-490c-a6aa-752f7853dd57>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
open question about accuracy of measurement and frame of reference
this is pretty in line with the text book narratives explaining relativity..
if I'm on a train and I have a bouncing ball in front of me, and I have a single guess of its normalized vector in space at any point in time. I guess that it is moving down relative to me and if the
ball doesn't lose any energy in its bounce and it stays perfectly aligned with my Y axis then I'm very close in my estimate 50% of the time. right?
now say I'm outside the train and I have the same chance to make a single guess of the ball's vector, I guess that its moving in a vector aligned with the train, now my guess is only exactly correct
a small portion of the time ( when the ball reaches its max bounce, and when it hits the table) but I'm really close to it's perceived vector an increased % of the time. right?
and my guess gets even better the faster the train is moving, as its forward vector overwhelms the small up and down motion of the ball.
Am I wrong in thinking this way?
|
{"url":"http://www.physicsforums.com/showthread.php?p=2881589","timestamp":"2014-04-18T03:15:44Z","content_type":null,"content_length":"20594","record_id":"<urn:uuid:6316819c-8920-44c6-80d5-16c3923c4174>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CRAN Task View: gRaphical Models in R
Maintainer: Soren Hojsgaard
Contact: sorenh at math.aau.dk
Version: 2013-07-26
Wikipedia defines a graphical model as follows: A graphical model is a probabilistic model for which a graph denotes the conditional independence structure between random variables. They are commonly
used in probability theory, statistics - particularly Bayesian statistics and machine learning.
A supplementary view is that graphical models are based on exploiting conditional independencies for constructing complex stochastic models with a modular structure. That is, a complex stochastic
model is built up by simpler building blocks. This task view is a collection of packages intended to supply R code to deal with graphical models.
The packages can be roughly structured into the following topics (although several of them have functionalities which go across these categories):
Representation, manipulation and display of graphs
Classical models - General purpose packages
Miscellaneous: Model search, specialized types of models etc.
Bayesian Networks/Probabilistic expert systems
BUGS models
CRAN packages:
Related links:
|
{"url":"http://cran.stat.auckland.ac.nz/web/views/gR.html","timestamp":"2014-04-19T04:20:01Z","content_type":null,"content_length":"20541","record_id":"<urn:uuid:cffb3d8e-5f0c-459b-b2a6-eba0b47d3d7e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A replacement for IntegerDigits: computing Sloane OEIS A229024 sequence
I am on a quest to determine the 13th term in
Sloane's (OEIS) A229024
, a sequence that I recently authored. This sequence is all about calculating
for a reasonably large subset of contiguous n. For the 13th term, n is in the vicinity of 182623000, the factorial of which has more than 1.4*10^9 decimal digits. It takes some 20 minutes
for Mathematica to return on my system the sum of the digits of one such n!, so it will take me many months to chart the roughly 15000 sums that I think need to be done. Unfortunately, I am
finding that for a random, roughly-2% of the n that I try, Mathematica returns an erroneous result. For example, I get for
a value of 1600311191. I know that this sum is incorrect because it is not evenly divisible by 9. (It is also significantly smaller than the sum of digits for nearby n-factorial. It appears
introduces more than a billion
Havermann additional
1 Vote trailing zeros preceding the 45653996 trailing zeros that I expect.) I've reported the bug to Wolfram in the hope that this will be fixed in some future version, assuming of course that it
is not my system that is responsible. So what procedure can I use to get a
value for
in the interim? I thought to use
Total[RotateRight[DigitCount[182616009!]]*Range[0, 9]]
results in the identical incorrect value as before.
Which version were you using and on what platform? With version 9.0.1, I get
In[1]:= Total[IntegerDigits[182616009!]]
Out[1]= 6226582986
The number of trailing zeros is as expected:
Ilian Gachevski In[2]:= IntegerExponent[182616009!]
3 Votes Out[2]= 45653996
In[3]:= Length[Last[Split[IntegerDigits[182616009!]]]]
Out[3]= 45653996
I am using 9.0.1 as well, on a current generation iMac (OS 10.8.5) with 32GB RAM (which is the maximum possible). Just doing
consumes most of that RAM (I do have other stuff running in the background). The fact that you would consider doing a
Hans Havermann
on top of that suggests that you have at least twice as much RAM and points to the likelihood that my sporadic
1 Vote
misbehavior is the result of a system memory constraint.
Iliang: What is your $MaxNumber?
Simon Schmidt 182616009! (* Overflow[] *)
$MaxNumber (* 1.233433712981650*10^323228458 *)
1 Vote
Thank you for the information, today I have been able to reproduce this problem on a couple of OS X machines, and will contact the appropriate developers. Your feedback is much
For a possible workaround in the meantime, perhaps you could try one of the approaches mentioned in this Mathematica SE
Ilian Gachevski
, although I am afraid they are more likely to exhaust the available memory (yes, I did not worry much about Split, but that's only because the Linux server I used had 144 GB of RAM).
2 Votes
@Simon: There is a bit more headroom on a 64-bit machine. On my Linux box, $MaxNumber is
After looking at that link, I thought to try just breaking up my number into two parts and doing the sum on each:
In[1]:= ds[n_] := Total[IntegerDigits[n]]
In[2]:= f = 182616009; z = 0; c = 1; While[d = Floor[f/5^c]; d > 0, z = z + d; c++]; g = f!/10^z;
In[3]:= ds[g]
Out[3]= 1394899135
In[4]:= a = Quotient[g, 10^714000000]; b = Mod[g, 10^714000000]; ds[a] + ds[b]
Hans Out[4]= 6226582986
It worked! Thank you for your help.
It won't work every time. I've tried this now on some of the other factorials that I didn't get and it's hit or miss. I guess my procedure shifts the burden to two new numbers and if one
of those is susceptible to this bug, you still get a wrong answer.
Hans, if you happen to upgrade your operating system to OS X 10.9 Mavericks, I would suggest that you try the original computation again. Based on some testing, I would expect it to
work properly with 10.9.
1 Vote
Yes, my original computation now works under Mavericks. I'll hold off saying that it's fixed until I've done several hundred more digit sums. Mac memory management (though seamless to the
user) was already a pretty complex affair in 10.8, and 10.9 introduces some new tricks in that department. I was half-expecting things to get worse, not better.
I have now done 400
sums in 10.9
Havermann without a single fail. In 10.8 I would have encountered about six incorrect results, so I'm reasonably convinced that what I had initially assumed was a Mathematica bug was in fact Mac OS
misbehavior. I have now done over 2500 sums and my very early result is that the 13th term in Sloane's (OEIS) A229024 is less than or equal to 180. I had been considering purchasing the
upcoming Mac Pro which would definitely help me speed things along but I'm worried that it won't accomodate more than 64GB RAM, which would be a serious negative for such a high-end
|
{"url":"http://community.wolfram.com/groups/-/m/t/128136?p_p_auth=LBnsgEe4","timestamp":"2014-04-19T01:47:58Z","content_type":null,"content_length":"98486","record_id":"<urn:uuid:79f0c68a-621d-4b4d-ad56-61e31c104d27>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: efficient compare
From: Andersen <andersen_800_at_hotmail.com> Date: Tue, 25 Apr 2006 15:23:42 +0200 Message-ID: <444E22DE.6040403@hotmail.com>
I am glad someone is actually commenting on my algorithm.
Bob Badour wrote:
> >
> [snipped example]
> We are dealing with three numbers: C, N, and M. C is the cardinality of
> the relation and is linearly proportional to the size of the relation. N
> is the size of the hash table, which is arbitrarily chosen as a large
> number. M is the number of changes since the last synch and is linearly
> proportional to the size of the log file.
C is the cardinality of the sets right? The only relations I know are the tuples (which form a subset of SxS where S is whatever the components of the tuple are).
> We established earlier that C >> M.
Not necessary. I want a generic algorithm which works under all circumstances, where you pay a "cost proportional to the amount of asynchrony".
> Assuming a uniform distribution and C > N, one can expect to find C/N
> tuples in each bucket. It makes no sense to choose N < M, because one
> will expect to find M/N > 1 updates in each bucket meaning you will send
> O(2^N) messages plus O(C) tuples. If we choose N > M and have a uniform
> distribution of changes in buckets, we can expect O(M) <= M buckets to
> have updates.
Still with you... also agree with assumption of N > M.
> In the example snipped, you give M = 1 and log(N) = 160. I simply
> observe that 160 >> 1. For M << N, your example suggests you will need
> to send O(M * log(N)) messages plus O(M * C/N) tuples, which is much
> larger than the logfile O(M).
I agree to the cost of my algo being
O(M * log(N)) [for transfering M checksum path of the checksum tree) + O(M * C/N) [cost of M buckets each containing C/N items]
> What did I misunderstand?
Nothing, I think we agree about what my proposal does. I guess the problem is I do not know exactly what it means to send a logfile. So I really cannot compare to it. Please help me understand a bit
more about the basics of having log files.
Some questions that come to my mind:
The log would grow beyond C, right? As time goes, the size of the log grows to inifinity, even with a fixed C? Then you have to find ways around that with checkpoints etc?
If the log can be really big, how do you know how much of it to transfer? Lets say the checksum of the two logs on the two computers mismatch, how much of it should we send?
My main question: If we have several machines, and they are each making updates locally, and trying to synchronize all with eachother (running your pairwise log synch algo), the logs are not just
this monotonically increasing log.
At time t nodes A and B and C have the log X. Some moment later, A appends some entry to its local log X. At the same time, B does some modification to its X. A third node also makes some updates.
How do we go about making this work? Received on Tue Apr 25 2006 - 08:23:42 CDT
|
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2006/04/25/0962.htm","timestamp":"2014-04-20T12:45:35Z","content_type":null,"content_length":"9845","record_id":"<urn:uuid:782dbca7-544a-45b6-a3df-e4634cf86eeb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] untenable matrix behavior in SVN
Alan G Isaac aisaac@american....
Fri Apr 25 15:57:30 CDT 2008
On Fri, 25 Apr 2008, Christopher Barker apparently wrote:
> I think a Vector object would allow both of:
> M[i,j] == M[i][j]
> and
> M[i] == M[i,:]
The problem is that it would be a crime to give up
the natural production of submatrices. The NATURAL RULE
is: to get a submatrix, use nonscalar indices.
We should *not* give up that x[0,:] is a sub*matrix*
whose first element is x[0,0] and equivalently x[0][0].
*This* is why we must have x[0]!=x[0,:] if we want,
as we do, that x[0][0]==x[0,0].
Note that the idea for attributes ``rows``
and ``columns`` is contained on the discussion page:
I claim that it is natural for these attributes
to yield properly shaped *matrices*.
Once we have these attributes, it is difficult to
see what we gain by introducing the complication
of a separate vector class.
I still see no real gain from a separate vector class
(or row and column vector classes).
Everything that has been proposed to do with these
is achievable with the corresponding matrices with
one exception: scalar indexing -> element so that
x[0][0]==x[0,0]. But this outcome is achieved
MUCH more simply by letting x[0] be a 1d array.
Anyway, I am glad you agree that letting ``x[0]`` be a 1d
array is the proper provisional solution.
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/033200.html","timestamp":"2014-04-21T14:48:30Z","content_type":null,"content_length":"4049","record_id":"<urn:uuid:7c1e6c3f-2202-4e0a-9995-5c05662e361a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
|
See that if you add quarter of a cake to quarter of a cake, ¼ + ¼, you now have two quarters 2/4, or half a cake. Obviously, that is written as one over two, ½ , this can be described as one cake
divided into two. If you add a quarter to a quarter to a quarter to a quarter, ¼ + ¼ + ¼ + ¼, you now have the whole cake, that is one cake, which can be written as 4/4, while three quarters of the
cake will be written as ¾.
As you will notice, the four quarters of the cake, 4/4 [2], which as already said means four divided by four, and we know very well that four divided by four is one - that is in this case, one cake.
Now it is important for an understanding of fractions, that at each stage the numbers are attached clearly to the physical objects and the manipulation of those physical objects. As we build this
connection, it will later mean that as the learner moves to calculating with the numbers, they will acquire a feel for whether the sums make sense. The learner will be able to return to cakes and
blocks, if confused, in order to make sure that the sums work in the real world.
We have two quarters 2/4, which we then said is half a cake ½, and simply, by looking at the cake or the blocks, you can immediately see with your eyesy-piesyes that two quarters of the cake is
indeed half the cake. Thus it is clear that half is the ‘same’ as (equal to) two quarters: 1/2=2/4.
If you look closely, you will see that, in the expression 2/4, if we divide the two by two, the result is one; and if we divide the four by two, the result is two. That is by carrying out this rule -
do the same to the top and to the bottom - the result works in this case. Soon we shall move on to see whether this magical performance will continue to work in all other circumstances.
If we go up the leaning Tower of Pisa and drop stones, we will see that the stones usually fall to the ground, maybe sometimes bouncing of the heads of passers by. Of course, you can try this while
sitting on a chair, when you may bean a mouse. That these objects fall when you let go of them, eventually becomes so obvious and happens with such predictability, that you decide that this
phenomenon is a rule in the real world. If you drop a computer, or an elephant, or even yourself, each will fall. So, when you pick yourself up, be careful not to drop yourself.
In due course, you will probably be prepared to agree that when you divide (or multiply) the top and bottom parts of a division sum, or fraction, by the same number, the value after these divisions
or multiplications (the value being the amount of cake) does not vary from the fraction before the division or multiplication. 50/100, 6/12, 4/8, 3/6, 2/4, 1/2 are all equivalent fractions and all
indicate one half.
adding or subtracting fractions
simple fractions
Some examples of simple additions with fractions:
three quarters plus one quarter equals four quarters, which equals one
3/4 + 1/4 = 4/4 = 1
one third plus two thirds equals three thirds, which equals one
1/3 + 2/3 = 3/3 = 1
Some examples of simple subtractions with fractions:
one minus one third equals two thirds: 1- 1/3 = 2/3
one minus three quarters equals one quarter : 1 - 3/4 = 1/4
Note that with these simple sums, the number underneath - the divisor or denominator - is the same for all parts of the sum. Each fraction being added the same sort of fraction, for instance:
quarters, or tenths, or eighteenths.
Simple multiplications and divisions with fractions are on the next page - fractions, decimals, percentages and ratios 2. That deals with more complicated fraction sums.
fractions including whole numbers
These are fractions such as one and a quarter: 1 1/4.
Adding such fractions is found in sums such as
1 1/4 + 1 1/4, or 2 1/5 + 3 2/5.
When adding such fractions together, you add the whole numbers together and add the fractions together:
1 + 1 + 1/4 + 1/4.
Thus, 1 1/4 + 1 1/4
= 2 + (1/4 + 1/4)
= 2 + 2/4
= 2 2/4 = 2 1/2
Or, 2 1/5 +3 2/5
= 5 + (1/5 + 2/5)
= 5 +3/5
= 5 3/5
And with three fractions including whole numbers:
1 5/6 + 1 1/6 + 1 2/6
= 3 + (5/6 + 1/6 + 2/6)
= 3 + 8/6
Oh wow, you say, 8/6 is more than 6/6 (or one)!
Yes, so split 8/6 into a whole number and a fraction by subtracting 6/6 (that is, 1) from 8/6:
8/6 - 6/6 = 2/6.
To recap, 1 5/6 + 1 1/6 + 1 2/6
= 3 + 1 +2/6
= 4 2/6
= 4 1/3.
Another way of going about this last sum is to turn everything into sixths [1/6s] .
So ... for 1 5/6, 1 is equivalent to 6/6, that is six parts of something divided into six. Now to that add the 5/6. First, remember that 5/6 is five parts of something divided into six . So 6/6
[six parts of six parts] plus 5/6 [five parts of six parts] makes 11/6.
Now 1 1/6 is 6/6 plus 1/6, which makes 7/6,
while 1 2/6 is 6/6 plus 2/6, which makes 8/6.
Thus, we add 11/6 + 7/6 + 8/6, which makes 26/6. Now this is 26 parts of something divided into six. So let’s divide 26 by 6, and the result is 4 2/6.
The final sum is 1 5/6 + 1 1/6 + 1 2/6
= 26/6
= 4 2/6
= 4 1/3.
mixed-up fractions
adding, say, 2/3 and 1/2, or 1/3 + 3/5,
or 1/5 and 3/8, or 7/20 and 7/12
adding two thirds to one half (2/3 + 1/2)
Using mixed-up fractions is a little more complicated, involving adding fractions like two thirds to one half: 2/3 + 1/2.
Starting with a simple example, a way of working out what is the common ‘bottom number’ of fractions in a sum is to lay out two rows of Cuisenaire rods, each row being made up of rods the length of
one of the numbers being considered. For the sum 2/3 + 1/2, the two numbers would be 3 and 2.
Here you can see that three lots of two (red rods) is equivalent to two lots of three (green rods).
As you can see, one third of one is equivalent to two sixths, and one half is equivalent to three sixths. So we have two thirds, that is four sixths, and by adding the half, which is three sixths, we
have seven sixths in all.
Looking back at the rod picture, you will see that the sixths go commonly into both the halves and the thirds. The sixths are the lowest common denominator, the biggest fraction that will divide into
both the thirds and the halves. You will also notice that two times three equals six (2 x 3 = 6), thus multiplying the denominators together will give you at least one common fraction into which the
fractions you wish to add may be divided.
adding one third to three fifths (1/3 + 3/5)
As you cannot add thirds and fifths as they stand, any more than you can add apples and oranges and end up with plums, you have to turn the apples and oranges into a similar purée in order to add
purée to purée. So our first job is to purée the different fractions that we are trying to add together.
What we can do is turn the thirds into ninths or twelfths, or turn the fifths into tenths or twentieths, but we still will not be able to add them together efficiently.
So what we are trying to do is to find a way of cutting the thirds and the fifths into the ‘same’ size chunks as each other. This is what the search for a lowest common multiple is all about. It is,
in fact, about finding the biggest bits (or fractions) that we can divide both thirds and fifths into evenly. So, in fact, in a way, this is the biggest common bits we can make of both thirds and
fifths. Once again, we have some rather clumsy jargon.
Now let’s set about finding these biggest common bits (fractions) or, as the jargon says, the lowest common multiple. As you can see from the photo-diagram below, you can turn thirds into fifteenths
and you can divide fifths into fifteenths! You can look at the yellow blocks as being fives, or as being five fifths; and the green blocks as being threes, or being three thirds. Or you can look at
the stretch of fifteen white blocks as being fifteenths. As usual, the way you count depends entirely on your purpose.
As you can see in this diagram, both the fifths (5ths, yellow) and the thirds (3rds, green) can be turned into fifteenths (15ths, white), and so can be added together with ease. Instead of one third
and three fifths, we now have three fifteenths and nine fifteenths, which total twelve fifteenths.
Ah, Eureka, as Eratosthenes’ mate Archimedes expostulated, leaping out of his bath and running down the main street in his excitement - well, actually, in his birthday suit. And as reporters of his
day tell it, he wasn’t even arrested. It sounds like they had more sense in those days, or in ancient Greece.
But enough of this jollity, back to our search for biggest bits.
adding one fifth and three eighths (1/5 + 3/8)
So we are standardising two fractions which, in the real world, could both be portions of a cake (or anything else) divided into the same number of parts.
Supposing we have one part of, say, a cake cut into five portions, to be added to three portions of a cake divided into eight parts: 1/5 + 3/8. It would be easier if the cake was divided into the
same number of portions for each fraction.
Here is another way of visualising such as sum, which is also another way of visualising the lowest common multiple, this time of 5 and 8:
│ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │ 12 │
│ 1 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │ 12 │
│ 2 │ 2 │ 4 │ 6 │ 8 │ 10 │ 12 │ 14 │ 16 │ 18 │ 20 │ 22 │ 24 │
│ 3 │ 3 │ 6 │ 9 │ 12 │ 15 │ 18 │ 21 │ 24 │ 27 │ 30 │ 33 │ 36 │
│ 4 │ 4 │ 8 │ 12 │ 16 │ 20 │ 24 │ 28 │ 32 │ 36 │ 40 │ 44 │ 48 │
│ 5 │ 5 │ 10 │ 15 │ 20 │ 25 │ 30 │ 35 │ 40 │ 45 │ 50 │ 55 │ 60 │
│ 6 │ 6 │ 12 │ 18 │ 24 │ 30 │ 36 │ 42 │ 48 │ 54 │ 60 │ 66 │ 72 │
│ 7 │ 7 │ 14 │ 21 │ 28 │ 35 │ 42 │ 49 │ 56 │ 63 │ 70 │ 77 │ 84 │
│ 8 │ 8 │ 16 │ 24 │ 32 │ 40 │ 48 │ 56 │ 64 │ 72 │ 80 │ 88 │ 96 │
│ 9 │ 9 │ 18 │ 27 │ 36 │ 45 │ 54 │ 63 │ 72 │ 81 │ 90 │ 99 │ 108 │
│ 10 │ 10 │ 20 │ 30 │ 40 │ 50 │ 60 │ 70 │ 80 │ 90 │ 100 │ 110 │ 120 │
│ 11 │ 11 │ 22 │ 33 │ 44 │ 55 │ 66 │ 77 │ 88 │ 99 │ 110 │ 121 │ 132 │
│ 12 │ 12 │ 24 │ 36 │ 48 │ 60 │ 72 │ 84 │ 96 │ 108 │ 120 │ 132 │ 144 │
Here we use a cross table to work out which is the smallest number into which both 5 and 8 will divide (their lowest common multiple). The highlighted crimson numbers show that 5 and 8 are both
multiples of 40.
Using the cross table above (if necessary),
1/5 = 8/40 and 3/8 = 15/40.
(Five goes into forty, eight times. Therefore one fifth equals eight fortieths. While eight goes into forty, five times; and we have three eighths, or fifteen fortieths).
So 1/5 + 3/8
= 8/40 + 15/40
= 23/40.
adding seven twentieths and seven twelfths (7/20 + 7/12)
Now looking back to the previous cases of 1/3 + 3/5, 1/5 + 3/8 and 2/3 + 1/2, notice that in each case we could find the biggest bits into which we can divide (split up) both fractions by multiplying
the bottom numbers (denominators) together. So 3 x 5 gives 15ths, 5 x 8 gives 40ths and 2 x 3 gives 6ths.
Clearly, if we multiply 3 x 5, giving the result of 15, then 3 will go into 15 five times and 5 will go into 15 three times; and so on. But, as the numbers become bigger, it is sometimes thought
helpful to continue to find the biggest bits that will go into each fraction, rather than deal in hundredths, or thousandths or seven thousand two hundred and twentieths (7,220ths) and onwards.
Although, in these days of calculators and computers, we could really work work with whatever number that is obtained by multiplying any and all the denominators (bottom parts of fractions) together.
So now I am going to show you how your ancestors went about dealing with this problem in ‘the good old days’. For this next example, we will use the method of finding prime numbers of the bottom
parts of the fractions, using the sum 7/20 + 7/12. Again, we are looking for the biggest bits (lowest common multiple) that go into both twentieths and twelfths.
We look for the prime factors of 20 and 12. To do this, start by dividing by the smallest prime factor you think will go until you run out of prime factors. I have an example here:
20 ÷ 2 =10 12 ÷ 2 = 6
10 ÷ 2 = 5 6 ÷ 2 = 3
and the remaining 5 is a prime number. and the remaining 3 is a prime number.
So the prime factors of 20 are 2 x 2 x 5. So the prime factors of 12 are 2 x 2 x 3.
Now examining these prime factors, the separate prime factors that make up both 20 and 12 are 2 x 2 x 3 x 5. 2 x 2 is common to both numbers, so 2 x 2 needs to be used just one time in making the
common biggest bit.
We could, of course, just multiply 12 by 20 and deal in 240ths, but this way we only have to deal in 60ths (2 x 2 x 3 x 5).
So now to write out the sum of 7/20 + 7/12 in the old-fashioned style:
In the first line, the denominators (bottom parts) are expanded into their several factors.
In the second line, the denominators of the two fractions, 20 or (2x2x5) and 12 or (2x2x3), have been combined together (2x2x3x5). This results in the fraction 7/20, or 7/(2x2x5), being multiplied by
3 on the bottom part (that is, (2x2x5) x 3). So in order to keep this fraction balanced, we also multiply the top part by 3 (7x3).
1/(2x2x5) is 1/20, whereas 1/(2x2x5x3) is 1/60. There are, of course, 3/60ths in each 1/20. As there are 7/20ths in the original sum, 7/20ths therefore becomes 21/60ths.
Similarly with 7/12, or 7/(2x2x3), this fraction is multiplied by 5 at the bottom, so to keep it balanced we must also multiply the top part by 5.
Now the big combined fraction sum is simplified by doing the various multiplications inside the brackets, and then the addition.
Tidying up and another form of balance.
As we saw briefly earlier, a half is equal to two quarters. You can easily see, dividing both the top and bottom of the fraction 2/4 by two gives 1/2, and of course, multiplying the top and bottom of
1/2 by two gives 2/4, thus maintaining the balance between the top and the bottom of the fraction.
As long as you multiply, or divide, both top and bottom by the same number, the balance will remain the same. As noted before, multiplication and division are opposites, or reverse (reversing)
Fractions are are a form of ratio. Three-quarters is ‘three to four’ or ‘‘three out of four’. So as long as you keep the balance, for example six to eight’ or ‘twelve to sixteen’, the fraction does
not change value and the ratio is preserved.
However, if you add, or subtract, the same number to the top and bottom of a fraction, so 1/2 becomes 2/3 by adding one to both the top and the bottom of the fraction 1/2, and the ratio or value is
destroyed. [3]
Now some worked examples.
First 56/60, the result found for the mixed-up fraction addition, 7/20 + 7/12
7/20 + 7/12 = 56/60
The long way to cancel is to divide both top and bottom, 56 and 60, by 2 - the smallest prime number, until you can divide no more. Then, if it is possible, you continue with the next prime number.
This is a bit like finding the biggest common bits (or lowest common factor).
As you gain more experience, you will perhaps notice straightaway that 56 and 60 are divisible by four (2 x 2).
Next let’s cancel down the fraction 12/15.
15 is not divisible by 2, so we start dividing both top and bottom with the next prime number, 3.
When cancelling down a number greater than one, turn the number into a fraction including a whole number (an ‘improper’ or top heavy fraction), and do the cancelling to that. For instance, in the
addition sum 1/4 + 1 1/2 + 1 3/4, the sum is turned into 14/4, which cancels down to 3 1/2.
Until we reach putting in details in this section, 1/10 is written as ·1, that is a point (or dot) before the one, and one quarter is written as ·25, that is 25/100, and so on.
Adding decimals and subtracting decimals is no different from adding or subtracting any other numbers, beyond the necessity of keeping the decimal points lined up. It is easy to practise this with
the calculator below.
Some examples:
• 2·43 +1·68 = 4·11
To replicate this sum using the abelard.org maths educational counter,
□ Reset Counter Value to 1·68;
□ Set Decimal Places to 2;
□ Change Step to 2·43;
□ Switch Direction (if necessary) to Increasing;
□ Now click on the Manual Step button once.
• 4 + ·09 =
4·09 When calculating with decimals, it is important to lay out the numbers with the decimal points lined up. In order to help the learner to keep clear in their own mind what is
happening, it can be helpful to add extra zeroes, shown in the example to the left.
• 2·43 - 1·68
= ·75 This subtraction sum is handled like any other subtraction, as is explained in detail on the writing down sums page. Again, make sure that the decimal points are lined up as
described above.
abelard.org maths educational counter
[This counter functions with javascript, you will need to ensure that javascript is enabled for the counter to work.]
The full version with more detailed instructions, go to the introduction page.
Thus, to practise sums with decimals, for example ·25 x 4,
□ Reset Counter Value to 0;
□ Set Decimal Places to 2;
□ Change Step to ·25;
□ Switch Direction (if necessary) to Increasing;
□ Now click on the Manual Step button four times. The red number counts to 4.
The counter counts up: 0, ·25, ·5, ·75, 1. Thus ·25 x 4 = 1.
Now help the learner to try other multiplication sums. Each time, click the red Reset button to return Manual Steps (the red number) to zero.
Below is a concise version of the Brilliant abelard.org educational maths counter. For an expanded version with more detailed instructions, go to how to teach your child number arithmetic mathematics
- introduction.
┃ ┃
┃ [This counter functions with javascript, you need to ensure that javascript is enabled for the counter to work.] ┃
┃ ┃
┃ Is the counter Manual or Automatic? : You have done manual steps since the last reset Decimal Places ┃
┃ [between 0 and 5]: ┃
┃ the counter is displayed up to decimal places ┃
┃ ┃
┃ Reset Counter Value: Change Step: Direction: ┃
┃ Enter step size: Counting up/counting down ┃
┃ [enter number in base 10] [enter: step size in base 10] ┃
┃ ┃
┃ change step size: ┃
┃ Base [between 2 and 32]: Change Speed: ┃
┃ the counter is displayed in base is added or subtracted on each update the counter changes every seconds. ┃
┃ ┃
┃ ┃
┃ You are welcome to reproduce this configurable counter. However, all pages that include this counter must display a prominent and visible link to: ┃
┃ http://abelard.org/sums/teaching_number_arithmetic_mathematics_introduction.php, with the following text: ┃
┃ “The Configurable Practice Counter was developed by the auroran sunset on behalf of abelard.org and is copyright to © 2009 abelard.org”. ┃
┃ This text and the code, including all comments, must not be altered. ┃
┃ ┃
┃ © 2009 abelard.org ┃
100% (one hundred percent) is a whole cake. 1% is 1/100th (one hundredth) of the cake.
1/10th (one tenth) is written as 10%, that is a percent sign (%) after the ten, and one quarter is written as 25% and so on.
nine one hundredths, or 9/100, or 9%, or ·9
fifty-six one hundredths, or 56/100, or 56%, or ·56
It is necessary to remain alert to the following facts:
• If you take 10% from one hundred, you end up with 90. 100 - 10% = 90.
Whereas, if you now add 10% to 90, the result is 99, not 100. 90 + 10% =99.
• An increase of 10% in a population of 60 million is 6 million.
Whereas, if you have ten pounds and it is increased by 10%, then all you gain will be one pound. Governments and advertisers constantly work to confuse people by taking advantage of the general
lack of numeracy among the population. For example, governments will tell you that they are spending an extra £5 million on health services or schools, rather than tell you that £5 million is a
very small fraction of 1% of the expenditure in these areas.
But when government increases the number of helicopters for the military from 5 to 8, they will trumpet that they have raised the number of helicopters by 60%, while carefully omitting the real
numbers of five and three more helicopters.
Your advertiser of cat foods will tell you that eight out of ten cat owners said that their cats preferred Kattosludge (boiled meat factory waste), while failing to tell you that the cat owners
worked for Kattosludge Incorporated, and only ten owners took part in the survey.
Adding percentages and subtracting percentages is no different from adding or subtracting any other numbers, or fractions (remember that 1% is one hundredth of something.
Ratios are very similar in behaviour, but can be used rather haphazardly. For example, 3:1 (three to one) can mean that for every one apple there are three oranges. Notice that in such usage, there
are four real objects, the three apples plus the orange.
Another usage is in betting, where you may may have odds of four to one. This tends to mean that you bet one pound, dollar or euro in the hope of winning four sponduliks back for every one you lay
out. Usually, if your horse or camel wins, the bookmaker will pay you both the four zelottis for your winning plus the one zelotti of your original bet. This sort of bet is often termed as “four to
one against”.
If the ratio is expressed the other way about, that is 1:4, it is referred to “four to one on”. In other words, for every four you bet, you will gain one and, of course, receive back you stake of
four. So you receive five for an outlay of four - if you win. These are the sort of odds you are offered if you have a very good chance of winning, or even too good a chance of winning.
It is always wise to be very clear on definitions when ratios or percentages are being quoted, and to remember that most gambling is a tax on stupidity.
end notes
1. Be aware that fractions can be written in more than one style. One style is thus, ½, and another is 1/2. A third style is
And of course, 1 ÷ 2 also has the same meaning.
2. The technical names for the parts of a fraction are:
The numerator is sometimes called the dividend, while the denominator is also called the divisor.
When the top part of the fraction (the numerator or dividend) is smaller than the bottom part (the denominator or divisor), strangely and historically the fraction is called a ‘proper fraction’.
A proper fraction is always of a value that is less than one.
When the numerator or dividend is larger than the the denominator or divisor, the fraction is weirdly called an ‘improper fraction’.
3. In another page dealing with equations, you will see equations, that is sums arranged either side of an equals sign, where balance is maintained as long as the same is done to both sides - add,
subtract, multiply, divide or whatever. Whereas, with fractions, the balance is maintained only by forms of multiplication and division.
4. The nomenclature widely and normally used for fractions is illogical and confusing. How you wish to deal with this, I will leave to you. The common usage is to call numbers of the form 1 1/2,
where there is both a whole number and a fractional part of a number a ‘mixed fraction’.
Often it is useful to handle such numbers by putting it all into fractional form. Thus, with 1 1/2, the 1 is broken into two halves and added to the remaining half. Thus 1 1/2 becomes three
halves or 3/2. And this type of fraction is foolishly called an ‘improper fraction’.
Remember, I wish that people be taught in a realistic and meaningful manner using sensible language. There is nothing improper about a fraction valued at over one. Neither is there much mixed
about that same fraction expressed when expressed in the form 1 1/2. The so-called mixed fraction, or the so-called improper fraction, are essentially numbers pointing to values in the real
Meanwhile, teaching fractions etcetera proceeds in a logical order from simple fractions, such as 1/4 + 1/4, then moving onto numbers that include whole numbers, such as 1 1/4 + 1/4, and finally
moving onto actually mixed fractions such as adding 1/4 +1/5. But if I call these last examples mixed, as is logical, the learner is likely to become confused when meeting teachers that use
clumsy language or jargon.
With children who are coping well, I teach learners logically and warn them that they will very probably come across people using the clumsy jargon. I explain to them how this jargon is used,
while telling them not to worry much about it, just learn enough that they remember how the jargon is used when they run across it in classrooms or in exams. In the body of this page and on other
pages, I shall primarily use clear and sensible terms.
5. It is common to put a zero before the decimal point (dot), as in 0·1. As people become used to mathematics, they tend not to put the zero in front, as it has no meaning any more than they tend to
put a zero after a number like 1 or 15, as in 1·0 or 15·0. But it is common, at times, to use leading or trailing zeroes in lists of figures.
However, given that most calculators, including those on computers, automatically include a zero in a sum such as ·3 + ·5, you may have difficulty in convincing others that the zero before a
decimal point is unnecessary and should not really be there.
the decimal point
On these sums pages, generally we display the decimal point in the middle of the numbers, for instance 1·7. However, because doing this requires a special character, both in print and on a
computer, nowadays, the decimal point is frequently displayed as a full stop/period: 1.7. (In continental countries such as France, a comma is used as the decimal marker: 1,7, while the thousands
marker is a space: 2 200 or, sometimes, a dot: 2.200.)
6. This method will work just as well when adding or subtracting three or more mixed-up fractions.
|
{"url":"http://www.abelard.org/sums/teaching_number_arithmetic_mathematics_fractions_decimals_percentages1.php","timestamp":"2014-04-21T12:09:39Z","content_type":null,"content_length":"95774","record_id":"<urn:uuid:c10cdfb6-eb0c-4670-b688-0d7c146ce244>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I can add to infinity
So, if I have 3 apples and give 3 apples away, I now have an infinity amount of apples?
The flaw in this logic is that reality is split in half as "positive" in "negative"; when in fact, reality is all, the entire, the whole, all that exists (1/1); but reality can be broken down into
bits and pieces such as (galaxies, planets, lifeforms, organs, tissues, cells, etc.) so instead of a number line, it should be a FRACTION line where reality is (1/1) and it can be cut down into
1/1 (WHOLE/ENTIRE), 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, (~ infinity)
But, what IS "the whole"? Easy, it is ALL that IS, all that is continuously happening NOW, since time is just measured movement and memories/thoughts of the mind.
|
{"url":"http://www.abovetopsecret.com/forum/thread871298/pg5","timestamp":"2014-04-19T01:57:42Z","content_type":null,"content_length":"57417","record_id":"<urn:uuid:a64070ed-51ce-4a5f-aef9-9ec540ebde34>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A line that a curve approaches as it heads towards infinity:
There are three types: horizontal, vertical and oblique:
The curve can approach from any side (such as from above or below for a horizontal asymptote)
And may actually cross over (possibly many times), and even move away and back again.
The important point is that:
The distance between the curve and the asymptote tends to zero as they head to infinity
Horizontal Asymptotes
It is a Horizontal Asymptote when:
as x goes to infinity (or to -infinity) then the curve approaches some fixed constant value "b"
Vertical Asymptotes
It is a Vertical Asymptote when:
as x approaches some constant value "c" (from the left or right) then the curve goes towards infinity (or -infinity)
Oblique Asymptotes
It is an Oblique Asymptote when:
as x goes to infinity (or to -infinity) then the curve goes towards a line defined by y=mx+b (note: m is not zero as that would be horizontal).
Example: (x^2-3x)/(2x-2)
The graph of (x^2-3x)/(2x-2) has:
• A vertical asymptote at x=1
• An oblique asymptote: y=x/2-1
|
{"url":"http://www.mathsisfun.com/algebra/asymptote.html","timestamp":"2014-04-17T15:26:50Z","content_type":null,"content_length":"9906","record_id":"<urn:uuid:1bbe4a22-6277-4312-b035-70fdad750c0e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bag type
distinctSize :: IntBag -> IntSource
O(n). Returns the number of distinct elements in the bag, ie. (distinctSize bag == length (nub (toList bag))).
union :: IntBag -> IntBag -> IntBagSource
O(n+m). Union of two bags. The union adds the elements together.
IntBag\> union (fromList [1,1,2]) (fromList [1,2,2,3])
foldOccur :: (Int -> Int -> b -> b) -> b -> IntBag -> bSource
O(n). Fold over all occurrences of an element at once. In a call (foldOccur f z bag), the function f takes the element first and than the occur count.
Ordered list
Occurrence lists
fromOccurMap :: IntMap Int -> IntBagSource
O(1). Convert a IntMap.IntMap from elements to occurrences into a bag. Assumes that the IntMap.IntMap contains only elements that occur at least once.
showTreeWith :: Bool -> Bool -> IntBag -> StringSource
O(n). The expression (showTreeWith hang wide map) shows the tree that implements the bag. The tree is shown hanging when hang is True and otherwise as a rotated tree. When wide is True an extra wide
version is shown.
|
{"url":"http://hackage.haskell.org/package/uulib-0.9.5/docs/UU-DData-IntBag.html","timestamp":"2014-04-21T11:18:48Z","content_type":null,"content_length":"29668","record_id":"<urn:uuid:de832022-2aab-4d8a-8720-9e15063c4709>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: May 2010 [00270]
[Date Index] [Thread Index] [Author Index]
Re: Why do these not work?
• To: mathgroup at smc.vnet.net
• Subject: [mg109762] Re: Why do these not work?
• From: "Alexander Elkins" <alexander_elkins at hotmail.com>
• Date: Sun, 16 May 2010 05:56:41 -0400 (EDT)
• References: <hse3nk$36b$1@smc.vnet.net>
(Re-posting - lost some how...)
"S. B. Gray" <stevebg at ROADRUNNER.COM> wrote in message
news:hse3nk$36b$1 at smc.vnet.net...
> There is something about Mathematica which does not allow the first two
calls to
> work, and I don't know why. Can anyone give me some guidance?
> First, do I need all these things in the Module and is there a shorter
> way to "localize" them?
> plane3[p1_,p2_,p3_] :=
> Module[{x1,y1,z1, x2,y2,z2, x3,y3,z3, iden},
> {x1, y1, z1} = p1;
> {x2, y2, z2} = p2;
> {x3, y3, z3} = p3;
> iden = 1/Det[{{x1,y1,z1},{x2,y2,z2},{x3,y3,z3}}];
> (* Or 1/Det[{p1,p2,p3}] *)
> aa = iden*Det[{{ 1, y1, z1},{ 1, y2, z2},{ 1, y3, z3}}];
> bb = iden*Det[{{x1, 1, z1},{x2, 1, z2},{x3, 1, z3}}];
> cc = iden*Det[{{x1, y1, 1},{x2, y2, 1},{x3, y3, 1}}];
> Return[{aa, bb, cc}]; (* Plane: aa*x + bb*y + cc*z = 1 *)
> ]
The only thing truly wrong with the plane3 example is that it does not
require that each point be a list of length three.
Here is the simplest form to produce the result you appear to want, with no
intermediate variables, so there is nothing to localize along with an
assurance of only receiving input of the proper form, i.e. that each point
is a list of length three:
If a name for each of the elements of list making up each point is needed,
here is how to do that (notice that it is still not neccessary to have any
intermediate values to do this):
Mathematica evaluation only take place when the input signature matches,
working a bit like an implied replacement rule. The more specific the input
signatures are used first. For example:
In[3]:=f[1]:=Print["x is one"];f[x_Integer]:=Print["x is an integer"];
x is an integer
x is one
> This ReplaceAll gives the numeric answer I want:
> plane3[{x1,y1,z1},{x2,y2,z2},{x3,y3,z3}] /.
> {x1 -> 1, y1 -> 0, z1 -> 0,
> x2 -> 0, y2 -> 1, z2 -> 0,
> x3 -> 0, y3 -> 0, z3 -> 1}
> {1,1,1}
This works because plane3 returns a symbolic result containing symbols, all
of which have a replacement value.
> This ReplaceAll does not work at all:
> plane3[q1,q2,q3]/.{q1->{1,0,0},q2->{0,1,0},q3->{0,0,1}}
During evaluation of In[4]:= Set::shape: Lists {x1$456,y1$456,z1$456} and q1
are not the same shape. >>
During evaluation of In[4]:= Set::shape: Lists {x2$456,y2$456,z2$456} and q2
are not the same shape. >>
During evaluation of In[4]:= Set::shape: Lists {x3$456,y3$456,z3$456} and q3
are not the same shape. >>
(remaining output omitted)
The example plane3 does not require the input to have the proper form, i.e.
that each point must be a list of length three. As a result the plane3
example attempts access the elements which are not lists as lists when
evaluated in ths case. The example plane3s input pattern does not match so
that input remains unchanged when evaluated:
However after ReplaceAll (/.) changes plane3s[q1,q2,q3] into
plane3s[{1,0,0},{0,1,0},{0,0,1}] the result does match the input pattern for
plane3s when evaluated afterwards:
> This ReplaceAll gives a symbolic answer that I do not want:
> q1={x1,y1,z1}; q2={x2,y2,z2}; q3={x3,y3,z3};
> plane3[q1,q2,q3]/.{q1->{1,0,0},q2->{0,1,0},q3->{0,0,1}}
Both the plane3 and plane3s examples are evaluated before the replacement
occurs and therefore both return symbolic results which no longer contain
the symbols q1, q2 or q3. In other words, it is late to make the replacement
after the evaluation has already occured. By making sure the replacement
occurs first, the result is as expected:
Even this works since all the replacements occur first:
> What is the rule? I note in ReplaceAll x/.{{x->1},{x->3},{x->7}},
> but I don't want one at a time.
Using With[] avoids creating symbols in Global`* and performs the
replacement before any calculations occur:
> Thank you for any tips.
> Steve Gray
To create a package to handle Planes, Spheres, Cones, Cylinders and other 3D
objects you would not evaluate any of those objects as they appear.
Intersection, Draw3D, and their like would do that. Check out the Geometrica
application for Mathematica for hints. And sorry, I have no experience with
Alexander Elkins
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/May/msg00270.html","timestamp":"2014-04-16T13:14:54Z","content_type":null,"content_length":"30381","record_id":"<urn:uuid:0eb960a0-17e8-41a5-bc08-d3bd8cfc0ad5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Marlborough, MA Geometry Tutor
Find a Marlborough, MA Geometry Tutor
...I am certified to teach high school mathematics. Linear Algebra deals with vector spaces including eigenvectors, linear transformations, matrices and their identities, and solving systems of
linear equations using row reduction, gause-jordan elimination and using the inverse matrix to solve systems of equations. Linear algebra also deals with determinants and its many useful
38 Subjects: including geometry, reading, calculus, English
...Your child can have "math confidence", be more independent in doing their homework and get better grades by getting her/him support for Algebra 2. NOW IS THE TIME - Algebra 1 introduced many
topics and skills which will be PRACTICED A LOT in Algebra 2. If your child has come this far but still ...
9 Subjects: including geometry, algebra 1, algebra 2, SAT math
...I want all students to feel confident in math. Even though I was a math major in college I will only tutor math from 5th grade up to Geometry or Algebra 2 in high school. Although I know I
could do more complicated math, middle school math and algebra are what I love to tutor in.
6 Subjects: including geometry, algebra 1, elementary math, study skills
...I have worked mostly with college level introductory courses and high school students. I have worked with many students of different academic levels from elementary to college students.
Whether you want to solidify your knowledge and get ahead or get a fresh perspective if your are struggling, I am confident I can help you.
19 Subjects: including geometry, Spanish, chemistry, calculus
...I am now a Ph.D. student at Boston College, where I continue to teach classes and work with students one-on-one. I am a patient and encouraging teacher, and am used to helping students who are
struggling in a subject that, besides being inherently difficult, does not come naturally to them. My ...
9 Subjects: including geometry, calculus, physics, algebra 1
Related Marlborough, MA Tutors
Marlborough, MA Accounting Tutors
Marlborough, MA ACT Tutors
Marlborough, MA Algebra Tutors
Marlborough, MA Algebra 2 Tutors
Marlborough, MA Calculus Tutors
Marlborough, MA Geometry Tutors
Marlborough, MA Math Tutors
Marlborough, MA Prealgebra Tutors
Marlborough, MA Precalculus Tutors
Marlborough, MA SAT Tutors
Marlborough, MA SAT Math Tutors
Marlborough, MA Science Tutors
Marlborough, MA Statistics Tutors
Marlborough, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Marlborough_MA_Geometry_tutors.php","timestamp":"2014-04-18T06:15:19Z","content_type":null,"content_length":"24303","record_id":"<urn:uuid:ef170a09-68e0-4d55-9478-60260be5184c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
References for some analogs of the Picard group.
up vote 3 down vote favorite
Let $X$ be a compact complex manifold. By definition, $Pic(X)={\rm H^1}(X,\mathcal{O}^\times)$. We know a lot about this group. What is known about the groups ${\rm H^n}(X,\mathcal{O}^\times)$ for $n
\ge 2$?
A bit more specialized question. It is well known that for a nonsingular projective complex variety $X$ the natural map $${\rm H^1}(X,\mathcal{O}^\times)\to{\rm H^1}(X,\mathcal{M}^\times)$$ is
trivial. What is known about the kernel of the same map for $n=2$ or $n=3$? (Here $\mathcal{M}^\times$ is the sheaf of nonzero meromorphic functions, and the topology is the strong one).
1 $H^2(X,\mathcal{O}^{\times})$ is often called the (cohomological) Brauer group. There is a vast literature on it. – Daniel Loughran May 11 '11 at 8:55
Thank you. But I would like to have some examples at hand. And, what about $n=3$? – Alex Gavrilov May 11 '11 at 9:06
Indeen, after little googling I found some papers about this. Though, in the most of them the cohomology is etale. – Alex Gavrilov May 11 '11 at 9:38
And, apparently, all they care about is the torsion. – Alex Gavrilov May 11 '11 at 10:22
@Alex I am no expert, but I believe if you are looking at compact complex manifolds as you originally stated then $H^i(X,\mathcal{O}^{\times}) = H^i(X,\mathbb{G}_m)$, since the complex topology is
(morally) as good as the étale topology. However with arbitrary varieties it is better to work with $H_{ét}^i(X,\mathbb{G}_m)$ than $H^i(X,\mathcal{O}^{\times})$. – Daniel Loughran May 11 '11 at
show 5 more comments
2 Answers
active oldest votes
First of all, it probably depends on how you define $H^1(X, \mathcal{O}^\times)$. I don't see any reason why derived functor cohomology should agree here with Cech cohomology.
I think that $H^i(X, \mathcal{O}^\times)$ is a functor of order $i+1$ in the sense of Mumford "Abelian Varieties" (2.6, Remark preceding the proof of the theorem of the cube), at least
up vote 2 for complex projective varieties. That is, there is a higher analogue of the theorem of the cube for $H^i(X, \mathcal{O}^\times)$. For this, we look at the exponential sequence as in the
down vote aforementioned Remark.
To clarify the question: $H^n$ there is the Cech cohomology group (in the strong topology). Thank you for the idea, but if there exists an analog of the theorem of the cube, I would
prefer to read about it. To get it for myself may be not so easy. – Alex Gavrilov May 11 '11 at 9:32
2 By a theorem of Godemont, on a Hausdorff paracompact space, Cech and derived cohomology always coincide. Assuming that "strong topology" is the analytic topology, that means you don't
have to worry about this issue. See mathoverflow.net/questions/19312/… for further discussion. – David Speyer May 25 '11 at 18:42
Yes, I meant the analytic topology, of course. Actually, I was pretty sure that these cohomology coincide, though did not know where to look for this. Thanks. But, what I am looking
for are not some functorial properties but nontrivial results PUBLISHED somewhere. To read it! – Alex Gavrilov May 26 '11 at 8:36
add comment
Here is a reference: Grothendieck's three exposés in Dix Exposés sur la Cohomologie des Schémas (and the references therein). One can find there e.g. computation of $H^i_{ét}({\rm Spec}
\text{ } \mathcal{O}_K, \mathbb{G}_m)$ for spectra of rings of integers in number fields.
MR0244269 (39 #5586a) Grothendieck, Alexander Le groupe de Brauer. I. Algèbres d'Azumaya et interprétations diverses. (French) 1968 Dix Exposés sur la Cohomologie des Schémas pp. 46–66
North-Holland, Amsterdam; Masson, Paris, 14.55
up vote 0 MR0244270 (39 #5586b) Grothendieck, Alexander Le groupe de Brauer. II. Théorie cohomologique. (French) 1968 Dix Exposés sur la Cohomologie des Schémas pp. 67–87 North-Holland,
down vote Amsterdam; Masson, Paris, 14.55
MR0244271 (39 #5586c) Grothendieck, Alexander Le groupe de Brauer. III. Exemples et compléments. (French) 1968 Dix Exposés sur la Cohomologie des Schémas pp. 88–188 North-Holland,
Amsterdam; Masson, Paris (Reviewer: J. S. Milne), 14.55
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/64577/references-for-some-analogs-of-the-picard-group","timestamp":"2014-04-19T15:40:18Z","content_type":null,"content_length":"63846","record_id":"<urn:uuid:71b5d968-a818-4609-b12d-b2236bfa3e38>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Suppose you divide a polynomial by a binomial. How do you know if the binomial is a factor of the polynomial? Create a sample problem that has a binomial which IS a factor of the polynomial being
divided, and another problem that has a binomial which is NOT a factor of the polynomial being divided.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
one way is by using the factor theorem if f(x) is divisible by (x-a) then f(a) = 0 an example would be x^3 - 2x^2 + 4x - 8 test to see if this is divisible by (x-2) f(2) = 2^3 - 2(2)^2 + 4(2) - 8
= 0 so by factor theorem it is a factor
Best Response
You've already chosen the best response.
lets see if x+3 is a factor of the above polynomial f(-3) = -3^3 -2(-3)^2 +4(-3) - 8 = -27 -18 - 12 - 8 = -65 so x+3 is not a factor by another theorem (the remainder theorem) remainder = -65 for
the division The factor theorem is a special case of remainder theorem
Best Response
You've already chosen the best response.
Interesting, I see now. This is mine that I just came up with, 2x^4 - 9x^3 +21x^2 - 26x + 12 by 2x - 3.
Best Response
You've already chosen the best response.
Thanks alot.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4fbffbf8e4b0964abc827cea","timestamp":"2014-04-18T00:36:27Z","content_type":null,"content_length":"35355","record_id":"<urn:uuid:cff55093-796f-46e5-bd7f-505ab08860db>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Answered by
Jason T.
East Aurora, NY
Gen is interested in seeing how the money grows. Her mom suggests that she takes the money in her piggy bank and deposits it in a local bank paying 8.5% compounded quarterly. IF she finds $700 in...
Barbara from Sumter, SC
2 Answers | 0 Votes
Latest answer by
Parviz F.
Woodland Hills, CA
you have to use distributive property
Latest answer by
Beth L.
Deland, FL
Latest answer by
Torben R.
Phoenix, AZ
SOLVE THIS LINER EQUATION: 3M-11=9M+5
Latest answer by
Ellen S.
Pittsburgh, PA
Need some help is 12:7 the same as 7:12 in a ratio and y is so and if its not.
Latest answer by
Andre W.
New Wilmington, PA
Let a k ≥ 1. show that for any set of n measurements, the fraction included in the interval y¯ − ks to y¯ + ks is at least (1 - 1/k²). hint: s² = 1/(n-1)[ ∑(y...
Top voted answer by
Parviz F.
Woodland Hills, CA
find three consecutive numbers such that the sum of the first integer, half the second integer, and four times the third integer is -30
Answered by
Andre W.
New Wilmington, PA
What math subject comes after Complex Analysis for physics and electrical engineering majors?
Sun from Los Angeles, CA
1 Answer | 0 Votes
Latest answer by
Ralph L.
Chicago, IL
Solve it for me pls (x + h)^2 + (x + h)-28 For x=6 and h=1
Latest answer by
Beth L.
Deland, FL
Simplify | - 4^2 +19| - | -7^2 + 5(10) | =
Latest answer by
Andre W.
New Wilmington, PA
Find the general solution of x'=(3, 2, -2, -2)x. (This is a matrix, 3 and 2 on the left, -2 and -2 on the right.) Answer: x=c1(1, 2)e^-t+c2(2, 1)e^2t After c1 and...
Latest answer by
Parviz F.
Woodland Hills, CA
Simplify : 2 2 1-4 + 19|-| (-7) + 5(10)| ...
Latest answer by
Stephanie L.
Saint Augustine, FL
Simplify the given expression: 2364 0 = ----------------------------------------------------------------------------------------------- ...
Latest answer by
Andre W.
New Wilmington, PA
Latest answer by
John H.
Potomac, MD
would i use the formula r= d/1-dn? which would be .075/1-.075(60/360) = .0625 or 6.25%
Top voted answer by
Lindsey B.
Brandon, FL
I am new to college and I haven't a clue about algebra. I get so confused with what to do.
Latest answer by
Priyanka B.
Evanston, IL
Demonstrate the process with an example please. When finding the greatest common factor of a polynomial, can it ever be larger than the smallest coefficient?
Top voted answer by
Brad M.
Blacksburg, VA
algebra 1 is very hard for me and my test is tomm
Latest answer by
Sarah M.
Indianapolis, IN
This is a question about my math homework! Please help! quiz tomorrow!
|
{"url":"http://www.wyzant.com/resources/answers/math?f=votes&pagesize=20&pagenum=28","timestamp":"2014-04-17T08:03:30Z","content_type":null,"content_length":"64417","record_id":"<urn:uuid:64d74849-ea61-4300-843d-2078d4aa3516>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Norcross, GA Algebra 2 Tutor
Find a Norcross, GA Algebra 2 Tutor
...I also taught in the college environment for over 10 years and I am currently teaching Math. I have tutored middle and high school math for 20+ years. I enjoy working with the students and
receive many rewards when I see their successes.
20 Subjects: including algebra 2, calculus, geometry, algebra 1
...Taught Prealgebra concepts as a GMAT instructor for three years. I love helping students understand Prealgebra! Tutored on Precalculus topics during high school and college.
28 Subjects: including algebra 2, physics, calculus, economics
...More importantly, I know how to make learning fun and easy. I have a Master's degree in Business Administration (MBA). I am also a published author. I have worked as a tutor and teacher, but
more importantly, I know how to make learning fun and easy.
29 Subjects: including algebra 2, reading, GED, English
...Also, as a result of this exposure, I have a good understanding of some language systems and can hopefully target the source of problem areas more easily. I enjoy working with both children
and adults, and have experience with both. I look forward to hearing from you and supporting your efforts in achieving success in Economics, Math or ESL courses!
14 Subjects: including algebra 2, Spanish, geometry, statistics
...My Ph.D thesis used linear algebra to approximate the solutions to a partial differential equation. I have also taught the class a number of times. I have taught a probability course for 2
years at Emory and Henry College.
20 Subjects: including algebra 2, calculus, statistics, geometry
Related Norcross, GA Tutors
Norcross, GA Accounting Tutors
Norcross, GA ACT Tutors
Norcross, GA Algebra Tutors
Norcross, GA Algebra 2 Tutors
Norcross, GA Calculus Tutors
Norcross, GA Geometry Tutors
Norcross, GA Math Tutors
Norcross, GA Prealgebra Tutors
Norcross, GA Precalculus Tutors
Norcross, GA SAT Tutors
Norcross, GA SAT Math Tutors
Norcross, GA Science Tutors
Norcross, GA Statistics Tutors
Norcross, GA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Berkeley Lake, GA algebra 2 Tutors
Buford, GA algebra 2 Tutors
Chamblee, GA algebra 2 Tutors
Doraville, GA algebra 2 Tutors
Duluth, GA algebra 2 Tutors
Dunwoody, GA algebra 2 Tutors
East Point, GA algebra 2 Tutors
Johns Creek, GA algebra 2 Tutors
Kennesaw algebra 2 Tutors
Lilburn algebra 2 Tutors
Milton, GA algebra 2 Tutors
Roswell, GA algebra 2 Tutors
Snellville algebra 2 Tutors
Suwanee algebra 2 Tutors
Tucker, GA algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Norcross_GA_algebra_2_tutors.php","timestamp":"2014-04-21T15:13:30Z","content_type":null,"content_length":"23818","record_id":"<urn:uuid:fb62907e-bccf-4504-b92f-130e6b9288b1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Abstract Semilinear Evolution Equations with Convex-Power Condensing Operators
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 473876, 9 pages
Research Article
Abstract Semilinear Evolution Equations with Convex-Power Condensing Operators
School of Mathematics, Yangzhou University, Yangzhou 225002, China
Received 14 May 2013; Revised 16 August 2013; Accepted 18 August 2013
Academic Editor: Ji Gao
Copyright © 2013 Lanping Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
By using the techniques of convex-power condensing operators and fixed point theorems, we investigate the existence of mild solutions to nonlocal impulsive semilinear differential equations. Two
examples are also given to illustrate our main results.
1. Introduction
This paper is concerned with the existence of mild solutions for the following impulsive semilinear differential equations with nonlocal conditions where is the infinitesimal generator of strongly
continuous semigroup for in a real Banach space and constitutes an impulsive condition. and are -valued functions to be given later.
As far as we know, the first paper dealing with abstract nonlocal initial value problems for semilinear differential equations is due to [1]. Because nonlocal conditions have better effect in the
applications than the classical initial ones, many authors have studied the following type of semilinear differential equations under various conditions on ,, and :
For instance, Byszewski and Lakshmikantham [2] proved the existence and uniqueness of mild solutions for nonlocal semilinear differential equations when and satisfy Lipschitz type conditions. In [3],
Ntouyas and Tsamatos studied the case with compactness conditions. Byszewski and Akca [4] established the existence of solution to functionaldifferential equation when the semigroup is compact and is
convex and compact on a given ball. Subsequently, Benchohra and Ntouyas [5] discussed the second-order differential equation under compact conditions. The fully nonlinear case was considered by
Aizicovici and McKibben [6], Aizicovici and Lee [7], Aizicovici and Staicu [8], García-Falset [9], Paicu and Vrabie [10], Obukhovski and Zecca [11], and Xue [12, 13].
Recently, the theory of impulsive differential inclusions has become an important object of investigation because of its wide applicability in biology, medicine, mechanics, and control theory and in
more and more fields. Cardinali and Rubbioni [14] studied the multivalued impulsive semilinear differential equation by means of the Hausdorff measure of noncompactness. Liang et al. [15]
investigated the nonlocal impulsive problems under the assumptions that is compact, Lipschitz, and is not compact and not Lipschitz, respectively. All these studies are motivated by the practical
interests of nonlocal impulsive Cauchy problems. For a more detailed bibliography and exposition on this subject, we refer to [14–18].
The present paper is motivated by the following facts. Firstly, the approach used in [9, 12, 13, 19, 20] relies on the assumption that the coefficient of the function about the measure of
noncompactness satisfies a strong inequality, which is difficult to be verified in applications. Secondly, in [21], it seems that authors have considered the inequality restriction on coefficient
function of may be relaxed for impulsive nonlocal differential equations. However, in fact, they only solve the classical initial value problems rather than the nonlocal initial problems . For more
details, one can refer to the proof of Theorem 3.1 in [21] (see the inequalities and in page 5 and the estimations of the measure of noncompactness in page 6 and page 7 of [21]).
Therefore, we will continue to discuss the impulsive nonlocal differential equations under more general assumptions. Throughout this work, we mainly use the property of convex-power condensing
operators and fixed point theorems to obtain the main result (Theorem 10). Indeed, the fixed point theorem about the convex-power condensing operators is an extension for Darbo-Sadovskii’s fixed
point theorem. But the former seems more effective than the latter at times for some problems. For example, in [22] we ever applied the former to study the nonlocal Cauchy problem and obtained more
general and interesting existence results. Based on the results obtained, we discuss the impulsive nonlocal differential equations. Fortunately, applying the techniques of convex-power condensing
operators and fixed point theorems solves the difficulty involved by coefficient restriction that is, the constraint condition for the coefficient function of is unnecessary (see Theorem 10).
Therefore, our results generalize and improve many previous ones in this field, such as [9, 12, 13, 19, 20].
The outline of this paper is as follows. In Section 2, we recall some concepts and facts about the measure of noncompactness, fixed point theorems, and impulsive semilinear differential equations. In
Section 3, we obtain the existence results of (1) when is compact in . In Section 4, we discuss the existence result of (1) when is Lipschitz continuous, while Section 5 contains two illustrating
2. Preliminaries
Let be a real Banach space, we introduce the Hausdorff measure of noncompactness defined on each bounded subset of by
Now we recall some basic properties of the Hausdorff measure of noncompactness.
Lemma 1. For all bounded subsets ,, and of , the following properties are satisfied:(1) is precompact if and only if ;(2), where and mean the closure and convex hull of , respectively;(3) when ;(4);
(5), for any ;(6), where ;(7)if is a decreasing sequence of nonempty bounded closed subsets of and , then is nonempty and compact in .
The map is said to be -condensing if for every bounded and not relatively compact , we have (see [23]).
Lemma 2 (see[9]: Darbo-Sadovskii). If is bounded closed and convex, the continuous map is -condensing, then the map has at least one fixed point in .
In the sequel, we will continue to generalize the definition of condensing operator. First of all, we give some notations.
Let be bounded closed and convex, the map , and for every , set where means the closure of the convex hull.
Now we give the definition of a kind of new operator.
Definition 3. Let be bounded closed and convex, the map is said to be -convex-power condensing if there exist , and for every bounded and not relatively compact , we have From this definition, if ,
one obtains as relatively compact.
Subsequently, we give the fixed point theorem about the convex-power condensing operator.
Lemma 4 (see [23]). If is bounded closed and convex, the continuous map is -convex-power condensing, then the map has at least one fixed point in .
Throughout this paper, let be a real Banach space. We denote by the Banach space of all continuous functions from to with the norm sup and by the Banach space of all -valued Bochner integrable
functions defined on with the norm . Let is a function from intosuch that is continuous atand the left continuous at and the right limitexists for. It is easy to check that is a Banach space with the
norm and . Moreover, we denote by the Hausdorff measure of noncompactness of , denote by the Hausdorff measure of noncompactness of and denote by the Hausdorff measure of noncompactness of .
Throughout this work, we suppose the following
The linear operator generates an equicontinuous -semigroup . Hence, there exists a positive number such that .
For further information about the theory of semigroup of operators, we may refer to some classic books, such as [24–26].
To discuss the problem (1), we also need the following lemma.
Lemma 5. If is bounded, then one has where .
Lemma 6 (see [27]). If is bounded, then for all , Furthermore, if is equicontinuous on , then is continuous on and
Since -semigroup is said to be equicontinuous, the following lemma is easily checked.
Lemma 7. If the semigroup is equicontinuous and , then the set for a.e. is equicontinuous for .
Definition 8. A function is said to be a mild solution of the nonlocal problem (1), if it satisfies
In addition, let be a finite positive constant, and set and .
3. Is Compact
In this section, we state and prove the existence theorems for the nonlocal impulsive problem (1). First, we give the following hypotheses:
(1) is a Carathéodory function; that is, for all is measurable and for a.e. is continuous;(2) for finite positive constant , there exists a function such that for a.e. and ;(3) there exists a
function such that for a.e. and every bounded subset ;
() is a continuous and compact mapping; furthermore, there exists a positive number such that , for any ;
() is a continuous and compact mapping for every ;
() .
Remark 9. The mapping is said to be -Carathéodory if the assumption is satisfied.
Theorem 10. If the hypotheses , , ,, and are satisfied, then the nonlocal problem (1) has at least one mild solution on .
To prove the above theorem, we need the following lemma.
Lemma 11. If the condition holds, then for arbitrary bounded set , we have
This proof is quite similar to that of Lemma 3.1 in [20]; we omit it.
Proof of Theorem 10. We consider the operator defined by
It is easy to see that the fixed points of are the mild solutions of nonlocal impulsive semilinear differential equation (1). Subsequently, we shall prove that has a fixed point by using Lemma 4.
We shall first prove that is continuous on . In fact, let be an arbitrary sequence satisfying in . It follows from Definition 8 that According to the continuity of in its second argument, for each ,
we have the following: In addition, and are all continuous for each , and hence, the Lebesgue dominated convergence theorem implies Namely, is continuous on .
Subsequently, we claim that . Actually, by , we obtain for any . Thus, .
Now we demonstrate that is equicontinuous for any . Let ,. Since is compact, is relatively compact; that is, there is a finite family such that for any , there exists some such that On the other
hand, as is equicontinuous at , we can choose such that for each ,, uniformly for , and . By , it can be obtained that there exists such that for each ,, uniformly for . Furthermore, by Lemma 7, we
get that there exists such that for each ,, uniformly for . Thus, there exists such that for each ,, uniformly for . Therefore, is equicontinuous at .
Similarly, we can conclude that is also equicontinuous at . Thus, is equicontinuous on .
Set . It is obvious that is equicontinuous on and maps into itself.
Next, we shall prove that is a convex-power condensing operator. Take ; by the definition of convex-power condensing operator, we shall show that there exists a positive integral such that if is not
relatively compact. In fact, by using the conditions and , we get from Lemma 11 that Since , there exists a continuous function such that for any , Then where . Hence, Thus, and hence, by the method
of mathematical induction, for any positive integer and , we obtain Therefore, for any positive integer , we have Since , it follows from the Stirling Formula (see [28]) that and hence, there exists
sufficiently large positive integer such that which shows that is a convex-power condensing operator. From Lemma 4, we get that has at least one fixed point in ; that is, (1) has at least one mild
solution . This completes the proof.
Remark 12. The technique of constructing convex-power condensing operator plays a key role in the proof of Theorem 10, which enables us to get rid of the strict inequality restriction on the
coefficient function of . However, in many previous articles, such as [9, 12, 13, 19, 20], the authors had to impose a strong inequality condition on the integrable function , as they used
Darbo-Sadovskii’s fixed point theorem only. Thus, our result extends and complements those obtained in [9, 12, 13, 19, 20] and has more broad applications.
Remark 13. If we use the following assumption instead of :
there exists a constant such that for a.e. and every bounded subset ,
we may use the same method to obtain for any . Thus, there exists a large enough positive integral such that namely, Therefore, we can get the following consequence.
Theorem 14. If the hypotheses , , ,, and are satisfied, then the nonlocal problem (1) has at least one mild solution on .
4. Is Lipschitz Continuous
In this section, by applying the proof of Theorem 10 and Darbo-Sadovskii’s fixed point theorem, we give the existence of mild solutions of the problem (1) when the nonlocal condition is Lipscitz
continuous in .
We give the following hypotheses:
there exists a constant such that
is Lipschitz continuous with Lipschitz constant , for .
Theorem 15. If the hypotheses , , , , and are satisfied, then the nonlocal problem (1) has at least one mild solution on provided that .
Proof of Theorem 15. Given , let's first consider the following Cauchy initial problem: From the proof of Theorem 10, we can easily see that there exists at least one mild solution to (38). Define by
that is the mild solution to (38). Then Now, we will show that is -condensing on . According to Lemma 11, for any bounded subset , we deduce which implies that In addition, since , it follows that
the mapping is a -condensing operator on . In view of Lemma 2, the mapping has at least one fixed point in , which produces a mild solution for the nonlocal impulsive problem (1).
Remark 16. Similarly, one can show that the conclusion of Theorem 15 remains valid provided that hypothesis is replaced by condition .
Remark 17. In Theorem 15, we do not assume the compactness of nonlocal item . Under the Lipschitz assumption, we make full use of the conclusion of Theorem 10, the properties of noncompact measure
and the technique of fixed point to deal with the solution operator .
Remark 18. Recently, the existence results for fractional differential equations have been widely studied in many papers. For more details on this theory one can refer to [29, 30] and references
therein. It should be pointed out that the techniques and ideas in this paper can also be used to study fractional equations. In the future, we will also try to investigate to nonlocal
controllability of impulsive differential equations by applying the similar techniques, methods, and compactness conditions. Further discussions on this topic will be in our consequent papers.
5. Examples
In this section, we shall give two examples to illustrate Theorems 10 and 15.
Example 1. Consider the following semilinear parabolic system: where is a bounded domain in with smooth boundary , is strongly elliptic, , and .
Let and define the operator by Then the operator is an infinitesimal generator of an equicontinuous -semigroup on (see [26]).
Suppose that the function satisfies the following conditions:(i)the Carathodory condition, that is, , is a continuous function about for a.e. is measurable about for each fixed ;(ii) for all with ,
where satisfies , uniformly in ;(iii) for all , where and .We assume the following.(1) is defined by Moreover, for given , there exist two integrable functions such that and for a.e. and every
bounded subset ;(2) is defined by From Theorem 4.2 in [31], we get directly that is well defined and is a completely continuous operator by the above conditions about the function .(3) is a
continuous and compact function for each defined by
Let us observe that the problem (42) may be reformulated as the abstract problem (1) under the above conditions. By using Theorem 10, the problem (42) has at least one mild solution ; provided that
the hypothesis holds.
Example 2. Consider the following partial differential system: where is a bounded domain in with smooth boundary , and and both are given real numbers for .
Let , and define the operator by
As is known to all, the operator is an infinitesimal generator of the semigroup defined by for each . Here, is equicontinuous but is not compact.
We now suppose the following.(1) is defined by Moreover, for given , there exist two integrable functions such that and for a.e. , and every bounded subset ;(2) is defined by Then is Lipschitz
continuous with constant ; that is, the assumption is satisfied.(3) is a continuous function for each , defined by Here we take ,,,,. Then is Lipschitz continuous with constant , ; that is, the
assumption is satisfied.
Let us observe that (47) may be rewritten as the abstract problem (1) under the above conditions. If the following inequality holds, then according to Theorem 15, the impulsive problem (47) has at
least one mild solution in .
The authors would like to thank the referees for their careful reading and their valuable comments and suggestions to improve their results. This research is supported by the Natural Science
Foundation of China (11201410 and 11271316), the Natural Science Foundation of Jiangsu Province (BK2012260), and the Natural Science Foundation of Jiangsu Education Committee (10KJB110012).
1. L. Byszewski, “Theorems about the existence and uniqueness of solutions of a semilinear evolution nonlocal Cauchy problem,” Journal of Mathematical Analysis and Applications, vol. 162, no. 2, pp.
494–505, 1991. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
2. L. Byszewski and V. Lakshmikantham, “Theorem about the existence and uniqueness of a solution of a nonlocal abstract Cauchy problem in a Banach space,” Applicable Analysis, vol. 40, no. 1, pp.
11–19, 1991. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
3. S. K. Ntouyas and P. Tsamatos, “Global existence for semilinear evolution equations with nonlocal conditions,” Journal of Mathematical Analysis and Applications, vol. 210, no. 2, pp. 679–687,
1997. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
4. L. Byszewski and H. Akca, “Existence of solutions of a semilinear functional-differential evolution nonlocal problem,” Nonlinear Analysis, vol. 34, no. 1, pp. 65–72, 1998. View at Publisher ·
View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
5. M. Benchohra and S. K. Ntouyas, “Nonlocal Cauchy problems for neutral functional differential and integrodifferential inclusions in Banach spaces,” Journal of Mathematical Analysis and
Applications, vol. 258, no. 2, pp. 573–590, 2001. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
6. S. Aizicovici and M. McKibben, “Existence results for a class of abstract nonlocal Cauchy problems,” Nonlinear Analysis, vol. 39, no. 5, pp. 649–668, 2000. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
7. S. Aizicovici and H. Lee, “Existence results for nonautonomous evolution equations with nonlocal initial conditions,” Communications in Applied Analysis, vol. 11, no. 2, pp. 285–297, 2007.
8. S. Aizicovici and V. Staicu, “Multivalued evolution equations with nonlocal initial conditions in Banach spaces,” Nonlinear Differential Equations and Applications NoDEA, vol. 14, no. 3-4, pp.
361–376, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
9. J. García-Falset, “Existence results and asymptotic behavior for nonlocal abstract Cauchy problems,” Journal of Mathematical Analysis and Applications, vol. 338, no. 1, pp. 639–652, 2008. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
10. A. Paicu and I. I. Vrabie, “A class of nonlinear evolution equations subjected to nonlocal initial conditions,” Nonlinear Analysis, vol. 72, no. 11, pp. 4091–4100, 2010. View at Publisher · View
at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
11. V. Obukhovski and P. Zecca, “Controllability for systems governed by semilinear differential inclusions in a Banach space with a noncompact semigroup,” Nonlinear Analysis, vol. 70, no. 9, pp.
3424–3436, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
12. X. Xue, “Nonlocal nonlinear differential equations with a measure of noncompactness in Banach spaces,” Nonlinear Analysis, vol. 70, no. 7, pp. 2593–2601, 2009. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
13. X. Xue, “Semilinear nonlocal problems without the assumptions of compactness in Banach spaces,” Analysis and Applications, vol. 8, no. 2, pp. 211–225, 2010. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
14. T. Cardinali and P. Rubbioni, “Impulsive semilinear differential inclusions: topological structure of the solution set and solutions on non-compact domains,” Nonlinear Analysis, vol. 69, no. 1,
pp. 73–84, 2008. View at Publisher · View at Google Scholar · View at MathSciNet
15. J. Liang, J. H. Liu, and T. J. Xiao, “Nonlocal impulsive problems for nonlinear differential equations in Banach spaces,” Mathematical and Computer Modelling, vol. 49, no. 3-4, pp. 798–804, 2009.
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
16. N. Abada, M. Benchohra, and H. Hammouche, “Existence and controllability results for nondensely defined impulsive semilinear functional differential inclusions,” Journal of Differential Equations
, vol. 246, no. 10, pp. 3834–3863, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
17. N. U. Ahmed, “Optimal feedback control for impulsive systems on the space of finitely additive measures,” Publicationes Mathematicae Debrecen, vol. 70, no. 3-4, pp. 371–393, 2007. View at
Zentralblatt MATH · View at MathSciNet
18. Z. Fan and G. Li, “Existence results for semilinear differential equations with nonlocal and impulsive conditions,” Journal of Functional Analysis, vol. 258, no. 5, pp. 1709–1727, 2010. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
19. T. Cardinali and P. Rubbioni, “On the existence of mild solutions of semilinear evolution differential inclusions,” Journal of Mathematical Analysis and Applications, vol. 308, no. 2, pp.
620–635, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
20. L. Zhu and G. Li, “On a nonlocal problem for semilinear differential equations with upper semicontinuous nonlinearities in general Banach spaces,” Journal of Mathematical Analysis and
Applications, vol. 341, no. 1, pp. 660–675, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
21. B. Ahmad, K. Malar, and K. Karthikeyan, “A study of nonlocal problems of impulsive integrodifferential equations with measure of noncompactness,” Advances in Difference Equations, vol. 2013,
article 205, pp. 1–11, 2013. View at Publisher · View at Google Scholar · View at MathSciNet
22. L. Zhu and G. Li, “Existence results of semilinear differential equations with nonlocal initial conditions in Banach spaces,” Nonlinear Analysis, vol. 74, no. 15, pp. 5133–5140, 2011. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
23. J. X. Sun and X. Y. Zhang, “A fixed point theorem for convex-power condensing operators and its applications to abstract semilinear evolution equations,” Acta Mathematica Sinica, vol. 48, no. 3,
pp. 439–446, 2005. View at MathSciNet
24. K. J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, vol. 194, Springer, New York, NY, USA, 2000. View at MathSciNet
25. K. J. Engel and R. Nagel, A Short Course on Operator Semigroups, Universitext, Springer, New York, NY, USA, 2006. View at MathSciNet
26. A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer, Berlin, Germany, 1983. View at Publisher · View at Google Scholar · View at MathSciNet
27. J. Banaś and K. Goebel, Measures of Noncompactness in Banach Spaces, vol. 60 of Lecture Notes in Pure and Applied Mathematics, Marcel Dekker, New York, NY, USA, 1980. View at MathSciNet
28. L. Liu, F. Guo, C. Wu, and Y. Wu, “Existence theorems of global solutions for nonlinear Volterra type integral equations in Banach spaces,” Journal of Mathematical Analysis and Applications, vol.
309, no. 2, pp. 638–649, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
29. L. Hu, Y. Ren, and R. Sakthivel, “Existence and uniqueness of mild solutions for semilinear integro-differential equations of fractional order with nonlocal initial conditions and delays,”
Semigroup Forum, vol. 79, no. 3, pp. 507–514, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
30. Y. Ren, Y. Qin, and R. Sakthivel, “Existence results for fractional order semilinear integro-differential evolution equations with infinite delay,” Integral Equations and Operator Theory, vol.
67, no. 1, pp. 33–49, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
31. R. H. Martin, Jr., Nonlinear Operators and Differential Equations in Banach Spaces, Wiley, New York, NY, USA, 1976. View at MathSciNet
|
{"url":"http://www.hindawi.com/journals/jfs/2013/473876/","timestamp":"2014-04-17T20:05:53Z","content_type":null,"content_length":"781254","record_id":"<urn:uuid:0aa306d5-d822-440d-83b9-e125b0165216>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finite Difference Interpretation
Next: MDWD Networks as Multi-step Up: Multidimensional Wave Digital Filters Previous: MDKC and MDWD Network
Finite Difference Interpretation
It should be clear that a MDWD network corresponding to a particular MDKC (and thus to a given set of PDEs) is no more than a particular type of finite difference method, and can be analyzed as such.
We will do so here for the case of the (1+1)D transmission line, in order to compare the schemes that arise from the WD approach to the simple centered difference schemes which will be introduced in
the next chapter in the waveguide context, and which can also be put into a scattering form.
Stefan Bilbao 2002-01-22
|
{"url":"https://ccrma.stanford.edu/~bilbao/master/node74.html","timestamp":"2014-04-16T10:58:05Z","content_type":null,"content_length":"3107","record_id":"<urn:uuid:533aa0e0-58ea-463b-bdca-e3dd50fea2bb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intermediate Algebra Graphs & Models plus MyMathLab/MyStatLab -- Access Card Package
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/intermediate-algebra-graphs-models-plus/bk/9780321760159","timestamp":"2014-04-17T02:03:49Z","content_type":null,"content_length":"40755","record_id":"<urn:uuid:9adc1fc9-dab3-4a1d-abd0-57cbb34f1b34>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Win at Bejeweled
When I am in a somewhat brainless mood, I often occupy myself by playing Bejeweled on my exo-cortex. Although I don't generally play when I have a lot of brainpower, I have nonetheless come up with a
few useful strategies. The advent of Bejeweled 2 led me to revise my strategies in ways I thought interesting enough to talk about.
First, I will discuss the original Bejeweled. For the few of you who haven't played, a quick overview: it is played on an 8 x 8 board, with each square containing a jewel in one of 7 different
colors. A move is made by swapping two orthogonally adjacent jewels. Moves are only valid if at least one of the jewels, in its new position, forms a row of 3 or more of the same color. After each
move, all rows of 3 or more disappear, the jewels above the empty spaces fall down to replace them, and new random jewels are seeded in from the top to fill out the board. As a result of these
falling jewels, more sets may be formed, which in turn vanish and are replaced, until the board contains no more complete sets; bonus points are scored for such combos. Every so often, the player
"finishes a level" and the board is randomized. Play continues until there are no more legal moves.
There are further details about scoring, but they are not relevant to actually getting very high scores. This is because the game is, at least theoretically, infinitely extensible. It doesn't matter
how many points you score in a given move if you have an infinite supply of moves. In short, the way to win can be summarized as "Don't lose."
So, how, in practice, does one "not lose"? There are some simple heuristics, but they depend on some underlying theory. Let us define "Energy" as the number of available moves for a given board
configuration. A boars with zero Energy is a losing position. A randomly-chosen board configuration is very unlikely to be at zero Energy, however. All moves will change the board configuration,
usually to one with a different Energy, greater or lesser. A randomly chosen move will tend to drive the board towards the median Energy of a random board (about 4?). Hence, a randomly-chosen move
will not generally tend towards a losing board, though a long enough drunkard's walk will eventually reach one. The trick in *choosing* a move is clearly to attempt to maximize the energy of the
resulting board configuratiion. Since there is a random element, this cannot always be done perfectly, but one can still have a strong effect on those odds.
My primary heuristic for choosing moves is "Top Down". Consider two moves, A and B, where A is directly above B on the board. Any given move will create randomizing effects in the space above it, but
not usually in the space below it. (On rare occasions, the newly falling blocks may make combos which "drill down" below the initial move.) Hence, choosing move B will tend to disrupt move A, while
choosing move A has a very high chance of leaving B available as a move. If you always choose the "highest" available move, you are only randomizing parts of the board which are already at zero
Energy, and this will tend to create more Energy in those regions. One only makes moves near the bottom of the board in desperation -- but those moves have the largest randomizing effect, and are
best saved for when the board is perilously close to zero Energy.
There are further elaborations one can make when choosing between moves of roughly equal "depth", but just the Top Down principle is alreeady enough to generate impressive scores. Still, a game of
basic Bejeweled will still eventually have a string of bad luck and end. Bejewled 2 turns out to be less susceptible to this...
Bejeweled 2 is very similar to the original game, adding only two new rules. These additions have a serious effect on the strategy, however! They make the game both more strategically interesting,
and also much less liable to runs of bad luck -- if properly exploited, of course!
The first new rule concerns Power Gems. If you make a row of 4 (or two orthogonal rows of 3 that meet at one gem), then the gem that created that row is *not* removed from the board, but instead
becomes a Power Gem. It retains its color, but has added sparklies to show its potent nature. The next time that gem is part of a row, it explodes, destroying all 8 adjacent gems. (If the Power Gem
is at an edge or corner, naturally there are fewer adjacencies.)
The second rule is about Hyper Cubes. When you make a row of 5, the moving gem again is not removed, but becomes a Hyper Cube. Hyper Cubes have no color of their own (if you manage to make a row of 3
of them, nothing happens). When one is swapped with a colored gem, *all* gems of that color are destroyed. (If any of these are Power Gems, they explode at this time.)
Both Power Gems and Hyper Cubes are retained when the board position is randomized between levels. (Power Gems may end up as a new color, but this is irrelevant, since all the other gems are
Hyper Cubes are the key point in Bejeweled 2 strategy. A HC always represents at least one available move and more typically a choice of 3 or 4 moves. No matter how unlucky the board position
becomes, having an available HC means that you haven't yet lost. Moreover, you are usually able to use a HC to greatly increase the randomness of the board -- a vitally important maneuver when the
board nears zero Energy. The Hyper Cube is effectively a means of storing up Energy against future needs. This makes the skilled player much less vulnerable to the sort of string of bad luck that
ends a typical Bejewled 1 game.
So, two new heuristics become apparent. Never use a Hyper Cube when you have any other move available (save them for low Energy states). And, of course, Make Hyper Cubes. How much effort you should
go to in making those Hyper Cubes is not immediately obvious. Since HCs cannot be used in any 'standard' moves, too many of them can clog the board (though this is an easy-to-correct problem).
Personally, whenever I have less than 2 HCs, I actively try to create more, often violating the Top Down heuristic to do so.
So what about Power Gems? Though they score well, we must remember that individual move scores are not important -- only "not losing" is important. In that light, PGs are rather unattractive. When a
PG is created, it is very likely to have no neighboring gems of its own color. Though it has a "high payoff" when triggered, that event is somewhat difficult to bring about. And while this explosion
is a "high payoff" in terms of points, it is typically neutral in terms of its effect on Energy. Worst of all, a Power Gem explosion can cause an extremely valuable Hyper Cube to be wasted!
Given all that, we have another new heuristic: Avoid creating Power Gems. If any *are* created (as will happen occasionally by chance), it's worth some special effort to "prematurely detonate" them,
before they get close to any Hyper Cubes.
I'm currently on my second-ever game of Bejeweled 2. The first one, while I was still working out the implications of the new rules, ended around level 25. This game is at level 33, and shows no
signs of ending any time soon.
|
{"url":"http://alexx-kay.livejournal.com/131629.html","timestamp":"2014-04-23T15:00:34Z","content_type":null,"content_length":"57796","record_id":"<urn:uuid:768ece3c-f11a-40f9-b1df-c08f6873f0c9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with Real Number System?
November 14th 2007, 04:02 PM #1
Junior Member
Oct 2007
If anyone could help me explain this, it would be great. My teacher explained it to me, but he kinda rushed up so I didn't really get a whole lot.
the real numbers is the set that includes the irrational, rationals, integers, whole numbers and natural numbers. that is, all those sets are subsets of the reals.
so the real numbers are made up of rational and irrational numbers. the irrationals are by themselves. the rationals though include other sets. the integers are a subset of the irrationals, the
whole numbers are a subset of the integers (they are pretty much the same thing. i guess we wouldn't call 0 a whole number which is the reason for the distinction), the natural numbers are a
subset of the integers (natural numbers are positive integers). so that's it. the diagram is just showing how the different sets of numbers relate to each other, i.e. which set contains which
November 14th 2007, 04:07 PM #2
|
{"url":"http://mathhelpforum.com/algebra/22768-help-real-number-system.html","timestamp":"2014-04-21T06:50:53Z","content_type":null,"content_length":"33613","record_id":"<urn:uuid:665c5a46-f26c-4615-a421-a262f60b5b92>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Mathematics at the University of Georgia
Applied Math Group
The Applied Mathematics Group in the Department of Mathematics at the University of Georgia has scientific interests in Mathematical Biology, Mathematical Ecology, Computational Mathematics,
Approximation Theory, Dynamical Systems, Signal Processing, Control Theory, and Mathematical Finance. The research topics its members are currently investigating include epidemiology and systems
biology of malaria, stability in population dynamics models, systems ecology, network analysis, microbial community analysis, compressive sensing, video restoration, stochastic filtering, and
multivariate spline methods for applications such as numerical solutions of partial differential equations, scattered data interpolation and fitting, and image analysis.
Faculty, PostDocs and Research Interests
• Weidong Chen, Postdoctoral Associate, Part-time Instructor, Ph.D., Kansas State University, 2007. Inverse and ill-posed problems, and regularization methods in applied mathematics and signal
• Simon Foucart, Assistant Professor, Ph.D., University of Cambridge, 2006. Approximation Theory, Compressive Sensing, Computational Mathematics, Bioinformatics.
• Juan B. Gutierrez, Assistant Professor, Ph.D., Florida State University, 2009. Mathematical Biology
• Caner Kazanci, Assistant Professor, Ph.D., Carnegie Mellon University, 2005, Mathematical Biology, Analysis of Biochemical Pathways, Numerical Analysis, Dynamical Systems, Numerical Solutions of
Partial Differential Equations.
• Ming-Jun Lai, Professor, Ph.D., Texas A&M, 1989, Numerical Analysis, Multivariate Splines, Approximation Theory, Computer Aided Geometric Design, Wavelet Analysis, Numerical Solutions of Partial
Differential Equations.
• Alexander Petukhov, Associate Professor, Ph.D., Moscow State University, 1988, Numerical Analysis.
• Qing Zhang, Professor, Ph.D., Brown University, 1988, Applied Probability, Stochastic Optimal Control, Singular Perturbations, Nonlinear Filtering, Manufacturing Systems, Hierarchical Control,
Mathematical Finance.
Weekly Seminars, Spring 2014
Applied Math Seminar : organized by Simon Foucart
All talks are in Room 304, Boyd Graduate Studies on Mondays. 2:30pm. Please see calendar. Applied Math Seminar
Further Information:
The Graduate Bulletin of our department with information for prospective graduate students is available online. If you are interested in graduate studies in (real or complex, classical or modern)
analysis and you would like further information on our group, do not hesitate to contact any of us.
|
{"url":"http://www.math.uga.edu/research/applied.html","timestamp":"2014-04-17T18:54:51Z","content_type":null,"content_length":"18372","record_id":"<urn:uuid:2992e1b2-1058-4df4-9d5a-94942ef5c892>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Carlsbad, CA Geometry Tutor
Find a Carlsbad, CA Geometry Tutor
...I have been awarded the American Vision Award 2009 for Best in Show in Colorado Arts and Writing Scholastics Competition. I believe creativity is not only necessary for painting and drawing,
but it is also crucial in academia. The key is to channel creativity into academia instead of it being a distraction.I grew up in Beijing before I moved to the U.S at the age of 12.
12 Subjects: including geometry, chemistry, biology, Chinese
...I really CARE and get personally involved to be sure my students succeed! Call me so I can work my magic with you for your math grades. TOGETHER WE CAN DO IT!I teach all levels of algebra: per
algebra, algebra1 & 2 , also college algebra.
17 Subjects: including geometry, reading, ESL/ESOL, ASVAB
Hello my name is Sean and I am a graduating student at UCSD in the field of Mathematics. I have taken a number of educational studies courses to better prepare myself for a future in education,
specifically high school math. I have over twenty hours of in-class tutoring experience at Montgomery Middle School.
10 Subjects: including geometry, calculus, algebra 1, algebra 2
Hey there! I am entering my third year at UC San Diego, where I am a 4.0 student studying Political Science and Business. I feel that as a tutored student, I will be able to better assess your
needs and progress because I've been on the other side of the table!
43 Subjects: including geometry, reading, English, chemistry
...I have a 3.89 GPA, in the Honors Scholar Program, president's list, and permanent president's list. I love teaching and explaining almost anything, and it is quite the thrill to engage a
student who was confused and now completely understands because of me. The problem with school is that teach...
29 Subjects: including geometry, reading, English, ASVAB
Related Carlsbad, CA Tutors
Carlsbad, CA Accounting Tutors
Carlsbad, CA ACT Tutors
Carlsbad, CA Algebra Tutors
Carlsbad, CA Algebra 2 Tutors
Carlsbad, CA Calculus Tutors
Carlsbad, CA Geometry Tutors
Carlsbad, CA Math Tutors
Carlsbad, CA Prealgebra Tutors
Carlsbad, CA Precalculus Tutors
Carlsbad, CA SAT Tutors
Carlsbad, CA SAT Math Tutors
Carlsbad, CA Science Tutors
Carlsbad, CA Statistics Tutors
Carlsbad, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Carlsbad_CA_Geometry_tutors.php","timestamp":"2014-04-19T09:49:32Z","content_type":null,"content_length":"24070","record_id":"<urn:uuid:15ca8658-f12e-445d-938d-b937916ad228>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homestead, FL Calculus Tutor
Find a Homestead, FL Calculus Tutor
Proficient in most areas of math such as: pre-algebra, algebra, algebra 2, geometry, precalculus, calculus, trigonometry, and data analysis/functions. Experienced for 11 years in tutoring for the
FCAT, PSAT, SAT, ACT, TACHS, ISEE and GED examinations; specifically, my students have increased their ...
37 Subjects: including calculus, reading, algebra 1, chemistry
...It is challenging and it is hard work, but I am here to help students build a foundation of knowledge that will allow them to learn more easily in the future and appreciate just how fun
learning can be.Passed the first two sequences of Calculus with an A grade. Tutored UF students at the Office ...
24 Subjects: including calculus, English, chemistry, biology
...My students always pass this subject. I always have satisfactory feedback from students who always appreciate me for teaching them this subject in very easy and explanatory way. I received my
Master's degree in Math in fall of 2005 at University of Miami.
23 Subjects: including calculus, chemistry, physics, statistics
...Please feel free to contact me any time.I have taught and tutored algebra for about 5 years. With my patience and positive attitude, my success rate is high. I have taught Algebra I in high
school and at the college level.
18 Subjects: including calculus, chemistry, biochemistry, cooking
I like to teach and explicate with patience the subject that has become a challenge for the student; at the same time I will be learning the exact method to apply to this person and make him or
her catch the route to learn and solve not one but several problems. The goal is to help the student improve in knowledge and in grades as well. The most important is knowledge.
13 Subjects: including calculus, Spanish, algebra 1, Microsoft Excel
Related Homestead, FL Tutors
Homestead, FL Accounting Tutors
Homestead, FL ACT Tutors
Homestead, FL Algebra Tutors
Homestead, FL Algebra 2 Tutors
Homestead, FL Calculus Tutors
Homestead, FL Geometry Tutors
Homestead, FL Math Tutors
Homestead, FL Prealgebra Tutors
Homestead, FL Precalculus Tutors
Homestead, FL SAT Tutors
Homestead, FL SAT Math Tutors
Homestead, FL Science Tutors
Homestead, FL Statistics Tutors
Homestead, FL Trigonometry Tutors
Nearby Cities With calculus Tutor
Coral Gables, FL calculus Tutors
Cutler Bay, FL calculus Tutors
Doral, FL calculus Tutors
Florida City, FL calculus Tutors
Hialeah Gardens, FL calculus Tutors
Hialeah Lakes, FL calculus Tutors
Leisure City, FL calculus Tutors
Miami calculus Tutors
Miami Beach calculus Tutors
Miami Lakes, FL calculus Tutors
Miami Shores, FL calculus Tutors
Opa Locka calculus Tutors
Palmetto Bay, FL calculus Tutors
South Miami, FL calculus Tutors
West Miami, FL calculus Tutors
|
{"url":"http://www.purplemath.com/Homestead_FL_Calculus_tutors.php","timestamp":"2014-04-18T11:44:40Z","content_type":null,"content_length":"24182","record_id":"<urn:uuid:2e60759c-fc37-40d8-81d1-4f384a4aec12>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Optimal distributed online prediction using mini-batches
Results 1 - 10 of 16
- In NIPS , 2011
"... Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes
to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work a ..."
Cited by 30 (4 self)
Add to MetaCart
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to
parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be
implemented without any locking. We present an update scheme called Hogwild! which allows processors access to shared memory with the possibility of overwriting each other’s work. We show that when
the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then Hogwild! achieves a nearly optimal rate of convergence. We
demonstrate experimentally that Hogwild! outperforms alternative schemes that use locking by an order of magnitude.
, 2011
"... We analyze the convergence of gradient-based optimization algorithms whose updates depend on delayed stochastic gradient information. The main application of our results is to the development of
distributed minimizationalgorithmswhereamasternodeperformsparameterupdateswhile worker nodes compute stoc ..."
Cited by 16 (3 self)
Add to MetaCart
We analyze the convergence of gradient-based optimization algorithms whose updates depend on delayed stochastic gradient information. The main application of our results is to the development of
distributed minimizationalgorithmswhereamasternodeperformsparameterupdateswhile worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to
asynchrony. Our main contributionistoshowthatforsmoothstochasticproblems,thedelaysareasymptotically negligible. In application to distributed optimization, we show n-node architectures whose
optimization error in stochastic problems—in spite of asynchronous delays—scales asymptotically as O(1 / √ nT), which is known to be optimal even in the absence of delays. 1
- In Proceedings of the 28th International Conference on Machine Learning (ICML-11 , 2011
"... Onlinepredictionmethodsaretypicallystudied as serial algorithms running on a single processor. In this paper, we present the distributed mini-batch (DMB) framework, a method of converting a
serial gradient-based onlinealgorithmintoadistributedalgorithm, and prove an asymptotically optimal regret bou ..."
Cited by 11 (1 self)
Add to MetaCart
Onlinepredictionmethodsaretypicallystudied as serial algorithms running on a single processor. In this paper, we present the distributed mini-batch (DMB) framework, a method of converting a serial
gradient-based onlinealgorithmintoadistributedalgorithm, and prove an asymptotically optimal regret bound for smooth convex loss functions and stochastic examples. Our analysis explicitly takes into
account communication latencies between computing nodes in a network. We also present robust variants, which are resilient to failures and node heterogeneity in an asynchronous distributed
environment. Our method can also be used for distributed stochastic optimization, attaining an asymptotically linear speedup. Finally, we empirically demonstrate the merits of our approach on
large-scale online prediction problems. 1.
"... Mini-batch algorithms have been proposed as a way to speed-up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide
a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a sig ..."
Cited by 6 (3 self)
Add to MetaCart
Mini-batch algorithms have been proposed as a way to speed-up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a
novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up and propose a novel accelerated gradient algorithm, which deals with this
deficiency, enjoys a uniformly superior guarantee and works well in practice. 1
"... Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has shown to
be close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much ..."
Cited by 6 (2 self)
Add to MetaCart
Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has shown to be
close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much faster. Recently, many research works have developed efficient optimization methods to construct
linear classifiers and applied them to some large-scale applications. In this paper, we give a comprehensive survey on the recent development of this active research area.
, 2012
"... This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large scale machine learning problems. The first part of the paper deals with the delicate
issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion ..."
Cited by 4 (1 self)
Add to MetaCart
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large scale machine learning problems. The first part of the paper deals with the delicate
issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of
a batch gradient. We establish an O(1/ɛ) complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to
compute Hessian vector-products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The focus of the paper shifts in the third part of the paper to L1
regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and
subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
, 2012
"... We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we
obtain convergence rates of stochastic optimization procedures, both in expectation and with high probabi ..."
Cited by 2 (0 self)
Add to MetaCart
We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we
obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of
our knowledge, these are the first variance-based rates for nonsmooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results
that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic
optimization algorithm that is order-optimal.
"... This paper considers a wide spectrum of regularized stochastic optimization problems where both the loss function and regularizer can be non-smooth. We develop a novel algorithm based on the
regularized dual averaging (RDA) method, that can simultaneously achieve the optimal convergence rates for bo ..."
Cited by 1 (0 self)
Add to MetaCart
This paper considers a wide spectrum of regularized stochastic optimization problems where both the loss function and regularizer can be non-smooth. We develop a novel algorithm based on the
regularized dual averaging (RDA) method, that can simultaneously achieve the optimal convergence rates for both convex and strongly convex loss. In particular, for strongly convex loss, it achieves
the opti-log N N mal rate of O ( 1 1
"... We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that distributes the N data samples
evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provid ..."
Add to MetaCart
We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that distributes the N data samples evenly to
m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of
conditions, the combined parameter achieves mean-squared error that decays as O(N −1 +(N/m) −2). Wheneverm ≤ √ N, this guarantee matches the best possible rate achievable by a centralized algorithm
having access to all N samples. The second algorithm is a novel method, based on an appropriate form of the bootstrap. Requiring only a single round of communication, it has mean-squared error that
decays asO(N −1 +(N/m) −3), and so is more robust to the amount of parallelization. We complement our theoretical results with experiments on largescale problems from the internet search domain. In
particular, we show that our methods efficiently solve an advertisement prediction problem from the Chinese SoSo Search Engine, which consists ofN ≈ 2.4×10 8 samples andd ≥ 700,000 dimensions. 1
, 2012
"... In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daumé III et al., 2012) proposes a
general model that bounds the communication required for learning classifiers while allowing for ε traini ..."
Add to MetaCart
In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daumé III et al., 2012) proposes a
general model that bounds the communication required for learning classifiers while allowing for ε training error on linearly separable data adversarially distributed across nodes. In this work, we
develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses O(d 2 log 1/ε) words of communication to classify
distributed data in arbitrary dimension d, ε-optimally. This readily extends to classification over k nodes with O(kd 2 log 1/ε) words of communication. Our proposed protocol is simple to implement
and is considerably more efficient than baselines compared, as demonstrated by our empirical results. In addition, we illustrate general algorithm design paradigms for doing efficient learning over
distributed data. We show how to solve fixed-dimensional and high dimensional linear programming efficiently in a distributed setting where constraints may be distributed across nodes. Since many
learning problems can be viewed as convex optimization problems where constraints are generated by individual points, this models many typical distributed learning scenarios. Our techniques make use
of a novel connection from multipass streaming, as well as adapting the multiplicative-weight-update framework more generally to a distributed setting. As a consequence, our methods extend to the
wide range of problems solvable using these techniques. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=14064616","timestamp":"2014-04-18T17:25:42Z","content_type":null,"content_length":"38620","record_id":"<urn:uuid:2a616f12-5b4d-49c4-8c40-5bfede6c202c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing the posterior balanced accuracy - File Exchange - MATLAB Central
In binary classification, the average accuracy obtained on individual cross-validation folds is a problematic measure of generalization performance. First, it makes statistical inference difficult.
Second, it leads to an optimistic estimate when a biased classifier is tested on an imbalanced dataset. Both problems can be overcome by replacing the conventional point estimate of accuracy by an
estimate of the posterior distribution of the balanced accuracy.
This archive contains a set of MATLAB functions to estimate the posterior distribution of the balanced accuracy and compute its associated statistics.
For full details, see:
K.H. Brodersen, C.S. Ong, K.E. Stephan, J.M. Buhmann (2010)
The balanced accuracy and its posterior distribution.
Proceedings of the 20th International Conference on Pattern Recognition, 3121-3124.
Please login to add a comment or rating.
|
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/29244-computing-the-posterior-balanced-accuracy","timestamp":"2014-04-18T13:42:38Z","content_type":null,"content_length":"32245","record_id":"<urn:uuid:daf253f6-ae26-402a-84ab-a98fa87468c5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Year 7 Curriculum
Below are the skills needed, with links to resources to help with that skill. We also enourage plenty of exercises and book work. Curriculum Home
Year 7 | Subtraction
☐ Subtract two integers (with and without the use of a number line)
Year 7 | Division
☐ Understand and be able to use Long Division
Year 7 | Numbers
☐ Distinguish between the various subsets of real numbers (counting/natural numbers, whole numbers, integers, rational numbers, and irrational numbers)
☐ Determine the prime factorization of a given number and write in exponential form
☐ Simplify expressions using order of operations (Note: Expressions may include absolute value and/or integer exponents greater than 0.)
☐ Add, subtract, multiply, and divide integers
☐ Add two integers (with and without the use of a number line)
☐ Recognize and state the value of the square root of a perfect square (up to 225)
☐ Determine the square root of non-perfect squares using a calculator
☐ Classify irrational numbers as non-repeating/non-terminating decimals
☐ Identify the two consecutive whole numbers between which the square root of a non-perfect square whole number less than 225 lies (with and without the use of a number line)
☐ Recognize the difference between rational and irrational numbers (e.g., explore different approximations of pi)
☐ Place rational and irrational numbers (approximations) on a number line and justify the placement.
☐ Write numbers in scientific notation
☐ Translate numbers from scientific notation into standard form
☐ Compare numbers written in scientific notation
☐ Find the common factors and greatest common factor of two or more numbers
☐ Determine multiples and least common multiple of two or more numbers
☐ Compare and order integers from -10 to 10
☐ Recognize and state the value of the cube root of a perfect cube (up to 216)
☐ Determine the cube root of non-perfect cubes using a calculator
☐ Identify the two consecutive whole numbers between which the cube root of a non-perfect cube whole number less than 216 lies (with and without the use of a number line)
☐ Know and understand the Fundamental Theorem of Arithmetic.
Year 7 | Measurement
☐ Calculate distance using a map scale
☐ Determine personal references for customary /metric units of mass
☐ Justify the reasonableness of the mass of an object
☐ Convert capacities and volumes within a given system
☐ Identify customary and metric units of mass
☐ Convert mass within a given system
☐ Draw central angles in a given circle using a protractor (circle graphs)
☐ Determine the tools and techniques required to measure with an appropriate level of precision: mass
☐ Know the metric units of area: Square Millimeter, Square Centimeter, Square Meter, Hectare and Square Kilometer; and how to convert between them. Or know the standard units of area: Square Inch,
Square Foot, Square Yard, Acre, Square Mile; and how to convert between them.
Year 7 | Geometry (Plane)
☐ Build a pattern to develop a rule for determining the sum of the interior angles of polygons
☐ Calculate the radius or diameter, given the circumference or area of a circle
☐ Find a missing angle when given angles of a quadrilateral
☐ Understand that Angles on a Straight Line Add to 180 degrees, and Angles Around a Point Add to 360 degrees
☐ Understand Tessellation, and what is meant by regular and semi-regular tessellations.
Year 7 | Geometry (Solid)
☐ Calculate the volumes of prisms and cylinders, using given formulas and a calculator
☐ Identify the two-dimensional shapes that make up the faces and bases of three-dimensional shapes (prisms, cylinders, cones, and pyramids)
☐ Determine the surface areas of prisms and cylinders, using a calculator and a variety of methods
Year 7 | Algebra
☐ Add and subtract monomials with exponents of one
☐ Evaluate formulas for given input values (surface area, rate, and density problems)
☐ Write down the reciprocal of an algebraic expression
Year 7 | Exponents
☐ Develop the laws of exponents for multiplication and division
Year 7 | Inequalities
☐ Solve one-step inequalities (positive coefficients only)
☐ Graph the solution set of an inequality (positive coefficients only) on a number line.
Year 7 | Linear Equations
☐ Translate two-step verbal expressions into algebraic expressions
☐ Solve multi-step equations by combining like terms, using the distributive property, or moving variables to one side of the equation
Year 7 | Trigonometry
☐ Identify the right angle, hypotenuse, and legs of a right triangle
☐ Explore the relationship between the lengths of the three sides of a right triangle to develop the Pythagorean Theorem
☐ Use the Pythagorean Theorem to determine the unknown length of a side of a right triangle
☐ Determine whether a given triangle is a right triangle by applying the Pythagorean Theorem and using a calculator
Year 7 | Polynomials
☐ Identify a polynomial as an algebraic expression containing one or more terms
Year 7 | Functions
☐ Write an equation to represent a function from a table of values
Year 7 | Data
☐ Identify and collect data using a variety of methods
☐ Predict the outcome of an experiment
☐ Design and conduct an experiment to test predictions
☐ Compare actual results to predicted results
☐ Display data in a circle graph (pie chart)
☐ Convert raw data into double bar graphs and double line graphs
☐ Calculate the range for a given set of data
☐ Read and interpret data represented graphically (pictograph, bar graph, histogram, line graph, double line/bar graphs or circle graph)
Year 7 | Estimation
☐ Estimate surface area
☐ Justify the reasonableness of answers using estimation
☐ Estimate the areas of plane shapes by counting the number of squares needed to cover the shape.
Year 7 | Graphs
☐ Draw the graphic representation of a pattern from an equation or from a table of data
Year 7 | Probability
☐ Interpret data to provide the basis for predictions and to establish experimental probabilities
Year 7 | Statistics
☐ Select the appropriate measure of central tendency
☐ Determine the validity of sampling methods to predict outcomes
Year 7 | Money
☐ Calculate unit price using proportions
☐ Compare unit prices
☐ Convert money between different currencies with the use of an exchange rate table and a calculator
|
{"url":"http://www.mathsisfun.com/links/curriculum-year-7.html","timestamp":"2014-04-16T07:13:27Z","content_type":null,"content_length":"72219","record_id":"<urn:uuid:9d4c745b-6d14-407c-bc22-64af1355e446>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Speedometer Error
From Ninja250Wiki
It has been routinely noticed that the stock speedometer is not precise. The error can be up to 10%, which can be tested by any speed measuring device put by police on the roads for your
convenience, or by a modern GPS. This error is not limited to Kawasaki, or even motorcycle, speedometers. Automobile manufacturers routinely install speedos that read fast, most likely to avoid
being sued if their customers get speeding tickets.
There are several ways to remedy the situation:
• Play with the speed calculator to see the difference between actual and indicated speed.
• Do the math in your head. Just subtract 10% of your indicated speed. So, if the speedometer shows 60mph, your actual speed is 54. It's real easy.
• Change your front tire to a bigger one. A 100/90-16 front tire will be just about right for speed and odometer reading.
• Install a bicycle computer and calibrate it properly.
• Install a GPS for real accuracy. That's not covered in FAQ, but doing a search on 'GPS' will give lots of choices.
Odometer error
Short answer: There isn't any unless you're really, really picky. If you are, get a GPS.
The odometer and the speedometer are driven by two separate systems in the speedometer head. The odometer uses a rigid gearing system driven by the speedometer cable, so there's no way for the ratio
between the front tire and the odometer number wheels to change. It's fixed. The only variable is the diameter of the front tire. The speedometer needle is driven using a spinning magnet inside a
steel bell resisted by a coil spring, so there's plenty of room for errors in that system.
If you dismantle an EX250F speedometer and count the gear teeth, here is what you'll find:
• The speedometer hub gear ratio is 23:9 - For every 23 turns of the front wheel, the speedo cable turns 9 times.
• The first gear set inside the speedo head has a ratio of 10:1 - For every 10 turns of the speedo cable, the cross shaft spins 1 time.
• The second gear set has a ratio of 16:1, so every 16 turns of the cross shaft turns the longitudinal shaft 1 time.
• The last gear set ratio is 14:1 - Every 14 rotations of the longitudinal shaft turns the 10ths wheel of the odometer 1 time.
• If you do the math, the overall ratio between the wheel and odometer is (14x16x10x9)/23, or 876.5:1.
Tire size variables: The stock front tire for the 250 Ninja F is 100-80/16. The nominal diameter is 22.30" and calculated theoretical circumference is 70.06". This will vary somewhat by manufacturer
and tire model. The stock size tire rotates 904.37 times in one mile. This will make the odometer read 1.03 miles for each mile traveled, roughly 3 percent error over. A common replacement size for
the front tire is 100/90-16, which has a diameter of 23.09" and a calculated circumference of 72.53", again variable by tire model. This size rotates 873.58 times per mile and will make the odometer
read .997 miles for each mile traveled, about 1/3 of a percent under actual miles. In both cases, this is a relatively small error.
If your definition of accurate is zero percent error, then yes, there is some error in the odometer reading. But in the real world, that's just not possible. Every little thing will affect odometer
accuracy, including tire wear, road temperature, tire pressure, and ambient temperature. For everyday purposes the Ninjette's odometer accuracy, though not a perfect 100 percent, can be considered
to be spot on. More importantly, the ratio of front wheel turns to odometer wheel turns can never change or vary, unless gear teeth are physically stripped or broken. If that's the case, then the
typical result is a complete failure to function rather than a decrease in accuracy.
|
{"url":"http://faq.ninja250.org/wiki/Speedometer_Error","timestamp":"2014-04-17T09:44:56Z","content_type":null,"content_length":"16314","record_id":"<urn:uuid:9cdd53c9-8017-41b6-94a9-a347786ae222>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
|
tough question
September 3rd 2009, 08:35 AM
tough question
this question is particularly tough as don't know how to draw boxes and label them etc but i think it might be able to be done without seeing the boxes, here it is-
a pen is built onto an existing rectangular building where AF is 10 units and FE is 5 units. AF and FE are shared sides between the pen and the rectangular building. the perimeter ABCDE (F is the
corner of the rectangular building) is 65 units. find a relationship between x and y where AB = x and DE = y
show that the area enclosed by the pen is $125 + 30x - x^2$
by completing the square, find the greatest possible area enclosed by the pen.
hats off to anyone that can do this without the visual aid
September 3rd 2009, 09:38 AM
Either learn to post diagrams (or explain CLEARLY)
else don't post anything! All you're doing is creating headaches (Worried)
For this one, as explanation, we'd need something clear, like:
ABCD are the corners of a rectangular building.
E is on AB, F is on AD, forming triangular pen AEF.
AE = 10 and AF = 5........ Gey my drift?
September 3rd 2009, 09:42 AM
i think my best bet here is learning how to make a diagram, any tips on how to do that?
September 3rd 2009, 09:46 AM
this question is particularly tough as don't know how to draw boxes and label them etc but i think it might be able to be done without seeing the boxes, here it is-
a pen is built onto an existing rectangular building where AF is 10 units and FE is 5 units. AF and FE are shared sides between the pen and the rectangular building. the perimeter ABCDE (F is the
corner of the rectangular building) is 65 units. find a relationship between x and y where AB = x and DE = y
show that the area enclosed by the pen is $125 + 30x - x^2$
by completing the square, find the greatest possible area enclosed by the pen.
hats off to anyone that can do this without the visual aid
Hi mark,
Go into Start - Program - Accessories - Paint
Create a drawing using the tools provided, labeling all parts.
Save the drawing as a .jpg or .png file.
Attach drawing to your post.
September 3rd 2009, 09:54 AM
ok this is gonna take a while so i might have to wait till tomorrow in that case, i'll post it here then, thanks for the advice though
September 3rd 2009, 01:05 PM
this question is particularly tough as don't know how to draw boxes and label them etc but i think it might be able to be done without seeing the boxes, here it is-
a pen is built onto an existing rectangular building where AF is 10 units and FE is 5 units. AF and FE are shared sides between the pen and the rectangular building. the perimeter ABCDE (F is the
corner of the rectangular building) is 65 units. find a relationship between x and y where AB = x and DE = y
show that the area enclosed by the pen is $125 + 30x - x^2$
by completing the square, find the greatest possible area enclosed by the pen.
hats off to anyone that can do this without the visual aid
I was able to solve the problem without a visual aid,
but to understand my answer,
you need to see the attached image.
THE CONDITIONS MET:
1) a pen is built onto an existing rectangular building (you meant ON TOP)
2) where AF is 10 units and FE is 5 units. AF and FE are shared sides between the pen and the rectangular building.
see image
3) the perimeter ABCDE (F is the corner of the rectangular building) is 65 units.
see image
4) find a relationship between x and y where AB = x and DE = y
this is easy
$y = \dfrac{10}{15} x$
5) show that the area enclosed by the pen is $125 + 30x - x^2$
that's what the image is for: show
6) by completing the square, (you meant rectangle (Giggle))
7) find the greatest possible area enclosed by the pen.
when x=15, the maximum area is 350 square units
hats off to anyone that can do this without the visual aid
September 3rd 2009, 01:16 PM
it seems as though you've got the right answers, so well done, i take my hat off to you. still don't think i follow though, that drawing isn't like the one in the picture here. i'm gonna try to
draw one now so you can see what you you can make of it
September 3rd 2009, 01:48 PM
here's the question again (as it reads exactly from the book): a pen is built onto an existing rectangular building (shaded) where AF is 10 units and FE is 5 units, as shown in the diagram. the
perimeter ABCDE is 65 units. find a relationship between x and y, where AB = x and DE = y
show that the area enclosed by the pen is 125 + 30x - x^2
by completing the square, find the greatest possible area enclosed by the pen
September 3rd 2009, 02:20 PM
could someone please show me (using the post above) how exactly you would arrive at x = 15 and y = 10 and also run through how you would show that the area enclosed by the pen is $125 + 30x - x^
September 3rd 2009, 04:24 PM
First make an equation for the perimeter.
$(x+5) + (y+10) + x + y =65$
and the solve for y
$y = -x +25$
Now make and equation for the area
$A = (5+x)(10+y) -50$
$A= xy +10x +5y$
Now sub y into your area equation and you have your answer.
Hope that helps
September 4th 2009, 01:34 AM
i sort of understand the first bit, with making an equation for the perimeter (i'm guessing the second x and y are the two unnamed sides) but the rest i haven' got a clue about. could you tell me
step by step where you get y = -x + 25 from and all the rest of it becase as usual i haven't got a clue.
September 4th 2009, 05:47 AM
mr fantastic
i sort of understand the first bit, with making an equation for the perimeter (i'm guessing the second x and y are the two unnamed sides) but the rest i haven' got a clue about. could you tell me
step by step where you get y = -x + 25 from Mr F says: Make y the subject in equation (1). (The second line tells you to do this).
and all the rest of it becase as usual i haven't got a clue.
The reply you got is very clear. Please review it again.
Edit: $(x+5) + (y+10) + x + y = 65 \Rightarrow 2x + 2y + 15 = 65$$\Rightarrow 2x + 2y = 50 \Rightarrow x + y = 25 \Rightarrow y = 25 - x$.
September 4th 2009, 06:59 AM
Mark, if your teacher's intent was to teach solving by substitution,
then I'm perplexed by the fact that this "confusing pen on rectangle"
was used...WHY?
Problem could be worded simply:
Given that x + y = 25 and A = 10x + 5y + xy, what is A in terms of x?
Once students have learned the HOW, then bring in ye olde "word problems".
September 4th 2009, 07:10 AM
me not having a teacher might have something to do with it. i'm just trying to learn from a book, going through all the questions one by one. i still don't understand this question or how 11rdc11
came up with those answers
September 4th 2009, 07:13 AM
ah, i've just seen something written by mr fantastic thats helped a bit, i should have realised that first bit for myself, with or without being taught by anyone actually, those brackets 11rcd11
used threw me off
|
{"url":"http://mathhelpforum.com/algebra/100422-tough-question-print.html","timestamp":"2014-04-18T05:17:00Z","content_type":null,"content_length":"22311","record_id":"<urn:uuid:67698cf6-9631-4f3b-b50e-a5ca91127085>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I Do Not Get This Problem At All. Can Anybody Help ... | Chegg.com
i do not get this problem at all. Can anybody help me with this problem? I need step by step detail. thanks
Use the figure below for parts (a) and (b).
(a) Can the circuit shown above be reduced to a single resistor connected to the batteries? Explain. (Use the following as necessary: V = 15, R1 = 2.2, and R2 =1.6.)
(b) Calculate each of the unknown currents I1, I2, and I3 for the circuit.
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/get-problem--anybody-help-problem-need-step-step-detail-thanks-use-figure-parts-b--circuit-q949558","timestamp":"2014-04-16T12:38:04Z","content_type":null,"content_length":"26438","record_id":"<urn:uuid:4ab76d5a-06c9-41f7-87dc-95324c15f1a2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
King Of Prussia Algebra Tutor
Find a King Of Prussia Algebra Tutor
...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair!
14 Subjects: including algebra 1, algebra 2, calculus, physics
...I have a Master's Degree in Speech and Language. As such much of my training required courses in biology and chemistry as it relates to functions of the neurological system, muscular system and
other bodily functions that relate to brain functions. Prior to retiring from my position as Supervis...
51 Subjects: including algebra 1, Spanish, English, reading
...I am stern but caring, serious but fun, and nurturing but have high expectations of all of my students. Together as a team, you and I can help your child to do his or her best. I look forward
to working with you and your child!I am a certified and current teacher in the public schools.
12 Subjects: including algebra 1, algebra 2, geometry, trigonometry
...I have methods to determine what is the best way a student learns, and am committed to finding a way to teach them effectively using visual, auditory, or kinesthetic strategies, or some
combination thereof. I obtained my International Baccalaureate Diploma in July 2012 at Central High School of ...
18 Subjects: including algebra 2, algebra 1, reading, Spanish
...Once, when a calculus student of mine said that about derivatives, I took them out to her car and explained how acceleration, velocity, and distance are all calculus terms we think about in the
car while she drove us around.Swimming is the premier water sport of the world. I was a certified life...
21 Subjects: including algebra 2, chemistry, algebra 1, calculus
|
{"url":"http://www.purplemath.com/king_of_prussia_algebra_tutors.php","timestamp":"2014-04-21T10:46:52Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:cdc3c5bb-27b1-42f2-b690-1acace23d7f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sweep of Loading Temps Write Up
Over night we recorded some interesting data that may focus our testing today.
We did 18 small steps to span the range of temperatures that Celani's cell was working in durring the ICCF-17 demo.
The recorded T_mica and T_well look like this.
Nothing particularly stands out here to me. These corresponded to power steps that look like this:
The first two steps are bigger. The next steps were increases in voltage, so as they increased, the power increased more, and shows bigger steps here.
The pressure was a little bit choppy, but it was also on the edge of the noise limit for the pressure sensor, I think. Still, the pressure was definitely dropping after the last step. This seems to
be the same kind of drop we saw in the 24 hour run.
The impedance is what I found interesting, though. There was a definite range in which the impedance would decrease with time.
I wanted to zoom into that area. When it is scaled up, it show up very clearly.
After the 3rd through the 7th step, impedance declined. After the res of them, the impedance was flat or inclined after the power step.
When I plot it vs T_mica, I get this funny looking graph. The little curvy bits happen between 202 C and 220 C. With a peak at 208 or so.
When I look at the excess power calculation, we see the at these points correspond to a smaller dip when the power steps up.
That is a tenuous link, but it intrigues me.
I am most intrigued by the wire's impedance dropping over time. I think that there might be something there to learn. I am interested in seeing how low the impedance will go when I go back to 208C.
The fact that this is right near the operating conditions of the demo I saw with my own eyes has something to do with this. The fact that the actual operating temperature of the wire may be
significantly hotter than the relatively massive mica might mean that there is an actual sweet spot in that temperature range.
Let's watch together.
0 #46 2012-11-20 17:09
I continued to experiment with the curve fitting routines and have come to a conclusion. Either of the two series will do a reasonable job of fitting the data I posted. A combination of the two fits
even better with the following equation especially well matched.
P(T)=.015*5.67E-8*(T)**4 + (T-320.0883178) *(T-68.56979702 )*.0011832 .
The quadratic portion translates to:.0011832*(T)**2-.45986028*T+25.969336.
The fact that an entire family of combinations of the forth order term plus the quadratic terms work together allows us to allocate any portion of the escaping power to either path and still obtain
an excellent match to the data.
It is now necessary for us to find some way to pin down the proportions in order to have a good total model of the process.
0 #45 2012-11-20 02:11
@123Star This is maddening. I have used your forth order curve along with a modified quadratic and can get good fits with several different combinations.
I have no idea as to how to allocate the radiation with the other possible heat escapes at this time. We need to find some method that allows us to actually measure the radiation.
This is one of those times when having too many good possibilities prevents us from finding the true function.
I agree, it is time to let this puppy rest for a while and maybe later something will come to us that solves the problem. Where is a good stroke of lightning when you need it?
0 #44 2012-11-20 01:45
Last note (I go to sleep)
Your parameters are better than mine (gnuplot's)
with P(T)=.001902*T* T -.813*T +74.106
your RMS = 0.206826
0 #43 2012-11-20 01:35
Quadratic interpolation with gnuplot (the parameters are a bit different from yours)
f(x) = a*x**2+ b*x +c
a = 0.00190794 +/- 4.171e-05 (2.186%)
b = -0.817727 +/- 0.03134 (3.833%)
c = 75.0295 +/- 5.78 (7.704%)
rms of residuals: 0.25192
0 #42 2012-11-20 01:29
Ohh I get it, "Calculated power" are just the interpolated values (it fitted TOO nicely with your function,
I redo everything using the 2nd column:
f(x)= a*5.67E-8*(x)** 4 +c
a = 0.0492754 +/- 0.0007454 (1.513%)
c = -19.8548 +/- 1.19 (5.995%)
RMS = 1.33571
f(x)= a*5.67E-8*(x)** 4 +c + b*x
a = 0.0394318 +/- 0.001125 (2.854%)
b = 0.125228 +/- 0.01412 (11.28%)
c = -54.3717 +/- 3.904 (7.18%)
RMS = 0.328594
"free to move absolute zero" (parameter b)
f(x)= a*5.67E-8*(x-b) **4 +c
a = 0.0245443 +/- 0.00175 (7.13%)
b = -100.98 +/- 11.53 (11.42%)
c = -34.6504 +/- 1.727 (4.985%)
RMS = 0.26854
0 #41 2012-11-20 01:19
Is "Calculated power" a T_ambient corrected version of P_in?
I used the third column.
However you're right, with these I get:
f(x)= a*5.67E-8*(x)** 4 +c
a = 0.0492969 +/- 0.0007695 (1.561%)
c = -19.8906 +/- 1.229 (6.178%)
RMS = 1.37898
f(x)= a*5.67E-8*(x)** 4 +c + b*x
a = 0.0392439 +/- 0.001385 (3.529%)
b = 0.127891 +/- 0.01738 (13.59%)
c = -55.1415 +/- 4.805 (8.714%)
RMS = 0.404462
"free to move absolute zero" (parameter b)
f(x)= a*5.67E-8*(x-b) **4 +c
a = 0.0240849 +/- 0.002048 (8.502%)
b = -104.123 +/- 13.83 (13.29%)
c = -35.1599 +/- 2.074 (5.9%)
RMS = 0.318427
Absolute 0 is way off (not zero).
0 #40 2012-11-20 00:37
Ok Star, I obtained mine from the viewer. The times can be found in one of my posts below but here they are:
Temperature K Power_In(Watts) Calculated Power
295.6985 0 0
331.4818 13.6581 13.59494
376.827 37.9368 37.82081
401.499 54.3546 54.28809
427.494 73.7296 74.14389
439.918 84.4947 84.54173
452.173 95.622 95.37347
P(T)=.001902*T*T -.813*T +74.106
The forth order fit was not very good when these temperatures were applied. I will see how my quadratic works with yours.
0 #39 2012-11-19 23:39
Can you post me your data set, so that I can check too?
0 #38 2012-11-19 23:37
Ahah! I wrote 276.15 instead of 273.15 before, oops! :)
I did not choose timestamps by myself, I used the values provided in RunHe2_USA.xls found in https://docs.google.com/document/pub?id=1e3t4J-x208AIlt1dwQ2Wo2MVgjgnocAWH63ivL5R0oM
Page: "calibration points" in that xls file.
Here are the data points (11)
T_GlassOut T_Glassout_Kelvin P_In
24.4696190476 297.6196190476 0.058247619
25.3697857143 298.5197857143 0.3230761905
36.0126952381 309.1626952381 3.6153666667
53.9884380952 327.1384380952 10.457647619
76.9723761905 350.1223761905 20.7743095238
102.3640904762 375.5140904762 34.5922238095
128.4058190476 401.5558190476 51.7301857143
154.2049666667 427.3549666667 72.4241714286
179.096052381 452.246052381 96.2592095238
191.432347619 464.582347619 108.5820809524
202.5709 475.7209 121.7431095238
+1 #37 2012-11-19 23:09
To convert Centigrade to Kelvin one needs to add 273.15. It is so easy to get this value mixed up that I wanted to have it posted for us to see.
I would like for us to continue pursuing the heat loss mechanisms until we are confident that the data makes sense.
I have found an excellent curve fit that is quadratic in form which predicts the Power_In at a given T_GlassOut. An additional function of forth order is also available that so far does not match my
source data.
I am expecting to see some radiation from the test device that should be of the forth order. It is not clear as to how large the radiation term should be and I would like to see all of the various
processes incorporated into one total function.
Add comment
|
{"url":"http://www.quantumheat.org/index.php/en/follow/145-sweep-of-loading-temps-write-up","timestamp":"2014-04-19T01:55:04Z","content_type":null,"content_length":"54610","record_id":"<urn:uuid:d5650fa0-6dae-4d6f-8d8d-7cfadb8787f9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sample Weights & Clustering Adjustments
Sample Weights & Clustering Adjustments
Sample Weights
In each survey year a set of sampling weights is constructed. These weights provide the researcher with an estimate of how many individuals in the United States each respondent's answers represent.
Weighting decisions for the NLSY79 are guided by the following principles:
1. individual case weights are assigned for each year in such a way as to produce group population estimates when used in tabulations
2. the assignment of individual respondent weights involves at least three types of adjustment, with additional considerations necessary for weighting of NLSY79 Child data
The interested user should consult the NLSY79 Technical Sampling Report (Frankel, Williams, and Spencer 1983) for a step-by-step description of the adjustment process. A cursory review of the
process follows.
Adjustment One: The first weighting adjustment involves the reciprocal of the probability of selection at the first interview. Specifically, this probability of selection is a function of the
probability of selection associated with the household in which the respondent was located, as well as the subsampling (if any) applied to individuals identified in screening.
Adjustment Two: This process adjusts for differential response (cooperation) rates in both the screening phase and subsequent interviews. Differential cooperation rates are computed (and
adjusted) on the basis of geographic location and group membership, as well as within-group subclassification.
Adjustment Three: This weighting adjustment attempts to correct for certain types of random variation associated with sampling as well as sample "undercoverage." These ratio estimations are
used to conform the sample to independently derived population totals.
Sampling Weight Readjustments: Sampling weights for the main survey are readjusted to account for noninterviews each survey year. The readjustments are necessitated by differential nonresponse and
use base year sample parameters for their creation, employing a procedure similar to that described above. The only exception occurs in the final stage of post-stratification. Post-stratification
weights in survey rounds two and above have been recomputed on the basis of completed cases in that year's sample rather than the completed cases in the base year sample.
Custom Weights
Users looking for a simple method to correct a single years’ worth of raw data for the effects of over-sampling, clustering and differential base year participation should use the weights include
each round on the data release. Unfortunately, while each round of weights provides an accurate adjustment for any single year, none of the weights provide an accurate method of adjusting multiple
years’ worth of data. The NLS has a custom weighting program which provides the ability to create a set of customized longitudinal weights. These weights improve a researchers’ ability to
accurately calculate summary statistics from multiple years of data.
The custom weighting program calculates its weights by first creating a new temporary list of individuals who meet all of a researcher’s criteria. This list is then weighted as if the individuals
had participated in a new survey round. The weights for this temporary list are the output of the custom weighting program.
There are two slightly different versions of the custom weighting program. The first version is the online program found at http://www.nlsinfo.org/weights/nlsy79. This program allows researchers to
specify the particular rounds in which respondents participated. Researchers can also select if “The respondents are in all of the selected years” or can select if “The respondents are in any or all
of the selected years.”
Important Information About Using the Custom Weighting Program
If you select all survey rounds available and also pick “The respondents are in any or all of the selected years” then the weights produced are identical to round 1 survey weight. This result arises
because the any selection combined with all survey rounds produces a list of every person who participated in the survey.
The second version of the custom weighting program is for researchers who have an even more complex research design. This version allows you to input a list of respondent ids and receive back the
appropriate weights for just that list. For example, the second version of the program allows researchers to weight just people who ever reported smoking cigarettes in any survey or weight just
people who needed extra time to graduate college.
The second version is available upon request from NLS User Services. User Services will send a researcher the underlying PC-SAS code that runs the custom weighting program. To use this version
researchers must first be able to create their own custom list. The lists are simple ASCII files with each NLS public id on its own line. Researchers also need to have a minimal familiarity with
modifying SAS programs and have a valid PC-SAS license for their computer.
Important Information
The output of the custom weight program has 2 implied decimal places just like the weights found in the data release. Dividing each custom weight output value by 100 results in the number of
individuals the respondent represents.
Practical Usage of Weights
The application of sampling weights varies depending on the type of analysis being performed. If tabulating sample characteristics for a single interview year in order to describe the population
being represented (that is, compute sample means, totals, or proportions), researchers should weight the observations using the weights provided. For example, to estimate the average hours worked in
1987 by persons born in 1957 through 1964, simply use the weighted average of hours worked, where weight is the 1987 sample weight. These weights are approximately correct when used in this way,
with item nonresponse possibly generating small errors. Other applications for which users may wish to apply weighting, but for which the application of weights may not correspond to the intended
result include:
Samples Generated by Dropping Observations with Item Nonresponses: Often users confine their analysis to subsamples for which respondents provided valid answers to certain questions. In this case, a
weighted mean will not represent the entire population, but rather those persons in the population who would have given a valid response to the specified questions. Item nonresponse because of
refusals, don't knows, or invalid skips is usually quite small, so the degree to which the weights are incorrect is probably quite small. In the event that item nonresponse constitutes only a small
proportion of the data for variables under analysis, population estimates (that is, weighted sample means, medians, and proportions) would be reasonably accurate. However, population estimates based
on data items that have relatively high nonresponse rates, such as family income, may not necessarily be representative of the underlying population of the cohort under analysis. For more information
on item nonresponse in the NLSY79, see the Item Nonresponse section of this guide.
Data from Multiple Waves: Because the weights are specific to a single wave of the study, and because respondents occasionally miss an interview but are contacted in a subsequent wave, a problem
similar to item nonresponse arises when the data are used longitudinally. In addition, occasionally the weights for a respondent in different years may be quite dissimilar, leaving the user
uncertain as to which weight is appropriate. In principle, if a user wished to apply weights to multiple wave data, weights would have to be recomputed based upon the persons for whom complete data
are available. In practice, if the sample is limited to respondents interviewed in a terminal or end point year, the weight for that year can be used (for more information on weighting see the
section on clustering adjustments).
Regression Analysis: A common question is whether one should use the provided weights to perform weighted least squares when doing regression analysis. Such a course of action may not lead to
correct estimates. If particular groups follow significantly different regression specifications, the preferred method of analysis is to estimate a separate regression for each group or to use dummy
(or indicator) variables to specify group membership.
Users interested in calculating the population average effect of, for example, education upon earnings, should simply compute the weighted average of the regression coefficients obtained for each
group, using the sum of the weights for the persons in each group as the weights to be applied to the coefficients. While least squares is an estimator that is linear in the dependent variable, it
is nonlinear in explanatory variables, and so weighting the observations will generate different results than taking the weighted average of the regression coefficients for the groups. The process
of stratifying the sample into groups thought to have different regression coefficients and then testing for equality of coefficients across groups using an F-test is described in most statistics
Users uncertain about the appropriate grouping should consult a statistician or other person knowledgeable about the data set before specifying the regression model. Note that if subgroups have
different regression coefficients, a regression on a random sample of the population would not be properly specified.
|
{"url":"https://www.nlsinfo.org/content/cohorts/nlsy79/using-and-understanding-the-data/sample-weights-clustering-adjustments","timestamp":"2014-04-19T09:25:11Z","content_type":null,"content_length":"60475","record_id":"<urn:uuid:269efbc5-7ebd-4a09-967b-b9098da63720>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
compound interest problems, without using the formula's
January 20th 2009, 06:20 PM
compound interest problems, without using the formula's
Suppose we have 100$ to invest and the annual interest rate is 10%. Work out
DIRECTLY (in other words don't use some formula you found somewhere unless you derive it from scratch)
how much money you have after one year if
a) the interest is compounded monthly
b) the interest is compounded daily
c) the interest is compounded hourly.
-No idea how to do this... can anyone help?
|
{"url":"http://mathhelpforum.com/calculus/69136-compound-interest-problems-without-using-formulas-print.html","timestamp":"2014-04-17T20:14:06Z","content_type":null,"content_length":"3503","record_id":"<urn:uuid:93423e12-be85-4500-a155-34c8070be8ce>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
Hello i got problem with nr 62. I got some progress but i dont understand
Maybe you can start by determining N from the graph instead of algebraically. Post your answers here.
Hi im currently on with phone but it looks around when n=5 kinda hard to se the graf
Yes, for ε = 0.5, N = 5 and even N = 3 works. So, you need to determine for which x > 0 it is the case that $\frac{\sqrt{4x^2+1}}{x+1}>2-\varepsilon=2-0.5=1.5$. Multiply both sides by (two times) the
denominator, take square of both sides and solve (approximately) the resulting quadratic equation.
i will try again, try se if i can se what i did wrong
You start with the quadratic inequality, which skips a lot of steps. I don't agree with the inequality, but I can't point out the step with an error because these steps are skipped. I wrote a couple
of intermediate results in post #6. I also recommended multiplying both sides by 2 (or 4) to get rid of fractions like 1.5. Here is a sequence of steps I recommend. Start with $\frac{\sqrt{4x^2+1}}
{x+1}>1.5$. 1. Multiply both sides by (x + 1). (The direction of the inequality does not change because we are looking for x > -1, where x + 1 > 0.) 2. Multiply both sides by 2 to get rid of 1.5. 3.
Take square of both sides. (This could lead to appearance of spurious solutions, but we ignore this for now.) 4. Move everything to the left-hand side and add like terms. 5. Solve the quadratic
equation obtained when > is replaced with =. Let's denote the solutions by $x_1$ and $x_2$ where $x_1 < 0$ and $x_2 > 0$. 6. Since the leading coefficient of the quadratic polynomial f(x) is positive
and the inequality has the form f(x) > 0, the solutions to the inequality are $x < x_1$ and $x > x_2$. We are interested in $x_2$. Edit: Sorry, I missed that there are two attached images and not
one. I'm looking at the second one...
idk im following my book how it tell me :P that is when f(x)>0 and now ima solve for f(x)<0
OK, I see that you started from $\left|\frac{\sqrt{4x^2+1}}{x+1}-2\right|<0.5$, which you converted into $\frac{\sqrt{4x^2+1}}{x+1}-2<0.5$. In fact, if we denote $\frac{\sqrt{4x^2+1}}{x+1}$ with z, $
|z-2|<0.5$ is equivalent to $-0.5<z-2<0.5$. Adding 2 to all sides, we get $1.5<z<2.5$. So you are right that $\frac{\sqrt{4x^2+1}}{x+1}<2.5$ is a part of the original inequality. However, if you saw
the graph from post #2, you must have seen that as x tends to infinity, the graph approaches 2 from below. This means that $\frac{\sqrt{4x^2+1}}{x+1}<2$ for positive x, so the inequality $\frac{\sqrt
{4x^2+1}}{x+1}<2.5$ is automatically true for positive x. What we are interested in is the other part: $1.5<\frac{\sqrt{4x^2+1}}{x+1}$. It becomes true only starting from some N > 0. I assumed you
saw all this after post #2 and therefore I recommended solving $\frac{\sqrt{4x^2+1}}{x+1}>2-\varepsilon=2-0.5=1.5$ (*) in post #4. Later, in post #6, I wrote a couple of inequalities you obtain while
solving (*). You ignored all this and started solving a different inequality.
I also recommend clicking the "Reply With Quote" button under my posts to see how formulas are typed using LaTeX. Using LaTeX is not difficult, and you won't need to attach pictures. Also check out
the LaTeX Help subforum on this site.
Last edited by Petrus; February 13th 2013 at 11:12 AM.
|
{"url":"http://mathhelpforum.com/pre-calculus/213044-epsilon.html","timestamp":"2014-04-19T13:00:46Z","content_type":null,"content_length":"84614","record_id":"<urn:uuid:6882aefc-390e-4b06-a1ef-ae4413a441f8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When quoting this document, please refer to the following
DOI: 10.4230/OASIcs.CCA.2009.2277
URN: urn:nbn:de:0030-drops-22770
URL: http://drops.dagstuhl.de/opus/volltexte/2009/2277/
Go to the corresponding Portal
Ziegler, Martin
Contributed Papers
Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability
It is folklore particularly in numerical and computer sciences that, instead of solving some general problem $f:A\to B$, additional structural information about the input $x\in A$ (that is any kind
of promise that $x$ belongs to a certain subset $A'\subseteq A$) should be taken advantage of. Some examples from real number computation show that such discrete advice can even make the difference
between computability and uncomputability. We turn this into a both topological and combinatorial complexity theory of information, investigating for several practical problem show much advice is
necessary and sufficient to render them computable. Specifically, finding a nontrivial solution to a homogeneous linear equation $A\cdot\vec x=0$ for a given singular real $n\times n$-matrix $A$ is
possible when knowing $\rank(A)\in\{0,1,\ldots,n-1\}$; and we show this to be best possible. Similarly, diagonalizing (i.e. finding a basis of eigenvectors of) a given real symmetric $n\times
n$-matrix $A$ is possible when knowing the number of distinct eigenvalues: an integer between $1$ and $n$ (the latter corresponding to the nondegenerate case). And again we show that $n$--fold (i.e.
roughly $\log n$ bits of) additional information is indeed necessary in order to render this problem (continuous and) computable; whereas finding \emph{some single} eigenvector of $A$ requires and
suffices with $\Theta(\log n)$--fold advice.
BibTeX - Entry
author = {Martin Ziegler},
title = {{Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability}},
booktitle = {6th International Conference on Computability and Complexity in Analysis (CCA'09)},
series = {OpenAccess Series in Informatics (OASIcs)},
ISBN = {978-3-939897-12-5},
ISSN = {2190-6807},
year = {2009},
volume = {11},
editor = {Andrej Bauer and Peter Hertling and Ker-I Ko},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2009/2277},
URN = {urn:nbn:de:0030-drops-22770},
doi = {http://dx.doi.org/10.4230/OASIcs.CCA.2009.2277},
annote = {Keywords: Nonuniform computability, recursive analysis, topological complexity, linear algebra}
Keywords: Nonuniform computability, recursive analysis, topological complexity, linear algebra
Seminar: 6th International Conference on Computability and Complexity in Analysis (CCA'09)
Issue Date: 2009
Date of publication: 25.11.2009
DROPS-Home | Fulltext Search | Imprint
|
{"url":"http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=2277","timestamp":"2014-04-16T14:15:03Z","content_type":null,"content_length":"5702","record_id":"<urn:uuid:001da244-29bf-41d0-b1dc-605d4e1bb4bf>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Merlin-Arthur games and stoquastic complexity. Arxiv: quant-ph/0611021
"... QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most of which are variants of the “Local Hamiltonian” problem introduced by Kitaev. In
this dissertation we show some new QMA-complete problems which are very different from those known previo ..."
Cited by 4 (1 self)
Add to MetaCart
QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most of which are variants of the “Local Hamiltonian” problem introduced by Kitaev. In this
dissertation we show some new QMA-complete problems which are very different from those known previously, and have applications in quantum chemistry. The first one is “Consistency of Local Density
Matrices”: given a collection of density matrices describing different subsets of an n-qubit system (where each subset has constant size), decide whether these are consistent with some global state
of all n qubits. This problem was first suggested by Aharonov. We show that it is QMA-complete, via an oracle reduction from Local Hamiltonian. Our reduction is based on algorithms for convex
optimization with a membership oracle, due to Yudin and Nemirovskii. Next we show that two problems from quantum chemistry, “Fermionic Local Hamiltonian” and “N-representability, ” are QMA-complete.
These problems involve systems of fermions, rather than qubits; they arise in calculating the ground state energies of molecular systems. N-representability is particularly interesting, as it is a
key component
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=752134","timestamp":"2014-04-25T01:49:31Z","content_type":null,"content_length":"12992","record_id":"<urn:uuid:7ac3a539-87ec-4d36-be2c-3c7e702d23c0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Please help me understand on Thursday, February 9, 2012 at 1:39am.
Yes it was answered previously but I am sorry that I did not follow the pattern. I was told that the answer that I submitted was incorrect. Please help if you can. Thank you so very much.
A three-digit number increases by 9 if we exchange the second and third digits. The same three-digit number increases by 90 if we exchange the first and second digits. By how much will the value
increase if we exchange the first and third digits?
Thank you
• Math Help - Reiny, Thursday, February 9, 2012 at 8:17am
in the original number,
let the unit digit be a,
the tens digit be b
and the hundred digit be c
the appearance of our number is cba
then the value of our number is 100c + 10b + a
case1: interchange 2nd and 3rd digit
number looks like cab
value of our new number is 100c + 10a + b
so 100c + 10a + b - (100c + 10b + a) = 9
9a -9b = 9
a-b = 1 , (#1)
case2: interchange the 1st and 2nd digit
appearance of new number is bca
value of new number is 100b + 10c + a
so 100b+ 10c + a - (100c + 10b + a) = 90
90b -90c = 90
b - c = 1 , (#2)
case3: interchange 1st and 3rd
appearance of new number is abc
value of new number is 100a + 10b + c
change in value = 100a + 10b + c - (100c + 10b + a)
= 99a - 99c
= 99(a-c)
but if we add #1 and #2
we get
a - c = 2
so 99(a-c) = 99(2) = 198
A little know feature of the above is the following "math trick"
1. Pick any 3 digit number, all different and the hundreds digit greater than the unit digit
2. reverse the digits and subtract the numbers. If you get a 2 digit result, insert a 0 in the hundred place
3. reverse your subtraction answer and add the last two results,
4. You will always get 1089
Related Questions
physics - given that the mass of an electron is 9.1 x 10^-28 grams and the ...
Statistics - Given the following null hypothesis, give an example of a Type II ...
Healthcare Statistic - Given the following null hypothesis, give an example of a...
Math HELPPP - In a survey of 2000 adults 50 years and older of whom 40% were ...
physics - k so question is one you have all answered previously, 3 different ...
Mathematics - Maria answered all the problems on her math test. She answered 80 ...
3rd grade math - on one test, chi answered 76 out of 100 questions correctly. on...
cultual diversity - describe other measurments sociologists use to calculate ...
sports - how is cricket played? Previously answered. There's no need to post ...
Algebra - I posted this earlier but it didn't get answered. So I'm posting it ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1328769566","timestamp":"2014-04-16T08:11:19Z","content_type":null,"content_length":"10058","record_id":"<urn:uuid:65c55bef-0119-4f87-9324-c326aaebea32>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hypothesis test involving regression coefficient
October 6th 2012, 06:43 PM #1
Oct 2012
Hypothesis test involving regression coefficient
I'm working on a problem and was hoping I could get either confirmation that I'm doing it correctly, or help in learning the correct way. I used Excel to calculate a stock's Beta, comparing
percentage changes in the stock price to that of the S&P 500. I am now tasked with implanting a hypothesis test testing whether the population Beta is greater than 1. My understanding of how to
work this problem is that I will subtract 1 from my regression coefficient, and then divide that by the standard error of the regression coefficient to arrive at my t-statistic. I will then use a
one tail t-test to arrive at my p-value. Is this correct? If not, how would I go about conducting this test? Thank you very much for your time and help.
Re: Hypothesis test involving regression coefficient
Hey Ironlionzion.
What is this Beta? Is this some kind of mean (Since you mention a t-test, this is used primarily for means)?
The idea that you have for getting a statistic and a p-value for the null hypothesis is the way to do it, but the only thing that I am not clear on is what this Beta actually is and how its
calculated from your sample.
Re: Hypothesis test involving regression coefficient
Thank you for your reply. Beta is just my regression coefficient from my regression equation. It measures a stock's volatility versus the market. So, basically I'm being tasked with testing
whether the regression coefficient for my population would be greater than 1. Thank you again for your time and help.
Re: Hypothesis test involving regression coefficient
It will depend on your model and your data, but essentially the whole thing boils down to what the distribution of your Beta's are.
I did a quick search, and the following seems like a good place to start reading:
The key thing is finding a distribution for your Beta's and then using that to get p-values or intervals for a significance level to test hypotheses in the exact same way you do it in a t-test,
or an F-test.
The other way that is derived from scratch is the Bayesian Inference technique that puts priors on the parameters and gets a distribution for the coeffecients which is a t-distribution
(multi-variate) with the point estimates being the normal ones calculated from Least Squares with a covariance matrix given also by the LSE approach sigma_hat^2 * (X'X)^(-1) where X' is the
transpose of the design matrix X and sigma_hat^2 is the estimate of sigma^2).
You should be able to use this result and you can look it up either on Google or in a textbook (my resources are from university notes which I can't share unfortunately).
October 6th 2012, 08:21 PM #2
MHF Contributor
Sep 2012
October 7th 2012, 07:48 AM #3
Oct 2012
October 7th 2012, 03:56 PM #4
MHF Contributor
Sep 2012
|
{"url":"http://mathhelpforum.com/advanced-statistics/204772-hypothesis-test-involving-regression-coefficient.html","timestamp":"2014-04-16T17:29:41Z","content_type":null,"content_length":"39414","record_id":"<urn:uuid:0df973e5-f2bd-4779-843c-c73dab5668a5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Not Even Wrong
This Week’s Hype
Today Slashdot brings us the news that Gamma Ray Anomaly Could Test String Theory. As usual with such media claims about the testability of string theory, this is complete nonsense. The story is
based on this Scientific American blog posting, which in turn is based on this paper by the MAGIC gamma ray telescope collaboration.
The claims about testing string theory aren’t in the paper, but appear to come from string theorist Dimitri Nanopoulos who claims that he predicted (or, more accurately, “suggested”) the kind of
effect seen by MAGIC using string theory. As far as I can tell though, just about no string theorists except Nanopoulos and his collaborators Nick Mavromatos and John Ellis actually believe this.
Mavromatos and Nanopoulos also believe that string theory is responsible for the way that our brains work, here’s the abstract of one of their papers on this:
Microtubule (MT) networks, subneural paracrystalline cytosceletal structures, seem to play a fundamental role in the neurons. We cast here the complicated MT dynamics in the form of a
1+1-dimensional non-critical string theory, thus enabling us to provide a consistent quantum treatment of MTs, including enviromental friction effects. We suggest, thus, that the MTs are the
microsites, in the brain, for the emergence of stable, macroscopic quantum coherent states, identifiable with the preconscious states. Quantum space-time effects, as described by non-critical
string theory, trigger then an organized collapse of the coherent states down to a specific or conscious state.
Claims have been made by many string theorists that not only does string theory not predict this kind of violation of Lorentz invariance, but exactly the opposite: string theory predicts no such
violation. String theorist Jacques Distler earlier this year even went so far as to have the University of Texas issue a press release trumpeting his claims to have shown that string theory is
falsifiable, using a calculation based on the assumption that string theory preserves Lorentz invariance (either his colleagues or a PRL referee wouldn’t let him make this claim in the paper the
press release was based on, but that’s another story…).
Claims have been made (although there is controversy about this), that the main competing quantum gravity research program, Loop Quantum Gravity, predicts this sort of violation of Lorentz
invariance, and this would be one way of distinguishing it from string theory. Lubos Motl has a new posting about the MAGIC result, mainly concerned with knocking it down since he fears that it will
be used as evidence for LQG and against string theory.
It seems to me that in any case, the actual experimental evidence here is far too weak to support any claim that a violation of Lorentz invariance has been shown. Among the usual nonsense on
Slashdot, there was the following sensible comment about the MAGIC result from an astrophysicist:
What they are saying is that there are still details we don’t understand about AGN [active galactic nuclei] like Markarian 501. So, while this effect could be a first sign of quantum gravity
(*not* string theory in particular, as others have pointed out), it could also simply be something going on in the intrinsic spectrum of the flares themselves. I’d personally consider the second
explanation more likely at this stage.
As they also point out, one approach to sort out the ambiguity would be to observe other flary AGN at different redshifts (distances). One could then, for example, see if the delay gets shorter
or longer as the distance changes, as one would expect with a quantum gravity effect due to propagation to Earth.
Utterly Off-topic, But How Can I Resist Mentioning: According to this blog entry by a USC student, not only am I the “archnemesis” of string theorist blogger Clifford Johnson, but also
If string theory were a vampire, he’d be Buffy.
I’ll have to consult my friends and colleagues on the resemblance to Buffy question, personally I don’t see it.
I don’t know about vampires, but these “tests of string theory” are kind of like the living dead, staggering around trying to get their teeth into people and turn them into string theory partisans.
No matter how often you blow their heads off with a shotgun, more keep coming…
Update: Lubos and I seem to be in complete agreement about this experimental result and the Nanopoulos et. al. explanation of it. This situation appears to have driven him over the edge.
Update: See Backreaction for a more detailed posting about the MAGIC result.
43 Responses to This Week’s Hype
1. Microtubule (MT) networks, subneural paracrystalline cytosceletal structures, seem to play a fundamental role in the neurons.
Yes, and it’s called scaffolding. “Trigger an organized collapse of the coherent states down to a specific or conscious state”? Where do I find the paper that equates a “collapsed quantum state”
to the “conscious state” (aka. the “unicorn state”, often hinted at in dark, confusing tales, never actually photographed in the wild).
Still, that paper is dated “1995″, and if I remember well, “Microtubules -> Consciousness” inferences, sometimes via Quantum Mechanics, sometimes via old-school Turing Computation made several
appearances in Artificial Life proceedings at those time. Even Sir Penrose entered the game. Extremely speculative. Overall, an approach that went precisely nowhere. Anyway, an area to not get
Maybe I should be happy to languish in engineering.
Maybe I should be happy to languish in engineering.
I guess each discipline has its own no go areas that can be colonialized / made up by members of others as their eccentric hobby. Roger Penrose could do all kinds of speculations about mind,
Goedels theorem and quantum gravity. As a cognitive scientists his reputation was done but as a mathematician he won’t endanger his credibility or “core competence” by doing such excursions ( as
long as they aren’t too mad ).
3. If a theory violates LI at high energies, say the Planck scale, standard renormalization group suggests that at low energies this will manifest itself by a series of relevant, marginal and
irrelevant operators. Bounds on such Lorentz violations at accessible energies are extremely tight, no current experiment will improve those bounds. So, high energy breaking of LI is to my
knowledge sufficient grounds for falsifying a theory, or at least casting very strong doubts on it (spontaneous violation is a different story).
Also to my knowledge LQG does not predict such violation, and Lee does not claim it does. He only claims it might once we understand things better. It escapes me why that should be a good thing,
but maybe I am missing something.
4. Peter,
John Ellis and Dimitri Nanopoulos happen to also be the second and fourth most cited high-energy physicists, with Witten first and Weinberg third. Thus, you’re attempt to try to dismiss them as
crackpots just isn’t going to fly.
5. The link to Motl’s blog leads to something very amusing. Check it out.
6. Eric,
You are seriously underestimating Peter’s capabilities. Being the second and fourth most highly cited high energy physicist (more precisely, phenomenologist) does not deter Peter from declaring
him/her to be a non-sense creating pseudo-scientist. Actually, the more famous person is called a crackpot here the better, because more controversy brings more readers to this blog. It doesn’t
work like in science that you have to create quality content in order to get noticed, in the blogosphere the more crazyness you produce the more attention you will get. Pretty much like in mass
media which of course includes blogs these days. So don’t be surprised to see more top cited phenomenologists, string theoriests, non-string theorist, etc, etc, getting on Peter’s public enemy
7. bhabha,
I would never underestimate Peter’s abilities capabilities, especially in regards to using underhanded tactics.
Regarding the statments about Lorentz violation in string theory, what Lubos and Distler refer to is critical string theory. It’s possible to get this effect (frequency dependent speed of light)
in non-critical string theory.
In regards to the LQG vs. string theory debate, I think this is one more bit of evidence that there is some overlap between the two theories, and they may be part of the same larger theory as Lee
has suggested.
8. Eric and Bhabha,
Unlike you I’m not personally attacking anyone, but discussing their scientific arguments. The argument that the MAGIC results give evidence for string theory is, scientifically, nonsense, and it
would be hard to find anyone other than Nanopoulos, Mavratos and Ellis who would disagree. I can’t help noticing that string theorists rarely admit that these bogus “tests for string theory” are
indefensible, preferring to instead personally attack me for pointing this out, invoking not science but citation counts in their defense.
9. Peter,
First, the MAGIC results are very interesting and cannot be dismissed. Second, such an effect is of interest not just for string theory, but for quantum gravity in general including LQG. Third,
Ellis, Nanopoulos, and Mavromatos did predict this effect several years ago, as you may discover on the arXiv.
Regarding your statement that noone other than ENM takes these results seriously, how do you know? Have you talked with all string theorists and phenomenologists to get there opinion, or are you
just relying on what you heard from Lubos and Distler?
What’s next, trying to undermine their credibility by mentioning that they once wrote papers with Hagelin?
10. Eric,
If you want to provide us with a list of string theorists and phenomenologists who think that the MAGIC results give evidence for string theory and thus that the Slashdot headline is not
nonsense, go right ahead.
11. John Ellis and Dimitri Nanopoulos happen to also be the second and fourth most cited high-energy physicists
This isn’t so difficult when John Ellis single-handedly writes as many papers as any random group of ten other physicists. Writing a lot and getting cited a lot does not in itself make one a
great physicist.
12. ” What’s next, trying to undermine their credibility by mentioning that they once wrote papers with Hagelin?”
In fact their papers with Hagelin were their high point. It’s been all downhill since then.
I’ve actually read more Nanopoulos/Ellis papers than I care to admit. I have to laugh when I hear Nanopoulos claiming to have predicted something. It’s a million monkeys with typewriters type of
13. Microtubules are huge proteins, maybe 4 orders of magnitude over than the scale where you can observe quantum effects with neutral molecules. (If somebody tells me that it is possible to see
interference pattern by shooting tiny molecules like ammonia in vacuo through a double slit, and that scale of the effect can comparable with the actual size of ammonia molecule, I can believe
Looking for quantum effects in the mechanics of a living cell is completely New Age.
14. I was told about this new article on this spamblog. You may visit my blog to see some clarifications of the statements made by the individual behind this spamblog.
15. It’s not clear whether Lorentz violation is a consequence of string theory or if it’s forbidden by string theory. The same holds for loop quantum gravity. However, if you’re going to look for
signs of Lorentz violation, there are many ways to go about it, and looking for an energy dependence in the speed of light is not the most sensitive. Because of the vector character of light, the
speed of light is such theories generally depends on polarization as well as energy. Searches for this kind of birefringence are much more sensitive than experiments that look for differences in
photon arrival times. Indeed, I know that another high-energy telescope experiment already has much better data on photon arrival time differences, but they have refrained from publishing it, in
part because it is not competitive with the polarization bounds.
Because of the vector character of light, the speed of light is such theories generally depends on polarization as well as energy.
Not true Brett, numerous papers about this from non-string QG researchers
have ruled out polarization dependence.
Indeed, I know that another high-energy telescope experiment already has much better data on photon arrival time differences, but they have refrained from publishing it, in part because it is
not competitive with the polarization bounds.
In that case they are doing the QG community a disservice by not publishing, because the polarization bounds are irrelevant. You should urge them to publish.
I assume you mean they have data which would constrain (if not rule out) energy-but-not-polarization dependence, and that is precisely what I see being discussed.
My sense is we have a ways yet to go with this issue.
17. Brett is right that polarization odd variations in the speed of light are already ruled out at planck scale by observations of polarized radio galaxies, but polarization even variations in the
speed of light are not because they cause no bifringence. The latter are, however, a possible consequence of a deformation rather than breaking of Poincare invariance. If an experiment has data
on photon arrival time variation with energy it must, because of the limits on parity odd variation, be parity even, and hence, if there is no other explanation, it could be a detection of a
deformation of poincare invariance (so called “doubly special relativity”).
18. On what grounds can a polarization dependence in the speed of light be ruled out? One can make an assumption that quantum gravity will have certain features, such as no birefringence, but this
will limit the terms in the low-energy effective action to a measure zero subset of the full parameter space of Lorentz invariance violation.
To me, it seems rather wishful thinking to hope that quantum gravity will have such a profound signature as Lorentz violation, while not interacting with the spin structure of the electromagnetic
field. There is no compelling reason why this should be the case. The interactions that avoid birefringence deserve to be tested (and I strongly recommended that the data I saw be published), but
they are only a peculiar subset of the possible Lorentz-violating interactions. One can always write down a nonrenormalizable interaction which all previous experiments have been insensitive to,
but which a new configuration will test; yet selling it as a profound new test of quantum gravity is illogical. (And all renormalizable varying speed of light theories can indeed be bounded by
birefringence.) And if you want to make a generic statement about how well Lorentz invariance has been tested, it behooves you to look at the best bounded sectors, not the worst.
You may visit my blog t
But, Lubos, you have a message in your site saying you don’t want readers from this blog.
20. “If string theory were a vampire, he’d be Buffy.”
Hah. It’s occurred to me that string theory is to real physics as a drag queen is to a real woman. Unlike real women, drag queens are expertly groomed, and beautifully made up, but when the
moment of truth arises, who would you rather be with?
On the other hand, the results of this experiment are quite predictable. That’s one thing drag queens have going for them, unlike string theories.
21. Lee and Brett, regardless of the new and exotic phenomena that you are discussing, I see no reason Lorentz invariance violation at the Planck scale should be automatically a small effect at low
energy. Most conservative estimates based on the existence of LV relevant and marginal perturbations to the standard model (I believe the number of those is 46) makes Lorentz violation basically
already falsified, based on existing experimental results. Unless one finds a way to fine tune away a lot of really large violations I am not sure why we are discussing those tiny sub-sub-leading
22. Dear Moshe and Brett,
Parity odd variation of the speed of light with energy is ruled out to at least order 10^-3 [l_{Planck} Energy] by observational limits on bifringence from polarized sources. See gr-qc/0102093.
This is a prediction of lorentz symmetry breaking, therefor it is reasonable to infer that lorentz invariance is not broken at order l_{Planck}. But deformation of Poincare symmetry is another
thing entirely, there is still a ten parameter global symmetry algebra constraining renormalization effects, so Moshe’s considerations can be answered directly; these are the leading effects of
deformed Poincare symmetry. Since the symmetry group is still present it does rule out as many terms as ordinary poincare symmetry, and one of them is a parity odd varation in the speed of light
coming from the usual dimension five term seen in lorentz symmetry breaking.
To be more precise the Casimer invariant of the deformed Poincare algebra is no longer quadratic in energy and momentum, leading to corrections to the speed of light. Thus, deformed Poincare
symmetry can imply an order l_{Planck} variation in the speed of light with energy, which is parity even and therefor not ruled out. Therefor if the right interpretation of the observations
reported by the MAGIC collaboration is a modification of spacetime symmetry it must be a deformation and not a breaking of Poincare symmetry-because the latter is already ruled out by experiment,
but the former is not. The same holds for the observations Brett hinted about.
23. Lee,
Regarding the paper gr-qc/0102093 (You also mention the same on page 226 of TTWP)…the assumption made in the paper
“If we assume that linearly polarized photons are
detected, and unambiguously identified with a source at cosmological distance z, without any significant interaction
in between, we may be immediately sure that (6) is not strongly violated.”
is too large of an assumption, in my opinion.
Cosmological data has traditionally been the poorest data of all the sciences, and you want to draw definite and strong conclusions about physics at the Planck scale based upon it? This to me is
very wishful thinking.
24. Moshe,
I agree with you about naturalness, which is why I think it’s more important to concentrate on renormalizable forms of Lorentz violation. Indeed, there is no known reason why Lorentz violation,
were it to exist, should be small. Lorentz violating interactions are, of course, technically natural, since they receive no radiative corrections from Lorentz-invariant physics, but if they
actually exist, they must either be finely tuned, or they must be suppressed by some unknown mechanism.
The number of dimension three and four Lorentz-violating operators that can be constructed out of standard model fields is much larger than 46. With just one generation of fermions and the
electromagnetic field, there are about 150. Of course, many of those mix under renormalization. The number of different symmetry types is still greater than 46 though. Actually, how many
different physically meaningful operators there are depends on whether you consider only flat spacetime or whether you consider working in a curved background, and exactly how which terms are
physical under which circumstances is not completely understood.
25. LDM,
I understand reservations about claims of positive results because there could be other explanations, but the claim in gr-qc/0102093 is a negative result. Do you believe that there could be
leading order lorentz symmetry breaking which produces bifringence, which is then masked by some ordinary astrophsyical effect so that no bifringence is seen? What physics do you have in mind
that could reverse rotation in the plane of polarized radio waves so its effects were not seen?
It seems to me reasonable to infer that lorentz symmetry breaking is not present, in this case the experiment and the more theoretical argument discussed here by Moshe agree.
26. Thanks Lee, probably I was a little ambiguous. Sorry. I will see if I can phrase this more precisely…
I am not arguing either for or against Lorentz violation, I am only arguing against using the cosmological data as you have done…
In the TTWP, page 226, you mention that the travel time can be “billions of years” for the photons in question. So we are talking about large distances.
Now, in the paper, we also have the statement
“comparing the time of arrival of rays at different energies emitted simultaneously from the
same source, one can test the validity of this prediction”
I am assuming that time of arrival that is being discussed is based on our distance from the source of photons. (If it is not, and you do not need to know the distance, then I am wrong, and
please accept my apologies for wasting your time.) The problem is that cosmological distances are very uncertain, to quote from M Berry “Principles of Cosmology and Gravitation”:
“How do we know the distances and densities quoted? The Universe is charted by a sequence of techniques, each of which takes us out to a greater range of distances – to the next level of the
‘cosmic distance hierarchy’. Each level is less reliable than the last, so that there is considerable uncertainty about the measurements of very great distances.”
So, it would seem
we have considerable uncertainty in our data…but in TTWP, you are talking about measuring differences of 1/1000 of a second, which it seems to me is a fairly precise or certain measurement…The
impression I have is we do not have that kind of accuracy. And so you cannot meaningfully use the cosmological data in the way you are attempting to, which is to measure differences of 1/1000 of
a second over large cosmic distances.
27. Thanks Brett, the number 46 came from the Coleman-Glashow paper, if the number is bigger I am even more worried…
Lee, the fact that global symmetries are preserved by renormalization, and therefore can be used to forbid otherwise possible interactions, this fact was established through a series of theorems
in the 1960s and 70s, those theorems apply to ordinary global symmetries.
If there is some deformed version of Poincare symmetry it is then natural to ask if it is preserved by renormalization. If not it doesn’t give any restriction on the form of the low energy EFT.
If it is preserved maybe it limits the allowed interactions in some way, but even then it seems to me it will take a miracle to allow small violations of LI while by forbid much the numerous much
large effects.
28. LDM,
The different arrival times are for different gamma rays emitted at the same time from the source with different frequencies, and no you don’t need know the cosmological distance precisely. Once
only needs the distance to be large so that the difference in arrival times is measurable. Essentially, the higher frequency gammas interact with the vacuum and slow down, just as a light ray
does when going through some medium, otherwise known as refraction.
29. garbled the last sentence, the last few words should be “while forbidding the numerous much larger effects”. Those effects refer to all the renormalizable terms Brett discussed, not just the
dimension 5 operator discussed by Lee.
30. “And so you cannot meaningfully use the cosmological data in the way you are attempting to, which is to measure differences of 1/1000 of a second over large cosmic distances.” – LDM
I think you completely misunderstand. If two photons of different energies arrive 1 millisecond apart from the same gamma ray burster or whatever, the accuracy of that measurement (relative time
of 1 ms difference) is independent of the less accurate cosmological distance ladder which estimates how far the gamma ray burster is from you. You simply don’t need to know exactly how far away
the source is, in order to detect that photons are travelling at different speeds…
31. Thank you anon. .Yes, perhaps I misunderstand…but let me ask you, the two photons that arive 1 millisecond apart from the GRB, how do you know that the photons did not leave the GRB 1 millisecond
apart too?
And more to the point that is bothering me, what are the error bounds on these measurements?
32. LDM, if the 1 ms delay is not caused by differences in photon speeds, then similar delay times whould show up in the time-dependent energy spectra of gamma ray bursters, regardless how far away
they are. If the delay is caused by differences in photon speeds, then the furthest sources should show the biggest delays.
33. Dear Moshe,
I agree, an important question is whether deformed poincare symmetry is preserved by renormalization. My argument assumed yes, but you are right that this needs to be shown, I am not sure of the
status of this in various approaches to QFT with deformed poincare symmetry but will check.
Dear LDM,
We seem to be at cross purposes, the paper gr-qc/0102093 does not use arrival times, it uses the absence of bifrengence in observations of polarized radio galaxies. The MAGIC claim does use
arrival times, as do limits using lower energy gamma ray bursts set to M_QG. There are several ways this is addressed in the literature: 1) as anon mentions one can hope to get redshifts for
enough events and see if there is a correlation with distance, 2) by using very short bursts and bounding the dispersion relation by the overall length of the signal, 3) by a better understanding
of the source. None of these apply to the MAGIC claim.
The MAGIC paper uses another argument based on extremizing energy flux. Does anyone know how reliable this kind of argument is? Is it used elsewhere in astrophysics?
34. LDM, if the 1 ms delay is not caused by differences in photon speeds, then similar delay times whould show up in the time-dependent energy spectra of gamma ray bursters, regardless how far away
they are. If the delay is caused by differences in photon speeds, then the furthest sources should show the biggest delays.
which would be detectable provided gamma ray bursters at all eras are essentially identical.
35. Thanks Lee.
36. Hi Lee,
I’m confused by the terms “deformation” and “violation” of Lorentz symmetry. It seems to me that if the lagrangian contains some terms which are invariant under the “deformed” Lorentz symmetry
but NOT invariant under the usual undeformed Lorentz symmetry, such terms would therefore violate the usual Lorentz symmetry since they are not invariant under it, right?
37. Thanks for the link!
38. This thread is interesting but…Does ST predict a LI violation or not? What is the difference between critical and non-critical ST? Are they both derived from the same assumptions? If they are do
we now have two families of ST? And if these ST guys predicting LI violation are not really preaching the true ST why don’t the true ST guys stand up and refute them openly?
39. Cecil,
Non-critical string theory involves strings propagating in a dimension of spacetime less than the critical dimension of ten. The resulting anomalies are cancelled by exciting the Liouville mode
(linear dilaton) of the strings. The statement that string theory strictly obeys Lorentz invariance is true only in the context of critical string theory.
40. So which version of String Theory, critical or non-critical, to String Theorists believe corresponds to the real world? …oh wait
This situation appears to have driven him over the edge.
Who is this Lubos that you mention? Is he affiliated to an academic institution? He seems to refer to L. Susskind with strange reverence, is he another squatter at Stanford?
42. A somewhat belated item has appeared in New Scientist on
the MAGIC “test” of string theory at:
This entry was posted in Uncategorized. Bookmark the permalink.
|
{"url":"http://www.math.columbia.edu/~woit/wordpress/?p=591","timestamp":"2014-04-20T15:54:08Z","content_type":null,"content_length":"88399","record_id":"<urn:uuid:ad6c2b9f-a0f8-48b6-b50f-d7965e6e1182>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advogato: Blog for raph
Why formal methods?
Formal methods have earned a bad reputation. At one time, I think it was widely held that eventually we'd figure out how to prove large scale programs correct, and that the benefits would be
compelling. But it hasn't worked out that way. People program as carelessly as ever, with reams of bugs and security holes to show for it.
Even so, I'm stubbornly hopeful. I think we simply haven't worked out good ways to do formal proofs yet. Mathematical proof style isn't really all that formal, and doesn't seem to adapt well to
computer problems. Dijkstra's work provides glimpses of the future, but those techniques won't become popular until we can teach ordinary people to use them.
Another problem is that mathematical logic is fairly nasty. Especially when dealing with infinities, you have to be careful to avoid pitfalls like the Russell set "paradox". It's especially a problem
with rigorously formal logic because you really want "metatheorems" to work: essentially, creating new deduction rules along with proofs that they're sound. The problem is that no formal system can
be both complete and consistent. So you have to place limits on metatheorems, and often getting work done has the flavor of working around these limits.
What's the answer? Well, one way is to bite the bullet and adopt a reasonably powerful axiom set as the basis for all other work. A problem here is that you can't really get people to agree on which
axiom set is the right one. In fact, controversy rages on whether or not to accept the law of the excluded middle or the axiom of choice.
But I find these debates deeply unsatisfying. What bearing do they have on whether a program runs according to its specification? My gut feeling is that infinity-prone concepts such as integer, real,
and set are a bit too seductive. Computers deal with finite objects. In many ways, 64-bit word is a simpler concept than integer, as evidenced by the much larger set of open problems. A 64-bit
addressable array of bytes is a bit larger, but still finite. You can do a lot in that space, but a lot of the classically tough problems become conceptually simple or trivial. Solution to
Diophantine equations? Undecidable over integers, but if you confine yourself to solutions that fit in memory, just enumerate all possible arrays and see if any fit. You wouldn't do it this way in
practice, of course, but I find it comforting to know that it presents no conceptual difficulty.
It looks as if the field of formal methods may be heading this direction anyway. Recently, the subfield of model checking has been racking up some success stories. The models in question have much
more of a finite flavor than most mathematical approaches to computations. It may well be that the technology for reasoning about finite formal systems evolves from the model checking community to
become widely applicable to programs. That would be cool.
I still haven't answered the title question: why formal methods? My answer is that formal techniques are our best hope for producing software of adequate quality. I'm under no illusion that it's a
magic bullet. No matter how good the proof technology becomes, I'm sure it will always be at least one or two orders of magnitude more work to produce a provably correct program than to hack one out.
Even so, programs will still only be "correct" with respect to the specification. In some cases, a spec will be relatively simple and straightforward. Lossless data compression algorithms are
probably my favorite example: essentially you want to prove that the composition of compression and decompression is the identity function. But how can you prove a GUI correct?
You can't, but the use of formal methods will put intense pressure on spec authors to remove needless complexity and ambiguity. When formal methods catch on, I think we'll start seeing specs that
truly capture the essence of an idea, rather than sprawling messes common today.
I also believe that security will be a major driving force in these developments. Our processes for writing software today (free and proprietary alike) are simply incapable of producing secure
systems. But I think it's possible to formalize general security properties, and apply them to a wide class of systems. Buffer overflows (still a rich source of vulnerabilities) are obvious, but I
think it's also possible to nail down higher-level interactions. It would be cool, for example, to prove that a word processing format with rich scripting capabilities is incapable of propagating
Even so, getting the right spec is a hard problem. Timing attacks, for example, took a lot of people by surprise. If your spec doesn't take timing into account, you might have a system that's
provably impossible to break (I'll assume that P != NP gets proven somewhere along the way), but falls even so. This brings me to another point: security assertions are often very low-level, while
the natural tendency of computer science theorists is to lift the abstraction level as high as possible.
This, I am convinced, is how we'll program in fifty years. A lot of work will go into writing good specifications; more than goes into writing code now. Then, when people actually write programs,
they'll do correctness proofs as they go along. It might take a thousand times as much work to crank out a line, but I think we can easily get by on a thousandth as much code.
And I think it's a pretty good bet that this will come out of the free software world rather than proprietary companies. We'll just have to
|
{"url":"http://www.advogato.org/person/raph/diary/253.html","timestamp":"2014-04-18T05:44:46Z","content_type":null,"content_length":"9540","record_id":"<urn:uuid:2b1f5e11-75a6-48e6-bd3c-d316de799c6b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two geometric probability questions (one answered, one more to go)
up vote 2 down vote favorite
1. Given $n$ independent uniformly distributed points on $S^2$, what's the distribution of the distance between two closest points?
2. Consider $n$ iid uniform points on $S^1$, $Y_1, \ldots, Y_n$, in counterclockwise order. Now let $I_1 = Y_2-Y_1, \ldots, I_n = Y_1 - Y_n$ be the spacings between consecutive points. Finally order
the spacing sequence into $I_{(1)} < I_{(2)} < \ldots < I_{(n)}$. They will also generate a spacing sequence, of size $n-1$, $J_1 = I_{(2)} - I_{(1)}, \ldots, J_{n-1} = I_{(n)} - I_{(n-1)}$.
What's the distribution of this last sequence? In particular, what's the mean value of the smallest $J$ and largest $J$?
For the first problem, I think you just fix one of the points to be the north pole, and look at surface areas of caps. – Eric Tressler Nov 11 '10 at 5:53
1 The answer will depend on whether you're talking about chord distance or distance on the surface, though. – Eric Tressler Nov 11 '10 at 5:55
These two are essentially the same, aren't they? – John Jiang Nov 11 '10 at 6:31
I mean there is simple formula relating the two, so the distribution of one would be a simple transform of the other. – John Jiang Nov 11 '10 at 6:32
add comment
2 Answers
active oldest votes
There is an asymptotic formula for the minimal spherical distance when $n$ is large (see e.g. the PhD thesis "Random Diameters and Other U-Max-Statistics" by M. Mayer, Corollary
Theorem. Assume that the points $\xi_1,\xi_2,...,\xi_n$ are independent and uniformly distributed on $\mathbb S^{d−1}$. Let $S_n$ be the smallest central angle formed by point
pairs within the sample. Then for $t > 0$ $$P\{n^{2/(d-1)}S_n\leq t\}=1-\exp\left(-\frac{\Gamma(\frac{d}{2})}{4\pi^{1/2}\Gamma(\frac{d+1}{2})}t^{d-1} \right)+\mathcal O(n^
up vote 4 down
vote accepted I am not sure if there is a nice explicit formula for finite $n$.
In fact, the knowledge of the exact form of the distribution $P\{S_n\leq\theta\}$ on $\mathbb S^2$ would lead to a solution of the Tammes packing problem (which is only solved for a
few values of $n$ to the best of my knowledge).
add comment
2) This is, of course, the same as saying about spacings between uniform points on a segment (you can say that $Y_1=0$, for example). Let it be the segment $[0,1]$.
Now the joint distribution of $I_1,\dots, I_{n}$ is the same as of $E_1/E,\dots, E_n/E$, where $E_1,\dots, E_n$ are iid exponential distributed, $E=\sum_{k=1}^n E_k$ (see Devroye
Non-Uniform Random Variate Generation, p.208). So the distribution of $I_{(1)},\dots, I_{(n)}$ is the same as of $E_{(1)}/E,\dots, E_{(n)}/E$. But the joint distribution of $\{E_{(k)}-E_
{(k-1)},k=1,\dots,n\}$ ($E_{(0)}:=0$) is the same as of $\{(n-k+1)^{-1} E_k,k=1,\dots,n\}$ (ibid, p.211).
So the distribution of $J_1,\dots, J_n$ is the same as of $\{(n+k-1)^{-1} E_k/E,k=1,\dots,n\}$, where $E_1,\dots, E_k$ are iid exponential rv's, $E=\sum_{k=1}^n E_k$. And this is, by the
previous paragraph, equivalent to saying that the distribution is the same as of $\{(n-k+1)^{-1} I_k,k=1,\dots,n\}$.
up vote 2
down vote These are not independent, but very close to be, and from here you can find the distribution of maximum and minimum (but nothing very pleasant there, as the variables in question are not
identically distributed; a formula for the expectation looks extremely ugly).
How to get distribution of $J$ omitting $E$. In fact, this is simple owing to the fact that the ordering map on the simplex $\{(t_1,\dots,t_n)|t_j\ge 0,\sum_j t_j=1\}$ (the support of $I$)
is picewise linear, and moreover each image has the same number of preimages due to the apparent symmetry. So the distribution of $\{I_{(1)},\dots,I_{(n)}\}$ is uniform on its support. Now
we have a one-to-one linear map to $J$. So $J$ is also uniformely distributed. So it's only about finding its support, which is simple, as John Jiang noted.
Thank you for the great answer. I am always scared of exact formulas. – John Jiang Nov 12 '10 at 6:16
@John Jiang: You're warmly welcome. Precise formulas useless here, if you want, I can look into asymptotics. – zhoraster Nov 12 '10 at 8:01
@zhoraster: actually I was able to use the formula you gave above to compute the exact distribution of the minimum: $P(\min J_i > y) = (1-y n(n+1)/2)^{n-1}$, so it doesn't seem bad at
all. What I did is a pretty geometric argument. Notice that $y$ ranges between $0$ and $2/(n(n+1))$, as expected from its being the smallest gap of gaps of $n$ points on the circle. Using
that formula, we just need to integrate $P(\min J_i > y)$ for $y \in [0,2/(n(n+1))]$ to get the expected value, which is $2/((n-1)n(n+1))$. – John Jiang Nov 12 '10 at 9:26
@John Jiang: I see, I had another formula initially (from $E_k/E$), just didn't see that it can be reduced to this. I even discovered now that it is quite straightforward to go from the
distribution of $I$ to the one of $J$ omitting $E$. Anyway, glad that you've found it, congratulations! – zhoraster Nov 12 '10 at 10:09
Thanks again! Your geometric ansatz really helped. I'd be glad to hear how to go directly from I to J, as I am still interested in the entire J sequence, and hopefully prove something
nice about them. – John Jiang Nov 12 '10 at 16:41
add comment
Not the answer you're looking for? Browse other questions tagged pr.probability or ask your own question.
|
{"url":"http://mathoverflow.net/questions/45648/two-geometric-probability-questions-one-answered-one-more-to-go","timestamp":"2014-04-23T14:08:11Z","content_type":null,"content_length":"67407","record_id":"<urn:uuid:d2ee2c36-e4b0-4f88-a634-02c3f5395cdc>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modeling and Simulation
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force B Virtual Engineering: Toward a Theory for Modeling and Simulation of Complex Systems John Doyle,
California Institute of Technology INTRODUCTION This paper is a primer surveying a wide range of issues tied together loosely in a problem domain tentatively referred to as “virtual engineering ”
(VE). This domain is concerned with modeling and simulation of uncertain, heterogeneous, complex, dynamical systems—the very kind of M&S on which much of the vision discussed in this study depends.
Although the discussion is wide ranging and concerned primarily with topics distant from those usually discussed by the Department of the Navy and DOD modeling communities, understanding how those
topics relate to one another is essential for appreciating both the potential and the enormous intellectual challenges associated with advanced modeling and simulation in the decades ahead.
BACKGROUND Perhaps the most generic trend in technology is the creation of increasingly complex systems together with a greater reliance on simulation for their design and analysis. Large networks of
computers with shared databases and high-speed communication are used in the design and manufacture of everything from microchips to vehicles such as the Boeing 777. Advances in technology NOTE: This
appendix benefited from material obtained from many people and sources: Gabriel Robins on software, VLSI, and the philosophy of modeling, Will O'Neil on CFD, and many colleagues and students.
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force have put us in the interesting position of being limited less by our inability to sense and actuate,
to compute and communicate, and to fabricate and manufacture new materials, than by how well we understand, design, and control their interconnection and the resulting complexity. While
component-level problems will continue to be important, systems-level problems will be even more so. Further, “components” (e.g., sensors) increasingly need to be viewed as complex systems in their
own right. This “system of systems” view is coming to dominate technology at every level. It is, for example, a basic element of DOD's thinking in contexts involving the search for dominant
battlefield awareness (DBA), dominant battlefield knowledge (DBK), and long-range precision strike. At the same time, virtual reality (VR) interfaces, integrated databases, paperless and
simulation-based design, virtual prototyping, distributed interactive simulation, synthetic environments, and simultaneous process/product design promise to take complex systems from concept to
design. The potential of this still-nascent approach is well appreciated in the engineering and science communities, but what “it” is is not. For want of a better phrase, we refer to the general
approach here as “virtual engineering” (VE). VE focuses on the role of M&S in uncertain, heterogeneous, complex, dynamical systems—as distinct from the more conventional applications of M&S. But VE,
like M&S, should be viewed as a problem domain, not a solution method. In this paper, we argue that the enormous potential of the VE vision will not be achieved without a sound theoretical and
scientific basis that does not now exist. In considering how to construct such a base, we observe a unifying theme in VE: Complexity is a by-product of designing for reliable predictability in the
presence of uncertainty and subject to resource limitations. A familiar example is smart weapons, where sensors, actuators, and computers are added to counter uncertainties in atmospheric conditions,
release conditions, and target movement. Thus, we add complexity (more components, each with increasing sophistication) to reduce uncertainties. But because the components must be built, tested, and
then connected, we are introducing not only the potential for great benefits, but also the potential for catastrophic failures in programs and systems. Evaluating these complexity versus
controllability tradeoffs is therefore very important, but also can become conceptually and computationally overwhelming. Because of the critical role VE will play, this technology should be robust,
and its strengths and limitations must be clearly understood. The goal of this paper is to discuss the basic technical issues underlying VE in a way accessible to diverse communities—ranging from
scientists to policy makers and military commanders. The challenges in doing so are intrinsically difficult issues, intensely mathematical concepts, an incoherent theoretical base, and misleading
popular expositions about “complexity.”
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force APPROACH In this primer on VE, we concentrate on “physics-based” complex systems, but most of the
issues apply to other M&S areas as well, including those involving “intelligent agents.” Our focus keeps us on a firmer theoretical and empirical basis and makes it easier to distinguish the effects
of complexity and uncertainty from those of simple lack of knowledge. Our discussion also departs from the common tendency to discuss VE as though it were a mere extension of software engineering.
Indeed, we argue that uncertainty management in the presence of resource limitations is the dominant technical issue in VE, that conventional methods for M&S and analysis will be inadequate for large
complex systems, and that VE requires new mathematical and computational methods (VE theory, or VET). We need a more integrated and coherent theory of modeling, analysis, simulation, testing, and
model identification from data, and we must address nonlinear, interconnected, heterogeneous systems with hierarchical, multi-resolution, variable-granularity models—both theoretically and with
suitable software architectures and engineering environments. Although the foundations of any VE theory will be intensely mathematical, we rely here on concrete examples to convey key ideas. We start
with simple physical experiments that can be done easily with coins and paper to illustrate dynamical systems concepts such as sensitivity to initial conditions, bifurcation, and chaos. We also use
these examples to introduce uncertainty modeling and management. Having introduced key ideas, we then review major success stories of what could be called “proto-VE” in the computer-aided design
(CAD) of the Boeing 777, computational fluid dynamics (CFD), and very large scale integrated circuits (VLSI). While these success stories are certainly encouraging, great caution should be used in
extrapolating to more general situations. Indeed, we should all be sobered by the number of major failures that have already occurred in complex engineering systems such as the Titanic,
Tacoma-Narrows bridge, Denver baggage-handling system, and Ariane booster. We argue that uncertainty management together with dynamics and interconnection is the key to understanding both these
successes and failures and the future challenges. We then discuss briefly significant lessons from software engineering and computational complexity theory. There are important generalizable lessons,
but—as we point out repeatedly—software engineering is not a prototype for VE. Indeed, the emphasis on software engineering to the exclusion of other subjects has left us in a virtual
“pre-Copernican” stage in important areas having more to do with the content of M&S for complex systems. Against this background, we draw implications for VE. We go on to relate these implications to
famous failures of complex engineering systems, thereby demonstrating that the issues we raise are not mere abstractions, and that achieving the potential of VE (and M&S) will be enormously
challenging. We touch
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force briefly on current examples of complex systems (smart weapons and airbags) to relate discussion to the
present. We then discuss what can be learned from control theory and its evolution as we move toward a theory of VE. At that point, we return briefly to the case studies to view them from the
perspective of that emerging theory. Finally, we include a section on what we call “soft computing,” a domain that includes “complex-adaptive-systems research, ” fuzzy logic, and a number of other
topics on which there has been considerable semi-popular exposition. Our purpose is to relate these topics to the broader subject of VE and to provide readers with some sense of what can be
accomplished with “soft computing” and where other approaches will prove essential. In summary before getting into our primer, we note that several trends in M&S of complex systems are widely
appreciated, if not well understood. There is an increasing emphasis on moving problems and models from linear to nonlinear; from static to dynamic; and from isolated and homogeneous to
heterogeneous, interconnected, hierarchical, and multi-resolution (or variable granularity and fidelity). What is poorly understood is the role of uncertainty, which we claim is actually the origin
of all the other trends. Model uncertainty arises from the differences between the idealized behavior of conventional models and the reality they are intended to represent. The need to produce models
that give reliable predictability of complex phenomena, and thus have limited uncertainty, leads to the explicit introduction of dynamics, nonlinearity, and hierarchical interconnections of
heterogeneous components. Thus the focus of this paper is that uncertainty is the key to understanding complex systems. INTRODUCTION TO CENTRAL CONCEPTS Dynamical Systems A few simple thought
experiments can illustrate the issues of uncertainty and predictability—as well as of nonlinearity, dynamics, heterogeneity, and ultimately complexity. Most of the experiments we discuss here can
also be done with ordinary items like coins and paper. Consider a coin-tossing mechanism that imparts a certain linear and angular velocity on a coin, which is then allowed to bounce on a large flat
floor, as depicted in Figure B.1 . Without knowing much about the mechanism, we can reliably predict that the coin will come to rest on the floor. For most mechanisms, it will be impossible to
predict whether it will be heads or tails. Indeed, heads or tails will be equally likely, and any sequence of heads or tails will be equally likely. Such specific predictions are as reliably
unpredictable as the eventual stopping of the coin is predictable. The reliable unpredictability of heads or tails is a simple consequence of the sensitivity to initial conditions that is almost
inevitable in such a mechanism. The coin will bounce around on the floor in an apparently random and erratic manner
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force FIGURE B.1 Coin tossing experiment. before eventually coming to rest on the floor. The coin's
trajectory will be different in detail for each different toss, in spite of efforts to make the experiment repeatable. Extraordinary measures would be needed to ensure predictability (e.g., dropping
the coin heads up a short distance onto a soft and sticky surface, so as always to produce heads). Sensitivity to initial conditions (STIC) can occur even in simple settings such as a rigid coin in a
vacuum with no external forces, not even gravity. With zero initial velocity, the coin will remain stationary, but the smallest initial non-zero velocity will cause the coin to drift away with
distance proportional to time. The dynamics are linear and trivial. This points out that—in contrast with what is often asserted—sensitivity to initial conditions is very much a linear phenomenon.
Moreover, even in nonlinear systems, the standard definition of sensitivity involves examining infinitesimal variations about a given trajectory and examining the resulting linear system. Thus even
in nonlinear systems, sensitivity to initial conditions boils down to the behavior of linear systems. What nonlinearity contributes is making it more difficult to completely characterize the
consequences of sensitivity to initial conditions. Sensitivity to initial conditions is also a matter of degree; the coin-in-free-space example being on the boundary of systems that are sensitive to
initial conditions. Errors in initial conditions of the coin lead to a drifting of the trajectories that grows linearly with time. In general, the growth can be exponential, which is more dramatic.
If we add atmosphere, but no other external force, the coin will eventually come to rest no matter what the initial velocities, so this
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force is clearly less sensitive to initial conditions than the case with no atmosphere. A coin in a thick,
sticky fluid like molasses is even less sensitive. Not all features of our experiment are sensitive to initial conditions. The final vertical position is reliably predictable, the time at which the
coin will come to rest is less so, the horizontal resting location even less so, and so on, with the heads or tails outcome perfectly unpredictable. It follows that any notion of complexity cannot be
attributed to the system, but must include the property of it that is in question. EXPONENTIAL GROWTH, CHAOS, AND BIFURCATION We can get a better understanding of sensitivity to initial conditions
with some elementary mathematics. Suppose we have a model of the form x(t+1) = f(x(t)). This tells us what the state variable x is at time t+1 as a function of the state x at time t. This is called a
difference equation, which is one way to describe a dynamical system—i.e., one that evolves with time. If we specify x(t) at some time, say t = 0, then the formula x(t+1) = f(x(t)) can be applied
recursively to determine x(t) for all future times t = 1,2,3, . . . . This determines an orbit or trajectory of the dynamical system. This only gives x at discrete times, and x is undefined
elsewhere. It is perhaps more natural to model the coin and other physical systems with differential equations that specify the state at all times, but difference equations are simpler to understand.
For the coin, the state would include at least the positions and velocities of the coin, and possibly some variables to describe the time evolution of the air around the coin. If the coin were
flexible, the state might include some description of the bending and its rate. And so on. A scalar linear difference equation is of the form x(t+1) = ax(t), where a is a constant (the vector case is
x(t+1) = Ax(t), where A is a matrix). If x(0) is given, the solution for all time is x(t) = atx(0). Thus, if a>1, nonzero solutions grow exponentially and the system is called unstable. Since the
system is linear, any difference in initial conditions will also grow exponentially. (If a<1, then solutions decay exponentially to zero and the origin is a stable fixed point.) Exponential growth
appears in so many circumstances that it is worth dramatizing its consequences. If a = 10, then in each second x gets 10 times larger, and after 100 seconds it is 10100 larger. With this type of
exponential growth, an error smaller than the nucleus of a hydrogen atom would be larger than the diameter of the known universe in less than 100 seconds. Of course, no physical system could have
this as a reasonable model for long time periods. The point is that linear systems can exhibit very extreme sensitivity to initial conditions because of exponential growth. Of course, STIC is a
matter of degree. The quantity ln(a) is one measure of the degree of STIC and is called the Lyapunov exponent. Suppose we modify our scalar linear system slightly to make it the nonlinear system x
(t+1) = 10x(t) mod10 and restrict the state to the interval [0,10]. This system can be thought of as taking the decimal expansion of x(t) and shifting the
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force decimal point to the right and then truncating the digit to the left of the units place. For example,
if x(0) = p = 3.141592 ..., then x(1) = 1.41592 ... and x(2) = 4.1592 ... and so on. This still has, in the small, the same exponential growth as the linear system, but its orbits stay bounded. If x
(0) is rational, then the x(t) will be periodic, and thus there are a countable number of periodic orbits (arbitrarily long periods). If x(0) is irrational, then the orbit will stay irrational and
not be periodic, but it will appear exactly as random and irregular as the irrational initial condition. As is well-known, this system exhibits deterministic chaos. The Lyapunov exponent can also be
generalized to nonlinear systems, and in this case would still be ln(a). The several alternative mathematical definitions of chaos are all beyond the scope of this paper, but the essential features
of chaotic systems are sensitivity to initial conditions (STIC), periodic orbits with arbitrarily long periods, and an uncountable set of bounded nonperiodic (and apparently random) orbits. The STIC
property and the large number of periodic orbits can occur in linear systems. But the “arbitrarily long periods” and “bounded, nonperiodic, apparently random ” features require some nonlinearity.
Chaos has received much attention in the popular press, which often confuses nonlinearity and sensitivity to initial conditions in suggesting that the former in some way causes the latter, when in
fact both are independent and necessary but not sufficient to create chaos. The formal mathematical definitions of chaos involve infinite time horizon orbits, so none of our examples so far would be,
strictly speaking, chaotic. A simple way to get a system that is closer in spirit to chaos would be to put our coin in a box and then shake the box with some periodic motion. Even though the box had
regular motion, under many circumstances the coin's motion in bouncing around the box would appear random and irregular. A simple model with linear dynamics between collisions and a linear model for
the collisions with the box would almost certainly be chaotic, although even this simple system is too complicated to prove the existence of chaos rigorously and it must be suggested via simulation.
Very few dynamical systems have been proved chaotic, and most models of physical systems that appear to exhibit chaos are only suggested to be so by simulation. One-degree-of-freedom models of a ball
in a cylinder with a closed top and a periodically moving piston have been proved chaotic. The ball typically bounces between the piston and the other end wall of the cylinder with the impact times
being random, even though the dynamics are purely deterministic, and even piecewise linear. To get a sense of the notion of bifurcation in dynamical systems, consider the following experiment. Drop a
quarter in as close to a horizontal position and with as little initial velocity as possible. It will drop nearly straight down, and the air will have little effect at the speeds the coin will attain
while it bounces around the floor. Now take a quarter-size piece of paper and repeat the experiment. The paper will begin fluttering rapidly and fall toward the floor at a large angle, landing far
away from where a real quarter would have first hit the floor. This is
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force an example of a bifurcation, where a seemingly small change in properties creates a dramatic change in
behavior. The heavy coin will reliably and predictably hit the floor beneath where it is dropped (at which point subsequent collisions may make what follows it quite unpredictable), whereas the paper
coin will spin off in any direction and land far away, but then quickly settle down without bouncing. Thus one exhibits STIC, while the other does not. A simple variant on this experiment illustrates
a bifurcation more directly. Make two photocopies of the diagram in Figure B.2 (or just fold pieces of paper as follows), and cut out the squares along the solid line. The unfolded paper will flutter
erratically when dropped, exhibiting STIC. Next, take one of the papers and fold it along one of the dashed lines to create a rectangularly shaped object. Turn the object so that the long side is
vertical. Then make two triangular folds from the top left and bottom left corners along the dotted lines to produce a small funnel-shaped object. If this is dropped it will quickly settle into a
nice steady fall at a terminal velocity with the point down. This is known as a relative equilibrium in that all the state variables are constant, except the vertical position, which is decreasing
linearly. It is locally stable since small perturbations keep the trajectories close, and is also globally attracting in the sense that all initial conditions eventually lead to this steady falling.
If the folds are then smoothed out by flattening the paper more back to its prefolded shape, then only when the paper is dropped very carefully will it fail to flutter. This nearly flat paper has a
relative equilibrium consisting of flat steady falling, but the basin of attraction of this equilibrium is very small. That is, the more folded the paper is, the larger the set of initial conditions
that will lead to steady falling. If the folds are sharp enough and the distance to the floor great enough, then no matter how the paper is dropped it will eventually orient itself so the point is
down, and then fall steadily. This large change in qualitative behavior as a parameter of the system is changed (in this case, the degree of folding) is the subject of bifurcation analysis FIGURE B.2
Proper folding diagram for bifurcation experiment.
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force within the theory of dynamical systems theory. In these examples, bifurcation analysis could be used
to explore why a regular coin shows STIC only after the first collision, while the paper coin shows it only up to hitting the floor, as well as why the dynamics of the folded paper change with the
degree of folding. Of course, bifurcation analysis applies to mathematical models, and developing such models for these examples is not trivial. To develop models that reproduce the qualitative
behavior we see in these simple experiments requires advanced undergraduate level aerodynamics. These models will necessarily be nonlinear if they are to reproduce the fluttering motion, as this
requires a nontrivial nonlinear model for the fluids. Bifurcation is related to chaos in that bifurcation analysis has often been an effective tool to study how complex systems transition from
regular behavior to chaotic behavior. While chaos per se may be overrated, the underlying concepts of sensitivity to initial conditions and bifurcation, and more generally the role of nonlinear
phenomena, are critical to the understanding of complex systems. The bottom line is as follows: We can make models from components that are simple, predictable, deterministic, symmetric, and
homogeneous, and yet produce behavior that is complex, unpredictable, chaotic, asymmetric, and heterogeneous. Of course, in engineering design we want to take components that may be complex,
unpredictable, chaotic, asymmetric, and heterogeneous and interconnect them to produce simple, reliable, predictable behavior. We believe that the deeper ideas of dynamical systems will be important
ingredients in this effort. Complexity It is tempting to view complexity in this context as something that arises in a mystical way between complete order (that the coin will come to rest) and
complete randomness (heads or tails) and to settle on chaotic systems as prototypically complex. We prefer to view complexity in a different way. To make reliable predictions about, say, the final
horizontal resting place, the distribution of horizontal resting positions, or the distribution of trajectories, we would need elaborate models about the experiment and measurements of properties of
the mechanism, the coin, and the floor. We might also improve our prediction of, say, the horizontal resting location if we had a measurement of the positions and velocities of the coin at some
instant after being tossed. This is because our suspicion would be that the greatest source of uncertainty is due to the tossing mechanism, and the uncertainty created by the air and the collisions
with the floor will be less critical, but this would also have to be checked. The quality of the measurement would obviously greatly affect the quality of any resulting prediction, of course.
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force To produce a model that reliably predicted, say, the distribution of the trajectories could be an
enormous undertaking, even for such a simple experiment. We would need to figure out the distributions of initial conditions imparted on the coin by the tossing mechanism, the dynamics of the
trajectories of the coin in flight, and the dynamics of the collisions. The dynamics of the coin in the air is linear if the fluid/coin interaction is ignored or if a naive model of the fluid is
assumed. If the coin is light, and perhaps flexible, then such assumptions may allow for too much uncertainty, and a nonlinear model with dynamics of the coin/ fluid interaction may be necessary
(imagine a “coin” made from thin paper, or replace air by water as the fluid). If the coin flexibility interacts with the fluid sufficiently, we could quickly challenge the state of the art in
computational fluid dynamics. The collisions with the floor are also tricky, as they involve not only the elastic properties of the coin and floor, but the friction as well. This now takes us into
the domain of friction modeling, and we could again soon be challenging the state of the art. Even for this simple experiment, if we want to describe detailed behavior we end up with nonlinear models
with complex dynamics and the physics of the underlying phenomena is studied in separate domains. It will be difficult to connect the models of the various phenomena, such as fluid/coin interaction,
and the interacting of elasticity of the floor and coin with frictional forces. It is the latter feature that we refer to as heterogeneity. Heterogeneity is mild in this example since the system is
purely mechanical, and the collisions with the floor are relatively simple. Our view of complexity, then, is that it arises as a direct consequence of the introduction of dynamics, nonlinearities,
heterogeneity, and interconnectedness intended to reduce the uncertainty in our models so that reliable predictions can be made about some specific behavior of our system (or its unpredictability can
be reliably confirmed in some specific sense, which amounts to the same thing). Complexity is not an intrinsic property of the system, or even of the question we are asking, but in addition is a
function of the models we choose to use. We can see this in the coin tossing example, but a more thorough understanding of complexity will require the richer examples studied in the rest of this
paper. While this view of complexity has the seemingly unappealing feature of being entirely in the eye of the beholder, we believe this to be unavoidable and indeed desirable: Complexity cannot be
separated from our viewpoint. Uncertainty Modeling and Management Up to this point, we have been rather vague about just what is meant by uncertainty, predictability, and complexity, but we can now
give some more details. For our coin toss experiment, we would expect that repeated tosses would produce rather different trajectories, even when we set up the tossing mechanism identically each time
to the extent we can measure. There would
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force presumably be factors beyond our control and beyond our measurement capability. Thus any model of the
system that used only the knowledge available to us from what we could measure would be intrinsically limited in its ability to predict the exact trajectory by the inherent nonrepeatability of the
experiment. The best we could hope for in a model would be to reliably predict the possible trajectories in some way, either as a set of possible trajectories or in terms of some probability
distribution. Thus we ideally would like to explicitly represent this uncertainty in our model. Note that the uncertainty is in our model (and the data that goes with it). It is we—not nature—who are
uncertain about each trajectory. 1 We now describe informally the mechanisms by which we would introduce uncertainty into our models. Parametric Uncertainty A special and important form of
uncertainty is parametric uncertainty, which arises in even the simplest models such as attempting to predict the detailed trajectory of a coin. Here the “parameters” include the coin's initial
conditions and moments of inertia, and the floor's elasticity and friction. Parameters are associated with mechanisms that are modeled in detail but have highly structured uncertainty. Roughly
speaking, all of the “inputs” to a simulation model are parameters in the sense we use the term here. 2 How do we deal with parametric uncertainty (see also Appendix D )? Average case. If only
average or typical behavior is of interest, this can be easily evaluated with a modest number of repeated Monte Carlo simulations with random initial conditions. In this case the presence of
parametric uncertainty adds little difficulty beyond the cost of a single simulation. Also, in the average case the number of parameters does not make much difference, as estimates of probability
distributions of outcomes do not depend on the number of parameters. Linear models. If the parameters enter linearly in the model, the resulting uncertainty is often easy to analyze. To be sure, we
can have extreme sensitivity to initial conditions, but the consequences are easily understood. Consider the linear dependence of the velocity and position of the first floor collision as a function
of the initial velocities and positions of the coin. A set in the initial 1 Except in our discussion of VLSI later in this appendix, we ignore quantum mechanics and the intrinsically probabilistic
behaviors associated with it. Quantum effects are only very rarely significant for the systems of interest here. 2 Some workers distinguish between “parameters” that can be changed interactively at
run time, or in the course of a run, and “fixed data,” that can be changed only by recompiling the database. Both are parameters for the purposes of this paper.
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force Smart Weapons and Airbags In smart weapons, sensors, actuators, and computers are added to counter
uncertainties in atmospheric conditions, release conditions, and target movement. This yields reduced sensitivity to uncertainties in the environment, but at the price of increased sensitivity to a
large number of new components. If a sensor or actuator component fails, the weapon may actually have much worse accuracy than a dumb weapon. If we are careful in our design, we can use this shift in
vulnerability from uncertainty in the environment to uncertainty in our components to our great advantage by making sure that our critical components are sufficiently reliable. Interestingly, it
could be argued that the most successful smart weapons so far have been the simplest, for example, Sidewinder and laserguided bombs. Automobile airbags also reduce vulnerability to uncertainties in
the environment. With an airbag you are safer in a high-speed collision with, say, a drunk driver who has crossed into your lane. Since you have no direct control of the other driver's behavior, an
airbag is one of the most cost-effective control strategies you can take. Unfortunately, there is again increased vulnerability to component failures. Even without component failures, airbags can
make certain circumstances more dangerous. For example, a low-speed collision may cause the air bag to deploy even though without the airbag there would be no danger of injury. Thus one could be
injured by the airbag itself under normal operation even when the system functions properly. This is particularly serious with small passengers, who may be in more danger with an airbag than without.
Overall there is a substantial net reduction in fatalities, but increased danger of injury and death in certain circumstances for all people, and possibly a net increase in danger to smaller people.
The awareness of the danger of airbags to children and small adults has provoked a flurry of research to make more advanced and more complex airbags. Proposed schemes include making the airbag
deployment more adaptable to individual differences in size and body position by using infrared and ultrasonic sensors, together with weight sensors and capacitance sensors, which detect water in
human bodies. Unfortunately, it is possible to fool these sensors as bags of groceries with a hot pizza sitting on a wet towel could presumably be mistaken for a person. Lower-technology solutions
include simply setting the threshold for airbag deployment higher so they go off less frequently in slower-speed collisions. All these solutions again highlight that the design is driven by
uncertainty management, and complexity is introduced as a by-product. What these two examples illustrate is a kind of conservation principle that is at work in complex systems. Indeed, as we will
discuss later, control theory has several such conservation principles that are critical to understanding complex
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force systems. Informally, when we introduce new components to reduce the effects of uncertainty in the
environment, we inevitably create increased vulnerability either to these new components, or to other uncertainties in the environment. Since we control the design, if we are careful we can use this
tradeoff to our advantage and shift our vulnerability from things that are more uncertain to things that are less, but explicit models of uncertainty are critical in achieving this. Unfortunately,
with increasing complexity, evaluating these tradeoffs can be conceptually and computationally overwhelming. The earlier section on software engineering discussed how large software development
projects require a highly structured approach throughout, since interconnection management dominates component design. While this is now and always will be a challenging domain, it is still
relatively homogeneous domain with limited uncertainty. Complex systems engineering has all of the challenges of software engineering plus heterogeneity (hardware and software plus chemical,
electrical, mechanical, fluid, communications, and so on) and greater uncertainty (in environment and in system components). Complex systems remain even more poorly understood than large software
systems. Complex systems are poorly understood in part simply because nonlinear, heterogeneous, interconnected, complex dynamical systems are intrinsically difficult to model and understand. But more
importantly, the role of uncertainty is critical, but very poorly understood. Furthermore, scaling of problem size can make the interaction of these issues overwhelming. As we will see, control
theory addresses uncertainty management explicitly, but from a very narrow perspective. A deeper understanding of complex systems is emerging, but in separate and fragmented technical disciplines.
Finally, there is the “referee effect.” The referee effect comes from the observation that we notice referees only when they do a bad job. Similarly, we notice the details of our watches,
televisions, phone systems, cars, planes, networks, and nuclear reactors only when they fail to provide reliable operation and shield us from the world's uncertainties. Basically, the product of a
superior design process makes itself virtually invisible. Even when the design is flawed, it may appear to the user that the failure was due to some component, rather than an underlying design
process. This is true in all the examples of failures above. Success or failure of components, including computer hardware and software, is relatively easily understood. The role of the system design
process itself, deciding which components to use and how to interconnect them, remains a mystery outside of a narrow technical community. Thus complexity in engineering systems is very much in the
eye of the beholder. A design engineer may deliberately introduce great complexity specifically for the purpose of providing the end user with an apparently simple and reliable system. The apparent
complexity depends on the viewpoint, and traditionally the only global viewpoint is that of the control engineer.
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force LESSONS FROM CONTROLS Increasingly complex systems rely on advanced control systems, from cheap, fast
computer disk drives to fly-by-wire aircraft to automobiles, integrated chemical production complexes, semiconductor manufacturing systems, and manned and unmanned space systems. Yet, ironically,
control engineering and theory remain poorly understood outside of a narrow technical community. Traditionally, control engineers have been responsible for system integration because the control
engineer adds the last component to a complex system, and does systemwide uncertainty management. Generally speaking, however, control theoreticians generally do not support this process. The
situation is changing dramatically, and the trend is to more integration of system design and control design, but we need to accelerate this trend, and control theorists must expand their vision and
make greater contact with other disciplines. Although control theory by itself offers only a piece of a potential foundation for a theory of VE, it provides a very important complement to dynamical
systems and computer science because uncertainty management is the central issue in automatic control systems. The experience and successes and failures of control theory provide important technical
foundation and additional insight into the potential role of theory in complex systems. Ironically, until the last 10 years, control theory and practical control engineering have had a very distant
relationship. The old story was that since controls were the most mathematical part of engineering it should not be surprising that it simply took decades for theory to get from academia to practice.
While this certainly has some truth, another view is that much of the theory was basically irrelevant, and the reason for this irrelevance was inadequate treatment of uncertainty. Tremendous progress
has occurred in just the last decade in developing a mathematical theory of analysis of uncertain systems in the subfield of robust control. The new tools of structured uncertainty, integral
quadratic constraints, linear matrix inequalities, operator theoretic methods, and so on, are well beyond the scope of this appendix, but a few observations can be made. The rate of transition from
theory to practice has increased dramatically, and ironically, control theorists are doing theory that is both more mathematical and more relevant. Another important factor is that they are using
modern software tools to get their theory into CAD design packages that are commercially available. Thus theory is now routinely used in industry before it has had time to get through the review and
journal publication process. The former can take months, while the latter still takes years. One of the most important messages from control theory is that there are fundamental conservation laws
associated with uncertainty management in complex, interconnected systems. The informal notion suggested by the smart weapon and airbag examples that vulnerability to uncertainty could not be
absolutely reduced but could only be moved around has theoretical expression in the math-
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force ematics of control theory. There are conservation laws where the “conserved quantities” are related to
net system-level robustness with respect to component and environmental uncertainty. Interestingly, some of these conservation laws (e.g., Bode's integral formula) are based on results that are up to
50 years old, although they are getting modern extensions. They do require upper division undergraduate mathematics to express, however, and are beyond the scope of this review. Like energy
conservation, they limit the performance of interconnected systems, but with proper understanding can be manipulated to our advantage. Also, like energy conservation, attempts to violate them are
constantly being attempted, often with catastrophic results. While control theory must play a central role in a theory of VE, current control theory has many inadequacies that must be addressed in
this broader context. The first and most obvious is that control theorists take a very limited view of system interconnection, assuming that there is a fixed “plant” with a well-defined performance
objective and a controller with adequate sensors, actuators, and computation to achieve the performance. The control design then amounts to solving for the “control laws” that yield the desired
performance. This view of control is no longer relevant to even today's design environment where the systemwide control engineer's view of performance is needed at the earliest design stages. As
cost-effective uncertainty management correctly takes its place as the dominant design issue, control engineers are forced to play a broader role, and control theory must catch up just to address the
current needs, let alone the expanded needs of future VE. Another weakness of control theory is that it tends to treat uncertainty and nonlinearity completely separately. This has traditionally been
a remarkably effective strategy. To illustrate this, consider the problem of reentry of the Shuttle orbiter. Viewed as a whole, the dynamics are extremely nonlinear, and there are substantial
uncertainties. The strategy has traditionally been to use a simplified nonlinear model with no uncertainty to develop an idealized global trajectory for reentry, and then use a local linearized model
to design a feedback controller to keep the vehicle close to the trajectory in the presence of uncertainty. The sources of uncertainty included atmospheric disturbances, unmodeled vehicle dynamics
due primarily to unsteady aerodynamic and structural effects, parametric uncertainty in the mass distribution and aerodynamic coefficients, and nonlinearities. The nonlinearities include both those
that were in the simplified global model, which have been eliminated through linearization, and also higher-order nonlinearities that were not represented even in the global model. Both are treated
as sources of uncertainty in the linearized model. This strategy works well because the idealized trajectory creates a relative equilibrium about which a linearization is quite reasonable, and the
effects of nonlinearities do not dominate the local behavior about the trajectories. It is easy to imagine many circumstances where this clean separation is not effective, because there is so much
uncertainty that either the idealized trajectory is not meaningful or the local
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force behavior cannot be kept close enough to the idealized trajectory to allow the nonlinearities to be
treated as uncertainties. Control theory also has other weaknesses that must be overcome. While mathematical sophistication is a strength of control theorists, they must overcome the natural distance
this tends to create with other engineering disciplines. This is one reason why control theory has been applied to dynamical systems and computational complexity with some early successes, but has
achieved less success in other areas. The limited connection with modeling and physics is even more troubling, as control theorists tend to view modeling as a mystical and unpleasant activity to be
performed by others, hopefully far away. ANALYSIS OF UNCERTAIN DYNAMICAL SYSTEMS While even a superficial exposition of the current state of the art in analysis of uncertain dynamical systems
requires mathematics well beyond the scope of this paper, it is possible to suggest some of the ideas and difficulties with simple drawings. Recall the interference analysis. We can think of a
three-dimensional solid component as being defined as a subset of real Euclidean 3-space. Thus, interference analysis is checking for any intersections of these subsets other than those that are
specified. We can similarly think of components in a dynamical system as being defined as subsets of all the possible time trajectories that their state and boundary conditions can take. Thus, a
circuit component can be thought of as specifying some set of currents and voltages, a mechanical component as specifying some set of velocities, positions, and forces, and so on. These sets are
potentially very complicated as they are subsets of infinite dimensional spaces of time trajectories. Differential equations can be thought of as constraints that determine the set of behaviors. An
interconnection of components is equivalent to the intersection of the subsets that describe their behaviors. For example, two circuit elements connected at their terminals each constrains the
signals between them, and an interconnection simply means that the constraints of both components are in effect. Engineering design may then be thought of as connecting components in such a way as to
produce only a certain desired set of behaviors and no others. Undesirable behaviors are analogous to undesirable interferences in three-dimensional solids, in that they involve unwanted
intersections of sets. To make this point of view more concrete, recall the fluttering paper example, and assume we use a rigid body model of the paper in a case where the folds are fairly flat. The
boundary conditions between the air and paper consist of the paper's position and orientation and their rates and the forces between the paper and the air. Both the paper and the air model put
constraints on what these variables can be, and dropping the paper in air forces both sets of constraints to hold simultaneously. One solution consistent with the constraints is steady fall-
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force ing, but there are other fluttering motions that are also possible. The challenge in complex systems
is discovering these extra solutions that may be undesirable. If components are linear with no uncertainty, then their sets of behaviors are linear subspaces, and it is relatively easy to check
globally for undesirable interconnections. This would be analogous to the three-dimensional solids all being just lines and planes. Uncertain or nonlinear components are more complicated to analyze.
Very simple uncertain linear problems are NP hard, and simple nonlinear problems are undecidable. The strategy that has been exploited very successfully in robust control theory is a natural
generalization of the bounding box idea to this setting of components of dynamical systems. Here the bounding boxes are in infinite dimensional spaces, and checking for their intersection requires
sophisticated mathematical and computational machinery. So far, this is the only known method that successfully handles both parametric uncertainty and unmodeled dynamics and overcomes to some extent
the intractability of these problems. While the generalized bounding box methods (they are not called this in robust control theory, but are referred to with a variety of other, more technical terms)
have been successful in control systems analysis and design (they are widely used throughout the world), their application outside of controls has been limited. What is particularly needed now is to
put these methods more in the context of component interconnections, not just the plant-controller paradigm of standard control theory. Also, there remains a great need for methods to analyze
uncertainty and nonlinearity together in some nontrivial way. Developing bifurcation analysis tools that allow for uncertainty would be a good initial step, and research in this direction is under
way. In robustness analysis of uncertain systems, it is usually much easier to find a failure if one exists than to guarantee that none exist when that is the case. This inherent asymmetry is present
in three-dimensional interference analysis and software design and will be a major feature of VE. We must try to overcome this as much as possible, but recognize that a substantial asymmetry is
unavoidable. CASE STUDIES REVISITED While we are far from having an integrated theory of VE, we can gather the various ideas we have discussed from dynamical systems, computer science, and control
theory and briefly revisit the case studies. The success stories in the 777 solid modeling, in CFD, and in VLSI are encouraging, but extrapolation to the broader VE enterprise must be done with
caution. Each success depends on very special features of the problem area, and there are substantial challenges within even these limited domains to extending the existing tools. None of these areas
has faced up to uncertainty management in heterogeneous systems, though all are being increasingly faced with exactly that issue. Among the failures considered, the Estonia Ferry disaster is the one
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force FIGURE B.11 Other foundations for VE theory. likely to have benefited from the use of
three-dimensional solid CAD tools such as were used for the 777. The Titanic, Tacoma Narrows Bridge, subsynchronous resonance, and Ariane 5 failures can all be traced to specific unmodeled dynamics
whose analysis, had it been considered, was well within the capability available at the time. Thus it is easy after the fact to view these as simple problems with simple solutions, but the deeper
question is whether a disciplined and systematic approach to VE would help avoid such mishaps. The answer is not obvious because each of these failures involved heterogeneous interactions and
dynamics that are unlike the success stories. The telephone and power system failures and the Denver airport baggage handling system fiasco are more clearly examples where uncertainty management in
complex systems went awry. These highly interconnected and automated systems are intended to improve performance and robustness and at the same time reduce cost, and they generally do so with respect
to the uncertainties and objectives that are considered primary in these systems. Unfortunately, the very complexity introduced to handle uncertainties in some aspects of the system's environment
lead to vulnerabilities elsewhere. It is tempting to imagine that a design environment that stressed uncertainty management and explicit representation of uncertainty across discipline boundaries
would have encouraged design engineers to be alerted in advance to the potential for these failures, but we will have to wait until we have a better picture of exactly what such an environment would
consist of. The challenge will be to avoid believing too much in either virtual worlds, or our past experiences with real ones, as both can mislead us about future realities. Figure B.11 is intended
to convey the way in which some existing communities are addressing the various aspects of VE models: uncertainty, interconnec-
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force tion, dynamics, nonlinearity, and complexity. It is intended to suggest that all the issues are being
addressed, but in a fragmented way. We touched briefly and informally on all these topics except statistics. CASE here means computer-aided software engineering, and complexity theory is
computational complexity in theoretical computer science. There are other areas that should contribute to a VE theory, such as nonequilibrium physics, all aspects of scientific computing and
numerical methods, optimization, and discrete-event and hybrid systems. We have argued that while sophisticated hardware and software infrastructures are needed to form the substrate on which robust
VE tools can be implemented, the infrastructure aspects of M&S are already emphasized to a high degree, and the issues focused on in this appendix need comparable attention. In doing so we have
perhaps paid inadequate attention to the need for new and novel software and user-interface paradigms that would address unique needs of VE. We regret we have had neither the time nor the expertise
to explore this further. An aspect of computing that we will briefly discuss, since it is so ubiquitous, is so-called “soft computing.” SOFT AND HARD COMPUTING Soft computing is usually taken to
include fuzzy logic, neural-net computing, genetic algorithms, and so on, in contrast to the “hard computing techniques” of, say, numerical analysis, mathematical programming, structured and
object-oriented programming, probability theory, differential equations, “hard” AI, and so on. According to its proponents, such as Lotfi Zadeh (see, e.g., Zadeh, 1994), “soft computing will
revolutionize computing.” While it is certainly beyond the scope of this appendix to give a thorough discussion of this area, we can provide at least one perspective. There are two standard arguments
made for soft computing. The first is that many problems do not lend themselves to hard computing solutions because the systems under consideration are dominated by what we would traditionally call
“soft” issues, like economic and societal systems, and anything involving human decision making, common-sense reasoning, and natural language. Hard computing and hard AI have failed to achieve
long-standing goals of making human-computer interactions more human-friendly precisely because they have failed to appreciate soft computing approaches. Soft computing, especially fuzzy logic,
allows programming with natural language. Indeed, Zadeh has characterized fuzzy logic as “computing with words.” The hope is that if you know a solution and you can simply and clearly articulate it
in words, then you can program directly without translation to some programming language. There is substantial controversy regarding the degree to which fuzzy logic solves this problem, but the goal
is certainly admirable, and there are cases in which fuzzy logic has been successful. On the other hand, many problems do not fit the fuzzy paradigm at all. In some cases, we can do a particular task
well, but we cannot clearly articulate how we do it. Examples include chess
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force playing, vision, speech recognition, and almost all motor skills, such as those involved in sports or
physical labor. Many of the tasks in which humans greatly outperform machines are also ones in which lower animals outperform humans. While biological systems do provide useful inspirations for
machine automation, only humans typically articulate in words a detailed description of their own behavior. Perhaps more importantly, we often need the methods of mathematics, science, and
engineering to help us find a solution. And in still other cases, using such methods permits us to find a better and more robust solution than possible with the simpler forms of fuzzy logic that have
such intuitive appeal. By and large, we believe that the difficult problems of the VE enterprise are problems in which our naive intuition is likely to be dangerously wrong. In such cases, we should
be cautious of seductive shortcuts. The second argument for soft computing, and again fuzzy logic in particular, is that they more naturally exploit the tolerance for imprecision, uncertainty,
partial truth, and approximation that characterize human reasoning. In the context of VE, it is useful to distinguish two kinds of uncertainty: The imprecision and ambiguities in our natural
language, which parallels our sometimes limited ability to precisely specify what we want a system to do. The uncertainty in our models of physical systems, as has been emphasized in this appendix.
While we have emphasized the latter, in the early stages in design of engineering systems the former can often dominate. If VE is successful in dealing effectively with type 2 uncertainty, then type
1 will be increasingly critical to overall system performance. It is here where fuzzy logic and soft computing hold the greatest promise. Advocates argue, though, that fuzzy logic is also ideally
suited to handle uncertainty of type 2 as well. We disagree. Fuzzy logic is intended to capture properties of human language and simply does not address in any meaningful way many of the kinds of
uncertainty we have discussed in this appendix and how uncertainty propagates with dynamics and interconnection. And, if one tried to use fuzzy logic to do so, it would quickly lose its comfortable
“natural-language features.” Fuzzy logic may be useful in representing human decision making in a simulation environment, but we have not considered that issue here. It may also be useful in a
variety of engineering contexts that are ultimately much simpler than those in VE. Similar remarks apply to genetic algorithms. Optimization, and particular global search techniques, will play a
critical role in present and future VE systems. Indeed, our proto-VE examples of aircraft design with CFD, VLSI, and CAD of the type used in the Boeing 777 are domains where global optimization is
either already playing a huge role (VLSI) or a growing role. Statistical methods, advanced optimization theory, and even theoretical computer science (decidability, NP-hardness) are creating a
foundation for this subject, both in academic research
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force and in industrial application. From this point of view, genetic algorithms are a very minor piece of
the picture. Their popularity is due primarily to the ease with which people can use them (people who include not only deeply capable scientists, but also more ordinary people with no or little
expertise in statistics, optimization, or complexity theory). Genetic algorithms are often mentioned as a moderately effective way to do global search, especially on highly unstructured problems.
Based on our experience, which tends to be in hard areas of engineering rather than, say, softer problems of military combat modeling, we remain skeptical. Despite the strong market demands for
commercial software to assist in global search in such problems as VLSI design and analysis of uncertain dynamical systems, genetic algorithms have had almost no impact relative to more mathematical
approaches such as branch and bound, and problem-specific heuristics. This is not to say, however, that genetic algorithms have no role to play in VE. The conceptual simplicity of the approach means
that it can be used by domain experts who may not be familiar with more sophisticated optimization ideas or may not want to invest the time to program a better algorithm. Genetic algorithms can be
used to explore global optimization in a new domain, and if it is successful, then there is clear encouragement for further investigation. If not, little investment has been made. “COMPLEX ADAPTIVE
SYSTEMS” AND SOFT COMPLEXITY A term that often arises in conjunction with soft computing is “complex adaptive systems,” which can be considered to be a research area in its own right or a special
case of what we have discussed here under the rubric of VE. It is not, however, a “new science,” nor is it a substitute for the work we have described. Instead, what it has accomplished so far is to
provide a set of metaphors for taking new looks at difficult problems involving complex systems. While the celebration of chaos, nonlinearity, and emergent phenomena has perhaps been overdone, and
while popularizers have sometimes given them a nearly mystical flavor that seems bizarre to those of us working in the VE domain that includes control, dynamical systems, nonequilibrium physics, and
complexity theory, the metaphors and popularized discussions have greatly broadened the audience and are helping to open minds regarding the value of experimenting with methods quite different from
the traditional ones. In this sense, work on complex adaptive systems is helpful to the VE enterprise. The concern, of course, is that the simplifications of popularization— which sometimes include
exaggerated promises and claims—will discredit those associated with complexity research when those exaggerations are better recognized. This is a common problem in science. For example, there were
backlashes against artificial intelligence and expert systems because the more exaggerated claims were finally recognized as such. The backlashes were sometimes quite unfortunate, because the
research in these areas has had profound effects. In any
OCR for page 116
Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force case, as we have indicated from the very beginning of this appendix, dynamical systems concepts will
necessarily be at the very heart of any useful theory of VE. It is important that VE researchers develop the kind of nonlinear intuition that the subject encourages and also build on existing methods
for analysis of nonlinear systems. Both the concept of chaos —that apparent complexity and randomness can arise from deep simplicity —and the concept of emergence—that apparent order and simplicity
can arise from deep complexity —are of great importance. On the other hand, they are empty without the more technical concepts such as phase space, bifurcation, strange attractors, Poincare maps,
Lyapunov exponents, Hamiltonians, Euler-Lagrange equations, symplectic maps, integrability, self-organized criticality, ergodicity, and entropy. Unfortunately, there is no easy access to this deeper
work. To end this discussion, we might tentatively propose a notion of “soft complexity” analogous to, and including, “soft computing,” in the same way that we might propose a notion of “hard
complexity” that is analogous to and includes “hard computing.” The flavor of the distinction would be as follows: Soft complexity equals emergence, fractals, artificial life, complex adaptive
systems, edge of chaos, control of chaos, . . . plus soft computing, fuzzy logic, neural nets, and genetic algorithms. Hard complexity equals information theory, algorithmic complexity, computational
complexity, dynamical systems, control theory, CASE/CAD, nonequilibrium physics, statistics, numerical analysis, and so on. This appendix has clearly advocated the relative importance of “hard” over
“soft” complexity in VE. Some of the more extreme advocates for soft complexity claim it will revolutionize analysis and design of complex systems and obviate the need for the “structured and
mathematical approach” advocated here. While we obviously disagree with this assessment, it is likely that soft complexity can help make concepts of hard complexity accessible, albeit in a limited
way, to a nontechnical audience. It is also likely that the soft complexity concepts will be quite valuable in communication and, probably, for certain types of initial exploration of concepts. In
any case, popular expositions of soft complexity will continue to emerge and will have effects on decisions about investment. Our hope is that papers such as the current appendix will help maintain
perspectives. 9 9 For differing perspectives, see a selection of papers by users of fuzzy logic, including engineers, in Proceedings of the IEEE, March 1995. See also the collections of Zadeh's
papers (Yager et al., 1987). And, in this volume, see Appendix G for examples of fuzzy logic research.
|
{"url":"http://books.nap.edu/openbook.php?record_id=5869&page=116","timestamp":"2014-04-21T15:33:11Z","content_type":null,"content_length":"106385","record_id":"<urn:uuid:46e983c9-3610-42ae-9d2f-f7025eae16bb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
math urgent!
Posted by kathy on Monday, November 29, 2010 at 10:40pm.
find the values of x and y that solve te following system of equations
• math urgent! - Reiny, Monday, November 29, 2010 at 10:49pm
first one times 9 ---> 27x + 18y = - 198
2nd one times 2 ---> 10x - 18y = 50
add them : 37x = -148
x = -4
I will let you finish it.
• math urgent! - Jen, Monday, November 29, 2010 at 10:52pm
3x+2y=-22 (1)
5x-9y=25 (2)
Multiply (1) by 9 and (2) by 2 to get the y's to cancel
27x+18y=-198 (3)
10x-18y=50 (4)
Add (3) and (4) together
Divide both sides by 37
Substitute that in either original equation (1) or (2) to solve for y. Check to make sure it works for the other equation, as well.
Related Questions
math - find the values of x and y that solve the following system of equations ...
math - find the values ofx and y that solve the following system of equations -...
math - find the values of x and y that solve the system of equations -3 x- 8y=19...
Discrete Math - I am really stuck on these problems. I've worked a lot of them ...
math - Solve the following linear system of equations. 3x-2y+z=2 x-y+z=2 5x+10y-...
maths - solve the following simultaneous equation: 5x+2y=29 x-y=-4 2x-y=-1 3x-y=...
College Algebra - Solve the system of equations using matrices. If system has no...
Geometry :( - this maths problem is wrecking my head.... 1.find the points of ...
math - solve the system of equation 5x+2y+z=-25 5x-3y-z=-23 3x+y+2z=-9
Math - Solve the following system: 4x-y+z=6 2x+y+2z=3 3x-2y+z=3 Find the values ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1291088420","timestamp":"2014-04-21T08:10:15Z","content_type":null,"content_length":"8879","record_id":"<urn:uuid:e7653457-b209-4849-8185-9b2baf216acc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Golden Ratio
Powers of negative numbers are not well-defined in the real number system.
Basically, we can make sense of [itex](-1)^2, ~(-1)^3[/itex] and others, but that is only for integer exponents!
Once we come to non-integer exponents, then things stop being defined. Things like [itex](-1)^{1/2}[/itex] or [itex](-1)^\pi[/itex] are not defined anymore. This is sharp contrast with powers of
positive numbers!
Of course, it is possible to extend the real number system to define expressions such as the above. This extension is called the complex number system. Things like [itex](-1)^{1/2}[/itex], [itex](-1)
^\pi[/itex] or [itex](-\sqrt{5})^{\sqrt{5}}[/itex] are defined there. They are complex numbers, but not imaginary.
If you want to play around with complex numbers and powers of negative numbers, you can always check
wolfram alpha
|
{"url":"http://www.physicsforums.com/showthread.php?p=4258909","timestamp":"2014-04-17T09:55:49Z","content_type":null,"content_length":"27639","record_id":"<urn:uuid:0bd87eb9-c93c-4cb6-9da9-5f724dab53c4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Can someone see if I did this right divide and express the quotient in lowest terms.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f6ac9c4e4b014cf77c7ea0d","timestamp":"2014-04-17T06:54:13Z","content_type":null,"content_length":"183618","record_id":"<urn:uuid:46498996-28c2-4051-bd18-e909eaca90d7>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zero thinking
May 1998
"Nothing is more interesting than nothing" — or so says Ian Stewart, Professor of Mathematics at Warwick University. Many people have difficulty with the concept of zero. In fact, it has only really
been used as a number for the last 1500 years or so. Before this time it seems that zero was simply not that important. At the end of the day, a herd of no camels is not worth much.
Perhaps our ancestors were better off? Once you start using zero as a number then you can easily get into difficulty. Adding and taking away don't cause too much trouble, multiplication is
straightforward (though a little unrewarding) but division simply has to be disallowed.
In a previous issue of PASS Maths we were asked what infinity multiplied by zero was. Our answer was that infinity cannot be multiplied by anything in the usual sense of the word because it is not a
number. It's harder to explain away one divided by zero because they're both numbers; you're simply not allowed to do it.
|
{"url":"http://plus.maths.org/content/zero-thinking","timestamp":"2014-04-20T08:18:21Z","content_type":null,"content_length":"19537","record_id":"<urn:uuid:ce6879c3-fb14-4e23-b0ee-5e730ae6457c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to plot a graph with these data
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Nov 2012
Rep Power
How to plot a graph with these data
hi. i'd like to understand how to make the following graph in python, using matplotlib.
i have values for pressure, and it varies with y-axis. so, for example i have:
and so on, other sets for other x, eg:
x=2,y=2,p=5 ...
x=3,y=2,p=7 ...
So, what i need to do, is to put pressure values in a x-y graph. How i can do this? I was thinking of dot with different diameters or stuffs like that, but i don't now how to do.
You can use different diameters:
matplotlib.pyplot.scatter(x, y, s)
s is an array of sizes in units of points^2 (i.e., proportional to area). The diameter would be proportional to sqrt(s).
You can also choose the color of each point.
will give you the details.
No Profile Picture
Contributing User
Devshed Intermediate (1500 - 1999 posts)
Join Date
Feb 2004
San Francisco Bay
Rep Power
|
{"url":"http://forums.devshed.com/python-programming-11/plot-graph-data-934687.html","timestamp":"2014-04-19T12:11:15Z","content_type":null,"content_length":"46643","record_id":"<urn:uuid:01aafcee-f0f9-4fe3-a448-842c15873cc6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the significance that the Springer resolution is a moment map?
up vote 12 down vote favorite
Let $\mathcal{B}$ be the flag variety and $\mathcal{N} \subset \mathfrak{g}$ is the nilpotent cone. We know that the Springer resolution $$ \mu: T^*\mathcal{B}\rightarrow \mathcal{N} $$
is the moment map, if we identify $\mathfrak{g}$ with $\mathfrak{g}^* $ by the Killing form and consider $\mathcal{N} \subset \mathfrak{g}$ as a subset of $\mathfrak{g}^*$.
As far as I know, the geometric construction of Weyl group and $U(sl_n)$ does not involve moment map or even symplectic geometry, as in the paper "Geometric Methods in Representation Theory of Hecke
Algebras and Quantum Groups"
My question is: what is the consequence of the fact that the Springer resolution is a moment map?
add comment
2 Answers
active oldest votes
One reason to emphasize the Springer resolution's role as a moment map is that it is the semiclassical shadow of Beilinson-Bernstein localization. More precisely passing to functions, the
moment map description asserts that the Springer map is describing the Hamiltonian functions on the cotangent to the flag variety which generate the action of the Lie algebra. We may now
quantize the cotangent bundle $T^* G/B$ to the ring of differential operators on $G/B$, and likewise quantize the dual space $g^*$ to the Lie algebra to the universal enveloping algebra
up vote $Ug$, so that the moment map describes the map from $Ug$ to global differential operators on the flag variety. What's truly significant about the Springer map (it's a birational, proper,
11 down symplectic [crepant] resolution of [rational] singularities) now translates into the Beilinson-Bernstein equivalence (for generic parameters) between $Ug$-modules and (twisted) D-modules on
vote the flag variety, the cornerstone of geometric representation theory. There's now an entire subject (wonderfully represented in a workshop last week in Luminy) seeking to generalize all the
accepted features of this setup to other symplectic resolutions and their quantizations, viewed as the settings for "new representation theories" (the prime examples being Hilbert schemes and other
quiver varieties).
Thank you very much, David! I am trying to rephrase what you said: Since $\mu: T^*\mathcal{B}\rightarrow \mathcal{N}$ is the moment map, by considering vector fields on $\mathcal{B}$ as
1 functions on $T^*\mathcal{B}$, we can identify the comoment map $$ \mathfrak{g}\rightarrow C^{\infty}(T^*\mathcal{B}) $$ with the infinitesimal action $$ \mathfrak{g}\rightarrow T\
mathcal{B}. $$ Then we quantize the later one and get the map from $U(\mathfrak{g})$ to global differential operators on the flag variety. Now we can construct the Beilinson-Bernstein
equivalence. Is that (roughly) what you mean? – Zhaoting Wei Jul 18 '12 at 5:01
1 exactly! Another way to think of this is that $\mu$ is a degenerate version of the inclusion of a generic (regular semisimple) coadjoint orbit into $\mathfrak g^*$ - those orbits are
affine bundles over $G/B$ (twisted cotangent bundles) and degenerate, as the eigenvalues go to zero, to the Springer resolution. – David Ben-Zvi Jul 18 '12 at 15:24
add comment
The short answer might be that this viewpoint provides an attractive alternative way to construct the Springer resolution as a special case in a broader geometric framework, following ideas
of Kostant and Souriau. I'm not at all qualified to attempt a deeper explanation of the significance of this viewpoint, but what I can do is encourage you to explore the literature beyond
what Ginzburg does in his book with Chriss or in his summary lecture notes you quote from the proceedings of the 1997 conference at U. Montreal (posted on arXiv here). In expositions it is
convenient for people to use the special linear case as the main example, but this is of course misleading as to the delicate complications in the general case which encourage the development
of multiple approaches.
In particular, during the 1980s there was important parallel work being done on several related problems by Walter Borho, Jean-Luc Brylinski, Robert MacPherson, and others. Some detailed
references occur in my attempted review of a paper by Ginzburg (as posted on MathSciNet): MR847727 (87k:17014) 17B35 Ginsburg, V. [Ginzburg, Victor], $\mathfrak{g}$-modules, Springer’s
representations and bivariant Chern classes. Adv. in Math. 61 (1986), no. 1, 1–48. [Note that the first symbol in the title was originally printed upper-case but refers to a Lie algebra.]
I guess it's legal to quote my concluding reviewer's remark: "The subject matter spills over many of the conventional dividing lines between disciplines. In their abstracts prepared for the
International Congress of Mathematicians in Berkeley (1986), the author and Borho deal with many of the same issues, but the author’s occurs in the section “Lie groups and representations”,
up vote while Borho’s occurs in the section “Algebra”. Both might equally well be placed in the section “Algebraic geometry”."
4 down
vote It's useful to look at those ICM reports (now available online at http://www.mathunion.org/ICM/) as well as Borho's incomplete lecture notes in the Canad. Math. Soc. Conf. Proc. 5 (1986),
whose interesting Part II wasn't published. The more technical research papers by Borho and Brylinski in Invent. Math. 1982 and 1985 (parts I, III with a gap between) give a clearer idea of
how they fit the pieces together. Then there is the 1989 B-B-M monograph Nilpotent Orbits, Primitive Ideals, and Characteristic Classes, Birkhauser series Progress in Mathematics, 78. The
moral of the story seems to be that more than just the isolated construction of the Springer resolution is at stake here. I realize this doesn't directly answer your question, but may only
complicate it further.
[ADDED] After taking another look at Borho's notes, my understanding is that the moment map is used initially to place the construction of the Springer resolution into an already understood
classical picture. Here the flag variety is a complete variety $X$ with a natural action of the Lie algebra (viewed as vector fields). The action induces $\mu: T^*X \rightarrow \mathfrak{g^*}
$ (identified with $\mathfrak{g}$ via the Killing form). In turn, some of the general theory allows one to see in the special case that the image of $\mu$ is precisely the nilpotent cone $\
mathcal{N}$ (using the orthogonality of the nilradical of a Borel subalgebra to that algebra under the Killing form and taking the saturation of the nilradical under the reductive group).
Similarly one sees that $\mathcal{N}$ is normal (Kostant) and that $\mu$ is a resolution of singularities.
add comment
Not the answer you're looking for? Browse other questions tagged geometric-rep-theory flag-varieties sg.symplectic-geometry moment-map or ask your own question.
|
{"url":"http://mathoverflow.net/questions/102398/what-is-the-significance-that-the-springer-resolution-is-a-moment-map","timestamp":"2014-04-17T01:37:16Z","content_type":null,"content_length":"62161","record_id":"<urn:uuid:7446a42e-ed91-41e4-a302-77bd3388ab1d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
170 pounds in kg
You asked:
170 pounds in kg
77.1107029 kilograms
the mass 77.1107029 kilograms
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
|
{"url":"http://www.evi.com/q/170_pounds_in_kg","timestamp":"2014-04-19T07:14:43Z","content_type":null,"content_length":"53846","record_id":"<urn:uuid:8936eac2-f082-4008-ac41-5478a83d5116>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modularity Aspects of Disjunctive Stable Models
T. Janhunen, E. Oikarinen, H. Tompits, and S. Woltran
Practically all programming languages allow the programmer to split a program into several modules which brings along several advantages in software development. In this paper, we are interested in
the area of answer-set programming where fully declarative and nonmonotonic languages are applied. In this context, obtaining a modular structure for programs is by no means straightforward since the
output of an entire program cannot in general be composed from the output of its components. To better understand the effects of disjunctive information on modularity we restrict the scope of
analysis to the case of disjunctive logic programs (DLPs) subject to stable-model semantics. We define the notion of a DLP-function, where a well-defined input/output interface is provided, and
establish a novel module theorem which indicates the compositionality of stable-model semantics for DLP-functions. The module theorem extends the well-known splitting-set theorem and enables the
decomposition of DLP-functions given their strongly connected components based on positive dependencies induced by rules. In this setting, it is also possible to split shared disjunctive rules among
components using a generalized shifting technique. The concept of modular equivalence is introduced for the mutual comparison of DLP-functions using a generalization of a translation-based
verification method.
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
|
{"url":"http://www.aaai.org/Library/JAIR/Vol35/jair35-019.php","timestamp":"2014-04-21T09:51:13Z","content_type":null,"content_length":"3306","record_id":"<urn:uuid:b4ebc80e-c66d-4149-ada1-c178ae565423>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Uniqueness in Inverse Electromagnetic Conductive Scattering by Penetrable and Inhomogeneous Obstacles with a Lipschitz Boundary
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 306272, 21 pages
Research Article
Uniqueness in Inverse Electromagnetic Conductive Scattering by Penetrable and Inhomogeneous Obstacles with a Lipschitz Boundary
School of Mathematics and Information Science, Yantai University, Yantai, Shandong 264005, China
Received 26 August 2012; Accepted 6 December 2012
Academic Editor: Yong Hong Wu
Copyright © 2012 Fenglong Qu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
This paper is concerned with the problem of scattering of time-harmonic electromagnetic waves by a penetrable, inhomogeneous, Lipschitz obstacle covered with a thin layer of high conductivity. The
well posedness of the direct problem is established by the variational method. The inverse problem is also considered in this paper. Under certain assumptions, a uniqueness result is obtained for
determining the shape and location of the obstacle and the corresponding surface parameter from the knowledge of the near field data, assuming that the incident fields are electric dipoles located on
a large sphere with polarization . Our results extend those in the paper by F. Hettlich (1996) to the case of inhomogeneous Lipschitz obstacles.
1. Introduction
In this paper we are interested in determining the shape and location of a penetrable, inhomogeneous, isotropic, Lipschitz obstacle surrounded by a piecewise homogeneous, isotropic medium. The
obstacle is covered with a thin layer of high conductivity. Such penetrable obstacles lead to conductive boundary conditions; for the precise mathematical description, the reader is referred to [1–3
]. In this paper, it is shown that the shape and location of the obstacle and the corresponding surface parameter are uniquely determined from a knowledge of the near field data of the scattered
electromagnetic wave at a fixed frequency. To this end, we need a well posedness result for the direct problem.
The well posedness of the Helmholtz equation for a penetrable, inhomogeneous, anisotropic medium has been studied recently in [4]. In [5], the authors provided a proof for the well posedness of the
scattering problem for a dielectric that is partially coated by a highly conductive layer in the TM case in 2007.
In the case of exterior Maxwell problem for the partially coated Lipschitz domains, the authors in [6] have established the well posedness of a unique solution by variational methods in 2004. For the
homogeneous isotropic medium problem, by means of an integral equation method, Angell and Kirsch proved the existence and uniqueness of the classical solution for Maxwell's equations with conductive
boundary conditions assuming in [2]. Variational methods for the homogeneous isotropic medium problem were proposed in [1], under the assumption that the bounded domain with boundary in the class ,
and some additional conditions on . It is also shown that the obstacle is uniquely determined by the far field patterns of all incident waves with a fixed wave number. For the inhomogeneous
anisotropic media, the well posedness of the direct problem was proved in [7].
The uniqueness result for the inverse medium scattering problem was first provided by Isakov (see [8, 9]), in which it is shown that the shape of a penetrable, inhomogeneous, isotropic medium is
uniquely determined by its far field pattern of all incident plane waves. The idea is to construct singular solutions of the boundary value problem with respect to two different scattering obstacles
with identical far field patterns. Our uniqueness proof is based on this idea. The idea of Isakov was modified by Kirsh and Kress [10] using potential theory for the impenetrable obstacle case with
Neumann boundary conditions. By the same technique, the authors in [11] proved the case of a penetrable obstacle with constant index of refraction. The use of potential theory will require strong
smoothness assumptions on the scattering object. Then D. Mitrea and M. Mitrea [12] improved the previous results to the case of Lipschitz domains. In [13], they extended Isakov's approach to the case
of a penetrable obstacle for Hemholtz equations.The uniqueness theorem of Helmholtz equations for partially coated buried obstacle problem was shown in [14, 15], assuming that the scattering fields
were known with point sources as incident fields.
Recently, uniqueness for the inverse scattering problem in a layered medium has attracted intensive studies. For the sound-soft or sound-hard obstacle case, based on Schiffer's idea, [16] proved a
uniqueness result. But their method can not be extended to other boundary conditions. In recent years, by employing the generalized mixed reciprocity relation, it was proved in [17, 18] that both the
obstacle and its physical property can be uniquely determined for different boundary conditions. For the inverse acoustic scattering by an impenetrable obstacle in a two-layered medium case, it is
shown in [19] that interface is uniquely determined from the far field pattern. Unfortunately, this method can not be extended to the electromagnetic case, but using ideas in [20], a different method
was used in [21] to establish such a uniqueness result for the electromagnetic case.
There are also some uniqueness results for partial differential equations with constant coefficients by integral equation methods. (see [22, 23]). However, integral equation methods are not well
tailored for partial differential equations having inhomogeneous coefficients of the highest derivatives. Consequently, in [24], the author brought together the variational approach and the idea from
[8, 9] to provide a uniqueness proof of Helmholtz equations with inhomogeneous coefficients for a penetrable, anisotropic obstacle. Their method depends on a regularity theorem for the direct problem
and the well posedness of the interior transmission problem related to the direct problem. This idea has been extended to the case of electromagnetic scattering problem for anisotropic media in [25].
The outline of this paper is as follows. In Section 2, besides the formulation of the direct scattering problem in a penetrable, inhomogeneous, Lipschitz domain, we also provide a proof of the well
posedness for the direct problem by using a variational method. The uniqueness result for the inverse problem will be shown in Section 3.
2. The Direct Problem
Let be a bounded penetrable, inhomogeneous, isotropic domain with a Lipschitz boundary denoted by and covered with a thin layer of high conductivity. Assume that the domain is imbedded in a
homogeneous background medium. Define and with being the wave number, where and are the refractive index of the domain and the background medium, respectively. Assume that with for all and is a
complex constant with . Assume further that with is a complex-valued function describing the surface impedance of the coating. The incident field is considered to be an electric dipole located at on
a large sphere with polarization given by Denote by the free space Green tensor of the background medium and define which satisfies where is the Dirac delta function. Note that can be written as
where is the scattered electric field due to the background medium and the electric dipole .
In order to formulate precisely the scattering problem, recall the following Sobolev spaces: where denotes the exterior unit normal to . If is unbounded, we denote by the space of functions for any
compact set ⊂⊂. Introduce the space where . Then the scattering problem can be formulated as follows. Given , find the field and the scattered field such that and the scattered field is required to
satisfy the Silver-Müller radiation condition uniformly in , where .
We first have the following uniqueness result for the above scattering problem.
Theorem 2.1. The scattering problem (2.6)–(2.9) has at most one solution.
Proof. To prove the theorem, it is enough to consider the case whence . Taking the dot product of (2.6) with over and of (2.7) with over with , respectively, and integrating by parts, we obtain by
using the conductive conditions (2.8) and (2.9) that where is the corresponding scattered magnetic field. Taking the complex conjugate of both sides of (2.11) and using the fact that , and are
nonnegative gives An application of the Rellich lemma yields that in (see [26, Theorem 6.10]). This, together with the unique continuation principle, implies that in . From the trace theorem, it
follows that on . Thus, taking the imaginary part of (2.11) and using the assumption that for all , we have that in .
Introduce the electric-to-magnetic Calderon operator (see [27]), which maps the electric field boundary data on the surface of a large ball to the magnetic boundary data on , where satisfies Then the
scattering problem (2.6)–(2.10) can be reformulated in the following mixed conductive boundary value problem (MOCKUP) over a bounded domain: where .
In the following, we introduce some properties of the Calderon operator that will be frequently used in the rest of this section. The basis functions for tangential fields on a sphere are the vector
spherical harmonics of order given by for and . Here, as usual, denotes the surface gradient on the surface of the unit sphere .
For given by , the operator can be defined by where and is the spherical Hankel function.
If in (2.23), we will obtain another operator . Properties of and are collected in the following lemma (for a proof see [27]).
Lemma 2.2. The operator is negative definite in the sense that for any with . Furthermore, is compact, where
In the remainder of this paper we will refer to (2.17)–(2.21) as (CBP). Here we will adapt the variational approach used in [6, 27] to prove the existence of a unique solution to our (CBP). Define
where . Then multiplying (2.17) and (2.18) by test function , using formally integration by parts and using the conductive boundary conditions on , we can derive the following equivalent variational
formulation for (CBP). Find such that where is the incident magnetic field and
We rewrite (2.29) as the problem of finding such that where the sesquilinear form is defined by Here denotes the scalar product, and denotes the scalar product. We will use a Helmholtz decomposition
to factor out the nullspace of the curl operator and then to prove the existence of a unique solution to (CBP).
Define then we seek such that The variational problem (2.34) can be rewritten as where we define Here we have used to write the tangential component of the gradient of in terms of the tangential
gradient on the sphere . By Lemma 2.2, it follows that is negative definite, then we obtain that is a coercive sesquilinear form on . Further by Lax-Milgram theorem, it is easy to see that gives rise
to a bijective operator. Since , still by Lemma 2.2, we know that gives rise to a compact operator. In order to apply the Fredholm alternative to the variational problem (2.34), we need to prove the
following uniqueness lemma.
Lemma 2.3. The variational problem (2.34) has at most one solution.
Proof. It suffices to consider the following equation: Choosing , it is easy to see that By the definition of the operator , if is the weak solution of the problem then we have where Furthermore, we
can compute that which together with the fact implies Therefore the Rellich lemma ensures us that in . From (2.39), we see that on and then which, together with the fact that , implies . This
completes the proof of Lemma 2.3.
Lemma 2.3 together with the Fredholm alternative implies that there exits a unique solution of the variational problem (2.34).
Lemma 2.4. The space is compactly imbedded in , where is a ball with .
Proof. Consider a bounded set of functions . Each function can be extended to all of by solving the exterior Maxwell equation Define Since the tangential components of are continuous across , it
follows that . By using the properties of the Calderon operator and the conditions in , we see that the following equations hold true Then, by the definition of that and the relationship = on , we
immediately have Thus, has a well-defined divergence and in , where Now we choose a cut-off function such that in and is supported in a ball . Then one can use the general compactness theorem
(Theorem 4.7 in [27]) to the sequence and extract a subsequence converging strongly in . This proves the lemma.
From the above definitions of and , we have the following Helmholtz decomposition lemma.
Lemma 2.5. The spaces and are closed subspaces of . The space is the direct sum of the spaces and , that is,
The proof of this Helmholtz decomposition Lemma is entirely classical (see [27, 28]).
We now look for a solution of the variational problem (2.31) in the form , where and is the unique solution of (2.34). We observe that for all by the definition of . Hence the problem of determining
is equivalent to the problem of determining such that From Chapter 10.3.2 in [27] we know that for where the operator is a compact operator from into and the operator satisfies . We now split the
sesquilinear form into with The sesquilinear form is obviously bounded and a direct computation verifies that with some constant .
Hence by Lax-Milgram theorem, gives rise to a bijective operator and by the compact embedding of in and the fact that is a compact operator from into , the second term gives rise to a compact
operator. Then a standard argument implies that the Fredholm alternative can be applied. Finally, the uniqueness theorem yields the existence result. We summarize the above analysis in the following
Theorem 2.6. For any incident field , there exists a unique solution of (CBP) which depends continuously on the incident field .
3. Uniqueness for the Inverse Problem
In this section we will show that the scattering obstacle and the corresponding parameter are uniquely determined from the knowledge of the scattered fields for all , where is the surface of a large
ball with . By some properties of the scattered fields, we can derive a relationship between them, then constructing special singular solutions which satisfy the relationship. Finally, we can obtain
the uniqueness result by using the singularities of the singular solutions that we constructed.
Lemma 3.1. Assume that is not an eigenvalue of Maxwell equation for the domain . Then we have(i) the restriction to of is complete in ;(ii) the restriction to of is complete in .
Proof. For simplicity, we only prove statement (ii). Case (i) can be proved similarly.
Let be such that Then it follows that Define By (3.2), it is easy to see that for arbitrary polarization in the tangential plane to at , we have From the definition of (3.3), we immediately have Due
to the symmetry of the background Green function, as a function of solves ,. Hence, satisfies the Maxwell's equation in . By (3.6) and the fact that is an arbitrary polarization in the tangential
plane to at , we immediately have that .
The uniqueness of the exterior problem implies that in . Thus, the unique continuation principle ensures us that in . By trace theorem, it follows that and on . By the definition of and the jump
relations of the vector potential across , it can be checked that satisfies the following equations: Therefore, the uniqueness theorem of the interior problem for Maxwell's equations implies that in
. Finally, from the jump relations of the vector potential across , we have which completes the proof.
We now consider two obstacles and with the refractive index and the surface impedance . Let denote the unbounded part of and its open complement. From the proof of Theorem 2.6, it follows that the
total field satisfies for any large ball with and all test function , where It is convenient to introduce the following space: where . The relationship derived in the following lemma plays a central
role in the proof of the main result in this section.
Lemma 3.2. Assume that is not an eigenvalue of Maxwell equation in . Let be a ball with . Let and be the scattered fields with respect to and , respectively, produced by the same incident field .
Assume that for all with the radius for a fixed wave number . Then we have Here satisfies the following variational problem: for all , where the coefficients and satisfy that and .
Proof. (i) We first prove that for any fixed , the scattered fields , where is the solution of the following problem: with the incident field and , , . By Lemma 3.1 and the fact that is not an
eigenvalue of Maxwell equation in , it follows that there exists a sequence and such that Let , then it satisfies the Maxwell equation in . Let , then the well posedness of the problem and (3.17)
imply that This, together with the fact that , implies (see [28]) Then by (3.19), (3.20), and the trace theorem, it can be proved that Denote by and the scattered fields with respect to and produced
by the same incident field . By the assumption for all , it is easy to see that . Then by the uniqueness theorem of the exterior scattering problem, it follows that in , which together with the
unique continuation principle ensures that in . Now, by (3.21) and the well posedness of the direct problem (2.18), it can be checked that for any compact set , we have for any fixed .
Therefore, the fact in ensures us that The arbitrarity of implies that for any fixed .
(ii) Next we will show that the identity (3.14) holds. Set , then it follows from (3.11) that for all . Choose two domains with and define a smooth function with in and in . Let , , , it is easy to
see that satisfy the assumptions of the lemma. We further assume satisfies (3.15) with respect to , , , so that substituting into the left hand of (3.24) and noting that in yield that Hence
substituting into (3.24), it follows from the right hand of (3.24) that We define by for all . By Theorem 2.6, it follows that there exists a unique solution of the problem for all . Choose two
domains with and define smooth functions with and . Take in (3.11), it is seen that Equation (3.28) with replaced by yields By (3.26) and (3.27), it can be shown that Taking the difference of (3.29)
and (3.30), we have that By (3.28), we can deduce that is a radiating solution of the corresponding Maxwell's equations in , then it can be extended to all of denoted by by solving the exterior
Maxwell's equation in with on , which also satisfies the Silver-Müler radiation condition at infinity. By applying the vector Green formula to (3.32), it can be proved that In view of the fact and in
, we immediately have Application of the vector Green formula again and noting that both and the extended function satisfy the Silver-Müler radiation condition, it follows that Hence the Stratton-Chu
formula combines with (3.34) implies that Since is an arbitrary polarization in the tangential plane to at , we obtain that . By the fact that is a radiating solution of Maxwell's equation in , it
follows that in . Hence the unique continuation principle implies that in . Therefore, can be used as a test function for , which satisfies (3.15) with . So that from the left hand of (3.30), we
deduce that Thus, it follows from the right hand of (3.30) that . Furthermore, from (3.27) with replaced by , it can be shown that From the definitions of , we observe that which combines (3.38), the
definition of the scalar product , and the fact that implies that (3.14) holds. This ends the proof of this lemma.
The main result of this section is contained in the following theorem.
Theorem 3.3. Let and be the scattered fields with respect to and , respectively, and the corresponding impedances. Suppose that the assumptions in Lemma 3.2 hold true and is not empty for . If one of
the following assumptions holds, then we have . Consider(i);(ii).
Proof. Let us assume that is not included in . Since is connected, we can find a point and a sufficiently small with the following properties:(i);(ii) the points lie in for all , where is the unit
normal to at .Denote , the inner part of the domain . We consider the unique solution of the following problem: Here satisfies the Silver-Müler radiation condition at infinity, and denotes the
magnetic dipole defined by Define It can be proved that is a solution of Maxwell's equations with homogeneous conductive boundary value conditions on in any domain with and .
Define and .
In view of the above definitions of and , it follows that satisfies the variational equation (3.15) in Lemma 3.2 for the obstacle . The well posedness of the direct problem for (CBP) and the fact
that is bounded away from imply that the solution of (3.40) is uniformly bounded in . We now define another singular solution with respect to by where is a magnetic dipole defined in (3.41), and is a
solution of the problem Here satisfies the Silver-Müler radiation condition at infinity. Noting that satisfies the variational equation (3.15) in Lemma 3.2 with and , it follows that both and satisfy
the relationship (3.14), then we obtain For case (i), by the fact that and the singularities of the magnetic dipole defined in (3.41), it can be proved that as , this, together with the fact that the
other terms in the right hand of (3.46) are bounded, leads to a contradiction. Hence we have . By choosing and using the similar analysis as in the proof above, one can prove that . Finally, we
obtain that . For other cases, due to the singularities of , a contradiction also arises in (3.46) as . This proves the theorem.
Theorem 3.4. Assume with parameters and the scattered fields satisfy for all , then we have on .
Proof. From the proof of Theorem 3.3, it follows that there exists two singular solutions of the conductive boundary problem with respect to the obstacle for some . By Lemma 3.2 and the identity , it
can be checked that
|
{"url":"http://www.hindawi.com/journals/aaa/2012/306272/","timestamp":"2014-04-20T08:56:51Z","content_type":null,"content_length":"1048051","record_id":"<urn:uuid:fa1d7b01-5fc7-45de-ac42-8dbbf0ad9512>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Proving sinz/z--->1 for complex numbers.
December 5th 2008, 09:13 AM #1
May 2008
[SOLVED] Proving sinz/z--->1 for complex numbers.
Hi everybody, thanks for reading.
I'm having trouble proving lim(sinz/z) = 1 |z-->0.
I tried using the definition with epsilons but reached no where.
The regular proof, for one variable, is of no help here. I cannot see how to prove that |sinz|<|z|. I don't even think it's true....
Also, using U(x,y) and V(x,y) isn't much help for I get pretty long two-variable functions of which it is not any easier to calculate the limit when x,y--->0.
In other words - I'm lost :-\
Thank you!
L'Hospital's rule is applicable. Perhaps though you don't wish to go that route.
I think it would be hard to do this from first principles. But it's easy enough if you're allowed to quote results for example about power series representations.
The power series $\sin z = z - \tfrac {z^3}{3!} + \tfrac{z^5}{5!} - \ldots$ has infinite radius of convergence. So $\tfrac{\sin z}z = 1 - \tfrac {z^2}{3!} + \tfrac{z^4}{5!} - \ldots$ also has
infinite radius of convergence, and goes to 1 as z goes to 0.
1. Thanks
2. Unfortunately, we have not yet studied expensions of complex functions (this is also giving me a hard time proving the chain rule, bah).
However, a friend of mine has only 10 minutes ago told me of something he thought of:
lim sinz/z = lim (sinz-sin(0))/(z-0) = sin'(0) = cos(0)
of course I only need to show sin'x = cosx, but I think that's not gonna be hard...
Anyway, I really appreciate the suggestions, and if you have other original ideas I'd be glad to hear.
Gotta go right now
Thanks again!
1. Thanks
2. Unfortunately, we have not yet studied expensions of complex functions (this is also giving me a hard time proving the chain rule, bah).
However, a friend of mine has only 10 minutes ago told me of something he thought of:
lim sinz/z = lim (sinz-sin(0))/(z-0) = sin'(0) = cos(0)
of course I only need to show sin'x = cosx, but I think that's not gonna be hard...
Anyway, I really appreciate the suggestions, and if you have other original ideas I'd be glad to hear.
Gotta go right now
Thanks again!
$\lim_{x\to{0}}\frac{\sin(x)}{x}=\lim_{x\to{0}}\int _0^1\cos(xy)dy$
Can you see where to go from there?
I can see where you want to go from there, which is to slip the limit past the integral sign. But exchanging limiting operations (applied to a function of a complex variable in this case) is a
delicate business, which needs careful justification. There's a substantial theorem being secretly used here!
In fact, this looks like a very neat way to show that $\lim_{z\to0}\tfrac{\sin z}z=1$, and the change-of-limiting-operations procedure is justified, essentially because the function (z,y)→cos(zy)
is locally uniformly continuous. But that takes at least as much machinery to prove as the other methods proposed in the previous comments.
Having hardly studied anything yet aside limit theorems and a bit of complex differentiation, I definitely cannot use the idea suggested above, but thanks
Hi Aurora,
I shall give a [very] brief proof. Someone will pick me up if there are any mistakes.
If $0 < x < \frac{\pi}{2}$ then there is not much difficulty in showing that $\sin x < x < \tan x$ . Since we are assuming $0 < x < \frac{\pi}{2}$ then we know that $\sin x >0$ and thus $1 < \
frac{x}{\sin x} < \frac{1}{\cos x}$ . In other words,
$\cos x < \frac {\sin x}{x} < 1~~(*)$
Note that $1-\cos x=(1-\cos x)\cdot \frac{1+\cos x}{1+\cos x}=\frac{1-\cos^2 x}{1+\cos x}=\frac{\sin^2 x}{1+\cos x}$
Since we are assuming that $0 < x < \frac{\pi}{2}$ then $\cos x > 0$ and thus $\frac{\sin^2x}{1+\cos x}<\sin^2 x<x^2$ and hence from $(*)$ we have
$1-x^2<\frac{\sin x}{x}<1~~(**)$
We have been assuming that $0 < x < \frac{\pi}{2}$ but equation $(**)$ also holds if $-\frac{\pi}{2}< x < 0$ since $(-x)^2=x^2$ and $\sin (-x)=-\sin x$ . We know that we have $x\approx 0$ and so
using equation $(**)$ and the squeeze theorem we can see that
$<br /> \lim_{x\rightarrow 0}{\frac{\sin x}{x}}= \lim_{x\rightarrow 0}{(1-x^2)}=<br /> \lim_{x\rightarrow 0}{1}=1<br />$
Hope this helps.
If you've studied a bit of complex differentiation, you probably know how to differentiate the exponential function (same formula as for the exponential function on real numbers). Then, since
you've probably defined $\sin z=\frac{e^{iz}-e^{-iz}}{2i}$ (and $\cos z=\frac{e^{iz}+e^{-iz}}{2}$), you can prove that indeed $\sin'z=\cos z$ for complex $z$, and apply your friend's idea.
Laurent, thanks
Sean, thank you too, but notice that the function here is complex, and your proof is relevant only in the real case....
However, thank you for the bother!
I can see where you want to go from there, which is to slip the limit past the integral sign. But exchanging limiting operations (applied to a function of a complex variable in this case) is a
delicate business, which needs careful justification. There's a substantial theorem being secretly used here!
In fact, this looks like a very neat way to show that $\lim_{z\to0}\tfrac{\sin z}z=1$, and the change-of-limiting-operations procedure is justified, essentially because the function (z,y)→cos(zy)
is locally uniformly continuous. But that takes at least as much machinery to prove as the other methods proposed in the previous comments.
Since $-x\sin(xy)$ is bounded in the neighborhood of zero can we not say that $\cos(xy)$ is uniformly continous in terms of this integral?
I do not understand why to use all the unnecessary proofs because, by definition, $\sin z$ is defined in terms its power series*.
That is what Opalg did.
*)Every formal text on complex analysis would define sine in terms of a power series. If not, like through exponentials, then we can use the power series on the exponential to derive the power
series for sine. Thus, the power series definition is basically the definition for sine. So why use any other approach?
December 5th 2008, 11:31 AM #2
Super Member
Aug 2008
December 5th 2008, 11:32 AM #3
December 5th 2008, 12:08 PM #4
May 2008
December 5th 2008, 02:07 PM #5
December 6th 2008, 12:01 AM #6
December 6th 2008, 01:56 AM #7
May 2008
December 6th 2008, 02:42 AM #8
Aug 2007
December 6th 2008, 02:53 AM #9
MHF Contributor
Aug 2008
Paris, France
December 6th 2008, 06:47 AM #10
May 2008
December 6th 2008, 08:55 AM #11
December 6th 2008, 03:27 PM #12
Global Moderator
Nov 2005
New York City
|
{"url":"http://mathhelpforum.com/calculus/63459-solved-proving-sinz-z-1-complex-numbers.html","timestamp":"2014-04-20T14:39:03Z","content_type":null,"content_length":"76033","record_id":"<urn:uuid:2629bb19-a791-41e0-9f51-40ff7e7ee66b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
primary 6 problem sum
April 30th 2011, 08:18 PM #1
Apr 2009
primary 6 problem sum
Belle was travelling in the MRT. She was seated at the last seat of the last coach.During the journey,she noted the time on her watch was 12.18 P.M when the train just entered an underground
tunnel.The moment she exited from the tunnel,the time was 12.23 P.M.Given that the train travelled at an average speed of 18 km/h and that the tunnel is 1350m long,find the length of the train in
Distance = Rate * Time
What else do you need?
Rule #1 - Name Stuff.
Question #1 - Name what?
Answer #1 - What does it want? Name that.
L = Length of train in metres.
Belle was travelling in the MRT. She was seated at the last seat of the last coach.During the journey,she noted the time on her watch was 12.18 P.M when the train just entered an underground
tunnel.The moment she exited from the tunnel,the time was 12.23 P.M.Given that the train travelled at an average speed of 18 km/h and that the tunnel is 1350m long,find the length of the train in
Notice: $18\frac{km}{h} =5\frac{metres}{second}$
$time=5min=5\cdot 60sec=300sec$
$(length of the tunnel)+(length of the train)=speed\cdot time$
$(length of the train)=speed\cdot time - (length of the tunnel)$
$(length of the train)=300sec\cdot 5\frac{metres}{second} - 1350m=1500m-1350m=150m$
P.S. Write if need details.
April 30th 2011, 08:42 PM #2
MHF Contributor
Aug 2007
May 1st 2011, 04:03 AM #3
Apr 2011
|
{"url":"http://mathhelpforum.com/algebra/179101-primary-6-problem-sum.html","timestamp":"2014-04-21T06:00:45Z","content_type":null,"content_length":"36323","record_id":"<urn:uuid:89b3178f-db51-4f74-9658-d11da1bd37b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: Complexity Math in PDF and ASCII Notation (Fluff, Long)
• From: "Peter Torpey" <ptorpey@xxxxxxxxxxxxxxxx>
• To: <programmingblind@xxxxxxxxxxxxx>
• Date: Tue, 11 Sep 2007 08:13:24 -0400
You asked about Mathematica.
Well, I am a blind physicist and have needed to do complex symbolic math.
I must say, I've never figured out a good way of dealing with the PDF type
of documents you wish to read, but if you're doing your own math, equations,
etc., I found mathematica almost impossible to use with Jaws.
The program which I found to be very accessible (and used all of the time)
is Maple (www.maplesoft.com). Although the Java interface they are pushing
into their new releases is somewhat clumsy with Jaws, the Classic interface
is very accessible with Jaws. I have found this program very useful.
Maple is rather a costly program (> $1,000, although there may be a less
expensive version for students).
An open source math program which you can obtain is macsyma. This runs
fairly well with Jaws (although I haven't played around with it much). I
found this on sourceforge.net.
One other neat little program for which I developed scripts and had the
original developer tweak a bit to work well with Jaws is called QD
Accessible. This little program runs on the PacMate and does all sorts of
symbolic math, derivatives, solving symbolic equations, as well as doing
numerical math. I placed the program, scripts, and some documentation I
wrote on the Pacmate Gear web site so that folks could download it if they
needed it.
I hope this helps.
-- Pete
-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Veli-Pekka
Sent: Monday, September 10, 2007 6:01 PM
To: programmingblind@xxxxxxxxxxxxx
Subject: Complexity Math in PDF and ASCII Notation (Fluff, Long)
Hi list,
I'm now on a course that's about algorithms, data structures and temporal
complexity, for the most part. lite math, intuition and analysis of code
rather than any actual programming tasks, per se. ONce again, I've hit the
usual snag of notation, so here are some questions about math:
The slides are PDf files produced by Distiller from PowerPpoint Slides,
arrgh. In them the math is seriously whacky. On exporting to plain text
using Acrobat or Xpdf, both left out various critical math signs such as
greater than or is in set. Using Acrobat Reader 8 and Dolphin Supernova
8 beta the situation is not much better. There are symbols that look like
set theory symbols but it appears their actual code points don't match, in
stead Sn reads something like pounds, for instance, even though I know for
certain that is not what the symbol on screen looks like. Is there any
accessible way to deal with these PDfs? Has anyone had similar experiences
and could share workarounds? This is in Finnish, and the math is near the
end, but here is a sample document:
The book we use is Introduction to ALgorithms, the 2001 edition.
I'm sure I'll be able to get the originals for the lecture notes but they
are power point, so might not be that good to begin with. Even if LaTEX was
used, as in another math oriented computing course I tried, I had a hard
time with that, too. Mostly due to the math itself, but one still has to
know the notation, too and I have never studied LaTEX, although would like
to mainly for writing articles and maintainging references with ease, but
hey, that's OT.
Nested parens and the greek letters make things all the more harder, though,
as far as symbols go. Doable, sure, but not nice and or easy, even if I was
a math whiz, and I assure you I am not. I genuinely like programming but I
have never truely gotten into higher math, higher than say logarithms or
simple derivatives. I kinda like math and have a deep appreciation for some
of the results and people I know who know it well, but somehow feel I have a
hard time coping with very abstract definitions. Part of that is just me,
part is practice and one important portion of that is notation, thus my
questions. I still wish I knew enough to be able to do audio DSP some day
since I'm an analog synth buff, too. But the filter math there is way
beyond me and again OT.
Sorry for these tangents, I'm typing this late at night and don't feel like
cutting, <smile>.
Anyway, back to notation, my other question is, how do you people deal with
the set theory symbols, logic and other basic math signs? So far.
as in a previous course on logic, I've used operators from programming
languages and the HTMl 4.0 entity names with relative success. Are there
better textual notations and on-line references for picking them up?
What does Mathematica use? How is Math ML like?
I wish semi seriously that there would be a math notation that's as speech
friendly as SQl or Ruby is compared to obfuscated C and Perl JAPHs with
speech, to draw bad programming analogies, <grin>. I'm still a fan of Ruby,
SQL, and APple script on syntax grounds alone which is quite telling. I
know this doesn't matter to everyone that much but when ever I can speech
read code that sounds like Good English, I think, now this is easy to
follow, and elegant, to.
With kind regards Veli-Pekka Tätilä (vtatila@xxxxxxxxxxxxxxxxxxxx)
Accessibility, game music, synthesizers and programming:
View the list's information and change your settings at
No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.5.485 / Virus Database: 269.13.13/998 - Release Date: 9/10/2007
8:48 AM
View the list's information and change your settings at
Other related posts:
• » RE: Complexity Math in PDF and ASCII Notation (Fluff, Long)
|
{"url":"http://www.freelists.org/post/programmingblind/Complexity-Math-in-PDF-and-ASCII-Notation-Fluff-Long,3","timestamp":"2014-04-16T06:18:05Z","content_type":null,"content_length":"14125","record_id":"<urn:uuid:8d3db612-d733-4bbf-bd12-a233ff4855f2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum: Ask Dr. Math FAQ: Golden Ratio, Fibonacci Sequence
Please tell me about the Golden Ratio (or Golden Mean), the Golden Rectangle, and the relation between the Fibonacci Sequence and the Golden Ratio.
The Golden Ratio
The golden ratio is a special number approximately equal to 1.6180339887498948482. We use the Greek letter Phi to refer to this ratio. Like Pi, the digits of the Golden Ratio go on forever without
repeating. It is often better to use its exact value:
1 + sqrt{5}
The Golden Rectangle
A Golden Rectangle is a rectangle in which the ratio of the length to the width is the Golden Ratio. In other words, if one side of a Golden Rectangle is 2 ft. long, the other side will be
approximately equal to 2 * (1.62) = 3.24.
Now that you know a little about the Golden Ratio and the Golden Rectangle, let's look a little deeper. Take a line segment and label its two endpoints A and C. Now put a point B between A and C so
that the ratio of the short part of the segment (AB) to the long part (BC) equals the ratio of the long part (BC) to the entire segment (AC):
The ratio of the lengths of the two parts of this segment is the Golden Ratio. In an equation, we have
AB BC
---- = ---- .
BC AC
Now we're ready for the definition of the Golden Ratio. The Golden Ratio is the ratio of BC to AB. If we set the value of AB to be 1, and use x to represent the length of BC, then
1 x
- = ----- .
x 1 + x
If we solve this equation for x, we'll find that it is the value given above, (1+sqrt{5})/2, which is about 1.62.
If you have a Golden Rectangle and you cut a square off it so that what remains is a rectangle, that remaining rectangle will also be a Golden Rectangle. You can keep cutting these squares off and
getting smaller and smaller Golden Rectangles.
Fibonacci Sequence
In the Fibonacci Sequence (0, 1, 1, 2, 3, 5, 8, 13, ...), each term is the sum of the two previous terms (for instance, 2+3=5, 3+5=8, ...). As you go farther and farther to the right in this
sequence, the ratio of a term to the one before it will get closer and closer to the Golden Ratio.
With the Fibonacci Sequence you can do the opposite of what we described above for the Golden Rectangle. Start with a square and add a square of the same size to form a new rectangle. Continue
adding squares whose sides are the length of the longer side of the rectangle; the longer side will always be a successive Fibonacci number. Eventually the large rectangle formed will look like a
Golden Rectangle - the longer you continue, the closer it will be.
See "The Relation of the Golden Ratio and the Fibonacci Sequence" in the Dr. Math archives.
More from the Dr. Math archives:
On the Web:
|
{"url":"http://mathforum.org/dr.math/faq/faq.golden.ratio.html","timestamp":"2014-04-20T19:49:50Z","content_type":null,"content_length":"10455","record_id":"<urn:uuid:d7d1bb52-3a1c-4eb4-b1b4-e842d47504ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From APIDesign
Reusing libraries produced by others is essential aspect of DistributedDevelopment. It simplifies Time To Market, it reduces long term cost of ownership and leads to creation of good technologies.
However it does not come for free. Read details or directly jump to the implications that shall improve your every day development habits.
This page starts by describing a way to convert any 3SAT problem to a solution of finding whether there is a way to satisfy all dependencies of a library in a repository of libraries. Thus proving
that the later problem is NP-Complete. Then it describes the importance of such observations on our development practices.
There are similar observations for other module systems (RPM and Debian, see the external references section), with almost identical proof. The only difference is that both RPM and Debian allow easy
way to specify negation by use of obsolete directive (thus it is easy to map the 3SAT formula). The unique feature of this proof is that it does not need negation at all. Instead it deals with
re-export of an API. As re-export of APIs is quite common in software development, it brings implications of this kind of problem closer to reality.
The problem of satisfying a logic formula remains NP-complete even if all expressions are written in wikipedia::conjunctive normal form with 3 variables per clause (3-CNF), yielding the 3SAT problem.
This means the expression has the form:
$(x_{11} \vee x_{12} \vee x_{13}) \wedge$
$(x_{21} \vee x_{22} \vee x_{23}) \wedge$
$(x_{31} \vee x_{32} \vee x_{33}) \wedge$
$(x_{n1} \vee x_{n2} \vee x_{n3})$
where each x[ab] is a variable v[i] or a negation of a variable $eg v_i$. Each variable v[i] can appear multiple times in the expression.
[edit] Library Versioning Terminology
Let A,B,C,... denote various modules and their APIs.
Let A[1.0],A[1.1],A[1.7],A[1.11] denote compatible versions of module A.
Let A[1.0],A[2.0],A[3.1] denote incompatible versions of module A.
Let A[x.y] > B[u.v] denote the fact that version x.y of module A depends on version u.v of module B.
Let $A_{x.y} \gg B_{u.v}$ denote the fact that version x.y of module A depends on version u.v of module B and that it re-exports module B's API to users of own API.
Let Repository R = (M,D) be any set of modules with their various versions and their dependencies on other modules with or without re-export.
Let C be a Configuration in a repository R = (M,D), if $C \subseteq M$, where following is satisfied:
1. re-exported dependency is satisfied with some compatible version: $\forall A_{x.y} \in C, \forall A_{x.y} \gg B_{u.v} \in D \Rightarrow \exists w >= v \wedge B_{u.w} \in C$
2. each dependency is satisfied with some compatible version: $\forall A_{x.y} \in C, \forall A_{x.y} > B_{u.v} \in D \Rightarrow \exists w >= v \wedge B_{u.w} \in C$
3. each imported object has just one meaning for each importer: Let there be two chains of re-exported dependencies $A_{p.q} \gg ... \gg B_{x.y}$ and $A_{p.q} \gg ... \gg B_{u.v}$ then $x = u \wedge
y = v$
[edit] Module Dependency Problem
Let there be a repository R = (M,D) and a module $A \in M$. Does there exist a configuration C in the repository R, such that the module $A \in C$, e.g. the module can be enabled?
[edit] Conversion of 3SAT to Module Dependencies Problem
Let there be 3SAT formula with with variables v[1],...,v[m] as defined above.
Let's create a repository of modules R. For each variable v[i] let's create two modules $M^i_{1.0}$ and $M^i_{2.0}$, which are mutually incompatible and put them into repository R.
For each formula $(x_{i1} \vee x_{i2} \vee x_{i3})$ let's create a module F^i that will have three compatible versions. Each of them will depend on one variable's module. In case the variable is used
with negation, it will depend on version 2.0, otherwise on version 1.0. So for formula
$v_a \vee eg v_b \vee eg v_c$
we will get:
$F^i_{1.1} \gg M^a_{1.0}$
$F^i_{1.2} \gg M^b_{2.0}$
$F^i_{1.3} \gg M^c_{2.0}$
All these modules and dependencies are added into repository R
Now we will create a module T[1.0] that depends on all formulas:
$T_{1.0} \gg F^1_{1.0}$
$T_{1.0} \gg F^2_{1.0}$
$T_{1.0} \gg F^n_{1.0}$
and add this module as well as its dependencies into repository R.
Claim: There $\exists C$ (a configuration) of repository R and $T_{1.0} \in C$$\Longleftrightarrow$ there is a solution to the 3SAT formula.
"$\Leftarrow$": Let's have an evaluation of each variable to either true or false that evaluates the whole 3SAT formula to true. Then
$C = \{ T_{1.0} \} \bigcup$
$\{ M^i_{1.0} : v_i \} \bigcup \{M^i_{2.0} : eg v_i \} \bigcup$
$\{ F^i_{1.1} : x_{i1} \} \bigcup \{ F^i_{1.2} : eg x_{i1} \wedge x_{i2} \} \bigcup \{ F^i_{1.3} : eg x_{i1} \wedge eg x_{i2} \wedge x_{i3} \}$
It is clear from the definition that each M^i and F^i can be in the C just in one version. Now it is important to ensure that each module is present always at least in one version. This is easy for M
^i as its v[i] needs to be true or false, and that means one of $M^i_{1.0}$ or $M^i_{2.0}$ will be included. Can there be a F^i which is not included? Only if $eg x_{i1} \wedge eg x_{i2} \wedge eg x_
{i3}$ but that would mean the whole 3-or would evaluate to false and as a result also the 3SAT formula would evaluate to false. This means that dependencies of T[1.0] on F^i modules are satisfied.
Are also dependencies of every $F^i_{1.q}$ satisfied? From all the three versions, there is just one $F^i_{1.q}$, the one its x[iq] evaluates to true. However x[iq] can either be without negation,
and as such $F^i_{1.q}$ depends on $M^j_{1.0}$ which is included as v[j] is true. Or x[iq] contains negation, and as such $F^i_{1.q}$ depends on $M^j_{2.0}$ which is included as v[j] is false.
"$\Rightarrow$": Let's have a C configuration satisfies all dependencies of T[1.0]. Can we also find positive valuation of 3SAT formula?
For i-th 3-or there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. That means $F^i_{1.1} \in C \vee F^i_{1.2} \in C \vee F^i_{1.3} \in C$ - at least one version of F^i module is present in
the configuration. The one F^i that has the satisfied dependency reexports $M^j_{1.0}$ (which means v[j] = true) or $M^j_{2.0}$ (which means v[j] = false). Anyway each i 3-or evaluates to true.
The only remaining question is whether a C configuration can force truth variable v[j] to be true in one 3-or and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^
i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two chain of dependencies ending in different versions of M^j cannot be in one C as that breaks
the last condition of configuration definition (each imported object has just one meaning). Thus each M^j is represented only by one version and each v[j] is evaluated either to true or false, but
never both.
The 3SAT formula's evaluation based on the configuration C is consistent and satisfies the formula.
[edit] Polemics
One of the critiques raised during the LtU review (linked in external sources) is that the kind of situation cannot happen in practise. Surprisingly it can. OSGi and its RangeDependencies lead
naturally into NP-Complete problems. Read more...
[edit] Implications
If there is a repository of modules in various (incompatible) versions, with mutual dependencies and re-export of their APIs, then deciding whether some of them can be enabled is NP-complete problem.
As NP-complete problems are hard to solve, it is usually our best desire to avoid them in real life situations. What does that mean in case one decides to practise DistributedDevelopment (and it is
inevitable that software for 21st century needs this development style)? If you want to avoid headaches with finding the right configuration of various version of the libraries that allows to execute
them together, then stick with following simple rules.
[edit] Be Compatible!
If you develop your own libraries in backward compatible way, you can always select the most recent version of each library. That is the configuration you are looking for. It is easy to find
(obviously) and also it is the most desirable, as it delivers the most modern features and bugfixes that users of such libraries want.
[edit] Reuse with Care!
If you happen to reuse libraries (and you should because reuse lowers Time To Market, just like it did for me when I was publishing my first animated movie), then choose such libraries that can be
trusted to evolve compatibly.
[edit] Hide Incompatibilities!
If you happen to reuse a library that cannot be trusted to keep its BackwardCompatibility, then do whatever you can to not re-export its APIs! This has been discussed in Chapter 10, Cooperating with
Other APIs, but in short: If you hide such library for internal use and do not export any of its interfaces, you can use whatever version of library you want (even few years older) and nobody shall
notice. Moreover in many module systems there can even be multiple versions of the same library in case they are not re-exported.
[edit] Explicit Re-export
Looks like there is a way to eliminate the NP-Completeness by disabling implicit re-export. See LibraryWithoutImplicitExportIsPolynomial. However this works only in a system with standardized
versioning policy and without use of RangeDependencies.
[edit] Conclusion
Avoid complexities and NP-complete problems. Learn to develop in backward compatible way. Reading TheAPIBook is a perfect entry point into such compatible software design of the 21st century.
[edit] External Links
|
{"url":"http://wiki.apidesign.org/wiki/LibraryReExportIsNPComplete","timestamp":"2014-04-19T20:02:57Z","content_type":null,"content_length":"37832","record_id":"<urn:uuid:f4662000-5d8c-4c9a-92b3-1abddf935ab4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resources and services for Utah Higher Education faculty and students such as Canvas and collegEmedia.
Find equivalent forms for positive rational numbers. Divide numerator by denominator
of a fraction to find a decimal.
• TI-73 calculators
• Colored paper and scissors for student Foldables
• Transparencies and worksheets for: “Representing Equivalent Rational Numbers” “
Fractions, Decimals, and Percents With Candy”
• Transparencies for: Finding Equivalent Rational Number Forms song and Converting
Rational Numbers Concentration Game
Background For Teachers:
Enduring Understanding (Big Ideas):
Equivalency Rational numbers
Essential Questions:
• How can I represent equivalent forms for decimals, fractions and percents?
• How can a rational number be converted to a different form?
• How does the fraction a/b relate to a divided by b?
Skill Focus:
Convert positive rational numbers to fraction, decimal or percent form.
Vocabulary Focus:
Convert, equivalent number forms, repeating decimal, terminating decimal.
Ways to Gain/Maintain Attention (Primacy):
Technology, Journaling (Foldable), Sketching, Cooperative group discussion, game.
Instructional Procedures:
Starter: Sketch each of the following and tell where you might use that rational
number form.
1. ¾
2. 25%
3. 0.6
Lesson Segment 1: How can I represent equivalent forms for decimals, fractions and
Put the “Representing Equivalent Rational Numbers”, and “Fractions, Decimals, and
Percents With Candy” on transparencies, so you can discuss with the class.
As a class discuss and work to complete “Representing Equivalent Rational Numbers”
Apply: Have students work to complete “Fractions, Decimals, and Percents With Candy”
one question at a time. Use a Board Talk protocol.
Board Talk Protocol
Students discuss a problem with team members or a partner without writing anything on
their papers.
Two or three students are randomly selected to come to the board to individually
sketch and show reasoning for the first problem. The students work in separate spaces
on the board, so the seated class members will be able to see and compare separate
While the three students are working at the board, the remaining students work in
their seats to complete the first item on their individual papers. Teacher selects a
student at the board to explain to the class what they have done. The class is told
they must each write one GOOD QUESTION about the explanation the student at the board
is giving. A good question starts with how, why, what if, or can you clarify… Write
these GOOD QUESTION starters on the board. Students must write their good question on
their assignment paper as the student is explaining.
After the explaining student finishes, the teacher selects one or two from the class
to ask their GOOD QUESTION to the explaining student.
The teacher may select a second or third student at the board to then explain their
approach, especially if they have a different response. The seated students again
write a GOOD QUESTION for that explaining student. Or, the teacher may ask the class
members to look at all responses on the board and prepare to describe how they are
similar or different.
We know from our last lesson that there are times when one form for a rational number
is better than another. For example, we wouldn’t want to use the percent form for ½
in a recipe, and we wouldn’t want to use the fraction form for $1.35 at the store.
Lesson Segment 2: How can a rational number be converted to a different form? How
does the fraction a/b relate to a divided by b?
Q. Why would we want to be able to convert from one rational number form to another?
There are many procedures for converting rational numbers. One of these is to use the
decimal form for a rational number as the “Middle Man”. That is, that percents and
fractions are first converted to the decimal form, and then can be converted to
another form.
Sketch this graphic on the board. The idea here is that fractions can be converted to
decimals by dividing denominator into numerator, and percents can be converted to a
decimal by moving the decimal two places to the left.
Ask students if they have ever had to go through a “middle man”. For example, when I
was younger, I always went to my mother to ask her to get something I wanted from my
father because she was easier to work with.
Fraction to Decimal: Demonstrate using the TI-73 to write a fraction as a decimal by
dividing the denominator into the numerator. Use common fractions such as ½, ⅓, ⅔, ⅛,
⅝, ¼, ¾, and the fifths. Discuss the repeating decimals for ⅓ and for ⅔ pointing out
that the calculator rounds the last digit for ⅔.
Percent to Decimal: Show students how to use the calculator to divide any number
written in percent form by 100 to get a decimal.
Once the number is written in decimal form, we can use the Decimal Conversion
Decimal to fraction: Write decimal as a fraction using, 10ths, 100ths, 100ths, as
the denominator depending on the last place value of the number
Decimal to percent: Move decimal to the right two places.
Help students make a Three-Flap Foldable for their journal that looks like this. Clip
on the dotted lines up to the fold line.
Write the decimal conversion procedure under the center flap. Write the procedures
for converting fractions and percents to decimal form under the two designated flaps.
Sing the Equivalent Forms of Rational Numbers Song with them (attached)
Another way to finding equivalent rational numbers is to use the
Fraction to Decimal and Decimal to Fraction: Type number then push
Percent to Fraction: Type the number then push
Fraction to percent: Type in the fraction then push
Percent to Decimal: Type the number then push
Having more than one strategy for finding equivalent rational numbers will be
Lesson Segment 3: Practice Game:
Play Converting Rational Numbers Concentration (attached). Put the game on a
transparency. Cover the squares with little post-its. Divide the class into two teams
and have them guess to find a matching pair - two equivalent numbers in different
Assign any text practice as needed.
Assessment Plan:
Observation, student performance tasks.
This lesson plan was created by Linda Bolin.
Utah LessonPlans
Created Date :
Apr 21 2009 15:41 PM
Summary:Find equivalent forms for positive rational numbers. Divide numerator by denominator of a fraction to find a decimal.
Skill Focus: Convert positive rational numbers to fraction, decimal or percent form.
Vocabulary Focus: Convert, equivalent number forms, repeating decimal, terminating decimal.
Ways to Gain/Maintain Attention (Primacy): Technology, Journaling (Foldable), Sketching, Cooperative group discussion, game.
Instructional Procedures: Starter: Sketch each of the following and tell where you might use that rational number form.
Lesson Segment 1: How can I represent equivalent forms for decimals, fractions and percents?
Put the “Representing Equivalent Rational Numbers”, and “Fractions, Decimals, and Percents With Candy” on transparencies, so you can discuss with the class.
As a class discuss and work to complete “Representing Equivalent Rational Numbers”
Apply: Have students work to complete “Fractions, Decimals, and Percents With Candy” one question at a time. Use a Board Talk protocol.
Board Talk Protocol Students discuss a problem with team members or a partner without writing anything on their papers.
Two or three students are randomly selected to come to the board to individually sketch and show reasoning for the first problem. The students work in separate spaces on the board, so the seated
class members will be able to see and compare separate responses.
While the three students are working at the board, the remaining students work in their seats to complete the first item on their individual papers. Teacher selects a student at the board to explain
to the class what they have done. The class is told they must each write one GOOD QUESTION about the explanation the student at the board is giving. A good question starts with how, why, what if, or
can you clarify… Write these GOOD QUESTION starters on the board. Students must write their good question on their assignment paper as the student is explaining.
After the explaining student finishes, the teacher selects one or two from the class to ask their GOOD QUESTION to the explaining student.
The teacher may select a second or third student at the board to then explain their approach, especially if they have a different response. The seated students again write a GOOD QUESTION for that
explaining student. Or, the teacher may ask the class members to look at all responses on the board and prepare to describe how they are similar or different.
We know from our last lesson that there are times when one form for a rational number is better than another. For example, we wouldn’t want to use the percent form for ½ in a recipe, and we wouldn’t
want to use the fraction form for $1.35 at the store.
Lesson Segment 2: How can a rational number be converted to a different form? How does the fraction a/b relate to a divided by b?
Q. Why would we want to be able to convert from one rational number form to another?
There are many procedures for converting rational numbers. One of these is to use the decimal form for a rational number as the “Middle Man”. That is, that percents and fractions are first converted
to the decimal form, and then can be converted to another form.
Sketch this graphic on the board. The idea here is that fractions can be converted to decimals by dividing denominator into numerator, and percents can be converted to a decimal by moving the decimal
two places to the left.
Ask students if they have ever had to go through a “middle man”. For example, when I was younger, I always went to my mother to ask her to get something I wanted from my father because she was easier
to work with.
Fraction to Decimal: Demonstrate using the TI-73 to write a fraction as a decimal by dividing the denominator into the numerator. Use common fractions such as ½, ⅓, ⅔, ⅛, ⅝, ¼, ¾, and the fifths.
Discuss the repeating decimals for ⅓ and for ⅔ pointing out that the calculator rounds the last digit for ⅔.
Percent to Decimal: Show students how to use the calculator to divide any number written in percent form by 100 to get a decimal.
Once the number is written in decimal form, we can use the Decimal Conversion Procedures:
Decimal to fraction: Write decimal as a fraction using, 10ths, 100ths, 100ths, as the denominator depending on the last place value of the number Decimal to percent: Move decimal to the right two
Decimal to fraction: Write decimal as a fraction using, 10ths, 100ths, 100ths, as the denominator depending on the last place value of the number
Decimal to percent: Move decimal to the right two places.
Help students make a Three-Flap Foldable for their journal that looks like this. Clip on the dotted lines up to the fold line.
Write the decimal conversion procedure under the center flap. Write the procedures for converting fractions and percents to decimal form under the two designated flaps.
Sing the Equivalent Forms of Rational Numbers Song with them (attached)
Another way to finding equivalent rational numbers is to use the
Fraction to Decimal and Decimal to Fraction: Type number then push
Having more than one strategy for finding equivalent rational numbers will be helpful.
Lesson Segment 3: Practice Game: Play Converting Rational Numbers Concentration (attached). Put the game on a transparency. Cover the squares with little post-its. Divide the class into two teams and
have them guess to find a matching pair - two equivalent numbers in different forms.
|
{"url":"http://www.uen.org/Lessonplan/preview?LPid=23383","timestamp":"2014-04-16T21:56:21Z","content_type":null,"content_length":"48566","record_id":"<urn:uuid:5e974957-bdc9-44eb-967f-2b6344eb5791>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Haddonfield Science Tutor
I am a current student of chemical engineering at Rowan University. Chemistry and math are my favorite things, but I am able to tutor for other sciences (at the high school level) other than
chemistry as well. I have a background in mathematics that currently reaches up to calculus III.
7 Subjects: including chemistry, physics, calculus, statistics
...My schedule is very flexible in the evening and on weekends and I never charge for a lesson unless you are completely satisfied.I have my PhD in cellular and molecular biology from the
University of Wisconsin-Madison. I graduated cum laude with a chemistry minor in college. The chemistry classes I have taken at the undergraduate level include chemistry, organic chemistry and
7 Subjects: including biochemistry, genetics, biology, chemistry
...In this lab, I was responsible for training 7 of my coworkers in the experimental and research techniques they would need to work in the research lab alongside me. While training researchers in
this fashion was a different experience from tutoring students for classes, I found there was a lot of...
20 Subjects: including mechanical engineering, electrical engineering, chemistry, physics
...I continue to teach it at the college level after retirement from teaching high school. I consider my strengths to be in finding appropriate analogies for understanding complex ideas and a
thorough understanding of the material. I have taught anatomy and physiology both at the high school and college level.
5 Subjects: including biology, anatomy, physiology, physical science
...A thorough prep course usually requires about 15 hours of work in each of the Math and Reading areas, and 10 hours in writing. Ideally, those hours would be spread over about 8-10 weeks.
However, briefer periods also can help, particularly with students who have taken the SAT previously.
32 Subjects: including physical science, ecology, anthropology, biology
|
{"url":"http://www.purplemath.com/haddonfield_science_tutors.php","timestamp":"2014-04-16T22:28:10Z","content_type":null,"content_length":"24175","record_id":"<urn:uuid:8c5afe63-239f-4425-86e7-f88052429b56>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sargent, GA Trigonometry Tutor
Find a Sargent, GA Trigonometry Tutor
...I am a certified teacher in PreK-5 and have taught language arts including phonics for 10 years. In addition I have taught reading and phonics in middle school for 8 years. All of my teaching
includes reading, English, and language arts.
47 Subjects: including trigonometry, chemistry, English, physics
...I am also available for teaching Spanish, as well as almost any subject for lower grades. I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for
educational fields.
40 Subjects: including trigonometry, reading, Spanish, geometry
[GROUP RATES AVAILABLE!] Math has always seemed interesting to me. It just makes sense. It's logical.
21 Subjects: including trigonometry, calculus, statistics, geometry
I have a bachelor of music degree in vocal performance from Mercer University. I am an opera singer with a passion for math. I have experience in tutoring students in sight-singing in preparation
for All State choir auditions.
25 Subjects: including trigonometry, reading, calculus, statistics
I have graduated with a Diploma Graduate Studies in Math Teaching from the University of the Philippines and a bachelor's degree in Civil Engineering. I have been successfully working in a
manufacturing company in the last 10 years. I have consistently done both paid and unpaid tutorial jobs since...
22 Subjects: including trigonometry, calculus, geometry, algebra 1
Related Sargent, GA Tutors
Sargent, GA Accounting Tutors
Sargent, GA ACT Tutors
Sargent, GA Algebra Tutors
Sargent, GA Algebra 2 Tutors
Sargent, GA Calculus Tutors
Sargent, GA Geometry Tutors
Sargent, GA Math Tutors
Sargent, GA Prealgebra Tutors
Sargent, GA Precalculus Tutors
Sargent, GA SAT Tutors
Sargent, GA SAT Math Tutors
Sargent, GA Science Tutors
Sargent, GA Statistics Tutors
Sargent, GA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Sargent_GA_Trigonometry_tutors.php","timestamp":"2014-04-17T01:27:02Z","content_type":null,"content_length":"23814","record_id":"<urn:uuid:22f7a814-4638-4296-935d-d5b0ce03affb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analysis and Design of Magnetic Shielding System for Breast Cancer Treatment with Hyperthermia Inductive Heating
International Journal of Antennas and Propagation
Volume 2013 (2013), Article ID 163905, 12 pages
Research Article
Analysis and Design of Magnetic Shielding System for Breast Cancer Treatment with Hyperthermia Inductive Heating
School of Telecommunication Engineering, Suranaree University of Technology, Thailand
Received 27 June 2013; Revised 21 September 2013; Accepted 25 September 2013
Academic Editor: Soon Yim Tan
Copyright © 2013 Chanchai Thongsopa and Thanaset Thosdeekoraphat. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
An analysis and design of magnetic shielding system are presented for breast cancer treatment with hyperthermia inductive heating. It is a technique to control magnetic field intensity and relocate
the heating area by using a rectangular shielding with aperture. The distribution of the lossy medium was analyzed using the finite difference time domain method. Theoretical analyses investigate
whether a novel shielded system is effective for controlling the magnetic field distribution or heating position. Theoretical and experimental investigations were carried out using a lossy medium.
The inductive applicator is a ferrite core with diameter of 7cm, excited by 4MHz signal and a maximum output power of 750W. The results show that size of heating region can be controlled by
varying the aperture size. Moreover, the investigation result revealed that the position of heating region can be relocated by changing the orientation of the ferrite core with shielded system in
-axis direction. The advantage of the magnetic shielding system is that it can be applied to prevent the side effects of hyperthermia cancer treatment by inductive heating.
1. Introduction
At present, cancer is one of leading causes of population death worldwide. Cancer is the uncontrolled growth and spread of cells. It can affect almost any part of the body, especially breast cancer
because breast cancer has been increasing worldwide every year. Therefore, it is desirable to remove the cancer from the human body as soon as possible. Cancer can be treated effectively by various
methods such as surgical excision, chemotherapy, and radiotherapy including hyperthermia [1–5] which is one of noninvasive techniques. The demands for noninvasive cancer treatment by hyperthermia
heating are rapidly growing [6–13]. There are few techniques for noninvasive deep hyperthermia [14–17]. Most of microwave heating methods could not be used for deep hyperthermia due to skin depth
effect. Low frequency technique is possible for deep treatment, though. The temperature in a cancer cell can be increased by induction [18–26]. To induce heat in the cancer cell, strong magnetic
field has to penetrate the cancer cell to generate eddy current in the cell which can be visualized as electric loss. The eddy current will increase the cell temperature. The temperature of normal
cells due to eddy current is constant since the cancer cell is lower than conductive than normal cell. Nevertheless, the direction of magnetic field is important for localizing the heating region.
Because of high intensity magnetic field will have side effects on neighbouring normal cells, which can be devastating to normal cells [27, 28]. A magnetic shielded system has become an important
topic for hyperthermia inductive heating because it can reduce the side effects on neighbouring normal cells from magnetic field.
Moreover, the magnetic field intensity is crucial for hyperthermia treatment since it controls tissue temperature. It has been shown that magnetic core orientation and position can control the field
distribution in both horizontal and vertical directions [29]. To concentrate magnetic field in a specific region, a shielding system was installed at the magnetic core. The location of heating can be
controlled by moving the ferrite core. The shielding system in [29] utilizing two metal plates to control the vertical magnetic field to controlling heating position. One metal plate was placed
between two ferrite cores, and the other two metal plates were placed close to the ferrite cores. This configuration provides control over the vertical field, and hence, the heating location can be
determined by the ferrite cores location. However, the magnetic field will leak through the unshielded side of the ferrite cores. The leakage of magnetic field results in difficulty of controlling
the heating area and also affects normal cells nearby. Radiotherapy for breast cancer requires regional heating with specific temperature [30]. The temperature is directly proportional to magnetic
field intensity.
In this paper, we presented an analysis and design of magnetic shielding system for breast cancer treatment with hyperthermia inductive heating. This paper will analyze the effects of magnetic
shielded system on heating area and location of induction heating for breast cancer hyperthermia treatment; what the articles presented here consists of numerical simulations and experiments. The
distribution of the lossy medium was analyzed using the finite difference time domain method. The inductive applicator is a ferrite core with diameter of 7cm, excited by 4MHz signal and a maximum
output power of 750W. These theoretical and experimental investigations were carried out using an agar phantom. It is difficult to limit heating area when the applicator’s ferrite cores are
unshielded. The results show that the size of heating region can be controlled by varying the aperture size of the shielded system. However, the heating efficiency is reduced as the aperture size
decreases. If the small heating area is needed, it may require longer treatment time. In addition, the heating location can be varied by changing ferrite core orientation. By moving the orientation
of the ferrite core in -axis direction, the heating location and area were altered dramatically for unshielded ferrite cores, whereas the heating position and area are slightly different for shielded
cores. The results show that the heating position can be relocated from the left to the right of the agar phantom by changing the orientation of the ferrite core with shielded system. The cores’
vertical position has almost no effect on the heating area and position for shielded cores. In contrast, heating area and position are difficult to predict when unshielded cores are used. The
proposed magnetic field shielding system is suitable for preventing the side effects of hyperthermia cancer treatment by induction heating.
2. Concept and Construction of Shielding System
The proposed magnetic shielding system consists of two rectangular shielding plates as shown in Figure 1. The shielding system in [29] consists of a metal plate to control the magnetic field from a
single side of the core. Unlike the regional heating system in [29], the proposed shielding system controls the vertical by enclosing the ferrite core with a rectangular shield with aperture. Since
placing the shielding plate at only one side of the ferrite core in [29] can control the magnetic field of only one side, it will cause magnetic field leak in the opposite side of the shielding
plate. Thus, it is difficult to control the heating area. This magnetic field leakage results in spreading of the heating region that has an effect on other nearby tissues.
In this figure, a two-dimension cross-section of the analytic region is represented in order to easily understand the configuration of the shielding system analysis. The proposed shielding system
limits the magnetic field around the ferrite cores to confine the field in the horizontal direction. Most of the vertical magnetic field will penetrate into the heating body via the aperture, and
hence the heating region size can be determined by the aperture size. Moreover, the heating position which can be relocated from the top to the bottom and the left to the right of the breast by
moving the orientation of the ferrite core with rectangular shield in -axis direction is illustrated in Figure 1. In addition, the design of magnetic field shielding system is necessary to take into
consideration the attenuation of the magnetic field properties of the various materials used in order to spread the magnetic field over the specific area and leakage of magnetic field to the nearby
areas to the fewest.
A major shielding technique used to reduce the magnetic field was divided into two ways as follows. Ferromagnetic shields give good results for small and closed shields, and they also give large
field attenuation at close range to the source for open shield geometries. Highly conductive materials, on the other hand, are found to be suitable for large shield sizes. The attenuation is,
however, reduced in the close vicinity of the source. In this investigation, we have selected a highly conductive material to studies, and their different materials are shown. We can regard the
magnetic field as a result of the electric current flow and the magnetization of surrounding materials. The magnetic field is excited by source currents carried by conductors of various geometries.
With a highly conductive shield, eddy currents arise in the metal. These currents create a field opposing the incident field. The magnetic field is in this way repulsed by the metal and forced to run
parallel to the surface of the shield, yielding a low flux density outside the metal [31, 32]. Therefore, we have to consider effects of magnetic field shielding of various materials that were
tested, such as copper (Cu), lead (Pb), steel (Fe), and transformer steel (Ck-37), which have conductivity, relative permeability, and relative permittivity and are illustrated in Table 1.
The investigation of the effective reduction of the magnetic field for various materials was carried out in the study. To analyse the effectiveness of magnetic field shielding of materials, we
specify a current source of the magnetic field () of 1A/m^2. The dimensions and schematic details of the shielding plate for analyzing the magnetic field intensity of various materials are shown in
Figure 2.
Figure 2 represents the model of rectangular shield plate and distance to measure the intensity of the magnetic field [31, 33]. After that, we analyzed the effectiveness of magnetic field shielding
(SE) of various materials in the following equation [34]. The shielding results are all given for a shield thickness of 1mm at the frequency of 4MHz. Consider, where is the rms flux density without
shield plate and is the rms flux density with shield plate. Analysis of the effective shielding of magnetic field in our study is illustrated in Figure 3. The material used in the analyses consists
of copper, lead, steel, and transformer steel, as mentioned above.
Figure 3 represents the effectiveness of magnetic field shielding of various materials that were tested. The horizontal axis represents the distance from the edge of the rectangular shielding plate,
and the vertical axis shows the effectiveness of the shielding for various materials. The analysis found that the copper materials will be provided the most effective shield of approximately
25.47dB. Therefore, we chose copper as the material used for the analysis and design of magnetic shielding system. Copper is a material that can be reduced to a maximum magnetic field in order to
study the characteristics of magnetic field shielding system which are applied with various aperture sizes to control the magnetic field density and heating position appropriately. The schematic of
the analytical model of magnetic field shielding system is shown in Figure 4.
Figure 4 represents the heating model which is made from agar phantom (the phantom model size is equal to cm) with conductivity, relative permeability, and relative permittivity of 0.62S/m, 1, and
130, respectively. A phantom simulating a human breast was placed between a pair of ferrite cores with magnetic shield (the parameter details of ferrite cores and shielding plate are shown in Figure
4). The distance between both ferrite cores is equal to 18cm, by selected from a distance at the magnetic field were reduced the most from consideration of shielding plate design in Figure 3
mentioned above. The magnetic shield plate is a rectangular metal with conductivity of 59.66S/m. The ferrite core is a highly magnetic material with 0.001S/m conductivity and relative permeability
at 200.
3. Analysis of Temperature Distribution
To determine a method of heating induction and controlling heating position, we solved Maxwell’s equation and analyzed by using the three-dimensional finite difference time domain method [35–40] the
following equations [41–46]: where is the electric field, is the magnetic field (A/m), is the radian frequency, is the permeability, is the forced current density (A/m^2), is the permittivity, and is
the electrical conductivity (S/m). In this analysis, the following fundamental equation for vector potential , which takes the eddy current into consideration, is used [28, 45]. Solving the following
equation for , the magnetic field and eddy current distribution are calculated as follows: where , , and represent the magnetic reluctance, and the current density (A/m^2) can be calculated from
magnetic field and the electric potential (V), respectively. In the electromagnetic analysis, we derive the lowest resonant frequency of the applicator. Subsequently, the temperature distributions
are observed. The temperature changes depend on the output power from the high power oscillator into the applicator systems and treatment time. The power losses in the lossy medium can be calculated
from the relationship of the magnetic field and current density. Moreover, we can control the heating temperature from the external power into the applicator systems. The temperature distribution in
lossy media can be calculated from bioheat transfer equation by assuming that the lossy media is human tissue or breast replica. It can be expressed as [25, 47–54] where is the temperature (°C), is
the heating time (s), is the distribution temperature (m^2·s^−1), is the liquid water flow ratio to the moisture transfer (kg^−1), is the specific heat capacity of an object (4.18J·kg^−1·°C^−1), is
the latent heat of vaporization (kJ·kg^−1), is the mass of liquid (kg), is the heat source of distribution (W·m^−3) calculated from current density of magnetic field, and is the local physical
density of tissue (1000g·m^−3). The simulation of induction heating was conducted by analyzing eddy current distribution of the inductive applicator which is a ferrite core, and it will be discussed
in the next section.
4. Numerical Results
In this section, we investigate the magnetic flux density, which can be controlled by varying the aperture size. In order to resolve the problem of heating region local heating can be controlled by
varying the aperture size of the shielded system. Moreover, the investigation showed that the position of heating region can be relocated by changing the orientation of the ferrite core with shielded
system in -axis direction. For the construction of magnetic shielding system to verify the field distribution on the heating model, full wave 3D numerical simulation was performed using finite
difference time domain method.
4.1. Evaluating Electric Loss Density
To find out how to control the magnetic flux and heating region, we will change aperture size to get the most excellent heating efficiency, while causing smallest magnetic flux leakage to another
nearby tissue. The proposed shielding system limits the magnetic flux around the ferrite cores to confine the field between two ferrite cores. It is a technique to control magnetic field intensity
and relocate the heating area by using a rectangular metal shielding with aperture. The demonstration shows that the magnetic field intensity can be regulated by varying the aperture size.
From these theoretical investigations, one effective method to control a heating region in the breast was found. Hence, the temperature in the heating body can be controlled by the size of shielding
aperture. Electric loss density for the heating model was evaluated. The ferrite core is excited by 4MHz signal. The aperture sizes in the simulation are 5cm, 7cm, and 8cm. Electric loss density
images for heating region of the ferrite core without shield and with rectangular shield with all aperture sizes are shown in Figure 5.
Figure 5 represents the heating region of the ferrite cores with and without rectangular shield. The heating region has spread over the large area when the ferrite core is unshielded with
rectangularly shield as shown in Figure 5(a). When the ferrite core is rectangularly shielded with various aperture sizes, the heating region size is confined in smaller area as shown in Figures 5(b)
–5(d), but in Figures 5(e) and 5(f) the heating region size began to spread in wide area. It is difficult to limit or control heating area when the ferrite cores with shielding plate have the large
aperture size. The heating region size is reduced when the aperture size is smaller. It can be seen that the heating region is controlled by varying the aperture size as illustrated in Table 2. The
results in Table 2 are expressed in terms of the electric loss density. However, replacing the electric loss density mentioned above into the last term of (4), the heating temperature in Celsius
degrees unit per time will be obtained. For example, the aperture size of 8cm when changing the value of electric loss density from 154W/m^3 to the form of temperature will be equal to 36.84
Celsius degrees. Nevertheless, in experiment and measurement result if the small heating area is needed, it may require longer treatment time. More treatment time may be required to heat the cancer
cell to the desired temperature.
The simulations show that the heating area can be effectively controlled by using the rectangular shield with adjustable aperture size as mentioned above in Table 2. The heating area was determined
by the aperture size of the rectangular shield because the heating area is proportional to the aperture size. In unshielded cores, the heating area spreads unpredictably, and, hence, it is difficult
to limit the heating area when the cores are unshielded. Simulations show that the heating area can be controlled by the aperture size of rectangular shield. In contrast, heating area is difficult to
predict when unshielded cores are used. From Table 2, we found that the aperture size of 8 centimeters can perform the best results because the electric loss density is high. Furthermore, it can
control the leakage of the magnetic field or the effectiveness of magnetic field shielding (SE) more effectively [33]. Considering the results of Table 2 shows that the aperture sizes of 9 and 10
centimeters the electric loss density are having highly values. But it is difficult to limit or control heating area when the ferrite cores with shielding plate have the large aperture size. So we
selected the aperture size of 8cm which can perform the best results because the electric loss density is high. Furthermore, it can control the leakage of the magnetic field or the effectiveness of
magnetic field shielding (SE) more effectively.
4.2. Investigating the Heating Orientation
We further investigated the heating location by changing shielded ferrite cores orientation. The ferrite cores changing the angle orientation are investigated as shown in Figure 6. For investigated
the heating location by changing shielded ferrite cores orientation from original cores orientation (0-degree) to 90-degree. By investigated the electric loss density or heating distribution, it will
be changing the angle orientation of the ferrite core in the left hand each once equal to 5 degrees. Which in Figure 6, we have shown only two positions is 45 degrees and 90 degrees. From changing
the angle orientation of the ferrite core with shielding system at 45 degrees it was found that the electric loss density is the most valuable. Furthermore, the effect of distance of the ferrite core
was investigated in -direction to the heating location. The result shows that the heating location can be relocated by changing the position of the ferrite core with rectangular metal shield as shown
in Figure 7.
Figure 6 shows the heating region for the ferrite 45-degree and 90-degree core orientation. In the shielded cores, the aperture size is cm in the simulation for both orientations. The maximum
electric loss densities for the 45-degree and 90-degree orientation are 158W/m^3 and 129W/m^3, respectively. The maximum electric loss density for the 45-degree orientation is more than that for
the parallel ferrite core configuration. In contrast, the electric loss density is lower for the parallel ferrite core configuration with 90-degree orientation. Moreover, the simulation results in
Figure 7 show that the heating locations can be relocated from the left to the right of the breast model by changing the orientation of the ferrite core with rectangular shields in -direction. The
maximum electric loss density for the orientation of the both ferrite cores with rectangular shields which changed from original position to the 8cm is 150W/m^3. Further, the maximum electric loss
density for the orientation of both ferrite cores with rectangular shields which changed from original position to 16cm is 155W/m^3. In this case, the heating efficiency is similar for both
positions and for the original position in Figure 5(d) since the aperture size is identical.
5. The Heating Experiment and Measurement Results
To control the heating area and heating position, the magnetic field distributions near the breast were analyzed using the full wave 3D numerical simulation. Magnetic shielding systems with a
rectangular shield with aperture were introduced to control the magnetic field and examine its shield effect. The proposed shielding system limits the magnetic field around the ferrite cores to
confine the field in horizontal direction. Most of the vertical magnetic field will penetrate into the heated breast via the aperture. The construction of applicator and shielding system for
verifying the numerical and simulation results is illustrated in Figure 8.
Figure 8 shows the construction of magnetic shielding system to verify the field distribution on the heating model. Shield effect of the magnetic flux density was investigated. The proposed magnetic
shielding system analysis consists of high power oscillator, applicator, and agar phantom. The first part is a high power oscillator, which consists of source excited by 4MHz signal and power
amplifier. The second part is the applicators, which includes Ni-Zn type ferrite cores covered with rectangular shielding box with aperture sizes in the examination of 8cm system model. The
demonstration revealed that we can control the magnetic flux intensity and relocate the heating area by using a rectangular metal shielding with aperture. The third part is the agar phantom; it has
an elliptic cylinder-like shape with a pair of protuberances for the breasts model. The dimensions of the longer and shorter axes of the elliptic cylinder are 30 and 20cm, respectively. The
properties of the agar phantom or artificial breasts are presented in Section 2, as already mentioned above. An agar phantom subject to the guideline assigned by the Quality Assurance Committee,
Japanese Society of Hyperthermia Oncology (QAC, JASHO) was used instead of the breast. During the inductive heating, it is obviously defined according to the heating principle that only a conductive
material with loss is well heated.
For the treatment of cancer by using magnetic fields, the applicator must be designed for use in spreading or induction of magnetic field. Because the cancer treatment is performed with hyperthermia
inductive heating, a coil applicator is needed. A ferrite core applicator system for hyperthermia was first proposed by one of the authors to achieve effective heating and to solve irradiation
problems. Since then, several kinds of ferrite core applicators have been studied and developed [11–15]. By introducing a ferrite core for the inductive applicator, a magnetic field can be
concentrated between a pair of poles. Accordingly, local or regional heating becomes possible with a relatively low input power, and irradiation around the applicator is decreased, compared to the
same kind of inductive applicator, which is also agreeable from the viewpoint of electromagnetic compatibility (EMC).
In this paper, the design of the induction coil or an applicator, which the applicator above mentioned is ferrite core types of 2 poles. The equation could be good series resonance in order to
determine the answer by the frequency of 4MHz. The basic principle of the series inductor, which is the total value of inductor, is equal to the sum of both inductors as shown in Figure 9.
Figure 9 shows that a typical series inductance of the inductor value is equal to the sum of the total values of each inductor. The resonance frequency is . The winding number of each pole coil is
turns and adjusts the value of the capacitor to be in good resonance, in which the capacitor value is . The resistance is approximately 0 ohms so that the resonant circuit gets the most excellent
efficiency. Furthermore, it helps in easily understanding analysis and design of the induction coil or an applicator. The induction coils as above will spread the magnetic field between both poles.
The magnetic flux moves back and forth alternately between the two poles, in which the abovementioned spread is the alternating magnetic field. In this paper we define resonance frequency of 4MHz
and the number of pole coils in two series, in which the resonance frequency of the total inductor coil value is μH. The copper wire for inductive coil applicator design is number 13-SWG, which has
cross-sectional area equal to 4.15mm.
The winding number of each pole coil is 14 turns and adjusts the value of the capacitor to be in good resonance, in which the capacitor value is pF as shown in Figure 10. The construction of
parallel ferrite cores and rectangular shielding plates with aperture is shown. To examine these theoretical results of heating characteristics, a heating experiment was conducted. The magnetic field
intensity around the ferrite cores will be limited by shielding system to confine the field in horizontal direction. Most of the vertical magnetic field will penetrate into the heating agar phantom
via the aperture. The experiment of proposed system consists of high power oscillator, induction coils applicator with shielding plates, and agar phantom. The agar phantom used for the experiment is
the same substance.
Figure 10 shows the construction of magnetic shielding system for breast cancer treatment with hyperthermia inductive heating. The paper experiments presented here consist of high power oscillator,
induction coil applicator with shielding plates, and agar phantom. It is the experimental construction of magnetic shielding system that verifies the field distribution on the heating model. These
theoretical and experimental investigations were carried out using an agar phantom. The magnetic flux density and its shield effect were investigated. The inductive applicator is a ferrite core with
diameter of 7cm and length of 20cm, excited by 4MHz signal, and high power oscillator has maximum output power of 750W (input voltage is 50V and input current is 15A). The distance between both
ferrite cores is equal to 18cm, considering the shielding plate design in Section 2.
To examine these heating characteristics, a heating experiment was conducted. The experimental method is the same as the case mentioned above in numerical section. Figure 11 shows an experimental
result of the temperature characteristics for heating region and changing distance from the initial positions of the ferrite cores. It is suggested that the magnetic shield plate plays an important
role in controlling the heating region and heating position. In this experiment, the temperature distribution or the distribution of heat in the breast model can be measured by using thermograph to
measure the heat value inside the breast model. After magnetic field energy flows through the lossy medium or breast model for 20 minutes, we will stop generating magnetic field (in order to prevent
the magnetic field from the excited source to disturb with the thermograph used for thermal imaging). Subsequently, the temperature distribution was observed by using a thermograph (FLIR SYSTEMS
Model T360).
The experiment results of the temperature distributions when the ferrite cores applicator is without shielding plates are shown in Figure 11(a). The temperature distributions at the starting position
when the ferrite cores applicator is with shielding plates at the original position and the temperature distributions in the cross-section are illustrated in Figures 11(b) and 11(c), respectively.
The experiment results of the temperature distributions when ferrite cores applicator with shielding plates offsets from original position in the -direction by a value equal to 16cm are illustrated
in Figure 11(d).
Figure 11(a) shows the temperature distributions of the ferrite cores’ applicator without shielding plate. Figures 11(b) and 11(c) represent the experimental result of the temperature distributions
at the initial positions of the ferrite cores with shielded plates and the temperature distributions of the ferrite cores with shielded plates in the cross-section, respectively. Further, Figure 11
(d) represents the temperature distributions when the ferrite cores applicator offsets from initial positions in the -direction by a value equal to 16cm. The maximum value of temperature of 45.5°C,
45.1°C, 44.9°C, and 45.8°C is shown in Figures 11(a)–11(d), respectively; it is the temperature at the cursor position. The temperature of the thermograph imaging camera setting at the starting
position is in the range of 20°C to 46°C, in which the magnetic energy was generated by the high power oscillator of 750W, excited by 4MHz signal. The white portion near the breast in these figures
represents the highest temperature. It was found that these heating characteristics demonstrate the theoretical results. The experimental result of the temperature distributions observed by using a
thermograph which is consistent with the numerical calculations results. The electric loss density of the heating model is a part of the magnetic field energy from the external source excited by
4MHz signal because the energy of the external electromagnetic field is directly proportional to the internal temperature of the breast model obtained from the thermograph imaging camera.
6. Conclusion
The effect of magnetic shielding system on heating area and the location for breast cancer treatment with hyperthermia inductive heating are presented. It is a novel technique to control magnetic
field intensity and relocate the heating area by using a rectangular shielding with aperture. The distribution of the lossy medium was analyzed using the FDTD. From these investigations, we found
that the aperture size of centimeters could perform the best results because the electric loss density is high to equal, 154W/m^3. Furthermore, it can control the leakage of the magnetic field more
effectively. In addition, the heating location can be varied by changing ferrite core orientation. The results show that the heating position can be relocated from the left to the right of the agar.
Subsequently, the heating experiment was conducted. The inductive applicator is a ferrite core with diameter of 7cm and length of 20cm, excited by 4MHz signal, and a maximum output power of 750 W.
From heating experiment by using a thermograph, we found that the value of the temperature distribution in the breast model is approximately 45°C. The proposed magnetic field shielding system is
suitable to prevent the side effects of hyperthermia cancer treatment by inductive heating.
This work was supported by Suranaree University of Technology (SUT) and by the Office of the Higher Education under NRU project of Thailand. The authors deeply appreciate the valuable comments of the
reviewers and recommends to be advantageous for revisions this paper.
1. E. Ben Hur, M. M. Elkind, and B. V. Bronk, “Thermally enhanced radioresponse of cultured Chinese hamster cells: inhibition of repair of sublethal damage and enhancement of lethal damage,”
Radiation Research, vol. 58, no. 1, pp. 38–51, 1974. View at Scopus
2. P. P. Antichi, N. Tokita, J. H. Kim, et al., “Selective heating of coetaneous human tumors at 27. 12MHz,” IEEE Transactions on Microwave Theory and Techniques, vol. 26, no. 8, pp. 569–572, 1978.
3. J. R. Oleson, “A review of magnetic induction methods for hyperthermia treatment of cancer,” IEEE Transactions on Biomedical Engineering, vol. 31, no. 1, pp. 98–105, 1984.
4. I. Kimura and T. Katsuki, “VLF induction heating for clinical hyperthermia,” IEEE Transactions on Magnetics, vol. 22, no. 6, pp. 1897–1900, 1986. View at Scopus
5. P. Charles and P. Elliot, Handbook of Biological Effects of Electromagnetic Fields, CRC Press, New York, NY, USA, 1995.
6. F. K. Storm, R. S. Elliott, W. H. Harrison, and D. L. Morton, “Clinical RF hyperthermia by magnetic-loop induction: a new approach to human cancer therapy,” IEEE Transactions on Microwave Theory
and Techniques, vol. 30, no. 8, pp. 1149–1158, 1982. View at Publisher · View at Google Scholar · View at Scopus
7. H. Kato and T. Ishida, “New inductive applicator for hyperthermia,” Journal of Microwave Power, vol. 18, no. 4, pp. 331–336, 1983. View at Scopus
8. A. Rosen, M. A. Stuchly, and A. Vander Vorst, “Applications of RF/microwaves in medicine,” IEEE Transactions on Microwave Theory and Techniques, vol. 50, no. 3, pp. 963–974, 2002. View at
Publisher · View at Google Scholar · View at Scopus
9. V. A. Vander, A. Rosen, and Y. Kotsuka, RF/Microwave Interaction With Biological Tissues, Wiley-IEEE, New York, NY, USA, 2006.
10. P. R. Stauffer and S. N. Goldberg, “Introduction: thermal ablation therapy,” International Journal of Hyperthermia, vol. 20, no. 7, pp. 671–677, 2004. View at Publisher · View at Google Scholar ·
View at Scopus
11. M. Hiraoka, M. Mitsumori, N. Hiroi et al., “Developmentof RF and microwave heating equipment and clinical applications to cancer treatment in Japan,” IEEE Transactions on Microwave Theory and
Techniques, vol. 48, no. 1, pp. 1789–1799, 2000. View at Scopus
12. Y. Kotsuka, E. Hankui, M. Hashimoto, and M. Miura, “Developmentof double-electrode applicator for localized thermal therapy,” IEEE Transactions on Microwave Theory and Techniques, vol. 48, no. 1,
pp. 1906–1908, 2000. View at Scopus
13. P. S. Ruggera and G. Kantor, “Development of a family of RF helical coil applicators which produce transversely uniform axially distributed heating in cylindrical fat-muscle phantoms,” IEEE
Transactions on Biomedical Engineering, vol. 31, no. 1, pp. 98–106, 1984. View at Scopus
14. Y. Kotsuka, “Development of ferrite core applicator system for deep-induction hyperthermia,” IEEE Transactions on Microwave Theory and Techniques, vol. 44, no. 10, pp. 1803–1810, 1996. View at
Publisher · View at Google Scholar · View at Scopus
15. Y. Kotsuka and H. Okada, “Development of small and high efficiency implant for deep local hyperthermia,” Japanese Journal of Hyperthermic Oncology, vol. 19, no. 1, pp. 11–22, 2003.
16. V. D'Ambrosio and F. Dughiero, “Numerical model for RF capacitive regional deep hyperthermia in pelvic tumors,” Medical and Biological Engineering and Computing, vol. 45, no. 5, pp. 459–466,
2007. View at Publisher · View at Google Scholar · View at Scopus
17. S. Kuroda, N. Uchida, K. Sugimura, and H. Kato, “Thermal distribution of radio-frequency inductive hyperthermia using an inductive aperture-type applicator: evaluation of the effect of tumour
size and depth,” Medical and Biological Engineering and Computing, vol. 37, no. 3, pp. 285–290, 1999. View at Scopus
18. J. H. Kim, E. W. Hahn, N. Tokita, and L. Z. Nisce, “Local tumor hyperthermia in combination with radiation therapy. I. Malignant cutaneous lesions,” Cancer, vol. 40, no. 1, pp. 161–169, 1977.
View at Scopus
19. R. S. Elliott, W. H. Harrison, and F. K. Storm, “Hyperthermia: electromagnetic heating of deep-seated tumors,” IEEE Transactions on Biomedical Engineering, vol. 29, no. 1, pp. 61–64, 1982. View
at Scopus
20. M. J. Hagmann and R. L. Levin, “Coupling efficiency of helical coil hyperthermia applications,” IEEE Transactions on Biomedical Engineering, vol. 32, no. 7, pp. 539–540, 1985. View at Scopus
21. J.-L. Guerquin-Kern, M. J. Hagmann, and R. L. Levin, “Experimental characterization of helical coils as hyperthermia applicators,” IEEE Transactions on Biomedical Engineering, vol. 35, no. 1, pp.
46–52, 1988. View at Scopus
22. P. Raskmark and J. B. Andersen, “Focused electromagnetic heating of muscle tissue,” IEEE Transactions on Microwave Theory and Techniques, vol. 32, no. 8, pp. 887–888, 1984. View at Scopus
23. C. A. Tiberio, L. Raganella, G. Banci, and C. Franconi, “The RF toroidal transformer as a heat delivery system for regional and focused hyperthermia,” IEEE Transactions on Biomedical Engineering,
vol. 35, no. 12, pp. 1077–1085, 1988. View at Publisher · View at Google Scholar · View at Scopus
24. J. B. Anderson, A. Baun, K. Harmark, et al., “A hyperthermia system using a new type of inductive applicator,” IEEE Transactions on Biomedical Engineering, vol. 31, no. 1, pp. 212–227, 1984.
25. F. Dughiero and S. Corazza, “Numerical simulation of thermal disposition with induction heating used for oncological hyperthermic treatment,” Medical and Biological Engineering and Computing,
vol. 43, no. 1, pp. 40–46, 2005. View at Publisher · View at Google Scholar · View at Scopus
26. H. Rahn, S. Schenk, H. Engler, et al., “Tissue model for the study of heat transition during magnetic heating treatment,” IEEE Transactions on Magnetics, vol. 49, no. 1, pp. 244–249, 2013.
27. C. Polk and E. Postow, Handbook of Biological Effects of Electromagnetic Fields, CRC Press, Boca Raton, Fla, USA, 1996.
28. S. L. Ho, S. Niu, W. N. Fu, et al., “Design and analysis of novel focused hyperthermia devices,” IEEE Transactions on Magnetics, vol. 48, no. 11, pp. 3254–3257, 2012.
29. Y. Kotsuka, M. Watanabe, M. Hosoi, I. Isono, and M. Izumi, “Development of inductive regional heating system for breast hyperthermia,” IEEE Transactions on Microwave Theory and Techniques, vol.
48, no. 1, pp. 1807–1814, 2000. View at Scopus
30. C. Thongsopa, A. Intarapanich, and S. Tangwachirapan, “Shielding system for breast hyperthermia inductive heating,” in Proceedings of the ISEF-XIV International Symposium on Electromagnetic
Fields in Mechatronics, Electrical and Electronic Engineering, Arras, France, September 2009.
31. L. Hasselgren and J. Luomi, “Geometrical aspects of magnetic shielding at extremely low frequencies,” IEEE Transactions on Electromagnetic Compatibility, vol. 37, no. 3, pp. 409–420, 1995. View
at Publisher · View at Google Scholar · View at Scopus
32. Y. Du, T. C. Cheng, and A. S. Farag, “Principles of power-frequency magnetic field shielding with flat sheets in a source of long conductors,” IEEE Transactions on Electromagnetic Compatibility,
vol. 38, no. 3, pp. 450–459, 1996. View at Publisher · View at Google Scholar · View at Scopus
33. Y. Kotsuka, H. Kayahara, K. Murano, H. Matsui, and M. Hamuro, “Local inductive heating method using novel high-temperature implant for thermal treatment of luminal organs,” IEEE Transactions on
Microwave Theory and Techniques, vol. 57, no. 10, pp. 2574–2580, 2009. View at Publisher · View at Google Scholar · View at Scopus
34. L. Hasselgren and J. Luomi, “Geometrical aspects of magnetic shielding at extremely low frequencies,” IEEE Transactions on Electromagnetic Compatibility, vol. 37, no. 3, pp. 409–420, 1995. View
at Publisher · View at Google Scholar · View at Scopus
35. D. Sullivan, “Three-dimensional computer simulation in deep regional hyperthermia using the finite-difference time-domain method,” IEEE Transactions on Microwave Theory and Techniques, vol. 38,
no. 2, pp. 204–211, 1990. View at Publisher · View at Google Scholar · View at Scopus
36. K. S. Kunz and R. J. Luebbers, The Finite Difference Time Domain for Electromagnetics, CRC Press, New York, NY, USA, 1993.
37. D. M. Sullivan, “A frequency-dependent FDTD method for biological applications,” IEEE Transactions on Microwave Theory and Techniques, vol. 40, no. 3, pp. 532–539, 1992. View at Publisher · View
at Google Scholar · View at Scopus
38. C. Thongsopa, M. Krairiksh, A. Mearnchu, and D.-A. Srimoon, “Analysis and design of injection-locking steerable active array applicator,” IEICE Transactions on Communications, vol. E85-B, no. 10,
pp. 2327–2337, 2002. View at Scopus
39. D. C. Dibben and A. C. Metaxas, “Finite element time domain analysis of multimode applicators using edge elements,” Journal of Microwave Power and Electromagnetic Energy, vol. 29, no. 4, pp.
242–251, 1994. View at Scopus
40. S. Bharoti and S. Ramesh, “Simulation of specific absorption rate of electromagnetic energy radiated by mobile handset in human head using FDTD method,” WSEAS Transactions on Communications, vol.
2, pp. 174–180, 2003.
41. W. Renhart, C. A. Magele, K. R. Richter, P. Wach, and R. Stollberger, “Application of eddy current formulations to magnetic resonance imaging,” IEEE Transactions on Magnetics, vol. 28, no. 2, pp.
1517–1520, 1992. View at Publisher · View at Google Scholar · View at Scopus
42. A. Boadi, Y. Tsuchida, T. Todaka, and M. Enokizono, “Designing of suitable construction of high-frequency induction heating coil by using finite-element method,” IEEE Transactions on Magnetics,
vol. 41, no. 10, pp. 4048–4050, 2005. View at Publisher · View at Google Scholar · View at Scopus
43. P. A. Bottomley and E. R. Andrew, “RF magnetic field penetration, phase shift and power dissipation in biological tissue: implications for NMR imaging,” Physics in Medicine and Biology, vol. 23,
no. 4, pp. 630–643, 1978. View at Publisher · View at Google Scholar · View at Scopus
44. N. Kuster and Q. Balzano, “Energy absorption mechanism by biological bodies in the near field of dipole antennas above 300 MHz,” IEEE Transactions on Vehicular Technology, vol. 41, no. 1, pp.
17–23, 1992. View at Publisher · View at Google Scholar · View at Scopus
45. C. A. Balanis, Advanced Engineering Electromagnetic, Wiley, New York, NY, USA, 1989.
46. K. S. Yee, “Numerical solution of initial boundary value problems involving Maxwell’s equations in isotropic media,” IEEE Transactions on Antennas and Propagation, vol. 14, no. 3, pp. 302–307,
47. V. Mateev, I. Marinova, Y. Saito, et al., “Coupled field modeling of Ferrofluid heating in tumor tissue,” IEEE Transactions on Magnetics, vol. 49, no. 5, pp. 1793–1796, 2013.
48. V. D'Ambrosio and F. Dughiero, “Numerical model for RF capacitive regional deep hyperthermia in pelvic tumors,” Medical and Biological Engineering and Computing, vol. 45, no. 5, pp. 459–466,
2007. View at Publisher · View at Google Scholar · View at Scopus
49. R. B. Roemer and T. C. Cetas, “Applications of bioheat transfer simulations in hyperthermia,” Cancer Research, vol. 44, no. 10, supplement, pp. 4788s–4798s, 1984. View at Scopus
50. S. M. Mimoune, J. Fouladgar, A. Chentouf, and G. Develey, “A 3D impedance calculation for an induction heating system for materials with poor conductivity,” IEEE Transactions on Magnetics, vol.
32, no. 3, pp. 1605–1608, 1996. View at Publisher · View at Google Scholar · View at Scopus
51. N. S. Doncov and B. D. Milovanovic, “TLM modeling of the circular cylindrical cavity loaded by lossy dielectric sample of various geometric shapes,” Journal of Microwave Power and Electromagnetic
Energy, vol. 37, no. 4, pp. 237–247, 2002. View at Scopus
52. O. P. Gandhi and J.-Y. Chen, “Electromagnetic absorption in the human head from experimental 6-GHz handheld transceivers,” IEEE Transactions on Electromagnetic Compatibility, vol. 37, no. 4, pp.
547–558, 1995. View at Publisher · View at Google Scholar · View at Scopus
53. A. Hadjem, D. Lautru, C. Dale, M. F. Wong, V. F. Hanna, and J. Wiart, “Study of specific absorption rate (SAR) induced in two child head models and in adult heads using mobile phones,” IEEE
Transactions on Microwave Theory and Techniques, vol. 53, no. 1, pp. 4–11, 2005. View at Publisher · View at Google Scholar · View at Scopus
54. S. C. Gnyawali, Y. Chen, F. Wu et al., “Temperature measurement on tissue surface during laser irradiation,” Medical and Biological Engineering and Computing, vol. 46, no. 2, pp. 159–168, 2008.
View at Publisher · View at Google Scholar · View at Scopus
|
{"url":"http://www.hindawi.com/journals/ijap/2013/163905/","timestamp":"2014-04-19T22:34:45Z","content_type":null,"content_length":"144483","record_id":"<urn:uuid:714a54e3-ae85-4387-85a7-636deb225359>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
Nuisance parameters, goodness-of-fit problems, and Kolmogorov-type statistics.
(English) Zbl 0683.62026
Goodness-of-fit, Debrecen/Hung. 1984, Colloq. Math. Soc. János Bolyai 45, 21-58 (1987).
[For the entire collection see Zbl 0606.00025.]
A. Kolmogorov [Giorn. Ist. Ital. Atturi 4, 83-91 (1933; Zbl 0006.17402)] in treating the GOF (goodness-of-fit) hypothesis $``{H}_{0}:$ $F={F}_{0}^{"\text{'}}$, introduced the statistic ${D}_{n}=sup|
{F}_{n}\left(z\right)-{F}_{0}\left(z\right)|,$ where ${F}_{0}\left(·\right)$ is a completely specified continuous distribution, and ${F}_{n}\left(·\right)$ is the EDF (empirical distribution
function) of the data $Z=\left({X}_{1},···,{X}_{n}\right)$. ${D}_{n}$ is called the K-S (Kolmogorov-Smirnov) statistic.
In this paper one is concerned with cases in which the hypothesized cpf is not completely specified. The hypotheses here are of the form $``{H}_{0}:F\in {{\Omega }}^{"\text{'}\text{'}},$ where ${{\
Omega }}^{"}$ is a family of cpfs parametrized by a nuisance parameter. For example, ${{\Omega }}^{"}$ could be a family of normals, or exponentials or Paretos.
Since the hypothesized cpf is not completely specified, the K-S statistic cannot be used without some modifications. The object of this paper is to extend the methodology to a variety of families of
cpfs; to several families of stochastic process laws; and to censored data problems. Further, the authors attempt to present a general framework, within which K-S type tests for nuisance parameter
problems can be constructed.
62G10 Nonparametric hypothesis testing
|
{"url":"http://zbmath.org/?q=an:0683.62026","timestamp":"2014-04-21T10:01:06Z","content_type":null,"content_length":"23044","record_id":"<urn:uuid:9d708bff-ba12-47f3-9ee7-15c212bb9674>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Domain and range?
April 22nd 2008, 04:27 AM #1
Mar 2008
Domain and range?
Also i don't get what does 1-1 mean ?
t(x) = arctan (2x)
I'm not sure if this is right. But i got domain= all real x, where x doesn't = 0. I have no idea about the range.
1-1 probably means "one-to-one", a term you ought to be familiar with.
x = 0 is OK for the domain! arctan(0) = 0 and all is well. The domain is all real x.
The range is (-pi/2, pi/2). That's easy to see if you consider the domain of the inverse of t(x), that is, (1/2) tan (x) ....
April 22nd 2008, 04:45 AM #2
|
{"url":"http://mathhelpforum.com/pre-calculus/35494-domain-range.html","timestamp":"2014-04-17T09:12:09Z","content_type":null,"content_length":"32894","record_id":"<urn:uuid:2caa988c-6db1-4677-9238-1d4353ec5c34>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is Robinson Arithmetic biinterpretable with some theory in LST?
up vote 10 down vote favorite
Let ZFC$^{\text{fin}}$ be ZFC minus the axiom of infinity plus the negation of the axiom of infinity. It is well-known that ZFC$^{\text{fin}}$ is biinterpretable with Peano Arithmetic. In this sense
one could argue that ZFC$^{\text{fin}}$ is PA couched in the language of set theory (ie one nonlogical binary relation, $\in$) rather than the language of arithmetic ($+$, $\cdot$, $0$, $S$). This
gives us some confidence that "there exists an infinite set" -- and the hierarchy of large cardinal axioms beyond -- is an at least somewhat-natural extension of arithmetic.
In precise terms, every theory in this hierarchy proves the consistency of all those before it. In vague terms, each theory in this hierarchy adds "more infiniteness" than those before it.
Does the hierarchy start at PA, or is there a step below it? Robinson Arithmetic is a theory in the language of arithmetic; among its properties are:
1. Robinson Arithmetic is essentially undecidable (as PA and all stronger theories are)
2. PA proves the consistency of Robinson Arithmetic
3. Robinson Arithmetic is finitely axiomatizable
The first point might be considered an argument for why Robinson Arithmetic is part of the same hierarchy as PA/ZFC$^{\text{fin}}$ -- it has enough coding power to express primitive recursion. The
second point shows why Robinson Arithmetic is strictly below PA/ZFC$^{\text{fin}}$ on this hierarchy. The third point explains -- in vague terms -- what sort of "infiniteness" PA/ZFC$^{\text{fin}}$
add to Robinson Arithmetic: it adds infinite collections of axioms.
From PA on up, all theories on the hierarchy are biinterpretable with some theory in the language of set theory.
Question: is Robinson Arithmetic biinterpretable with some theory in the language of set theory?
arithmetic lo.logic computability-theory set-theory
Don't know about Q, but Elementary Arithmetic is also finitely axiomatisable, essentially undecidable, and has a finitary consistency proof. Avigad's Number theory and elementary arithmetic (
citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.105.6509) discusses EA and the set theory of Extensionality, Delta-0 Induction, Pair, Union, Power, and Delta-0 Separation. – Daniel Mehkeri Jan 11
'11 at 5:03
add comment
2 Answers
active oldest votes
This is a refinement of the answer provided by Andres Caicedo.
For weak arithmetics, such as Robinson's $Q$, it is not bi-interpretability, but rather the weaker notion of mutual interpretability that turns out to be the "right" notion to study
[See here for a thorough exposition by Harvey Friedman of various notions of interpretability].
It is known that $Q$ is mutually interpretable with a surprisingly weak set theory known as Adjunctive Set Theory, denoted $AS$, whose only axioms are the following two:
1. Empty Set: $\exists x\forall y\lnot (y\in x)$ 2. Adjunction: $\forall x\forall y\exists z\forall t(t\in z\leftrightarrow (t\in x\vee t=y)) $
up vote 7 The mutual interpretability of $Q$ and $AS$ is a refinement of a joint result of Szmielew and Tarski, who proved that $Q$ is interpretable in $AS$ plus Extensionality. This result was
down vote reproted without proof in the classic 1953 monograph Undecidable Theories of Tarski, Mostowski, and Robinson. A proof was published by Collins and Halpern in 1970. Later work in this
accepted area was made by Montagna and Mancini in 1994, and most recently by Albert Visser in 2009, whose paper below I recommend for references and a short history:.
A. Visser, Cardinal arithmetic in the style of Baron von Münchhausen, Rev. Symb. Log. 2 (2009), no. 3, 570–589
You can find a preprint of the paper here.
Note that since $Q$ is known to be essentially undecidable [i.e., every consistent extension of $Q$ is undecidable], the interpretability of $Q$ in $AS$ implies that AS is essentially
Hi Ali, many thanks for the references! – Andres Caicedo May 28 '11 at 19:28
@Ali, thank you! If I could accept both @Andres' answer and yours I would accept both (since, as you mention, your answer builds on his). For the sake of posterity I'll put the green
checkmark on this answer so people who come across this page know that they need to read both answers to get the whole story. Thank you! – Adam Jun 15 '11 at 21:32
I take it that Adjunction essentially means that the union of any set with a singleton set exists (and singleton sets exist by the empty set axiom). You're right, that is surprisingly
weak... Especially since it does not include extensionality! – Adam Jun 15 '11 at 21:35
@Adam: I am glad to hear that you found the answer useful; your paraphrase of Adjunction is on "on the dot", by the way. – Ali Enayat Jun 15 '11 at 21:59
add comment
I do not know about biinterpretability, but this is too long for a comment, and may be useful towards an answer:
I believe an appropriate set-theoretic version of $Q$ is the theory $Q^+$. First, let $Q^*$ be the theory whose axioms are:
1. Extensionality.
2. There is an empty set.
3. Pairing.
4. Union: $\forall x,y\exists z\forall w(w\in z\leftrightarrow w\in x\lor w\in y)$.
This is a true fragment of the theory of $V_\omega$, and proves all true-in-$V_\omega$ $\Sigma_1$ sentences.
An r.e. set is a $\Sigma_1$ subset of $V_\omega$. If $A$ and its complement (in $V_\omega$) are r.e., $A$ is recursive. There are recursively inseparable r.e. sets, say $A$ and $B$, with
$A$ defined by the $\Sigma_1$ formula $\phi$ and $B$ by $\psi$. That they are recursively inseparable means that they are disjoint, but there is no recursive $C$ containing $A$ and disjoint
up vote 3 from $B$.
down vote
We can define $Q^+$ by adding to $Q^*$ the axiom
5. $\forall x(\lnot\phi(x)\lor\lnot\psi(x))$.
This is a strongly undecidable, essentially undecidable (true) theory.
Any axiomatizable consistent extension $T$ of $Q^*$ is $\Pi_1$-incomplete, i.e., there is a true-in-$V_\omega$ $\Pi_1$ statement that $T$ does not prove. This is the first incompleteness
theorem. (Second incompleteness requires a bit more.)
I do not know who these formulations are due to, I learned them from John Steel.
How embarrassing. Yes, $Q^\star$ is in my 225b notes exactly as you describe it. However I've got something completely different for $Q^+$ (all of the $T^+$ theories were in LST plus
countably many constants, one for each set of HF, and countably many axioms basically amounting to extensionality for those constants). I really like how he teaches recursion theory using
HF instead of $\mathbb N$, although the downside is that there's no textbook to go with it. – Adam Jan 11 '11 at 3:53
Oh, sure. What I am calling $Q^+$ here is what Steel called $Q$ with a super-index $**$. But the LaTeX wasn't compiling for that, so I changed its name. – Andres Caicedo Jan 11 '11 at
Okay, I see how adding (5) makes $Q^\star$ essentially undecidable, and that certainly will have to happen in order for the theory to interpret Robinson Arithmetic. But I'm still a bit
uncertain about how you'd take advantage of that when defining an interpretation. Also, do you get the same theory for any pair of r.e. recursively inseparable sets $A$ and $B$, or might
different r.e. sets give you different theories? – Adam Jan 11 '11 at 4:14
Why is $Q^+$ strongly undecidable? – Amit Kumar Gupta Jan 12 '11 at 2:08
Hi Adam. Theories with different pairs $A,B$ are bi-interpretable, but I am not convinced they are actually equal. I am not sure whether $Q^+$ solves your problem. In a sense, the issue
is that we have very limited resources to work with, it is an interesting question. – Andres Caicedo Jan 12 '11 at 2:35
show 2 more comments
Not the answer you're looking for? Browse other questions tagged arithmetic lo.logic computability-theory set-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/51703/is-robinson-arithmetic-biinterpretable-with-some-theory-in-lst?sort=oldest","timestamp":"2014-04-16T07:53:33Z","content_type":null,"content_length":"71646","record_id":"<urn:uuid:0d3c7dca-f74a-4f32-98a6-feccf49c70a9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cone or conical surface, in mathematics, surface generated by a moving line (the generator) that passes through a given fixed point (the vertex) and continually intersects a given fixed curve (the
directrix). The generator creates two conical surfaces—one above and one below the vertex—called nappes. If the directing curve is a conic section (e.g., a circle or ellipse) the cone is called a
quadric cone. The most common type of cone is the right circular cone, a quadric cone in which the directrix is a circle and the line drawn from the vertex to the center of the circle is
perpendicular to the circle. The generator of a cone in any of its positions is called an element. The solid bounded by a conical surface and a plane (the base) whose intersection with the conical
surface is a closed curve is also called a cone. The altitude of a cone is the perpendicular distance from its vertex to its base. The lateral area is the area of its conical surface. The volume is
equal to one third the product of the altitude and the area of the base. The frustum of a cone is the portion of the cone between the base and a plane parallel to the base of the cone cutting the
cone in two parts.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Mathematics
|
{"url":"http://www.factmonster.com/encyclopedia/science/cone-mathematics.html","timestamp":"2014-04-19T09:48:39Z","content_type":null,"content_length":"20317","record_id":"<urn:uuid:bb9515d5-75c1-4478-b0fd-3beff74d4cbd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Manchester, WA Science Tutor
Find a Manchester, WA Science Tutor
...I have worked as a SAT Prep tutor for over two years now as an independent contractor through a private tutoring company. I have worked with both Public and Private School students in order to
improve their SAT scores. I work with students on their foundational skills (remember fractions?). I also work with students on test taking strategies.
27 Subjects: including biology, reading, algebra 2, algebra 1
...I work with students to familiarize them with the test format and test management strategies. We also work on content areas, reviewing math facts, learning vocabulary, and practicing critical
reading and essay writing. I've tutored all subjects on the ACT since 2003.
32 Subjects: including ACT Science, English, reading, geometry
...I have tutored or taught for the past thirty years. This has provided me with an extensive background for general mathematics. As an instructor, I strive to provide background information and
context to my students so that they not only are able to do the work, but have an understanding of the material as well.
12 Subjects: including organic chemistry, chemistry, geometry, algebra 2
I have over nine years of experience teaching both one-on-one and in classroom settings and love meeting and helping people. In particular, I love working with teenagers and helping them develop
the skills they need to succeed in middle school and high school and to prepare for college. I think I ...
28 Subjects: including ACT Science, physiology, anatomy, ESL/ESOL
...In the classroom, I have helped teach introductory physics classes at the University of Washington and Washington University in St Louis. I also have worked with these students individually on
homework problems or test preparation. As an independent tutor, I have helped students with Algebra/Al...
17 Subjects: including ACT Science, English, chemistry, writing
Related Manchester, WA Tutors
Manchester, WA Accounting Tutors
Manchester, WA ACT Tutors
Manchester, WA Algebra Tutors
Manchester, WA Algebra 2 Tutors
Manchester, WA Calculus Tutors
Manchester, WA Geometry Tutors
Manchester, WA Math Tutors
Manchester, WA Prealgebra Tutors
Manchester, WA Precalculus Tutors
Manchester, WA SAT Tutors
Manchester, WA SAT Math Tutors
Manchester, WA Science Tutors
Manchester, WA Statistics Tutors
Manchester, WA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Manchester_WA_Science_tutors.php","timestamp":"2014-04-18T23:22:47Z","content_type":null,"content_length":"24084","record_id":"<urn:uuid:21c565e3-f6da-40bb-a2c0-dacf53689464>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Questions on n-Categories and Topology
Posted by John Baez
Here are some questions on n-categories and topology from Bruce Westbury. I’ll post a reply later — but why don’t some of you take a crack at them first?
– guest post by Bruce Westbury –
Now that we have several definitions of n-categories it seems to me that the next stage is to try and prove some results. The big projects that JB wrote about are:
If we are going to do anything with n-categories then a basic construction is that $n$-categories should be the objects of an ($n$+1)-category. This has been done by Simpson for the Tamsamani
definition, and by Makkai for a variant of the Baez-Dolan definition (in the case $n = \infty$). Is this the current state-of-play?
Another basic idea is that we want to have a notion of equivalence. Do the experts know how to do this? and for which definitions of $n$-category?
Moving on and looking at the big projects; if we have positive answers to the above then the only big project that now has a precise statement is the Stabilisation Conjecture. The only other
ingredient is suspension.
The other three big projects all have “$n$-categories with duals” in the statement. So it seems to me that $n$-categories with duals are more fundamental than $n$-categories. However as far as I am
aware there has been no work done on the definition.
Each of the big projects (other than the Stabilisation Conjecture) also requires the construction of certain $n$-categories with duals. So my next question is whether there are constructions for any
of the following $n$-categories:
• the fundamental $n$-category of a topological space
• the $n$-categories of tangles
• the $n$-categories of cobordisms
• the $n$-categories of $n$-Hilbert spaces
All of these should be $n$-categories with duals so it would help in thinking about what the definition might be if there were positive answers to any of the above. Of course it would be even better
if a positive answer to any of the above also had a built in construction of duals.
Another line of thought is that $n$-groupoids are special $n$-categories with duals so if we had a definition of $n$-categories with duals then it should be clear what an $n$-groupoid is. We would
just say that certain structure maps are equivalences. My next question is whether there is already a definition of an $n$-groupoid? and for which definitions of $n$-category?
Moving on again to the definition of an $n$-category with duals and about what is known in low dimensional cases: there is no difficulty with $n=1$. The case $n=2$ I believe I can do by combining the
definition of a bicategory and the definition of a spherical category. However I have not written anything down. If anybody has any doubts about this then of course I have some incentive to try and
write down a definition. For $n=3$ the only information I have is that JB in his paper with Langford on 2-tangles gave, in effect, a definition of a strict 3-category with duals. I say “in effect”
because his semistrict 3-category had one object.
The other case that we should start with is the definition of a strict n-category with strict duals. Again I believe I know what this is but I have not written anything down and again if anybody has
any doubts about this then of course I have some incentive to try and write down a definition.
This, as far as I know, reflects the current state of play. If you know better then I would be interested to hear about it.
Finally it seems to me there are two possible approaches to the definition of an $n$-category with duals. When I started thinking about this I was thinking of rewriting the definition of an $n$
-category building in duals from the ground up. However thinking about JB’s definition of a strict 3-category with duals has led me down a different route. The idea of this approach is that there is
an obvious forgetful “functor” from $n$-categories with duals to $n$-categories. Then the aim would be to construct a “left adjoint”. This would then give a “monad” on $n$-categories whose “algebras”
would be n-categories with duals. The idea for constructing the “left adjoint” is to use tangles. If this approach succeeded then the Tangle Hypothesis would be true by construction.
Posted at May 21, 2007 4:37 AM UTC
Re: Questions on n-Categories and Topology
So my next question is whether there are constructions for any of the following n-categories:
the fundamental n-category of a topological space
this is the only one I can really comment on, so here goes.
That fundamental $n$-category is going to be an $n$-groupoid (weak, of course), as I imagine you know. The best result I know about is that homotopy types are modelled by groupoids enriched in
simplicial sets. There is not to my knowledge a construction $\Pi:Top \to sSet-Gpd.$
The question is, how do we relate $n$-groupoids as defined by the various methods and groupoids enriched in $n$-coskeletal simplicial spaces? (fingers crossed on the indexing there)
Posted by: David Roberts on May 21, 2007 7:04 AM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
How fundamental n-categories are defined may depend on whose definition of n-category you’re using, of course. In the style of definition of n-category attributed to me (as described for example in
the guidebook by Eugenia Cheng and Aaron Lauda), the fundamental n-category functor is defined at more or less the same time as the notion of (n+1)-category, as part of an inductive process. I’m less
familiar with how other people use their definitions toward this problem.
I think Eugenia and Nick Gurski were trying to modify this type of definition to define n-categories of cobordisms – see here for one version of their working draft (I think there are others).
Posted by: Todd Trimble on May 21, 2007 1:38 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
But isn’t the “fundamental $n$-category” actually an $n$-groupoid, and aren’t for $n$-groupoids things much clearer. Or even: clear?
Posted by: urs on May 21, 2007 3:52 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Yes, it’s an n-groupoid. Sorry, I’m not following your question – are what things clearer or clear?
Posted by: Todd Trimble on May 21, 2007 4:39 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Sorry, probably I didn’t get the point. What I wanted to say, though, is:
Isn’t it clear that and how the fundamental $n$-groupoid of a space is an $\omega$-groupoid?
Posted by: urs on May 21, 2007 5:20 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
I’m not sure it’s clear to everyone (including myself) – I do note that it was one of the questions on Bruce Westbury’s list, unless I’ve misinterpreted his meaning.
Let me see if I understand what you’re saying (and maybe put it stupidly to elicit a response) – are you saying that for each of the dozen or so definitions of n-category, it’s clear how to go about
defining the corresponding notion of fundamental n-groupoid? If so, has someone written down details? If no one has (but it’s clear anyway), can someone be troubled to write them down?
A follow-up question: comparing the definitions of n-category is hard work. Has there been much work on comparing the corresponding notions of n-groupoid? (Is the ‘the’ in ‘the fundamental
n-groupoid’ rock-solid?)
Posted by: Todd Trimble on May 21, 2007 6:04 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
I’m not sure it’s clear to everyone
Okay, in case of doubt here, you should assume that I am misremembering something.
I went back to John’s slides on the homotopy hypotheses.
I seemed to recall that this included the statement that we can think of the fundamental $n$-groupoid as a Kan complex and that this is also called an $\omega$-groupoid.
But possibly I am mixing things up.
Posted by: urs on May 21, 2007 6:20 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
No, it’s all straightened out now. I’m sure you were using the word in a standard sense, but obviously I had something else in mind. Thanks for clarifying!
Posted by: Todd Trimble on May 21, 2007 8:55 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Maybe Urs is saying something sort of like this:
If we only wish to study $n$-groupoids, as opposed to more general $n$-categories, we should really use the simplicial approach, because a vast amount has been worked out using this approach, and
sophisticated tools are available. So, we can define $\infty$-groupoids to be Kan complexes, and define $n$-groupoids to be Kan complexes with a special property.
If you’re wondering what a Kan complex is, and what this ‘special property’ is, try pages 16–18 here. For a more thorough introduction to Kan complexes, start with May’s old book Simplicial Methods
in Algebraic Topology, then tackle Goerss and Jardine’s Simplicial Homotopy Theory… and by the time you’re done, maybe Joyal and Tierney’s book on the subject will be out!
Similarly, if we only want to study $(n,1)$-categories, the simplicial approach is also very well-developed. An $(n,1)$-category is an $n$-category where all $j$-morphisms have weak inverses for $j$
> $1$. Joyal calls simplicial $(\infty,1)$-categories ‘quasicategories’. A quasicategory with the same special property I just alluded to is an $(n,1)$-category.
Joyal’s book on quasicategories should eventually appear as part of the proceedings of the IMA workshop on $n$-Categories: Foundations and Applications. This book will be a bit like Quasicategories
for the Working $\infty$-Mathematician. For now, you can listen to Joyal’s lectures on quasicategories, or read about them in Lurie’s paper on $\infty$-topoi.
The fully general simplicial $\infty$-categories are being studied by Street, Verity and others — see this and this. So far, I find this theory much scarier than Kan complexes or quasicategories.
Eventually it should be very nice.
In particular, just as Kan complexes are special quasicategories, quasicategories are special simplicial $\infty$-categories! So, we have three nested theories, with more results about the simpler
and historically earlier ones — but all three are part of a compatible package! And, with this package it should be very easy to prove the Homotopy Hypothesis: in a sense it’s already done.
Posted by: John Baez on May 21, 2007 7:18 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
I was very struck by Lurie’s comments that rather than considering an inductive definition of $n$-categories (which ends up being fundamentally circular), one can consider an inductive definition of
$(\infty,n)$ categories given that we can define an $\infty$-category to be a simplicial set. He attributes this idea to Tamsamani. The question, then, is there any particular reason we’d ever need
an $n$-category as opposed to an $(\infty,n)$-category?
Posted by: Aaron Bergman on May 21, 2007 7:41 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Aaron wrote:
The question, then, is there any particular reason we’d ever need an $n$-category as opposed to an $(\infty,n)$-category?
Since an $n$-category is a special sort of $(\infty,n)$-category, the answer to this question is clearly:
Posted by: John Baez on May 21, 2007 9:34 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Perhaps you’ll like this formulation better, then. Do you think it is likely that $(\infty,n)$ categories are a better behaved notion than n-categories?
Posted by: Aaron Bergman on May 21, 2007 9:52 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
I’ve never quite been able to understand that comment in Lurie’s paper. That is to say, I know what it means to enrich a category over say vector spaces. I even think I have some idea what it means
to enrich an $(\infty, 1)$-category over something like vector spaces. However, I have no idea what it means to enrich an $(\infty, 1)$-category over another $(\infty, 1)$-category.
Posted by: Noah Snyder on May 22, 2007 6:51 AM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Maybe Urs is saying something sort of like this:
If we only wish to study $n$-groupoids, as opposed to more general n-categories, we should really use the simplicial approach
I think the trouble was that what I was really thinking was that if we wish to study $n$-groupoids only, then there is the simplicial approach and just that.
Which is wrong, it seems. But might be right “for practical purposes”. :-)
Posted by: urs on May 21, 2007 7:56 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
David Roberts wrote:
The best result I know about is that homotopy types are modelled by groupoids enriched in simplicial sets. There is not to my knowledge a construction $\Pi: Top \to sSet-Gpd$
How could we know that homotopy types are modelled by groupoids enriched in simplicial sets without knowing a Quillen equivalence between spaces and groupoids enriched in simplicial sets?
There should be some such Quillen equivalence…
First, we have a functor
$Sing: Top \to s Set$
sending each space $X$ to its ‘singular simplicial set’ $Sing(X)$. Going back we have ‘geometric realization’
$|\cdot| : sSet \to Top$
and these two functors form a Quillen equivalence.
Next, I hope any simplicial set has a kind of ‘path groupoid’ which is a groupoid enriched over simplicial sets. I think I’ve seen people do this, but I don’t know quite how it goes. I have some
guesses, but I won’t bore you with them. I hope this gives a functor
$P : s Set \to s Set-Gpd$
which for some model structure on $s Set-Gpd$ extends to a Quillen equivalence.
So, composing these I’d hope to get a functor
$\Pi: Top \to s Set-Gpd$
which is part of a Quillen equivalence.
Do any experts lurking out there know how to fill in the holes in this idea?
One could also try to start with
$\Pi: Top \to Top-Gpd$
and somehow apply the Quillen equivalence between $Top$ and $sSet$ to get
$\Pi: s Set \to s Set-Gpd$
But, I don’t know how to do this, either.
Posted by: John Baez on May 23, 2007 4:20 AM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
We had a long discussion about the fundamental $n$-category (with duals) of a stratified space in the comments of this post.
Should we expect an adjunction between $n$-categories with duals and $n$-groupoids?
If so, would this have something to do with a left adjoint of the forgetful functor from the category of spaces to the category of stratified spaces?
Posted by: David Corfield on May 21, 2007 9:06 AM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
left adjoint of the forgetful functor from the category of spaces to the category of stratified spaces
Whoops, I meant of course
left adjoint of the forgetful functor from the category of stratified spaces to the category of spaces
I suppose a ‘trivial stratification’ functor might do.
Posted by: David Corfield on May 21, 2007 6:48 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Moving on again to the definition of an n-category with duals…The case n=2 I believe I can do by combining the definition of a bicategory and the definition of a spherical category. However I
have not written anything down. If anybody has any doubts about this then of course I have some incentive to try and write down a definition.
I have thought a bit about 2-categories with duals, ever since I tried to prove that eg. 2Hilb is a 2-category with duals, and it invited me to adopt a slightly different viewpoint than the one
presented by John and Laurel in their 2-tangles paper.
If one follows the lines you suggest, that is, if we take the definition of a spherical category and “2-categorify” it (2-category=bicategory), you’ll end up saying that a 2-category with duals is a
2-category $C$ equipped with a weak 2-functor $* : C^op \rightarrow C$, with a monoidal structure, and some other things satisfying a whole bunch of identities.
I don’t like that way of thinking, because in the examples where our objects are categories of some kind, and the morphisms are functors - like 2Hilb - the $*$-operation on morphisms is going to be
“take the adjoint”.
And that’s where I differ perhaps with many here at the n-cafe, and hold a philosophical objection : although in the case of 2Hilb an adjoint (a) does exist and (b) is unique even up to unique
natural isomorphism, there is no canonical definition for it.
And so I am very hesitant indeed to think of the $*$-operation as (even a weak) 2-functor, because I would like to think that a 2-functor must at least have a meaningful defintion and not be defined
by making arbitrary choices. Even if you insisted on making arbitrary choices for the adjoints, you’d run into a set-theoretic difficulty : you couldn’t use the axiom of choice for example, because
the collection of all 2-Hilbert spaces doesn’t even form a set!
Here’s the way I would prefer to think of a 2-category with duals (I mentioned this here). I would prefer to think of the $*$-operation on the 2-morphisms as a structure , the $*$-operation on the
1-morphisms as a property , and I’m unsure about the $*$-operation on objects :-)
In other words : I would prefer to work locally . You only say that for every morphism $\sigma : A \rightarrow B$, there exists a morphism $\sigma^* : B \rightarrow A$ satisfying etc. etc. This
doesn’t take away the whole point of 2-categories with duals either : they would still be just as useful for 2-tangles, because if you think about it, one only ever needs to work locally . There is
no need to use brute force and choose adjoints for every possible morphism, right before we even begin, if in our applications we’re only going to work with some small region of our 2-category.
Posted by: Bruce Bartlett on May 21, 2007 12:07 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Bruce suggested a
a slightly different viewpoint
on $n$-categories with duals: don’t specify duals, but just assert their existence.
I haven’t thought enough about this issue to make a concrete comment, but I notice that in the world of $n$-categories we run again and again into this very dichotomy of two possible viewpoints:
either we assert that things exist (for instance composites of morphisms), or we choose particular representations (a particular composite) and then are left with these choices and a bunch of
coherence laws satisfied by them.
Posted by: urs on May 21, 2007 1:27 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Yes, I agree this dichotomy comes up a lot when one thinks about n-categories. It’s basically the contrast between a completely algebraic approach and approaches which have some form of non-algebraic
flavour to them. For what it’s worth (and admitting that I don’t know half as much as I pretend to about n-categories), I would tend to vote for an approach which is as algebraic as possible… up to
the tipping point where this principle becomes untenable, and one is forced to compromise :-)
I think the example of adjoints is a good testing-ground to contrast these various approaches. How are we to think of them? One advantage of thinking of them as a a property, and not as a structure,
is that it allows one to choose the appropriate adjoint for any given problem at hand, rather than being forced to use the one someone arbitrarily chose for you right in the beginning.
Posted by: Bruce Bartlett on May 21, 2007 1:52 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Bruce Bartlett writes:
For what it’s worth (and admitting that I don’t know half as much as I pretend to about $n$-categories), I would tend to vote for an approach which is as algebraic as possible… up to the tipping
point where this principle becomes untenable, and one is forced to compromise :-)
The algebraic approach is very tempting for low-dimensional calculations. It tends to get tiring as you move to higher dimensions, since it tends to make you explicitly keep track of coherence laws.
Eventually this becomes too exhausting. Perhaps if all the coherence laws were hidden inside some sophisticated black box one could feel happy knowing they’re there, but not needing to peek inside
very often.
For what it’s worth, all the simplicial approaches I mentioned are completely non-algebraic — unless you go ahead and choose operations that supply fillers for all the relevant horns.
The homotopy theorists, who have proved more and harder theorems than any of us $n$-category theorists, have demonstrated the practicality of simplicial, non-algebraic approaches.
But, I think people should develop all sorts of different philosophies and try all sorts of different approaches! Only time and experimentation will show which are the most fruitful. It’s way too
early to start weeding out what might be promising candidates.
Posted by: John Baez on May 21, 2007 7:48 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
For what it’s worth, all the simplicial approaches I mentioned are completely non-algebraic — unless you go ahead and choose operations that supply fillers for all the relevant horns.
I’ve been under the impression that this was sort of morally wrong. There isn’t a unique thing that fills in the various operations, and to choose one is to impose too much structure.
Posted by: Aaron Bergman on May 21, 2007 9:57 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
Aaron wrote:
I’ve been under the impression that this was sort of morally wrong. There isn’t a unique thing that fills in the various operations, and to choose one is to impose too much structure.
I agree completely! The price you pay for equipping your gadgets with all this structure is demanding that your morphisms between gadgets preserve all this structure only up to something — where this
something needs to satisfy a bunch of complicated laws of its own. And then you have to do the same for 2-morphisms between morphisms of gadgets, and so on.
So, Bruce Bartlett should explain why he likes the algebraic approach. I know one thing: it comfortably resembles ‘traditional algebra’, where we have operations satisfying equational laws. I don’t
know if he has a stronger reason.
Posted by: John Baez on May 21, 2007 11:16 PM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
One reason for an algebraic approach might be to shed light on algebraic models for homotopy n-types, among other things.
For example, even if Joyal and Tierney hadn’t given their characterization of homotopy 3-types in terms of Gray groupoids, it might have been possible to discover this anyway by applying coherence of
algebraic 3-categories (their 3-equivalence to Gray categories) to the algebraic fundamental 3-groupoid. It’s possible that further development of coherence theory for algebraic n-categories would
yield associated insights into homotopy n-types, and vice-versa.
As we at the n-category cafe know, there has always been a dialectic between homotopy theory and higher-dimensional algebra, beginning with Stasheff’s work on homotopy types of loop spaces,
continuing more recently with iterated loop spaces and iterated monoidal categories, and still more recently with Batanin’s investigations into higher Eckmann-Hilton laws.
Here’s to a future where the inner logic of coherence theory and homotopy theory play off one another, with rich payoff for both!
Posted by: Todd Trimble on May 22, 2007 12:25 AM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
So, Bruce Bartlett should explain why he likes the algebraic approach.
Perhaps I prefer a more algebraic approach… but the point of my post was really to argue the opposite! Namely, in the context of defining a “2-category with duals”, a hard-line algebraic approach
seems the wrong one to me. I explained the reason already : good examples of 2-categories with duals are supposed to be things like 2Hilb where where the objects are categories of some sort, and the
morphisms are functors of some sort. And the duality, on the level of morphisms, sends a functor to its adjoint.
But oftentimes, the adjoints aren’t naturally defined (they only ‘exist’ up to unique natural isomorphism), so to me, it just wouldn’t be cricket to think of the duality operation as a 2-functor
(1)$* : C \rightarrow C$
where $C$ is our candidate ‘2-category with duals’. It’s the philosophical objection that ‘one shouldn’t have to make arbitrary choices when defining something’.
I was saying this stuff because it seemed precisely the opposite approach compared to
(a) Bruce Westbury’s viewpoint above, where he would define a 2-category with duals by “combining the definition of a bicategory and the definition of a spherical category”, and
(b) John’s approach in the 2-tangles paper, where the duality on the 2-category is explicitly regarded as a structure and not as a property . Recall Definition 10,
A monoidal 2-category with duals is a monoidal 2-category equipped with the following structures : …
2. For every morphism $f : A \rightarrow B$ there is a morphism $f^* : B \rightarrow A$ called the dual of $f$, and 2-morphisms $i_f : 1_A \Rightarrow f f^*$ and $e_f : f^* f \Rightarrow 1_B$
called the unit and counit respectively
The dramatic boldface is added by me ;-) The point is that one is specifying, right at the very beginning, the adjoint of a morphism, etc. etc. It is an algebraic approach. One could get away with it
there because one was only dealing with a very strict situation. But I’m expressing my doubts whether this is the right notion of a general “monoidal 2-category with duals”.
Practically all I’ve learnt about higher categories has come from John’s writings… and indeed I had the impression that in fact it was John who harboured his own secret algebraic inclinations, from
which I subsequently followed his cue :-) After all, in the last section of John’s excellent Fields talk (which somehow I managed to miss at the time
— So, why not just use simplicial methods, and forget about ‘globular’ n-categories?
Bad answer : because we always liked globular n-categories.
Better answer : globular methods clarify the structure of $\infty$-categories, and thus $\infty$-groupoids, and thus homotopy types - given the homotopy hypothesis. —
Admittedly, that’s just a pro-globular paragraph and not necessarily a pro-algebraic paragraph. (By the way, I’m just poking fun with these “pro-globular” and “pro-algebraic” catchwords. I know that
John is not “pro-anything”!)
In the remaining slides, John talks about how, in the globular approach, one obtains all sorts of algebraic operations, like ‘composition, ‘whiskering’, ‘braiding’, etc., which might shed light on
the combinatorics of homotopy types. Perhaps I missed the point, but I took this to mean that John was saying that it can be worthwhile to think of things in a more algebraic way… seeing as we
already understand them quite well in a non-algebraic (for instance, simplicial) way.
Posted by: Bruce Bartlett on May 22, 2007 12:39 AM | Permalink | Reply to this
Re: Questions on n-Categories and Topology
I’ve been putting off replying to Bruce’s questions, since it’s an intimidating task: summarizing the work of dozens of mathematicians on some highly technical projects that haven’t reached any sort
of completion. It’s like describing an enormous fractal coastline.
But let me give it a try. Instead of trying to review all existing approaches to these hypotheses:
I’ll just sketch some promising work so far, and mention two projects one might try.
To formalize and prove any one of these hypotheses, we need an approach to $n$-categories and $\infty$-categories — or at least $n$-groupoids and $\infty$-groupoids, or $n$-categories and $\infty$
-categories with duals.
Here I’ll focus on the simplicial and the globular approaches. I won’t attempt to cover the promising multisimplicial approach of Tamsamani and Simpson, or the opetopic approach.
In the simplicial approach, a version of the homotopy hypothesis has already been proved, since simplicial $\infty$-groupoids are just Kan complexes, and the category of Kan complexes is known to be
Quillen equivalent to the category of topological spaces. For details, try:
• Goerss and Jardine, Simplicial Homotopy Theory.
• Mark Hovey, Model Categories.
The technology of model categories always comes as a rude shock to people who enter this subject by becoming interested in $n$-categories — even the very definition of a model category takes a while
to digest — but it’s important. One should think of a model category as a very nice way of presenting a simplicially enriched category — which you can think of as an $\infty$-category with all its
$j$-morphisms for $j > 1$ being invertible. For details, try:
• William G. Dwyer and Daniel M. Kan, Simplicial localizations of categories, J. Pure Appl. Algebra 17 (1980), 267-284.
• William G. Dwyer and Daniel M. Kan, Calculating simplicial localizations, J. Pure Appl. Algebra 18 (1980), 17-35.
• William G. Dwyer and Daniel M. Kan, Function complexes in homotopical algebra, Topology 19 (1980), 427-440.
Starting from the homotopy hypothesis, the best approach to the tangle hypothesis seems to be generalizing ‘the fundamental $n$-groupoid of a space’ to ‘the fundamental $n$-groupoid with duals of a
stratified space’. The reason is that there’s a deep relation between tangles and certain stratified spaces. For details, try our earlier discussion here.
Given all this, it might be a nice project to invent a simplicial concept of $n$-category with duals!
There’s already a simplicial approach to $\infty$-categories:
• Dominic Verity, Weak complicial sets, a simplicial weak omega-category theory. Part I: basic homotopy theory.
It’s been much more developed in the case of $\infty$-categories where all $j$-morphisms are invertible for $j > 1$; these are called ‘quasicategories’ by Joyal:
• André Joyal, Graduate course on quasicategories.
However, $n$-categories with duals seem quite undeveloped in the simplicial approach. Here the globular approach seems to shine:
• Eugenia Cheng, Graduate course on $n$-categories with duals and TQFT.
• Eugenia Cheng, An $\omega$-category with all duals is an $\omega$-groupoid.
(The latter paper formalizes an important issue that has vexed Jim Dolan and I for a long time: it seems that only with a dimensional cutoff is the concept of ‘$n$-category with duals’ different from
that of an $n$-groupoid!)
Batanin’s approach to globular $n$-categories has a lot of momentum going for it. Unfortunately, the theory of globular $n$-groupoids is less developed than that of simplicial $n$-groupoids. Some
good work has been done, but more needs to be done. For the state of the art, try these:
• Denis-Charles Cisinski, Batanin higher groupoids and homotopy types.
• Eugenia Cheng, Batanin $\omega$-groupoids and the homotopy hypothesis.
This suggests another nice project: finish proving the homotopy hypothesis for globular $\infty$-groupoids, and then define a fundamental $n$-category with duals for a stratified space!
There’s a lot more to say, but I hope this helps a bit.
Posted by: John Baez on May 28, 2007 5:25 PM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/category/2007/05/questions_on_ncategories_and_t.html","timestamp":"2014-04-21T01:13:01Z","content_type":null,"content_length":"97061","record_id":"<urn:uuid:19b200dd-0bbf-4687-8ad2-257da49f03b1>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Adaptive Prediction-Correction Method for Solving Large-Scale Nonlinear Systems of Monotone Equations with Applications
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 619123, 13 pages
Research Article
An Adaptive Prediction-Correction Method for Solving Large-Scale Nonlinear Systems of Monotone Equations with Applications
^1School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou 341000, China
^2School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
^3Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong
Received 21 February 2013; Accepted 10 April 2013
Academic Editor: Guoyin Li
Copyright © 2013 Gaohang Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Combining multivariate spectral gradient method with projection scheme, this paper presents an adaptive prediction-correction method for solving large-scale nonlinear systems of monotone equations.
The proposed method possesses some favorable properties: (1) it is progressive step by step, that is, the distance between iterates and the solution set is decreasing monotonically; (2) global
convergence result is independent of the merit function and its Lipschitz continuity; (3) it is a derivative-free method and could be applied for solving large-scale nonsmooth equations due to its
lower storage requirement. Preliminary numerical results show that the proposed method is very effective. Some practical applications of the proposed method are demonstrated and tested on sparse
signal reconstruction, compressed sensing, and image deconvolution problems.
1. Introduction
Considering the problem to find solutions of the following nonlinear monotone equations: where is a continuous and monotone, that is, for all .
Nonlinear monotone equations arise in many practical applications such as ballistic trajectory computation [1] and vibration systems [2], the first-order necessary condition of the unconstrained
convex optimization problem, and the subproblems in the generalized proximal algorithms with Bregman distances [3]. Moreover, we can convert some monotone variational inequality into systems of
nonlinear monotone equations by means of fixed point maps or normal maps [4] if the underlying function satisfies some coercive conditions. Solodov and Svaiter [5] proposed a projection method for
solving (1). A nice property of the projection method is that the whole sequence of iterates is always globally convergent to a solution of the system without any additional regularity assumptions.
Moreover, Zhang and Zhou [6] presented a spectral gradient projection (SG) method for solving systems of monotone equations which combines a modified spectral gradient method and projection method.
This method is shown to be globally convergent if the nonlinear monotone equations is Lipschitz continuous. Xiao et al. [7] proposed a spectral gradient method to minimize a nonsmooth minimization
problem, arising from spare solution recovery in compressed sensing, consisting of a least-squares data-fitting term and a -norm regularization term. This problem is firstly formulated as a convex
quadratic program (QP) problem and then reformulated to an equivalent nonlinear monotone equation. Furthermore, Yin et al. [8] developed a nonlinear conjugate gradient method for -norm regularization
problems in compressed sensing. Yu [9, 10] extended the spectral gradient method and conjugate gradient-type method to solve large-scale nonlinear system of equations, respectively. Recently, the
authors in [11] proposed a multivariate spectral gradient projection method for solving nonlinear monotone equations with convex constraints. Numerical results show that multivariate spectral
gradient method (MSG) could improve its performance very well.
Following this line, based on multivariate spectral gradient method (MSG), we present an adaptive prediction-correction method for solving nonlinear monotone equations (1) in the next section. Its
global convergence result is established, which is independent of the merit function and Lipschitz continuity. Section 3 presents some numerical experiments to demonstrate and test its practical
performance on compressed sensing and image deconvolution problems. Finally, we have a conclusion section.
2. Adaptive Prediction-Correction Method
Considering the projection method [5] for solving nonlinear monotone equations (1), suppose that we have obtained a direction . By performing some kind of line search procedure along the direction ,
a point can be computed such that By the monotonicity of , for any such that , we have Thus, the hyperplane strictly separates the current iterate from solutions of the systems of monotone equations.
Once we get the separating hyperplane, the next iterate is computed by projecting on it.
Recalling the multivariate spectral gradient (MSG) method [12] for minimization problem , its iterative formula is defined by , where is the gradient of at and is obtained by minimizing with respect
to , where . In particular, when has positive definite diagonal Hessian matrix, multivariate spectral gradient method will be convergent quadratically [12].
Let the column of and denoted by and , respectively. Combining multivariate spectral gradient method with projection scheme, we can present an adaptive prediction-correction method for solving
monotone equations (1) as follows.
Algorithm 1 (multivariate spectral gradient (MSG) method). Given , , , , , . Set .
Step 1. If , stop.
Step 2. (a) If , set .
(b) else if , then set ; otherwise set for , where , .
(c) else if or , set for .
Set .
Step 3 (prediction step). Compute step length , set , where with being the smallest nonnegative integer such that
Step 4 (correction step). Compute
Step 5. Set and go to Step 1.
By using multivariate spectral gradient method, we obtain prediction sequence , and then we get correction sequence via projection. It follows from (17) that will be more close to the solution than ,
that is, the sequence makes progress iterate by iterate. From Step 2(c), we have
In what follows, we assume that for all ; otherwise we have got the solution of the problem (1). The following lemma states that Algorithm 1 is well defined.
Lemma 2. There exists a nonnegative number satisfying (6) for all .
Proof. Suppose that there exists a such that (6) is not satisfied for any nonnegative integer , that is, Let and using the continuity of yields From Steps 1, 2, and 5, we have Thus, The last
inequality contradicts (10). Hence the statement is proved.
Lemma 3. Let and be any sequence generated by Algorithm 1. Suppose that is monotone and that the solution set of (1) is not empty, then and are both bounded. Furthermore, it holds that
Proof. From (6), we have Let be an arbitrary point such that . Taking account of the monotonicity of , we have From (7), (14), and (16), it follows that Hence the sequence is decreasing and
convergent; moreover, the sequence is bounded. Since the is continuous, there exists a constant such that By the Cauchy-Schwarz inequality, the monotonicity of and (15), we have From (18) and (19),
we obtain that is also bounded. It follows from (17) and (18) that which implies From (7), using the Cauchy-Schwarz inequality, we obtain that Thus .
The proof is complete.
Now we can establish the global convergence of Algorithm 1.
Theorem 4. Let be generated by Algorithm 1; then converges to an such that .
Proof. Since , it follows from Lemma 3 that From (8) and (18), it holds that is bounded.
Now we consider the following two possible cases:(i).(ii).
If (i) holds, from (8), we have . By the continuity of and the boundedness of , it is clear that the sequence has some accumulation point such that . From (17), we also have that the sequence
converges. Therefore, converges to .
If (ii) holds, from (8), we have . By (23), it holds that By the line search rule, we have for all sufficiently large, will not satisfy (6). This means Since the sequences , are bounded, we choose a
subsequence, let in (25), we obtain that where are limits of corresponding subsequences. On the other hand, by (8), it holds that which contradicts (26). Hence, is impossible.
The proof is complete.
3. Numerical Experiments
In this section, we report some preliminary numerical experiments to test our algorithms with comparison to spectral gradient projection method [6]. Firstly, in Section 3.1 we test these algorithms
on solving nonlinear systems of monotone equations. Secondly, in Section 3.2, we apply HSG-V algorithm to solve -norm regularization problem arising from compressed sensing. All of numerical
experiments were performed under Windows XP and MATLAB 7.0 running on a personal computer with an Intel Core 2 Duo CPU at 2.2GHz and 2GB of memory.
3.1. Test on Nonlinear Systems of Monotone Equations
We test the performance of our algorithms for solving some monotone equations (see details in the appendix). The termination condition is . The parameters are specified as follows. For MSG method, we
set . In Step 2, the parameter is chosen in the following way:
Firstly, we test the performance of the MSG method on the Problem 1 with , the initial point . Figure 1 displays the performance of MSG method for Problem 1 which indicates that prediction sequences
are better than correction sequences at most time. Taking this into account, we relax the MSG method such that Step 4 in Algorithm 1 is replaced by the following: if mod,eslse,end.
In this case, we refer to this modification as “MSG-V” method. When , the above algorithm will reduce to Algorithm 1. The performance of those methods on the Problem (1) is shown in Figure 1, from
which we can see that the MSG-V method is preferable quite frequently to the SG method while it also outperforms the MSG method. Furthermore, motivated to accelerate the performance of MSG-V method,
we present a hybrid spectral gradient (HSG-V) algorithm. The main idea of the HSG-V algorithm is to run MSG-V algorithm when for ; otherwise switch to spectral gradient projection (SG) method.
And then we compare the performance of MSG method, MSG-V method, and HSG-V method with the spectral gradient projection (SG) method in [6] on test problems with different initial points. We set , ,
in the spectral gradient projection (SG) method in [6], and for MSG-V method and HSG-V method.
Numerical results are shown in Tables 1, 2, 3, 4, 5, and 6 with the form NI/NF/T/BK, where we report the dimension of the problem (), the initial points (Init), the number of iteration (NI), the
number of function evaluations (NF), and the CPU time (Time) in seconds and the number of backtracking (BK). The symbol “F” denotes that the method fails for this test problem, or the number of the
iterations is greater than 10000.
As we can see from Tables 1–6 that the HSG-V algorithm is preferable quite frequently to the SG method and also outperforms the MSG algorithm and MSG-V algorithm, since it can solve about and of the
problems with the best time and the smallest number of function evaluations, respectively. We also find that the SG algorithm seems more sensitive to the initial points.
Figure 2 shows the performance of these algorithms relative to the number of function evaluations and CPU time, respectively, which were evaluated using the profiles of Dolan and Moré [13]. That is,
for each algorithm, we plot the fraction of problems for which the method is within a factor of the smallest number of function evaluations/CPU time. Clearly, the left side of the figure gives the
percentage of the test problems for which a method is the best one according to the number of function evaluations or CPU time, respectively. As we can see from Figure 2, “HSG-V” algorithm has the
best performance.
3.2. Test on -Norm Regularization Problem in Compressed Sensing
There has been considerable interest in solving the -norm regularized least-square problem where is a linear operator, is an observation, and is a nonnegative parameter. Equation (29) mainly appears
in compressed sensing: an emerging methodology in digital signal processing and has attracted intensive research activities over the past few years. Compressed sensing is based on the fact that if
original signal is sparse or approximately sparse in some orthogonal basis, then an exact restoration can be produced by solving (29).
Recently, Figueiredo et al. [14] proposed gradient projection method for sparse reconstruction (GPSR). The first key step of GPSR method is to express (29) as a quadratic program. For any it can be
formulated as , , , where , and for with . We thus have , where is the vector consisting of ones. Hence (29) can be rewritten as the following quadratic program: Furthermore, from [14], (30) can be
written in following form where It is obvious that is a positive semidefinite matrix, hence, (30) is a convex QP problem. Figueiredo et al. [14] proposed a gradient projection method with BB step
length for solving this problem.
Xiao et al. [7] indicated that the QP problem (30) is equivalent to the linear complementary problem: find such that It is obvious that is a solution of (33) if and only if it is a solution of the
following nonlinear systems of equation The function is vector valued, and the “min” is interpreted as componentwise minimum. Xiao et al. [7] proved that is monotone. Hence, (34) can be solved
effectively by the HSG-V algorithm.
Firstly, we consider a typical CS scenario that goal is to reconstruct a length- sparse signal from observations. We measure the quality of restoration by means of squared error (MSE) to the original
signal , that is, where is the restored signal. We test a small size signal with , , and the original contains randomly nonzero elements. is the Gaussian matrix which is generated by command in
MATLAB. In this test, the measurement is usually contaminated by noise, that is, where is the Gaussian noise distributed as . The parameters are taken as , , , , , is forced in decrease as the
measure of [14]. To get better quality estimated signals, the process is terminated when the relative change of the objective function is below , that is, where denotes the function value at .
Figures 3 and 4 report the results of HSG-V for a signal sparse reconstruction from its limited measurement. Comparing the first and last plot in Figure 3, we can find that the original sparse signal
is restored almost exactly from the limited measurement. From the right plot in Figure 4, we observe that all the blue dots are circled by the red circles, which shows that the original signal has
been found almost exactly. All together, this simple experiment shows that HSG-V algorithms perform well, and it is an efficient method to denoise sparse signals.
In the next experiment, we compare the performance of our algorithm with the SGCS algorithm for image deconvolution, in which is a partial DWT matrix whose rows are chosen randomly from DWT matrix.
To measure the quality of restoration, we use the SNR (signal to noise ratio) defined as . Figure 5 shows the original test images, and Figure 6 shows the restoration results by the SGCS and HSG-V
algorithm, respectively. These results show that the HSG-V algorithm can restore blurred image quite well and obtain better quality reconstructed images in an efficient manner.
4. Conclusion
In this paper, we develop an adaptive prediction-correction method for solving nonlinear monotone equations. Under some assumptions, we establish its global convergence. Base on the
prediction-correction method, an efficient hybrid spectral gradient (HSG-V) algorithm is proposed, which is composite of MSG-V, algorithm and SG algorithm. Numerical results show that the HSG-V
algorithm is preferable and outperforms the MSG, MSG-V and SG algorithm. Moreover, HSG-V algorithm is applied to solve -norm regularized problems arising from sparse signal reconstruction. Numerical
experiments show that HSG-V algorithm works well, and it provides an efficient approach for compressed sensing and image deconvolution.
The Test Problems
In this appendix, we list the test functions and the associated initial guess as follows.
Problem 1. , .
Problem 2. , .
Problem 3. is given by and
It is noticed that Problems 1 and 3 are smooth at , while Problem 2 is nonsmooth (Table 7).
This work was partly supported by the National Natural Science Foundation of China (no. 11001060, 61262026, 81000613, 81101046), the JGZX Programme of Jiangxi Province (20112BCB23027), and the
Science and Technology Programme of Jiangxi Education Committee (LDJH12088).
|
{"url":"http://www.hindawi.com/journals/aaa/2013/619123/","timestamp":"2014-04-18T19:03:50Z","content_type":null,"content_length":"399288","record_id":"<urn:uuid:0ef3b49b-6cbe-4c68-acbd-50aca8034b27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rock Hunters
To observe rocks of various types and sizes and to record these observations through drawings.
This lesson centers on students making detailed observations of rocks. Through their observations, students will begin to develop an understanding that there are many types of rocks with a multitude
of different attributes. Although students in the K-2 level are not yet ready to learn about the names of different kinds of rocks or the geological reasons for different rock formations, they are
ready to understand that there are many sizes and shapes of rocks in our environment. They are able to recognize that our earth has sand, which is very small particles of rock; pebbles and small
rocks that they may find in the dirt; and large mountains.
“Teaching geological facts about how the face of the earth changes serves little purpose in these early years. Students should start becoming familiar with all aspects of their immediate
surroundings…” (Benchmarks for Science Literacy, p. 72.) In this lesson, students will become more familiar with their immediate environmental surroundings by studying rocks. Through class
discussions, you will facilitate students thinking about places they see rocks and the different kinds of rocks they know. Students will collect rocks and examine their attributes, such as shape,
size, color, texture, and weight. Using the student sheet provided, students will record their rock observations pictorially. For students who are able to write, they will be challenged to describe
their illustrations with words as well.
You can expect that students will be fascinated with many aspects of rocks, such as how some sparkle; some are dark while others are light; there are smooth ones and bumpy ones. Some may quickly
recognize that one rock is shaped like an egg while another one seems sharp enough to cut like a tool. Their curiosity about rocks will fuel their discussions and explorations in class. The student
sheet will extend their explorations by encouraging them to consider how they can accurately portray their rocks through a drawing. Students will need to think about how they can describe their rocks
in quantitative terms. “Instead of saying that something is big or fast or happens a lot, a better approach is often to use numbers and units to say how big, fast, or often, and instead of claiming
that one thing is harder or faster or colder than another, it is better to use either absolute or relative terms to say how much so.” (Benchmarks for Science Literacy, p. 295.)
Their hands-on explorations of rocks in this lesson and documentation of their observations will help build a foundation for later learning about the rock cycle and geological change on the face of
the earth.
Read More
Planning Ahead
For background knowledge, you may wish to use these resources:
Begin by having students collect a variety of rocks. If it is possible, go for a rock hunt around the school. Each student can carry a bag for collecting the rocks that s/he finds. Later in the
lesson, students will measure their rocks with paper clip chains, so you may want to ask students to include at least one rock in their collection that is big enough to measure in this way. If a rock
hunt is not feasible at your school, have students collect rocks near their homes and bring them into school.
Once the group has collected a number of rocks, ask them to spread them out in front of them and look at the different types they have found. You might ask:
• Are all your rocks the same size?
• Do you see different colors in your rocks?
• Look at the different shapes of your rocks. What kinds of shapes do you see?
• If you pick your rocks up one at a time, do they all feel like they weigh the same?
• When you touch your rocks, what do you notice?
Let students walk around and look at the rocks that their classmates collected. Ask them to consider these same questions when viewing these rocks. Once students have had a chance to look at all the
rocks, encourage them to talk about the variety of rocks there are in their class collection. To give them an opportunity to view even more types of rocks, have students use the Rocks student esheet
to view the Rocks slide show, which provides a visual array of rock types. Students can click on any of the ten choices provided to view different kinds of rocks.
Taking students outdoors to view rocks in their natural surroundings is an ideal way to introduce the idea that rocks of various shapes and sizes are part of our earth. If it is possible, take your
class outdoors and ask them to look for the different places they see rocks. To keep this lesson focused on the benchmark, ask them questions that help them consider the different rock sizes and
shapes they see. (For example, there may be gravel in a parking lot, rocks large enough to sit on near a tree, and sand around a pond.) Since students have already begun thinking about some of the
different attributes of rocks from their rock collections, this walk works well as a transition from thinking about rocks individually to thinking about them as part of our environment. If it is not
possible to take an outdoor walk, try finding magazines and books with photographs of rocks to invite this kind of discussion.
Have students return to looking at their rock collections. Divide the class into small groups (groups of 4-5 work well). Ask each student to choose one or two rocks from their collection to bring to
the small group. Allow students time to look at each others rocks. Give students magnifying lenses to allow for a closer inspection. Ask them to tell each other about their rocks. After they have had
some time to describe their rocks to each other, lead a class discussion that challenges students to use more detail in their descriptions. As a practice exercise, hold up a pencil in front of the
class. Ask them to describe the pencil. Help them develop more detailed descriptions by asking:
• What color is this pencil?
• What does this pencil have at its end?
• What does this pencil have at its other end?
• How long do you think this pencil is?
• What does this pencil remind you of?
If you hold up a second pencil that is slightly different in size and color, you can ask the same questions to help students recognize the value of using descriptions for comparing similar objects.
Now have students return to their small groups and give each student the My Rock student sheet. Ask them to complete items #1 and #2.
Next, give each group a box of paper clips. Show students how to link them together to make a paper clip chain. Ask them to make a chain that is long enough to fit around their rock. (If a student
notices that the length of the last paper clip makes the chain a little longer than they need, but without it, it is not long enough, you can use language like, “Your rock is six paper clips and part
of another around.” This introduces the concept of whole and part without going beyond their cognitive level.) Have students record their measurement on their student sheet (item #3), and then
document their measurement pictorially (item #4).
At this point, it would be helpful to bring students back together for a large group discussion about how these measurements help to describe their rocks. Talk with students about why they think
people measure things. You might ask:
• What do you think people learn when they measure something?
• How do you think measuring something might be helpful?
• What did you learn about your rock when you measured it with paper clips?
• Did you each use the same number of paper clips? (Have students compare their paper clip chains to give them an opportunity to see the many different lengths they needed for their different
• When you look at one of these paper clip chains, what does it tell you about the rock it measured?
• When you look at these paper clip chains (use two from the class to demonstrate), what do you know about the two rocks they measured?
So far, students have made observations and recordings about the shape, size, color, and circumference of their rock. To help them think about weight, allow students the opportunity to use a scale
for weighing their rock. Many types of scale will work for this exploration, and, if you have more than one kind of scale, students can “read” their rock’s weight in different ways. The goal of this
exploration is for students to think about the fact that different rocks have different weights. Students can also experiment with weighing various combinations of rocks. If you do not have access to
a scale, or are interested in making a scale from a few basic materials, see the Making a Scale teacher sheet.
To respond to item #5 on the student sheet, have students weigh their rock (the same rock they have been examining throughout this exercise). Now ask students to find rocks from their collection that
are lighter than this rock, then rocks that are heavier than this rock. They should place these rocks in the appropriate spaces in the table provided for item #5. If students in your group seem ready
for another challenge, you might ask them a few questions about the similarities and differences between the light and heavy rocks. You could ask:
• What is similar about your group of light rocks? Heavy rocks?
• Is there anything different among these light rocks? Heavy rocks?
• Are small rocks always light?
• Are the heavy rocks all the same color?
• Do the rocks with the same shape weigh the same?
These questions will help students think even more critically about various attributes and their relationship to weight. You can help students know that it is what a rock is made of that determines
its weight, not the color or shape, etc. Since this idea involves concepts beyond their cognitive level, it is not necessary to spend much time discussing it, but it introduces students to the idea
that when weighing a rock, some attributes are more important to consider than others.
Again, this measuring activity should encourage students to consider the concept of weight but not exact weight measurements. Students will practice with a measuring tool (a scale); perhaps have an
opportunity to use more than one kind of scale, yet see that each has the same job—to measure; and they will begin to formulate hypotheses about their rocks, the scale, and the idea of weight. This
strengthens the foundation of measurement concepts that will become more detailed and exact in later years.
Bring students back together in a large group. Facilitate a discussion to review their observations.
• What did you learn about your rock?
• What did you learn about the rocks in your group?
• How did you find out how big around your rock is?
• How did you find out about your rock’s weight?
• What words can you use to describe your rock (have them focus on the attributes they observed in this lesson—size, shape, color, etc.)?
• What description words could help someone else learn about your rock, even if they could not see your rock? How would this description help someone else know about your rock?
To help you assess what your students learned from this lesson, have them play a guessing game with their rocks and their student sheets. Have students work in pairs. With a few rocks displayed in
front of them, ask that one student in each pair look at his/her partner’s student sheet and try to guess which rock the sheet describes. If a student is having a hard time guessing, his/her partner
can make more detailed descriptions on the student sheet to give the student more clues. (This will challenge students to really make more specific and accurate recordings.) Students should be able
to use paper clip chains and the scale to help them guess also. Once this first group of students has guessed the correct rocks, have the other partners take a turn at guessing.
While students are playing, walk around the classroom to help students brainstorm about ways they might clarify their descriptions. Refer to the questions listed throughout this lesson to reinforce
the ideas of observation and description.
Students can dip rocks into paint or on an ink pad to make rock prints. You can make your own board game using rock prints to create the paths and real small rocks as the playing pieces. With a
spinner or a die, you can play any number of fun games that would incorporate math. You can easily make the game one in which students learn more about rocks by making playing cards out of index
cards or heavy paper. Each card could be a question that the player answers as s/he follows the path. (You can make this game yourself or allow students to create the game. When students make the
game, they are practicing math and science skills and learning about problem solving as they determine what the rules of the game will be.)
For more ideas about extending learning about rocks, try these websites:
|
{"url":"http://sciencenetlinks.com/lessons/rock-hunters/","timestamp":"2014-04-17T12:29:33Z","content_type":null,"content_length":"33917","record_id":"<urn:uuid:0e239eb2-99c2-48b9-948f-af9e11dfe68e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Triangle Side and Angle Inequalities - Concept
In any triangle, the largest angle is opposite the largest side (the opposite side of an angle is the side that does not form the angle). The shortest angle is opposite the shortest side. Therefore,
the angle measures can be used to list the size order of the sides. The converse is also true: the lengths of the sides can be used to order the relative size of the angles. Triangle side and angle
inequalities are important when solving proofs.
There exists a special relationship between the length of a side and its angle in a triangle, so if we start by saying if the measure of angle a is bigger than measure of angle b is bigger than the
measure of angle c, essentially all I'm saying is a is the biggest c is the smallest then we can say that the side opposite of angle a, so the side opposite of angle a is side bc, will be the
largest. We can say that the side opposite of b, so I'm going opposite of b and that's side ac, so bc must be larger than ac and the side opposite of c will be your smallest so ab is your smallest.
So the largest angle is opposite the largest side, the smallest angle is opposite the smallest side. Is the converse true? What if I said instead of the measure of angle a what if I just said, side
what if we said ac is greater than side bc is greater than side ab. Then we could say what, which angle would be your largest? Well if ac is your largest angle excuse me if ac, remember this is a
side is opposite of b then that means that measure of angle b is the largest. If bc is your next largest, what's opposite bc that's measure of angle a and last ab is opposite angle c so we'd say that
measure of angle c would be the smallest. So it works both ways.
angles smallest to largest sides smallest to largest
|
{"url":"https://www.brightstorm.com/math/geometry/triangles/triangle-side-and-angle-inequalities/","timestamp":"2014-04-21T14:41:21Z","content_type":null,"content_length":"62040","record_id":"<urn:uuid:e408fab6-853c-4ec1-853f-44da0000b5ed>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: May 2012 [00369]
[Date Index] [Thread Index] [Author Index]
Re: Sqrt of complex number
• To: mathgroup at smc.vnet.net
• Subject: [mg126672] Re: Sqrt of complex number
• From: Richard Fateman <fateman at cs.berkeley.edu>
• Date: Wed, 30 May 2012 04:10:44 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• References: <201205270842.EAA17817@smc.vnet.net> <jpvfh3$q6g$1@smc.vnet.net> <jq25u1$6gl$1@smc.vnet.net>
On 5/29/2012 2:46 AM, David Bailey wrote:
> In addition to what others have said, it is maybe worth pointing out
> that in general, the Sqrt expression would be embedded in a larger
> expression, such as a+Sqrt[3-4 I]+42 - so what should Mathematica do? If
> it returns a list of all possible answers, that might not be acceptable
> to something that was expecting a single value,
That suggests to me that whatever was expecting a single value has a bug
in it. Ideally if the mathematics dictates "there are multiple answers"
then a good program should be able to deal with it. Otherwise it is
not doing mathematics.
and anyway, expressions
> such as ArcSin[.2] would have an infinite number of answers!
There are several possible notations for infinite sets.
Here's one: Table[f[x],x,1, Inf]
> The only possible alternative strategy would be not to evaluate at all,
No, see above.
> as is the case with Sqrt[x^2] (since the answer can by x or -x).
Root[x^2,n] works for me, if n is an integer. We could have all even
n choose one sign and odd n choose the other.
These suggestions may not fit into today's Mathematica very well, but that
does not mean that a better system could not be constructed.
> David Bailey
> http://www.dbaileyconsultancy.co.uk
• References:
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2012/May/msg00369.html","timestamp":"2014-04-16T16:07:24Z","content_type":null,"content_length":"26905","record_id":"<urn:uuid:44cbede2-5554-4abf-a686-e94ef3809b90>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Maths prep for Electromagnetism unit
Early next year I will be beginning an Electromagnetism unit. However, I think I should refresh my maths over the summer break first. Here is the Unit Description from the uni website:
A detailed treatment of electric and magnetic fields and theory sources leading to the formulation of Maxwell's equations. Students will be introduced to a) electronic and magnetic fields in matter;
b) electro-and magnetostatistics; c) Maxwell equations.
The text we are using is:
Elements of Electromagnetics
ISBN: 9780195387759
Sadiku, M.N.O., OUP 5th ed. 2009
So...what mathematical areas should I revisit before day 1?
|
{"url":"http://www.physicsforums.com/showpost.php?p=4186245&postcount=1","timestamp":"2014-04-18T03:12:15Z","content_type":null,"content_length":"9004","record_id":"<urn:uuid:c3870c9a-cd66-4a8c-86a5-5521a6a305a2>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
sequence, in mathematics, ordered set of mathematical quantities called terms. A sequence is said to be known if a formula can be given for any particular term using the preceding terms or using its
position in the sequence. For example, the sequence 1, 1, 2, 3, 5, 8, 13,… (the Fibonacci sequence) is formed by adding any two consecutive terms to obtain the next term. The sequence - 1/2, 1, 7/2,
7, 23/2, 17,… is formed according to the formula ( n ^2 - 2)/2 for the n th, or general, term. A sequence may be either finite, e.g., 1, 2, 3,…50, a sequence of 50 terms, or infinite, e.g., 1,
2, 3,…, which has no final term and thus continues indefinitely. Special types of sequences are commonly called progressions. The terms of a sequence, when written as an indicated sum, form a
series; e.g., the sum of the sequence 1, 2, 3,…50 is the series 1+2+3+…+50.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
More on sequence from Fact Monster:
See more Encyclopedia articles on: Mathematics
|
{"url":"http://www.factmonster.com/encyclopedia/science/sequence.html","timestamp":"2014-04-21T12:32:13Z","content_type":null,"content_length":"21052","record_id":"<urn:uuid:057bd1c1-9112-4b6d-b30d-994dc3ea770c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bob's Tribulation Circle and Daniel's Vision of the End
I read your post yesterday and it relates to something I've been working on for a while. Your post revealed another aspect of the same exact phenomena I've been fascinated with. I believe it is only
possible that the Creator of the Cosmos and the Author of God's Word (one and the same) could come up with the perfect set of numbers. These three numbers are way more than just speculative date
setting. I believe these 3 numbers can be the only set that describes a multitude of planetary data, especially Earth and the inner planets. Venus is particularly evident. I'll explain.
I noticed that the diameter of your Tribulation circle, based on the first implied number (1260) and last number (1335) in Daniel 12. The middle number 1290, is not represented just yet. You
calculated that if the circumference of a circle is 2595 (1260+1335) then a centered square inside that circle could only have equal sides of 584 units. 1260 and 1335 in their simplest interpretation
represent days, so 584 units would also mean 584 days. You also calculated the diagonal of the square, based on the circle's diameter, is 1168. This of course is double the 584 as it would be.
However, 1168 divided by 10 is also 20% of 584 or 116.8. Here are a few parameters, 584 days, 116.8 days and 1168 days that are seemingly independent orbital facts.
Every 584 days on average there is an alignment of the earth, Venus and the sun. If you could stand on Venus, one full day from sunrise to sunrise would take 116.8 earth days. Over the 584 day
conjunction period, Venus will have rotated 5 times at 116.8 earth days apiece. At regular but limited intervals (over long periods of time) the earth, Venus, Mercury and the sun can align in 1168
day intervals.
Previously, I had discovered something similar about the sums and differences of the whole set, arriving at some of the same values, albeit using the third value in the set (1290) as well.
1260+1290+1335 = 3885; 1335-1290-1260= -1215. While 3885 is not immediately apparent, the -1215 value is equivalent to the 1215 year cycle, based on tropical years in which the meetings of earth,
Venus and the sun step one full circle BACKWARDS or clockwise in the Mazzaroth. The integer value of 1215 is also significantly 5 times the whole 243 year transit cycle of Venus and 10 times (121.5
years) one of the overlapping intervals of transits (the same thing as an eclipse, but with Venus passing in front of the sun and not the moon).
Now this integer value 1215 when divided into the total 3885 yields the quotient 3.197530864. Multiplying this quotient times the Tropical/solar year yields the same value you arrived at with the
large diagonal, and that is 1167.87317. Remember, I'm using a third value seemingly independent!!! The vision in Daniel 12 indicates that the left hand and right hand lifted towards heaven. If you
analyze the words left and right in this passage, one denotes south and the other north as the person faces east. In either hemisphere, the time frame between when the sun rises furthest north and
furthest south (solstices) is the half-tropical year of 182.6210948 days. If you multiply this half year times the quotient 3.197530864, the value is 583.93658704 days, which is within 23.5 minutes
of NASA's estimated average of the conjunction event!!! Your number was 584 using only two of the values (1260 and 1335)!!!
The common value between 584 and 365 (year) is 73. Whereas 73 x 5 is 365, 73 x 8 is 584. Conversely, there are five (5) 584 day conjunctions (inferior OR superior) in eight (8) 365 day earth years.
The sum of 584 + 365 is 949 which is of course 13 x 73. Venus orbits 13 times to earth's 8 and it passes earth 5 times (every 584 days during those eight years.
God's amazing design is that those 5 meetings, when Venus laps the earth in the perpetual race around the sun, occur almost 216 degrees apart, forming a five pointed star or pentagon if observed from
over our solar system. I've applied that figure in the same manner as your square within the circle and found that the values of the length of the side and the value of the radius, directly relate to
the 3885 total and 8 (8 year cycle).
The same circumference of 2595 yields a radius of 413.007 and the side calculates to be 485.5189383 which is nearly 2 times the transit value in years (243) or 2 times the sidereal day (243 earth
days). This value 485.5189383 x 8 = 3884.1515, about .85 under 3885. The quotient of 413.007077/485.5189383 = .8506508. Multiplying .8506508. x 3885 yields 3304.778358 which is 8 x 413.09!! Again,
the 2595 yielding a radius of 413.07 yields a pentagonal (5) side 485.52 or nearly 1/8th of 3885!!
While 3885 is a time, times and a half of 1110 it is also both 5 times 777 and 7 times 555, a very cool number! If you divide it by 10 (=388.5) and apply it to the conjunction period of 583.92 days:
388.5 / 583.92 = .6653308672; Take that quotient and multiply it times the sidereal year of 365.256363004 days - [365.256363004 x .6653308672 = 243.016332 days] The sidereal rotation period of Venus
as observed from earth is 243.018 days, a 2.4 minute difference!
During the eight years it takes five meetings to return 360 degrees or 12 constellations, Venus appears to turn (from our perspective) on its axis 12 times.
There are many other parameters derived from the three numbers in Daniel. I posted some information towards the end of a response back in January. Please see http://www.fivedoves.com/letters/jan2011/
. The other value in Daniel 2300 mornings and evenings (morning star risings or evening star risings) calculates the precession of the equinoxes within just a few years of the 25,770 year total. 2300
also reconciles the sun, Venus, earth and moon within 34/100's of a second per month!!
Those two values of 1335 and 1260 have a difference of 75. The total 3885 / 75 = 51.8. Reapplied to 3885 as a percentage: .518 x 3885 = 2012.43 which is June 6, 2012 the EXACT day of the Venus
transit occurring at the galactic center, in between the horns of Taurus and 45 years after the Israelis captured Jerusalem. The other post explains a 930,338.5 day period that calculates from
Daniel's vision, based on 3.5 times various cycle periods, which comes to June 20-21 of 2012, the summer solstice in the northern hemisphere. Will the Lord come as the 'summer is nigh at hand'?
In conclusion, only the Author of the Bible and the Creator of the universe could have designed this set of numbers into His Word and His creation! You've demonstrated it geometrically, confirming
the mathematics I've found.
Just as Daniel said, the prophecy would be sealed up until the end times when people travel to and fro and knowledge is greatly increased!
In Christ,
Kevin Heckle
1260+1335 = 2595
2595/pi=826.0141546 dia.
(Square in Circle)
826.0141546^2 = 682299.3836771; 682299.3836771 / 2 = 341149.6918385; SQRT(341149.6918385)=584.0802101
(Square's hypotenuse outside Circle) is 584.0802101 x 2 or the SQRT {(Diameter^2)/2} which is 1168.16042; 1168.16042/10 = 584.0802101/2 = 116.816042
(Pentagon inside Circle)
826.0141546/2 = r = 413.007077
Cos 36*=a/r
a=Cos 36* x r = .80901699 x 413.007077 = 334.1297444
413.007077^2 - 334.1297444^2 = 58932.15986; Sqrt (58932.15986) = 242.7594691
242.7594691 x 2 = 485.5189383 side; 485.51893 x 8 = 3884.1515
413.007077/485.5189383 = .8506508; .8506508 x (1260+1290+1335) = 3304.7783; 3304.7783/8 = 413.09728
|
{"url":"http://www.fivedoves.com/letters/jan2011/kevinh127.htm","timestamp":"2014-04-18T16:38:23Z","content_type":null,"content_length":"17288","record_id":"<urn:uuid:72add3b6-8681-4df5-864f-ca8d1e7f0ed5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Serviços Personalizados
Links relacionados
Journal of the Brazilian Society of Mechanical Sciences and Engineering
versão impressa ISSN 1678-5878
ALVES, Thiago Antonini e ALTEMANI, Carlos A. C.. Convective cooling of three discrete heat sources in channel flow. J. Braz. Soc. Mech. Sci. & Eng. [online]. 2008, vol.30, n.3, pp. 245-252. ISSN
1678-5878. http://dx.doi.org/10.1590/S1678-58782008000300010.
A numerical investigation was performed to evaluate distinct convective heat transfer coefficients for three discrete strip heat sources flush mounted to a wall of a parallel plates channel. Uniform
heat flux was considered along each heat source, but the remaining channel surfaces were assumed adiabatic. A laminar airflow with constant properties was forced into the channel considering either
developed flow or a uniform velocity at the channel entrance. The conservation equations were solved using the finite volumes method together with the SIMPLE algorithm. The convective coefficients
were evaluated considering three possibilities for the reference temperature. The first was the fluid entrance temperature into the channel, the second was the flow mixed mean temperature just
upstream any heat source, and the third option employed the adiabatic wall temperature concept. It is shown that the last alternative gives rise to an invariant descriptor, the adiabatic heat
transfer coefficient, which depends solely on the flow and the geometry. This is very convenient for the thermal analysis of electronic equipment, where the components' heating is discrete and can be
highly non-uniform.
Palavras-chave : adiabatic heat transfer coefficient; laminar channel flow; discrete heat sources; numerical investigation; electronics cooling.
|
{"url":"http://www.scielo.br/scielo.php?script=sci_abstract&pid=S1678-58782008000300010&lng=pt&nrm=iso&tlng=en","timestamp":"2014-04-20T03:30:55Z","content_type":null,"content_length":"19172","record_id":"<urn:uuid:500db0e1-801d-4cd0-9d65-3e907bb6812a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do I calculate the amount of fragrance oil I need?
First you'll need to know what percentage of fragrance oil you want to use. The average usage is 6%, please see our FAQ for more information to help determine how much to use with your particular
Here is a simple formula:
(oz of wax using) x (% of fragrance oil you want to use) = (oz of fragrance oil needed)
For example, lets say you are using two pounds of wax and want to use 6% fragrance oil.
First you'll need to calculate the number of ounces of wax you have:
2 x 16 (number of oz in 1 pound) = 32oz
Plug these numbers into your formula:
32 x 6% = 1.92oz
You can round up to 2oz for easy measuring.
Have more questions? Submit a request
|
{"url":"http://support.candlescience.com/hc/en-us/articles/201389120-How-do-I-calculate-the-amount-of-fragrance-oil-I-need-","timestamp":"2014-04-18T03:25:32Z","content_type":null,"content_length":"11750","record_id":"<urn:uuid:a908ace3-19c2-4150-b223-17097c108061>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[ c++] fibonacci series By me
12-18-2012 #1
Registered User
Join Date
Dec 2012
Fibonacci number on Wiki
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
int power(int a, int b)
int c=a;
for (int n=b; n>1; n--) c*=a;
return c;
double power(double a, int b)
float c=a;
for (int n=b; n>1; n--) c*=a;
return c;
int main()
double first,c1,c2,r1,r2,r3; int r,l;
first=0.4472135955; c1=1.618033989; c2=-0.6180339887;
cout<< " Enter long The series ";
for (int i=1; i<l; i++)
r3= r1-r2;
r= first*r3;
cout <<endl;
return 0;
You forgot to ask a question, such as why the results you're getting are incorrect.
Using a float inside your double version of power would be one mistake.
Also, seeing as how you're limiting yourself to the range of ints anyway, you aren't gaining anything by using the approximation method.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
12-18-2012 #2
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/153279-%5B-cplusplus%5D-fibonacci-series-me.html","timestamp":"2014-04-23T19:16:29Z","content_type":null,"content_length":"44082","record_id":"<urn:uuid:1ec69304-96c5-457d-ac46-0c4b4f6e9f52>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calling a Function
03-29-2004 #1
Registered User
Join Date
Mar 2004
Calling a Function
I have to write a program to find out n combination r (mathematics) and i have written it but when i try to call the function it doesnt work. Any help would be greatly appreciated.
//Edward Grant, 02159511, Assignment 5, 159.101
#include <stdio.h>
int factorial (int x);
int n, r, result;
int main() {
while ((n<0)||(n>12)) {
printf("Please enter a value for n\n");
printf("It must be between 0 and 12\n");
scanf("%i", &n);
while ((r>n)||(r<0)) {
printf("Please enter a value for r\n");
printf("It must be less than your value for n and greater than 0\n");
scanf("%i", &r);
result = factorial (n);
printf("The answer for %i Combination %i is %i", n, r, result);
int factorial (int x) {
int counter, temp1, temp2;
counter = 0;
temp1 = 1;
temp2 = x - counter;
while (temp2 > 0) {
return temp1;
re: Calling a Function
on the line in main where i call the function, there should actually be... result = factorial (n) / (factorial (n-r) - factorial (r));
>>it doesnt work.
3 magic words that don't mean too much. Can you be more descriptive please
When all else fails, read the instructions.
If you're posting code, use code tags: [code] /* insert code here */ [/code]
when it gets to the point when it should be calling the function, it stops and does absolutely nothing at all...just sits there
>it stops and does absolutely nothing at all...just sits there
while (temp2 > 0) {
Now tell me where temp2 is changed so that the loop condition can be met any time before the heat death of the universe.
My best code is written with the delete key.
Are you sure it doesn't call it?
Look at your loop within the factorial() function. What's stopping it going around forever?
When all else fails, read the instructions.
If you're posting code, use code tags: [code] /* insert code here */ [/code]
just put a line in the loop to change temp2 by x - counter
> Now tell me where temp2 is changed so that the loop condition can be met any time before the heat death of the universe
LOL - now that's what I said the last time he tried to write factorial
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
Yeah it doesn't look like you will come out of the loop. Why don't you try using breakpoints to debug your program to figure it out. Microsoft Visual C++ 6.0 will help you big time. Good luck.
03-29-2004 #2
Registered User
Join Date
Mar 2004
03-29-2004 #3
03-29-2004 #4
Registered User
Join Date
Mar 2004
03-29-2004 #5
03-29-2004 #6
03-29-2004 #7
Registered User
Join Date
Mar 2004
03-29-2004 #8
03-30-2004 #9
Registered User
Join Date
Jun 2003
|
{"url":"http://cboard.cprogramming.com/c-programming/51206-calling-function.html","timestamp":"2014-04-16T08:51:46Z","content_type":null,"content_length":"69934","record_id":"<urn:uuid:7bf4b92a-0a22-4362-8245-2570fc989862>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|