content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Port Washington, NY Geometry Tutor
Find a Port Washington, NY Geometry Tutor
...I have tutored students in both ESL and Spanish using phonetics. I had the students create sounds; I had the younger students make faces, and for the older students I showed them the
position of the tongue and the shape of the lips. I am currently a Spanish major with a concentration in linguistics.
13 Subjects: including geometry, English, Spanish, algebra 2
...Working together, I guide you to the correct solution to difficult test questions, using a Socratic method, leading you step by step in the right direction and without giving you the answers.
Tutoring for many years has afforded me the opportunity to go through most of the available material, an...
55 Subjects: including geometry, English, reading, writing
Hi, I'm Richard. I have been helping students as a tutor for over thirty years in subjects as different as accounting and chemistry. My philosophy of tutoring, and teaching in general, is that
the student should always be in the process of learning TWO things: the subject at hand, of course, but even more importantly he or she should be learning how best to approach new and unfamiliar
50 Subjects: including geometry, chemistry, physics, GRE
...My approach is rigorous: I set high standards and expect 100% commitment from each student. I start every engagement with a thorough consultation to analyze each student’s strengths and
weaknesses. I spend extra time getting to know each student beyond the numbers.
52 Subjects: including geometry, English, reading, writing
...Thanks for your consideration! I consider math to be my best subject and feel very comfortable working with and teaching the ACT material. I personally scored a 36 on my ACT Math section and am
confident I will be able to help your student achieve their maximum potential on the ACT Math section. I am happy to answer any questions upon request!
45 Subjects: including geometry, English, chemistry, calculus
Related Port Washington, NY Tutors
Port Washington, NY Accounting Tutors
Port Washington, NY ACT Tutors
Port Washington, NY Algebra Tutors
Port Washington, NY Algebra 2 Tutors
Port Washington, NY Calculus Tutors
Port Washington, NY Geometry Tutors
Port Washington, NY Math Tutors
Port Washington, NY Prealgebra Tutors
Port Washington, NY Precalculus Tutors
Port Washington, NY SAT Tutors
Port Washington, NY SAT Math Tutors
Port Washington, NY Science Tutors
Port Washington, NY Statistics Tutors
Port Washington, NY Trigonometry Tutors
Nearby Cities With geometry Tutor
Baxter Estates, NY geometry Tutors
East Hills, NY geometry Tutors
Glen Cove, NY geometry Tutors
Great Nck Plz, NY geometry Tutors
Great Neck geometry Tutors
Harbor Acres, NY geometry Tutors
Kensington, NY geometry Tutors
Kings Point, NY geometry Tutors
Little Neck geometry Tutors
Manhasset geometry Tutors
Manorhaven, NY geometry Tutors
Plandome, NY geometry Tutors
Roslyn Heights geometry Tutors
Sands Point, NY geometry Tutors
The Terrace, NY geometry Tutors | {"url":"http://www.purplemath.com/Port_Washington_NY_Geometry_tutors.php","timestamp":"2014-04-16T04:19:20Z","content_type":null,"content_length":"24590","record_id":"<urn:uuid:75f45a8a-53ba-4ee8-88d2-bd794af0b547>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
48÷2(9+3) = ? cont.
48÷2(9+3) = ? is a math problem that, depending on the order of operations used, leads to two different answers: 2 and 288. An alternative version 6÷2(1+2)= ? with the answers 1 or 9, has popped up
online as well. It can be a hot topic for debate, and is sometimes used to troll other users because of the argument that can result afterward.
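The two answers come from the two possible readings of the expression: binding the 2 to the parenthesis first gives 48 ÷ (2 × (9 + 3)) = 48 ÷ 24 = 2, while performing division and multiplication from left to right gives (48 ÷ 2) × (9 + 3) = 24 × 12 = 288. The alternative version works the same way: 6 ÷ (2 × (1 + 2)) = 1, while (6 ÷ 2) × (1 + 2) = 9.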
This internet phenomenon exploded on April 7th, 2011, around the same time when searches for “48÷2(9+3) =” spiked on Google. The thread that first sparked interest in this problem was on Hot Pursuit,
a small local forum based in Texas. Shortly afterwards a member of the site posted the query on BodyBuilding.com, from where it spread onwards. Other forum posts from that day include Physics Forums, Wall
Street Oasis, SpartanTailgate, GrassCity, Tennis Warehouse, Inside MD Sports, and The Escapist. On April 8th, it popped up on 6Theory, NIKETLK, Yahoo! Answers, DIYMA.com, and The Ill Community.
On April 27th, 2011, a Redditor posted a slightly different version to the “wtf” subreddit. As of April 29th, 2011, it has received 1281 comments, and has a karma score of 776. A Redditor posted the
following comment to the thread which was the highest rated comment at 574 upvotes:
"I’m a math professor, and my view is that although the standard convention, if applied precisely and rigorously, does give an unambiguous procedure to follow, nobody, and that includes professional
mathematicians, would ever write a formula like this. This is mostly because, after about 3rd grade, none of us ever use the division symbol ever again." | {"url":"http://us.battle.net/d3/en/forum/topic/5760197247?page=2","timestamp":"2014-04-21T02:42:36Z","content_type":null,"content_length":"95500","record_id":"<urn:uuid:fc31ce1e-f4e4-4010-a0e7-3f19287d2c88>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
solving the differential equation...
February 6th 2010, 05:17 AM
solving the differential equation...
ok here we go...
the rate of change of N with respect to S is proportional to 500 - S
so the set up is pretty straight forward...
I got dN/dS = k(500 - S)
when i integrated it i got N = 500S - (S)^2/2 + C, the book said something different..which is where im confused..
books answer: N = -k/2(500 - S)^2
please explain, thanks in advance....
February 6th 2010, 05:23 AM
ok here we go...
the rate of change of N with respect to S is proportional to 500 - S
so the set up is pretty straight forward...
I got dN/dS = k(500 - S)
when i integrated it i got N = 500S - (S)^2/2 + C, the book said something different..which is where im confused..
books answer: N = -k/2(500 - S)^2
please explain, thanks in advance....
Check to see if the problem didn't say
"the rate of change of N with respect to S is inversely proportional to 500 - S".
February 6th 2010, 05:34 AM
ok it said proportional....
February 6th 2010, 06:09 AM
the book says,
dN/ds = k(500 - s)
∫ (dN/ds) ds = ∫ k(500 - s) ds
∫ dN = -(k/2)(500 - s)^2 + C
N = -(k/2)(500 - s)^2 + C
(Note: expanding gives -(k/2)(500 - s)^2 = k(500s - s^2/2) - 125000k, so this agrees with the earlier answer N = 500S - S^2/2 + C once the omitted proportionality constant k is restored; the two forms differ only by the constant of integration.) | {"url":"http://mathhelpforum.com/differential-equations/127434-solving-differential-equation-print.html","timestamp":"2014-04-17T19:20:21Z","content_type":null,"content_length":"5495","record_id":"<urn:uuid:3cceab50-28ea-4e55-8639-0849e8fb9bde>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
For A Biconvex Thin Lens, Choose A Radius Of Curvature ... | Chegg.com
For a biconvex thin lens, choose a radius of curvature somewhere between 10 and 100 centimeters (and different than you did for a converging mirror), then choose three different object distances (at
least one inside the focal length, one between the focal length and the center of curvature, and one beyond the center of curvature), and for each locate the image using the thin lens equation and
determine the image magnification. Also, draw a nice, neat ray diagram that matches the equations, and labels all the relevant points and distances. Finally, for each, describe the image
characteristics as either real or virtual, upright or inverted, and either larger or smaller. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/biconvex-thin-lens-choose-radius-curvature-somewhere-10-100-centimeters-different-convergi-q1286249","timestamp":"2014-04-19T08:24:11Z","content_type":null,"content_length":"19016","record_id":"<urn:uuid:e62af5fd-6c0f-4250-b147-dc59335e3ad8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is it possible for a system to have negative potential energy?
Candy Yes. All potential energies have an arbitrary "zero point:" the point we designate a zero value for the potential energy. Consider gravitational potential energy. What is the GPE of a ball on
the ground? We need to set a coordinate system, so +y upward and the origin at the ground. The GPE of the ball is 0 J. Now let's take our origin to be the top of a tall building. Since upward is
positive the y coordinate of the ball is negative. Thus the GPE is also negative. We generally need a zero point for any PE, but note that all Physics really cares about is changes in the PE as an
object moves from one point to another. So ultimately our choice of origin is whatever we feel like...the choice of origin doesn't affect the problem. -Dan | {"url":"http://mathhelpforum.com/advanced-applied-math/4942-possible-system-have-negative-potential-energy.html","timestamp":"2014-04-18T09:56:51Z","content_type":null,"content_length":"34137","record_id":"<urn:uuid:cd44bcc8-f0cc-4523-b652-b5228a67d284>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
The cost of running a vehicle at an average speed of v km/h is 64 + v^2/100 dollars per hour. Consider a travel distance of 100 km. a) Show that the total cost to run the vehicle over the 100 km is C = v + 6400/v
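A sketch of the computation (using the cost rate given above): at speed v the 100 km trip takes 100/v hours, so C = (100/v)(64 + v^2/100) = 6400/v + v.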
| {"url":"http://openstudy.com/updates/4e749c6b0b8b247045d5462d","timestamp":"2014-04-19T12:48:22Z","content_type":null,"content_length":"250139","record_id":"<urn:uuid:abee7bc2-ade7-4f14-bcfd-8410b72045d0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: AMAT 487 and 587: Topics in Modern Mathematics
SUNY Albany, Spring 2011
Tu Th 11:45 am - 1:05 pm in ES 146
Instructor: Ivana Alexandrova
Course Description
In this course we will discuss the fundamentals of linear partial differential equations.
We will begin with an introduction to the theory of distributions. We will then discuss
how the motivation for semi-classical analysis arises from quantum mechanics. After that
we will introduce the classes of semi-classical symbols and will study their quantizations to
pseudodifferential operators. We will further introduce canonical relations and study their
quantizations to Fourier integral operators. We will also discuss applications of this theory
to the analysis of linear partial differential equations. The course grade will be based on
homework sets.
Prerequisites: A completed undergraduate course on real analysis. Some knowledge of func-
tional analysis is a plus.
Required textbooks:
An Introduction to the Theory of Distributions, F.G. Friedlander and Mark Joshi
An Introduction to Semiclassical and Microlocal Analysis, André Martinez
Other references:
The Analysis of Linear Partial Differential Operators, Vol. I, Lars Hörmander
In Eigen, all matrices and vectors are objects of the Matrix template class. Vectors are just a special case of matrices, with either 1 row or 1 column.
The first three template parameters of Matrix
The Matrix class takes six template parameters, but for now it's enough to learn about the first three parameters. The three remaining parameters have default values, which for now we will
leave untouched, and which we discuss below.
The three mandatory template parameters of Matrix are:
Matrix<typename Scalar, int RowsAtCompileTime, int ColsAtCompileTime>
• Scalar is the scalar type, i.e. the type of the coefficients. That is, if you want a matrix of floats, choose float here. See Scalar types for a list of all supported scalar types and for how to
extend support to new types.
• RowsAtCompileTime and ColsAtCompileTime are the number of rows and columns of the matrix as known at compile time (see below for what to do if the number is not known at compile time).
We offer a lot of convenience typedefs to cover the usual cases. For example, Matrix4f is a 4x4 matrix of floats. Here is how it is defined by Eigen:
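typedef Matrix<float, 4, 4> Matrix4f;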
We discuss below these convenience typedefs.
As mentioned above, in Eigen, vectors are just a special case of matrices, with either 1 row or 1 column. The case where they have 1 column is the most common; such vectors are called column-vectors,
often abbreviated as just vectors. In the other case where they have 1 row, they are called row-vectors.
For example, the convenience typedef Vector3f is a (column) vector of 3 floats. It is defined as follows by Eigen:
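typedef Matrix<float, 3, 1> Vector3f;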
We also offer convenience typedefs for row-vectors, for example:
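typedef Matrix<int, 1, 2> RowVector2i;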
The special value Dynamic
Of course, Eigen is not limited to matrices whose dimensions are known at compile time. The RowsAtCompileTime and ColsAtCompileTime template parameters can take the special value Dynamic which
indicates that the size is unknown at compile time, so must be handled as a run-time variable. In Eigen terminology, such a size is referred to as a dynamic size; while a size that is known at
compile time is called a fixed size. For example, the convenience typedef MatrixXd, meaning a matrix of doubles with dynamic size, is defined as follows:
Matrix<double, Dynamic, Dynamic>
And similarly, we define a self-explanatory typedef VectorXi as follows:
Matrix<int, Dynamic, 1>
You can perfectly have e.g. a fixed number of rows with a dynamic number of columns, as in:
Matrix<float, 3, Dynamic>
A default constructor is always available, never performs any dynamic memory allocation, and never initializes the matrix coefficients. You can do:
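Matrix3f a;
MatrixXf b;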
• a is a 3-by-3 matrix, with a plain float[9] array of uninitialized coefficients,
• b is a dynamic-size matrix whose size is currently 0-by-0, and whose array of coefficients hasn't yet been allocated at all.
Constructors taking sizes are also available. For matrices, the number of rows is always passed first. For vectors, just pass the vector size. They allocate the array of coefficients with the given
size, but don't initialize the coefficients themselves:
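MatrixXf a(10,15);
VectorXf b(30);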
• a is a 10x15 dynamic-size matrix, with allocated but currently uninitialized coefficients.
• b is a dynamic-size vector of size 30, with allocated but currently uninitialized coefficients.
In order to offer a uniform API across fixed-size and dynamic-size matrices, it is legal to use these constructors on fixed-size matrices, even if passing the sizes is useless in this case. So this
is legal:
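Matrix4d m(4,4);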
and is a no-operation.
Finally, we also offer some constructors to initialize the coefficients of small fixed-size vectors up to size 4:
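Vector2d a(5.0, 6.0);
Vector3d b(5.0, 6.0, 7.0);
Vector4d c(5.0, 6.0, 7.0, 8.0);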
Coefficient accessors
The primary coefficient accessors and mutators in Eigen are the overloaded parenthesis operators. For matrices, the row index is always passed first. For vectors, just pass one index. The numbering
starts at 0. This example is self-explanatory:
Example:
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
int main()
{
  MatrixXd m(2,2);
  m(0,0) = 3;
  m(1,0) = 2.5;
  m(0,1) = -1;
  m(1,1) = m(1,0) + m(0,1);
  std::cout << "Here is the matrix m:\n" << m << std::endl;
  VectorXd v(2);
  v(0) = 4;
  v(1) = v(0) - 1;
  std::cout << "Here is the vector v:\n" << v << std::endl;
}
Output:
Here is the matrix m:
  3  -1
2.5 1.5
Here is the vector v:
4
3
Note that the syntax m(index) is not restricted to vectors, it is also available for general matrices, meaning index-based access in the array of coefficients. This however depends on the matrix's
storage order. All Eigen matrices default to column-major storage order, but this can be changed to row-major, see Storage orders.
The operator[] is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow operator[] to take more than one argument. We restrict operator[] to vectors, because an
awkwardness in the C++ language would make matrix[i,j] compile to the same thing as matrix[j] !
Matrix and vector coefficients can be conveniently set using the so-called comma-initializer syntax. For now, it is enough to know this example:
Example:
Matrix3f m;
m << 1, 2, 3,
     4, 5, 6,
     7, 8, 9;
std::cout << m;
Output:
1 2 3
4 5 6
7 8 9
The right-hand side can also contain matrix expressions as discussed in this page.
The current size of a matrix can be retrieved by rows(), cols() and size(). These methods return the number of rows, the number of columns and the number of coefficients, respectively. Resizing a
dynamic-size matrix is done by the resize() method.
Example:
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
int main()
{
  MatrixXd m(2,5);
  m.resize(4,3);
  std::cout << "The matrix m is of size "
            << m.rows() << "x" << m.cols() << std::endl;
  std::cout << "It has " << m.size() << " coefficients" << std::endl;
  VectorXd v(2);
  v.resize(5);
  std::cout << "The vector v is of size " << v.size() << std::endl;
  std::cout << "As a matrix, v is of size "
            << v.rows() << "x" << v.cols() << std::endl;
}
Output:
The matrix m is of size 4x3
It has 12 coefficients
The vector v is of size 5
As a matrix, v is of size 5x1
The resize() method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change. If you want a conservative variant of resize()
which does not change the coefficients, use conservativeResize(), see this page for more details.
All these methods are still available on fixed-size matrices, for the sake of API uniformity. Of course, you can't actually resize a fixed-size matrix. Trying to change a fixed size to an actually
different value will trigger an assertion failure; but the following code is legal:
Example:
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
int main()
{
  Matrix4d m;
  m.resize(4,4); // no operation
  std::cout << "The matrix m is of size "
            << m.rows() << "x" << m.cols() << std::endl;
}
Output:
The matrix m is of size 4x4
Assignment and resizing
Assignment is the action of copying a matrix into another, using operator=. Eigen resizes the matrix on the left-hand side automatically so that it matches the size of the matrix on the right-hand
side. For example:
Example:
MatrixXf a(2,2);
std::cout << "a is of size " << a.rows() << "x" << a.cols() << std::endl;
MatrixXf b(3,3);
a = b;
std::cout << "a is now of size " << a.rows() << "x" << a.cols() << std::endl;
Output:
a is of size 2x2
a is now of size 3x3
Of course, if the left-hand side is of fixed size, resizing it is not allowed.
If you do not want this automatic resizing to happen (for example for debugging purposes), you can disable it, see this page.
Fixed vs. Dynamic size
When should one use fixed sizes (e.g. Matrix4f), and when should one prefer dynamic sizes (e.g. MatrixXf)? The simple answer is: use fixed sizes for very small sizes where you can, and use dynamic
sizes for larger sizes or where you have to. For small sizes, especially for sizes smaller than (roughly) 16, using fixed sizes is hugely beneficial to performance, as it allows Eigen to avoid
dynamic memory allocation and to unroll loops. Internally, a fixed-size Eigen matrix is just a plain array, i.e. doing
Matrix4f mymatrix;
really amounts to just doing
float mymatrix[16];
so this really has zero runtime cost. By contrast, the array of a dynamic-size matrix is always allocated on the heap, so doing
MatrixXf mymatrix(rows,columns);
amounts to doing
float *mymatrix = new float[rows*columns];
and in addition to that, the MatrixXf object stores its number of rows and columns as member variables.
The limitation of using fixed sizes, of course, is that this is only possible when you know the sizes at compile time. Also, for large enough sizes, say for sizes greater than (roughly) 32, the
performance benefit of using fixed sizes becomes negligible. Worse, trying to create a very large matrix using fixed sizes inside a function could result in a stack overflow, since Eigen will try to
allocate the array automatically as a local variable, and this is normally done on the stack. Finally, depending on circumstances, Eigen can also be more aggressive trying to vectorize (use SIMD
instructions) when dynamic sizes are used, see Vectorization.
Optional template parameters
We mentioned at the beginning of this page that the Matrix class takes six template parameters, but so far we only discussed the first three. The remaining three parameters are optional. Here is the
complete list of template parameters:
Matrix<typename Scalar,
int RowsAtCompileTime,
int ColsAtCompileTime,
int Options = 0,
int MaxRowsAtCompileTime = RowsAtCompileTime,
int MaxColsAtCompileTime = ColsAtCompileTime>
• Options is a bit field. Here, we discuss only one bit: RowMajor. It specifies that the matrices of this type use row-major storage order; by default, the storage order is column-major. See the
page on storage orders. For example, this type means row-major 3x3 matrices:
Matrix<float, 3, 3, RowMajor>
• MaxRowsAtCompileTime and MaxColsAtCompileTime are useful when you want to specify that, even though the exact sizes of your matrices are not known at compile time, a fixed upper bound is known at
compile time. The biggest reason why you might want to do that is to avoid dynamic memory allocation. For example, the following matrix type uses a plain array of 12 floats, without dynamic memory allocation:
Matrix<float, Dynamic, Dynamic, 0, 3, 4>
Convenience typedefs
Eigen defines the following Matrix typedefs:
• MatrixNt for Matrix<type, N, N>. For example, MatrixXi for Matrix<int, Dynamic, Dynamic>.
• VectorNt for Matrix<type, N, 1>. For example, Vector2f for Matrix<float, 2, 1>.
• RowVectorNt for Matrix<type, 1, N>. For example, RowVector3d for Matrix<double, 1, 3>.
• N can be any one of 2, 3, 4, or X (meaning Dynamic).
• t can be any one of i (meaning int), f (meaning float), d (meaning double), cf (meaning complex<float>), or cd (meaning complex<double>). The fact that typedefs are only defined for these five
types doesn't mean that they are the only supported scalar types. For example, all standard integer types are supported, see Scalar types. | {"url":"http://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html","timestamp":"2014-04-19T08:02:25Z","content_type":null,"content_length":"30664","record_id":"<urn:uuid:02617b66-878a-439a-b143-9e601875dec4>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 2008 [00049]
[Date Index] [Thread Index] [Author Index]
Re: Variable question
• To: mathgroup at smc.vnet.net
• Subject: [mg92508] Re: Variable question
• From: Bob F <deepyogurt at gmail.com>
• Date: Thu, 2 Oct 2008 04:35:12 -0400 (EDT)
• References: <gc0tkn$9ci$1@smc.vnet.net>
On Oct 1, 4:29 pm, Joh... at giganews.com wrote:
> I have the following test code
> c = 3
> var = "c"
> v = StringTake[var, {1}]
> v
> v is showing up as c but I want it to show up as 3. Is there a way to =
do this?
> Thanks
Well, you could just assign "3" to the variable c, like
c = "3"
var = c (* this is really redundant, just use the c variable in the
StringTake[ c, {1}] function *)
v = StringTake[var, {1}]
Also, note that the second argument on StringTake is either which
character in the string to extract if you use the list syntax, {1} ,
or if you want the first "n" characters use the integer "n" (without
the quotes of course) - in this case you told it to take the first
character, but if c were longer and you wanted the second character or
the first two characters you could
c = "54321"
v = StringTake[ c, {2}] (*would give "4" which is the second
character of c*)
v = StringTake[ c, 2] (*would give "54" which is the first two
characters of c*)
Also note that in your original example the last line prints the value
of v a second time since you did not use a ; at the end of the third
line, and without it the value of v is printed once as a result of the
third line and again in the fourth line. So with version 6, the
behaviour of the ; to suppress output is more consistent than version
5 was.
If, what you are trying to do is convert an integer or any expression
to a string you can use the function ToString[expression], e.g.
c = 54321
d = ToString[54321]
v = StringTake[ c, 2] (* but c is not a string but an integer so
this will give an error *)
w = StringTake[ d, 2] (* would give "54" which is the first two
characters of d *)
See the documentation on ToString[], String Manipulations, String
Operations, etc to get more info... | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Oct/msg00049.html","timestamp":"2014-04-20T03:15:53Z","content_type":null,"content_length":"26917","record_id":"<urn:uuid:d5bef134-dfb7-4400-971d-bb3b034899df>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Briarcliff, PA
Find a Briarcliff, PA Precalculus Tutor
...Routinely score 800/800 on practice tests. Taught high school math and have extensive experience tutoring in SAT Math. Able to help students improve their math skills and also learn many
valuable test-related shortcuts and strategies.
19 Subjects: including precalculus, calculus, statistics, geometry
...I am especially personable and I know I have the ability to inspire students to have success beyond their expectations especially with the creative method I use for teaching. I believe that
the best approach for teaching is to help students conceptualize some seemingly abstract topics in these s...
16 Subjects: including precalculus, Spanish, physics, calculus
...If your home isn’t ideal, contact me and we can work out a convenient location. I’m looking forward to hearing from you soon!As a tutor with a primary focus in math and science, I not only
tutor algebra frequently, but also encounter this fundamental math subject every day in my professional lif...
9 Subjects: including precalculus, calculus, physics, geometry
...It often is called "World Cultures," a survey course that examines the great civilizations of the past (Mesopotamia, China, India, Egypt, Meso-America, Greece, Rome). Another version focuses
on more modern history, beginning with European colonialism (1500-1900) and continuing through the nation...
32 Subjects: including precalculus, chemistry, English, biology
I am currently a volunteer math tutor at the Center for Literacy in Philadelphia. I have a degree in engineering and math. My approach towards tutoring is simple.
23 Subjects: including precalculus, physics, statistics, geometry
Related Briarcliff, PA Tutors
Briarcliff, PA Accounting Tutors
Briarcliff, PA ACT Tutors
Briarcliff, PA Algebra Tutors
Briarcliff, PA Algebra 2 Tutors
Briarcliff, PA Calculus Tutors
Briarcliff, PA Geometry Tutors
Briarcliff, PA Math Tutors
Briarcliff, PA Prealgebra Tutors
Briarcliff, PA Precalculus Tutors
Briarcliff, PA SAT Tutors
Briarcliff, PA SAT Math Tutors
Briarcliff, PA Science Tutors
Briarcliff, PA Statistics Tutors
Briarcliff, PA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Collingdale, PA precalculus Tutors
Darby Township, PA precalculus Tutors
Eastwick, PA precalculus Tutors
Fernwood, PA precalculus Tutors
Folcroft precalculus Tutors
Glenolden precalculus Tutors
Milmont Park, PA precalculus Tutors
Norwood, PA precalculus Tutors
Primos Secane, PA precalculus Tutors
Primos, PA precalculus Tutors
Prospect Park precalculus Tutors
Ridley, PA precalculus Tutors
Secane, PA precalculus Tutors
Tinicum, PA precalculus Tutors
Westbrook Park, PA precalculus Tutors | {"url":"http://www.purplemath.com/Briarcliff_PA_precalculus_tutors.php","timestamp":"2014-04-17T01:38:39Z","content_type":null,"content_length":"24228","record_id":"<urn:uuid:e4780c6d-98d1-4478-8ec7-0fd9b0bf8d03>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two girls on an island problems
Filed in Ideas
Drawn in by our post Tuesday’s child is full of probability problems, author and Professor of Operations Research and Probability Henk Tijms writes in with two new puzzles:
Problem 1:
An isolated island is ruled by a dictator. Every family on the island has two children. Each child is equally likely a boy or a girl. The dictator has decreed that each first-born girl (if any)
in the family should bear the name Mary Ann (the name of the beloved mother-in-law of the dictator). Two siblings never have the same name. You are told that a randomly chosen family that is
unknown to you has a girl named Mary Ann. What is the probability that this family has two girls?
Problem 2:
The dictator has passed away. His son, a womanizer, has changed the rules. For each first-born girl in the family a name must be chosen at random from 10 specific names including the name Mary
Ann, while for each second-born girl in the family a name must be randomly chosen from the remaining 9 names. What is now the probability that a randomly chosen family has two girls when you are
told that this family has a girl named Mary Ann? Can you intuitively explain why this probability is not the same as the previous probability?
If you need a hint, he adds this postscript:
P.S. As you know, the wording in this kind of problems is crucial. I found that the best approach to attack this kind of problems is to use Bayes’ rule in odds form. This specific form of Bayes
forces you to make transparent the assumption you are (implicitly) making in solving the problem. I take the liberty to mention that in the recent third edition of my book Understanding
Who can solve it first?
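To make the suggested approach concrete: Bayes' rule in odds form says that posterior odds = prior odds × likelihood ratio. Writing E for the event "the family has a girl named Mary Ann",
P(2 girls | E) / P(exactly 1 girl | E) = [ P(2 girls) / P(exactly 1 girl) ] × [ P(E | 2 girls) / P(E | exactly 1 girl) ].
The prior odds of two girls against exactly one girl are 1 : 2, so each puzzle comes down to the likelihood ratio P(E | 2 girls) / P(E | exactly 1 girl) under its naming rule.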
Karim says:
Well, I’m not sure, so maybe I need the book. I got 1/3 for Problem #1, and 1/2 for Problem #2. The second one is higher because whereas in P1, the probability of having a girl named Mary Ann is the
same for families with 1 or 2 girls (p = 1), for P2, the probability of having a girl named Mary Ann is much higher for families with 2 girls.
One remaining question, is Henk’s mother-in-law actually named Mary Ann??
February 26, 2013 @ 10:40 am
Nate says:
For me, the intuition on P2 is that the Mary Ann in question is equally likely to be either the first or second born, regardless of whether the family is a BG or GG family.
February 26, 2013 @ 12:40 pm
Eliezer Yudkowsky says:
Am I missing something? I’m a pretty experienced Bayesian and I’m getting 1/2 for both cases. In case 1, being told that the family has a girl named Mary Ann gives a posterior probability of ~1 the
first child was a girl (this name is far more likely to be chosen for first-born girls than any other) so we’ve got a 50% probability the other child is a girl. In Bayesian terms, families with two
girls are twice as likely to have a first-born girl as families with 1 girl, hence twice as likely to have a Mary Ann, hence prior odds of 1:2 for GG vs. GB times a likelihood ratio of 2:1 for “Mary
Ann” yields posterior odds of 1:1.
In the second case, a family with 2 girls is again twice as likely to have a girl named Mary Ann as a family with 1 girl (there are 2 girls instead of 1 who might randomly be assigned that name) so
prior odds of 1:2 times a likelihood ratio of 2:1 again equals posterior odds of 1:1 that both children are girls.
Am I misunderstanding the problem? This looks almost exactly like a practice problem that I designed for one of my own sessions on Bayesianism so I might be misinterpreting it to say the same thing
my practice problem did.
February 26, 2013 @ 11:58 pm
Henk Tijms says:
Eliezer, the wording in the first problem may not be perfect. It is said that the dictator has decreed that the first girl (if any) who will be born in the family should bear the name Mary Ann. The
family has no choice: the girl MUST bear the name Mary Ann.
February 27, 2013 @ 2:10 am
Eric says:
(Sorry if this posted twice)
Eliezer, you are correct given how you interpreted the problem. This is why the postscript was added about the wording being crucial and Bayes’ rule forcing you to make your assumptions transparent.
Karim (and I) interpreted the phrase “first-born girl” to mean the first girl born to a couple, even if it is the second child to be born. Thus we interpreted the dictator’s law in the first problem
as saying, “Mary Ann shall be the name of the first girl a family has, even if they have a boy first, and if they have a girl first, the second girl shall be given another name,” whereas you
interpreted the law as saying, “Mary Ann shall be the name of a girl only when it is the first child of a couple.”
Your interpretation means that only half of all non-GG families with girls will have a girl named Mary Ann. Under our interpretation, the first problem is just asking you to figure out the
probability a family is GG given the fact they have a girl, since all families with at least one girl have a girl named Mary Ann. And that probability is 1/3. As you said, GG families are twice as
likely to have a girl named Mary Ann in the second problem, so the probability is 1/2.
One problem with interpreting the “first-born girl” naming rule as you did is that you are forced to assume all BG families in the first problem never name their second children Mary Ann, although we
are not given any information about a rule against naming second-born children Mary Ann if they are the first girl born to a family. I think the second problem also must imply our interpretation,
since the second-born girl’s list is shortened by the selection of the first-born girl, (“for each second-born girl in the family a name must be randomly chosen from the remaining 9 names,”) which
would not be possible in BG families.
February 27, 2013 @ 2:40 am
Henk Tijms says:
Eric, thanks for your clarification. On second thought about the first problem, I think that it is not relevant for the asked probability whether any second girl in the family may also bear the name
Mary Ann after the first girl born in the family has received the name Mary Ann. This follows from a closer examination of the derivation of the probability using Bayes’ rule in odds form.
February 27, 2013 @ 3:09 am
Eric says:
Henk, you are right, it is not relevant to the first problem if families with two girls have a second girl named Mary Ann.
However, it is relevant if families that first had a boy cannot name their first girl Mary Ann because she was not the first born child. That would imply that non-Girl,Girl families with at least one
girl are half as likely to have a girl named Mary Ann as Girl,Girl families. And since there are twice as many non-Girl,Girl families, half of the first-born Mary Ann’s are with them (in the Girl,
Boy families) and half are in the Girl, Girl families.
February 27, 2013 @ 4:23 am | {"url":"http://www.decisionsciencenews.com/2013/02/26/two-girls-on-an-island-problems/","timestamp":"2014-04-18T15:46:58Z","content_type":null,"content_length":"55742","record_id":"<urn:uuid:3f973ab2-d7b9-4296-807d-5d1f9fd0a799>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Connected components of large induced subgraphs of hypercubes
Let $H$ be the $n$-dimensional hypercube, i.e. $\{0,1\}^n$ with edges between two vertices if and only if they differ in exactly one co-ordinate. We say that an edge is in direction $i$ if its
endpoints differ in exactly the $i$'th co-ordinate. Suppose $V$ is a subset of $H$ such that $|V| > 2^{n-1}$. Is it true that at least one connected component of the graph induced by $V$ contains
edges in all $n$ directions?
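For a small example, take $n=3$ and let $V$ consist of the four even-weight vertices together with $111$, so that $|V| = 5 > 2^{2}$; then $111$ is adjacent to $011$, $101$ and $110$, and its component contains edges in all three directions.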
extremal-graph-theory co.combinatorics
1 Is it known how many connected components there can be for such a subgraph? For small n, the answer seems to be 1. Gerhard "Ask Me About System Design" Paseman, 2011.08.08 – Gerhard Paseman Aug 8
'11 at 16:19
Gerhard, one can have two components in $\{0,1\}^4$ already. – Gjergji Zaimi Aug 8 '11 at 16:51
Indeed Gjergji, also for n=3. I wasn't thinking hard enough when I made the earlier comment. Taking V to be the set of all points with an even number of bits gives the maximum number of connected
components; adding 1 more point connects at most n of those components. I originally thought there was a simpler argument to provide a yes answer. I now feel that something like Hall's theorem is
needed, after remembering about the V above. Gerhard "Coffee Does Make A Difference" Paseman, 2011.08.08 – Gerhard Paseman Aug 8 '11 at 17:21
1 @domortop: Yes, we can. We get $2^{n-2}$ subcubes of size 4 by fixing the first $n-2$ co-ordinates of $H$. Since $|V| > 2^{n-1}$ at least one of these subcubes has 3 points in it. This implies 2
directions. Since we have proof by hand for $n$ up to 5, we can conclude that some component will have 5 directions. – Sukhada Fadnavis Aug 10 '11 at 13:59
2 And how about the (much) weaker assertion that there exists a connected component of size at least $n+1$? (Cf. the case where $V$ consists of all even-weight vertices and just one odd-weight
vertex.) – Seva Aug 10 '11 at 20:14
1 Answer
Yes, this is true. Thanks to Sukhada Fadnavis and Seva for pointing out in the comments that the argument I had written here was wrong. Instead I will point you to the paper where this
is proved
"Bulky subgraphs of the hypercube", by Andrei Kotlov, Europ. J. Combinatorics (2000) 21, 503-507
As far as I can tell from looking at the literature, it is not known if there are configurations of more than $2^{n-1}$ vertices for which one can not find $n+1$ of them which induce a
tree with an edge in every direction. This would be a strengthening of the result in question.
I am trying to understand your solution. I suppose, your $d$ is actually the degree of $u$ in the bipartite subgraph, induced by $X$ and $N_G(X)$ (not in $G$), and $N(G)$ was meant
to be $N_G(X)$; is this correct? Now, could you explain the conclusion that "the degree of a vertex in $N_G(X)$ is always less than or equal to the degree of its neighbours in $X$"?
And, finally, exactly where you use the assumption that none of the connected components has an edge in every direction? Thanks! – Seva Aug 8 '11 at 20:52
Hi Gjergji, using your notation above, if $x_1, x_2$ are in the same connected component then I don't see why the fourth vertex $v$ should be in $N_G(X)$. Could you please explain?
Thanks. – Sukhada Fadnavis Aug 9 '11 at 4:33
@Sukhada: this is actually easy: if $x_1$ and $x_2$ are both neighbors of $u$, then each of hem differs from $u$ in just one coordinate, and the fourth vertex, say $v$, differs from
$u$ in both these coordinates, and differs from each of $x_1$ and $x_2$ in exactly one coordinate; hence $v$ is a neighbor both of $x_1$ and $x_2$. – Seva Aug 9 '11 at 7:05
1 @ Seva. But $v$ may be in $V$, then the edges between $x_1, x_2$ and $v$ don't show up in the induced bipartite graph between $V$ and $H\setminus V$. – Sukhada Fadnavis Aug 9 '11 at 7:46
1 Awesome! Thanks a lot for the answer Gjeirgji :) – Sukhada Fadnavis Aug 17 '11 at 18:14
Summary: NON-PL IMBEDDINGS OF 3-MANIFOLDS.
1. Introduction. All imbeddings will be locally flat and all isotopies will
be ambient isotopies. Let M^m, N^{m+2} be closed PL manifolds, m ≥ 3. For any
imbedding f : M^m → N^{m+2} there is an obstruction in H^3(M; Z_2) to isotoping f to
a PL imbedding [6]. However, it follows from the twisted product structure
theorem of [9] that for m ≥ 5 there is a PL manifold M' and a homeomorphism
g : M' → M such that f ∘ g : M' → N is isotopic to a PL imbedding. Thus for m ≥ 5
there is a natural way of replacing any imbedding of M in N with a PL
imbedding of a homeomorphic manifold M'.
Here we study the analogous situation for m=3. Since PL structures on
3-manifolds are unique, the above theorem does not extend directly. However,
we show there is a biunique correspondence between isotopy classes of non-PL
imbeddings of M in N and isotopy classes of PL imbeddings (with an ap-
propriate condition on fundamental group) of a manifold M' homology equiv-
alent to M.
In particular, we will show in Section 4 that the cobordism classification of
non-PL 3-knots in S^5 [4, 2] is equivalent to the cobordism classification of PL
imbeddings of certain homology 3-spheres in S^5. The correspondence is natural
in the sense that a 3-knot and the corresponding imbedded homology sphere | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/467/1383636.html","timestamp":"2014-04-21T07:44:04Z","content_type":null,"content_length":"8418","record_id":"<urn:uuid:a8ae9bd3-e2f4-44c3-99d9-50ad9c2824e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two coplanar lines - Math Central
We have two responses for you
If the lines are coplanar, they either intersect (in a single point), or are the same line (colinear) or are parallel (no intersection). If you know they intersect (perhaps from the context of the
question), you can immediately look for the single point.
At this point, all the equations must work so:
(x-5)/4=(y-7)/4 AND
(y-7)/4=-(z+3)/5 AND
(x-8)/7=y-4 AND
You have three unknowns, but you have four equations here, so you can solve this the same way as you would with three variables and two equations: use the substitution and/or the elimination method.
I would:
1. Identify the two equations that don't have z in them.
2. Solve each for x.
3. Make the two expressions that equal x, equal each other. Solve for y.
4. Check this triple (x, y, z) in both lines to make sure I didn't make an arithmetic mistake.
Stephen La Rocque.
Hi Robin,
Look at the East wall of the room you are in. The line where the East wall meets the ceiling (line 1) and the line where the East wall meets the floor (line 2) are parallel lines and they are
coplanar. The East wall is the plane that contains them both.
Now look at the vertical line where the North wall meets the East wall (line 3). Line 1 and line 3 meet at the upper North East corner of the room and again these lines are coplanar. The East wall is
the plane that contains them both.
Finally look at the line where the North wall meets the floor (line 4). There is no point that is on lines 1 and 4 and these lines are not coplanar.
The room you are in is special because the lines I have identified are at right angles to each other. You can however imagine a room where the corners are not square and the ceiling and floor are not
parallel and the situation is similar. Two lines in 3-space are either parallel (and are hence coplanar), meet at a point (and are hence coplanar) or are not parallel and do not meet (and are hence
not coplanar).
I hope this helps, | {"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.06/h/robin3.html","timestamp":"2014-04-19T12:12:02Z","content_type":null,"content_length":"8974","record_id":"<urn:uuid:d4e005fd-82fa-423d-89dd-a12d09b2d74b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
Getting into quant finance, which course is more appropriate (self.quant)
submitted ago by give_chance
First of all, sorry if I seem naive. Just a quick note; I live in the UK.
Some background first:
I'm a programmer with a strong background in mathematics. At my current job, I do slot machines maths modelling and verification, so I do a lot of statistics but it's all basic level, it requires
thinking a lot but it doesn't require a great deal of mathematical background. Verification is based on coding simulations in C# and checking that my statistics are correct (I guess that's called
Monte Carlo simulation)
I want to get into banking, I will be straight honest, it's for the money and the position and because I'm in London so it's the best place (one of) for this kind of jobs. However, I do actually
enjoy mathematics and programming mathematical applications, I have always had A* in my mathematics courses, but I studied Computer Science so I didn't do "Advanced" level mathematics.
My situation now:
I have a first class degree Computer Science bachelor from a recognised university but it's not a top-ten one. I'm going to do Master's soon hopefully, and I thought I will do a course related to the
field I want to get into. Now here is the thing, I don't necessarily want to be a quant, I basically just want to have a job in the financial industry that requires a lot of mathematics-oriented programming.
My options:
So I have three different courses:
Oxford - MSc Computational and mathematical finance
Imperial London - MSc Mathematical finance
Imperial London - MSc Statistics
One of the tutors at Imperial advised me to do Statistics instead of Mathematical finance because, according to him, it's getting more difficult now to get a quant job and Statistics is broader so I
have more options regarding my job perspectives.
My queries:
1 - Which jobs satisfy my needs (mathematical-oriented programming in a financial industry) other than quantitative finance?
2 - Do you agree with the tutor at Imperial that I should do Statistics instead of Mathematical Finance, the main thing that attracts me to the Maths Finance MSc is that it includes a three months
internship (if I can) which is EXTREMELY important in my opinion, but I still want to hear what you think.
Thanks, I tried to break it down to make it easy to read. Cheers, any advice would be appreciated.
all 4 comments
[–]pdizzz
Before I say anything, I highly suggest you do a post over at quantnet as you will most likely get the best answer from them as they are much more knowledgeable than myself, and they have a much more
active community than /r/quant.
I know you say you don't necessarily want to be a "quant", however what most quants do is use math and computation in their daily jobs, no matter what part of the industry you are in.
The first thing you need to do is dissect each program, and see if you actually qualify for the program itself. I know you say you are very good at math, however you did not take very high-level math
courses. Most quant programs require a background in probability/multivariable calculus (calc III)/differential equations to be the most successful.
After taking a look at Oxford and Imperial London (MSc MathFin), both of them pretty much have the same requirements for admission, i.e. probability, analysis, and differential equations (Oxford also
asks for Linear Algebra). If you have not taken those courses, I'm not really sure what that means exactly. I see that Oxford offers refresher courses on those subjects, however I don't know how
difficult it could be without having taken the courses at all.
My problem with the statistics is that, is the MSc in statistics enough? In the US it probably would not be, as many institutions, if they are hiring people with advanced math degrees, typically want
phds. I'm not sure if it is the same for the UK. I see that Imperial London has a lot of finance related stats courses, however I don't know whether or not it will put you in as much of an advantage
as you'd like.
Since you have the CS degree, I think it would be best to stick with the curriculum that is most geared toward computing, since that is what you excel at. Just as in america, some schools, such as
CMU and NYU, have a much greater focus on the computing aspect of quant finance. However I see you know C#, which is good, however the premier language of finance is C++ (or matlab). Becoming
proficient in C++ might be one of the most, if not the most, important parts of working in quant finance.
The last, and probably most important part, is job placement. Where are these programs placing people? What banks, hedge funds, etc. are coming to hire people? What is the percentage of those with
employment after x months. The main reason I think statistics is not the best idea is that it could make finding a job, specifically in finance, harder, since you don't have immediate access to
financial recruiters. You are also very, VERY correct about the internship part. An internship is a catalyst to a career, and securing one is essential.
In terms of actual jobs, there are millions of different careers you could have. Traders, research in high frequency trading, algo trading, portfolio manager (typically a job that you acquire after
many years in the industry), there are so many different jobs I won't bother to name them all. I'd also say that, even though you want to work in "banking", banks are the last places you should be
looking for work. Typically quants want to look at Hedge Funds, with banks being the last option (this is for a lot of reasons, such as the heavy regulations on trading that have been enacted in the
Like I said, I'm far from an expert, but I've read so much on quantnet as well as a lot of publications they list on their site so I like to think I'm somewhat well educated on the subject :).
I would just suggest you post there as well to get more advice. There is also the Wilmott forum which also serves the quant community. I personally think Oxford would be the best fit for you with
your undergrad degree in CS (I am also currently pursuing a CS undergrad degree as well).
[–]incraved
Saugus Geometry Tutor
Find a Saugus Geometry Tutor
...I am a former teacher with 25+ years tutoring experience. I have tutored test prep including ISEE, SSAT, SAT, SAT II, ACT. I have taught Middle and H.S.
19 Subjects: including geometry, GRE, algebra 1, algebra 2
...I also tutor math and writing for middle school and high school students. I was trained by and spent 5 years working for one of the major test prep companies. While working for that company, I
earned a "Tutor of the Year" award and I was one of the most in-demand tutors for my excellent student results.
26 Subjects: including geometry, English, linear algebra, algebra 1
...Fortunately, the SAT math exam covers a specific, well-defined set of topics, and tends to ask the same types of questions in the same way year after year. I will help you with the topics that
you find the most difficult, and I have some basic strategies to offer that will help you effectively o...
44 Subjects: including geometry, English, chemistry, reading
Do you want better grades and test scores? Do you want to get the most from your classes? Is something holding you back from doing your best?
34 Subjects: including geometry, reading, English, writing
...I teach the necessary concepts in order to obtain the answers and also how to use more efficient and quicker methods. SAT preparation requires lots of practice and I offer a study schedule
based on the amount of time which remains until the exam and my assessment of the student's level. Increased scores should follow.
24 Subjects: including geometry, chemistry, calculus, physics
Nearby Cities With geometry Tutor
Belmont, MA geometry Tutors
Chelsea, MA geometry Tutors
Danvers, MA geometry Tutors
East Boston geometry Tutors
Everett, MA geometry Tutors
Lynn, MA geometry Tutors
Malden, MA geometry Tutors
Melrose, MA geometry Tutors
Peabody, MA geometry Tutors
Reading, MA geometry Tutors
Revere, MA geometry Tutors
Stoneham, MA geometry Tutors
Swampscott geometry Tutors
Wakefield, MA geometry Tutors
Winchester, MA geometry Tutors | {"url":"http://www.purplemath.com/saugus_geometry_tutors.php","timestamp":"2014-04-16T19:47:35Z","content_type":null,"content_length":"23519","record_id":"<urn:uuid:058ae17d-5287-401c-8cbc-869c5504c1cc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
New mathematical model links space-time theories
The attached image shows a 'black string' black hole phenomenon with perturbation. Credit: University of Southampton
Researchers at the University of Southampton have taken a significant step in a project to unravel the secrets of the structure of our Universe.
Professor Kostas Skenderis, Chair in Mathematical Physics at the University, comments: "One of the main recent advances in theoretical physics is the holographic principle. According to this idea,
our Universe may be thought of as a hologram and we would like to understand how to formulate the laws of physics for such a holographic Universe."
A new paper released by Professor Skenderis and Dr Marco Caldarelli from the University of Southampton, Dr Joan Camps from the University of Cambridge and Dr Blaise Goutéraux from the Nordic
Institute for Theoretical Physics, Sweden published in the Rapid Communication section of Physical Review D, makes connections between negatively curved space-time and flat space-time.
Space-time is usually understood to describe space existing in three dimensions, with time playing the role of a fourth dimension and all four coming together to form a continuum, or a state in which
the four elements can't be distinguished from each other.
Flat space-time and negative space-time describe an environment in which the Universe is non-compact, with space extending infinitely, forever in time, in any direction. The gravitational forces,
such as the ones produced by a star, are best described by flat space-time. Negatively curved space-time describes a Universe filled with negative vacuum energy. The mathematics of holography is best
understood for negatively curved space-times.
Professor Skenderis has developed a mathematical model which finds striking similarities between flat space-time and negatively curved space-time, with the latter however formulated in a negative
number of dimensions, beyond our realm of physical perception.
He comments: "According to holography, at a fundamental level the universe has one less dimension than we perceive in everyday life and is governed by laws similar to electromagnetism. The idea is
similar to that of ordinary holograms where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card, but now it is the entire Universe that is
encoded in such a fashion.
"Our research is ongoing, and we hope to find more connections between flat space-time, negatively curved space-time and holography. Traditional theories about how the Universe operates go some way
individually to describing its very nature, but each fall short in different areas. It is our ultimate goal to find a new combined understanding of the Universe, which works across the board."
The paper AdS/Ricci-flat correspondence and the Gregory-Laflamme instability specifically explains what is known as the Gregory Laflamme instability, where certain types of black hole break up into
smaller black holes when disturbed – rather like a thin stream of water breaking into little droplets when you touch it with your finger. This black hole phenomenon has previously been shown to exist
through computer simulations and this work provides a deeper theoretical explanation.
In October 2012, Professor Skenderis was named among 20 other prominent scientists around the world to receive an award from the New Frontiers in Astronomy and Cosmology international grant
competition. He received $175,000 to explore the question, 'Was there a beginning of time and space?'.
More information: AdS/Ricci-flat correspondence and the Gregory-Laflamme instability, prd.aps.org/abstract/PRD/v87/i6/e061502
1.7 / 5 (29) May 30, 2013
"Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality." Nikola Tesla
And somebody thinks this guy deserves $175k to "explore the question, 'Was there a beginning of time and space?''. Now I understand the derision by Q and others, tear down any theory that threatens
their metaphysical mumbo jumbo theory so as to be able to secure continued funding for their religious devotion to fantasmical pseudo scientific mind games.
1.8 / 5 (6) May 30, 2013
This is quite amazing.
Requires deep thought.
2.3 / 5 (6) May 30, 2013
Reminds me of Slaughterhouse 5. The aliens didn't see time linearly like we did. They could see the whole of time all at once. Maybe time is just a projection onto our universe.
1 / 5 (22) May 30, 2013
In dense aether model this effect can be routinely observed in dark matter streaks with galaxies wrapped up on it. This similarity exists because the dark matter is five-dimensional effect as well.
1.8 / 5 (25) May 30, 2013
This illustration looks like the "anal bead" entering the "black hole" theory, and has all the legitimacy of it as well.
2.7 / 5 (7) May 30, 2013
More stringers, pushing AdS again. Why does holography `not work' in deSitter space ? AdS goes completely against known physics: The cosmo constant is positive, Not negative, as in AdS.
5 / 5 (3) May 30, 2013
fyi: preprint at http://arxiv.org/abs/1211.2815
1.2 / 5 (18) May 30, 2013
And somebody thinks this guy deserves $175k to "explore the question, 'Was there a beginning of time and space?''
Nope, but even the mathematicians should get some money for living. It's sorta social service - east Asia countries are paying monks for praying and doing nonsensical things whole their lives for
maintaining the chastity at the price - well, and we have mathematicians and theoretical physicists. So I'd tolerate it without problem - if just these guys wouldn't fight against cold fusion and
similar research most obstinately. And I'm not talking about dense aether model, which answer this question easily. These parasites cannot admit it - or their existence would become meaningless
2.4 / 5 (9) May 30, 2013
Geometers at work. Like any frontier you plot the way for others to leave or stay the paths.
1.3 / 5 (18) May 30, 2013
Nope, but even the mathematicians should get some money for living.
yeah, behind a cash register, not trying to devise reality through metaphysical mathematics.
3.5 / 5 (6) May 30, 2013
Obviously, they grow/smoke some truly awesome shit in the UK.
3.5 / 5 (13) May 30, 2013
This is quite amazing.
Requires deep thought.
The idea behind what they did is amazingly simple (though do not mistake that to mean the same as 'easy'):
You look at what kind of information the various theories would give - and try to find a transformation from one to the other (much like you find a transformation from a 3D object to another, similar
one that has been translated/rotated in space).
Effectively they are doing a principal component analysis of the independent assumptions that make up the individual theories and trying to match them.
The cool thing is: with this you can see whether two theories are equivalent (because then you get a flawless transformation). Any area where the transformation is 'off' is somewhere you can
devise a test to delineate which one has it right and which one has it wrong.
5 / 5 (4) May 30, 2013
They lost me at "negative number of dimensions".
1 / 5 (18) May 30, 2013
They lost me at "negative number of dimensions".
In dense aether model it would correspond the chaotic, tachyonic stuff from perspective of longitudinal waves of environment. For example the electrons within superconductors are moving in
negatively-dimensional space-time from perspective of atom lattice. Note that whole the holographic model is inherently tachyonic, or it couldn't work at all. You cannot form the photons and similar
fast particles with projection of waves, which aren't already superluminal.
1 / 5 (17) May 30, 2013
In particle simulations it's quite common, the more we compress the particles, the more their motion will remain constrained to certain locations (Wigner orbitals), i.e. zero-dimensional. But we can
compress the particles even more and from this moment the motion of particles will become negatively-dimensional and unstable: they will shot itself in random directions with high speed. It can be
even modeled experimentally with so-called plasma crystals. This is how you could visualize the consequences of negative dimensionality or motion freedom degree.
4.8 / 5 (13) May 30, 2013
For those whinging at mathematics, it is the best TOOL we have in order to compute observable phenomena, and to discover unseen phenomena. You may whinge at maths, but it is probably because most
people can't do the level of maths that professional mathematicians do. Treat maths as a science, as it continues to grow, and discover novel approaches to calculate reality. The mistake YOU make
though, is to try and "picture" the maths as a real physical process. NOPE, the maths simply allows you to COMPUTE a real physical process, NOT give an actual "mind, visual" representation of what is
happening. Maths uses logic, and it must remain consistent. Any "theory" must remain self consistent, and make accurate calculations that match experimental evidence. Research Richard Feynman and his
work on QED, and all will become clear.
If you still hate maths, then what tool do you prescribe to advance our ability to compute, and thus utilise knowledge of the Universe?
4.8 / 5 (11) May 30, 2013
For an example given by Feynman himself, consider a quantity such as the average household in a given town may have 2.3 people living in it. Now that makes perfect sense from a mathematical point of
view, and allows you to compute the overall population of that town, or a good estimate in any given region of the town. However if you try to visualise this, can you really visualise .3 of a person,
and they be alive??? Nope, however this does not detract from the mathematical model utilised. Mathematics therefore can perform operations and give results which do not make sense in your mind,
however the results are useful if you wish to actually do something or build something and have a predicted outcome. So stop comparing each element of maths to reality and simply use it!!!! Though if
a result does not match observation then it's just another paper to sit on the shelf as a failed attempt. Of course a failed mathematical attempt is also useful, to remove one approach as a possible
1.5 / 5 (15) May 30, 2013
It always amuses me when mathematicians and scientists tell you they're trying to unify; all the while throwing out math of increasing complexity...
When they can explain this to a 5 year old; then I'll think they may understand it themselves.
Until then; it's as good a theory as any...
3.4 / 5 (15) May 30, 2013
It always amuses me when mathematicians and scientists tell you they're trying to unify; all the while throwing out math of increasing complexity...
Whereas the alternative is what? That you might actually learn something? But no: science has to be dumb or you won't believe it.
News-flash: The universe isn't as simple as you want it to be. So either get with the program or just admit to being not smart enough. But don't put the blame for your laziness on the shoulders of
When they can explain this to a 5 year old; then I'll think they may understand it themselves.
They understand it well enough - never fear. YOU don't understand it - and are certainly not the person to judge what they understand or not.
4.5 / 5 (6) May 31, 2013
"Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality." Nikola Tesla
Tesla, no doubt, would have expressed far different sentiment if he lived in today's scientific world of wonder.
Zep Tepi
1 / 5 (6) May 31, 2013
For an example given by Feynman himself, consider a quantity such as the average household in a given town may have 2.3 people living in it. -ober
Great, I have a 1/3 entity, reminiscent of Schrodinger's cat, about the house. Not a live human and not a dead one.
5 / 5 (4) May 31, 2013
It always amuses me when mathematicians and scientists tell you they're trying to unify; all the while throwing out math of increasing complexity...
When they can explain this to a 5 year old; then I'll think they may understand it themselves.
Until then; it's as good a theory as any...
What nonsense. This isn't Star Trek where 7 year olds are discussing Maxwell's Equations all because the mythical bridge to understanding was perfected.
Zep Tepi
1 / 5 (6) May 31, 2013
"According to holography, at a fundamental level the universe has one less dimension than we perceive in everyday life and is governed by laws similar to electromagnetism. The idea is similar to that
of ordinary holograms where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card, but now it is the entire Universe that is encoded in such a fashion."
Lets take that further. Here I have quantum dot [more like a black hole]
and now I move it about _|_ on these 3 planes with said dot at the center [yes it is two dimensional at this point]. But let us make this move not just left right and up but also down
Then there are three dimensions created by moving about something with zero dimensions.
Numbers might then match what you perceive but cannot ever describe a quantum dot. [or what happens in a black hole]
Lets expand that movement
How many dimensions now, from none?
3.1 / 5 (17) May 31, 2013
I'm so sick of reading about how dense aether model applies to every article on this site.
Who spends their life sitting on a mainstream physics website blabbing about some theory, crackpot or not? What is WRONG with you?
Do you even have any friends? If so, have you ever noticed that they all get that look like "humor the poor mentally challenged guy who I'm hanging out with so he doesn't commit suicide or something"
every time you start talking about how the foam on your pint or the way that dart flew through the air confirms dense aether model?
At least I can understand the weird Indian guy and the vacuum mechanics guy, they're just plugging their sites. I even get the "mainstream science gave me PTSD" guy, at least he believes it's
personal and egregious. Classic paranoia, complete with intangible malefactor and rare knowledge fantasies. You, however, appear to be nothing more than a fixated proselyte who obsessively comments
on phys.org to make the voices stop.
Seek help...
1.5 / 5 (8) May 31, 2013
This illustration looks like the "anal bead" entering the "black hole" theory, and has all the legitimacy of it as well.
This usually happens after string theory and before big bang theory.
5 / 5 (2) May 31, 2013
... I move it about _|_ on these 3 planes ...
You've only drawn 2 lines, horizontal and vertical, but you can't draw one out of the screen so I'll imagine that one.
How many dimensions now, from none?
You drew lines on the screen which is 2D. If I imagine it coming out of the screen, that's a third.
4.6 / 5 (10) May 31, 2013
I'm so sick of reading about how dense aether model applies to every article on this site.
Who spends their life sitting on a mainstream physics website blabbing about some theory, crackpot or not? What is WRONG with you?
Worse than that, there is no such thing as "dense aether", he doesn't even have a crackpot idea. He just means what other people would describe by "IMO" and it is pure techno-babble.
1 / 5 (5) May 31, 2013
If you assume time isn't a dimension and I don't think it is, that leaves 3. But is it 3 or is it just 3 because we choose to describe it that way. To me there is only breadth and depth, 2
'dimensions'. Harder to describe that way but just as valid.
1 / 5 (14) May 31, 2013
The dense aether model provides just another way, how to think about Gregory-Laflamme instability. In dense aether model the gravity field arises from shielding of longitudinal waves of vacuum
(gravitational waves) with massive bodies. But the other massive bodies are shielding these waves too and after then we can get a legitimate action, how this shielding of shielding will appear at the
case of two or more collinear objects. In this model the dark matter fibers should be the more pronounced, the more massive objects (black holes, galaxies) are sitting along line. But when these
objects will get too close each other, then the gravitational effect of shielding will prevail over dark matter of shielding of shielding and the dark matter string will break into beads.
5 / 5 (13) May 31, 2013
Dense aether concept was proposed with Oliver Lodge in 1904
"Concept" yes but he never turned it into a theory, nor could he have because as you know gas doesn't support transverse waves. All he did was speak in favour of Lorentz's aether theory and then use
some basic ideas to describe his ideas for a model.
If you never read it, ..
You know I did, when we discussed it last year. I asked you whereabouts in his paper he proposed anything other than Lorentz's formula and you refused to answer.
Your posts are driven with pure ignorance.
No, I've read the reference you gave me to his paper, you seem to imagine it contains something that it does not, the lack of knowledge is on your part.
1 / 5 (14) May 31, 2013
because as you know gas doesn't support transverse waves
Its foamy density fluctuations do. Such fluctuations exist for example in dense supercritical fluids and inside of even more dense environment their foamy character will be pronounced even more.
Such a strings/branes/whatever spread the energy like the membranes of foam. BTW the dense aether model is not about some particular form of matter, like the gas, fluid, foam or plasma. It considers
all forms of matter including the vacuum as a dynamic emergent continuum.
1 / 5 (13) May 31, 2013
BTW you can consider the well forgotten string-net liquid theory in this regard.
5 / 5 (13) May 31, 2013
because as you know gas doesn't support transverse waves
It's foamy density fluctuations do.
Nope, not without shear stress.
BTW the dense aether model is not about some particular form of matter, like the gas, fluid, foam or plasma.
Then again it isn't a theory. For that you need to be able to demonstrate that you can make numerical predictions of physical effects and you can't do that unless you know what material you are
Like Lodge, it's all talk and no theory, as I said.
It considers all forms of matter including the vacuum as a dynamic emergent continuum.
Rubbish, read Lodge's paper if you're going to try to use it.
1 / 5 (15) May 31, 2013
Nope, not without shear stress
Why not, the light waves are transverse, there is lotta "shear stress" (Maxwell's displacement current)
Then again it isn't a theory. For that you need to be able to demonstrate that you can make numerical predictions of physical effects
It's like the attempt to describe the waterfall with math equations. You can write thousand of it, but you will still never be able to imagine it.
Rubbish, read Lodge's paper if you're going to try to use it.
Rubbish, this is exactly his idea - just try to read his book. But the idea of dense aether model is surprisingly old, even the medieval physicists had started to think about it.
Robert Hooke, 1687: "All space is filled with equally dense material. Gold fills only a small fraction of the space assigned to it, and yet has a big mass. How much greater must be the total mass
filling that space."
BTW Robert Hooke was first, who explained the colors with different wavelengths of aether waves.
5 / 5 (9) May 31, 2013
The usual gaggle of natello, ValeriaT, Zephyr, and other pseudoscientists is beginning to take control of this thread, complete with references to vacuum-mechanics.com, dense aether theory,
out-of-context quotes, and other outright lies. Almost everything they say, almost any argument they make, is uninformed and stupid at best and outright intellectually dishonest at worst.
4 / 5 (12) May 31, 2013
The usual gaggle of natello, ValeriaT, Zephyr, and other pseudoscientists is beginning to take control of this thread
You may not be around long enough - but those three (and a couple of other names) are all the same poster using various sockpuppets. You'll soon notice as they have the same idiosyncratic way of
posting (and at some point he even tried a ludicrous ploy where he was arguing with himself but screwed it up by using the wrong name)
By now he's admitted to being all these. He's just the sort of crazy you have to learn to read past here.
5 / 5 (1) Jun 01, 2013
This illustration looks like the "anal bead" entering the "black hole" theory, and has all the legitimacy of it as well.
The sad thing is that theory is still more closely related to reality than the eu/pc theories.
4.4 / 5 (7) Jun 01, 2013
Nope, not without shear stress
Why not, the light waves are transverse, there is lotta "shear stress" (Maxwell's displacement current)
Then again it isn't a theory. For that you need to be able to demonstrate that you can make numerical predictions of physical effects
It's like the attempt to describe the waterfall with math equations.
Then how are you going to derive Maxwell's Equations from your concept? That's what a theory is, not some vague philosophical hand-waving. Until you derive those, you have nothing.
1 / 5 (10) Jun 01, 2013
The delusion of time, space and the fourth dimension, some physicists claim time itself has been imperceptibly, steadily, slowing down for the life of the universe. I could have used that 175k to
conclusively pinpoint Bugs Bunny's hideout, and put the whole matter to rest once and for all.
1 / 5 (14) Jun 01, 2013
Then how are you going to derive Maxwell's Equations from your concept?
Maxwell originally used the elastic fluid model for derivation of his equations, as the similarity of equations for fluids and vacuum indicates clearly. We know about many hydrodynamical analogies of
magnetic field, for example. This is because the elastic fluids deform in two perpendicular directions, which do result in propagation of EM field in form of electric and magnetic waves, which are
mutually perpendicular to each other. After all, Maxwell was an aetherist, he didn't recognize any other model in his time - so he even couldn't use a different one.
1 / 5 (14) Jun 01, 2013
That's what a theory is, not some vague philosophical hand-waving
BTW The quantum mechanics and general relativity theories predict quite opposite things in many aspects (like the vacuum energy density). For example, the general relativity predicts, all massive
objects will collapse into singularity, whereas the quantum mechanics predict, the wave packets of free particles will always expand into infinity. Such a theories can be never reconciled mutually in
strictly rigorous way, simply because we cannot connect two sets of equations yielding the different solutions in strictly deterministic way (despite the mainstream physicists are trying it
obstinately, because they don't realize it and because they're getting the money from dumb publics for it). The unitary theory therefore will always contain some "hand-waving" in its core.
1 / 5 (12) Jun 01, 2013
I often compare the attempts for reconciliation of quantum mechanics with general relativity to construction of Alexander's horned sphere. The deeply nested hyperdimensional fractality connecting the
worlds bellow and above the human observer scale just corresponds the complexity of terresterial life and living forms residing between these two scales. The mainstream physicists have no idea, what
they're attempting for with their silly low-dimensional models. Despite of it, the combination of four-dimensional general relativity and quantum mechanics theories can describe some aspects of
five-dimensional reality like the small curved patches, which are used for paving of rough surface. In this sense you can consider the stuffs like the AdS/CFT duality and Gregory-Laflamme instability
as a relevant models - they just manifest itself in more subtle way and at different places, than the theorists are expecting right n
5 / 5 (3) Jun 01, 2013
Then how are you going to derive Maxwell's Equations from your concept?
Maxwell originally used the elastic fluid model for derivation of his equations, ..
No, he used the equations previously found by Gauss, Faraday and Ampere and rewrote them in a mathematically symmetrical form. Adding displacement current completed the symmetry. An elastic solid
would give the transverse restoring force needed for propagation but any viscous fluid behaviour would damp it out and limit the range of EM. Maxwell's Equations are only compatible with a rigid
(crystalline) aether.
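For reference, the equations being argued about, in their standard modern vacuum form (nothing here is specific to this thread):

$$\nabla\cdot\mathbf{E}=0,\qquad \nabla\cdot\mathbf{B}=0,\qquad \nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad \nabla\times\mathbf{B}=\mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}.$$

Taking the curl of either curl equation gives a wave equation whose propagation speed is $c = 1/\sqrt{\mu_0\varepsilon_0}$, the relation quoted further down the thread.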
After all, Maxwell was an aetherist, he didn't recognize any other model in his time - so he even couldn't use a different one.
Newton believed in absolute space but the equations he derived from it are Galilean Invariant so disposed of absolute motion. The history of the development isn't always reflected in the final
1 / 5 (12) Jun 01, 2013
he used the equations previously found by Gauss, Faraday and Ampere and rewrote them in a mathematically symmetrical form
Yep, this is how the mainstream physicists imagine the derivation of physical models - just the old equations are rewritten into another form. Not surprisingly the progress of theoretical physics
during last forty years corresponds this paradigm closely.
but any viscous fluid behaviour would damp it out and limit the range of EM
Which is why Maxwell didn't consider the viscosity, but he still proposed to look for it. BTW the range of EM is limited anyway, which is why the universe appears red-shifted with distance and black
at the end. The viscous effects of vacuum manifest itself with frame dragging effects.
Newton believed in absolute space but he ... disposed of absolute motion
In another words, he was confused in similar way, like during derivation of gravitational law. Robert Hooke got it better again.
5 / 5 (5) Jun 01, 2013
he used the equations previously found by Gauss, Faraday and Ampere and rewrote them in a mathematically symmetrical form
Yep, this is how the mainstream physicists imagine the derivation of physical models ..
Nope, it is how he derived the equations, not a model built on those. You need to learn the difference between a model and a theory.
but any viscous fluid behaviour would damp it out and limit the range of EM
Which is why Maxwell didn't consider the viscosity,but he still proposed to look for it.
Irrelevant, he derived the equations from those of others purely mathematically.
BTW the range of EM is limited anyway, which is why the universe appears red-shifted with distance and black at the end.
Nope, it would reduce the amplitude, not the frequency.
1 / 5 (3) Jun 01, 2013
hold on, someone has been paid to explain the output of a model? thats really broken
1 / 5 (11) Jun 01, 2013
it is how he derived the equations, not a model built on those
You're apparently confused: you must construct the model first, just after to develop the equations for it. The opposite way would be like the solving the homework in physics just with random
combinations of equations without understanding of their sense. In particular, Maxwell's equations are based on "molecular vortex model". You can get all details from here, for example. In Part I of
his 1861 paper, Maxwell proposed the existence of a sea of molecular vortices which are composed of a fluid-like aether, whereas in Part III, he deals with the elastic solid that these molecular
vortices collectively form. Maxwell's third equation is derived hydrodynamically, and it appeared as equation (9) in Part I.
1 / 5 (11) Jun 01, 2013
At the beginning of Part III, Maxwell says "In the first part of this paper I have shown how the forces acting between magnets ...may be accounted for .. innumerable vortices of revolving matter,
their axes coinciding with the direction of the magnetic force at every point of the field. The centrifugal force of these vortices produces pressures distributed in such a way that the final effect
is a force identical in direction and magnitude with that which we observe. Electric current is a solenoidal flow of aether in which a conducting wire acts like a pipe. The pressure of the flowing
aether causes it to leak tangentially into the surrounding sea of tiny vortices, causing the vortices to angularly accelerate and to align solenoidally around the circuit, hence resulting in a
magnetic field." Maxwell further says in the same part "conceived the rotating matter to be the substance of certain cells, divided from each other by cell-walls composed of particles"
1 / 5 (11) Jun 01, 2013
It's evident - with compare to contemporary mainstream physicists - Maxwell perfectly knew, what he is trying to describe with his equations. He understood it. He had whole his model in his head in
details before he started to write math about it, as follows from many illustrations of it - which is something, which you can never met in mainstream physics today. Please note, that his
illustration of electromotive force acting to the charged object with spin is exactly equivalent to the contemporary illustration of Newton-Magnus-Robins force acting to the rotating objects. In
particular, the equation (132) in his 1861 paper is equivalent to the famous equation E = mc² that is normally attributed to Albert Einstein more that forty years later.
3 / 5 (12) Jun 01, 2013
Take your meds, go outside, get some fresh aether.
5 / 5 (4) Jun 01, 2013
it is how he derived the equations, not a model built on those
You're apparently confused: you must construct the model first, just after to develop the equations for it. The opposite way would be like the solving the homework in physics just with random
combinations of equations without understanding of their sense.
Not at all. In his papers, Maxwell makes assumptions about the nature of charge which were badly wrong, but because they are only descriptive and all the real work is done by the maths, he still got
the right answer.
In particular, the equation (132) in his 1861 paper is equivalent to the famous equation E = mc² that is normally attributed to Albert Einstein more that forty years later.
No, although his variables are differently defined, it's essentially the equivalent of
c = 1 / sqrt(u_0 * e_0)
1 / 5 (11) Jun 01, 2013
Its variables were defined correctly - after all, the Lorentz transforms can be derived from Maxwell's theory easily - Feynman did it routinely during his lectures. The modified
Heaviside-Lorentz-Maxwell's theory is fully compatible with special relativity, because it allows only transverse waves, which have no reference frame in any material environment. Actually just the
Lorentz was, who finalized the symmetric regauging of Maxwell's theory after Heaviside and made it fully compliant with Lorentz transform in this way.
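(For reference, the transformation being invoked, in its standard one-dimensional form rather than anything particular to this poster's model: $x' = \gamma(x - vt)$ and $t' = \gamma(t - vx/c^2)$ with $\gamma = 1/\sqrt{1 - v^2/c^2}$; Maxwell's vacuum equations keep their form under it, which is the usual sense in which they are said to be compatible with special relativity.)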
5 / 5 (4) Jun 01, 2013
Its variables were defined correctly
Sure, they were just a bit different from the modern version, just as you can calculate currents in a circuit using either resistance or conductivity.
1 / 5 (2) Jun 01, 2013
Thks for the link. A migrated "Old World-ler" attuned to Europe and abreast to the latest of the world, especially science - his professional occupational passion of foregone days. Priceless, simply,
'non-lecturing' conversational explanations to high sounding theatrical words and concepts that one finds 'active' scientists addicted to now. He drops the jargon for the sake of clarity. His math
bolsters his simplicity and strengthens the coherence and consistency for the advocacy of the concepts from science that worked for him. Conviction from one's own experience! Applying what he learned
(from science) in problem solving led to his conviction that what he now preaches still works - in theory and practice.
There is no better mentor for you.
1 / 5 (7) Jun 02, 2013
ValeriaT, I know you take a lot of heat in here, and I may not always agree with you. However, you have a lot of good ideas and they are of the type that keeps life interesting. For without the
outlandish yet believable ideas the world would be incredibly dull. For that at least I thank you.
1 / 5 (7) Jun 02, 2013
Maybe it is just coincidence but the structure presented looks an awful lot like many portions of the Mandelbrot set visualized.
1 / 5 (6) Jun 03, 2013
Have we forgotten that science is about what can actually be measured? There are no observables in this piece of mathematical speculation, thus it is not science. Black rings are only mathematical
5 / 5 (1) Jun 03, 2013
Professor Skenderis has developed a mathematic model which finds striking similarities between flat space-time and negatively curved space-time, with the latter however formulated in a negative
number of dimensions, beyond our realm of physical perception.
"Our research is ongoing, and we hope to find more connections between flat space-time, negatively curved space-time and holography. Traditional theories about how the Universe operates go some
way individually to describing its very nature, but each fall short in different areas. It is our ultimate goal to find a new combined understanding of the Universe, which works across the
First it's a mathematical model. Second, the model is trying to connect other models, which DO have observables. So IF A LINK IS MADE between many models that do have observables, and previous
attempts failed to connect them, but this theory does, then I'd say it is perfectly in the realm of science. The science here is CONSISTENCY!!!!
3.3 / 5 (7) Jun 03, 2013
There are no observables in this piece of mathematical speculation, thus it is not science.
Science requires a theoretical foundation. Without that any measurement is just a point of data.
You have to start somewhere and set up a theory from which you can make deductions (which they did) by which you can verify/falsify the theory (which they haven't).
One step at a time. Without theory and prediction you can never get to the falsification stage. And it is important for a new theory to
1) either make predictions that are at odds with the current theories
2) make all the same predictions but be far simpler (which would mean one could show equivalence - and it wouldn't be a 'new' theory at all).
Look at our resident crazies: Aether, electrical universe, truth/gods, ...
They may have a 'theory' but they don't make any predictions which are conceivably testable. That's why they aren't science (and why this paper is).
not rated yet Jun 03, 2013
Holographic Universe theory is not new. Michael Talbot, "The Holographic Universe", 1992.
1 / 5 (8) Jun 04, 2013
They may have a 'theory' but they don't make any predictions which are conceivably testable
Dense aether model is testable easily. For example, I explained above, that the dense aether model predicts Gregory-Laflamme instability - but around galaxies (dark matter fibers) - not around black
5 / 5 (3) Jun 04, 2013
Dense aether model is testable easily. For example, I explained above, that the dense aether model predicts Gregory-Laflamme instability..
OK let's test that claim, post your derivation of that prediction from Lodge's theory. | {"url":"http://phys.org/news/2013-05-mathematical-links-space-time-theories.html","timestamp":"2014-04-16T19:14:35Z","content_type":null,"content_length":"155624","record_id":"<urn:uuid:7dbb226e-9b4e-4278-a562-2ef228d311d1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Separation Logic
15-819A4 Separation Logic (6 units) Spring 2003
John C. Reynolds
Second Half of Spring Semester 2003
Tuesdays and Thursdays, 1:30-2:50 pm Wean Hall 4601
Course Description
Separation logic is an extension of Hoare logic for reasoning about programs that use shared mutable data structures. We will survey the current development of this logic, including extensions that
permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We will also discuss promising future directions.
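As a one-formula taste of the logic (a standard example rather than anything specific to these course notes): assertions are combined with the separating conjunction $*$, which splits the heap into disjoint parts, and the frame rule

$$\frac{\{P\}\;C\;\{Q\}}{\{P * R\}\;C\;\{Q * R\}} \qquad (C \text{ does not modify any variable free in } R)$$

lets a specification proved for just the cells a command touches, for instance $\{x \mapsto -\}\ [x] := 7\ \{x \mapsto 7\}$, be reused unchanged inside any larger heap. This locality is what makes reasoning about shared mutable data structures tractable.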
PREREQUISITES: 15-819A3 or equivalent, or permission of instructor.
TEXT: Notes and papers will be distributed.
METHOD OF EVALUATION: Grading will be based on homework and final exam.
Tentative Bibliography
1. Reynolds, John C., Intuitionistic Reasoning about Shared Mutable Data Structure. Millennial Perspectives in Computer Science, Palgrave, 2000, pp. 303-321.
2. Ishtiaq, Samin and O'Hearn, Peter W., BI as an Assertion Language for Mutable Data Structures. POPL 28, January 2001, pp. 14-26.
3. Yang, Hongseok, An Example of Local Reasoning in BI Pointer Logic: The Schorr-Waite Graph Marking Algorithm. SPACE 2001, January 2001, pp. 41-68.
4. O'Hearn, Peter W. and Reynolds, John C. and Yang, Hongseok, Local Reasoning about Programs that Alter Data Structures. CSL 2001, LNCS 2142, pp. 1-19.
5. Calcagno, Cristiano and Yang, Hongseok and O'Hearn, Peter W., Computability and Complexity Results for a Spatial Assertion Language for Data Structures. FST TCS 2001, LNCS 2245, pp. 108-119.
6. Yang, Hongseok and O'Hearn, Peter W., A Semantic Basis for Local Reasoning. FOSSACS 2002, LNCS 2303, pp. 402-416.
7. Reynolds, John C., Separation Logic: A Logic for Shared Mutable Data Structures. LICS 17, July 2002.
last updated November 19, 2003 | {"url":"http://www.cs.cmu.edu/~jcr/cs819A4-03.html","timestamp":"2014-04-23T07:51:07Z","content_type":null,"content_length":"2372","record_id":"<urn:uuid:37366aa3-2d79-4d90-97ca-a50d3d8b65f7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dynamic load-balancing for parallel adaptive unstructured meshes, Parallel processing for scientific computing
Results 1 - 10 of 18
- JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING , 1997
"... For a large class of irregular mesh applications, the structure of the mesh changes from one phase of the computation to the next. Eventually, as the mesh evolves, the adapted mesh has to be
repartitioned to ensure good load balance. If this new graph is partitioned from scratch, it may lead to an ..."
Cited by 65 (7 self)
For a large class of irregular mesh applications, the structure of the mesh changes from one phase of the computation to the next. Eventually, as the mesh evolves, the adapted mesh has to be
repartitioned to ensure good load balance. If this new graph is partitioned from scratch, it may lead to an excessive migration of data among processors. In this paper, we present schemes for
computing repartitionings of adaptively refined meshes that perform diffusion of
- Applications in VLSI design, ACM/IEEE Design Automation Conference , 1997
"... Traditional hypergraph partitioning algorithms compute a bisection a graph such that the number of hyperedges that are cut by the partitioning is minimized and each partition has an equal number
of vertices. The task of minimizing the cut can be considered as the objective and the requirement that t ..."
Cited by 64 (2 self)
Traditional hypergraph partitioning algorithms compute a bisection a graph such that the number of hyperedges that are cut by the partitioning is minimized and each partition has an equal number of
vertices. The task of minimizing the cut can be considered as the objective and the requirement that the partitions will be of the same size can be considered as the constraint. In this paper we
extend the partitioning problem by incorporating an arbitrary number of balancing constraints. In our formulation, a vector of weights is assigned to each vertex, and the goal is to produce a
bisection such that the partitioning satisfies a balancing constraint associated with each weight, while attempting to minimize the cut. We present new multi-constraint hypergraph partitioning
algorithms that are based on the multilevel partitioning paradigm. We experimentally evaluate the effectiveness of our multi-constraint partitioners on a variety of synthetically generated problems.
, 1998
"... We design a general mathematical framework to analyze the properties of nearest neighbor balancing algorithms of the diffusion type. Within this framework we develop a new optimal polynomial
scheme (OPS) which we show to terminate within a finite number m of steps, where m only depends on the graph ..."
Cited by 46 (13 self)
We design a general mathematical framework to analyze the properties of nearest neighbor balancing algorithms of the diffusion type. Within this framework we develop a new optimal polynomial scheme
(OPS) which we show to terminate within a finite number m of steps, where m only depends on the graph and not on the initial load distribution. We show that all existing diffusion load balancing
algorithms, including OPS, determine a flow of load on the edges of the graph which is uniquely defined, independent of the method and minimal in the l2-norm. This result can be extended to edge
weighted graphs. The l2-minimality is achieved only if a diffusion algorithm is used as preprocessing and the real movement of load is performed in a second step. Thus, it is advisable to split the
balancing process into the two steps of first determining a balancing flow and afterwards moving the load. We introduce the problem of scheduling a flow and present some first results on its
complexity and the ...
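The nearest-neighbour diffusion idea underlying several of these papers is simple enough to sketch. The toy C program below is my own illustration, a ring of four nodes with a hand-picked step size, and is not code from any of the cited works: each sweep of the classical first-order scheme moves a fixed fraction of the load difference across every edge, the node loads converge to the average, and the flow that accumulates on each edge is the balancing flow the abstract above is talking about.

#include <stdio.h>

#define N 4   /* four processors arranged in a ring (made-up example) */

int main(void)
{
    double load[N] = {10.0, 2.0, 4.0, 8.0};  /* hypothetical initial work load per node */
    double flow[N] = {0.0};                   /* flow[i] = net load sent over the edge (i, i+1) */
    double alpha = 0.25;                      /* diffusion parameter, small enough to converge here */

    for (int t = 0; t < 50; t++) {            /* repeat the first-order diffusion sweep */
        double delta[N] = {0.0};
        for (int i = 0; i < N; i++) {
            int j = (i + 1) % N;              /* the ring edge between node i and node j */
            double f = alpha * (load[i] - load[j]);
            delta[i] -= f;                    /* the sender loses load ... */
            delta[j] += f;                    /* ... and the receiver gains it */
            flow[i] += f;                     /* accumulate the balancing flow on this edge */
        }
        for (int i = 0; i < N; i++)
            load[i] += delta[i];
    }
    for (int i = 0; i < N; i++)
        printf("node %d: load %.3f, flow to node %d: %.3f\n", i, load[i], (i + 1) % N, flow[i]);
    return 0;
}

After the sweeps every node holds the average load of 6.0, and the per-edge flows printed at the end are the quantity whose uniqueness and l2-minimality the abstract above discusses.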
- In ICDE , 2005
"... Distributed and parallel computing environments are becoming cheap and commonplace. The availability of large numbers of CPU’s makes it possible to process more data at higher speeds.
Stream-processing systems are also becoming more important, as broad classes of applications require results in real ..."
Cited by 46 (5 self)
Distributed and parallel computing environments are becoming cheap and commonplace. The availability of large numbers of CPU’s makes it possible to process more data at higher speeds.
Stream-processing systems are also becoming more important, as broad classes of applications require results in real-time. Since load can vary in unpredictable ways, exploiting the abundant processor
cycles requires effective dynamic load distribution techniques. Although load distribution has been extensively studied for the traditional pull-based systems, it has not yet been fully studied in
the context of push-based continuous query processing. In this paper, we present a correlation based load distribution algorithm that aims at avoiding overload and minimizing end-to-end latency by
minimizing load variance and maximizing load correlation. While finding the optimal solution for such a problem is NP-hard, our greedy algorithm can find reasonable solutions in polynomial time. We
present both a global algorithm for initial load distribution and a pair-wise algorithm for dynamic load migration.
, 1997
"... A parallel method for the dynamic partitioning of unstructured meshes is described. The method introduces a new iterative optimisation technique known as relative gain optimisation which both
balances the workload and attempts to minimise the interprocessor communications overhead. Experiments on a ..."
Cited by 13 (4 self)
A parallel method for the dynamic partitioning of unstructured meshes is described. The method introduces a new iterative optimisation technique known as relative gain optimisation which both
balances the workload and attempts to minimise the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of an
equivalent or higher quality to static partitioners (which do not reuse the existing partition) and much more rapidly. Perhaps more importantly, the algorithm results in only a small fraction of the
amount of data migration compared to the static partitioners.
- Parallel Computing , 2000
"... In this paper we contrast the performance of three different parallel dynamic load-balancing algorithms when used in conjunction with a particular parallel, adaptive, time-dependent, 3-d flow
solver that has recently been developed at Leeds. An overview of this adaptive solver is given along with a ..."
Cited by 12 (8 self)
In this paper we contrast the performance of three different parallel dynamic load-balancing algorithms when used in conjunction with a particular parallel, adaptive, time-dependent, 3-d flow solver
that has recently been developed at Leeds. An overview of this adaptive solver is given along with a description of a new dynamic loadbalancing algorithm. The effectiveness of this algorithm is then
assessed when it is coupled with the solver to tackle a model 3-d flow problem in parallel. Two alternative parallel dynamic load-balancing algorithms are also described and tested on the same flow
problem. 1 Introduction The use of distributed memory parallel computers for the solution of large, complex computational mechanics problems has great potential for both significant increases in mesh
sizes and the significant reduction of solution times. For transient problems accuracy and efficiency constraints also require the use of mesh adaptation since solution features on different length
, 1999
"... . We discuss iterative nearest neighbor load balancing schemes on processor networks which are represented by a cartesian product of graphs like e.g. tori or hypercubes. By the use of the
AlternatingDirection Loadbalancing scheme, the number of load balance iterations decreases by a factor of 2 for ..."
Cited by 12 (5 self)
. We discuss iterative nearest neighbor load balancing schemes on processor networks which are represented by a cartesian product of graphs like e.g. tori or hypercubes. By the use of the
AlternatingDirection Loadbalancing scheme, the number of load balance iterations decreases by a factor of 2 for this type of graphs. The resulting flow is analyzed theoretically and it can be very
high for certain cases. Therefore, we furthermore present the Mixed-Direction scheme which needs the same number of iterations but results in a much smaller flow. Apart from that, we present a simple
optimal diffusion scheme for general graphs which calculates a minimal balancing flow in the l2 norm. The scheme is based on the spectrum of the graph representing the network and needs only m - 1 iterations in order to balance the load with m being the number of distinct eigenvalues. 1 Introduction We consider the load balancing problem in a synchronous, distributed processor network.
Each node of the ne...
, 2001
"... Geometric Multigrid methods have gained widespread acceptance for solving large systems of linear equations, especially for structured grids. One of the ..."
, 1998
"... ith adaptive meshing (as in local mesh refinement and coarsening) or adaptive re-meshing. However, as will be seen when considering the applications included within the DRAMA project, a need for
dynamic load balancing arises in applications with fixed meshes where computational and/or communications ..."
Cited by 3 (3 self)
ith adaptive meshing (as in local mesh refinement and coarsening) or adaptive re-meshing. However, as will be seen when considering the applications included within the DRAMA project, a need for
dynamic load balancing arises in applications with fixed meshes where computational and/or communications costs vary greatly as the simulation progresses. Major advances have been made in recent
years in the two areas which form the starting point for the project activities: the development of parallel mesh-partitioning algorithms suitable for dynamic repartitioning (re-allocation of
sub-meshes to processors at run-time); the migration and optimisation of industrial-strength simulation codes to HPC platforms using the message-passing paradigm. However, most industrial-strength
parallel simulations using large processor numbers are performed with static partitioning and non-adaptive meshing - or when adaptive meshing, then with a sequentialised repartitioning phase which
greatly reduces the para
, 1997
"... Graph partitioning has been shown to be an effective way to divide a large computation over an arbitrary number of processors. A good partitioning can ensure load balance and minimize the
communication overhead of the computation by partitioning an irregular mesh into p equal parts while minimizin ..."
Cited by 3 (0 self)
Graph partitioning has been shown to be an effective way to divide a large computation over an arbitrary number of processors. A good partitioning can ensure load balance and minimize the
communication overhead of the computation by partitioning an irregular mesh into p equal parts while minimizing the number of edges cut by the partition. For a large class of irregular mesh
applications, the structure of the graph changes from one phase of the computation to the next. Eventually, as the graph evolves, the adapted mesh has to be repartitioned to ensure good load balance.
Failure to do so will lead to higher parallel run time. This repartitioning needs to maintain a low edge-cut in order to minimize communication overhead in the follow-on computation. It also needs to
minimize the time for physically migrating data from one processor to another since this time can dominate overall run time. Finally, it must be fast and scalable since it may be necessary to
repartition frequently... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3554414","timestamp":"2014-04-21T08:04:52Z","content_type":null,"content_length":"39009","record_id":"<urn:uuid:2980c477-5384-41f4-822a-115d9db59ddf>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
OR THis 1
This is a discussion on OR THis 1 within the C Programming forums, part of the General Programming Boards category; write a program in c to add, subtract, multiply and divide two complex numbers? z1
= a1+jb2 z2 = a2 ...
write a program in c to add, subtract, multiply and divide two complex numbers? z1 = a1+jb1, z2 = a2+jb2. express the result in angular form?
This sounds a lot like a homework problem (whether it originated from your "boyfriend" or your instructor). Please post the code you have to solve this problem so we can see what you understand and
what you are missing. Thanks.
i dont understand c at all!! that was the question 4my assignment??? its in tomoz and im stuck? pls help me
>i dont understand c at all!!
I guess then you don't know about structures. But it isn't too hard. A structure is a set of data. A structure can be defined this way:
Code:
struct complex_number {
    int real;
    int imaginary;
};
A structure can be used as a datatype.
Code:
struct complex_number complex;
Here the variable complex is of type struct complex_number. This complex_number structure can be used as follows:
Code:
complex.real = 2;
complex.imaginary = 3;
Now you have the complex number z=2+j3. I guess you now understand how you could use this in your calculations. Assume:
z = a + jb
p = m + jn
then w = z + p = (a + m) + j(b + n)
This can be implemented as:
Code:
struct complex_number z, p, w;
z.real = 2;
z.imaginary = 4;
p.real = 3;
p.imaginary = 6;
w.real = z.real + p.real;
w.imaginary = z.imaginary + p.imaginary;
Hope this helps something.
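To get the rest of the way to the assignment - multiplication, division and the angular (polar) form - something along these lines should work. This is only a sketch of how I would do it (note the members are double here instead of int, otherwise the division and the angle come out wrong; with gcc you also need -lm for the math library):
Code:
#include <stdio.h>
#include <math.h>

struct complex_number {
    double real;
    double imaginary;
};

/* (a + jb)(m + jn) = (am - bn) + j(an + bm) */
struct complex_number multiply(struct complex_number z, struct complex_number p)
{
    struct complex_number w;
    w.real = z.real * p.real - z.imaginary * p.imaginary;
    w.imaginary = z.real * p.imaginary + z.imaginary * p.real;
    return w;
}

/* (a + jb)/(m + jn): multiply top and bottom by the conjugate (m - jn) */
struct complex_number divide(struct complex_number z, struct complex_number p)
{
    struct complex_number w;
    double d = p.real * p.real + p.imaginary * p.imaginary;
    w.real = (z.real * p.real + z.imaginary * p.imaginary) / d;
    w.imaginary = (z.imaginary * p.real - z.real * p.imaginary) / d;
    return w;
}

/* angular form: magnitude and angle in degrees */
void print_angular(struct complex_number w)
{
    double pi = acos(-1.0);
    double magnitude = sqrt(w.real * w.real + w.imaginary * w.imaginary);
    double angle = atan2(w.imaginary, w.real) * 180.0 / pi;
    printf("%g at %g degrees\n", magnitude, angle);
}

int main(void)
{
    struct complex_number z = {2.0, 4.0};
    struct complex_number p = {3.0, 6.0};
    print_angular(multiply(z, p));
    print_angular(divide(z, p));
    return 0;
}
Addition and subtraction work exactly like the w.real/w.imaginary lines in the earlier post, just componentwise with + or -.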
struct complex_number z, p, w;
z.real = 2;
z.imaginary = 4;
p.real = 3;
p.imaginary = 6;
w.real = z.real + p.real;
w.imaginary = z.imaginary + p.imaginary;
I've got the answer for this in Java - I'll post it when I get home from work and can FTP into the school. | {"url":"http://cboard.cprogramming.com/c-programming/8771-1.html","timestamp":"2014-04-24T21:26:06Z","content_type":null,"content_length":"48750","record_id":"<urn:uuid:36d7ae20-4c52-4e1a-9613-d616d8902bfc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Home Education Magazine: One of the oldest and most informative homeschooling magazines.
Homeschool Information Library
Rethinking Midschool/High School Math
From the Older Kids column, by Cafi Cohen, originally published in the January-February 1997 issue of Home Education Magazine.
M-A-T-H. What thoughts come to mind with the word MATH? One of the three R's. A government school "required subject," according to many state statutes and some local regulations. An essential topic
in any homeschool.
Past that point, what official guidance are you given for teaching math? Often, none. What you cover and how you cover it is up to you. In the absence of specific directions, many homeschooling
families pursue what I call School Math. School Math is how most parents studied math when they attended school; and it is how most schools still teach math.
School Math involves textbooks, workbooks, and exams. Older kids pursuing School Math study the subject sequentially -- in other words, arithmetic, followed by algebra, then geometry, second year
algebra, trigonometry, and so on. Those using a School Math program must always "show all the steps" and reason exactly like the author of the text reasons. Alternate approaches to problem solving
are unacceptable. Getting the right answer is emphasized, often to the exclusion of understanding the process.
School Math texts often include unrealistic problems. Typical is the following. A 20-ft. telephone pole falls across a street and the two ends extend one foot and three feet either side of the
street. How wide is the street?
That's it. A real problem from a real math text. It makes me wonder. When would a person ever measure a street this way? And why? Ambiguous situations are a second deficiency illustrated here. How do
we know the telephone pole is perpendicular to the street? Obviously, the "correct" answer would vary with the angle between the pole and the street.
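(To make the objection concrete: the intended textbook answer is presumably 20 - 1 - 3 = 16 feet, which is right only if the pole happens to lie perpendicular to the street; if it crosses at a slant, those same 16 feet of pole span a street narrower than 16 feet, by a factor of the cosine of the slant angle. The arithmetic is mine, not the textbook's.)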
A steady diet of School Math also often means that math applications (using math in real life situations) are accorded supplementary status. Kids whose math experience is dependent on texts do real
life math seldom - if and only if they complete an exhausting set of math text problems.
Given all this, we homeschooling parents have to ask ourselves, is school math the right way to inculcate mathematical skills and processes? Let's look at the evidence. With respect to formal math
instruction, 80+% of high school graduates in the United States get no further than consumer math, general math, and introductory algebra (this ignores the 30% of students nationwide who don't
graduate at all).
Of those students who do graduate, a majority need cash register picture keys to add up purchases and make change. Sometimes, even the best math students, those who take the advanced math courses,
forget how to accomplish basic tasks like calculating a percentage or cutting a recipe in half. Scores on the math portion of the Scholastic Aptitude Test (a college entrance exam) continue to decline.
When I speak to groups of adults, I sometimes ask how many fear or dislike math and how many consider themselves math-phobic or math illiterate. Consistently 1/3-1/2 of any adult audience identifies
themselves as having a strained relationship the topic.
So, given all this - the relatively low level of formal math course completion, the lack of math application skills, poor standardized test performance, and pervasive math anxiety in the general
population - we home educators have to ask ourselves: do we want to copy a method that yields these results? Do we gain anything by patterning our homeschooling after the programs of the educational establishment?
I don't think so. Emulating a failing system is not the way to go. Fortunately, as homeschooling families, we have the latitude to try something else.
We homeschooled two kids from grades six and seven through grade twelve and eventually enjoyed success with the math "program" we developed. We began as many of you probably did: with textbooks and
workbooks and tests and grades; with lessons and problems every day; with pencils and papers and rulers and protractors; and almost always, with serious, sometimes grim looks on our faces.
As we progressed through early middle school math (prealgebra and then algebra), we spent less time with the texts and more time on topics most schools consider supplementary. These included:
* Consumer math and everyday accounting
* Math applications (also called "hands-on" or real life math)
* Mental math
* Recreational math (math games)
* Probability and statistics
* Calculator math
* Math history
And, as I observed my kids' reactions to these topics, we changed our emphasis, and My Ideal Math Program evolved. My Ideal Math Program consists of equal proportions of textbook math and alternative
math activities (those "supplementary" topics listed above). Our kids spent approximately half their time with instructional math program materials with which we are all so familiar (we used Saxon
Math, but any program which your teenager thinks is straightforward and clear is acceptable). And fully half of our kids' math time was devoted to the other topics listed above.
We found that alternating the emphasis weekly - one week program math, one week alternative math - worked out well for us. You may find that daily or monthly or even randomly timed alterations fit
your situation better.
An example of this approach for a seventh grader follows. On week one, the kid completes one lesson daily of Saxon Math 76. On week two, she works on a 4-H Project which incorporates consumer and
hands-on math. On week three, she returns to Saxon Math. On week four, she works with mom on mental math techniques. On week 5, it's back to Saxon. On week 6, we play math games each day.
How will your kid complete his program text if he's only doing it half the time? A good question. What we found was that our kids did not need to do every problem in the text. Because of time spent
with alternative materials, our kids grasped and mastered text concepts and operations more readily. With a program like Saxon, I would recommend that kids try reading all the lessons, but only doing
every other problem set.
What about a tenth grader using Key Curriculum Press materials as the formal math program? On week one, he works 30-45 minutes daily on Key To Geometry. Week two, he plays games taken from the book
Family Math each day. Week three, he is once again working on the Key To - book. Week four, he works on the accounting for his lawn-mowing business and plays with the techniques in a calculator math
book. And so on.
I am recommending an approach that integrates textbook math with topics like hands-on math and recreational math and puts them on an equal footing. Some regard this as radical thinking. Why should
you consider it? Here are five reasons.
First, an integrated approach increases the likelihood your kid will cover math topics in which your program may be weak. No one text is perfect; no program complete, regardless of the advertising.
Your program may do a superb job explaining concepts but lack good drill material. Or it may be drill-heavy and come up short in problem solving or hands-on activities. As an example of a deficiency
of some higher level math programs, your Algebra II or Trigonometry text may omit statistics and probability.
Point two. Alternative math materials and activities help students relate math to The Real World and acquire skills they will use throughout their lives. Kids learn to estimate gallons of paint
needed for a room by running through the calculations and then trying them out (in this case, painting the room). They master budgeting and balancing checkbooks by helping you juggle (or even taking
responsibility for) the family finances. They will never forget how to obtain price per unit after they have stood in a store aisle with a calculator and crunched the numbers several times.
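(A typical run-through of the paint example, with made-up numbers rather than anything from the article: a 10-by-12-foot room with 8-foot ceilings has roughly 2 x (10 + 12) x 8 = 352 square feet of wall; at a nominal coverage of 350 square feet per gallon, that works out to about one gallon per coat, so the estimate becomes two gallons for two coats, and then the guess gets checked against the actual job.)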
Third, alternative activities and materials encourage a problem- solving, can-do attitude. That is, after all, the goal of all the school math programs: teaching kids to solve real world problems.
Unfortunately, many school programs have gotten lost in their own didacticism. We don't have to.
Fourth, an integrated approach may be the best way to reach the math phobic teenager and the kinesthetic (hands-on) learner. The alternative materials may be the only things that gets through to the
math-phobic kid. And the kinesthetic learner will probably understand his text material only after he has had the chance to "play" with hands-on math materials.
Finally, incorporating hands-on and real life and recreational math materials in your program can help you and your student feel more relaxed about math. Math games are fun. Homeschooling catalogs
carry many recreational math games, but often you need not look any further than your hall closet. Monopoly, for example, provides excellent practice and review of the four basic arithmetic
operations, lots of mental math, as well as fractions, decimals, and percents. Frequent Monopoly players retain "easy" math skills. They do not forget how to calculate 10% mentally just because they
happen to be studying geometry this year.
© 1997 Cafi Cohen
....(HEM's Information Library Index | Older Homeschoolers Index).... | {"url":"http://homeedmag.com/INF/OH/oh_cc.math.html","timestamp":"2014-04-16T16:17:49Z","content_type":null,"content_length":"15278","record_id":"<urn:uuid:ca244246-7d2b-4494-beaf-ff1a19839f52>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parking functions to non-crossing partitions
Is there a "local" algorithm which takes as its input a parking function and returns the non-crossing partition labelled by that sequence?
Background: A parking function is a sequence of positive integers which is a permutation of a sequence $a_1\leq \ldots \leq a_n$ satisfying $a_k \leq k$ for all $k$. In a 1996 article, Richard
Stanley showed that there is a bijection between the set of parking functions and the set of maximal chains in the lattice of non-crossing partitions. He gives an algorithm that injectively assigns
to each maximal chain a parking function, and then relies on the fact that the two sets have the same cardinality to deduce surjectivity. I'm asking whether in the meantime someone has found an
algorithm that produces the inverse function.
By "local" I mean one that does not have as its first step to label all the maximal chains of non-crossing partitions.
I'd love to get a complete answer to that question as well. During my flight in the morning, I figured a way to construct this bijection, but it is not completely "local" in your sense. For a
primitive parking function (i.e., one that is weakly increasing) there is a fairly standard way to produce the corresponding maximal chain in the noncrossing partition lattice. And for
non-primitive parking functions, there is a way to reorganize the maximal chain of the corresponding primitive rearrangement. If you are interested, I will write a full answer containing the
construction. – Christian Stump Apr 15 '12 at 9:25
2 Answers
I still hope there is a complete (and easy) answer to the question, but as mentioned in my comment above, and since no one else answered so far, I give a description of the inverse map
that is not completely local, but still "better" than first labelling all chains in the noncrossing partition lattice. (I just worked it out briefly, so I am not saying this is the best
way of doing it, but it is easy enough to be worked out in 30 min.)
A parking function is called primitive if it is weakly increasing. Every parking function can be obtained from a primitive parking function by a series of transpositions of consecutive
numbers. E.g.
$$112355 \mapsto 121355 \mapsto 211355 \mapsto 213155 \mapsto 213515 \mapsto 213551.$$
We do not yet have seen here a definition of noncrossing partitions and maximal chains in the noncrossing partition lattice. So observe that a chain in the noncrossing partition lattice
can be seen as factorizations of the long cycle $(1,2,\ldots,n) \in \mathcal{S}_n$ into a product of $n$ transpositions $(ij)$. The map from these factorizations to parking functions
mentioned above is then the map $\phi$ sending $c = (i_1j_1) \ldots (i_nj_n)$ to the sequence $(i_1,\ldots,i_n)$. In Stanley's article, it is shown that $\phi(factorization)$ is indeed a
parking function, and that this map is a bijection. To obtain $\psi := \phi^{-1}$ we must therefore start with a sequence $(i_1,\ldots,i_n)$ and turn it into a factorization $c = (i_1j_1)
\ldots (i_nj_n)$.
We do this in two steps: first, we define $\psi$ for primitive parking functions, and then obtain $\psi$ for general parking functions by a sequence of transpositions of factors $(i_kj_k)
(i_{k+1}j_{k+1}) \mapsto (i_{k+1}j_{k+1})(i_k\tilde j_k)$, such that $$(i_kj_k)(i_{k+1}j_{k+1}) = (i_{k+1}j_{k+1})(i_k\tilde j_k).$$ Observe that this uniquely determines $\tilde j_k$,
and that this is not a valid procedure for any two transpositions. Nonetheless, this works for sequences coming from parking functions.
Let $seq = (i_1,\ldots,i_n)$ be a primitive parking function. $\psi(seq)$ is then given by sending the last $1$ in $seq$ to the transposition $(12)$. Afterwards, send the last $i \in 1,2$
to $(i,3)$, then the last $i \in 1,2,3$ to $(i,4)$ and so on. For example, replacing letters in $112355$ in this way gives $$ \begin{align*} 112355 &\mapsto 1\ (12)\ 2355 \mapsto 1\ (12)
(23)\ 355 \mapsto 1\ (12)(23)(34)\ 55 \newline &\mapsto (15)(12)(23)(34)\ 55 \mapsto (15)(12)(23)(34)\ 5\ (56) \newline &\mapsto (15)(12)(23)(34)(57)(56) \end{align*} $$ It is
straightforward to check that this yields indeed a factorization of the long cycle, and by construction, we have $\phi(\ (15)(12)(23)(34)(57)(56)\ ) = 112355$, as desired.
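To make the rule just described concrete, here is a short Python sketch (mine, not part of the original answer) of $\psi$ on primitive parking functions. The final check multiplies the factors with the rightmost transposition applied first; that composition convention is my assumption, chosen so that the product comes out as the long cycle in the example above.

def psi_primitive(seq):
    # seq is a weakly increasing parking function, e.g. (1, 1, 2, 3, 5, 5)
    n = len(seq)
    factors = [None] * n
    free = set(range(n))
    for k in range(2, n + 2):
        pos = max(p for p in free if seq[p] < k)   # last remaining letter in {1, ..., k-1}
        factors[pos] = (seq[pos], k)
        free.remove(pos)
    return factors

def image(factors, x):
    # apply the product t_1 t_2 ... t_n to x, rightmost transposition first (assumed convention)
    for i, j in reversed(factors):
        if x == i:
            x = j
        elif x == j:
            x = i
    return x

f = psi_primitive((1, 1, 2, 3, 5, 5))
print(f)                                    # [(1, 5), (1, 2), (2, 3), (3, 4), (5, 7), (5, 6)]
print([image(f, x) for x in range(1, 8)])   # [2, 3, 4, 5, 6, 7, 1], i.e. the long cycle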
Now, to obtain the parking function $213551$, we first interchange positions $2$ and $3$, and interchange the factors such that $(i_2j_2)(i_3j_3) = (i_3j_3)(i_2\tilde j_2)$ so we obtain
$$ \psi( 121355 ) = (15)(23)(13)(34)(57)(56).$$ Here, the third factor became the second, and the second factor $(12)$ turns into $(13)$. We keep going with this procedure and obtain $$ \
begin{align*} \psi(112355) &= (15)(12)(23)(34)(57)(56) \newline \psi(121355) &= (15)(23)(13)(34)(57)(56) \newline \psi(211355) &= (23)(15)(13)(34)(57)(56) \newline \psi(213155) &= (23)
(15)(34)(14)(57)(56) \newline \psi(213515) &= (23)(15)(34)(57)(14)(56) \newline \psi(213551) &= (23)(15)(34)(57)(56)(14). \end{align*} $$ This completes the construction. I didn't present
that proof that both steps in the procedure actually work, but they should in fact do.
If someone sees a direct way of computing $\psi(213551)$, please post it! And if I should clarify anything, or if someone finds mistakes, please let me know (I haven't yet gone
though all details to prove that this procedure works, and I would only do it if this is not yet known, and people are interested).
Best, Christian
Thanks for the answer, Christian. I was trying to write a program to do some homology computations. In the meantime I found a mathematica package called posets.m, by Curtis Greene and
students, that generates the non-crossing partition lattice in a convenient form, from which one can, with some work, generate all maximal chains and their labels. In case that info
might be useful to someone ... The algorithm for primitive parking functions I also found on the web, in the thesis of A. Rattan (Waterloo). And I guess ${\tilde j}_2$ can be found by
the usual conjugation routine in the $S_n$. – Michael Falk May 9 '12 at 15:39
You can also do all these computations in Sage, where I implemented the noncrossing partition lattice. If you are interested, I will prepare a worksheet that does compute the
noncrossing partition lattice and make it publicly available at sage.lacim.uqam.ca. It basically looks like W = CoxeterGroup(['A']) NC = W.noncrossing_partition_lattice() chains =
NC.maximal_chains() + some little work for getting the edge labels right. – Christian Stump May 10 '12 at 5:20
That would be great - all I can do with Mathematica is type A. – Michael Falk May 14 '12 at 20:17
For completeness, I also post the public worksheet here: sage.lacim.uqam.ca/home/pub/14 – Christian Stump Jun 12 '12 at 6:58
P. Biane. Parking functions of types A and B Electron. J. Combin. 9 (2002), no. 1, Note 7, 5 pp.
Not the answer you're looking for? Browse other questions tagged co.combinatorics or ask your own question. | {"url":"http://mathoverflow.net/questions/94025/parking-functions-to-non-crossing-partitions","timestamp":"2014-04-19T07:09:46Z","content_type":null,"content_length":"63726","record_id":"<urn:uuid:a8d9cbbd-6ff7-4e36-876a-e647752e0c94>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hi mathematics7;
You want a basis using Hermite polynomials but there are at least two types of them ( maybe more, I do not know.) Which one do you want?
Also, as far as I know they are used for only one type of integral because they are orthogonal to the exponential function in it.
For this they use the physicists version. You seem to think that they can be used for the general f(x). Why do you think that?
The error term is given as:
where n is the number of points and epsilon is some point in the interval [a,b] that maximizes the error. There is also another type of error estimate that is similar to the Euler-Maclaurin summation formula.
If you want a specific example I can run of the weights and abscissas ( zeros ). | {"url":"http://www.mathisfunforum.com/post.php?tid=19176&qid=260767","timestamp":"2014-04-17T06:57:50Z","content_type":null,"content_length":"17088","record_id":"<urn:uuid:3988ab72-f677-4f77-82ad-c7823aedc995>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
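As a concrete illustration of the weights and abscissas mentioned in the post above, NumPy can produce the Gauss-Hermite rule directly (this snippet is an added example, not part of the original thread; it uses the physicists' Hermite polynomials):

import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(5)      # 5-point rule: abscissas (zeros of H_5) and weights
print(x)
print(w)
# check against a known integral: integral of x^4 * exp(-x^2) over the real line = 3*sqrt(pi)/4
print(np.sum(w * x**4), 3 * np.sqrt(np.pi) / 4)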
New Infinite Families of $3$-Designs from Algebraic Curves of Higher Genus over Finite Fields
In this paper, we give a simple method for computing the stabilizer subgroup of $D(f)=\{\alpha \in {\Bbb F}_q \mid \text{ there is a } \beta \in {\Bbb F}_q^{\times} \text{ such that }\beta^n=f(\alpha)\}$
in $PSL_2({\Bbb F}_q)$, where $q$ is a large odd prime power, $n$ is a positive integer dividing $q-1$ greater than $1$, and $f(x) \in {\Bbb F}_q[x]$. As an application, we construct new
infinite families of $3$-designs.
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v14i1n25/0","timestamp":"2014-04-17T18:24:22Z","content_type":null,"content_length":"14506","record_id":"<urn:uuid:2d40d1de-b886-4589-a576-5d2abf82c296>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mission Viejo Precalculus Tutor
...My primary background is in chemistry but I have strong secondary backgrounds in mathematics and biology. I have an artistic soul and am comfortable using unusual teaching techniques to inspire
students. I am an eloquent scientific communicator and feel most effective in one-on-one or small gro...
6 Subjects: including precalculus, chemistry, calculus, algebra 2
Physics makes the world go 'round (literally)! My name is Thomas and I'm a recent Physics M.S. graduate from the University of California, Irvine. I have 9 quarters (3 full years' worth) of
teaching experience at UC Irvine, which includes leading guided group problem-solving discussions, one-on-on...
20 Subjects: including precalculus, physics, calculus, algebra 1
...Having outstanding grades all throughout high school and graduating with a 3.9 GPA, I believe I am very qualified to tutor in the subjects noted. As a young female, I believe I can connect to
my students in ways that other people cannot. I can develop a friendship and maintain a professional position at the same time.
19 Subjects: including precalculus, Spanish, geometry, biology
...Since entering UCI, I have worked as a physics teaching assistant for a cumulative 7 academic quarters (over 2 years of coursework). During this time, I also worked as a private tutor for
students who would request extra tutoring from the physics department. Philosophically, I dislike teaching m...
6 Subjects: including precalculus, calculus, physics, trigonometry
...I help them achieve high scores on their tests and quizzes. They went from a C to an A in a matter of weeks. I also tutor elementary and middle school kids for NHS (National Honor Society). I
have won the math competition in 8th grade and I was chosen by my school to represent as a spelling bee competitor; however, I was unable to go due to a wrestling tournament.
23 Subjects: including precalculus, reading, geometry, algebra 1 | {"url":"http://www.purplemath.com/Mission_Viejo_Precalculus_tutors.php","timestamp":"2014-04-16T07:46:26Z","content_type":null,"content_length":"24451","record_id":"<urn:uuid:934faf3b-884b-4cb0-a99f-4fd632e1db41>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Matrix Tree Theorem for Weighted Graphs.
I am interested in the general form of the Kirchhoff Matrix Tree Theorem for weighted graphs, and in particular what interesting weightings one can choose.
Let $G = (V,E, \omega)$ be a weighted graph where $\omega: E \rightarrow K$, for a given field K; I assume that the graph is without loops.
For any spanning tree $T \subseteq G$ the weight of the tree is given to be, $$m(T) = \prod_{e \in T}\omega(e)$$ and the tree polynomial (or statistical sum) of the graph is given to be the sum over
all spanning trees in G, $$P(G) = \sum_{T \subseteq G}m(T)$$ The combinatorial laplacian of the graph G is given by $L_G$, where:
$$L_G = \begin{pmatrix} -\sum_{k = 1}^n\omega(e_{1k}) & \omega(e_{12}) & \cdots & \omega(e_{1n}) \\\ \omega(e_{12}) & -\sum_{k = 1}^n\omega(e_{2k}) & \cdots & \omega(e_{2n}) \\\ \vdots & \vdots & \ddots & \vdots \\\ \omega(e_{1n}) & \omega(e_{2n}) & \cdots & -\sum_{k = 1}^n\omega(e_{nk}) \end{pmatrix} $$
where $e_{ik}$ is the edge between vertices $i$ and $k$, if no edge exists then the entry is 0 (this is the same as considering the complete graph on n vertices with an extended weighting function
that gives weight 0 to any edge not in G). The matrix tree theorem then says that the tree polynomial is equal to the absolute value of any cofactor of the matrix. That is,
$$P(G) = \det(L_G(s|t))$$
where $A(s|t)$ denotes the matrix obtained by deleting row $s$ and column $t$ from a matrix A.
By choosing different weightings one would expect to find interesting properties of a graph G. Two simple applications are to give the weighting of all 1's. Then the theorem allows us to count the
number of spanning trees with ease (this yields the standard statement of the Matrix Tree Theorem for graphs). Alternatively, by giving every edge a distinct formal symbol as its label then by
computing the relevant determinant, the sum obtained can be read as a list of all the spanning trees.
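A small numeric sanity check of the statement above (added here as an illustration; the graph and the weights are made up): build the matrix with the sign convention used in the question, take a cofactor, and compare its absolute value with a brute-force sum over spanning trees.

from itertools import combinations
import numpy as np

edges = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 5.0}   # a weighted triangle
n = 3

# Laplacian as written above: +w off the diagonal, minus the incident weight sum on it
L = np.zeros((n, n))
for (i, j), w in edges.items():
    L[i, j] += w
    L[j, i] += w
    L[i, i] -= w
    L[j, j] -= w

cofactor = np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1))

def is_spanning_tree(tree):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for i, j in tree:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False                            # the chosen edges close a cycle
        parent[ri] = rj
    return len({find(v) for v in range(n)}) == 1    # connected

brute = sum(np.prod([edges[e] for e in tree])
            for tree in combinations(edges, n - 1) if is_spanning_tree(tree))

print(abs(cofactor), brute)   # both equal 2*3 + 3*5 + 2*5 = 31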
My question is whether there are other interesting weightings that can be used to derive other interesting properties of graphs, or for applied problems.
co.combinatorics graph-theory
2 Answers
A very interesting weighting is obtained by just working with directed multigraphs (dimgraphs). 7 or 8 years ago, I applied the matrix-tree theorem to dimgraphs in conjunction
with the BEST theorem to provide a structure theory for the equilibrium thermodynamics of hybridization for oligomeric (short) DNA strands.
Briefly, the SantaLucia model of DNA hybridization takes a finite word $w$ on four letters (A, C, G, T) and associates to it various thermodynamical characteristics (e.g., free energy $
\Delta G$ of hybridization) based on the sequence. Assuming the words are cyclic (which is not fair, but also not a very bad approximation practically), one has $\Delta G (w) = \sum_k \
Delta G (w_kw_{k+1})$ where the index $k$ is cyclic and the 16 parameters $\Delta G(AA), \dots, \Delta G(TT)$ are given.
Assuming for convenience that the $\Delta G(\cdot,\cdot)$ are independent over $\mathbb{Q}$, it is not hard to see that $\Delta G$ projects from the space of all words of length $N$ to
the space of dimgraphs on 4 vertices (again, A, C, G, T) with $N$ edges, where (e.g.) an edge from A to G corresponds to the subsequence AG. Using matrix-tree and BEST provides a
functional expression for the number of words of length $N$ with a given number of AA's, AC's, ... and TT's, and thus for the desired $\Delta G$.
Going a step further, one can use generalized Euler-Maclaurin identities for evaluating sums of functions over lattice polytopes to characterize the space of all (cyclic) words of
length $N$ with $\Delta G$ lying in a narrow range. This effectively completes the structure theory and shows how one can construct DNA sequences having desired thermodynamical and
combinatorial properties. Among other things, I used this to design a protocol for (as David Harlan Wood put it) "simulating simulated annealing by annealing DNA".
PS. This can be thought of more abstractly as a way of exactly computing the partition function for what I would call "quantized-bond Potts models". A generalization of the
matrix-tree theorem to hypergraphs could allow one to enumerate de Bruijn tori and conceivably solve the two-dimensional Ising model with applied field. Of course, these all turn out
to be rather resistant to attack. – Steve Huntsman Jan 2 '10 at 18:25
For signed graphs we have an interesting matrix-tree theorem. A signed graph is a graph with the additional structure of edge signs (weights) being either +1 or -1. We say that a cycle in
a signed graph is balanced if the product of the edges in the cycle is +1. A signed graph is balanced if all of its cycles are balanced. Otherise, we say a signed graph is unbalanced.
We say a subgraph $H$ of a connected signed graph $G$ is an essential spanning tree of $\Gamma$ if either
1) $\Gamma$ is balanced and $H$ is a spanning tree of $G$, or
2) $\Gamma$ is unbalanced, $H$ is a spanning subgraph and every component of $H$ is a unicyclic graph with the unique cycle having sign -1.
The matrix-tree theorem for signed graphs is stated as follows:
Let $G$ be a connected signed graph with $N$ vertices and let $b_k$ be the number of essential spanning subgraphs which contain $k$ negative cycles. Then
$$ \det(L(G))=\sum_{k=0}^n 4^k b_k.$$
For some references see:
T. Zaslavsky, Signed Graphs, Discrete Appl. Math, 4 (1982), 47-74.
S. Chaiken, A combinatorial proof of the all minors matrix tree theorem. SIAM J. Algebraic Discrete Methods, 3 (1982), 319-329.
Signed graphs are used in spin glass theory and network applications.
Considering mixed graphs, which are directed graphs that have some edges as undirected, we actually get the same theory. This is immediate since the matrix definitions are identical for
signed graphs and mixed graphs. This seems to escape many of the people studying matrix properties of mixed graphs however. See:
R. Bapat, J. Grossman and D. Kilkarni, Generalized Matrix Tree Theorem for Mixed Graphs, Lin. and Mult. Lin. Algebra, 46 (1999), 299-312.
Not the answer you're looking for? Browse other questions tagged co.combinatorics graph-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/10493/the-matrix-tree-theorem-for-weighted-graphs","timestamp":"2014-04-21T00:47:06Z","content_type":null,"content_length":"59774","record_id":"<urn:uuid:f69a2e65-8a3d-41d8-b1d4-7914a5f88ae6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: November 1998 [00134]
[Date Index] [Thread Index] [Author Index]
Re: using Upset for defining positive real values (Re: Can I get ComplexExpand to really work?)
• To: mathgroup at smc.vnet.net
• Subject: [mg14711] Re: using Upset for defining positive real values (Re: Can I get ComplexExpand to really work?)
• From: Rolf Mertig <rolf at mertig.com>
• Date: Tue, 10 Nov 1998 01:21:03 -0500
• Organization: Mertig Research & Consulting
• References: <720tjl$1vs@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Maarten.vanderBurgt at icos.be wrote:
> Hello,
> In functions like Solve and Simplify there is no option like the
> Assumptions option in Integrate.
> In a recent message ([mg14634]) Kevin McCann(?) suggested using Upset as
> an alternative to the Assumptions option in Integrate. I thought this
> might work as well for Solve, Simplify etc.
> In the example below I want A to be positive real number. I use Upset to
> give A the right properties.
> I was hoping Solve[A^2-1 == 0, A] would come up with the only possible
> solution given that A is a positive real: {A -> 1}. Same for
> Simplify[Sqrt[A^2]]: I would expect the result to be simply A (instead
> of Sqrt[A^2]) when A is set to be positive and real.
> Upset does not seem to work here.
> 1st question: why?
Because Simplify and Solve are obviously not written to recognize Upset
> 2nd question: is there a way you can introduce simple assumptions about
> variables in order to rule out some solutions or to reduce the number
> of solutions from functions like Solve, or to get a more simple answer
> from manipulation functions like Simplify.
> In[3]:= Solve[a^2-1 == 0, a]
> Out[4]= {{a -> -1},{a -> 1}}
> In[5] := Simplify[Sqrt[a^2]]
> Out[5]= Sqrt[a^2]
Some possibilities are:
In[1]:= PosSolve[eqs_, vars_] := Select[Solve[eqs, vars], Last[Last[#]] > 0 &]
In[2]:= PosSolve[a^2-1 == 0, a]
Out[2]= {{a -> 1}}
In[3]:= PowerExpand[Sqrt[a^2]]
Out[3]= a
Dr. Rolf Mertig
Mertig Research & Consulting
Mathematica training and programming Development and distribution of
FeynCalc Amsterdam, The Netherlands | {"url":"http://forums.wolfram.com/mathgroup/archive/1998/Nov/msg00134.html","timestamp":"2014-04-17T06:49:24Z","content_type":null,"content_length":"36434","record_id":"<urn:uuid:eb9e759d-2e89-4178-8b3d-d86f9392c938>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spectral Synthesis Noise for Creating Terrain
Everyone seems to like generating and rendering terrain. I got lots of email about my old fractal program as well as the article I wrote a couple years ago. The fractal program used the Mandelbrot
set to generate the height data, so it had pretty limited usefulness. The algorithm used in this article and program is much more flexible and can generate very realistic and usable terrain.
The code in this project is all from the final project in my CSCI580 class I took Fall 1999. The project was on generating ecosystems, but this article only deals with generating terrain. The
executable for the entire project is linked at the bottom of this page.
Spectral Synthesis & Procedural Functions
The spectral synthesis algorithm is taken directly out of Darwyn Peachey's chapter in
Texturing & Modeling: A Procedural Approach
by Ebert, Musgrave, Peachey, Perlin, and Worley. It's an exceptionally good book, I highly recommend it to anyone interested in learning about procedural techniques. The detailed description begins
on p82 and I also use the spline interpolation function on p31. If you can, read that chapter over as they probably explain it better than I do as I'm trying not to plagiarize too much.
Spectral Synthesis is a noise function - which means it doesn't fit any easily noticable pattern. Noise functions are extremely useful, almost anything in nature can be approximated by some
combination of pattern and noise. The trick is knowing how to combine them. Consider how powerful a relatively simple function like is outlined here can be. High end & photorealistic renderers use
many procedural functions for generating realistic textures and models. Pixar's Renderman rendering system is designed around taking advantage of the power of procedural functions. Besides being able
to generate lots of varying patterns, procedural textures hardly require any storage space because they can be generated at run time (or on the fly) instead of being stored as a 24-bit texture on
disk. Procedural textures have not been taken advantage of much in the game industry - except probably in generating textures in Adobe Photoshop, or another paint program. Now that video cards are
more powerful, games can produce incredibly detailed scenes, and procedural textures and models are going to be a requirement to generate the massive amounts of data necessary for a highly detailed
The Algorithm
Well, I hope I've given you some appreciation for how cool this stuff is - let's take a look at the algorithm.
Spectral synthesis is just slightly more complicated than throwing a bunch of random numbers into a 2d array. The difference is in how the 2d array is treated. Instead of just treating the 2d array
as the texture itself (aliased white noise), it smooths it (with a spline function in this case) to produce some sort of continuity in the data.
But that still wouldn't generate a very convincing or useful (except in some cases) texture. The real cool part comes in when you start adding multiple passes. In this way, spectral synthesis is a
sort of fractal function. It iterates a function with varying parameters over an array that accumulates the values. The varying parameters in this case are wavelength and amplitude. Here's an example
(forgive the low quality of the images, they're taken from the app, which wasn't made for looking at the 2d data).
I only used 4 passes for my project, but you can see how well it works at higher number of passes. Now you wouldn't get these results if you just kept accumulating smoothed random noise - the higher
frequency data would determine too much of the texture. What you want is for the lowest frequency pass to define the basic shape and further higher frequency passes to add detail. So there is an
added scale for each pass related to the pass number. The above images were generated with a scaling factor of 0.8 per pass, while I used 0.4 in my project.
/*
 * fracSynthPass(...)
 *
 * generate basic points
 * interpolate along spline & scale
 * add to existing hbuffer
 */
void fracSynthPass( float *hbuf, float freq, float zscale, int xres, int zres )
{
    int i;
    int x, z;
    float *val;
    int max;
    float dfx, dfz;
    float *zknots, *knots;
    float xk, zk;
    float *hlist;
    float *buf;

    // how many to generate (need 4 extra for smooth 2d spline interpolation)
    max = freq + 2;
    // delta x and z - pixels per spline segment
    dfx = xres / (freq-1);
    dfz = zres / (freq-1);
    // the generated values - to be equally spread across buf
    val = (float*)calloc( sizeof(float)*max*max, 1 );
    // intermediately calculated spline knots (for 2d)
    zknots = (float*)calloc( sizeof(float)*max, 1 );
    // horizontal lines through knots
    hlist = (float*)calloc( sizeof(float)*max*xres, 1 );
    // local buffer - to be added to hbuf
    buf = (float*)calloc( sizeof(float)*xres*zres, 1 );

    // start at -dfx, -dfz - generate knots
    for( z=0; z < max; z++ )
    {
        for( x=0; x < max; x++ )
            val[z*max+x] = SRANDOM;
    }

    // interpolate horizontal lines through knots
    for( i=0; i < max; i++ )
    {
        knots = &val[i*max];
        xk = 0;
        for( x=0; x < xres; x++ )
        {
            hlist[i*xres+x] = spline( xk/dfx, 4, knots );
            xk += 1;
            if( xk >= dfx )
                xk -= dfx;
        }
    }

    // interpolate all vertical lines
    for( x=0; x < xres; x++ )
    {
        zk = 0;
        knots = zknots;
        // build knot list
        for( i=0; i < max; i++ )
            knots[i] = hlist[i*xres+x];
        for( z=0; z < zres; z++ )
        {
            buf[z*xres+x] = spline( zk/dfz, 4, knots ) * zscale;
            zk += 1;
            if( zk >= dfz )
                zk -= dfz;
        }
    }

    // update hbuf
    for( z=0; z < zres; z++ )
    {
        for( x=0; x < xres; x++ )
            hbuf[z*xres+x] += buf[z*xres+x];
    }

    free( val );
    free( buf );
    free( hlist );
    free( zknots );
}
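For readers who want the gist of a pass without the C pointer bookkeeping, here is a compact Python sketch of the same accumulation idea. It is not the article's code: it uses simple cosine interpolation between the random knots instead of the spline, and the grid sizes and starting frequency are assumptions.

import math, random

def smooth_pass(res, freq, amplitude):
    # random knots on a (freq+1) x (freq+1) grid, interpolated up to res x res
    knots = [[random.uniform(-1.0, 1.0) for _ in range(freq + 1)] for _ in range(freq + 1)]
    buf = [[0.0] * res for _ in range(res)]
    for z in range(res):
        for x in range(res):
            fx, fz = x * freq / res, z * freq / res
            ix, iz = int(fx), int(fz)
            tx = (1 - math.cos(math.pi * (fx - ix))) / 2   # cosine ease between knots
            tz = (1 - math.cos(math.pi * (fz - iz))) / 2
            top = knots[iz][ix] * (1 - tx) + knots[iz][ix + 1] * tx
            bot = knots[iz + 1][ix] * (1 - tx) + knots[iz + 1][ix + 1] * tx
            buf[z][x] = amplitude * (top * (1 - tz) + bot * tz)
    return buf

def frac_synth(res, passes, scale_per_pass=0.4):
    height = [[0.0] * res for _ in range(res)]
    freq, amp = 2, 1.0
    for _ in range(passes):
        layer = smooth_pass(res, freq, amp)
        for z in range(res):
            for x in range(res):
                height[z][x] += layer[z][x]
        freq *= 2               # each pass adds finer detail ...
        amp *= scale_per_pass   # ... at a smaller amplitude, so it doesn't dominate
    return height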
Adding some water
What I used to add water is very simple. It interpolates a spline with 4 collinear nodes - two on two of the edges, and two outside of the area. It then offsets them by some random values to make the
river a little interesting. As the spline is interpolated, the height field is displaced - like shoveling out the dirt to make a bed for the river. It also stores data in an buffer I call the water
buffer. It uses this to determine if a point is underwater, and if it's not how much water is available for plants at the given point.
This is a more powerful way to generate terrain than my previous programs. But I also hope this article helped illustrate the power of procedural methods for generating textures and models. Remember,
I barely touched on the broad capabilities of procedural methods. Play around with it. | {"url":"http://www.gamedev.net/page/resources/_/technical/game-programming/spectral-synthesis-noise-for-creating-terrain-r900","timestamp":"2014-04-17T06:50:22Z","content_type":null,"content_length":"100461","record_id":"<urn:uuid:e2b6fa6c-41ce-4075-b236-2a2677d330b7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equidistribution Theorem: distance between solutions
Can someone please help me with the following problem.
Say we have a sequence $nx \; \mathrm{mod} \; 1$, where $n$ is a whole number and $x$ is irrational.
Now I need to solve the inequality $nx \; \mathrm{mod} \; 1 < v$ with respect to $n$, for some given small $v$.
According to Equidistribution Theorem, this sequence is uniformly distributed on (0,1). And thus we definitely know that there is an infinite number of those $n$'s. But what can we say about these
solutions themselves? In particular, I need to know if the distance between two consecutive solutions is bounded or not.
It seems to be true to me, but maybe I'm missing something... and can't figure out a hard proof.
inequalities diophantine-approximation analytic-number-theory nt.number-theory
Try applying the pigeonhole principle; you should be able to get what you are looking for. It's also called Dirichlet's Box Principle. – Vagabond Dec 6 '11 at 8:39
4 Answers
Here is a quick solution to your problem. Fix a positive integer $q$ such that $\{ qx \} <v $, and then fix a positive integer $s$ such that $1<s\{qx\}$. Now assume that $n$ is any
integer satisfying $\{nx\}<v$. Let $r$ be the smallest positive integer such that $1<\{nx\}+r\{qx\}$. Clearly, $r\leq s$. In addition, $\{nx\}+r\{qx\}<1+\{qx\}<1+v$, hence in fact
$\{(n+rq)x\}<v$. We showed that the gaps between the solutions of $\{nx\}<v$ are at most $sq$.
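A quick numeric illustration of this (added here, not part of the original answer): for a concrete irrational and a concrete v, the gaps between consecutive solutions take only a few distinct values and stay bounded.

import math

x, v = math.sqrt(2), 0.05
hits = [n for n in range(1, 20000) if (n * x) % 1.0 < v]
gaps = sorted({b - a for a, b in zip(hits, hits[1:])})
print(gaps)   # a short, bounded list of gap sizes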
Basically you don't need the Weyl's Equi. theorem, it's enough to use Kronecker's lemma about density.
If you want to use measure theory, then your question follows from any ergodic theorem you would like + the fact that the system is uniquely ergodic.
If you want, you can use the theory of continued fractions to explicitly compute such a return time rate.
In the Furstenberg terminology, you want to say that the topological dynamical system of the rotation by $x$ is uniformly recurrent.
If you add nx to itself about ~ 1/(nx) times, you will end up being roughly close to your original nx (this can be made more precise using continued fractions). It is even almost
periodic: you always miss the interval (0,v) by at most one rotation. Hence the set of return times of the orbit of $0$ under the rotation by $x$ map is a so-called syndetic set, meaning it
has bounded gaps.
Thank you very much for your answer! Kronecker's theorem seems very related and thanks for other useful keywords as well. I should go through it in detail to be sure, but it seems the
problem is solved. – Sasha Dec 6 '11 at 8:31
The sequence of fractional parts $\{ nx \}$ with $n \in \mathbb{Z}$ is dense in the reals. First we can divide $[0,1]$ into intervals $[0,1/n]\cup [1/n,2/n]\cup \dots \cup [1-1/n,1]$.
Two numbers are in the same interval $n\alpha, n\beta \in [k/n, k/n+1/n]$ for some $0 < a < b < n+2$ and we get $n(b-a) \in [0,1/n]$. We get evenly spaced values of $\{ n x\}$ for
arbitrarily small spacing.
This is more or less Kronecker Approximation Theorem with a hint of why Weyl Equidistribution might be true.
When is $0 < \{ nx \} < x$, for $0 < x < 1$? This happens when $n = \lfloor 1/x \rfloor$. Then $$ 0 < \{ nx \} = \lfloor 1/x \rfloor x - \lfloor \lfloor 1/x \rfloor x \rfloor < x $$ But
the floor-functions do simplify $ \lfloor \lfloor 1/x \rfloor x \rfloor = 1$. Dividing both sides by $x$, the rotation is rescaled: $$ 0 < \frac{\{ nx \}}{x } = \{ 1/x \} < 1 $$
This is the ``renormalization" of the Euclidean algorithm in dynamics/number theory.
Your question is a special case of the question that I answered in complete detail here, where I gave an explicit fast algorithm that can be used to compute the return times to any interval
. Your problem is essentially the case of $\alpha =0$, which is especially easy to compute. The integers you are asking for are very evenly spaced- a generalized arithmetic progression with
two alternating moduli. So yes, the distance between consecutive integers that do what you want is bounded.
@Alan, one should be a bit more careful playing with the Ostrowski expansion as you mentioned it (or the cutting sequence for the rotation, as dynamicist will call it). The cutting
sequence alternates between the sides of the zero in the torus, and therefore you get estimation for the absolute value of nx. If you would like to get only estimation about the return
times to the positive side, one should add one to every second element in the cutting sequence. – Asaf Dec 6 '11 at 19:56
@Asaf This requires no significant modification to the post that I already made. If you want a one sided approximation then you use the pieces of the Ostrowski expansion corresponding to
either the odd or even convergents. – Alan Haynes Dec 7 '11 at 8:17
Not the answer you're looking for? Browse other questions tagged inequalities diophantine-approximation analytic-number-theory nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/82776/equidistribution-theorem-distance-between-solutions?sort=newest","timestamp":"2014-04-16T07:48:06Z","content_type":null,"content_length":"68715","record_id":"<urn:uuid:4c6eab37-423e-4c75-a432-67f113892fce>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tennis Groundstroke Drills
This is yet another very good drill to learn the importance of momentum.
a) Two players play and the point starts either with underhand feed or with serve
b) They play to 21 and scoring goes like this: if player A wins the first point, he gets 1 point. If player A then wins the second point in a row, he gets 2 points so his total score is 3. His next
successive point is worth 3 points so the score is 6:0. If player B now wins the point he gets 1 point because that is his first successive point. If player A wins the next point he gets 1 point
because his previous succession of 3 points was broken by player B.
• The players learn that the more successive points they win, the more they are worth. In real tennis the scoring is different but the emotional perception of the player is very similar. If one
leads 5:1 and is caught by his opponent at 5:5, he feels as if he is losing. That's because successive points that you don't win make you feel very powerless. (At least that is true for most players.)
• Players also learn that they can get out of trouble faster. If one is behind 15:5 and he wins only 4 points in a row (1+2+3+4=10), he levels the score at 15:15. Again - in real tennis scoring
doesn't go that way, but in players' minds it is very similar. If one leads 5:1 and the opponent gets to 5:3, most players become more tense and anxious.
• This is the way to approach playing from behind. Even though the gap seems too big, like at 5:1, if the trailing player can win two games in a row with many successive points, then the leading
player will feel as if he is already losing. As you can imagine, playing from that mindset leads to poor results. | {"url":"http://www.tennis4you.com/workshop/groundstrokes/groundstroke-drill-022.htm","timestamp":"2014-04-19T11:59:59Z","content_type":null,"content_length":"24545","record_id":"<urn:uuid:c9122460-fcba-40c0-9955-8c6fbe9a21a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the first differential Pontryagin class a morphism of stacks?
In Cech Cocycles for Characteristic Classes, Jean-Luc Brylinski and Dennis McLaughlin provide explicit formulas for Cech cocycles for characteristic classes of real and complex vector bundles, and
show how to refine these to Cech-Deligne cocycles for differential characteristic classes in differential cohomology.
When the compact Lie group $G$ involved is sufficiently connected, e.g., if one is interested in the second Chern class of a principal $SU(n)$-bundle, the Brylinski-McLaughlin formula drastically
simplifies, and it can be shown that in this case it actually gives a morphism of stacks from the classifying stack of principal $G$-bundles with connections to the (higher) stack of principal $U(1)
$-$n$-gerbes with connections (here $n+2$ is the degree of the characteristic class involved).
This stacky interpretation is emphasised, e.g., in the follow-up Cech cocycles for differential characteristic classes - An $\infty$-Lie theoretic construction, by Urs Schreiber, Jim Stasheff and
myself, where it is obtained (as the title suggests) via integration of $L_\infty$-algebras to higher Lie groups. For this approach, the connectivity of $G$ is essential. For instance one can see
that the first fractional differential Pontryagin class $\frac{1}{2}\hat{p}_1$ is a morphism of stacks from the stack of principal Spin bundles with connection to the 3-stack of $U(1)$-2-gerbes with
connection, and this precisely reproduces the Brylinski-McLaughlin construction, but one cannot see the Brylinski-McLaughlin cocycle for the first differential Pontryagin class $\hat{p}_1$ for
principal $SO$-bundles with connections via Lie integration: $SO$ is not connected enough to allow this. This phenomenon is in a sense not surprising: for instance the "identity" morphism from the
Lie algebra of $O(2)$ to the Lie algebra of $SO(2)$ cannot be integrated to a morphism of Lie groups from $O(2)$ to $SO(2)$ due to "lack of connectivity" reasons.
Yet the fact that a particular technique fails does not mean that a statement is false. So here is my question: is there a natural interpretation of Brylinski-McLaughlin cocycle for the first
differential Pontryagin class $\hat{p}_1$ as a morphism of stacks (from $SO$-bundles with connections to $U(1)$-2-gerbes with connections)? or is there a natural interpretation of
Brylinski-McLaughlin cocycle for the second differential Chern class $\hat{c}_2$ as a morphism of stacks from $U$-bundles (not $SU$!) with connections to $U(1)$-2-gerbes with connections)?
My feeling is that once there are no topological obstructions (i.e., once the characteristic classes are defined at the non-very-connected level, as in these cases), the morphism of stacks (which
surely exists at the highly connected level) descends from the higher connected cover of $G$ involved to the original $G$. So, for instance, since $\frac{1}{2}p_1$ is not an integral class for
$SO$-bundles one cannot make $\frac{1}{2}\hat{p}_1$ descend from principal Spin-bundles with connections to principal $SO$-bundles with connections; but since $p_1$ is an integral class for
$SO$-bundles, then it should be possible that $\hat{p}_1$ descends. But all my attempts towards a rigorous proof of this have failed so far, so I've begun thinking that I may be wrong, and that
$\hat{p}_1$ can be given no natural interpretation as a morphism of stacks after all.
Any suggestion, reference or criticism concerning this problem is welcome.
at.algebraic-topology higher-category-theory differential-cohomology gerbes
I don't think that your question has something to do with connectedness of groups. The stack morphism you are looking for is simply the assignment of the Chern-Simons 2-gerbe, constructed using a
multiplicative gerbe with connection that represent the differential characteristic class you want. – Konrad Waldorf Dec 12 '11 at 22:56
Hi Konrad, thanks for your comment. What I don't see clearly is precisely how this assignment is functorial. I mean, I know that a differential characteristic class is represented by a
multiplicative gerbe with connection (that is $\hat{H}^{n+2}(X;\mathbb{Z})\cong\{$isomorphism classes of multiplicative $n$-gerbes with connection), but what I don't see clearly is how this choice
of representatives is functorial. Presently, I'm able to see functoriality only in the highly connected case. – domenico fiorenza Dec 13 '11 at 7:16
The multiplicative gerbe lives over the group, it represents the universal differential characteristic class. Functorality, as I understand your question, is in the manifold, and that's a
consequence of the definition of the Chern-Simons 2-gerbe. – Konrad Waldorf Dec 13 '11 at 8:22
By the way, what exactly is the definition of "first differential Pontryagin class"? – Konrad Waldorf Dec 13 '11 at 8:23
actually, for functoriality I meant the following: let us fix a base manifold $\Sigma$ to simplify and let us denote by $\mathbb{X}$ the groupoid of principal $SO$-bundels with connection on $\
Sigma$ and by $\mathbb{Y}$ the 3-groupoid of multiplicative 2-gerbes with connection on $\Sigma$. Then we have a "first differential Pontryagin class" $\pi_0(\mathbb{X})\to \pi_0(\mathbb{Y})$
mapping the isomorphism class of an $SO$-bundle with connection to the isomorphism class of its associated 2-gerbe. My question was: does this lift to a morphism of higher groupoids $\mathbb{X}\to
\mathbb{Y}$. (...) – domenico fiorenza Dec 13 '11 at 12:27
show 6 more comments
1 Answer
Yes, every differential characteristic class is a stack morphism.
The point is that there exist universal differential characteristic classes. These are not easy to describe since they involve a notion of differential cohomology of classifying spaces.
One way is to use Urs Schreiber's approach. At least in the case of degree four classes one can alternatively use the theory of multiplicative bundle gerbes.
In this context, I'd use the following working definition:
Definition: A universal degree four differential characteristic class on a Lie group $G$ is a multiplicative $U(1)$-bundle gerbe over $G$ with (multiplicative) connection of curvature $H$.
Here, $H$ is the "canonical" 3-form on $G$; it is defined using a symmetric bilinear form on the Lie algebra of $G$ - I think any such form is fine.
Multiplicative bundle gerbes have been introduced in the paper (1); they represent classes in $H^4(BG,\mathbb{Z})$. The notion of a multiplicative connection is subtle, I'd refer to
Definition 1.3 of my paper (2). The point is that the "naive" definition is too strict and leaves essentially no space for examples. In particular, while a multiplicative gerbe over $G$
can be seen as a 2-gerbe over $BG$, a multiplicative connection is NOT a connection (in the ordinary sense) on this 2-gerbe.
Example 1: If $G=Spin(n)$, the basic gerbe $\mathcal{G}$ over $G$ carries a canonical connection of curvature $H$ and a canonical multiplicative structure. It is the universal
differential half first Pontryagin class, $\frac{1}{2}\widehat{p_1}$. It underlies the definition of string connections I have proposed in my paper (3).
Example 2: If $G = SO(n)$, the bundle gerbe $\mathcal{G}$ descends together with its connection along the projection $Spin(n) \to SO(n)$, but its multiplicative structure does not
descend. Instead, only the multiplicative bundle gerbe $\mathcal{G}^2 := \mathcal{G} \times \mathcal{G}$ descends together with its connection and its multiplicative structure. The
descended gerbe over $SO(n)$ is the universal differential first Pontryagin class, $\widehat{p_1}$. Descent theory for multiplicative gerbes, together with obstructions, is discussed in
(4), see Table 1 at the end of the paper.
Now suppose that $\mathcal{G}$ is a universal differential characteristic class, $X$ is a smooth manifold and $P$ is a principal $G$-bundle with connection $A$ over $X$. The Chern-Simons
2-gerbe $\mathbb{CS}_P(\mathcal{G})$ is a $U(1)$-bundle 2-gerbe over $X$, see (1). A connection on $\mathbb{CS}_P(\mathcal{G})$ is constructed from the Chern-Simons 3-form $CS(A)$ and
the multiplicative connection on $\mathcal{G}$; here the condition that the curvature of $\mathcal{G}$ is $H$ is important. This construction is described in detail in Section 3.2 of
(2). It has the following properties:
• if $\xi_P: M \to BG$ is a classifying map for $P$, then $[\mathbb{CS}_P(\mathcal{G})] = \xi_P^*[\mathcal{G}] \in H^4(M,\mathbb{Z})$.
• a connection-preserving bundle morphism $P_1 \to P_2$ (covering some smooth map $X_1 \to X_2$ between base manifolds) induces a morphism $$ \mathbb{CS}_{P_1}(\mathcal{G}) \to \mathbb
{CS}_{P_2}(\mathcal{G}) $$ between bundle 2-gerbes.
The first statement is Theorem 3.2.3 in (2). It means that on the level of the underlying (non-differential) characteristic class the construction is just pullback. The second statement
follows by inspection of the definition of $\mathbb{CS}_P(\mathcal{G})$. It means precisely that we have a stack morphism $$ \mathbb{CS}(\mathcal{G}): Bun_G^{\nabla} \to 2\text{-}Grb_{U
(1)}^{\nabla}. $$
Nice answer! And nice gravatar btw! (For everyone else, it's a painting of a gerbe henri-matisse.net/paintings/ex.html) – David Roberts Dec 13 '11 at 23:59
What an answer! what I was missing was precisely descent and obstructions. Really, really thanks! – domenico fiorenza Dec 14 '11 at 7:05
1 I allowed myself to change the arXiv links from .pdf to abstracts. By the way, this is a very nice answer! – DamienC Dec 14 '11 at 12:25
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology higher-category-theory differential-cohomology gerbes or ask your own question. | {"url":"http://mathoverflow.net/questions/83147/is-the-first-differential-pontryagin-class-a-morphism-of-stacks/83369","timestamp":"2014-04-19T09:55:35Z","content_type":null,"content_length":"70546","record_id":"<urn:uuid:368b1915-478c-42e7-8e3a-04cb2a760c6c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dividing by 48
http://www.velocityreviews.com/forums/(E-Mail Removed)
> I have a signal (integer). How can I describe synthesizable code
> for dividing that signal by 48 ? Result (ls_rowaddr) should
> be whole-number that is integer.
This is an interesting problem. You could use my general entity
for dividing two arbitrary numbers:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
entity binary_division is
generic(
bits: positive := 8); -- default width; the actual default was lost from the post, 8 is assumed
port(
dividend: in unsigned(bits-1 downto 0);
divisor: in unsigned(bits-1 downto 0);
quotient: out unsigned(bits-1 downto 0);
remainder: out unsigned(bits-1 downto 0);
division_by_zero_error: out boolean);
end entity binary_division;
architecture rtl of binary_division is
begin
divide: process(dividend, divisor)
variable temp_dividend: unsigned(bits-1 downto 0);
variable temp_divisor: unsigned(bits-1 downto 0);
variable temp_quotient: unsigned(bits-1 downto 0);
variable align_count: natural range 0 to bits;
begin
-- init
temp_dividend := dividend;
temp_divisor := divisor;
temp_quotient := (others => '0');
if temp_divisor = 0 then
division_by_zero_error <= true;
quotient <= (others => '0');
remainder <= (others => '0');
else
division_by_zero_error <= false;
if temp_divisor > temp_dividend then
quotient <= (others => '0');
remainder <= dividend;
else
-- left align
align_count := 0;
for i in 1 to bits-1 loop
exit when temp_divisor(bits-1) = '1';
temp_divisor := shift_left(temp_divisor, 1);
align_count := align_count + 1;
end loop;
-- divide
for i in 0 to bits-1 loop
if temp_divisor > temp_dividend then
temp_quotient := temp_quotient(bits-2 downto 0) & '0';
else
temp_quotient := temp_quotient(bits-2 downto 0) & '1';
temp_dividend := temp_dividend - temp_divisor;
end if;
temp_divisor := shift_right(temp_divisor, 1);
exit when align_count = 0;
align_count := align_count - 1;
end loop;
quotient <= temp_quotient;
remainder <= temp_dividend;
end if;
end if;
end process;
end architecture rtl;
But this uses a lot of LUTs, 5% of a Cyclone I (about 300 logic elements)
for 8 bit and the worst case timing between input and output is 66.6 ns
(if you tweak the project settings a bit, it can be reduced to 53 ns).
For 16 bit you can use it for benchmarking synthesizer tools, but I
would not recommend it in real designs, because it uses 21% (about 1,300
logic elements) and worst timing is 122 ns. For 32 bit the Quartus II
says "Current module quartus_fit ended unexpectedly", maybe because
it needs more logic elements than available.
There are faster algorithms,
but if you don't need high parallel speed and if you have a clock, you
can serialize the algorithm, which shouldn't need very many logic elements.
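For anyone who wants to cross-check the entity against software, here is a small Python reference model of the same align-and-subtract scheme (my addition, not Frank's code; the 8-bit default is an assumption):

def binary_division(dividend, divisor, bits=8):
    if divisor == 0:
        return 0, 0, True                    # division_by_zero_error
    if divisor > dividend:
        return 0, dividend, False
    align = 0
    while divisor < (1 << (bits - 1)):       # left-align the divisor, as in the VHDL
        divisor <<= 1
        align += 1
    quotient = 0
    for _ in range(align + 1):               # one compare/subtract/shift step per alignment
        quotient <<= 1
        if divisor <= dividend:
            quotient |= 1
            dividend -= divisor
        divisor >>= 1
    return quotient, dividend, False

assert all(binary_division(a, b)[:2] == (a // b, a % b)
           for a in range(256) for b in range(1, 256))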
The testbench:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
entity binary_division_test is
end entity binary_division_test;
architecture rtl of binary_division_test is
constant bits: positive := 8;
signal dividend: unsigned(bits-1 downto 0);
signal divisor: unsigned(bits-1 downto 0);
signal quotient: unsigned(bits-1 downto 0);
signal remainder: unsigned(bits-1 downto 0);
signal division_by_zero_error: boolean;
begin
binary_division_inst: entity binary_division
generic map(
bits => bits)
port map(
dividend => dividend,
divisor => divisor,
quotient => quotient,
remainder => remainder,
division_by_zero_error => division_by_zero_error);
test_divide: process
variable count: positive;
begin
dividend <= to_unsigned(0, bits);
divisor <= to_unsigned(0, bits);
wait for 1 ps;
count := 1;
for i in 1 to bits loop
count := 2*count;
end loop;
count := count - 1;
for i in 0 to count loop
for j in 0 to count loop
dividend <= to_unsigned(i, bits);
divisor <= to_unsigned(j, bits);
wait for 1 ps;
if j = 0 then
assert quotient = 0 report "quotient error" severity failure;
assert remainder = 0 report "remainder error" severity failure;
assert division_by_zero_error report "division_by_zero_error error" severity failure;
else
assert quotient = i / j report "quotient error" severity failure;
assert remainder = i - i / j * j report "remainder error" severity failure;
assert not division_by_zero_error report "division_by_zero_error error" severity failure;
end if;
end loop;
end loop;
wait for 1 ps;
assert false report "No failure, simulation was successful." severity failure;
end process;
end architecture rtl;
Frank Buss,
(E-Mail Removed) http://www.frank-buss.de | {"url":"http://www.velocityreviews.com/forums/t375801-p2-dividing-by-48-a.html","timestamp":"2014-04-17T19:26:39Z","content_type":null,"content_length":"62067","record_id":"<urn:uuid:4283de4b-fd95-47bd-9208-7422b59e1ef7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
palindromic numbers
May 14th 2005, 02:38 PM
palindromic numbers
A palindromic number is one that reads the same from left to right or from right to left. For example, 64746 is a palindromic number.
a/ How many odd six-digit palindromic numbers are there?
b/ How many odd seven digit palindromic numbers are there in which every digit appears at most twice?
May 16th 2005, 12:47 AM
How many odd 1-, 2-, and 3-digit numbers are there?
June 29th 2005, 09:40 PM
Thanks for your time, but I worked out the answer myself
June 30th 2005, 07:38 AM
Well now you've got me curious. Do you mind showing us how you solved it?
December 26th 2005, 01:17 PM
Originally Posted by mathsmad
A palindromic number is one that reads the same from left to right or from right to left. For example, 64746 is a palindromic number.
a/ How many odd six-digit palindromic numbers are there?
b/ How many odd seven digit palindromic numbers are there in which every digit appears at most twice?
I didn't understand if in both a and b each digit appears only twice, so I will solve it both ways.
a/ <each digit twice>
the sum of choosing including leading zero minus the sum of choosing with leading zero
<as many digits>
December 26th 2005, 03:14 PM
Consider the slots:
_ _ _ _ _ _
In the first slot you can only place 5 numbers (because since it is odd the last slot must be 1,3,5,7, or 9).
In the second slot you can place 10 numbers.
In the third slot you can place 10 numbers.
In the fourth slot you can only have one number (same as third slot because it is palindromic).
In the fifth slot you can only have one number.
In the sixth slot you can only have one number.
Thus by the Fundamental Counting Principle there are a total of 5*10*10 possibilities thus, 500 different numbers.
One interesting fact about palindromes is that if it is even-digited then it is always divisible by 11. And if it is odd-digited then its remainder when divided by 11 is the middle digit (thus if
it is zero then it is divisible by 11, and this type of number can never have remainder 10). Thus "there does not exist a palindromic number which when divided by 11 leaves a remainder of 10!"
December 29th 2005, 10:38 AM
Originally Posted by mathsmad
A palindromic number is one that reads the same from left to right or from right to left. For example, 64746 is a palindromic number.
a/ How many odd six-digit palindromic numbers are there?
b/ How many odd seven digit palindromic numbers are there in which every digit appears at most twice?
Consider an odd number <=999. Each of these gives a unique six-digit
palindromic number as follows:
first add as many zeros to the front of the number as needed so that we
have exactly three digits. Reverse the digits and append reversed copy to
the front of the three digits.
It is self evident that any six-digit palindromic number can be generated in
this manner.
Therefore there are exactly as many six-digit palindromic numbers as there
are odd numbers <=999. There are exactly 500 odd numbers <=999, so
there are exactly 500 six-digit palindromic numbers.
For seven-digit odd palindromic numbers the three most and the three least
significant digits are the digits of a six-digit palindromic number. Thus for
each possible middle digit there are 500 seven-digit palindromic numbers. The
possible middle digits are 0, 1, 2, .., 9. Hence there are 5000 seven-digit
odd palindromic numbers.
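A brute-force check of the two counts derived above (added as an illustration; note it ignores part b's extra "each digit at most twice" condition):

odd6 = sum(1 for n in range(100000, 1000000) if n % 2 == 1 and str(n) == str(n)[::-1])
odd7 = sum(1 for n in range(1000000, 10000000) if n % 2 == 1 and str(n) == str(n)[::-1])
print(odd6, odd7)   # 500 and 5000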
January 22nd 2006, 08:29 PM
Originally Posted by mathsmad
A palindromic number is one that reads the same from left to right or from right to left. For example, 64746 is a palindromic number.
try this one about palindromic numbers:
You start with any number which isn't a palindromic one. You write the digits of this number from end to start and add it to the original number. If the result is palindromic then you've got
what you wanted. If the result is not a palindromic number you write the digits (of the result of course!) from end to start, add, check ... and so on, until you get a palindrome.
I've attached a simple example. If you start for example with 40793 then you need 22 steps to get a palindrome.
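Since the attachment is not reproduced here, here is a tiny sketch of the reverse-and-add process just described (added; the step limit is an arbitrary safeguard):

def reverse_and_add(n, limit=1000):
    steps = 0
    while str(n) != str(n)[::-1] and steps < limit:
        n += int(str(n)[::-1])
        steps += 1
    return n, steps

print(reverse_and_add(40793))   # the palindrome reached and how many steps it took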
I don't know if this rule is proofed (I never found a proof for it). So maybe the proof is a nice job for a boring weekend ;-) | {"url":"http://mathhelpforum.com/number-theory/240-palindromic-numbers-print.html","timestamp":"2014-04-18T04:41:13Z","content_type":null,"content_length":"9858","record_id":"<urn:uuid:df694a76-e822-455b-a482-c0095c61d109>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
Grove Hall, MA Prealgebra Tutor
Find a Grove Hall, MA Prealgebra Tutor
...I teach science in general and computers and computer science. I teach math to all levels of students whether gifted, or normal or having been diagnosed with a disability. Mathematics: Through
tutoring I have a good sense for what students should know and how to diagnose areas of weakness and strength.
47 Subjects: including prealgebra, chemistry, calculus, reading
...GRE: 170 Math, 170 Verbal, 6.0 Writing. GMAT: 780. MY EXPERIENCE: I have thousands of hours of professional teaching, tutoring, and mentoring experience - eight years in the Boston metro area
47 Subjects: including prealgebra, English, chemistry, reading
...I love to also help students and tutor on the side. I want all students to feel confident in math. Even though I was a math major in college I will only tutor math from 5th grade up to
Geometry or Algebra 2 in high school.
6 Subjects: including prealgebra, geometry, algebra 1, elementary math
...As a middle school student, I met with a tutor once a week and it significantly increased my grades. I want to tutor students, because I believe everyone has their own way of learning and I
can cater to that. I specialize in tutoring Biology, Chemistry, Calculus, and Algebra for Middle School and High School level courses.
14 Subjects: including prealgebra, chemistry, calculus, geometry
...I am a neuroscience PhD with a love for biology and chemistry. If you need support in either of these subjects, I would be happy to help you out. I am also available for tutoring beginning to
intermediate French (I have a French minor and have studied in France during college), algebra, and can proofread/edit essays, applications, and papers.
9 Subjects: including prealgebra, chemistry, biology, French
Related Grove Hall, MA Tutors
Grove Hall, MA Accounting Tutors
Grove Hall, MA ACT Tutors
Grove Hall, MA Algebra Tutors
Grove Hall, MA Algebra 2 Tutors
Grove Hall, MA Calculus Tutors
Grove Hall, MA Geometry Tutors
Grove Hall, MA Math Tutors
Grove Hall, MA Prealgebra Tutors
Grove Hall, MA Precalculus Tutors
Grove Hall, MA SAT Tutors
Grove Hall, MA SAT Math Tutors
Grove Hall, MA Science Tutors
Grove Hall, MA Statistics Tutors
Grove Hall, MA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Cambridgeport, MA prealgebra Tutors
Dorchester, MA prealgebra Tutors
East Braintree, MA prealgebra Tutors
East Milton, MA prealgebra Tutors
East Watertown, MA prealgebra Tutors
Kenmore, MA prealgebra Tutors
North Quincy, MA prealgebra Tutors
Quincy Center, MA prealgebra Tutors
Readville prealgebra Tutors
South Boston, MA prealgebra Tutors
South Quincy, MA prealgebra Tutors
Squantum, MA prealgebra Tutors
West Quincy, MA prealgebra Tutors
Weymouth Lndg, MA prealgebra Tutors
Wollaston, MA prealgebra Tutors | {"url":"http://www.purplemath.com/Grove_Hall_MA_Prealgebra_tutors.php","timestamp":"2014-04-21T15:03:00Z","content_type":null,"content_length":"24358","record_id":"<urn:uuid:7a1de36e-5e97-4530-b631-87db59f15c36>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright © University of Cambridge. All rights reserved.
'Mathsjam Jars' printed from http://nrich.maths.org/
A series of jam jars of uniform cross section look like letters when viewed face-on. They are 1cm thick, and the corners of the vessels have either whole or half cm values for their coordinates.
Hot, smooth jam is poured slowly into each vessel through one of the holes at the top at a rate of 1 cm$^3$ per second.
Seven of the jam jars take the same time to fill up. Which are they?
Which one takes the longest to fill?
Which would fill up first?
The height of the jam in one of the vessels is measured and a chart of the height against time is plotted, as follows:
Which jam jars does this chart correspond to? Can you explain what each part of the chart corresponds to?
The very observant jam maker might have noticed that the chart given is actually slightly inaccurate: at certain points it is an over-measurement and at certain points it is an under-measurement. Can
you see where and why?
Make charts of height against time for some of the other letters.
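If you would rather produce such charts numerically than by hand: since the vessels are 1 cm thick and filled at 1 cm$^3$ per second, the height rises at a rate of 1 divided by the current horizontal width in cm. A rough Python sketch, assuming a vessel whose cross-section at height h is a single region of width w(h) (the profile below is invented for illustration; each letter-shaped jar would need its own profile):

def fill_curve(width, top, dt=0.01):
    # (time, height) samples for a 1 cm thick vessel filled at 1 cubic cm per second
    t, h, samples = 0.0, 0.0, []
    while h < top:
        h += dt / width(h)   # the volume added in dt is spread over width(h) * 1 cm of base
        t += dt
        samples.append((t, h))
    return samples

# invented example: 2 cm wide up to height 3 cm, then 6 cm wide up to 5 cm
curve = fill_curve(lambda h: 2.0 if h < 3.0 else 6.0, top=5.0)
print(curve[-1])   # roughly (18.0, 5.0), since the total volume is 2*3 + 6*2 = 18 cm^3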
p.s. If you feel the need to question the 'runnyness' of the jam, we can assume that it is of low viscosity, contains no 'bits' and remains at the same temperature throughout. You can decide on the
flavour. You might also like thinking about
this problem | {"url":"http://nrich.maths.org/9694/index?nomenu=1","timestamp":"2014-04-20T19:21:18Z","content_type":null,"content_length":"4352","record_id":"<urn:uuid:72403f14-7ec4-47bc-8c7e-670be8883172>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: get predictions using mata
st: get predictions using mata
From adiallo5@worldbank.org
To statalist@hsphsun2.harvard.edu
Subject st: get predictions using mata
Date Wed, 30 Aug 2006 17:01:31 -0400
Danielle ,
I am not a mata specialist but for the second question,
I think you should use st_matrix.
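For example (a sketch only, since I don't have the data and the names are made up): the conformability error most likely comes from y being declared but never given any contents, so it cannot be joined with the 1 x 1 results; pre-allocating it avoids the problem, and st_matrix() pulls e(b) into Mata.

real colvector predictions(real rowvector b, real matrix X)
{
    real colvector y
    real scalar    i
    y = J(rows(X), 1, .)              // pre-allocate instead of appending to an empty matrix
    for (i = 1; i <= rows(X); i++) {  // loop over observations (rows), not cols(X)
        y[i] = b * X[i, .]'           // fitted value for observation i
    }
    return(y)
}

and, after running -arima-,

b = st_matrix("e(b)")                 // e(b) as a Mata row vector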
Danielle wrote:
Dear Statalisters,
I am using -arima- with a panel dataset (possible if you specify the
"condition" option). But, the postestimation commands don't work, so I am
attempting to compute predictions and MSE myself, using mata, which I
am just beginning to learn.
I wrote the following little function to simulate getting predictions:
real matrix test(real rowvector b, real matrix X)
{
    real matrix y
    for(i=1; i<=cols(X); i++) {
        real scalar yi
        yi = b * X[i, .]'
        y = (y \ yi)
    }
    return(y)
}
When I run:
b = (1, 2, 3, 4)
X = (1, 2, 3, 4 \ 5, 6, 7, 8 \ 9, 10, 11, 12 \ 13, 14, 15, 16)
test(b, X)
I get the following error msg:
: test(b, X)
test(): 3200 conformability error
<istmt>: - function returned error
I suspect that part of the problem is the "i" subscripting. In any
case, can someone please offer advice on how to fix this? Or, even better,
has anyone already written something to generate predictions?
Also, after I run -arima-, and get the coefficient matrix e(b), how do
I get this to be a mata matrix?
Thank you,
Danielle Ferry
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-08/msg00951.html","timestamp":"2014-04-19T02:54:36Z","content_type":null,"content_length":"9352","record_id":"<urn:uuid:4af5598a-5450-448a-961e-b54852fe0bf4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
V.22 Liouville's Theorem and Roth's Theorem
G = Gal(f) has a fixed field, defined to be the set of all numbers a ∈ E such that σ(a) = a for every automorphism σ in the subgroup in question. Galois proved that the association between a subgroup and its fixed field gives a one-to-one correspondence between subgroups of G and fields which lie between Q and E (the so-called intermediate subfields of E). The condition that f(x) has a radical formula for its roots leads to certain special kinds of intermediate subfields, and hence to certain special subgroups of G, and eventually to Galois's most famous theorem: the polynomial f(x) has a radical formula for its roots if and only if its Galois group Gal(f) is a soluble group. (This means that G = Gal(f) has a sequence of subgroups 1 = G_0 < G_1 < ... < G_r = G such that for each i, G_i is a normal subgroup [I.3 §3.3] of G_{i+1} and the factor group G_{i+1}/G_i is Abelian.)
It follows from Galois's theorem that to demonstrate the insolubility of the quintic, it is enough to produce a quintic f(x) such that Gal(f) is not a soluble group. An example of such a quintic is f(x) = 2x^5 - 5x^4 + 5: one can show first that Gal(f) is isomorphic to the symmetric group S_5; and second that S_5 is not a soluble group. Here is a brief sketch of how the argument goes. First one establishes that f(x) is an irreducible polynomial (i.e., is not the product of two rational polynomials of smaller degree) with five distinct complex roots. Thus, as observed above, Gal(f) can be regarded as a subgroup of S_5 that permutes the five roots. By sketching the graph of f(x) one can easily see that three of its roots are real and that the other two, call them α_1 and α_2, are complex conjugates of each other. Since the complex conjugation map z ↦ z̄ always gives an automorphism in Gal(f), it follows that Gal(f) is a subgroup of S_5 that contains a 2-cycle, namely (α_1 α_2). Another basic general fact is that the Galois group of an irreducible polynomial permutes the roots transitively, meaning that for any two roots α_i, α_j there exists an automorphism in Gal(f) that sends α_i to α_j. Thus, our group Gal(f) is a subgroup of S_5 that permutes the five roots transitively and contains a 2-cycle. At this point some fairly elementary group theory shows that Gal(f) must actually be the whole of S_5. Finally, the fact that S_5 is not a soluble group follows easily from the fact that the alternating group A_5 is a non-Abelian simple group (i.e., it has no normal subgroups apart from the identity subgroup and A_5 itself). These ideas can be extended to produce polynomials of any degree n ≥ 5 that have Galois group S_n, and that are therefore not soluble by radicals. The reason this cannot be done for quartics, cubics, and quadratics is that S_4 and all its subgroups are soluble groups.
V.22 Liouville's Theorem and Roth's Theorem
One of the most famous theorems in mathematics is the statement that √2 is irrational. This means that there is no pair of integers p and q such that √2 = p/q, or equivalently that the equation p^2 = 2q^2 has no integer solutions apart from the trivial solution p = q = 0. The argument that proves this can be considerably generalized, and, in fact, if P(x) is any polynomial with integer coefficients and leading coefficient 1, then all its roots are either integers or irrational numbers. For example, since x^3 + x - 1 is negative when x = 0 and positive when x = 1 it must have a root strictly between 0 and 1. This root is not an integer, so it must be irrational.
Once one has proved that a number is irrational, it may seem as though not much more can be said. However, this is very far from true: given an irrational number, one can ask how close it is to being rational, and fascinating and extremely difficult questions arise as soon as one does so.
It is not immediately obvious what this question means, since every irrational number can be approximated as closely as you like by rational numbers. For example, the decimal expansion of √2 begins 1.414213..., which tells us that √2 is within 1/100 000 of the rational number 141 421/100 000. More generally, for any positive integer q we can let p be the largest integer such that p/q < √2, and then p/q will be within 1/q of √2. In other words, if we want an approximation to √2 with accuracy 1/q, we can obtain it if we use a denominator of q. However, we can now ask the following question: are there denominators q for which one can obtain an accuracy much better than 1/q? The answer turns out to be yes.
To see this, let N be a positive integer and consider the numbers 0, √2, 2√2, ..., N√2. Each of these can be written in the form m + δ, where m is an integer and δ, the fractional part, lies between 0 and 1. Since there are N + 1 numbers, at least two of their fractional parts must be within 1/N of each other. That is, we can find integers r < s between 0 and N such that if we write r√2 = m + δ and s√2 = n + ε, then |δ − ε| ≤ 1/N. Thus, if we set θ = ε − δ, we have (s − r)√2 = n − m + θ and |θ| ≤ 1/N. If we now let q = s − r and p = n − m, then √2 = p/q + θ/q, so |√2 − p/q| ≤ 1/(qN). Since q ≤ N, this is at most 1/q^2, so for at least some positive N
Physics Forums - Plane determined by intersecting lines
Plane determined by intersecting lines
1. The problem statement, all variables and given/known data
Find the point of intersection of the lines: x=2t+1, y=3t+2, z=4t+3, and x=s+2, y=2s+4, z=-4s-1, and then find the plane determined by these lines.
2. Relevant equations
How do i find the plane determined by these lines?
3. The attempt at a solution
Ive read through the text, and i figured out the first part about where they intersect:
Pt. A=(1,2,3)
then i substituted the 2nd parametric equation into the x,y,z variables and solved for s.
then i plugged s=-1 back into the parametric equation to find x,y,z for intersection
the equations intersect at (1,2,3)
Now i'm stuck...how do i find the plane determined by these lines?
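One standard way to finish: both direction vectors (2,3,4) and (1,2,-4) lie in the plane, so their cross product is a normal vector, and the plane is then fixed by the intersection point (1,2,3). A quick numerical check (Python with numpy):

import numpy as np

d1 = np.array([2, 3, 4])     # direction vector of the first line
d2 = np.array([1, 2, -4])    # direction vector of the second line
p  = np.array([1, 2, 3])     # the point of intersection found above

n = np.cross(d1, d2)         # normal vector: (-20, 12, 1)
d = np.dot(n, p)             # so the plane is -20x + 12y + z = 7
print(n, d)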
polygon area
Perhaps this result is true.
I just worked it out with my
silly new calculus of a hexagon.(this thread)
This result is for any regular
polygon (all sides equal and all angles equal).
It works for a square of length 5.
And for a pentagon of side length 5, it says 43.01, but I don't know if it is right.
Last edited by John E. Franklin (2005-12-22 09:40:35) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=23330","timestamp":"2014-04-18T00:28:25Z","content_type":null,"content_length":"18872","record_id":"<urn:uuid:8e7c9198-e59e-480f-89f0-a078e8f8e617>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
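For comparison, the usual closed form for the area of a regular polygon with n sides of length s is n*s^2/(4*tan(pi/n)); a quick Python check agrees with both numbers quoted above:

from math import pi, tan

def regular_polygon_area(n, s):
    return n * s * s / (4 * tan(pi / n))

print(regular_polygon_area(4, 5))   # 25.0 for the square of side 5
print(regular_polygon_area(5, 5))   # about 43.01 for the pentagon of side 5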
Erdos Distance Problem SOLVED!
(All papers referred to in this post can be accessed from my
website on the Erdos Distance Problem.)
In 1946 Erdos raised the following question: Given n points in the plane how many distinct distances between points are you guaranteed? Let g(n) denote the number. How does g(n) grow? This problem is
now called
The Erdos distance Problem
A while back I created a website
of all(?) of the papers on this problem. Today I happily added one more paper to it which comes close to just about settling it. See also
Tao's Blog
on the recent result.
Julia Garibaldi, Alex Iosevich, and Steven Senger have a book due out in Jan 2011 entitled
The Erdos Distance Problem
. I proofread it for them so, without giving away the ending, I'll just say it is AWESOME.
I have seen it written that Erdos conjectured either g(n) ≥ n/sqrt(log n) or g(n) ≥ n^(1-ε). These papers reference Erdos's 1946 paper. I suspect Erdos did state the conjecture in talks.
I give a brief history:
1. Erdos (1946) showed O(n/sqrt(log n)) ≥ g(n) ≥ Ω(n^0.5). These proofs are easy by today's standards.
2. Leo Moser (1952) showed g(n) ≥ Ω(n^0.66...). (Actually n^(2/3).)
3. Chung (1984) showed g(n) ≥ Ω(n^0.7143...). (Actually n^(5/7).)
4. Chung, Szemeredi, Trotter (1992) showed g(n) ≥ Ω(n^0.8/log n).
5. Szekely (1993) showed g(n) ≥ Ω(n^0.8).
6. Solymosi and Toth (2004) showed g(n) ≥ Ω(n^0.8571). (Actually n^(6/7).)
7. On page 116 of Combinatorial Geometry and its Algorithmic Applications by Pach and Sharir there is a prediction about how far these techniques might go: A close inspection of the general Solymosi-Toth approach shows that, without any additional geometric idea, it can never lead to a lower bound greater than Ω(n^(8/9)). The exponent is 0.888... .
8. Gabor Tardos (2003) showed g(n) ≥ Ω(n^0.8634...). (Actually n^(4e/(5e-1)) - ε.)
9. Nets Hawk Katz and Gabor Tardos (2004) showed g(n) ≥ Ω(n^0.864...). (Actually n^((48-14e)/(55-16e)) - ε.)
10. (ADDED LATER) Elekes and Sharir (2010) have a paper that sets up a framework which Guth and Katz used to get their result.
11. (THE NEW RESULT) Guth and Katz (2010) showed g(n) ≥ Ω(n/log n).
Some thoughts
1. There is a big gap between 1954 and 1984. I'd be curious why this is.
2. I have seen it written that Erdos conjectured either g(n) ≥ n/sqrt(log n) or g(n) ≥ n^(1-ε). These papers reference Erdos's 1946 paper. No such conjecture is there. I am sure that Erdos made the conjecture in talks.
3. The paper by Szekely is a very good read and is elementary. Solymosi and Toth is still elementary but is getting harder. It is not clear when these papers cross the line from Could be presented
to a bright high school student to Could not be.
4. While there is a casual statement about needing new techniques this field did not go in the barriers-direction of formally proving certain techniques could not work.
5. Did the new result use additional geometric ideas? Not sure yet- though I have emailed some people to ask.
6. The first result by Erdos really is sqrt(n)-O(1), not \Omega(\sqrt(n)). The rest are asymptotic.
7. If there are 289 points in the plane then there are at least 17 distinct distances. Are there more? I would be curious what happens for actual numbers like this. Exact results are likely very hard; however, some improvement may be possible. (A small brute-force check of such counts appears right after this list.)
8. Gabor Tardos is Eva Tardos's brother. Nets Hawk Katz is not related to Jon Katz (Cryptographer and Blogger). (Nets Hawk Katz is a great stage name!) I do not know if Leo Moser and Robin Moser are related.
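Following up on thought 7: exact counts for small configurations are easy to get by brute force. For example, here is a quick Python check of the number of distinct distances on the k x k integer grid (the standard near-extremal example); squared distances are compared so everything stays in integers:

from itertools import combinations

def distinct_distances(points):
    return len({(px - qx)**2 + (py - qy)**2 for (px, py), (qx, qy) in combinations(points, 2)})

for k in (5, 10, 17):
    grid = [(x, y) for x in range(k) for y in range(k)]
    print(k * k, "points:", distinct_distances(grid), "distinct distances")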
8 comments:
1. Dear Bill
Elekes Shamir paper seems missing
2. Dear Gil-
THANKS- I have inserted the paper into my website and also added it to the blog itself.
3. Regarding #2, one paper by Erdos explicitly mentioning the conjecture is:
P. Erdos, On some metric and combinatorial geometric problems, Discrete Mathematics, 60 (1986) 147-153.
In the abstract he calls it 'An old and probably very difficult conjecture'. He also offers $500 for a proof of it.
4. What's going to happen to the book, which is probably already printed?
5. doesn't sound SOLVED to me
6. Not completely solved- OH, yes,
that is correct. But very close.
As for the author- he told me that he
will try to get a line in about the problem
before it goes to press.
THe book will still be worthwhile as the proofs of the earlier results are easier and interesting.
7. Isn't Erdos Distance currently at O(n/sqrt(log n))≥g(n)≥Ω(n/log n)?
8. Erdos Distance Problem- YES, I
should have said something like
solved-within-a-log factor.
I've even seen it stated as
showing g(n) \ge n^{2-ep} for
all ep>0. THAT version has been solved.
Still very impressive and surprising. | {"url":"http://blog.computationalcomplexity.org/2010/11/erdos-distance-problem-solved.html","timestamp":"2014-04-19T09:30:39Z","content_type":null,"content_length":"169187","record_id":"<urn:uuid:72c6a42b-f91b-418f-80d7-6bdcc73f9508>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ring Homomorphism
September 21st 2010, 01:57 AM #1
Dec 2009
Ring Homomorphism
(a) What does the question mean by 'determining the kernel'? I know that kernels are ideals and ideals are kernels. Since $\varphi$ is a ring homomorphism from $R \to \mathbb{Z}$, then $ker(\
varphi)=\{ r \in R | \varphi (r) = 0 \}$ is an ideal of R.
Now, a subring I is an ideal of R if $rI := \{ ra: a \in I \} \subseteq I$ and $Ir := \{ ar : a \in I \} \subseteq I$ for $\forall r \in R$. (The subring I is closed under multiplication with
arbitrary elements of R). Is this all I need to say?!
(b) I have no idea how to do this one...
(c) I think $ker(\varphi)$ is a prime ideal of R, if it is a proper ideal of R such that $a,b \in R$ and $ab \in ker(\varphi)$ imply $a \in ker(\varphi)$ or $b \in ker(\varphi)$. What else do I
need to state?
(a) What does the question mean by 'determining the kernel'? I know that kernels are ideals and ideals are kernels. Since $\varphi$ is a ring homomorphism from $R \to \mathbb{Z}$, then $ker(\
varphi)=\{ r \in R | \varphi (r) = 0 \}$ is an ideal of R.
Now, a subring I is an ideal of R if $rI := \{ ra: a \in I \} \subseteq I$ and $Ir := \{ ar : a \in I \} \subseteq I$ for $\forall r \in R$. (The subring I is closed under multiplication with
arbitrary elements of R). Is this all I need to say?!
No. I mean, you've just given the definition of a kernel. You need to apply it to this case. It's just the set of matrices such that a-3b=0. So just stick that in set notation...
Okay, you have a ring homomorphism which maps into the integers. Basically, you are trying to show that the image is isomorphic to the integers (this is easiest, and equivalent). So, how would
you approach this problem? You know that the image injects into the integers (because it is a subring). So, what is left to show?...
This is a exercise in looking at the image to find things about the pre-image/kernel. So, if $ker(\varphi)$ is a prime ideal, then what does this mean about the integers? If $ker(\varphi)$ is a
maximal ideal what does this mean about the integers?
$ker(\varphi)= \left\{ \begin{pmatrix}a &3b \\3b & a \end{pmatrix} : a,b \in \mathbb{Z},\ a-3b=0 \right\}$, i.e. the matrices of that form with $a = 3b$.
Okay, you have a ring homomorphism which maps into the integers. Basically, you are trying to show that the image is isomorphic to the integers (this is easiest, and equivalent). So, how would
you approach this problem? You know that the image injects into the integers (because it is a subring). So, what is left to show?...
So, I just need to show that $R/ker(\varphi)$ is surjective and operation preserving?
This is a exercise in looking at the image to find things about the pre-image/kernel. So, if $ker(\varphi)$ is a prime ideal, then what does this mean about the integers? If $ker(\varphi)$ is a
maximal ideal what does this mean about the integers?
What do you mean by that? Could you please explain a little bit more?
You have operation preserving - that's just the first Isomorphism theorem. So you just need to prove surjective. Can you think of a neat way of proving that the homomorphism is surjective?
$I$ is a prime ideal in $R$
$\Leftrightarrow$$ab \in I \Rightarrow a\in I$ or $b \in I$
$\Leftrightarrow$...what does this mean about $R/I$? Hint: zero divisors.
$I$ is a prime ideal in $R$
$\Leftrightarrow$$ab \in I \Rightarrow a\in I$ or $b \in I$
$\Leftrightarrow$...what does this mean about $R/I$? Hint: zero divisors.
I tried to follow your hint, but I still need some help to wrap it up:
If $I$ is a prime ideal in $R$ then: $R/I$ is an integral domain if and only if $I$ is a prime ideal. (Also $R/I$ is a field if and only if $I$ is maximal.)
$R/I$ is clearly commutative and it has identity $1+I$. In order to show it's an integral domain I only need to show that there are no zero divisors.
x+I and y+I are nonzero in $R/I$$\iff$$x$ and $y$ do not lie in $I$. If I is a prime then x.y does not lie in I and so $(x+I)(y+I)=x.y+I$ must be non-zero.
Let $x= \begin{pmatrix}a_1 &3b_1 \\3b_1 & a_1 \end{pmatrix}$, $y= \begin{pmatrix}a_2 &3b_2 \\3b_2 & a_2 \end{pmatrix}$
$xy= \begin{pmatrix}a_1a_2+9b_1b_2 &3a_1b_2+3a_2b_1 \\3a_2b_1+3a_1b_2 & 9b_1b_2+a_1a_2 \end{pmatrix}$
$\varphi (xy)= a_1a_2 +9b_1b_2 - 3(a_1b_2+a_2b_1) = (a_1-3b_1)(a_2-3b_2)$
$xy \in ker(\varphi)$
$\implies (a_1-3b_1)(a_2-3b_2)=0 \implies a_1-3b_1 = 0$ or $a_2-3b_2=0$ (since $\mathbb{Z}$ has no zero divisors)
So $x \in ker(\varphi)$ or $y \in ker(\varphi)$
What else do I need to do? Am I right so far?
I see, but I really can't think of a neat way of doing it... maybe more hints please?
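As a quick symbolic sanity check of the factorisation used above (a small Python/sympy sketch; following the thread, $\varphi$ is taken to send $\begin{pmatrix}a &3b \\3b & a \end{pmatrix}$ to $a-3b$):

import sympy as sp

a1, b1, a2, b2 = sp.symbols('a1 b1 a2 b2')
M = lambda a, b: sp.Matrix([[a, 3*b], [3*b, a]])
phi = lambda m: m[0, 0] - m[0, 1]                     # a - 3b for a matrix of this form

product = M(a1, b1) * M(a2, b2)
# phi(xy) - (a1 - 3b1)(a2 - 3b2) simplifies to 0, so phi is multiplicative on these matrices
print(sp.expand(phi(product) - (a1 - 3*b1)*(a2 - 3*b2)))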
I tried to follow your hint, but I'm not sure how to wrap it up:
If $I$ is a prime ideal in $R$ then:
$R/I$ is an integral domain if and only if $I$ is a prime ideal.
(Also $R/I$ is a field if and only if $I$ is maximal.)
$R/I$ is clearly commutative and it has identity $1+I$. In order to show it's an integral domain I only need to show that there are no zero divisors.
x+I and y+I are nonzero in $R/I$$\iff$$x$ and $y$ do not lie in $I$. If I is a prime then x.y does not lie in I and so $(x+I)(y+I)=x.y+I$
must be non-zero.
Let $x= \begin{pmatrix}a_1 &3b_1 \\3b_1 & a_1 \end{pmatrix}$, $y= \begin{pmatrix}a_2 &3b_2 \\3b_2 & a_2 \end{pmatrix}$
$xy= \begin{pmatrix}a_1a_2+9b_1b_2 &3a_1b_2+3a_2b_1 \\3a_2b_1+3a_1b_2 & 9b_1b_2+a_1a_2 \end{pmatrix}$
$\varphi (xy)= a_1a_2 +9b_1b_2 - 3(a_1b_2+a_2b_1) = (a_1-3b_1)(a_2-3b_2)$
$xy \in ker(\varphi)$
$\implies (a_1-3b_1)(a_2-3b_2)=0 \implies a_1-3b_1 = 0$ or $a_2-3b_2=0$
So $x \in ker(\varphi)$ or $y \in ker(\varphi)$
What else should I do?
Use the fact that $R/I \cong \mathbb{Z}$. No working is really needed then.
You need to prove that the generators (or here, generator) is mapped to. Can you see why this is sufficient?
I'm not sure how to fit this in, could you please show me where it can be used? Here's the proof I've written so far:
Suppose I is a prime and suppose R/I is not an integral domain. There are zero divisors in R/I iff there are $(x+I), (y+I) \in (R/I)^*$. So that $(x+I)(y+I) = 0_R =I \in (R/I)^*$.
Now, $(x+I) \in (R/I)^* \iff x+I eq I \iff x+I otin I \iff x otin I$.
$y+I \in (R/I)^* \iff y otin I$
but $xy+I=I \iff xy \in I$
So, $x otin I$and $y otin I$ but $xy \in I$. So we get a contradiction and R/I must be an integral domain and so $I=ker(\varphi)$ is a prime.
You need to prove that the generators (or here, generator) is mapped to. Can you see why this is sufficient?
I'm not quiete sure what you mean. But here's what I've tried:
$R/I \cong \mathbb{Z}$, so $(x+I) \mapsto x$. Let $x+I \in R/I$ where
$x+I = \begin{pmatrix}a &3b \\3b & a \end{pmatrix} + I$
Since $a-3b=0 \iff a= 3b$,
$I = \left\{ \begin{pmatrix}3b &3b \\3b & 3b \end{pmatrix} | b \in \mathbb{Z} \right\}$. Therefore
$\varphi \left( \begin{pmatrix}a &3b \\3b & a \end{pmatrix} + I \right) = \varphi \left( \begin{pmatrix}a &3b \\3b & a \end{pmatrix} - \begin{pmatrix}3b &3b \\3b & 3b \end{pmatrix} \right)= \
varphi \left( \begin{pmatrix}a-3b &0 \\0 & a-3b \end{pmatrix} + I \right)$
= (a-3b)-(a-3b)=0
Is this also correct?
I'm not sure how to fit this in, could you please show me where it can be used? Here's the proof I've written so far:
Suppose I is a prime and suppose R/I is not an integral domain. There are zero divisors in R/I iff there are $(x+I), (y+I) \in (R/I)^*$. So that $(x+I)(y+I) = 0_R =I \in (R/I)^*$.
Now, $(x+I) \in (R/I)^* \iff x+I eq I \iff x+I otin I \iff x otin I$.
$y+I \in (R/I)^* \iff y otin I$
but $xy+I=I \iff xy \in I$
So, $x otin I$and $y otin I$ but $xy \in I$. So we get a contradiction and R/I must be an integral domain and so $I=ker(\varphi)$ is a prime.
Because the two rings are isomorphic, if one has zero divisors then so does the other. Similarly, if one is a field then so is the other. Apply these two facts...
I'm not quiete sure what you mean. But here's what I've tried:
$R/I \cong \mathbb{Z}$, so $(x+I) \mapsto x$. Let $x+I \in R/I$ where
$x+I = \begin{pmatrix}a &3b \\3b & a \end{pmatrix} + I$
Since $a-3b=0 \iff a= 3b$,
$I = \left\{ \begin{pmatrix}3b &3b \\3b & 3b \end{pmatrix} | b \in \mathbb{Z} \right\}$. Therefore
$\varphi \left( \begin{pmatrix}a &3b \\3b & a \end{pmatrix} + I \right) = \varphi \left( \begin{pmatrix}a &3b \\3b & a \end{pmatrix} - \begin{pmatrix}3b &3b \\3b & 3b \end{pmatrix} \right)= \
varphi \left( \begin{pmatrix}a-3b &0 \\0 & a-3b \end{pmatrix} + I \right)$
= (a-3b)-(a-3b)=0
Is this also correct?
You want to show that 1 is contained in $im(\varphi)$. That is, does there exist some 2-by-2 matrix of that form which has determinant equal to 1?
Since the factor ring $R/ ker(\varphi)$ is an integral domain, it follows kernel is a prime ideal. But how do you show that $ker(\varphi)$ is not maximal? More precisely, how do you show that $R/
ker(\varphi)$ is not a field?
Math Help
March 27th 2008, 08:38 PM #1
Mar 2008
proof help
Prove that the square root of 6 is irrational.
Sorry i don't know how to do the square root in here.
Proof by contradiction
I would use the proof by contradiction method for this.
So let's assume that ${\sqrt {6}}$ is rational.
By definition, that means there are two integers a and b with no common divisors where:
${\frac{a}{b}} = {\sqrt {6}}$
So let's multiply both sides by themselves:
$({\frac{a}{b}})({\frac{a}{b}}) = ({\sqrt {6}})({\sqrt {6}})$
${\frac{a{^2}}{b{^2}}} = 6$
$a{^2} = 6b{^2}$
This last statement means the RHS (right hand side) is even, because it is a product of integers and one of those integers (at least) is even. Since the two sides are equal, $a{^2}$ must be even too. But any odd number times itself is odd, so if $a{^2}$ is even, then a is even.
Since a is even, there is some integer c that is half of a, or in other words:
2c = a.
Now let's replace a with 2c:
$a{^2} = 6b{^2}$
$(2c){^2} = (2)(3)b{^2}$
$2c{^2} = 3b{^2}$
But now we can argue the same thing for b, because the LHS is even, so the RHS must be even and that means b is even.
Now this is the contradiction: if a is even and b is even, then they have a common divisor (2). Then our initial assumption must be false, so ${\sqrt {6}}$ cannot be rational.
March 27th 2008, 08:57 PM #2
Mar 2008 | {"url":"http://mathhelpforum.com/algebra/32315-proof-help.html","timestamp":"2014-04-20T19:22:34Z","content_type":null,"content_length":"33670","record_id":"<urn:uuid:99c2dc8f-b81e-4431-b258-f31e6f5d47b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Closed forms of summations and recurrence relations
January 14th 2012, 10:40 PM
Closed forms of summations and recurrence relations
Hello and have a nice week !
I'm asking for your help with the following exercises (tried but failed).
So the first one
Find the closed form of the double summation
S(from j=1 to n) S(from i=0 to infinity) j^5/3 (1-1/2j^1/3)^I
The second one
(a) T(n)=T(n/4)+log_4(n) and T(1)=0; find the closed form.
(b) A flower can live for 2 years and reproduces once a year; specifically, it reproduces during its first year of life. Find a recurrence relation that describes the number of flowers at time n,
and then find a closed form if we know that at time 0 there is only one flower.
Thanks in advance !!!!
January 15th 2012, 12:31 AM
Re: Closed forms of summations and recurrence relations
Put in sufficient brackets to make your expressions unambiguous!, As it stands there is no closed form as the inner sum is divergent.
So let us assume you mean:
$S(n)=\sum_{j=1}^n \sum_{i=0}^{\infty} j^{5/3}\left(1-\frac{1}{2j^{1/3}}\right)^i$
Write it as:
$S(n)=\sum_{j=1}^n j^{5/3} \sum_{i=0}^{\infty} \left(1-\frac{1}{2j^{1/3}}\right)^i$
Now the inner sum is an infinite geometric series with sum:
$S_{inner}(j)= \frac{1}{1-(1-1/(2j^{1/3}))}=2j^{1/3}$
January 16th 2012, 12:48 AM
Re: Closed forms of summations and recurrence relations
Put in sufficient brackets to make your expressions unambiguous!, As it stands there is no closed form as the inner sum is divergent.
So let us assume you mean:
$S(n)=\sum_{j=1}^n \sum_{i=0}^{\infty} j^{5/3}\left(1-\frac{1}{2j^{1/3}}\right)^i$
Write it as:
$S(n)=\sum_{j=1}^n j^{5/3} \sum_{i=0}^{\infty} \left(1-\frac{1}{2j^{1/3}}\right)^i$
Now the inner sum is an infinite geometric series with sum:
$S_{inner}(j)= \frac{1}{1-(1-1/(2j^{1/3}))}=2j^{1/3}$
really thanks for the help !(Rofl)(Rofl)(Rofl)(Rofl)(Rofl)(Rofl)
i solved the first one !!!!!!!
But what about the other one with the 2 parts ????
And again thanks in advance !
For the second one, part (a), I found something but I don't think it is right (because I don't even use T(1)=0):
T(n)=O((log_4 n)*n^(log_4 2)). How can I find the closed form of the recurrence relation?
And what about part (b)?
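For part (a), one way to get (and check) a candidate closed form: assuming $n$ is a power of $4$ and unrolling $T(n)=T(n/4)+\log_4 n$ with $T(1)=0$ gives $1+2+\cdots+\log_4 n = \frac{(\log_4 n)(\log_4 n +1)}{2}$, which is $\Theta((\log n)^2)$. A few lines of Python to compare the recursion with that formula:

from math import log

def T(n):                          # direct recursion, n a power of 4
    return 0 if n == 1 else T(n // 4) + log(n, 4)

for k in range(1, 8):
    n = 4 ** k
    closed_form = k * (k + 1) / 2  # (log_4 n)(log_4 n + 1)/2
    print(n, round(T(n), 6), closed_form)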
Relativity and Cosmology
0912 Submissions
[10] viXra:0912.0056 [pdf] submitted on 29 Dec 2009
Space-Time Geometry Translated Into the Hegelian and Intuitionist Systems
Authors: Stephen P. Smith
Comments: 21 Pages.
Kant noted the importance of spatial and temporal intuitions (synthetics) in geometric reasoning, but intuitions lend themselves to different interpretations and a more solid grounding may be sought
in formality. In mathematics David Hilbert defended formality, while L. E. J. Brouwer cited intuitions that remain unencompassed by formality. In this paper, the conflict between formality and
intuition is again investigated, and it is found to impact on our interpretations of space-time as translated into the language of geometry. It is argued that that language as a formal system works
because of an auxiliary innateness that carries sentience, or feeling. Therefore, the formality is necessarily incomplete as sentience is beyond its reach. Specifically, it is argued that sentience
is covertly connected to space-time geometry when axioms of congruency are stipulated, essentially hiding in the formality what is sense-certain. Accordingly, geometry is constructed from primitive
intuitions represented by one-pointedness and route-invariance. Geometry is recognized as a two-sided language that permitted a Hegelian passage from Euclidean geometry to Riemannian geometry. The
concepts of general relativity, quantum mechanics and entropy-irreversibility are found to be the consequences of linguistic type reasoning, and perceived conflicts (e.g., the puzzle of quantum
gravity) are conflicts only within formal linguistic systems. Therefore, the conflicts do not survive beyond the synthetics because what is felt relates to inexplicable feeling, and because the
question of synthesis returns only to Hegel's absolute Notion.
Category: Relativity and Cosmology
[9] viXra:0912.0051 [pdf] submitted on 25 Dec 2009
Making an Analogy Between a Multi-Chain Interaction in Charge Density Wave Transport and the Use of Wave Functionals to Form Soliton-Anti Soliton Pairs
Authors: Andrew Beckwith
Comments: 21 pages
First reading of article approved by IJMPB, which had half of my PhD dissertation results. For the record. Basis of quantum interpretation of Density wave dynamics included.
Category: Relativity and Cosmology
[8] viXra:0912.0048 [pdf] replaced on 18 May 2010
A Solution for "Dark Matter Mystery", Based on Euclidean Relativity.
Authors: Frederic Lassiaille
Comments: 36 pages.
The study of this article suggests an explanation for the "dark matter mystery". This explanation is based on a modification of Newton's law. This modification is conducted from an Euclidean vision
of relativity. Concerning the mystery of the velocity of the stars inside a galaxy, the study calculates a theoretical curve which is different from the one coming from Newton's law. This theoretical
curve is very close, qualitatively speaking, to the measured one. Concerning the mystery of the velocity of a galaxy inside its group, the explanation is more direct. For this mystery, the study
calculates a greater value for G, the gravitational constant.
Category: Relativity and Cosmology
[7] viXra:0912.0045 [pdf] submitted on 20 Dec 2009
Hypotheses of the Motion in Microcosm
Authors: Zou Ha
Comments: 6 Pages.
One question is that we always get the integrated photographs in good order when taking photograph though the phenomenon of particle-wave duality exist in the microcosm. Feynman path is probability
and we should not get the good order photos. There is unknown mechanism in the microcosm. Another question is quantum gravitation how to connect the line and the dot. If I am the particle how I move.
I think of doing some sewing and the unknown space was introduced. The two questions will be thought together and I give an able mechanism.
Category: Relativity and Cosmology
[6] viXra:0912.0044 [pdf] replaced on 12 Mar 2010
Tidal Charges From BraneWorld Black Holes As An Experimental Proof Of The Higher Dimensional Nature Of The Universe.
Authors: Fernando Loup
Comments: 86 Pages. Titles in the headers of the sections rewritten in a more conventional way.
If the Universe have more than 4 Dimensions then its Extra Dimensional Nature generates in our 4D Spacetime a projection of a 5D Bulk Weyl Tensor. We demonstrate that this happens not only in the
Randall-Sundrum BraneWorld Model where this idea appeared first (developed by Shiromizu, Maeda and Sasaki)but also occurs in the Kaluza-Klein 5D Induced Matter Formalism.As a matter of fact this 5D
Bulk Weyl Tensor appears in every Extra Dimensional Formalism (eg Basini-Capozziello-Wesson-Overduin Dimensional Reduction From 5D to 4D) because this Bulk Weyl tensor is being generated by the Extra
Dimensional Nature of the Universe regardless and independently of the Mathematical Formalism used and the Dimensional Reduction From 5D to 4D of the Einstein and Ricci Tensors in both Kaluza-Klein
and Randall-Sundrum Formalisms are similar.Also as in the Randall-Sundrum Model this 5D Bulk Weyl Tensor generates in the Kaluza-Klein formalism a Tidal "Electric" Charge "seen" in 4D as an Extra
Term in the Schwarzschild Metric resembling the Reissner-Nordstrom Metric. We analyze the Gravitational Bending Of Light in this BraneWorld Black Hole Metric(known as the
Dadhich,Maartens,Papadopolous and Rezania) affected by an Extra Term due to the presence of the Tidal Charge compared to the Bending Of Light in the Reissner-Nordstrom Metric with the Electric Charge
also being generated by the Extra Dimension in agreement with the point of view of Ponce De Leon (explaining in the generation process how and why antiparticles have the same rest mass m[0] but
charges of equal modulus and opposite signs when compared to particles)and unlike the Reissner-Nordstrom Metric the terms G/(c^4) do not appear in the Tidal Charge Extra Term.Thereby we conclude that
the Extra Term produced by the Tidal Charge in the Bending Of Light due to the presence of the Extra Dimensions is more suitable to be detected than its Reissner-Nordstrom counterpart and this line
of reason is one of the best approaches to test the Higher Dimensional Nature of the Universe and we describe a possible experiment using Artificial Satellites and the rotating BraneWorld Black Hole
Metric to do so
Category: Relativity and Cosmology
[5] viXra:0912.0031 [pdf] submitted on 11 Nov 2009
El simple fenómeno del redshift gravitatorio demuestra la necesidad de la nueva ecuación fundamental de la teoría conectada (The simple phenomenon of gravitational redshift demonstrates the need for the new fundamental equation of the connected theory)
Authors: Xavier Terri Castañé
Comments: 9 pages, Spanish language
A demonstration that the two fundamental equations of Einstein's general relativity, the geodesic equations and the gravitational field equations, are incompatible with the phenomenon of gravitational redshift, according to which stationary time, as measured by a light clock, passes more slowly the greater the gravitational potential. Consequently, the fundamental equation of the connected theory is postulated. This is developed in subsections that address the problem of event horizons and Schwarzschild black holes.
Category: Relativity and Cosmology
[4] viXra:0912.0016 [pdf] submitted on 8 Dec 2009
Apparent Time-Dependence of the Hubble Constant Deduced from the Observed Hubble Velocity-Distance Equation
Authors: Feng Xu
Comments: 7 pages, first published in 2004 in Hadronic Journal, volume 27, pages 741-748
An apparent time dependence of the Hubble constant was deduced from the linear correlation between the recession velocity of galaxies and the traveled distance of their photons under the assumption
of the space expansion being homologous. The time dependence of the space expansion velocity at early era implied that the currently used relativistic Doppler equation, invalid for accelerating/
deaccelerating reference frames, would lead to inaccurate measurement of the cosmological recession velocity for highly redshifted galaxies/quasars.
Category: Relativity and Cosmology
[3] viXra:0912.0015 [pdf] replaced on 23 Jan 2010
Applications of Euclidian Snyder Geometry to Space Time Physics & Deceleration Parameter ( DE Replacement?)
Authors: Andrew Beckwith
Comments: 41 pages, 2 figures, companion piece to http://vixra.org/abs/0912.0012
This 41 page PPT changed to PDF is material in common A.Beckwith gave in the ChristChurch, New Zealand Meeting, December 16th, 2009, i.e. the ACGRG 5th conference in GR , and also to be presented in
the Beyond the Standard Model, 2010 meeting, in South Africa
Category: Relativity and Cosmology
[2] viXra:0912.0013 [pdf] submitted on 7 Dec 2009
Possible Experimental Evidence to the Converse Unruh Effect in Superconductors
Authors: Z.Y. Wang
Comments: 3 pages.
Although there has not any direct evidence to the Unruh effect and Hawking radiation until now, the converse effect was maybe detected in superconductors. In a noteless experiment performed by
scientists of USSR in 1984, a heat flow across the Josephson junction induced the a.c.component[1] and was interpreted to be a thermoelectric effect. Actually, the thermoelectric effect means a
temperature gradient will generate an extra current. It occurs in a normal material but does not exist in superconductors. Here is a whole new effect that an extra phase difference rather than
current is induced. We regard it as the converse Unruh effect where the temperature is corresponding to an acceleration. Then the temperature gradient will lead to an energy difference and
consequential phase change just like the a.c.Josephson effect. Based on the postulate, the frequency formula dependent to the temperature difference T T dl is given which is in the region of ω = 4πkΔ
/(ℏ^2 V[F]) ∫(T[1] - T[2])dl radiowaves and consistent with the mentioned experiment. We hope further experiments will be carried out soon to make clear the phenomenon.
Category: Relativity and Cosmology
[1] viXra:0912.0012 [pdf] replaced on 19 Feb 2010
Applications of Euclidian Snyder Geometry to the Foundations of Space Time Physics
Authors: Andrew Beckwith
Comments: 20 pages. Significant re dos of material on deceleration parameter q(z) on pages 3, 4, and also Appendix A ( just added ). Figure 1 re sized and re parameterized. Additional work done on
rest of document to answer questions raised by participants in Beyond the Standard Model, 2010, and also major re dos of what is now Appendix D. Emphasis as as source document of information for some
of the presentations for several conference presentations, including Beyond the Standard Model 2010, and SPESIF 2010, and Piers XIAN, 2010, as well as possibly either IDM 2010 in July, or DICE 2010
in September 2010.
The following document is to answer if higher dimensions add value to answering fundamental cosmology questions. The results are mixed, 1st with higher dimensions apparently helping in reconstructing
and preserving the value of Planck's constant, and the fine structure constant from a prior to a present universe, while 2nd failing to add anything different from four dimensional cosmological
models to the question of what would cause an increase in the expansion rate of the universe, as of a billion years ago. Finally 3rd, higher dimensions may allow creation of a joint DM and DE model.
A choice between LQG and brane world geometry is introduced by Snyder geometry, where Snyder geometry's minimum uncertainty length calculations δx may help determine to what extent gravity is an
emergent field that is classical. Independent of the choice of LQG and branes (four dimensions versus higher dimensional cosmology models) is the following question: If gravity is largely classical,
how much nonlinearity is involved? Gravitons and their structure as information carriers may help answer these questions. The main point of this document: DM and DE may be unified in terms of
cosmological dynamics if the higher dimensional models of DM, as seen by KK towers of gravitons are seen to be pertinent to increasing acceleration of the universe a billion years ago via a 4th
dimensional small graviton mass term added to the KK tower DM representation of gravitons (a model of DM).
Category: Relativity and Cosmology | {"url":"http://rxiv.org/relcos/0912","timestamp":"2014-04-20T10:47:18Z","content_type":null,"content_length":"18601","record_id":"<urn:uuid:10596e0d-9eaf-4e3a-9329-9a55c4dcf339>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
where am i going wrong? [Archive] - Car Audio Forum - CarAudio.com
05-10-2006, 11:20 PM
im building a ported box for 2 12 inch swiss audio spl1290's.
i am trying to figure out what the length of the port should be but i just cant get it....im using this equation
Lv = [(1.463*10^7R^2)/(Fb^2*Vb)] - 1.463
ive searched and searched and read the entire sticky in this forum but i keep coming up with an outrageously large number...
the box before the port is going to be
34.5 inches long
19 inches deep
and 14 inches high
and the port is going to be 3 inches tall
i want it to be ported to 36 hertz
the internal cubic feet is 4.5 giving each one about 2.25 cubic feet
im going to have the subs on the top of the box firing upwards and the port in the front like near the ground...
so using that formula i came up with that the
square root ((height x width)/pi) because its a vented port.
square root ((3 x 33)/3.14), 33 because that's the inside measurement.
so i got 5.613615478.......that's the part that replaces the R cause it's a vented box.
Lv = [(1.463*10^7R^2)/(Fb^2*Vb)] - 1.463
Lv = [(1.463*10,000,000*31.51267873)] / [(36^2)*(4.5)] - 1.463
Lv = [461030489.8] / [5832] - 1.463
Lv = [79051.87] - 1.463
lv = 79050.40, now obviously that's wayyyyy too big for the box???
anyone have any idea what im doing wrong??? like units wrong? the only one i can think of is the Vb, i have it at 4.5 and that's cubic feet, should that be in inches or something, idk im really confused.
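for what it's worth: the usual published form of that formula wants Vb in cubic inches (4.5 ft^3 = 7776 in^3), and the end correction is 1.463*R rather than a bare 1.463, so the crazy number really is just a units problem. Quick sanity check in Python (assuming that form of the formula; numbers approximate):

from math import pi, sqrt

fb = 36.0                       # tuning frequency, Hz
vb_in3 = 4.5 * 12**3            # net volume: 4.5 cubic feet = 7776 cubic inches
h, w = 3.0, 33.0                # slot port cross-section, inches
r = sqrt(h * w / pi)            # equivalent round-port radius, about 5.61 inches

lv = 1.463e7 * r**2 / (fb**2 * vb_in3) - 1.463 * r
print(round(lv, 1), "inches of port length")   # roughly 37-38 inches, not 79,000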
Odd problem in ISR - Arduino Forum
Author Topic: Odd problem in ISR (Read 672 times)
« Reply #15 on: March 06, 2012, 09:53:49 pm »
(now, to be a true old-fashioned computer nerd, I should have had 5, 10, 20, 30... ?)
You know, I'd forgotten about doing that.
« Reply #16 on: March 07, 2012, 04:58:39 am »
I think the problem is in your scaled math. If you scale a multiplicative constant, you can only use it ONCE, after which you have to start re-normalizing your results.
For instance, using a scale factor of 1000 (base 10 to make things easy), multiplying by 0.234 would mean multiplying by 234 instead. So far so good. But if you take that result and multiply by the scaled constant again, you've multiplied by 54756 (234*234), which is not at all the same as multiplying by a scaled .234*.234 (the correctly scaled constant would be about 55, since .234*.234*1000 is roughly 55, not 54756).
I didn't see the renormalization that you'd have to do. It might have been buried in the mysterious shifts and divisions, but
1) You should make such things obvious by letting the compiler do constant math
#define SCALEDFACTOR(a) ((long)((a)*4096.0))    // 4096 = 2^12; note that ^ in C is XOR, not exponentiation
#define SCALEDMULT(a,x)  ((SCALEDFACTOR(a)*(x))>>12)
.... + SCALEDMULT(0.904347, Lyv[0]) ....
Compilers these days are very good at doing math on constants; don't make your code more obscure by doing it for them!
2) I think this would explain the results you're seeing. Multiply things by a few extra thousands a couple of times in a row and you'll overflow your longs, as well as having intermediate incorrect results.
3) It'd be nice to see the actual equations, scaling factors, and etc CLEARLY DOCUMENTED.
You're trying to do:
yv[3] = (xv[0] + xv[3]) + 3 * (xv[1] + xv[2])
+ ( 0.9390989403 * yv[0]) + ( -2.8762997235 * yv[1])
+ ( 2.9371707284 * yv[2]);
Using integers scaled by a factor of 4096 (12 bit shift), right?
« Last Edit: March 07, 2012, 05:06:51 am by westfw »
Contents: 1. Introduction; 2. Results and Discussion; 3. Experimental Section; 4. Conclusions; Acknowledgements; Conflicts of Interest; References
nanomaterials Nanomaterials Nanomaterials Nanomaterials 2079-4991 MDPI 10.3390/nano4010046 nanomaterials-04-00046 Article Effect of Low-Frequency AC Magnetic Susceptibility and Magnetic Properties of
CoFeB/MgO/CoFeB Magnetic Tunnel Junctions Chen Yuan-Tsung * Lin Sung-Hao Sheu Tzer-Shin Department of Materials Science and Engineering, I-Shou University, Kaohsiung 840, Taiwan; E-Mails:
isu10107009m@cloud.isu.edu.tw (S.H.L.); sheu415@isu.edu.tw (T.S.S.) Author to whom correspondence should be addressed; E-Mail: ytchen@isu.edu.tw; Tel: +886-765-777-11 (ext. 3119); Fax:
+886-765-784-44. 02 01 2014 03 2014 4 1 46 54 14 11 2013 19 12 2013 24 12 2013 © 2014 by the authors; licensee MDPI, Basel, Switzerland. 2014
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
In this investigation, the low-frequency alternate-current (AC) magnetic susceptibility (χ[ac]) and hysteresis loop of various MgO thickness in CoFeB/MgO/CoFeB magnetic tunneling junction (MTJ)
determined coercivity (H[c]) and magnetization (M[s]) and correlated that with χ[ac] maxima. The multilayer films were sputtered onto glass substrates and the thickness of intermediate barrier MgO
layer was varied from 6 to 15 Å. An experiment was also performed to examine the variation of the highest χ[ac] and maximum phase angle (θ[max]) at the optimal resonance frequency (f[res]), at which
the spin sensitivity is maximal. The results reveal that χ[ac] falls as the frequency increases due to the relationship between magnetization and thickness of the barrier layer. The maximum χ[ac] is
at 10 Hz that is related to the maximal spin sensitivity and that this corresponds to a MgO layer of 11 Å. This result also suggests that the spin sensitivity is related to both highest χ[ac] and
maximum phase angle. The corresponding maximum of χ[ac] is related to high exchange coupling. High coercivity and saturation magnetization contribute to high exchange-coupling χ[ac] strength.
Keywords: magnetic tunnel junctions (MTJs); exchange coupling; low-frequency alternate-current (AC) magnetic susceptibility (χ[ac]); resonance frequency (f[res])
Since 1995, the tunneling magnetoresistance (TMR) effect has been extensively discussed, and it has been exploited in much of our modern technology [1,2]. In the past, increasing attention has been
paid to ferromagnetic exchange coupling in magnetic fields [3,4], and the discovery of spintronics has led to a rapid increase in the number of exchange coupling issues. A magnetic tunneling junction
(MTJ) has a trilayer structure that comprises a top free ferromagnetic (FM1) layer, an insulating tunneling barrier layer (spacer), and a bottom pinned ferromagnetic (FM2) layer. It has a great
potential for use in magnetoresistance random access memory (MRAM). It provides many advantages, such as low loss energy, lack of volatility and semi-permanence features, and can be used in
high-density magnetic read heads [5,6,7]. The first demonstrated MgO based tunnel junctions are Parkin et al. [8] and Yuasa et al. [9]. The mechanism of TMR in MgO based junctions is explained by
Butler et al. [10]. In the past, TMR based on CoFeB/MgO/CoFeB MTJ has attracted considerable attention. For example, a previous study found that the magnetron sputtering of CoFeB/MgO/CoFeB at room
temperature (RT) yielded a high TMR ratio [11,12]. Lee et al. [13] also achieved a TMR ratio of 500% at RT. Furthermore, the fabrication of high-quality junctions requires a superior ferromagnetic
layer with a high spin polarization, a crystalline ordering, and a sufficient indirect spin exchange-coupling between the FM1 and FM2 layers [14,15,16,17]. The defects in the tunnel barrier material
can lead to electron trapping and resistance fluctuations and induce field-dependent 1/f noise [18]. The origin of 1/f power spectrum is attributed to charge traps occurring in the barrier layer or
near the interfaces between barrier and magnetic layers at low frequencies [18,19,20,21,22,23]. The alternate-current (AC) susceptibility is related to magnetic noise and exchange-coupling
interaction. The high AC susceptibility can enhance a strong dipole-dipole interaction effect [24]. Moreover, a proper exchange-coupling interaction can induce a large signal-to-noise ratio [25].
However, the external stress acting on magnetic element can induce magnetic susceptibility variation of ferromagnetic layers and disturb spectral power noise of read head device. At low frequencies,
the spectral power noise is dependent on free and fixed ferromagnetic layers of hysteresis loop owing to thermal magnetization fluctuations. The origin of magnetic fluctuations is excited hopping of
magnetic domain walls. However, most of MTJ research has focused on the TMR, whereas the relative low-frequency alternate-current (AC) magnetic susceptibility (χ[ac]) has rarely been examined. The
low field AC measurement at low frequencies is related to the spin sensitivity of MTJ devices [18]. The low-frequency AC magnetic susceptibility (χ[ac]) and hysteresis loop of CoFeB/MgO/CoFeB are
worthwhile to study. This investigation focuses on the maximum χ[ac], the optimal resonance frequency (f[res]) and maximum phase angle (θ[max]) for various MgO barrier thicknesses (6, 8, 11, 13, and
15 Å). The maximum χ[ac] is 0.7 at the optimal resonant frequency of 10 Hz and the maximum phase angle is 228° at an MgO thickness of 11 Å. These values are larger than compared for Fe[40]Pd[40]B[20]
(X Å)/ZnO(500 Å) and suitable for low-frequency magnetic media applications [26]. The magnetic material under the external AC magnetic field shows a magnetic property called multiple-frequency AC
magnetic susceptibility χ[ac] [27]. The origin of χ[ac] is due to the association between magnetic spin interactions [27]. The frequency of the applied AC magnetic field equals the frequency of
oscillation of the magnetic dipole. The maximum χ[ac] value is corresponding to optimal resonance frequency, increasing spin sensitivity at optimal f[res]. It means that the optimal f[res] is
associated with maximal spin sensitivity.
Figure 1 presents the χ[ac] amplitude of the CoFeB/MgO/CoFeB MTJ for different thicknesses of the MgO layer at frequencies in the range 10 to 25,000 Hz. The lowest measured frequency is 10 Hz and the
smallest frequency step is 20 Hz at low frequencies for the χ[ac] measurement. The maximum of χ[ac] at the optimal resonance frequency has the following physical meaning. At low frequencies, the
resultant alternate-current (AC) magnetic dipole moment arises from the oscillation of the volume magnetic dipole moment inside each domain. The applied AC magnetic field acts as a driving force,
and the magnetic interactions among domains act as a restoring force, so the system has a resonant frequency under this driving force. Thus, the frequency of the peak of the low-frequency magnetic
susceptibility corresponds to the resonant frequency of the oscillation of the magnetic dipole moments inside the domains. The χ[ac] peak reflects the spin exchange-coupling interaction and the
dipole moments of the domains as a function of frequency [27]. It is therefore reasonable to conclude that the peaks of the low-frequency susceptibility indicate the magnetic exchange coupling between the two CoFeB layers. The
results suggest that an excitation frequency of 10 to 30 Hz maximizes the χ[ac] of the magnetic exchange-coupled signal, and as the frequency increases above 30 Hz, the χ[ac] obtained from the signal
declines, suggesting that the CoFeB/MgO/CoFeB MTJ is suited to use at low frequencies. The optimal maximum susceptibility, at frequencies in the range of 10 to 30 Hz, can be utilized in inductors and
transformers [28,29].
Measured low-frequency alternate-current magnetic susceptibility (χ[ac]) of CoFeB/MgO/CoFeB as a function of thickness of MgO barrier layer.
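As an illustration (not from the paper), the two quantities reported below, the maximum χ[ac] and its optimal resonance frequency, are simply the height and location of the peak in a measured χ[ac](f) sweep. A minimal Python sketch of that extraction, using placeholder numbers rather than the measured data:

import numpy as np

# Placeholder sweep: frequencies in Hz and the measured susceptibility in a.u.
freq = np.array([10, 30, 50, 100, 300, 1000, 5000, 25000], dtype=float)
chi_ac = np.array([0.71, 0.63, 0.55, 0.41, 0.30, 0.22, 0.15, 0.10])

peak = int(np.argmax(chi_ac))      # index of the susceptibility maximum
chi_max = chi_ac[peak]             # maximum chi_ac (0.71 for the 11 A barrier in the paper)
f_res = freq[peak]                 # optimal resonance frequency (10 Hz in the paper)
print(f"chi_max = {chi_max:.2f} a.u. at f_res = {f_res:.0f} Hz")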
Briefly, the maximum χ[ac] occurs at the optimal resonant frequency (f[res]), which corresponds to the maximum spin sensitivity. Therefore, Figure 2 plots the highest χ[ac] as a function of MgO thickness.
The resonance peak at 10 Hz represents the initial spin exchange-coupling state. The maximum χ[ac] value is 0.44 at 6 Å, 0.52 at 8 Å, 0.71 at 11 Å, 0.27 at 13 Å, and 0.3 at 15 Å.
These findings follow from the indirect interaction of magnetic moments, i.e., the spin exchange interaction between the free and pinned CoFeB
layers; a strong moment interaction can induce a high magnetic susceptibility [24]. The susceptibility peaks are thus closely related to the exchange interaction between the two layers, and
high χ[ac] peaks correspond to strong exchange coupling.
Maximum χ[ac] as a function of thickness of MgO barrier layer.
The phase angle (θ) has the following physical meaning. When a magnetic material is in an external magnetic field, the magnetic dipole moment tends to lie in the direction of the interaction of the
magnetic moment with the external field. When an external AC magnetic field is applied, and the AC frequency is not too high compared to microwave frequencies, the magnetic dipole moment oscillates.
The frequency of the applied AC magnetic field equals the frequency of oscillation of magnetic dipole. However, the direction of instantaneous magnetic dipole is not the same as the direction of the
applied magnetic field. The phase angle denotes the difference [29]. Figure 3 plots the corresponding maximum phase angle (θ[max]) between magnetic field and magnetization as a function of MgO
thickness for the maximal χ[ac]. A high χ[ac] ensures high spin sensitivity, which corresponds to an increase in the phase angle; restated, increasing the phase angle improves the sensitivity for detecting the behavior of
an electron spin. In summary, the results concerning the phase angle are consistent with the trend of χ[ac].
Variation of maximum χ[ac] with maximum phase angle (θ).
Table 1 presents the important parameters of the CoFeB/MgO/CoFeB MTJ. From this table, an MgO thickness of 11 Å is the best among the thicknesses studied from 6 to 15 Å. The maximum χ[ac] of the indirect
exchange-coupling susceptibility of FM1 and FM2 is 0.7, corresponding to a resonant frequency of 10 Hz and a maximum phase angle of 228.5°. According to a previous study, this low-frequency behavior
is associated with 1/f noise caused by electron traps in the tunnel barrier [18]; it is related to electron traps and defects in the barrier rather than to magnetization
fluctuations, which suggests that the quality of the tunneling barrier is an important parameter in reducing low-frequency noise in magnetic tunnel junctions. Accordingly, this CoFeB/MgO/
CoFeB MTJ is suitable for components and low-frequency magnetic device applications [30].
Table 1.
Maximum χ[ac] value, maximum phase angle, and corresponding optimal resonance frequency for various MgO barrier thicknesses.
MgO (Å) Maximum χ[ac] (a.u.) Maximum phase angle θ[max] (degree) Optimal resonance frequency f[res] at the highest χ[ac] (Hz)
6 Å 0.44 142.24 30 Hz
8 Å 0.52 96.44 30 Hz
11 Å 0.71 228.59 10 Hz
13 Å 0.27 103.47 10 Hz
15 Å 0.30 155.44 10 Hz
Figure 4a shows the hysteresis loop of the CoFeB(75 Å)/MgO(11 Å)/CoFeB(75 Å) MTJ. The two-step character of the loop indicates that the spins of the two
CoFeB layers switch separately as the magnetization is saturated by the external field (H). The fields H1, H2, H3, and H4 in Figure 4a indicate the coercive (switching) fields of the two CoFeB layers. The H[c]
value of the free CoFeB layer is (H2 + H4)/2, and the H[c] value of the pinned CoFeB layer is (H1 + H3)/2. The H[c] of the two CoFeB layers initially increases as the MgO thickness goes from 6 to 11 Å and
then decreases from 11 to 15 Å. It can reasonably be concluded that a high H[c] value reflects a strong exchange-coupling χ[ac]; a high H[c] requires a large external field to
change the spin arrangement. Figure 4b shows that the H[c] of the MTJ varies with MgO thickness, and its trend is consistent with Figure 2. Together, the results of
Figure 2 and Figure 4b indicate that the maximum χ[ac] corresponds to a strong magnetic exchange coupling between the two CoFeB layers and induces a correspondingly high H[c]. The saturation magnetization (M[s])
of the two CoFeB layers shows the same concave-down trend, as shown in Figure 4c. The results of Figure 4 therefore indicate that a high M[s] accompanies a strong exchange-coupling
χ[ac] strength and a high H[c] value.
The essential magnetic properties of magnetic tunneling junction are (a) hysteresis loop of MTJ, (b) coercivity value, and (c) saturation magnetization.
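As an illustration (not the paper's analysis code), the coercivity definitions quoted above, H[c] of the free layer = (H2 + H4)/2 and H[c] of the pinned layer = (H1 + H3)/2, are just averages of switching fields read off the two-step loop. A small sketch under that assumption; the field values below are placeholders, not measured data:

def layer_coercivities(h1, h2, h3, h4):
    # Definitions as given in the text for a two-step hysteresis loop.
    hc_free = (h2 + h4) / 2.0      # free CoFeB layer
    hc_pinned = (h1 + h3) / 2.0    # pinned CoFeB layer
    return hc_free, hc_pinned

hc_free, hc_pinned = layer_coercivities(h1=-150.0, h2=-20.0, h3=160.0, h4=30.0)
print(f"Hc(free) = {hc_free:.1f} Oe, Hc(pinned) = {hc_pinned:.1f} Oe")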
A multilayer MTJ was sputtered on a glass substrate by a DC and RF magnetron sputtering system. The chamber pressure was typically under 1 × 10^−7 Torr and the Ar-working chamber pressure was 5 × 10^
−3 Torr. The MTJ structure was glass/CoFeB(75 Å)/MgO(d)/CoFeB(75 Å) with d = 6, 8, 11, 13 and 15 Å. The atomic composition of the CoFeB alloy target was 40 atom % Co, 40 atom % Fe, and 20 atom % B.
To fabricate the MgO barrier, a thin metallic magnesium (Mg) layer was first deposited on the bottom ferromagnetic electrode, and a magnesium oxide (MgO) layer was then formed by reactive RF
sputtering in an oxidizing atmosphere with an Ar/O[2] mixing ratio of 9:16. The in-plane low-frequency alternate-current magnetic susceptibility (χ[ac]) of the MTJ was studied using a χ[ac] analyzer
(X[ac]Quan, MagQu, Taiwan). First, a reference standard sample is calibrated in the χ[ac] analyzer under an external field; the sample to be measured is then inserted into the analyzer. The
driving frequency ranged from 10 to 25,000 Hz, with a minimum frequency of 10 Hz and a minimum frequency step of 20 Hz. The χ[ac] is determined through the magnetization measurement. All measured
samples had the same shape and size to eliminate the demagnetization factor. The χ[ac] value is given in arbitrary units (a.u.) because the result is referenced to the standard sample; it is a
comparative value. Moreover, the in-plane coercivity (H[c]) and saturation magnetization (M[s]) of the two CoFeB layers were obtained using a superconducting quantum interference device (SQUID,
Quantum Design MPMS5, San Diego, CA, USA).
The MgO barrier layer thickness in CoFeB/MgO/CoFeB MTJs was varied to measure low-frequency alternate-current magnetic susceptibility and magnetic properties. The highest χ[ac] was obtained at a
thickness of 11 Å, corresponding to an optimal resonance frequency of 10 Hz and a maximum phase angle of 228.5°. The best resonance frequencies are from 10 to 30 Hz, and this range of frequencies is
useful for transformers, sensors, and magnetic read heads. Additionally, the important low-frequency alternate-current susceptibility and magnetic results demonstrate that the indirect spin exchange
coupling of top CoFeB and bottom CoFeB layers in CoFeB/MgO/CoFeB oscillates.
This work was supported by the National Science Council, under Grant No. NSC100-2112-M-214-001-MY3 and No. NSC 102-2815-C-214-009-M.
The authors declare no conflict of interest.
1. Miyazaki T., Tezuka N. Giant magnetic tunneling effect in Fe/Al[2]O[3]/Fe junction. 1995, 139, L231-L234.
2. Moodera J.S., Kinder L.R., Wong T.M., Meservey R. Large magnetoresistance at room temperature in ferromagnetic thin film tunnel junctions. 1995, 74, 3273-3276. doi:10.1103/PhysRevLett.74.3273
3. Katayama T., Yuasa S., Velev J., Zhuravlev M.Y., Jaswal S.S., Tsymbal E.Y. Interlayer exchange coupling in Fe/MgO/Fe magnetic tunnel junctions. 2006, 89, 112503:1-112503:3.
4. Zhuravlev M.Y., Tsymbal E.Y., Vedyayev A.V. Impurity-assisted interlayer exchange coupling across a tunnel barrier. 2005, 94, 026806:1-026806:4.
5. Matsumoto R., Hamada Y., Mizuguchi M., Shiraishi M., Maehara H., Tsunekawa K., Djayaprawira D.D., Watanabe N., Kurosaki Y., Nagahama T. Tunneling spectra of sputter-deposited CoFeB/MgO/CoFeB magnetic tunnel junctions showing giant tunneling magnetoresistance effect. 2005, 136, 611-615. doi:10.1016/j.ssc.2005.08.032
6. Aoki T., Ando Y., Watanabe D., Oogane M., Miyazaki T. Spin transfer switching in the nanosecond regime for CoFeB/MgO/CoFeB ferromagnetic tunnel junctions. 2008, 103, 103911:1-103911:4.
7. You C.Y., Goripati H.S., Furubayashi T., Takahashi Y.K., Hono K. Exchange bias of spin valve structure with a top-pinned Co[40]Fe[40]B[20]/IrMn. 2008, 93, 012501:1-012501:3.
8. Parkin S.S.P., Kaiser C., Panchula A., Rice P.M., Hughes B., Samant M., Yang S.H. Giant tunneling magnetoresistance at room temperature with MgO (100) tunnel barriers. 2004, 3, 862-867. doi:10.1038/nmat1256
9. Yuasa S., Nagahama T., Fukushima A., Suzuki Y., Ando K. Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions. 2004, 3, 868-871. doi:10.1038/nmat1257
10. Butler W.H., Zhang X.G., Schulthess T.C., MacLaren J.M. Spin-dependent tunneling conductance of Fe/MgO/Fe sandwiches. 2001, 63, 054416:1-054416:12.
11. Djayaprawira D.D., Tsunekawa K., Nagai M., Maehara H., Yamagata S., Watanabe N., Yuasa S., Suzuki Y., Ando K. 230% room-temperature magnetoresistance in CoFeB/MgO/CoFeB magnetic tunnel junctions. 2005, 86, 092502:1-092502:3.
12. Lee Y.M., Hayakawa J., Ikeda S., Matsukura F., Ohno H. Giant tunnel magnetoresistance and high annealing stability in CoFeB/MgO/CoFeB magnetic tunnel junctions with synthetic pinned layer. 2006, 89, 042506:1-042506:3.
13. Lee Y.M., Hayakawa J., Ikeda S., Matsukura F., Ohno H. Effect of electrode composition on the tunnel magnetoresistance of pseudo-spin-valve magnetic tunnel junction with a MgO tunnel barrier. 2007, 90, 212507:1-212507:3.
14. Isogami S., Tsunoda M., Komagaki K., Sunaga K., Uehara Y., Sato M., Miyajima T., Takahashi M. In situ heat treatment of ultrathin MgO layer for giant magnetoresistance ratio with low resistance area product in CoFeB/MgO/CoFeB magnetic tunnel junctions. 2008, 93, 192109:1-192109:3.
15. Lee D.H., Lim S.H. Increase of temperature due to Joule heating during current-induced magnetization switching of an MgO-based magnetic tunnel junction. 2008, 92, 233502:1-233502:3.
16. You C.Y., Ohkubo T., Takahashi Y.K., Hono K. Boron segregation in crystallized MgO/amorphous-Co[40]Fe[40]B[20] thin films. 2008, 104, 033517:1-033517:6.
17. Chen Y.T., Wu J.W. Effect of tunneling barrier as spacer on exchange coupling of CoFeB/AlOx/Co trilayer structures. 2011, 509, 9246-9248. doi:10.1016/j.jallcom.2011.07.007
18. Jiang L., Nowak E.R., Scott P.E., Johnson J., Slaughter J.M., Sun J.J., Dave R.W. Low-frequency magnetic and resistance noise in magnetic tunnel junctions. 2004, 69, 054407:1-054407:9.
19. Ingvarsson S., Xiao G., Parkin S.S.P., Gallagher W.J., Grinstein G., Koch R.H. Low-frequency magnetic noise in micron-scale magnetic tunnel junctions. 2000, 85, 3289-3292. doi:10.1103/PhysRevLett.85.3289
20. Nowak E.R., Merithew R.D., Weissman M.B., Bloom I., Parkin S.S.P. Noise properties of ferromagnetic tunnel junctions. 1998, 84, 6195-6201. doi:10.1063/1.368936
21. Nowak E.R., Weissman M.B., Parkin S.S.P. Electrical noise in hysteretic ferromagnet-insulator-ferromagnet tunnel junctions. 1999, 74, 600-602. doi:10.1063/1.123158
22. Nowak E.R., Spradling P., Weissman M.B., Parkin S.S.P. Electron tunneling and noise studies in ferromagnetic junctions. 2000, 377, 699-704.
23. Ingvarsson S., Xiao G., Wanner R.A., Trouilloud P., Lu Y., Gallagher W.J., Marley A.C., Roche K.P., Parkin S.S.P. Electronic noise in magnetic tunnel junctions. 1999, 85, 5270-5272. doi:10.1063/1.369851
24. Jonsson T., Nordblad P., Svedlindh P. Dynamic study of dipole-dipole interaction effects in a magnetic nanoparticle system. 1998, 57, 497-504. doi:10.1103/PhysRevB.57.497
25. Demir S., Zadrozny J.M., Nippe M., Long J.R. Exchange coupling and magnetic blocking in bipyrimidyl radical-bridged dilanthanide complexes. 2012, 134, 18546-18549. doi:10.1021/ja308945d. PMID 23110653.
26. Chen Y.T., Xie S.M., Jheng H.Y. The low-frequency alternative-current magnetic susceptibility and electrical properties of Si(100)/Fe[40]Pd[40]B[20](X Å)/ZnO(500 Å) and Si(100)/ZnO(500 Å)/Fe[40]Pd[40]B[20](Y Å) systems. 2013, 113, 17B303:1-17B303:3.
27. Yang S.Y., Chien J.J., Wang W.C., Yu C.Y., Hing N.S., Hong H.E., Hong C.Y., Yang H.C., Chang C.F., Lin H.Y. Magnetic nanoparticles for high-sensitivity detection on nucleic acids via superconducting-quantum-interference-device-based immunomagnetic reduction assay. 2011, 323, 681-685. doi:10.1016/j.jmmm.2010.10.011
28. Chen Y.T., Lin S.H., Lin Y.C. Effect of low-frequency alternative-current magnetic susceptibility in Ni[80]Fe[20] thin films. 2012, 2012, 186138:1-186138:6.
29. Chen Y.T., Chang Z.G. Low-frequency alternative-current magnetic susceptibility of amorphous and nanocrystalline Co[60]Fe[20]B[20] films. 2012, 324, 2224-2226. doi:10.1016/j.jmmm.2012.02.042
30. Feng G., Feng J.F., Coey J.M.D. The effect of magnetic annealing on barrier asymmetry in Co[40]Fe[40]B[20]/MgO magnetic tunnel junctions. 2010, 322, 1456-1459. doi:10.1016/j.jmmm.2009.04.059 | {"url":"http://www.mdpi.com/2079-4991/4/1/46/xml","timestamp":"2014-04-19T12:20:51Z","content_type":null,"content_length":"62181","record_id":"<urn:uuid:158bd55a-53ba-4ee8-88d2-bd794af0b547>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
numpy.polynomial.laguerre.lagval(x, c, tensor=True)
Evaluate a Laguerre series at points x.
If c is of length n + 1, this function returns the value p(x) = c[0]*L_0(x) + c[1]*L_1(x) + ... + c[n]*L_n(x), where L_k is the Laguerre polynomial of degree k.
The parameter x is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either x or its elements must support multiplication and addition
both with themselves and with the elements of c.
If c is a 1-D array, then p(x) will have the same shape as x. If c is multidimensional, then the shape of the result depends on the value of tensor. If tensor is true the shape will be c.shape
[1:] + x.shape. If tensor is false the shape will be c.shape[1:]. Note that scalars have shape (,).
Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern.
Parameters:
x : array_like, compatible object
    If x is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, x or its elements must support addition and
    multiplication with themselves and with the elements of c.
c : array_like
    Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If c is multidimensional the remaining indices enumerate multiple polynomials.
    In the two dimensional case the coefficients may be thought of as stored in the columns of c.
tensor : boolean, optional
    If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of x. Scalars have dimension 0 for this action. The result is that every
    column of coefficients in c is evaluated for every element of x. If False, x is broadcast over the columns of c for the evaluation. This keyword is useful when c is
    multidimensional. The default value is True.
    New in version 1.7.0.
Returns:
values : ndarray, algebra_like
    The shape of the return value is described above.
The evaluation uses Clenshaw recursion, aka synthetic division.
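As a cross-check that is not part of the official documentation, the same series value can be built up directly from the Laguerre three-term recurrence L_0(x) = 1, L_1(x) = 1 - x, (k+1)L_{k+1}(x) = (2k+1-x)L_k(x) - kL_{k-1}(x); lagval itself uses Clenshaw recursion, but both evaluate the same polynomial:

import numpy as np
from numpy.polynomial.laguerre import lagval

def lagval_naive(x, c):
    # Direct evaluation of sum_k c[k]*L_k(x) via the three-term recurrence.
    x = np.asarray(x, dtype=float)
    Lkm1, Lk = np.ones_like(x), 1.0 - x          # L_0 and L_1
    total = c[0] * Lkm1
    if len(c) > 1:
        total = total + c[1] * Lk
    for k in range(1, len(c) - 1):
        Lkm1, Lk = Lk, ((2 * k + 1 - x) * Lk - k * Lkm1) / (k + 1)
        total = total + c[k + 1] * Lk
    return total

coef = [1, 2, 3]
x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(lagval_naive(x, coef), lagval(x, coef)))   # expected: True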
>>> from numpy.polynomial.laguerre import lagval
>>> coef = [1,2,3]
>>> lagval(1, coef)
-0.5
>>> lagval([[1,2],[3,4]], coef)
array([[-0.5, -4. ],
[-4.5, -2. ]]) | {"url":"http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.polynomial.laguerre.lagval.html","timestamp":"2014-04-19T02:52:34Z","content_type":null,"content_length":"12404","record_id":"<urn:uuid:d1adb6b6-8f0e-4155-ae42-a055e1e7a319>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
hi nicboo
Welcome to the forum.
This is how you can devise your own formula for this:
Think of a very easy example. Say you want to pack 1000 envelopes and you know you can do 200 per hour.
That's two hundred in the first hour, another two hundred in the second hour, ............, another two hundred in the fifth.
Can you see this will take 5 hours?
So what is the connection between the three numbers ?
So now put words in place of the numbers:
This time will be in hours. Times by 60 to change to minutes. | {"url":"http://www.mathisfunforum.com/post.php?tid=18029&qid=228397","timestamp":"2014-04-19T07:02:27Z","content_type":null,"content_length":"18967","record_id":"<urn:uuid:aee97cca-be37-47ab-9565-f08ee58359b4>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: November 16, 2006
Tibi Constantinescu, in memoriam
Abstract. We follow a stream of the history of positive matrices and
positive functionals, as applied to algebraic sums of squares decomposi-
tions, with emphasis on the interaction between classical moment prob-
lems, function theory of one or several complex variables and modern
operator theory. The second part of the survey focuses on recently dis-
covered connections between real algebraic geometry and optimization
as well as polynomials in matrix variables and some control theory prob-
lems. These new applications have prompted a series of recent studies
devoted to the structure of positivity and convexity in a free -algebra,
the appropriate setting for analyzing inequalities on polynomials having
matrix variables. We sketch some of these developments, add to them
and comment on the rapidly growing literature.
1. Introduction
This is an essay, addressed to non-experts, on the structure of positive | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/962/2019771.html","timestamp":"2014-04-17T05:25:30Z","content_type":null,"content_length":"8169","record_id":"<urn:uuid:f2b3560f-237a-4d89-99a7-0dc1eafd2dde>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to solve this logarithmic inequality
December 28th 2011, 06:43 AM
How to solve this logarithmic inequality
Hi all,
How can I solve this?
I tried with log_a (b) - log_a (c) = log_a (b/c) but the solution is (13+5x)/(x^2+5x+4)>=0 ...how is this possible?
Thanks in advance
December 28th 2011, 06:58 AM
Re: How to solve this logarithmic inequality
Hi all,
How can I solve this?
I tried with log_a (b) - log_a (c) = log_a (b/c) but the solution is (13+5x)/(x^2+5x+4)>=0 ...how is this possible?
You want to solve $\frac{x^2-9}{x^2+5x+4}\le 1$.
December 28th 2011, 07:00 AM
Re: How to solve this logarithmic inequality
Okay, as you say, $log_{1/3}(x^2- 9)- log_{1/3}(x^2+ 5x+ 4)\ge 0$ is the same as $log_{1/3}\left(\frac{x^2- 9}{x^2+ 5x+ 4}\right)\ge 0$.
Further $log_a(x)\ge 0$, for any a< 1, if and only if $x\le 1$. That is, your inequality is the same as $\frac{x^2- 9}{x^2+ 5x+ 4}\le 1$.
Subtracting 1 from both sides, $\frac{x^2- 9}{x^2+ 5x+ 4}- 1= \frac{x^2- 9- (x^2+ 5x+ 4)}{x^2+ 5x+ 4}= \frac{-5x- 13}{x^2+ 5x+ 4}\le 0$.
Now, multiply both sides by -1. | {"url":"http://mathhelpforum.com/algebra/194734-how-solve-logarithmic-inequality-print.html","timestamp":"2014-04-19T06:02:39Z","content_type":null,"content_length":"6938","record_id":"<urn:uuid:af20f3f2-0dde-427b-9e25-327b7dd1ca3a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: True doppler calculation question
• Subject: Re: [amsat-bb] True doppler calculation question
• From: "Don Woodward" <dbwoodw@xxxxxxxxxx>
• Date: Mon, 3 May 2004 18:07:35 -0400
Get a copy of Plan13 and study it - most use it as the basis, I believe. It
was originally a BASIC program but somewhere there is a "C" version I
downloaded. Here's a description I found of the BASIC version...
PLAN-13 Satellite Position Calculation Program 1990 Oct No. 85 p.15-25
(See also 1983 Dec) - The basic routines needed for satellite
calculations. Includes fundamental equations, explanation and heavily
commented listing. Shows how to compute Orbit, MA, mode, range, azimuth,
elevation, squint, range rate, doppler shift, height, sub-satellite point,
footprint circle, day numbers, dates, Sun's position, solar azimuth, solar
elevation, illumination, eclipses, visibility etc etc. Routines widely
used by other authors. Booklet available, $10 US.
It is available on-line as the file:
Don Woodward
AMSAT 33535
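Not part of the original message, but for readers new to this: the first-order relation a Plan13-style tracker starts from is that the observed frequency follows the range rate between station and satellite, f_obs = f_nominal * (1 - range_rate / c). Programs can differ in where and how they apply it (uplink vs. downlink, at the satellite vs. at the station), which may explain differing corrections. A minimal sketch with made-up numbers:

C = 299_792_458.0   # speed of light, m/s

def doppler_shifted(f_nominal_hz, range_rate_m_s):
    # range_rate > 0 means the satellite is receding (frequency shifts down).
    return f_nominal_hz * (1.0 - range_rate_m_s / C)

# Example: 435.300 MHz downlink, satellite approaching at 6 km/s
f_obs = doppler_shifted(435_300_000.0, -6000.0)
print(f"observed: {f_obs / 1e6:.6f} MHz, shift: {f_obs - 435_300_000.0:+.0f} Hz")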
----- Original Message -----
From: "Cliff Buttschardt" <cbuttsch@kcbx.net>
To: <amsat-bb@AMSAT.Org>
Sent: Monday, May 03, 2004 17:44
Subject: [amsat-bb] True doppler calculation question
There is little doubt that doppler correction is going to be mandatory for
satellite operation. It appears that the doppler correction mathematics differs
from program to program. At the CUBESAT earth station we use an FT847 with an
identical interface driven by MacDoppler Pro and by Nova, and we get markedly differing
compensation! Simultaneously using an IC 910 with Instant tune and Fodtrak,
a third compensation results. How is doppler correction calculated "at the
satellite" in these programs, and are similar protocols used?
Cliff K7RR AMSAT LM1606
Sent via amsat-bb@amsat.org. Opinions expressed are those of the author.
Not an AMSAT member? Join now to support the amateur satellite program!
To unsubscribe, send "unsubscribe amsat-bb" to Majordomo@amsat.org
To unsubscribe, send "unsubscribe amsat-bb" to Majordomo@amsat.org | {"url":"http://www.amsat.org/amsat/archive/amsat-bb/200405/msg00031.html","timestamp":"2014-04-16T19:05:28Z","content_type":null,"content_length":"5311","record_id":"<urn:uuid:aa93f932-c3ed-4eed-8fe2-8496e20bc405>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
MySQL Lists: mysql: Re: I can't find the missing rows in a table--
List: General Discussion
From: Mathieu Bruneau Date: January 1 2006 6:19am
Subject: Re: I can't find the missing rows in a table--
mos wrote:
> This should be so simple, yet I've struck out.
> I have 2 tables, each with a common column called "pid" which is an
> integer and is a unique index. There are approx 18 million rows in each
> table, and one of the tables has approx 5000 fewer rows than the other
> table. So it should be a piece of cake finding the missing rows right?
> Well I did a
> select * from t1 left join t2 on t1.pid=t2.pid where t2.pid is null
> select * from t2 left join t1 on t2.pid=t1.pid where t1.pid is null
> and both queries return a null set. I then checked both tables and none
> of them have pid as null.
> I then counted the number of non-unique pid's and there aren't any (of
> course with a unique index I didn't think there would be)
> Ok, so there are no rows in t1 that aren't in t2, and vice versa.
> There are no duplicate sid values and no empty sid values.
> I physically counted the rows in each table and they are indeed off by
> around 5000 rows.
> I checked the tables for consistency and they passed.
> How can anyone explain this? How do I find the missing rows?
> TIA
> Mike
The two queries you pasted look correct and should output the missing rows
unless there is something really strange happening.
If you are using a version that supports subqueries you could try
select * from t1 where pid not in (select pid from t2);
Not sure which one is supposed to be faster, but it's worth a try!
Is it possible that you are hitting some kind of limit on the maximum join
size in your server? I'm not even sure a limit of that kind exists
(just putting my thought on the table).
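Not from the original thread, but if the SQL keeps coming back empty, a blunt cross-check is to pull only the pid columns into a client script and diff them as sets. The connection details and table names below are placeholders:

import MySQLdb   # assumes the MySQLdb / mysqlclient driver is installed

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="test")
cur = conn.cursor()

cur.execute("SELECT pid FROM t1")
pids_t1 = {row[0] for row in cur.fetchall()}
cur.execute("SELECT pid FROM t2")
pids_t2 = {row[0] for row in cur.fetchall()}

# ~18 million integers per set needs a few GB of RAM, but the answer is unambiguous.
print("in t1 but not t2:", sorted(pids_t1 - pids_t2)[:20])
print("in t2 but not t1:", sorted(pids_t2 - pids_t1)[:20])
print("counts:", len(pids_t1), len(pids_t2))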
Mathieu Bruneau
aka ROunofF
GPG keys available @ http://rounoff.darktech.org | {"url":"http://lists.mysql.com/mysql/193393","timestamp":"2014-04-19T17:36:41Z","content_type":null,"content_length":"7199","record_id":"<urn:uuid:873f910f-3c63-4be2-9d99-b78d5e802d5f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
Greatest Common Divisor.
November 22nd 2009, 06:47 PM #1
Sep 2008
Greatest Common Divisor.
Hey guys, I've got a simple question here asking to find the greatest common divisor of a and b in the form ma + nb.
The values given are a = -3719 and b = 8416. I am having a bit of trouble working these out when b is greater than a; I know that
gcd(-3719, 8416) = 1... I think?
Could someone help me here please? Thanks a lot.
you're right that gcd(3719, 8416) = 1.
now use the extended euclidean algorithm to get it on the form na + mb = 1.
Extended Euclidean algorithm - Wikipedia, the free encyclopedia
you're right that gcd(3719, 8416) = 1.
now use the extended euclidean algorithm to get it on the form na + mb = 1.
Extended Euclidean algorithm - Wikipedia, the free encyclopedia
Thanks, I should have included that I got that far as well. I understand the algorithm (ax + by = gcd(a,b)), but where are the x and y coming from?
Also I have a second one: a = 1575 and b = 231, for which I worked out the gcd to be 21, but how do I express that in the form ma + nb?
As a previous post suggested, the ma + nb form comes directly from the Euclidean algorithm, by repeated substitution. Please have a look at the suggested wiki article and post any specific question that
you might have.
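For reference (not part of the original posts), a small Python version of the extended Euclidean algorithm from the linked article; it returns gcd(a, b) together with the m and n asked about above:

def extended_gcd(a, b):
    # Returns (g, m, n) with m*a + n*b == g == gcd(a, b), for a, b >= 0.
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

print(extended_gcd(1575, 231))   # (21, 5, -34): 5*1575 + (-34)*231 = 21

g, m, n = extended_gcd(3719, 8416)
assert m * 3719 + n * 8416 == g == 1
# For a = -3719 simply negate that coefficient: (-m)*(-3719) + n*8416 = 1.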
Apr 2009 | {"url":"http://mathhelpforum.com/discrete-math/116208-greatest-common-divisor.html","timestamp":"2014-04-19T10:36:01Z","content_type":null,"content_length":"39251","record_id":"<urn:uuid:7e22ff76-fcca-4d36-83a8-c6d4a5e4d0f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
SailNet Community - View Single Post - Block and tackle breaking strength
I don't know how to consider most of the other factors, so I'm looking at an idealized picture for now. As long as my basic understanding (A x B) is correct I suppose next I'd consider the effects of
sheave diameter on the problem.
s/v Essorant
1972 Catalina 27 | {"url":"http://www.sailnet.com/forums/692356-post5.html","timestamp":"2014-04-19T14:01:31Z","content_type":null,"content_length":"33256","record_id":"<urn:uuid:02fbbe31-b7b2-44ed-84e8-9e1c0af3b1f4>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Efficient Construction of a Bounded Degree Spanner
with Low Weight
Sunil Arya
Michiel Smid
Let S be a set of n points in R^d
and let t > 1 be a real number. A t-spanner
for S is a graph having the points of S as its vertices such that for any pair p, q
of points there is a path between them of length at most t times the Euclidean
distance between p and q.
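For concreteness (this is not part of the summary above), here is a naive Python sketch of the classic greedy construction that results like this one make efficient: visit pairs in order of increasing distance and add an edge only when the current spanner distance exceeds t times the Euclidean distance. This simple version is far slower than the algorithm the abstract describes; it only illustrates the definition:

import math
import heapq

def greedy_spanner(points, t):
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    pairs = sorted((dist(i, j), i, j) for i in range(n) for j in range(i + 1, n))
    adj = {i: [] for i in range(n)}   # weighted adjacency list of the spanner so far
    edges = []

    def spanner_dist(src, dst):
        # Dijkstra over the edges added so far.
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, math.inf):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < best.get(v, math.inf):
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        return math.inf

    for d, i, j in pairs:
        if spanner_dist(i, j) > t * d:
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j))
    return edges

pts = [(0, 0), (1, 0), (2, 1), (0, 2), (3, 3)]
print(greedy_spanner(pts, t=1.5))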
An efficient implementation of a greedy algorithm is given that constructs a
t-spanner having bounded degree such that the total length of all its edges is
bounded by O(log n) times the length of a minimum spanning tree for S. The
algorithm has running time O(n log^d n).
Applying recent results of Das, Narasimhan and Salowe to this t-spanner
gives an O(n log^d n) time algorithm for constructing a t-spanner having bounded
degree and whose total edge length is proportional to the length of a minimum
spanning tree for S. Previously, no o(n2) time algorithms were known for con- | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/903/1544711.html","timestamp":"2014-04-16T11:06:44Z","content_type":null,"content_length":"8079","record_id":"<urn:uuid:f9cf0368-24ed-49e6-b1db-3db1fd0e96ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
All symbolic worksheet (formula derivation)
Oct 2, 2008 6:59 AM
I'm trying to use Mathcad to do an all-symbolic worksheet (formula derivation). I've used the symbolic palette before for one-off symbolic calculations, but I'm having trouble obtaining a symbolic
result that I can use in the next expression, then using that symbolic result in the next, and so on. I looked in the Mathcad Example Files section and didn't see anything related to an all-symbolic
solution. It doesn't seem like it would be that hard, but I'm missing something pretty basic. TIA. BTW, I brute-forced this solution in Excel with the Solver add-in, but I want to "show my work"
in Mathcad. The solution is: knowing sigma_eq and setting sigma_3=0, sigma_1 is maximum when sigma_2=(sigma_eq)*sqrt(1/3). At that point, sigma_1=(sigma_eq)*sqrt(4/3). (see attached Mathcad 13 | {"url":"http://communities.ptc.com/thread/7378","timestamp":"2014-04-19T19:59:49Z","content_type":null,"content_length":"105675","record_id":"<urn:uuid:9206c650-6c83-4096-b556-afa5b8e2dfdf>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
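The derivation described in the post above can also be reproduced symbolically outside Mathcad. A SymPy sketch (an illustration, not the poster's worksheet), assuming the von Mises relation sigma_eq^2 = sigma_1^2 - sigma_1*sigma_2 + sigma_2^2 with sigma_3 = 0; it confirms the quoted result:

import sympy as sp

s1, s2, s_eq = sp.symbols("sigma_1 sigma_2 sigma_eq", positive=True)

# Larger root of sigma_eq**2 = s1**2 - s1*s2 + s2**2 solved for s1
# (quadratic formula applied by hand), then maximised over s2.
s1_of_s2 = (s2 + sp.sqrt(4 * s_eq**2 - 3 * s2**2)) / 2
s2_star = sp.solve(sp.diff(s1_of_s2, s2), s2)[0]
s1_max = sp.simplify(s1_of_s2.subs(s2, s2_star))

print(sp.simplify(s2_star - s_eq * sp.sqrt(sp.Rational(1, 3))))   # 0, i.e. sigma_2 = sigma_eq*sqrt(1/3)
print(sp.simplify(s1_max - s_eq * sp.sqrt(sp.Rational(4, 3))))    # 0, i.e. sigma_1 = sigma_eq*sqrt(4/3)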
254 Threads found on edaboard.com: Peak Rms
why does the volmeter measure the rms value of voltage and not the peak or avg. value:?:??
Embedded Systems and Real-Time OS :: 09.05.2009 06:55 :: renzworldc :: Replies: 2 :: Views: 1119
Hi! For example, RFMD's RF5117 and Intersil's ISL3984 Does Somebody know how to design the peak detector(or power detector) for WLAN HBT power amplifier? Thanks!
Other Design :: 13.05.2002 11:16 :: cwcwecan :: Replies: 1 :: Views: 2263
hello pals, As long as I know, PMPO goes for peak Maximum Power Output and its relation to the rms value depends on several factors. As a rms value depends on the shape of the wave, so it does the
PMPO. Due to the complexity of the audio signal and overshoot response of the amplifier, it would be impossible to predict EXACTLY the (...)
Hobby Circuits and Small Projects Problems :: 27.01.2003 09:44 :: 2000 :: Replies: 25 :: Views: 99774
So 240*sqrt(2) is the dc equivalent. Isn't this the peak value of the ac as well?
Mathematics and Physics :: 22.03.2004 12:17 :: ee01akk :: Replies: 7 :: Views: 7783
These low cost meters measure either peak or average and do a guess for the rms based on the assumption that the input is a sine wave. I know of some types which do a series of random samples and
calculate the rms from that. There is an IC that actually heats a resistor with the input signal and then heats another resistor with DC and (...)
Hobby Circuits and Small Projects Problems :: 31.08.2004 13:25 :: flatulent :: Replies: 3 :: Views: 878
Why not rule of thumb: (Vpeak-to-peak)*(0.707) = Vrms (Vrms)/(0.707) = Vpeak-to-peak ?
Electronic Elementary Questions :: 15.10.2004 20:11 :: djalli :: Replies: 25 :: Views: 8896
Hi ! Of course you have to adequate the 50V peak voltage signal to the input voltage limit of ports of the PICmicros (if Vdd = 5V, then the upper limit is 5V for the A/D analog or digital inputs).
Use a voltage divider (1:10). Another restriction is the sample frequency if you are going to use the A/D converters of common PICs (16F and 18F se
Microcontrollers :: 16.03.2005 07:25 :: rkodaira :: Replies: 3 :: Views: 1300
rms is just the average voltage of an AC signal which is 1/√2 × peak~peak voltage of the signal, the word bandwidth can often be found to describe the frequency response of an amplifier, it is
usually measured from the 3dB roll off point of the highest frequency to the lowest frequency of an amplifier or vice versa. eg. if an amplifier (...)
Electronic Elementary Questions :: 28.04.2005 09:21 :: Learner :: Replies: 7 :: Views: 4832
How to calculate the peak voltage on telephone line when the telephone ring?
Professional Hardware and Electronics Design :: 08.06.2005 02:39 :: cqjhq :: Replies: 4 :: Views: 2210
The oscilloscope measures voltage. You need to know the load impedance to get power. For sine waves, rms is sqrt(2) times the peak (one sided peak not peak to peak) peak power has two meanings. One
is the peak voltage related to the load impedance. Another is for (...)
Electronic Elementary Questions :: 08.06.2005 22:06 :: flatulent :: Replies: 4 :: Views: 1324
Is sin, for eletric energy. Thaks ET if only one sinus carrier at AC-insignal, is easy (0.7071 * Up-p)/2 = voltage rms Up-p is measured voltage peak to peak. --- but if measure complex multi
frequency and noise signal is more difficult. remember rms means 'root mean square' and represent powers heat di
Microcontrollers :: 15.12.2005 18:39 :: xxargs :: Replies: 5 :: Views: 967
moreover .why it should be rms ....why should not as +340V,-340V while referring because, simply said, this value (rms) is sort of a mean value which is engaged in, especially, power calculations,
etc. e.g. for resistive loads power can be simply calculated multiplying current and voltage rms values: P
Electronic Elementary Questions :: 16.12.2005 08:59 :: Eric Best :: Replies: 23 :: Views: 18939
comparing signals means comparing power of the two signals i understand from your questions that you have 5V DC and 2.5V AC peak to peak DC power will equal AC power when AC value is given as rms
(root mean square )value and not peak value that means if in your system : V DC *I DC=V rms AC *I (...)
Electronic Elementary Questions :: 10.02.2006 13:31 :: hani51 :: Replies: 4 :: Views: 926
I want to calculate True rms(Root Means Square) of Square Wave in Assembly Language of PIC 16F72. For a pure square wave: 1. Square waves: Like sine waves, square waves are described in terms of
period, frequency and amplitude: peak amplitude, Vp , and
Microcontrollers :: 25.03.2006 14:03 :: silvio :: Replies: 3 :: Views: 2390
Output voltage has it's peak, rms or average value. No matter how it is measured the result depends on it's waveform or shape over time. If the voltage is pure DC it's peak, rms or average values are
equal. Voltage with added ripple has different values for peak, rms and average measurement (...)
Electronic Elementary Questions :: 15.05.2006 15:04 :: Borber :: Replies: 4 :: Views: 1732
AWG 16 (.050" in diameter), is used to make an inductor of 100nH, to be used in a lumped elements banpass filter (Fc=450 MHz, IL=.40 dB) Can it handle 75w Watts averafge/ 200W peak of RF power?
RF, Microwave, Antennas and Optics :: 28.06.2006 19:02 :: Krytar :: Replies: 1 :: Views: 1554
Hi, not the expert in this field but I would say that the only difference might be in the time constant used in video portion of detector circuit. For peak detector the constant should be very small
so that instantaneous value of the signal is detected, for rms detector it should be long so that integrator like performance is obtained. flyhig
RF, Microwave, Antennas and Optics :: 25.08.2006 07:53 :: flyhigh :: Replies: 9 :: Views: 5900
Hi, I dont quiet understand the definition of peak to average ratio. Suppose my DAC outputs a signal with 10dB peak to average. If I measure the signal on the oscilloscope, will I see the peak signal
3.16 times (10dB) higher then the rms signal? example: rms = 0.1V peak to (...)
RF, Microwave, Antennas and Optics :: 28.11.2006 11:45 :: Jim cage :: Replies: 1 :: Views: 813
liuyonggen_1, An rms voltage, when connected to a resistive load will produce the same amount of power as A DC voltage of the same value. . Consider a square wave with a 1V "peak" value, a 0V
"valley" value, and a 0.5 (50%) duty cycle. The average value is 0.5. The rms value is 1/SQRT(2). The power dissipated in a 1 Ohm load = X D
Analog Circuit Design :: 06.12.2006 13:07 :: Kral :: Replies: 9 :: Views: 1957
Hello, i was wondering what is the peak voltage , average voltage , rms voltage that a transistor 0.18um TSMC nominal Vth can tolerate on its gate before breaking down. as i have a pulse on the
supply at start up and i want to know if it is OK , i think it is an electrical rule but i don't have the Documentation around me right now. thnx
Analog IC Design and Layout :: 28.01.2007 08:03 :: safwatonline :: Replies: 0 :: Views: 542
for example you have an AC source its voltage is expressed by V(t)=A*cos(ωt) Volt at any time t the voltage instantenous vale is expressed by the equ. voltage peak vale is A peak 2peak value is 2*A
while rms value is A/sqrt(2) rms for any waveform is the root of the mean of the signal squared.
Electronic Elementary Questions :: 12.04.2007 13:01 :: quaternion :: Replies: 2 :: Views: 2919
For the second part rms = peak/sqrt(2) rms = peak-peak/(2√2) Hope you got it.
Analog Circuit Design :: 09.05.2007 01:55 :: brmadhukar :: Replies: 1 :: Views: 897
The answer to this issue lies in how they measure the output power of the device. PMPO stands for "peak Music Power Output" or "peak Momentary Power Output". Notice the word peak. They calculate PMPO
based on the maximum power output of the device under perfect conditions and 100% efficiency. These conditions are impossible to obtain, and (...)
Electronic Elementary Questions :: 31.05.2007 08:35 :: A.Anand Srinivasan :: Replies: 6 :: Views: 3495
smartshashi, The rms value is the voltage in a purely resistive load that results in the same power (Heating effect) as a DC voltage of the same value. For example, a sine wave with a peak voltage of
1.0V. produces the same power into a resistive load as a DC voltage of (SQRT(2))/2 volts. So the rms vlaue of this AC voltage is (...)
Electronic Elementary Questions :: 31.05.2007 10:34 :: Kral :: Replies: 9 :: Views: 7488
Could someone please find me a definition? Is it from time average point of view ? Or from rms/peak voltage point of view? Thanks.
RF, Microwave, Antennas and Optics :: 29.02.2008 03:14 :: passerby :: Replies: 1 :: Views: 791
Its a simple calculation. Take the peak to peak value and divide it by square root of 2. Thats all. AVR Rulz!!
PC Programming and Interfacing :: 31.05.2008 10:38 :: boseji :: Replies: 2 :: Views: 885
Hi, Normal multimeters assume the input to be a sine wave and calculates rms value based on the peak voltage. Such meters show the same rms value for a sine wave and a nonsine wave (say square wave),
if both have the same peak value. True rms meters use special ICs to calculate the true (...)
Electronic Elementary Questions :: 20.10.2008 13:57 :: laktronics :: Replies: 8 :: Views: 1827
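The distinction several of these threads draw between a true-RMS reading and a peak-based estimate is easy to demonstrate numerically (an editorial illustration, not one of the search results; the waveforms are placeholders):

import numpy as np

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
sine = 10.0 * np.sin(2 * np.pi * 50 * t)                 # 10 V peak sine
square = 10.0 * np.sign(np.sin(2 * np.pi * 50 * t))      # 10 V peak square wave

for name, x in (("sine", sine), ("square", square)):
    true_rms = np.sqrt(np.mean(x**2))                 # true RMS, any waveform
    peak_based = np.max(np.abs(x)) / np.sqrt(2)       # what a peak-responding meter reports
    print(f"{name:6s}  true RMS = {true_rms:5.2f} V   peak/sqrt(2) = {peak_based:5.2f} V")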
Hi all, Could you please explain some jitter defiinition definitions. Suppose that we have a sine wave: I found this defintions: Jitterpeak-to-peak = Jittermax - Jittermin Period Jitter: The maximum
change of the signal edge from the expected of ideal position in time Phase Jitter : The maximum
RF, Microwave, Antennas and Optics :: 06.04.2009 07:34 :: AdvaRes :: Replies: 0 :: Views: 999
Hello! In my SMPS I need to use a current transformer. My rms current is < 15A, but the peak value is 33A (worst case). My question is, would a 25A rms current transformer be OK? I would like to use
the CST306-2A transfomer: N: 1:100 Lm: 14mH Irms: 25A The problem is, the datasheet doesn't say anything about (...)
Power Electronics :: 07.05.2009 15:37 :: Mercury :: Replies: 4 :: Views: 4707
rms is all about comparing apples to apples, irrespective of what signal waveform is applied. The rms of a DC signal will be the DC signal, taking the rms of an AC, PWM, sawtooth or complex signal
allows us to compare their 'real value' when comparing between them. Most of the cheaper meters don't measure true rms. They (...)
Electronic Elementary Questions :: 09.05.2009 07:18 :: trekkytekky :: Replies: 1 :: Views: 2221
if you remove the rectifier+filter and add some DC bias for the ADC you can also measure the frequency, peak, rms and mean.
Analog Circuit Design :: 27.05.2009 07:30 :: xvibe :: Replies: 7 :: Views: 4029
Hi, Hope someone can help me with the definition of signal-to-noise ratio (SNR). On the last page of this document: it is said that SNR is the ratio of the peak-to-peak signal and the rms noise
However, wikipedia: says that it is the rat
Digital communication :: 24.02.2010 09:04 :: raebrm :: Replies: 2 :: Views: 2973
Unfortunately, you come out with the 10 ps at the end. Of course 10 ps it's still just a number without telling about peak-to-peak, rms phase or period jitter. Seriously, you'll have difficulties to
find a ready made 500 MHz crystal oscillator with a suficient jitter specification, you also must to decide about an I/O standard, ECL or (...)
Electronic Elementary Questions :: 14.04.2010 09:20 :: FvM :: Replies: 28 :: Views: 2668
Research shows that rms watts is defined as the continuous average power when the amplifier is driven with a sine wave while PMPO is the abbreviation for peak music power output which is recognised
as the total useless since there is no agreed method of arriving at a figure. I think by now he might have got the
Show DIY :: 25.01.2011 23:36 :: ckshivaram :: Replies: 2 :: Views: 22970
PMPO vs rms watts Those symbols are used for radio power. The real power is rms (Root Mean Square), while PMPO means music power (peak Music Power Output). How to compare those – you need to split
PMPO by 3 and then you will get rms (almost). For example if you have JVC with 19W rms / 50W PMPO. (...)
Miscellaneous Engineering :: 13.05.2010 06:42 :: gres :: Replies: 0 :: Views: 10795
Hello. just wanted to show the SMPS I have done. I dont know how to put it here. 1100W rms SMPS, 1700W peak For more informations. please see it at
Show DIY :: 30.05.2010 17:56 :: microsim :: Replies: 1 :: Views: 2980
if the current is sinusoidal then have a peak detector in th sy and calculate rms from peakvalue. srizbf 25thjune2010
Analog Circuit Design :: 25.06.2010 02:47 :: srizbf :: Replies: 4 :: Views: 726
For sinusoidal waveforms, the reference being 0.774Vrms = 0dBu: minus? dBu: -1dBu = 1.95Vpp -2dBu = 1.74Vpp -3dBu = 1.55Vpp -4dBu = 1.38Vpp -5dBu = 1.23Vpp -8dBu = 870mVpp -10dBu = 690mVpp -15dBu =
389mVpp -20dBu = 219mVpp -25dBu = 123mVpp -30dBu = 69mVpp -40dBu = 22mVpp -50dBu = 7mVpp Vpp is the rms voltage multiplied (...)
Electronic Elementary Questions :: 27.08.2010 23:32 :: Externet :: Replies: 1 :: Views: 987
BER = Bit Error Rate For a serial data stream this give the number of bits which are sampled wrong due to jitter. So it is a precentage of errors. e.g. 10e-12 means one out of 10e12 bits are wrong.
the rms (or sigma) value is given for random jitter (RJ) random jitter is gaussian and unbound (from the theory) For a gausian random jitter the
Analog IC Design and Layout :: 08.12.2010 02:18 :: qieda :: Replies: 1 :: Views: 791
The average voltage of a sine wave is equal 0V. The average voltage is simply mathematical average nothing more. rms represent DC equivalent for power calculation. For example if I have 230V DC and I
plug this voltage across light bulb. So to get the same amount of light with AC voltage you need connect 325Vp (peak) sin wave. Vrms =
Electronic Elementary Questions :: 24.12.2010 16:34 :: jony130 :: Replies: 8 :: Views: 24213
As FvM states, you need to convert it first. The rms conversion takes care of the polarity and wave shape. Just dividing it by 3 would still give you (assuming a sine wave) a signal of 14V peak to
peak. You may need to scale it before the rms conversion and you may need to amplify the result but there is no other reliable (...)
Microcontrollers :: 21.02.2011 06:23 :: betwixt :: Replies: 2 :: Views: 867
can someone help me with a circuit or a part number that converts 220v peak to peak from a transformer to 310v dc
Hobby Circuits and Small Projects Problems :: 26.04.2011 11:57 :: mazhara :: Replies: 5 :: Views: 2438
I want to measure rms voltage between points a (0.5) V and I rewrite a program in C to achieve my purpose may i know it does not work with me each time I simulate, I vary the amplitude of input
signal to the peak I receive on an LCD value false efficient value isis on my diagram is more than just a pic 16F877 connected with an LCD and a sine signa
Microcontrollers :: 17.05.2011 15:16 :: MARWEN007 :: Replies: 0 :: Views: 1025
I have a bit difficulty in understanding PMPO - the two different definitions about it and its purpose, and a question: Is it anything related to power consumption of the audio appliance? Anywhere i
can understand it clearly, pls elaborate your explanation. PMPO = peak Music Power Output/peak momentary power output??
Electronic Elementary Questions :: 10.06.2011 07:26 :: ElectroEnthusiast :: Replies: 6 :: Views: 2275
1) dB = 20*log_10(Vpp/0.707) 2) dB = 20*LOG10(Volts_peak_to_peak/SQRT(0.008*Z)) 3) db = (20log10) (rms ) & rms= 0.707*P-P 4) dB = 20 log10 (Voltage 1 / Voltage 2) 1) Maybe there is a mistype? If you
substitute Vpp/0.707 by Vpk*0.707, it is the expression of a voltage in dBV (dB realtive to 1V) assuming it is sinusoi
Mathematics and Physics :: 29.12.2011 14:43 :: zorro :: Replies: 8 :: Views: 7734
Hello guys I'm doing some signal processing/data analysis. Hope to get some advice on something. I collected some vibration data just the other day while metal cutting and I'm wondering what is the
correct (or better) way to do this. These are what the signals look like. Sampling rate of the data is 10000 hz.
Digital Signal Processing :: 19.05.2012 04:37 :: Luppy :: Replies: 4 :: Views: 456
Hello, Do you know of a mathematical expression that give the rms value of this Discontinuous current waveform? (i.e. supposing I know the peak value, the rise time, fall time and dead time)
Electronic Elementary Questions :: 18.08.2012 11:16 :: treez :: Replies: 6 :: Views: 502
well, I will use this AD736 for the 1st time like you! but I need to measure grid voltage. in the datasheet, there's some circuits. But, I see that the recommended input is 1v rms (you can consider
it 1v peak) and the max is 5v (sometimes = supply voltage). so, what is the voltage I need to feed into the IC?
Analog Circuit Design :: 05.03.2013 05:25 :: Prince Vegeta :: Replies: 1 :: Views: 256
Hi... I made a circuit that rectify an AC signal into DC voltage equals the peak of that AC signal (typical usage). I need to calculate the rms voltage of that AC sinewave (only sinewave, for ease)
signal and the specs are like the following: - 4 diodes for full-bridge rectifier. - 100uF/400v filtering capacitor. - a voltage divider with 11
Microcontrollers :: 29.03.2013 08:56 :: Prince Vegeta :: Replies: 4 :: Views: 281
rms() function gives Vpeak/CF, where CF means "Crest Factor". If signals are pure one-tone sinusoidal, CF is sqrt(2). See Note : rms() is for results of transient analysis. If your signals are
results of ac analysis, you can't use rms(). In this case, use Vout/sqrt(2) simply without invoking rms(Vou
RF, Microwave, Antennas and Optics :: 14.04.2013 04:28 :: pancho_hideboo :: Replies: 1 :: Views: 265 | {"url":"http://search.edaboard.com/peak-rms.html","timestamp":"2014-04-19T17:01:18Z","content_type":null,"content_length":"46283","record_id":"<urn:uuid:01371345-7264-41f1-b192-2fc1597c12fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
How close?
January 21st 2010, 11:57 PM #1
Sep 2009
How close?
Can you please help me figure this out?
Problem: How close to -3 do we have to take x so that
$1/(x+3)^4 > 10,000$
$(x+3)^4$ is non-negative, so we can just move some stuff around:
$\frac{1}{(x+3)^4} > 10,000$
$\implies~ \frac{1}{10,000} > (x+3)^4$
$\implies~ \sqrt[4]{\left(\frac{1}{10,000}\right)} > |x+3|$
$\implies~ \frac{1}{10} > |x+3|$
Great thanks!
I just have a couple of rookie questions.
If it was negative, you couldn't move stuff around?
and why did the (x+3) become an absolute value?
Say for example we considered something like $\frac{1}{(x+3)^3} > 1000$ . It's possible for $(x+3)^3$ to be a negative number, but if it was negative, the value on the left-hand side would have
been negative, and a negative number can't be greater than 1000. So we just need to make a side note that we are only considering values of $x$ such that $x>-3$. From then we can continue the
As for your second question: Any time you take an even-powered root of a value, you must take its absolute value. In other words:
$a^2 = 25 ~\implies~ \sqrt{a^2}=\sqrt{25} ~\implies~ |a|=5 ~\implies~ a=\pm5$
Notice that had we not done so, we would not have found the solution $a=-5$, but clearly $(-5)^2=25$.
You do not need to do this when taking odd-powered roots.
Jan 2010 | {"url":"http://mathhelpforum.com/calculus/124897-how-close.html","timestamp":"2014-04-18T16:21:57Z","content_type":null,"content_length":"40925","record_id":"<urn:uuid:d945fcf8-9205-4a86-b59e-e79c291e3d08>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
Part 1: Use complete sentences to describe a real-world scenario that could be represented by the inequality 4x + 3y >= 36. Part 2: Choose one ordered pair that is a solution to the given inequality
and explain what that ordered pair means in the context of your real-world scenario.
You can start by associating x and y with different items, and then envision a solution. I chose for this example that x would represent the price of one apple and y the price of one orange, with 36
the constant amount of money he has. The ordered pair acts as a solution, and you can substitute it to double-check its validity. Example: James wants to buy 4 apples and 3 oranges,
but only has 36 dollars. (5,3): the apples will cost 5 dollars each and the oranges will cost 3 dollars each, so he can easily buy both with 36 dollars. Hope this helps.
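A quick check (not part of the original answer) of whether a candidate ordered pair actually satisfies 4x + 3y >= 36, whatever real-world story is wrapped around it:

def satisfies(x, y):
    return 4 * x + 3 * y >= 36

print(satisfies(6, 4))   # True:  4*6 + 3*4 = 36
print(satisfies(9, 0))   # True:  4*9 + 3*0 = 36
print(satisfies(5, 3))   # False: 4*5 + 3*3 = 29, so (5, 3) does not satisfy the inequality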
@Homeworksucks thank you! | {"url":"http://openstudy.com/updates/50e63308e4b058681f3f33f5","timestamp":"2014-04-17T09:48:59Z","content_type":null,"content_length":"30668","record_id":"<urn:uuid:46db2237-079d-4f43-8c6f-86c4cee21ad9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Java 7: Fork/Join Framework Example
The Fork/Join Framework in Java 7 is designed for work that can be broken down into smaller tasks and the results of those tasks combined to produce the final result. In general, classes that use the
Fork/Join Framework follow the following simple algorithm:
// pseudocode
Result solve(Problem problem) {
    if (problem.size < SEQUENTIAL_THRESHOLD)
        return solveSequentially(problem);
    else {
        Result left, right;
        INVOKE-IN-PARALLEL {
            left = solve(extractLeftHalf(problem));
            right = solve(extractRightHalf(problem));
        }
        return combine(left, right);
    }
}
In order to demonstrate this, I have created an example to find the maximum number from a large array using fork/join:
import java.util.Random;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
public class MaximumFinder extends RecursiveTask<Integer> {

    private static final int SEQUENTIAL_THRESHOLD = 5;

    private final int[] data;
    private final int start;
    private final int end;

    public MaximumFinder(int[] data, int start, int end) {
        this.data = data;
        this.start = start;
        this.end = end;
    }

    public MaximumFinder(int[] data) {
        this(data, 0, data.length);
    }

    @Override
    protected Integer compute() {
        final int length = end - start;
        if (length < SEQUENTIAL_THRESHOLD) {
            return computeDirectly();
        }
        // split the range in two, fork the left half and compute the right
        // half in the current thread, then join the forked task
        final int split = length / 2;
        final MaximumFinder left = new MaximumFinder(data, start, start + split);
        left.fork();
        final MaximumFinder right = new MaximumFinder(data, start + split, end);
        return Math.max(right.compute(), left.join());
    }

    private Integer computeDirectly() {
        System.out.println(Thread.currentThread() + " computing: " + start
                + " to " + end);
        int max = Integer.MIN_VALUE;
        for (int i = start; i < end; i++) {
            if (data[i] > max) {
                max = data[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // create a random data set
        final int[] data = new int[1000];
        final Random random = new Random();
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt(100);
        }

        // submit the task to the pool and print the result
        final ForkJoinPool pool = new ForkJoinPool(4);
        final MaximumFinder finder = new MaximumFinder(data);
        System.out.println(pool.invoke(finder));
    }
}
The MaximumFinder class is a RecursiveTask which is responsible for finding the maximum number from an array. If the size of the array is less than a threshold (5) then find the maximum directly, by
iterating over the array. Otherwise, split the array into two halves, recurse on each half and wait for them to complete (join). Once we have the result of each half, we can find the maximum of the
two and return it. | {"url":"http://fahdshariff.blogspot.gr/2012/08/java-7-forkjoin-framework-example.html","timestamp":"2014-04-21T04:33:17Z","content_type":null,"content_length":"92615","record_id":"<urn:uuid:f07e88c1-2383-4248-95e3-1a80550e28fc>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parallel iterated bucket sort
- the 1994 ACM Symp. on Parallel Algorithms and Architectures , 1994
"... The queue-read, queue-write (qrqw) parallel random access machine (pram) model permits concurrent reading and writing to shared memory locations, but at a cost proportional to the number of
readers/writers to any one memory location in a given step. The qrqw pram model reflects the contention prope ..."
Cited by 30 (11 self)
Add to MetaCart
The queue-read, queue-write (qrqw) parallel random access machine (pram) model permits concurrent reading and writing to shared memory locations, but at a cost proportional to the number of readers/
writers to any one memory location in a given step. The qrqw pram model reflects the contention properties of most commercially available parallel machines more accurately than either the
well-studied crcw pram or erew pram models, and can be efficiently emulated with only logarithmic slowdown on hypercubetype non-combining networks. This paper describes fast, low-contention,
work-optimal, randomized qrqw pram algorithms for the fundamental problems of load balancing, multiple compaction, generating a random permutation, parallel hashing, and distributive sorting. These
logarithmic or sublogarithmic time algorithms considerably improve upon the best known erew pram algorithms for these problems, while avoiding the high-contention steps typical of crcw pram
algorithms. An illustrative expe...
- Proc. of the 2nd SODA , 1991
"... It has been shown previously that sorting n items into n locations with a polynomial number of processors requires Ω(log n/log log n) time. We sidestep this lower bound with the idea of Padded
Sorting, or sorting n items into n + o(n) locations. Since many problems do not rely on the exact rank of s ..."
Cited by 20 (3 self)
It has been shown previously that sorting n items into n locations with a polynomial number of processors requires Ω(log n/log log n) time. We sidestep this lower bound with the idea of Padded
Sorting, or sorting n items into n + o(n) locations. Since many problems do not rely on the exact rank of sorted items, a Padded Sort is often just as useful as an unpadded sort. Our algorithm for
Padded Sort runs on the Tolerant CRCW PRAM and takes Θ(log log n/log log log n) expected time using n log log log n/log log n processors, assuming the items are taken from a uniform distribution.
Using similar techniques we solve some computational geometry problems, including Voronoi Diagram, with the same processor and time bounds, assuming points are taken from a uniform distribution in
the unit square. Further, we present an Arbitrary CRCW PRAM algorithm to solve the Closest Pair problem in constant expected time with n processors regardless of the distribution of points. All of
these algorithms achieve linear speedup in expected time over their optimal serial counterparts. 1 Research done while at the University of Michigan and supported by an AT&T Fellowship.
- Acta Informatica , 1992
"... Abstract. We present an optimal algorithm for sorting n integers in the range [1,n c] (for any constant c) fortheEREW PRAM model where the word length is n ɛ, for any ɛ>0.Using this algorithm,
the best known upper bound for integer sorting on the (O(log n) word length) EREW PRAM model is improved. I ..."
Cited by 13 (5 self)
Abstract. We present an optimal algorithm for sorting n integers in the range [1, n^c] (for any constant c) for the EREW PRAM model where the word length is n^ε, for any ε > 0. Using this algorithm, the
best known upper bound for integer sorting on the (O(log n) word length) EREW PRAM model is improved. In addition, a novel parallel range reduction algorithm which results in a near optimal
randomized integer sorting algorithm is presented. For the case when the keys are uniformly distributed integers in an arbitrary range, we give an algorithm whose expected running time is optimal.
- in Proceedings of the European Conference on Parallel Processing, EUROPAR ’96 , 1996
"... This paper presents algorithms and lower bounds for several fundamental problems on the Exclusive Read, Concurrent Write Parallel Random Access Machine (ERCW PRAM) and some results for unbounded
fan-in, bounded fan-out (or `BFO') circuits. Our results for these two models are of importance because o ..."
Cited by 4 (2 self)
This paper presents algorithms and lower bounds for several fundamental problems on the Exclusive Read, Concurrent Write Parallel Random Access Machine (ERCW PRAM) and some results for unbounded
fan-in, bounded fan-out (or `BFO') circuits. Our results for these two models are of importance because of the close relationship of the ERCW model to the OCPC model, a model of parallel computing
based on dynamically reconfigurable optical networks, and of BFO circuits to the OCPC model with limited dynamic reconfiguration ability. Topics: Parallel Algorithms, Theory of Parallel and
Distributed Computing. This research was supported by Texas Advanced Research Projects Grant 003658480. (philmac@cs.utexas.edu) y This research was supported in part by Texas Advanced Research
Projects Grants 003658480 and 003658386, and NSF Grant CCR 90-23059. (vlr@cs.utexas.edu) 1 Introduction In this paper we develop algorithms and lower bounds for fundamental problems on the Exclusive
Read Concurrent Wri... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1637540","timestamp":"2014-04-17T06:03:23Z","content_type":null,"content_length":"21753","record_id":"<urn:uuid:4ebd7f01-b112-43de-8c10-648fce5dd37a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shankar, PN (2007) Three-dimensional flow in a cylinder with a stress-free sidewall. Fluid dynamics research, 39 (7). pp. 569-589.
Full text not available from this repository.
Stokes flow in a cylindrical column of fluid, with a stress-free cylindrical sidewall, is considered. The motion is assumed to be generated by the linear, uniform motion of either or both of the flat
endwalls. The field is obtained by a vector eigenfunction expansion procedure. If the field is assumed to have a θ- and z-dependence of the type exp(iθ + kz), k has to satisfy the equation
$(1 - k^2)J_1^3 + \hat{J}_1[2J_1^2 - \hat{J}_1\{2\hat{J}_1 + (1 + k^2)J_1\}] = 0$, where $\hat{J}_1 = kJ_0 - J_1$ and the argument of each Bessel function is k. This equation admits, unlike in the plane case and
with important consequences, not just a real sequence $\{\tilde{\lambda}_n\}$ of eigenvalues but also a complex one $\{\tilde{\mu}_n\}$. Using a least squares procedure to satisfy the boundary conditions on the top
and bottom walls, the three-dimensional velocity field in the column is determined for various values of column height h and wall speed ratio S. Detailed computations show that there are strong
effects of both the stress-free boundary and three-dimensionality. The principal effect of the former is to permit motion on that boundary leading to large azimuthal motions and of the latter, unlike
in the plane flow, to multiple primary eddies when h is sufficiently large. A number of new eddy structures are also found, which demonstrate that three-dimensionality often leads to the elimination
of compactness found in plane flows. It is finally shown that the flow fields exhibit interesting bifurcations as S and h are varied.
Item Type: Journal Article
Uncontrolled Keywords: Three-dimensional stokes flow;Stress-free boundaries;Eddy structure;Meniscus roll coating;Vector eigenfunction expansions;Bifurcations
Subjects: AERONAUTICS > Aeronautics (General)
Division/Department: Computational and Theoretical Fluid Dynamics Division
Depositing User: Ms. Alphones Mary
Date Deposited: 18 May 2009
Last Modified: 24 May 2010 09:56
URI: http://nal-ir.nal.res.in/id/eprint/5105
Actions (login required) | {"url":"http://nal-ir.nal.res.in/5105/","timestamp":"2014-04-16T22:43:03Z","content_type":null,"content_length":"16613","record_id":"<urn:uuid:127e0fb0-d1a2-4620-89dd-aefc9fda204d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
Euclid's Algorithm I
Copyright © University of Cambridge. All rights reserved.
How can we solve equations like $13x+29y=42$ or $2x+4y=13$ with the solutions $x$ and $y$ being integers? Equations with integer solutions are called Diophantine equations after Diophantus who lived
about 250 AD but the methods described here go back to Euclid (about 300 BC) and earlier. When people hear the name Euclid they think of geometry but the algorithm described here appeared as
Proposition 2 in Euclid's Book 7 on Number Theory.
First we notice that $13x+29y=42$ has many solutions, for example $x=1$, $y=1$ and $x=30$, $y=-12$. Can you find others (it has infinitely many solutions)? We also notice that $2x+4y=13$ has no
solutions because $2x+4y$ must be even and $13$ is odd. Can you find another equation that has no solutions?
If we can solve $3x+5y=1$ then we can also solve $3x+5y=456$. For example, $x=2$ and $y=-1$ is a solution of the first equation, so that $x=2\times 456$ and $y=-1\times 456$ is a solution of the
second equation. The same argument works if we replace $456$ by any other number, so that we only have to consider equations with $1$ on the right hand side, for example $P x+Q y=1$. However if $P$
and $Q$ have a common factor $S$ then $P x+Q y$ must be a multiple of $S$ so we cannot have a solution of $P x+Q y=1$ unless $S=1$. This means that we should start by considering equations $P x+Q y=
1$ where $P$ and $Q$ have no common factor.
Let us consider the example $83x+19y=1$. There is a standard method, called Euclid's Algorithm, for solving such equations. It involves taking the pair of numbers $P=83$ and $Q=19$ and replacing them
successively by other pairs $(P_k,Q_k)$. We illustrate this by representing each pair of integers $(P_k,Q_k)$ by a rectangle with sides of length $P_k$ and $Q_k$.
Draw an $83$ by $19$ rectangle and mark off $4$ squares of side $19$, leaving a $19$ by $7$ rectangle.
This diagram represents the fact that
$83=4\times 19+7$
In a few steps we shall split this rectangle into 'compartments' to illustrate the whole procedure for solving this equation. (You may like to try the java applet Solving with Euclid's Algorithm
which draws the rectangles and carries out all the steps automatically to solve equations of the form $P x+Q y=1$). We repeat this process using the $19$ by $7$ rectangle to obtain two squares of
side $7$, and a $7$ by $5$ rectangle. Next, the $7$ by $5$ rectangle splits into a square of side $5$, and a $5$ by $2$ rectangle.
The $5$ by $2$ rectangle splits into two squares of side $2$, and a $2$ by $1$ rectangle. The $2$ by $1$ rectangle splits into two squares of side $1$ with nothing left over and the process finishes
here as there is no residual rectangle. These diagrams illustrate the following equations:
\begin{eqnarray} 83 & = & 4\times 19 & + & 7\\ 19 & = & 2\times 7 & + & 5\\ 7 & = & 1\times 5 & + & 2\\ 5 & = & 2\times 2 & + & 1\\ 2 & = & 2\times 1 & + & 0 \end{eqnarray}
To find the solution we use the last non-zero remainder, namely $1=5-2[2]$
and successively substitute the remainders from the other equations until we get back to the first one giving a combination of the two original values $P=83$ and $Q=19$. The method in this example
has the following steps with the remainders given in square brackets.
\begin{eqnarray} 1 & = & [5]-2[2]\\ & = & [5]-2([7]-[5])\\ & = & -2[7]+3[5]\\ & = & -2[7]+3(19-2[7])\\ & = & (3\times 19)-8[7]\\ & = & (3\times 19)-8(83-(4\times 19))\\ & = & (-8\times 83)+(35\times
19). \end{eqnarray}
Thus a solution of $83x+19y=1$ is $x=-8$ and $y=35$.
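The same computation is easy to automate; the article's applet does the equivalent. Here is a short Python sketch which returns integers $x$, $y$ with $Px+Qy=\gcd(P,Q)$:

def extended_euclid(P, Q):
    # repeated division with remainder, then back-substitution
    if Q == 0:
        return P, 1, 0
    g, x, y = extended_euclid(Q, P % Q)
    # g = Q*x + (P - (P//Q)*Q)*y, rearranged in terms of P and Q
    return g, y, x - (P // Q) * y

g, x, y = extended_euclid(83, 19)
print(g, x, y)          # 1 -8 35, the solution found above
print(83 * x + 19 * y)  # 1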
Can you find a solution of $83x+19y=7$?
Can you now find a solution of $827x+191y=2$? You should first solve the equation $827x+191y=1$ (using the computer if you wish).
For the next article in the series, click here . | {"url":"http://nrich.maths.org/1357/index?nomenu=1","timestamp":"2014-04-20T21:22:29Z","content_type":null,"content_length":"7090","record_id":"<urn:uuid:5a1cb2b2-4f04-44cb-880e-4922596be08c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: AFT book on mathematics exams (fwd)
Replies: 0
AFT book on mathematics exams (fwd)
Posted: Jun 9, 1997 1:03 AM
From Richard Askey:
The AFT book "What Students Abroad Are Expected To Know About
Mathematics: Exams from France, Germany and Japan with a comparative
look at the United States" has arrived. It containts the
1993 Brevet de College Exam in Mathematics, and the 1992 Baccalaure'at
Exam in Mathematics from France. The Brevet Diploma is earned by
66% of the age cohort at the end of ninth grade. The Baccalaure'at
is earned by 36% of the age cohort and is awarded at the end of
12th grade. Another 25% earn a vocational baccalaure'at diploma.
For Germany, the Realschule Exam from Baden-Wu"rttemberg is given.
This is given at the end of 10th grade. 43% earn this certificate,
and 26% the harder Abitur, which is taken at 18. One of these exams
is also printed here. From Japan, the high school entrance exam
for Tokyo are printed, as is an entrance exam for Tokyo Univ.
They do not say if this is for science students or for humanities
students, but it seems likely it was the one for science students.
See the MAA publication for both of these Tokyo Univ. exams from
1991. The one here is 1992, and both years were used by John Dossey in
his analysis of college entrance exams published in "Examining
the Examinations", edited by Edward Britton and Senta Raizen,
Kluwer, 1996. Now that these exams are both published, you can
check for yourself if you feel that 3/8th of these Japanese
exams have relatively equal emphasis on problem solving and routine
procedures as Dossey claimed. For the US, there are selected problems
from the SAT I: Reasoning Test, the SAT II Mathematics Level IIC test,
and some problems from the BC Advanced Placement Calculus exam.
There is an errata page inserted, which corrects a few errors.
Unfortunately, there are a number of other errors which were not
caught. In the Japanese problem which the Christian Science
Monitor published, and said they were still working on the solution,
both the statement of the problem and the solution contain errors.
One can see why the writer of the Monitor article was having
trouble. There is also a very nice observation which is given
in the solution without any explanation why it is true. The
problem and solution were first printed in Japan and probably
translated by someone who did not know much mathematics. A hint
about this nice observation should have been provided for
American readers, whose knowledge of geometry is in general not
as good as is the case for students who want to study science
and mathematics at Tokyo Univ.
Outside of my comment about John Dossey's analysis of these
exams, which includes some of the others as well, I will not
comment on the exams now, to give others a chance to look at
them themselves without my comments to color their thoughts.
I strongly recommend looking at this publication, which only
costs $10 and is available from
World Class Standards Series
AFT Order Department
555 New Jersey Avenue NW
Washington, DC 20001
Send a check made out to American Federation of Teachers. The $10
includes postage and handling. AFT does not hesitate to make comments
on what these exams seem to show.
Dick Askey | {"url":"http://mathforum.org/kb/thread.jspa?threadID=160266","timestamp":"2014-04-17T10:05:27Z","content_type":null,"content_length":"16763","record_id":"<urn:uuid:ba93e96b-33f6-4c45-a263-a0b3aec94d2c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use synthetic division to find P(3) for P(x) = x^4 – 6x^3 – 4x^2 – 6x – 2.
\[P(x)=x^4-6x^3-4x^2-6x-2\] The polynomial remainder theorem says that if a polynomial \(P(x)\) is divided by \((x-r)\), then the remainder is a constant given by \(P(r)\).
so in this case you have \(r=3\), and you want to divide \(P(x)\) by \((x-3)\)
now bear with me as I have no idea how to typeset long division
Alright and I have no intention on how to learn! I'll do it by hand real quick
ya i dont know how to do it either:)
haha alright I didn't want to waste time by writing out each step by hand so I just filmed it. I'll upload the photo, but if it confuses you just ask me for the video! (because it is more of a
process that you have to see been done to get). I made a mistake in the last step of the video though. It's fixed in the picture. Just let me upload them real quick
k thanks:)
So basically what you do is multiply 3 by the row 3 column 1, then put that answer in row 2 column 2. Then in row 3 column 2 you put the sum of row 1 column 2 and row 2 column 2. Then repeat the same multiply-and-add for the remaining columns.
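Put differently, the bottom row of the table comes from a repeated multiply-and-add (Horner's scheme). A tiny Python version of the same table, using the coefficients 1, -6, -4, -6, -2 from the question, looks like this:

coeffs = [1, -6, -4, -6, -2]   # x^4 - 6x^3 - 4x^2 - 6x - 2
r = 3
row, value = [], 0
for c in coeffs:
    value = value * r + c      # multiply by 3, then add the next coefficient
    row.append(value)
print(row)       # [1, -3, -13, -45, -137], the bottom row of the table
print(row[-1])   # P(3) = -137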
oooh thats pretty simple thanx for explaining it:)
no worries. And video embedding would actually be a really sweet Idea now that I think of it. I wonder if it already works.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/509879d0e4b085b3a90d67b9","timestamp":"2014-04-21T15:58:39Z","content_type":null,"content_length":"50538","record_id":"<urn:uuid:5cc54e91-c4e1-4804-9586-9ba8702d2f22>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantifying Uncertainty in Computer Predictions | netl.doe.gov
Quantifying Uncertainty in Computer Model Predictions
The U.S. Department of Energy has great interest in technologies that will lead to reducing the CO[2] emissions of fossil-fuel-burning power plants. Advanced energy technologies such as Integrated
Gasification Combined Cycle (IGCC) and Carbon Capture and Storage (CCS) can potentially lead to the clean and efficient use of fossil fuels to power our nation. The development of new energy
technologies, however, takes a long time, as the technologies need to be tested at multiple scales, progressing from lab scale to pilot scale to demonstration scale before widespread deployment. In
addition to developing new energy technologies, NETL’s research is working to reduce the cost and time of technology development.
Advanced modeling and simulation capabilities can significantly reduce the time and cost of the development and deployment of energy technologies. In particular, modeling and simulation can be used
to increase the confidence as technologies are scaled up, such as, for example, when designing a 285 MWe gasifier based on data generated from a 13 MWth pilot-scale gasifier. This allows the rapid
scale-up of technologies, reducing or even avoiding costly intermediate-scale testing. New designs can be tested with the help of simulations to ensure reliable operation under a variety of operating
conditions. However, before simulation results can be used with confidence for scale-up, the reliability of the predictions must be established. Therefore, in 2011, NETL initiated work on the
verification, validation and uncertainty quantification of multiphase computational fluid dynamics (CFD) models that underpin the simulation of several advanced energy technologies, adapting methods
developed for other applications such as the stewardship of the nuclear stockpile. This involves exploring “how to make models as useful as possible by quantifying how wrong they are” as stated in a
National Academies report, the basic idea being quantifying the uncertainty in the predictions.
Multiphase CFD models, for example, have the ability to predict the performance of scaled-up fluidized bed reactors, but they must be validated with data from small, pilot-scale units. The validation
studies usually report the ability of the model to agree with measured values in qualitative terms (e.g. , “good” agreement). Because various sources of uncertainty unavoidably get introduced by the
time a numerical solution is computed, even though multiphase CFD models are based on a set of deterministic mathematical equations, the ideal of a “perfect” agreement between model and experiment is
practically unachievable.
NETL’s objective is to demonstrate how a comprehensive uncertainty quantification method can be adopted for describing the validity of multiphase CFD models. A gasifier simulation, for example, uses
a set of input parameters taken from the design (e.g. , geometry specifications, gas/solid flow rates, and composition) and laboratory measurements (e.g. , chemical reaction rates) and predicts the
quantity of interest (e.g. , carbon conversion, pressure drop). There exist a number of challenges when applying uncertainty quantification techniques. In multiphase flows, for example, many
uncertain parameters exist. Another challenge may be the computational cost, requiring a compromise in terms of the grid resolution used. Since the governing physics in multiphase flows is more
complex than in single phase flow simulations, the computational cost increase plays a key role in the determination of adequate sampling technique and number of samples.
Using a framework established by earlier researchers in this field, NETL researchers apply the following steps to describe the validity of the models they use and the differences observed in
predicted vs. observed phenomena: (1) identify and characterize the sources of uncertainty as being uncertainty due to inherent variation in a quantity (aleatory) or uncertainty due to information
missing on the part of modelers or experimenters (epistemic); (2) understand the propagation of uncertainties using quasi-Monte Carlo, Latin hypercube, orthogonal arrays, etc. calculations; (3)
estimate uncertainties due to numerical approximations (e.g. , discretization errors); (4) estimate uncertainty in experimental data; and (5) estimate model form uncertainty.
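As a rough illustration of step (2), uncertainty propagation by plain Monte Carlo sampling can be sketched as follows: sample the uncertain inputs, run the model once per sample, and summarize the spread of the quantity of interest. The toy model and input distributions below are invented for illustration and are not taken from NETL's simulations.

import math, random, statistics

def toy_model(reaction_rate, flow_rate):
    # stand-in for an expensive CFD run; returns a made-up "carbon conversion"
    return 1.0 - math.exp(-reaction_rate * flow_rate)

results = []
for _ in range(10000):
    k = random.gauss(1.0, 0.1)      # uncertain reaction rate (assumed normal)
    q = random.uniform(0.8, 1.2)    # uncertain flow rate (assumed uniform)
    results.append(toy_model(k, q))

print(statistics.mean(results), statistics.stdev(results))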
Some preliminary results of this research were published in two 2013 papers titled “Validation and Uncertainty Quantification of a Multiphase Computational Fluid Dynamics Model” and “Applying
Uncertainty Quantification to Multiphase Flow Computational Fluid Dynamics,” that were published in Industrial & Engineering Chemistry Research journal and in Powder Technology journal, respectively.
Contact: Mehrdad Shahnam, 304-285-4546 and Madhava Syamlal, 304-285-4685 | {"url":"http://www.netl.doe.gov/newsroom/labnotes/aug-2013/quantifying-uncertainty-in-computer-predictions","timestamp":"2014-04-20T03:39:59Z","content_type":null,"content_length":"33160","record_id":"<urn:uuid:3717fe96-f211-4f87-8bfe-c72f493a220e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
10.2. CATADIOPTRIC TELESCOPES WITH FULL-APERTURE CORRECTORS
Unlike sub-aperture catadioptric telescopes, combinations with spherical primary mirror are the most frequent form in arrangements with full-aperture corrector. Such combinations are attractive not
only for the ease of mirror fabrication, but also the possibility of influencing coma of the spherical mirror by varying its stop position - something that sub-aperture corrector catadioptrics can't
take advantage of. The corrector can be used with either a single-mirror (Newtonian or camera) or with two-mirror (usually Cassegrain) arrangements. While there is a number of possible forms of the
full-aperture corrector, the three types most often used for amateur telescopes are Schmidt corrector, Maksutov meniscus corrector and Houghton two-element corrector. The most common configuration
are catadioptric Newtonian - Schmidt-Newtonian (SN), Maksutov-Newtonian (MN) and (quite rare) Houghton-Newtonian - as well as catadioptric Cassegrain systems: Schmidt-Cassegrain (SCT),
Maksutov-Cassegrain (MCT) and (never saw one) Houghton-Cassegrain telescope (HCT).
Performance level achievable, as well as ease of manufacture, differ somewhat from one to another.
10.2.1. catadioptric dialytes
Before taking a closer look of these three full-aperture corrector types, it will be well worth the time to revisit the very first form of catadioptric telescope, technically in between sub-aperture
and full-aperture catadioptrics. This is catadioptric dialyte (generally, a lens objective consisting of two widely separated elements), in which the rear lens has been replaced by catadioptric
element - a negative meniscus concave toward front lens, with its rear surface reflecting converging beam back toward front lens, to form the final focus. The earliest of these designs date back to
the 19th century. In their simplest form, they consist of only two elements, yet can approach very high level of optical correction.
The catadioptric element in these dialyte designs is often referred to as Mangin mirror, although they are not quite the same thing. Mangin mirror was invented in 1876 by a French officer whose name
it carries, as an alternative to paraboloid (overcorrection from the concave front lens surface can be calibrated to cancel under-correction of the spherical reflecting concave surface). Not only
that it is originally intended to be a single image-forming element, it also historically follows, not precedes, the use of technically similar optical element in catadioptric dialytes. For those
reasons, in the absence of information on who did actually introduce the lens/mirror element in early dialytes, it is better to refer to it simply as catadioptric element.
Among the early catadioptric dialytes, those that are still viable systems - and both original and exceptional with respect to their correction level - are Hamiltonian, Schupmann and - much more
recent, but belonging to the same family - Honders telescope and astrograph.
1. Hamiltonian telescope
W. F. Hamilton conceived his idea of a dialyte catadioptric at the very beginning of the 19th century, and had it patented in 1814. In its simplest form, it consists of only two single glass
elements: a single lens at the front end, followed by a widely separated negative meniscus with silvered convex rear surface. Hamilton was the first to utilize the advantage of this type of
arrangement, which allows for weaker lens surfaces than standard achromats, due to its power being mostly produced by the reflecting surface. In fact, the first three refractions in a Hamiltonian
produce a collimated, or near-collimated beam of light, which is then focused by the reflecting surface, and actually slightly weakened by the fourth refraction. Resulting secondary spectrum is
typically 2 to 4 times lower (possibly more) than in a comparable standard doublet achromat.
Both, chromatic and monochromatic aberrations of this arrangement are easy to compute, being a sum of the aberrations of a single lens for object at infinity, a pair of single refracting surfaces for
which the object is the image formed by the preceding element (this is the same front surface of the catadioptric element, for the entering and exiting light), and a single reflecting surface, for
which the object is the image formed by the concave refracting surface (third refraction ) in front of it - typically at infinity (collimated or near-collimated beam).
While various configurations are possible, the usual one is with the rear flint element at half the focal length of the front crown lens. As a result, the flint element is about half the diameter of
the front lens, and its front concave radius - if producing collimated beam - has negative focal length of about half the front lens' focal length (this places its object - the image projected by the
front lens - at its focus). The final system focal length is mainly determined by the radius of curvature of the rear reflecting surface. It may be - and usually is - somewhat affected by the second
refraction at the glass surface preceding the reflector. If the final focus distance from the rear element is φ, in units of the element separation, the system focal length in this arrangement is
approximately ƒ~φƒ1, ƒ1 being the front lens' focal length, and the system's physical length is φƒ1/2.
FIGURE 163: Optical scheme of the Hamiltonian dialyte, relatively simple 2-element configuration with the potential of being made into a highly corrected system.
General optical scheme of the Hamiltonian is a positive crown lens at the front end, followed by flint catadioptric element at the rear end. Its optical prescription begins with the lens shape factor
q needed to generate spherical aberration and coma that will nearly offset those generated by the rear element. The q value is approximately in the range 1.2<q<1.4. It determines the lens' surface
radii ratio as R2/R1=(1+q)/(q-1), where R1 and R2 are the front and rear lens surface radius of curvature, respectively. Substituting it in Eq. 1.2, gives the radii to focal length relation as R1=(n1
-1)(1-R1/R2)ƒ1, n1 being the front lens' refractive index. The front convex surface of the catadioptric element placed at half the front lens' focal length from it produces collimated beam when its
focal length is -ƒ1/2. Thus, radius of curvature for this surface that will produce collimated beam is, from Eq. 1, R3=-(n2-1)ƒ1/2, n2 being the glass refractive of the rear element.
According to Eq. 100.1, dialyte doublet is achromatized (i.e. two widely separated wavelengths brought to the same focus) when Abbe numbers of the front and rear element relate as
where S is the element separation, numerically positive, 1/ƒf=(n1-1)[(1/R1)-(1/R2)] is the front lens' focal length and ƒr is either focal length of the third surface alone, ƒr=ƒ3=R3/(n2-1), when
light reflected from R4 passes through R3 without refraction (i.e. has its focus coinciding with the center of curvature of R3), or 1/ƒr=[1/(n2-1)R3]-1/ƒ', when refraction is taking place, with ƒ'
being the additional negative refracting power at R3, obtained from 1/ƒ'=(1/ƒ3)+[(2/R4)-(1/I)], where 2/R4 is the focal length of the reflecting surface and I is the final image separation from the
rear element (all three numerically negative). For ƒr=ƒ3=ƒ1/2=S, the needed Abbe number for the flint V2=V1/2, and for somewhat stronger ƒr with the additional final refraction at R3, the needed Abbe
number V2 for the rear element is nominally somewhat larger (i.e. of lower dispersive power) than V1/2.
This is what makes crown/flint combination suitable in a typical Hamiltonian arrangement with the element separation - and the front (refractive) surface of the rear element focal length ƒ3 - of
about 1/2 the front lens' focal length.
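(For orientation on the relation referred to above: the generic achromatism condition for two widely separated thin elements is y1²/(ƒfV1)+y2²/(ƒrV2)=0 and, with the marginal ray height at the rear element reduced to y2=y1(ƒf-S)/ƒf, it gives V2=-(ƒf-S)²V1/(ƒfƒr), which reproduces the V2=V1/2 figure quoted above for ƒr=-ƒf/2 and S=ƒf/2. This is a generic thin-element estimate, not necessarily the exact form of Eq. 100.1.)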
If the reflective surface R4 focuses at the center of curvature of R3, its radius of curvature is given by R4=2(R3+t), t being the rear element center thickness. If the final focus is to be at ƒ1/2
from the reflective surface which, with marginal ray height at the catadioptric element being 1/2 the aperture radius, results in the final system focal length nearly equal to that of the front lens,
needed radius of curvature of the reflecting surface R4=[2nR3/(n'-1)(n'-n)]+2t, with n=-n2 and n'=-1 being the incident and index of refraction at the glass surface (it is also obtained from Eq. 1,
with the mirror focus distance from R3 as object distance, and the final image distance from R3 as its image distance). For any given final image separation I from R3, needed reflecting surface
radius is
EXAMPLE: A 150 ƒ/12 Hamiltonian with the rear lens at 1/2 the front lens' focal length separation, and final focus at the front lens. The system focal length is nearly identical to that of the front
lens, thus we can start with ƒ1=1800mm. For BK7 crown (n1=1.5187, V1=64.4) front lens with the shape factor q=1.3, the lens surface curvature radii relate as R2=7.67R1, with R1=(n1-1)(1-R1/R2)ƒ1=
812mm and R2=6228mm.
Needed radius of curvature of the F2 flint (n2=1.624, V2=34.4) rear element's front surface R3=-(n2-1)ƒ1/2=-562mm, and needed radius of curvature of the fourth, reflecting surface for the image
separation I=-900 and element thickness t=10mm is R4=-1482. Since the total refracting power at the rear element ƒ3=-660, best Abbe number for it is, according to Eq. (k), V2=0.58V1=37.15 (assuming
its refractive index unchanged; stronger index will require yet higher Abbe number, and vice versa), which is somewhat higher, numerically, than F2 flint's 36.6. It indicates that F3 flint, or even
F9, would probably be a better match with respect to minimizing secondary spectrum, but in either case minor adjustment will be necessary to compensate for less than perfect glass match and tighten
up the red and blue foci.
Other alternatives for minimizing secondary spectrum are better glass match, varying the element thickness or front lens' power, etc. Secondary spectrum of the adjusted system (SPECS) is 0.33mm (ƒ/
5400). It is 2.7 times lower than in a comparable Fraunhofer doublet, which puts this 150mm ƒ/12 Hamiltonian at the level of an ƒ/32 Fraunhofer with respect to the size of secondary spectrum.
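As a quick sanity check on these figures, the prescription formulas quoted earlier are easy to evaluate directly; the short sketch below just re-computes R1, R2 and R3 from the stated glass data (a numerical check only, not part of the original design procedure):

n1, V1 = 1.5187, 64.4   # BK7 crown
n2 = 1.624              # F2 flint
f1 = 1800.0             # front lens focal length, mm
q = 1.3                 # lens shape factor
ratio = (1 + q) / (q - 1)              # R2/R1 = 7.67
R1 = (n1 - 1) * (1 - 1 / ratio) * f1   # ~812 mm
R2 = ratio * R1                        # ~6224 mm (the 6228 above comes from rounding R1 first)
R3 = -(n2 - 1) * f1 / 2                # ~-562 mm
print(round(R1), round(R2), round(R3))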
Unfortunately, as common to most dialytes, the field remains crippled by lateral chromatism. Its cause is in the significant element separation, resulting in the chief ray from the front lens
arriving at the rear lens well off its center. Since the two surfaces at this height have different powers, different wavelengths do not exit the lens nearly parallel, as they do when passing through
a lens near its center, but at an angle, and consequently are being focused at different heights in the image plane. This lateral color is created by the same dispersive power needed to correct the
secondary spectrum, thus cannot be avoided (unlike secondary spectrum, it is only mildly affected by relatively small changes in element separation, or power). Solution to this problem suggested by
Hamilton is to compensate - or significantly reduce - lateral color error by the use of matching eyepieces (a simple singlet's doublet at the bottom of FIG. 145, right, would be appropriate for this
purpose). It can also be eliminated by achromatizing either catadioptric element alone, or both front and rear elements.
Various arrangements of the two Hamilton elements are possible. The original configuration suffers from excessive lateral color, but it can be remedied by adding a single-lens corrector before the
focal plane (FIG. 164).
FIGURE 164: LEFT: Unusually fast 150mm ƒ/7.6 Hamiltonian (SPECS) illustrates well design flexibility. Diffraction limited field diameter set by astigmatism and coma is still near 0.8 degrees,
spherical aberration is corrected to 1/25 wave P-V and secondary spectrum is slightly over 0.13mm, or ƒ/8500, comparable to an ƒ/32 Fraunhofer doublet. Obviously, all Hamiltonians require diagonal
flat in order to make their image accessible. However, required minimum obstruction is quite small, approximately in 0.1D-0.15D range. Minimum obstruction for this ƒ/7.6 system is below ~0.12D, and
for well illuminated field limited by 1.25" barrel, it is still below 0.2D (D being the aperture diameter). If we'd only be able to correct that horrible lateral color!
The good news is - that is quite possible. A single positive field lens can accomplish just that. Shown to the right on FIG. 164 is a system with such simple corrector added. No calculations were
made, no serious attempts to optimize the arrangement; merely a positive (plano-convex) F2 lens was placed in the cone converging toward final focus, knowing that it should suppress lateral color.
So it did, turning a system unusable without matching eyepieces into one with lateral color practically disappearing from this same 0.25° diameter field. In order to compensate for the corrector's
secondary spectrum, the first two elements need to have it appropriately imbalanced (in this case, red and green focusing together, blue farther out). It can be accomplished by a small adjustment of
the separation between the two main elements. It is almost certain that the arrangement with field corrector for lateral color can be further optimized. Secondary spectrum is already exceptionally
low - less than ƒ/9000, placing this 150mm ƒ/7.4 system at a level of an ƒ/32 standard achromat. Both, remaining lateral color and coma visible in the outer field will likely be nearly cancelled in
design optimization.
More examples of the Hamiltonian can be found on Roger Ceragioli's "medial" telescopes' page. With four glass elements, as Ceragioli shows, correction is essentially perfect, including lateral color. | {"url":"http://www.telescope-optics.net/catadioptric_telescopes.htm","timestamp":"2014-04-18T00:20:12Z","content_type":null,"content_length":"33102","record_id":"<urn:uuid:75c0f43f-72b0-4f8f-a258-760ac58a5d49>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1999 [00154]
Re: dummy index list
• To: mathgroup at smc.vnet.net
• Subject: [mg20275] Re: [mg20256] dummy index list
• From: "Andrzej Kozlowski" <andrzej at tuins.ac.jp>
• Date: Mon, 11 Oct 1999 02:19:51 -0400
• Sender: owner-wri-mathgroup at wolfram.com
I shall only attempt to answer question 1. First, the code you sent is
incorrect. I assume that you meant:
testlist = {a, -b, c, -d};
Cases[Flatten[testlist, Infinity, Times], _Symbol]
{a, b, c, d}
You can get the same result somewhat quicker with:
DeleteCases[testlist, -1, Infinity]
{a, b, c, d}
This code works because of the OneIdentity attribute of Times, which means
that Times[a] is just a. Moreover, this code has a certain advantage over
yours (probably irrelevant for your needs) in that it will also work in
cases like
testlist = {a^2, -b^2, c, -d};
here your code gives
Cases[Flatten[testlist, Infinity, Times], _Symbol]
{c, d}
DeleteCases[testlist, -1, Infinity]
{a^2, b^2, c, d}
Andrzej Kozlowski
Toyama International University
>From: "Arturas Acus" <acus at itpa.lt>
To: mathgroup at smc.vnet.net
>To: mathgroup at smc.vnet.net
>Subject: [mg20275] [mg20256] dummy index list
>Date: Sun, 10 Oct 1999 00:04:08 -0400
> Dear Group,
> I have 2 questions:
> 1) I want the fastest way to select dummy symbols
> from some expression. Suppose we have a list of
> dummy indices {a,-b,c, -d}. What is the fastest way to
> get rid of the minus sign?
> Here is my solution:
> testlist={a,-b,c, -d};
> Map[Cases[Flatten[#,Infinity,Times],_Symbol]&,testlist]
> However I am not satisfied and believe there should
> be a simple solution for such a simple task. I will use
> this function very often in future, so it should be
> as fast as possible.
> 2) I am not a professional programmer, so I am
> very interesting in algorithm speed estimates
> (there was a lot of such estimates published
> recently in this group).
> Do some tutorials on the web exist on this subject?
> At the moment I am interesting in Mathematica SameQ
> algorithm asymptotic (theoretic).
> Is it n or log(n) or some other?
> How one can know or guess this? Probably
> there are some tables for various basic operations,
> for example, like
> 1.the best selection algorithm can be done at speed ??
> 2. the list intersection algorithm is ??
> 3 the union can be done at ??
> and so on. Thanks.
> Dr. Arturas Acus
> Institute of Theoretical
> Physics and Astronomy
> Gostauto 12, 2600,Vilnius
> Lithuania
> E-mail: acus at itpa.lt
> Fax: 370-2-225361
> Tel: 370-2-612906 | {"url":"http://forums.wolfram.com/mathgroup/archive/1999/Oct/msg00154.html","timestamp":"2014-04-17T07:27:45Z","content_type":null,"content_length":"36854","record_id":"<urn:uuid:2df8a20d-d78f-45f6-adda-166db9124774>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Valley Glen, CA Calculus Tutor
Find a Valley Glen, CA Calculus Tutor
...Math is one of my strongest subjects. I took Prealgebra in Middle School and passed with an A. I have many math books that I would pull information from when tutoring.
13 Subjects: including calculus, physics, geometry, algebra 1
...I can diagnose problem areas quickly and help students get to where they need to be. By carefully observing how my students solve problems, I uncover their weaknesses if any. Once I know their
problem areas, I make sure they understand the concepts and provide them extra practice until they are comfortable with them.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...With all the experience I have gained over the years I feel very comfortable tutoring various levels of mathematics and different age groups of students. I am currently back at San Fernando
High School working for a non-profit organization as the new assistant site coordinator. I am the head of...
7 Subjects: including calculus, geometry, algebra 1, algebra 2
I've been tutoring since 1993 and I taught high school for one year. I like to have a friendly relationship with my students so its not such a drag for them to show up to sessions and so they
stay inspired to learn. I've worked with students with different academic backgrounds and learning abilities and understand the potential problems students may run into while learning new
10 Subjects: including calculus, chemistry, algebra 2, algebra 1
...I always meet students at their current skill levels and build from there. I believe mathematics is alive with a rich history and a wealth of applications, and this comes through in my
tutoring. I enjoy not only the mechanics of solving math problems, but also the philosophy of mathematics and how math applies to physics and engineering.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
Related Valley Glen, CA Tutors
Valley Glen, CA Accounting Tutors
Valley Glen, CA ACT Tutors
Valley Glen, CA Algebra Tutors
Valley Glen, CA Algebra 2 Tutors
Valley Glen, CA Calculus Tutors
Valley Glen, CA Geometry Tutors
Valley Glen, CA Math Tutors
Valley Glen, CA Prealgebra Tutors
Valley Glen, CA Precalculus Tutors
Valley Glen, CA SAT Tutors
Valley Glen, CA SAT Math Tutors
Valley Glen, CA Science Tutors
Valley Glen, CA Statistics Tutors
Valley Glen, CA Trigonometry Tutors
Nearby Cities With calculus Tutor
Arleta, CA calculus Tutors
Boyle Heights, CA calculus Tutors
East Los Angeles, CA calculus Tutors
La Tuna Canyon, CA calculus Tutors
North Hollywood calculus Tutors
Rancho La Tuna Canyon, CA calculus Tutors
Sect La Vega, PR calculus Tutors
Sherman Oaks calculus Tutors
Sherman Village, CA calculus Tutors
Studio City calculus Tutors
Toluca Terrace, CA calculus Tutors
Valley Village calculus Tutors
Van Nuys calculus Tutors
Vistas De La Vega, PR calculus Tutors
West Toluca Lake, CA calculus Tutors | {"url":"http://www.purplemath.com/Valley_Glen_CA_Calculus_tutors.php","timestamp":"2014-04-16T19:01:35Z","content_type":null,"content_length":"24478","record_id":"<urn:uuid:7cf94255-c7d9-419c-8b02-5f035dad3495>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - Changing mass
Changing mass
mraptor Jan31-13 01:15 PM
Changing mass
I want to calculate how the acceleration changes if I have changing mass but constant thrust, i.e.:
T = m*a
a = T / m
(I know it has to be calculus).
Then again I also want to be able to calculate displacement and velocities etc..
Trying to find somewhere on the internet a tutorial on equations of motion when the acceleration is varying.. but most of the time I find equations for constant-acceleration.
Do you have a good tutorial ? (don't point me to wikipedia, it is good as reference but not as tutorial)
I would like also to have some simple exercises, so I can figure out how it is done in general.
thank you
jbriggs444 Jan31-13 02:06 PM
Re: Changing mass
You are on the right track. a = T/m and it takes calculus.
The derivative of 1/m with respect to m is -1/m^2
So for constant T, the derivative of T/m with respect to m is -T/m^2
The minus sign indicates that as m increases the quotient T/m decreases.
mraptor Jan31-13 04:14 PM
Re: Changing mass
Nice.. ok now how can I calculate displacement or time taken to cross a specific distance having this acceleration...
I suppose I can't use :
d = x + v*t + 1/2 a*t^2
because this is only valid for constant acceleration ?
jbriggs444 Feb1-13 07:02 AM
Re: Changing mass
You are correct. For a non-constant acceleration instead of computing the change in velocity by simply multiplying acceleration by time, you have to compute it by integrating acceleration over time
using calculus.
Similarly, for a non-constant velocity you compute change in position by integrating velocity over time rather than simply multiplying velocity by time.
You end up with a double integral.
The first integration to compute velocity as a function of time results in v(t) = v(0) + ∫ a(t) dt = v(0) + ∫ T/m(t) dt.
HallsofIvy Feb1-13 08:36 AM
Re: Changing mass
Conservation of momentum, mv, leads to (mv)'= m'v+ mv'= 0 (the ' indicates the derivative) if there is no external force. If there is a force, then we do not have conservation of momentum but have
(mv)'= m'v+ mv'= F.
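For a concrete feel, here is a minimal numerical sketch of the scenario in the first post, assuming constant thrust and a mass that decreases linearly in time (all numbers made up), integrating a = T/m in small steps to get velocity and displacement:

T = 100.0      # constant thrust, N (assumed)
m0 = 50.0      # initial mass, kg (assumed)
rate = 0.5     # mass loss rate, kg/s (assumed)
dt = 0.001
t = v = x = 0.0
while t < 20.0:
    m = m0 - rate * t
    a = T / m        # a = T/m, as above
    v += a * dt      # first integration: acceleration to velocity
    x += v * dt      # second integration: velocity to displacement
    t += dt
print(v, x)

The numbers are arbitrary; the point is only that with a varying acceleration, velocity and displacement come from step-by-step (or closed-form) integration rather than from the constant-acceleration formulas.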
© 2014 Physics Forums | {"url":"http://www.physicsforums.com/printthread.php?t=668395","timestamp":"2014-04-17T15:35:30Z","content_type":null,"content_length":"7464","record_id":"<urn:uuid:245ddb3d-2490-4924-a1eb-3911cd399c31>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sequence of numbers
September 29th 2011, 06:19 AM
Sequence of numbers
We've got a sequence of numbers {1,1}. We create a new sequence by inserting, between every two neighbouring elements, their sum. After the first insertion (n=1) we have the sequence {1,2,1}; for n=2 we have the sequence {1,3,2,3,1}
.... etc. Prove that sum of cubes of elements after n-step is equal $9*7^{n-1}+1$ | {"url":"http://mathhelpforum.com/number-theory/189124-sequence-numbers-print.html","timestamp":"2014-04-19T12:46:13Z","content_type":null,"content_length":"3392","record_id":"<urn:uuid:5e863dce-03b8-49ed-bc6e-1459e7df0b1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
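A quick numerical check of the claim above (not a proof), building each new sequence by inserting the sum of every neighbouring pair:

def step(seq):
    out = [seq[0]]
    for a, b in zip(seq, seq[1:]):
        out += [a + b, b]   # insert the sum between each neighbouring pair
    return out

seq = [1, 1]
for n in range(1, 6):
    seq = step(seq)
    print(n, sum(x**3 for x in seq), 9 * 7**(n - 1) + 1)
# n=1: 10 10, n=2: 64 64, n=3: 442 442, ...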
special multiplication rule
January 5th 2008, 09:39 AM #1
special multiplication rule
Help! this is the problem:
a piece of complex technical equipment has 711 critical parts. If the probability of failure for each of these parts is 0.0001, find the probability of having at least one failure during a random
use of the equipment. It tells me that I need to use the special multiplication rule, but without numbers plugged into the equation, I don't understand. Any help will be greatly appreciated.
I haven't heard of the 'special multiplication rule', but I would find the probability using complement.
$Pr(At \ least \ one \ failure)=1-Pr(No \ Failures)$
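Plugging in the numbers from the problem, and assuming the 711 parts fail independently (which is what the multiplication rule requires), gives
$Pr(At \ least \ one \ failure)=1-(1-0.0001)^{711}\approx 1-0.9314\approx 0.069$
so there is roughly a 6.9% chance of at least one failure.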
January 5th 2008, 10:01 AM #2 | {"url":"http://mathhelpforum.com/statistics/25582-special-multiplication-rule.html","timestamp":"2014-04-21T04:56:32Z","content_type":null,"content_length":"32158","record_id":"<urn:uuid:88051174-00d6-42b8-9b38-b7b38e21ea92>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computer Science Colloquium
Computational Foundations for Statistical Learning: Enabling Massive Science
Alexander Gray
Friday, March 11, 2005 11:30 A.M.
Room 1302 Warren Weaver Hall
251 Mercer Street
New York, NY 10012-1185
Directions: http://cs.nyu.edu/csweb/Location/directions.html
Colloquium Information: http://cs.nyu.edu/csweb/Calendar/colloquium/index.html
Richard Cole cole@cs.nyu.edu, (212) 998-3119
The data sciences (statistics, and recently machine learning) have always been part of the underpinning of all of the natural sciences. `Massive datasets' represent potentially unprecedented
capabilities in a growing number of fields, but most of this potential remains unlocked, due to the computational intractability of the most powerful statistical learning methods. The computational
problems underlying many of these methods are related to some of the hardest problems of applied mathematics, but have unique properties which make classical solution classes inappropriate. I will
describe the beginnings of a unified framework for a large class of problems, which I call generalized N-body problems. The resulting algorithms, which I call multi-tree methods, appear to be the
fastest practical algorithms to date for several foundational problems. I will describe four examples -- all-nearest-neighbors, kernel density estimation, distribution-free Bayes classification, and
spatial correlation functions, and touch on two more recent projects, kernel matrix-vector multiplication and high-dimensional integration. I'll conclude by showing examples where these algorithms
are enabling previously intractable data analyses at the heart of major modern scientific questions in cosmology and fundamental physics.
| contact webmaster@cs.nyu.edu | {"url":"http://cs.nyu.edu/csweb/Calendar/colloquium/sp05/mar11.html","timestamp":"2014-04-20T20:57:55Z","content_type":null,"content_length":"9922","record_id":"<urn:uuid:fb42053f-0267-481a-a3cd-a8b6c86fa04c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is Leverage Capital?
A company or institution can use its own funds plus borrowed funds for investment. This is also known as leverage capital. As long as the capital (owned assets), plus the borrowed funds are invested
at a rate of return higher than the interest on the borrowed funds, the company or institution makes money. The ratio of borrowed funds to the investor's own funds is the leverage ratio.
A simple example of using leverage capital is the business of a bank. A bank acts as a financial intermediary. Bank depositors and shareholders provide the capital. The bank loans the capital out to
qualified borrowers. As long as the rate of interest paid by borrowers exceeds the rate of interest that the bank has promised to depositors, then the bank realizes a profit.
Commercially insured banks have average leverage ratios of 15 to one. Their leverage ratios are typically much higher than that of corporations because banks are in business to provide leverage
capital for others. Although required to have certain reserves, banks typically make money by loaning money at interest. Banks protect themselves by tightening their lending practices to diminish the
possibility of loan defaults.
Financial leverage is also referred to as trading on equity. The ratio of financed debt to equity (leverage ratio) is an important statistic to determine the health of a company. Higher leverage
ratios make it important to understand the risk level of the company’s investments. If a company uses leverage capital, the potential returns and losses are magnified.
A company will leverage its equity because it gives the company the potential of a greater return on its funds. Leverage ratios for businesses are much lower than those for banks. If a company has
$200,000 US Dollars (USD) in capital and borrows $400,000 USD, the leverage ratio is two to one. If the company invests $600,000 USD, then the return on the investment needs to be at a higher percent
than the company owes the bank. A bank loan at five percent would require an investment return of more than five percent to be profitable for the company.
The net between the loan interest and the investment return is magnified by the leverage ratio. If a corporation borrows funds at five percent and has a return of ten percent the interest owed would
be $20,000 USD. The total return on its investment would be $60,000 USD. Once the loan is repaid the corporation would have a $40,000 USD increase in its equity.
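The same arithmetic, written out for the example figures above and for a few different return rates (a toy calculation, not financial data):

capital, borrowed, loan_rate = 200_000, 400_000, 0.05   # the example figures above
invested = capital + borrowed                           # 600,000 put to work
for return_rate in (0.10, 0.04, 0.03):
    gain = invested * return_rate        # return on the whole investment
    interest = borrowed * loan_rate      # 20,000 owed on the loan
    print(return_rate, gain - interest)  # change in equity: +40,000, +4,000, -2,000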
When the return is lower than the loan interest rate, using leverage capital can decrease the total equity a corporation has. If the investment return above had only been four percent, then the total
return would have been $24,000 USD. The increase in equity is minimal. At a three percent return, the corporation begins to lose equity. The use of leverage capital needs to be weighed against all
risk factors. Corporations exercise caution and due diligence in choosing investments for leveraged funds. | {"url":"http://www.wisegeek.com/what-is-leverage-capital.htm","timestamp":"2014-04-17T14:26:42Z","content_type":null,"content_length":"67778","record_id":"<urn:uuid:5db8ca8e-0328-42b3-a678-08bdbce4467b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Non-perturbative Quantum Effects 2000
Insight into field theory from the AdS-CFT correspondence
PoS(tmr2000)001 pdf
A brief history of the stringy instanton
PoS(tmr2000)002 pdf
Instantons and quaternions
PoS(tmr2000)003 pdf
Holography and the c-theorem
PoS(tmr2000)004 pdf
On the c-theorem in more than two-dimensions
PoS(tmr2000)005 pdf
Holographic anomalies
PoS(tmr2000)006 pdf
Superstring theory on AdS_3 times a coset manifold
PoS(tmr2000)007 pdf
Confining Wilson loops from supergravity and string models
PoS(tmr2000)008 pdf
Extended objects and SUGRAS
D-branes, categories and supersymmetry
D-branes in the presence of NS5 branes
PoS(tmr2000)010 pdf
The low energy dynamics of non-BPS branes
PoS(tmr2000)011 pdf
From ADM to ''brane world'' masses
PoS(tmr2000)012 pdf
Conformal and quasiconformal realizations of exceptional Lie groups
Gauged supergravities in 2d
PoS(tmr2000)014 pdf
Hidden superconformal symmetry
PoS(tmr2000)015 pdf
Harmonic superspaces and superconformal fields
PoS(tmr2000)016 pdf
Ultra-violet structure of supergravity theories
PoS(tmr2000)017 pdf
Regular BPS black holes, pinpointing the macroscopic and microscopic correspondence
PoS(tmr2000)018 pdf
Black holes radiate mainly on the brane
PoS(tmr2000)019 pdf
M-theory and dualities
Subgroups of Lie groups
Aspects of electromagnetic duality
PoS(tmr2000)021 pdf
Nonlinear self-duality and supersymmetry
PoS(tmr2000)022 pdf
Mirror symmetry and self-duality equations
PoS(tmr2000)023 pdf
Some classical solutions of matrix model equations
PoS(tmr2000)024 pdf
A statistical physics approach to M-theory integrals
PoS(tmr2000)025 pdf
Matrix model of the (1+1) dimensional dilatonic black hole in the double scaling limit
PoS(tmr2000)026 pdf
Noncommutative geometry
Renormalization group and the Riemann-Hilbert problem
Aspects of noncommutative field and string theories
PoS(tmr2000)028 pdf
New noncommutative gauge theories
PoS(tmr2000)029 pdf
A non-abelian Chern-Simons term for non-BPS D-branes
PoS(tmr2000)030 pdf
Branes, boundary CFT and non-commutativity
PoS(tmr2000)031 pdf
Quantum groups and quantum moduli spaces of flat connections
PoS(tmr2000)032 pdf
Continuous integrable systems and Boundary effects
Dual Baxter's equation and quantum algebraic geometry
PoS(tmr2000)033 pdf
Differential equations and integrable models
PoS(tmr2000)034 pdf
Finite size effects in perturbed boundary conformal field theories
PoS(tmr2000)035 pdf
Unstable particles in integrable quantum field theories
PoS(tmr2000)036 pdf
Boundary states in integrable quantum field theory
PoS(tmr2000)037 pdf
Boundary conditions in conformal and integrable theories
PoS(tmr2000)038 pdf
Solitonic sectors, conformal boundary conditions and three-dimensional topological field theory
PoS(tmr2000)039 pdf
Boundary flows for minimal models
PoS(tmr2000)040 pdf
Liouville with boundary
PoS(tmr2000)041 pdf
From reflection amplitudes to one-point functions in non-simply laced affine Toda theories and applications to coupled minimal models
PoS(tmr2000)042 pdf
Hidden Virasoro symmetry of Sine-Gordon theory
PoS(tmr2000)043 pdf
Null vectors in logarithmic conformal field theory
PoS(tmr2000)044 pdf
Applications and lattice models
Critical models of disorder in 2 spatial dimensions
PoS(tmr2000)045 pdf
Magnetization plateaux in quasi 1d systems
PoS(tmr2000)046 pdf
Universal ratios of the renormalization group
PoS(tmr2000)047 pdf
Valence bound ground states in quantum antiferromagnets and quadratic algebras
PoS(tmr2000)048 pdf
Life and death of Schrödinger cats in 1D interacting fermion systems
PoS(tmr2000)049 pdf
From fully-packed loops to meanders, exact exponents
PoS(tmr2000)050 pdf
Correlation functions of quantum integrable systems
PoS(tmr2000)051 pdf
The Drinfel'd twisted XYZ model
PoS(tmr2000)052 pdf
Construction of integrable models on ladders, and related quantum symmetries
PoS(tmr2000)053 pdf
''New'' boundary conditions in integrable lattice models
PoS(tmr2000)054 pdf | {"url":"http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=6","timestamp":"2014-04-19T00:27:49Z","content_type":null,"content_length":"24453","record_id":"<urn:uuid:c629635a-8ac6-4593-bb82-249a1c1f54df>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brookfield, IL Precalculus Tutor
Find a Brookfield, IL Precalculus Tutor
...I received an A in pre-calculus my junior year of high school and have only improved my math skills while obtaining a BS in mechanical engineering and an MS in mechanical engineering. I am
professionally employed and calculus is part of everyday life for me. Trigonometry is an essential building block in the advanced math I have to do daily as a professional engineer.
20 Subjects: including precalculus, physics, calculus, algebra 1
...I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the Glenview area the past four years and have
tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands ...
20 Subjects: including precalculus, chemistry, calculus, physics
I am a certified math teacher. Currently, I work as a substitute teacher at Elmwood Park School District and Morton High Schools in Cicero. I have been tutoring students since 2008 and preparing
them for ACT. I have BA in Mathematics and Secondary Education from Northeastern Illinois University.
12 Subjects: including precalculus, calculus, geometry, algebra 1
...I have tutored regular pre-calculus and advanced pre-calculus to many high school students. Some of the topics I have worked with are Polar and Rectangular Coordinates, Complex Numbers, Series
and Sequences, Probability: Permutation & Combination, probability using Binomial Theorem, Introducing L...
11 Subjects: including precalculus, calculus, geometry, algebra 1
...My approach is skills-based and honed from years of keeping up in advanced math classes. I will teach you to develop and hone your study and time management skills to create an understanding
of advanced topics in math and science and make yourself a better writer. I create an environment of growth and accountability.
36 Subjects: including precalculus, English, reading, chemistry
Related Brookfield, IL Tutors
Brookfield, IL Accounting Tutors
Brookfield, IL ACT Tutors
Brookfield, IL Algebra Tutors
Brookfield, IL Algebra 2 Tutors
Brookfield, IL Calculus Tutors
Brookfield, IL Geometry Tutors
Brookfield, IL Math Tutors
Brookfield, IL Prealgebra Tutors
Brookfield, IL Precalculus Tutors
Brookfield, IL SAT Tutors
Brookfield, IL SAT Math Tutors
Brookfield, IL Science Tutors
Brookfield, IL Statistics Tutors
Brookfield, IL Trigonometry Tutors
Nearby Cities With precalculus Tutor
Argo, IL precalculus Tutors
Bellwood, IL precalculus Tutors
Berwyn, IL precalculus Tutors
Broadview, IL precalculus Tutors
Countryside, IL precalculus Tutors
Forest View, IL precalculus Tutors
La Grange Park precalculus Tutors
La Grange, IL precalculus Tutors
Lyons, IL precalculus Tutors
Mc Cook, IL precalculus Tutors
Mccook, IL precalculus Tutors
North Riverside, IL precalculus Tutors
Riverside, IL precalculus Tutors
Stickney, IL precalculus Tutors
Westchester precalculus Tutors | {"url":"http://www.purplemath.com/Brookfield_IL_Precalculus_tutors.php","timestamp":"2014-04-21T12:55:24Z","content_type":null,"content_length":"24480","record_id":"<urn:uuid:fcf93f01-260a-4c61-bf1d-1261e8b01f96>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimal models with local coefficients
Let $X$ be a path-connected nilpotent space (meaning $\pi_1(X)$ is nilpotent and acts nilpotently on the higher homotopy groups). Let $\rho\colon\thinspace\pi_1(X)\to \mathrm{Gl}(V)$ be a
representation, where $V$ is a rational vector space. Then $\rho$ defines a local coefficient system on $X$, and we have the cohomology with local coefficients $H^\ast(X;\rho)$.
It is well known that Sullivan's theory of minimal models works well for nilpotent spaces, and one can (more or less) easily obtain a minimal, nilpotent cdga $M_X$ with $H(M_X)\cong H^*(X;\mathbb{Q})
$, whose isomorphism type determines the rational homotopy type of $X$. (Here I am assuming some finiteness hypotheses, such as are satisfied if $X$ is a manifold or finite CW complex.)
My question is, is there a (more or less) easy way to obtain a minimal (in some sense) dg-module $M_{X,\rho}$ over $M_X$ which has $H(M_{X,\rho})\cong H^\ast(X;\rho)$ as modules over $H^\ast(X;\mathbb{Q})$?
Ideally I would like to be able to calculate $H^1(X;\rho)$ when $X$ is a nilmanifold (so $\pi_1(X)$ is nilpotent with trivial higher homotopy groups) and calculate cup products $a_1\cup a_2\in H^2(X;
\rho_1\otimes\rho_2)$. So far I have looked at
Gómez-Tato, Antonio Théorie de Sullivan pour la cohomologie à coefficients locaux.[Sullivan's theory for cohomology with local coefficients] Trans. Amer. Math. Soc. 330 (1992), no. 1, 235–305.
and parts of Sullivan's original paper
Sullivan, Dennis Infinitesimal computations in topology. Inst. Hautes Études Sci. Publ. Math. No. 47 (1977), 269–331.
at.algebraic-topology rational-homotopy-theory
1 I think you are asking for the impossible here because the cohomology with local coefficients does not generally have a cup product ring structure. However, $H^*(X;\rho)$ is of course a module
over $H^*(X)$ and so you can ask for a dg-module over a cdga model for $X$. – Jeffrey Giansiracusa Jun 3 '11 at 9:42
Jeffrey, you're right, I'll edit the question. Thanks. – Mark Grant Jun 3 '11 at 10:13
1 Answer
I think one could argue as follows: one can start by constructing a cochain complex $A^*(X,\rho)$ that computes the twisted cohomology so that it will be a module over the Sullivan
cochains $A^*(X)$ of $X$ (with constant coefficients). Then one can plug it in Hinich's machinery: see http://arxiv.org/PS_cache/q-alg/pdf/9702/9702015v1.pdf , section 3. A minimal model
would then be a cofibrant replacement of $A^*(X,\rho)$ as an $A^*(X)$-module.
Let me also mention a reference that may be useful: Gomez-Tato, Halperin, Tanr\'e, Rational homotopy theory for non-simply connected spaces. Trans. Amer. Math. Soc. 352 (2000), no. 4.
CFM Calculation
Hi JM. If I assume you mean a flow of 90 SCFM (standard cubic feet per minute) of air, then I calculate a pressure drop on the order of 3.5 psi over the 150 foot length. This doesn't take into
account any elbows, reducers or expanders, valves or other potential restrictions. They will all increase the total pressure drop. If you're referring to a different gas, it will also affect the result.
If you'd like to understand how to calculate this pressure drop, essentially it's the application of the D'Arcy-Weisbach equation as given here:
The online calculators are a bit difficult to use as you need to do some research into properties and there's more than one calculator you need to use. I suppose you could do it that way, but I'd
suggest making your own calculator using Excel.
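If you'd rather script it than build a spreadsheet, here's a rough Python sketch of the same Darcy-Weisbach estimate for a straight run of pipe. The pipe diameter, roughness and air properties below are assumed values of mine (not numbers from this thread), it treats standard CFM as actual CFM, and it ignores compressibility and fittings, so treat it as illustrative only:

import math

def pressure_drop_psi(flow_scfm, pipe_id_in, length_ft,
                      rho=0.075, mu=1.2e-5, roughness_ft=1.5e-4):
    # rho in lbm/ft^3, mu in lbm/(ft*s); defaults are roughly air at standard conditions
    d_ft = pipe_id_in / 12.0
    area_ft2 = math.pi * d_ft ** 2 / 4.0
    v = (flow_scfm / 60.0) / area_ft2            # mean velocity, ft/s
    re = rho * v * d_ft / mu                     # Reynolds number
    # Haaland's explicit approximation to the Colebrook friction factor
    f = (-1.8 * math.log10((roughness_ft / d_ft / 3.7) ** 1.11 + 6.9 / re)) ** -2
    dp_psf = f * (length_ft / d_ft) * rho * v ** 2 / (2.0 * 32.174)   # lbf/ft^2; 32.174 = gc
    return dp_psf / 144.0                        # convert to psi

print(pressure_drop_psi(90.0, 1.5, 150.0))       # 1.5 in is an assumed pipe ID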
The best reference for this is the Crane Technical Paper #410. You can purchase it online here:
Edit: If you're adding flow to an existing line that already has flow going through it, the additional pressure drop won't be 3.5 psi. For example, if you've already got 90 SCFM of air flowing
through this pipe, the total pressure drop with the increased flow will be 13.8 psi, not simply whatever it was before plus 3.5 psi. The increased pressure drop is not linearly related to total flow. | {"url":"http://www.physicsforums.com/showthread.php?t=133548","timestamp":"2014-04-16T04:22:46Z","content_type":null,"content_length":"41865","record_id":"<urn:uuid:2e559c3a-0fcc-446b-8d43-7d9c2baf0c05>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Subtraction not Definable in Simply Typed Lambda Calculus
I probably wrote this a couple of years ago when I was studying polymorphic types.
What happens if we have only a monomorphic type when we define Church numerals? Let us represent Church numerals by the type (a -> a) -> a -> a for some monomorphic a:
> import Prelude hiding (succ, pred)
> type N a = (a -> a) -> a -> a
Zero and successor can still be defined. Their values do not matter.
> zero :: N a
> zero = error ""
> succ :: N a -> N a
> succ = error ""
Also assume we have pairs predefined:
> pair a b = (a,b)
We can even define primitive recursion, except that it does not have the desired type:
primrec :: (N -> b -> b) -> b -> N -> b
Instead we get the type below:
> primrec :: (N a -> b -> b) -> b -> N (N a, b) -> b
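> -- The numeral n iterates the step (k, acc) |-> (succ k, f k acc), starting from (pair zero c);
> -- primrec then returns the second component of the final pair.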
> primrec f c n =
> snd (n (\z -> pair (succ (fst z))
> (f (fst z) (snd z)))
> (pair zero c))
Therefore, predecessor does not get the type pred :: N a -> N a
we want. Instead we have:
> pred :: N (N a, N a) -> N a
> pred = primrec (\n m -> n) zero
Oleg Kiselyov, Predecessor and lists are not representable in simply typed lambda-calculus. | {"url":"http://www.iis.sinica.edu.tw/~scm/2007/substraction-not-definable-in-simply-typed-lambda-calculus/","timestamp":"2014-04-20T23:30:04Z","content_type":null,"content_length":"18108","record_id":"<urn:uuid:39756e69-f2a1-44c0-9a26-0813c8a277c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Swampscott, MA 01907
Math teacher with 25+ years tutoring experience
...I have tutored test prep including ISEE, SSAT, SAT, SAT II, ACT. I have taught Middle and H.S. Math and home tutored homebound students in Math and English. I have taught H.S. Algebra
and have tutored middle school math and H.S. math (Algebra I, II, Geometry and...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Marblehead_Algebra_tutors.aspx","timestamp":"2014-04-19T08:26:43Z","content_type":null,"content_length":"58938","record_id":"<urn:uuid:acef2d45-67aa-4c3e-ae89-b3c6cdcee052>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
CMS/CSHPM Summer 2005 Meeting
Extended affine Lie algebras are higher rank generalizations of affine Kac-Moody algebras. In this talk I will discuss recent developments in the representation theory of these algebras, and
their connections with hierarchies of soliton PDEs.
Very recently several new techniques in perturbative gauge theory have been introduced. At one-loop, any amplitude of gluons in N=4 super Yang-Mills can be written as a linear combination of
known scalar box integrals with coefficients that are rational functions. Using a generalization of unitarity cuts, in particular quadruple cuts, any coefficient can be easily written as the
product of four tree-level amplitudes. Therefore, this new technique solves the problem of computing one-loop amplitudes in N=4 super-Yang-Mills.
[no abstract]
The Topological A-Model is a "twisted" version of the N=(2,2) supersymmetric σ-model, with target space, X. The ring of observables of the A-Model is isomorphic to a certain deformation of the
cohomology ring of X. I would like to present a generalization of this structure to the case of N=(0,2) supersymmetry. The data will consist of X and a rank-r holomorphic vector bundle V → X,
satisfying ∧^r V = K_X.
I will explain, first, from the point of view of the twisted supersymmetric σ-model, why a finite-dimensional graded-commutative ring exists. And I will explain, in a few examples, how quantum
effects deform the ring structure.
This is joint work with Allan Adams and Morten Ernebjerg.
We define a natural analog of the Jimbo-Miwa tau-function on different strata of the space of holomorphic differentials over Riemann surfaces. We compute the tau-functions in terms of higher
genus generalization of Dedekind eta-function. The developed formalism is applied to rigorously compute the determinants of Laplace operators over Riemann surfaces in flat metrics with conical
singularities. The holomorphic factorization formula for such determinants gives the higher genus generalization of genus one expression by Ray-Singer.
This is a joint work with Alexey Kokotov.
We semiclassicalise the standard notion of differential calculus in noncommutative geometry on algebras and quantum groups. We show in the symplectic case that the infinitesimal data for a
differential calculus is a symplectic connection, and interpret its curvature as lowest order nonassociativity of the exterior algebra. In the Poisson-Lie group case we study left-covariant
infinitesimal data in terms of partial connections. We show that the moduli space of bicovariant infinitesimal data for quasitriangular Poisson-Lie groups has a canonical reference point which is
flat in the triangular case. Using a theorem of Kostant, we completely determine the moduli space when the Lie algebra is simple: the canonical partial connection is the unique point for other
than sl[n], n > 2, when the moduli space is 1-dimensional. This proves that the deformation-theoretic exterior algebra on standard quantum groups must be nonassociative and we provide it as a
super-quasiHopf algebra. More generally, we show that many standard quantisations in physics including of coadjoint orbits (such as fuzzy spheres) have naturally nonassociative differential
structures. Our methods also quantise quasi-Poisson manifolds of interest in string theory.
Mostly joint work with E. J. Beggs.
I will describe some recent results on analyticity and ill-posedness.
I will describe recent joint work with Mina Aganagic and Cumrun Vafa, which reinterprets the square of the open topological string wave function (also known as the generating function for open
Gromov-Witten invariants) in terms of counting supersymmetric microstates localized on a stringy defect in a gravitational theory in 4 dimensions. I will also sketch the sense in which the wave
function property of the topological string, which plays a crucial role in this work, is related to integrability.
[no abstract]
We study integrability aspects of superstrings on AdS[5] ×S^5. We show that a one parameter family of flat currents, which is gauge equivalent to that obtained by Bena, Polchinski and Roiban, is
manifestly invariant under a generalized Z[4] transformation. This symmetry is expected to simplify analysis of the currents because the Z[4] transformation is an automorphism of PSU(2,2|4), the
isometry in the theory.
We perform the canonical analysis of the theory. Especially we calculate the Poisson bracket of the currents. This bracket results in an algebra which includes a Schwinger term. Because of the
Schwinger term, more work is needed in understanding the quantum integrability properties of the system.
We describe a Poisson structure compatible with a cluster algebra structure. In the particular case of the cluster algebra formed by Penner coordinates on the decorated Teichmüller space, this leads to the
known Weil-Petersson symplectic form on Teichmüller space.
We use an inverse scattering approach to study multi-peakon solutions of the Degasperis-Procesi (DP) equation, an integrable PDE similar to the Camassa-Holm shallow water equation. The spectral
problem associated to the DP equation is equivalent under a change of variables to what we call the cubic string problem, which is a third order non-selfadjoint generalization of the well-known
equation describing the vibrational modes of an inhomogeneous string attached at its ends.
For the discrete cubic string (analogous to a string consisting of n point masses) we solve explicitly the inverse spectral problem of reconstructing the mass distribution from suitable spectral
data, and this leads to explicit formulas for the general n-peakon solution of the DP equation. Central to our study of the inverse problem is a peculiar type of simultaneous rational
approximation of the two Weyl functions of the cubic string, similar to classical Padé-Hermite approximation but with lower order of approximation and an additional symmetry condition instead.
The results obtained are intriguing and nontrivial generalizations of classical facts from the theory of Stieltjes continued fractions and orthogonal polynomials.
This talk is based on joint work with Hans Lundmark (Linköping University, Sweden) which, under the same title, appeared recently (International Mathematics Research Papers, vol. 2005, 2,
In 1988 Atiyah and Hitchin introduced a Poisson bracket (PB) on meromorphic functions defined on the Riemann sphere.
Can one replace the Riemann sphere by a Riemann surface of genus g > 0? Are there other natural Poisson structures?
We survey recent progress in these problems. It is based on the theory of classical completely integrable systems. | {"url":"http://cms.math.ca/Events/summer05/abs/st.html","timestamp":"2014-04-20T15:57:20Z","content_type":null,"content_length":"19844","record_id":"<urn:uuid:66708a08-bdc2-4be5-b51a-c1102285dfe1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
Skokie Calculus Tutor
Find a Skokie Calculus Tutor
I am a senior physics major at Northeastern Illinois University preparing to enter a PhD physics program after graduation. I have helped one high school physics student achieve success in her
class as well as solid conceptual understanding of the material. I have also assisted countless classmates...
19 Subjects: including calculus, chemistry, reading, English
I am very proficient in mathematics as I studied outside the USA at the advanced college level and focused on math all throughout my educational process. My tutoring experience includes working
with younger students under my own tutor's supervision and also working with fellow students in need of m...
21 Subjects: including calculus, chemistry, physics, statistics
...I look forward to helping you succeed in mathematics. I have a teaching certificate in mathematics issued by the South Carolina State Department of Education. During my two and a half years of
teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by the South Carolina Department of
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...After that was completed, I tutored kindergartners and first-graders in reading and phonics at Booker T. Washington K-8 elementary school in Birmingham, AL. I've been playing the clarinet
since I was 10, and now I'm 25.
10 Subjects: including calculus, reading, algebra 1, algebra 2
...I also provide enough practice problems for them to improve their skills. I make sure I understand the need of the students and help them with patience. This way I create an environment where
learning math becomes fun and productive.
11 Subjects: including calculus, geometry, algebra 2, trigonometry | {"url":"http://www.purplemath.com/skokie_il_calculus_tutors.php","timestamp":"2014-04-17T01:04:15Z","content_type":null,"content_length":"23774","record_id":"<urn:uuid:2a22cf7b-e47b-474b-a91b-6368b93670fd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
Theoretical Physics Publications
(1) D. J. O'Connor, B. L. Hu, and T. C. Shen, "Symmetry behavior in the Einstein universe: effect of spacetime curvature and arbitrary field coupling", Phys. Lett. 130 B, 31 (1983).
(2) T. C. Shen, B. L. Hu, and D. J. O'Connor, "Symmetry behavior of the static Taub universe: effect of curvature anisotropy", Phys. Rev. D 31, 2401 (1985).
(3) B. L. Hu, and T. C. Shen, "Weak angle from Kaluza-Klein theory with deformed internal space", Phys. Lett. 178 B, 373 (1986).
(4) T. C. Shen, and J. Sobczyk, "Higher-dimensional self-consistent solution with deformed internal spaces", Phys. Rev. D36, 397 (1987).
(5) T. C. Shen, "Bubbles without cores", Phys. Rev. D 37, 3537 (1988).
(6) T. C. Shen, "On the possibility of gravitationally induced semi-classical vacuum decay", Phys. Lett. 220 B, 42 (1989).
(7) J. C. Huang, and T. C. Shen, "Chiral symmetry breaking in QED for weak coupling", J. Phys. G 17, 573 (1991). | {"url":"http://www.usu.edu/nanolab/theoreticalPhysics.html","timestamp":"2014-04-19T17:17:10Z","content_type":null,"content_length":"3411","record_id":"<urn:uuid:dd7a3171-33aa-4dea-8d09-95096d18d496>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
TwistyPuzzles.com > Member Collections > Lesovik314 - My Collection
Type C WitTwo
2x2x2 Mini
Dayan Zhanchi 2x2x2
Puzzle made for the 25th anniversary of the cube. Good for speedcubing after lubricating.
The massproduced brother of the Ultimate Bandaged 3x3x3. This version uses flat tiles instead of Lego bricks.
Mass produced version.
Mass produced version.
Mass produced version.
Mass produced version.
Exactly like a standard Rubik's Cube, except the plastic is white.
A 3x3x3 that allows swapping of centers with edges.
Invented by Hidetoshi Takeji, Mirror Blocks is a simple 3x3 Rubik's cube, but with an interesting twist.
Sudoku cube with Numbers from 1-9 in right order
Sudoku cube with Numbers from 1-9 in any order
A 3x3x3 with circles on yellow and white faces. These faces can only be moved when the circles are restored.
3X3X3 (PICTURE)
A 3x3x3 custom sticker variation, filled with mathematical considerations, which shows a single closed loop line on it.
English version. Sold in the US.
Shengshou 4x4x4
Dayan+Mf8 4x4x4
2000 Edition.
Shengshou 5x5x5
A mass produced Mixup Cube. Part of a series of four similar puzzles. Compared with the "traditional" Mixup Cube this one has twice the pieces in each ring.
A mass produced Mixup Cube. Part of a series of four similar puzzles. Compared with the "3x3x3"-variant one ring is further split up.
A mass produced Mixup Cube. Part of a series of four similar puzzles. Compared with the "3x3x3"-variant two rings are further split up.
A mass produced Mixup Cube. Part of a series of four similar puzzles. Compared with the "3x3x3"-variant three rings are further split up.
A variation on the Helicopter Cube that uses curved cuts to reveal the otherwise hidden centerpieces.
The most basic cornerturning hexahedron. This version has six colours.
An incredible creation by Katsuhiko Okamoto. A very rare puzzle for many years it became mass produced in 2011.
Mefferts renamed version of the Fadi cube
A cornerturning hexahedron. Very similar to the MasterSkewb but without its corners.
The skewb is a very solid and satisfying puzzle to play with.
A Rubik’s Cube with edges that turn by gears.
A Gear Cube with eight geared edges replaced with four non-geared edges.
A Gear Cube with four geared edges replaced with four non-geared edges.
A 3-D puzzle solely based on gears.
A Gear Cube combined with the Mixup Cube. It allows 90°-moves.
Looks like a 2x4x4 but like the Mixup Cube allows strange moves.
A 3x3x4 that allows additional mixup turn.
A 3x4x4 that allows additional mixup turn.
A 4x4x4 that allows additional mixup turn.
A pillowed version of the Hexaminx.
Square-1 Ultimate
A modified Square-1 where the corners are split into two pieces
A new challenge to the Square-1 / Cube 21.
A new challenge to the Square-1 / Cube 21.
A 3x3x3 variation built by Katsuhiko Okamoto.
A skewb in shape of a rhombohedron.
A stupid cuboid built just for the sake of completeness.
It is not a boob cube but a boob can solve this one too.
A super Square-1 reduced to 2 layers.
A formerly custom now mass produced 2x2x3 cuboid, commonly known as the "Slim Tower"
Custom made 2x3x3 cuboid, invented by Tony Fisher.
Yet another Tony Fisher masterpiece. The first fully functional irregular cuboid ever.
Not the first fully functional 3x3x4 but the first mass produced.
Not the first fully functional 3x3x4 but the first mass produced.
Cavin's Puzzle
A built-up of the 3x3x4 (Circle) of the same inventor and producer.
The long awaited bigger brother of the 2x3x4.
Traiphum 3x5x5
A fully functional cuboid with three different dimensions which allows 90° moves on all axis.
The first mass produced 4x4x5.
A fully functional and fully proportional 4x4x6.
The first mass produced 4x5x5.
A 3x3x3 with one side extended with an additional rotating layer. Member of a whole series similar to the Crazy Plus 3x3x3s.
A 3x3x3 with two sides extended with an additional rotating layer. Member of a whole series similar to the Crazy Plus 3x3x3s.
A 3x3x3 with three sides extended with an additional rotating layer. Member of a whole series similar to the Crazy Plus 3x3x3s.
A 3x3x3 with five sides extended with an additional rotating layer. Member of a whole series similar to the Crazy Plus 3x3x3s.
A 3x3x3 with three sides extended with an additional rotating layer. Member of a whole series similar to the Crazy Plus 3x3x3s.
A 3x3x3 with four sides extended with an additional rotating layer. Member of a whole series similar to the Crazy Plus 3x3x3s.
A 3x3x4 with two layers removed but not hidden.
A mass produced 2x2x4 cuboid.
An enhanced version of the Floppy Cube from the same inventor. A floppy cube where you can turn the faces 90 degrees.
Maru Face Turning Octahedron
Geared version of a Magic Octahedron.
A commercial puzzle made by exchanging pieces of several Super Square-1's.
A new design of megaminx with ridges on the corner pieces.
The easiest cornerturning dodecahedron. The name is taken from its hexahedral cousin.
A mass produced Dodecahedral corners only Megaminx, which is known as "Kilominx".
The face turning dodecahedron with two layers per face. A Megaminx with an additional layer on each face.
A face turning dodecahedron with only 4 pieces along one edge.
Mf8 Megaminx
This puzzle tries to reimplement the Helicopter Cube in a dodecahedral solid. A mass produced version followed one year later.
In its time it was the most valuable puzzle known to exist. What else can somebody say about a puzzle sold for $3550?
Essentially a Deeper cut megaminx at heart, the Pyraminx crystal adds a new twist to the classic puzzle.
The second mass produced cornerturning dodecahedron.
A face turning dodecahedron. The cuts are deeper than in the Pyraminx Crystal.
A three layered Megaminx. One of the most famous custom twisty puzzles before massproduction started.
Traiphum 1x1x1 Morphix.
The pillowed brother of the Pyramorphix.
A Gear Cube transformed into the shape of Reuleaux tetrahedron.
The "geared" variant of the Pyraminx.
A higher-order Pyramorphix. A 3x3x3 transformed into the shape of a tetrahedron.
Traiphum Mastermorphix.
A 4x4x4 in shape of a tetrahedron.
QJ Pyraminx
Speed Pyraminx Ultimate
A 5x5x5 in shape of a tetrahedron.
Mass produced remake of the original Magic Jewel.
The puzzle is an extension of the Megaminx to a truncated icosahedron (soccerball or buckyball for the chemists). All 32 sides are rotatable.
A Floppy Cube with one piece removed and the rest transformed into the shape of a heart.
A 5x5x5 without corners, wings and edges. The X- and T-faces are distinguishable.
Lingao Magic
The 12 panel, five ring Magic by Lingao.
Lingao Ultra Master Magic
Point all 18 clocks to 12 to solve.
Keychain 1x1x3
6X6X6 & UP
YuXin 11x11x11
Shengshou 6x6x6
Shengshou 7x7x7
Shengshou 8x8x8
Shengshou 9x9x9 | {"url":"http://www.twistypuzzles.com/cgi-bin/cm-view.cgi?ckey=3120","timestamp":"2014-04-16T05:28:21Z","content_type":null,"content_length":"59334","record_id":"<urn:uuid:354689f0-c083-4f4b-b4d5-709181e260dc>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edgemoor, DE Precalculus Tutor
Find an Edgemoor, DE Precalculus Tutor
...Depending on the level of math and the personality of each student, I teach using many different teaching styles and math programs such as Everyday Math, Connected Math, PMI and traditional
text books. As a mother of three, I know what it is like to place your trust in someone to care for your child. Therefore, I treat every student as I would want my children to be treated.
12 Subjects: including precalculus, geometry, trigonometry, algebra 2
...Also, I have tutored students in ODE's for over ten years. I worked for close to three years as a pension actuary and have passed the first three exams given by the Society of Actuaries, which
rigorously cover such topics as calculus, probability, interest theory, modeling, and financial derivat...
19 Subjects: including precalculus, calculus, trigonometry, statistics
...Geometry is a branch of math concerned with points, lines, angles, shapes, and space (on a plane). It also develops skills such as logic, reasoning, and proofs. Many students have a hard time
with geometry because it is more visual than their previous math studies. I have been tutoring high school geometry for over twenty years.
39 Subjects: including precalculus, chemistry, physics, calculus
...I previously taught Algebra I, II, III, Geometry, Trigonometry, Precalculus, Calculus, Intro to Statistics, and SAT review in a public school. I have tutored students as well, with tasks
ranging from locating weak spots in arithmetic skills, assisting honors students looking for a jump start, he...
12 Subjects: including precalculus, calculus, statistics, geometry
...I taught full time for five years at the College of William and Mary, leaving to join my wife in Delaware. Since moving, I have been writing extensively while working part time as an SAT tutor.
I have learned a few tricks, and I know my stuff, but you will find I am a very down-to-earth person ...
22 Subjects: including precalculus, reading, English, algebra 1 | {"url":"http://www.purplemath.com/edgemoor_de_precalculus_tutors.php","timestamp":"2014-04-20T11:11:07Z","content_type":null,"content_length":"24613","record_id":"<urn:uuid:ecb5525c-8132-4b45-ae6b-49abc15f64f5>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advanced Matplotlib: Part 1
by Sandro Tosi | October 2009 | Open Source
In this article by Sandro Tosi we are about to explore some advanced aspects of Matplotlib. The topics that we are going to cover in detail are:
• Matplotlib's object-oriented interface
• Subplots and multiple figures
• Additional and shared axes
• Logarithmic scaled axes
• Date plotting with ticks formatting and locators
• Text properties, fonts, LaTeX typewriting
• Contour plots and image plotting
The basis for all of these topics is the object-oriented interface.
Object-oriented versus MATLAB styles
We have seen a lot of examples, and in all of them we used the matplotlib.pyplot module to create and manipulate the plots, but this is not the only way to make use of the Matplotlib plotting power.
There are three ways to use Matplotlib:
• pyplot: The module used so far in this article
• pylab: A module to merge Matplotlib and NumPy together in an environment closer to MATLAB
• Object-oriented way: The Pythonic way to interface with Matplotlib
Let's first elaborate a bit about the pyplot module: pyplot provides a MATLAB-style, procedural, state-machine interface to the underlying object-oriented library in Matplotlib.
A state machine is a system with a global status, where each operation performed on the system changes its status.
matplotlib.pyplot is stateful because the underlying engine keeps track of the current figure and plotting area information, and plotting functions change that information. To make it clearer, we did
not use any object references during our plotting; we just issued a pyplot command, and the changes appeared in the figure.
At a higher level, matplotlib.pyplot is a collection of commands and functions that make Matplotlib behave like MATLAB (for plotting).
This is really useful when doing interactive sessions, because we can issue a command and see the result immediately, but it has several drawbacks when we need something more such as low-level
customization or application embedding.
If we remember, Matplotlib started as an alternative to MATLAB, where we have at hand both numerical and plotting functions. A similar interface exists for Matplotlib, and its name is pylab.
pylab (do you see the similarity in the names?) is a companion module, installed next to matplotlib that merges matplotlib.pyplot (for plotting) and numpy (for mathematical functions) modules in a
single namespace to provide an environment as near to MATLAB as possible, so that the transition would be easy.
We and the authors of Matplotlib discourage the use of pylab, other than for proof-of-concept snippets. While being rather simple to use, it teaches developers the wrong way to use Matplotlib.
The third way to use Matplotlib is through the object-oriented interface (OO, from now on). This is the most powerful way to write Matplotlib code because it allows for complete control of the result;
however, it is also the most complex. This is the Pythonic way to use Matplotlib, and it's highly encouraged when programming with Matplotlib rather than working interactively. We will use it a lot
from now on as it's needed to go down deep into Matplotlib.
Please allow us to highlight again the preferred style that the author of this article, and the authors of Matplotlib want to enforce: a bit of pyplot will be used, in particular for convenience
functions, and the remaining plotting code is either done with the OO style or with pyplot, with numpy explicitly imported and used for numerical functions.
In this preferred style, the initial imports are:
import matplotlib.pyplot as plt
import numpy as np
In this way, we know exactly which module the function we use comes from (due to the module prefix), and it's exactly what we've always done in the code so far.
Now, let's present the same piece of code expressed in the three possible forms which we just described.
First, we present it in the pyplot-only style:
In [1]: import matplotlib.pyplot as plt
In [2]: import numpy as np
In [3]: x = np.arange(0, 10, 0.1)
In [4]: y = np.random.randn(len(x))
In [5]: plt.plot(x, y)
Out[5]: [<matplotlib.lines.Line2D object at 0x1fad810>]
In [6]: plt.title('random numbers')
In [7]: plt.show()
The preceding code snippet results in a plot window showing the random values under the title 'random numbers'.
Now, let's see how we can do the same thing using the pylab interface:
$ ipython -pylab
In [1]: x = arange(0, 10, 0.1)
In [2]: y = randn(len(x))
In [3]: plot(x, y)
Out[3]: [<matplotlib.lines.Line2D object at 0x4284dd0>]
In [4]: title('random numbers')
In [5]: show()
Note that:
ipython -pylab
is not the same as running ipython and then:
from pylab import *
This is because ipython's -pylab switch, in addition to importing everything from pylab, also enables a specific ipython threading mode so that both the interactive interpreter and the plot window can
be active at the same time.
Finally, let's make the same chart by using the OO style, but with some pyplot convenience functions:
In [1]: import matplotlib.pyplot as plt
In [2]: import numpy as np
In [3]: x = np.arange(0, 10, 0.1)
In [4]: y = np.random.randn(len(x))
In [5]: fig = plt.figure()
In [6]: ax = fig.add_subplot(111)
In [7]: l, = plt.plot(x, y)
In [8]: t = ax.set_title('random numbers')
In [9]: plt.show()
The pylab code is the simplest, pyplot is in the middle, while the OO is the most complex or verbose.
As the Python Zen teaches us, "Explicit is better than implicit" and "Simple is better than complex" and those statements are particularly true for this example: for simple interactive sessions,
pylab or pyplot are the perfect choice because they hide a lot of complexity, but if we need something more advanced, then the OO API makes clearer where things are coming from, and what's going on.
This expressiveness will be appreciated when we will embed Matplotlib inside GUI applications.
From now on, we will start presenting our code using the OO interface mixed with some pyplot functions.
A brief introduction to Matplotlib objects
Before we can go on in a productive way, we need to briefly introduce which Matplotlib objects compose a figure.
Let's see from the higher levels to the lower ones how objects are nested:
│Object │Description │
│FigureCanvas│Container class for the Figure instance │
│Figure │Container for one or more Axes instances │
│Axes │The rectangular areas to hold the basic elements, such as lines, text, and so on │
Our first (simple) example of OO Matplotlib
In the previous pieces of code, we had transformed this:
In [5]: plt.plot(x, y)
Out[5]: [<matplotlib.lines.Line2D object at 0x1fad810>]
In [7]: l, = plt.plot(x, y)
The new code uses an explicit reference, allowing a lot more customizations. As we can see in the first piece of code, the plot() function returns a list of Line2D instances, one for each line (in
this case, there is only one), so in the second code, l is a reference to the line object, so every operation allowed on Line2D can be done using l.
For example, we can set the line color with l.set_color('r'), instead of using the color keyword argument to plot(); the line information can thus be changed after the plot() call.
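Any other Line2D property can be changed the same way. As a quick sketch (not part of the original listing; the values are arbitrary):
l, = plt.plot(x, y)
l.set_linewidth(2.0)    # a thicker stroke
l.set_linestyle('--')   # dashed instead of solid
l.set_alpha(0.5)        # semi-transparent line
plt.draw()              # redraw the figure so the changes become visible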
In the previous section, we have seen a couple of important functions without introducing them. Let's have a look at them now:
• fig = plt.figure(): This function returns a Figure, where we can add one or more Axes instances.
• ax = fig.add_subplot(111): This function returns an Axes instance, where we can plot (as done so far), and this is also the reason why we call the variable referring to that instance ax (from
Axes). This is a common way to add an Axes to a Figure, but add_subplot() does a bit more: it adds a subplot. So far we have only seen a Figure with one Axes instance, so only one area where we
can draw, but Matplotlib allows more than one.
add_subplot() takes three parameters:
fig.add_subplot(numrows, numcols, fignum)
• numrows represents the number of rows of subplots to prepare
• numcols represents the number of columns of subplots to prepare
• fignum varies from 1 to numrows*numcols and specifies the current subplot (the one used now)
Basically, we describe a matrix of numrows*numcols subplots that we want into the Figure; please note that fignum is 1 at the upper-left corner of the Figure and it's equal to numrows*numcols at the
bottom-right corner. The following table should provide a visual explanation of this:
│numrows=2, numcols=2, fignum=1 │numrows=2, numcols=2, fignum=2 │
│numrows=2, numcols=2, fignum=3 │numrows=2, numcols=2, fignum=4 │
Some usage examples are:
ax = fig.add_subplot(1, 1, 1)
Where we want a Figure with just a single plot area (like in all the previous examples).
ax2 = fig.add_subplot(2, 1, 2)
Here, we define the plot's matrix as made of two subplots in two different rows, and we want to work on the second one (fignum=2).
An interesting feature is that we can specify these numbers as a single parameter merging the numbers in just one string (as long as all of them are less than 10). For example:
ax2 = fig.add_subplot(212)
which is equivalent to:
ax2 = fig.add_subplot(2, 1, 2)
A simple example can clarify a bit:
In [1]: import matplotlib.pyplot as plt
In [2]: fig = plt.figure()
In [3]: ax1 = fig.add_subplot(211)
In [4]: ax1.plot([1, 2, 3], [1, 2, 3]);
In [5]: ax2 = fig.add_subplot(212)
In [6]: ax2.plot([1, 2, 3], [3, 2, 1]);
In [7]: plt.show()
We will use a simple naming convention for the variables that we are using. For example, we call all the Axes instance variables ax, and if there is more than one variable in the same code, then we
add numbers at the end, for example, ax1, ax2, and so on.
This will allow us to make changes to the Axes instance after it's created, and in the case of multiple Axes, it will allow us to modify any of them after their creation. The same applies for
multiple figures.
Multiple figures
Matplotlib also provides the capability to draw not only multiple Axes inside the same Figure, but also multiple figures.
We can do this by calling figure() multiple times, keeping a reference to the Figure object and then using it to add as many subplots as needed in exactly the same way as having a single Figure.
We can now see a code with two calls to figure():
In [1]: import matplotlib.pyplot as plt
In [2]: fig1 = plt.figure()
In [3]: ax1 = fig1.add_subplot(111)
In [4]: ax1.plot([1, 2, 3], [1, 2, 3]);
In [5]: fig2 = plt.figure()
In [6]: ax2 = fig2.add_subplot(111)
In [7]: ax2.plot([1, 2, 3], [3, 2, 1]);
In [8]: plt.show()
This code snippet generates two windows, each containing a plot with one line.
Note how the Axes instances are generated by calling the add_subplot() method on the two different Figure instances. As a side note, when using pylab or pyplot, we can call figure() with an integer
parameter to access a previously created Figure: figure(1) returns a reference to the first Figure, figure(2) to the second one, and so on.
Additional Y (or X) axes
There are situations where we want to plot two sets of data on the same image. In particular, this is the case when for the same X variable, we have two datasets (consider the situation where we take
two measurements at the same time, and we want to plot them together to spot some relationships).
Matplotlib can do it:
In [1]: import matplotlib.pyplot as plt
In [2]: import numpy as np
In [3]: x = np.arange(0., np.e, 0.01)
In [4]: y1 = np.exp(-x)
In [5]: y2 = np.log(x)
In [6]: fig = plt.figure()
In [7]: ax1 = fig.add_subplot(111)
In [8]: ax1.plot(x, y1);
In [9]: ax1.set_ylabel('Y values for exp(-x)');
In [10]: ax2 = ax1.twinx() # this is the important function
In [11]: ax2.plot(x, y2, 'r');
In [12]: ax2.set_xlim([0, np.e]);
In [13]: ax2.set_ylabel('Y values for ln(x)');
In [14]: ax2.set_xlabel('Same X for both exp(-x) and ln(x)');
In [15]: plt.show()
What's really happening here is that two different Axes instances are placed such that one is on top of the other. The data for y1 will go in the first Axes instance, and the data for y2 will go in
the second Axes instance.
The twinx() function does the trick: it creates a second set of axes, putting the new ax2 axes at the exact same position of ax1, ready to be used for plotting.
This is the reason why we had to set the red color for the second line: the plot information was reset so that line would have been blue, as if it was part of a completely new figure.
We can see that by using ax1 and ax2 for referring to Axes instances, we are able to modify the information (in this case, the axes labels) for both of them. Of course, since X is shared between the
two, we have to call set_xlabel() for just one Axes instance.
Using two different Axes also allows us to have different scales for the two plots.
The complementary function, twiny(), allows us to share the Y-axis with two different X-axes.
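A minimal sketch of the twiny() case (not taken from the original article) mirrors the previous example, reusing its x, y1, and y2 arrays:
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(y1, x);
ax1.set_xlabel('X values for exp(-x)');
ax2 = ax1.twiny()   # second X axis on top, Y axis shared with ax1
ax2.plot(y2, x, 'r');
ax2.set_xlabel('X values for ln(x)');
plt.show()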
Logarithmic Axes
Another interesting feature of Matplotlib is the possibility to set the axes scale to a logarithmic one. We can independently set the X, the Y, or both axes to a logarithmic scale.
Let's see an example where both subplots and the logarithmic scale are put together:
In [1]: import matplotlib as mpl
In [2]: mpl.rcParams['font.size'] = 10.
In [3]: import matplotlib.pyplot as plt
In [4]: import numpy as np
In [5]: x = np.arange(0., 20, 0.01)
In [6]: fig = plt.figure()
In [7]: ax1 = fig.add_subplot(311)
In [8]: y1 = np.exp(x/6.)
In [9]: ax1.plot(x, y1);
In [10]: ax1.grid(True)
In [11]: ax1.set_yscale('log')
In [12]: ax1.set_ylabel('log Y');
In [13]: ax2 = fig.add_subplot(312)
In [14]: y2 = np.cos(np.pi*x)
In [15]: ax2.semilogx(x, y2);
In [16]: ax2.set_xlim([0, 20]);
In [17]: ax2.grid(True)
In [18]: ax2.set_ylabel('log X');
In [19]: ax3 = fig.add_subplot(313)
In [20]: y3 = np.exp(x/4.)
In [21]: ax3.loglog(x, y3, basex=3);
In [22]: ax3.grid(True)
In [23]: ax3.set_ylabel('log X and Y');
In [24]: plt.show()
The preceding code produces three stacked subplots: one with a logarithmic Y axis, one with a logarithmic X axis, and one with logarithmic scaling on both axes.
Note how the characters in this image are smaller than those in the other plots. This is because we had to reduce the font size to avoid the labels and plots overlapping with each other.
semilogx() (and the twin function semilogy()) is a convenience function that merges plot() and ax.set_xscale('log') in a single call. The same holds for loglog(), which makes a plot with log
scaling on both X and Y axes.
The default logarithmic base is 10, but we can change it with the basex and basey keyword arguments for their respective axes. The functions set_xscale() or set_yscale() are more general as they can
also be applied to polar plots, while semilogx(), semilogy(), or loglog() work for lines and scatter plots.
Share axes
With twinx(), we have seen that we can plot two Axes on the same plotting area sharing one axis. But what if we want to draw more than two plots sharing an axis? What if we want to plot on different
Axes in the same figure, still sharing that axis? Some areas where we might be interested in such kind of graphs are:
• Financial data—comparing the evolution of some economic indicators over the same time
• Hardware testing—plotting the electrical signals received at each pin of a parallel or serial port
• Health status— showing the development of some medical information in a given time frame (such as blood pressure, beating heart rate, weight, and so on)
Note that while the shared axis must carry the same unit of measure for every plot, the other axis is free to have any unit; this is very important as it allows us to group heterogeneous information.
Matplotlib makes it very easy to share an axis (for example, the X one) on different Axes instances, for example, pan and zoom actions on one graph are automatically replayed to all the others.
In [1]: import matplotlib as mpl
In [2]: mpl.rcParams['font.size'] = 11.
In [3]: import matplotlib.pyplot as plt
In [4]: import numpy as np
In [5]: x = np.arange(11)
In [6]: fig = plt.figure()
In [7]: ax1 = fig.add_subplot(311)
In [8]: ax1.plot(x, x);
In [9]: ax2 = fig.add_subplot(312, sharex=ax1)
In [10]: ax2.plot(2*x, 2*x);
In [11]: ax3 = fig.add_subplot(313, sharex=ax1)
In [12]: ax3.plot(3*x, 3*x);
In [13]: plt.show()
Again, we have to use a smaller font for texts. When printed, it looks like a standard subplot image. However, if you run the code on ipython, then you'll observe that when zooming, panning, or
performing other similar activities on a subplot, all the others will be modified too, according to the same transformation.
As we can expect, there are a couple of keyword arguments; sharex and sharey, and it's also possible to specify both of them together. In particular, this is useful when the subplots show data with
the same units of measure.
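For instance, a 2x2 grid where panning or zooming any one subplot updates all the others on both axes could be set up like this (a sketch, not from the original article):
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222, sharex=ax1, sharey=ax1)
ax3 = fig.add_subplot(223, sharex=ax1, sharey=ax1)
ax4 = fig.add_subplot(224, sharex=ax1, sharey=ax1)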
About the Author :
Sandro Tosi is a Debian Developer, Open Source evangelist, and Python enthusiast. After completing a B.Sc. in Computer Science from the University of Firenze, he worked as a consultant for an energy
multinational as System Analyst and EAI Architect, and now works as System Engineer for one of the biggest and most innovative Italian Internet companies.
Not very object oriented
As I can see, all the examples use pyplot, which is stateful. This isn't a very object-oriented approach, where you would avoid using pyplot and pylab.
, 1997
"... Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single phase interior-point type method,
nevertheless it yields either an approximate optimal solution or detects a possible infeasibility of th ..."
Cited by 13 (1 self)
Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single phase interior-point type method,
nevertheless it yields either an approximate optimal solution or detects a possible infeasibility of the problem. In this paper we specialize the algorithm to the solution of general smooth convex
optimization problems that also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we
present computational results for solving quadratically constrained quadratic programming and geometric programming problems, where some of the problems contain more than 100,000 constraints and
variables. The results indicate that the proposed algorithm is also practically efficient.
, 2010
"... Recently, Bai et al. [Bai Y.Q., Ghami M. El, Roos C., 2004. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM Journal on
Optimization, 15(1), 101-128.] provided a unified approach and comprehensive treatment of interior-point methods for l ..."
Recently, Bai et al. [Bai Y.Q., Ghami M. El, Roos C., 2004. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM Journal on Optimization, 15
(1), 101-128.] provided a unified approach and comprehensive treatment of interior-point methods for linear optimization based on the class of eligible kernel functions. In this paper we generalize
the analysis presented in the above paper to the Cartesian P∗(κ)-linear complementarity problem over symmetric cones via the machinery of the Euclidean Jordan algebras. The symmetry of the resulting
search directions is forced by using the Nesterov-Todd scaling scheme. The iteration bounds for the algorithms are performed in a systematic scheme, which highly depend on the choice of the eligible
kernel functions. Moreover, we derive iteration bounds that match the currently best known iteration bounds for large- and small-update methods, namely O((1 + 2κ)√r log r log(r/ε)) and
O((1 + 2κ)√r log(r/ε)), respectively, where r denotes the rank of the associated Euclidean Jordan algebra and ε the desired accuracy.
How to Tutor Mathematics
Edited by Nevcamion, Monica, BriDGeT, Lillian May and 11 others
For many people mathematics is one of the most challenging subjects in school. Here is a good method of helping them out.
1.
Choose a specific problem. Usually this will be a problem from the student's homework, textbook, or class. You should choose the easiest problem the student has difficulty solving.
2.
Ask the student to attempt the problem. Have them explain to you both what they do in each step and why.
3.
Correct errors. Often the student cannot find the correct solution because they made an error in the solution algorithm, algebra, or arithmetic. If so, correct the error for them and tell them
what they should have done.
4.
Remind the student that they are intelligent. Making mistakes is a part of learning, and even the best mathematicians still make mistakes sometimes. Making mistakes does not mean that the student
is stupid or that they "suck at math". Especially with younger students, it is important to keep up their confidence.
5.
Guide the student through the problem. When the student asks "What do I do next?" Do not show them how to solve the problem, instead show them how to do the next step in general. For example: If
the student has come to the sum of two unlike fractions, show them how to add two unlike fractions in general (using variables or numerals as appropriate to their level of mathematics).
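For instance, the general rule a/b + c/d = (ad + bc)/(bd) turns 1/3 + 1/4 into (4 + 3)/12 = 7/12.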
6.
Let the student try again. Now that you have corrected their errors and guided them in the right direction, let the student attempt to progress through the problem once more.
7.
Repeat. Correct errors and Guide your student through the problem until they obtain the correct solution. Then choose a new problem and do it all over again.
8.
Congratulate the student. When they solve a problem all on their own, note that they now understand the material and congratulate them on this accomplishment.
• Make sure you know how to do that type of mathematics. Don't tutor stuff you don't know.
• If you know what level of mathematics you will be tutoring, freshen up on the ideas and methods before you meet the student for tutoring.
• If the student you are tutoring is still having difficulties with mathematics, ask him/her what they like to do on their time off and find a mathematical connection to the problems they are
having difficulties with.
• Study the history of mathematics. This way you can tell students where a problem comes from and why it was considered important. Often you can also learn of geometric representations or
alternative methods of solution.
• Watch out for students who will try to get you to solve all of the problems for them. You are tutoring them, not doing their homework. | {"url":"http://www.wikihow.com/Tutor-Mathematics","timestamp":"2014-04-16T13:26:40Z","content_type":null,"content_length":"65388","record_id":"<urn:uuid:4cdd5aec-3644-46d6-86fe-58276e0a33c9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Tools Discussion: All Topics, Jeff's Wednesday ToolFest Question
Discussion: All Topics
Topic: Jeff's Wednesday ToolFest Question
Subject: RE: Jeff's Wednesday ToolFest Question
Author: Mathman
Date: Jun 15 2005
On Jun 15 2005, Jeff L wrote:
All I can suggest is available practice sheet generators [you can make your own
or I can tell you how using a spreadsheet]. As you say, there is "skill"
involved, and as with any other similar endeavour, skill comes only with
practice. So ...practice, practice, practice.
Look for other related difficulties they may be having, and strengthen those
[also with practice]. I'm referring here particularly to the times-table,
which is not usually memorised by those who have sought other means such as
software, or finger-math for even that. One skill builds on another.
For long division they need to be able to use the "gzinta rule". That is, they
need to gain some skill at recognising what numbers divide into which [a
consequence of really knowing the times-table]. Again, that is a skill that
takes practice, and so an earlier exercise in problems such as 127/25 will
require the knowledge that division requires multiples of 5 [or 25], and,
further, that the closest one less than 127 is 125, so a result of "5" is a good
start to the first guess. Other first "guesses" could include "6", since the
"2" of "25" divides the "12" of "125" by that amount, but further work [doing
the multiplication for subtraction] would reveal that to be wrong. Long
division is not a simple single stage algorithm. It is complex, and depends
much upon earlier skills. They must be taught not just "how to" do something,
but what to look for while doing it. It might be advantageous to have them do
an exercise with several problems and just work each out to determine the first
digit. Their reward will be marks for doing them only that far, and they will
see some success. Then move on a stage to the first subtraction then carry down
to the next figure, and so on, ending there for marks to that point before
moving ahead until they can do the entire thought process. It is in fact a
skill, a method, a device, not a natural phenomenon, so they need to learn the
process ...step by step. The skill involves even going up blind alleys, as with
guessing "6" rather than "5" in the above, but coming back very quickly ...with
Software? I don't think it is an advantage at all in this important early
study, except to generate problems for practice, which involves earlier
knowledge and skill ...i.e. "number sense". Such problems [and final answers]
can be readily generated using a spreadsheet.
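
For what it's worth, the same kind of practice sheet can also be generated with a few lines of C instead of a spreadsheet; this is only a rough sketch, and the divisor and quotient ranges are arbitrary choices of mine:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Print a sheet of long-division practice problems whose answers come
     * out even, by building each dividend as divisor * quotient. */
    int main(void)
    {
        srand((unsigned)time(NULL));
        for (int i = 1; i <= 20; i++) {
            int divisor  = 11 + rand() % 89;   /* 2-digit divisor, 11..99 */
            int quotient = 12 + rand() % 888;  /* 2- or 3-digit quotient  */
            int dividend = divisor * quotient;
            printf("%2d)  %6d / %2d = ______   (answer: %d)\n",
                   i, dividend, divisor, quotient);
        }
        return 0;
    }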
I'm going back at least 30 years to when I was teaching myself to program in
BASIC. Very early in the game [and it was a game] I set myself the task of
developing an algorithm to calculate a long division to any desired number of
decimal places, not those limited by the burnt in instructions given by "PRINT
12/7". i decided on using string manipulations. The point is that is was then
that I discovered how dmaned difficult long division really is. I then stopped
taking it for granted, and thinking it was easy for them because it was easy for
me. It's not. It requires much thought, and much practice for the beginner.
Talking about memory, please don't ask me to recall how I did that algorithm,
but I did, and it worked ...another skill that needs constant practice.
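
As a rough illustration of why the process has so many interlocking steps (this is my sketch, not the BASIC string version described above), here is the schoolbook digit-at-a-time method in C, printing a chosen number of decimal places:

    #include <stdio.h>

    /* Schoolbook long division of a/b, printing `places` digits after the
     * decimal point.  Each loop pass is exactly the pupil's routine: find
     * the next quotient digit, subtract, bring down the next figure. */
    void long_divide(unsigned a, unsigned b, int places)
    {
        printf("%u", a / b);
        unsigned remainder = a % b;
        if (places > 0)
            putchar('.');
        for (int i = 0; i < places; i++) {
            remainder *= 10;              /* "bring down" a zero          */
            printf("%u", remainder / b);  /* next quotient digit          */
            remainder %= b;               /* what is left for the next step */
        }
        putchar('\n');
    }

    int main(void)
    {
        long_divide(12, 7, 20);   /* the PRINT 12/7 example, to 20 places */
        return 0;
    }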
> Is there good interactive-type software for math before algebra?
I have a degree in Mathematics and Physics from the Danish University of Aarhus, comparable to a master's degree with thesis, majoring in Mathematics. In 1991-92 I was a visiting scholar at UCLA, Los
Angeles, following graduate courses in Applied Mathematics. Since 1992 I have been a teacher at a high school (gymnasium) in Denmark. Special interests: applied mathematics, graphics and popularizing mathematics.
MathGroup Archive: February 2009 [00489]
Re: Problem with DSolve
• To: mathgroup at smc.vnet.net
• Subject: [mg96495] Re: Problem with DSolve
• From: dh <dh at metrohm.com>
• Date: Sat, 14 Feb 2009 03:14:24 -0500 (EST)
• References: <gn3bnf$pt9$1@smc.vnet.net>
Hi Tony,
solve first the general problem:
sol = y[x] /. DSolve[{y'[x] == .02*y[x] - y[x]^2}, y[x], x][[1]]
then determine the constant C[1]:
Solve[(sol /. x -> 0) == a, C[1]]
hope this helps, Daniel
Tony wrote:
> can anyone help what is wrong?
> On version 7 I enter
> DSolve[{y'[x]==.02*y[x]-y[x]^2,y[0]==a},y[x],x]
> and get
> During evaluation of In[58]:= Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>
> During evaluation of In[58]:= Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>
> During evaluation of In[58]:= DSolve::bvnul: For some branches of the general solution, the given boundary conditions lead to an empty solution. >> | {"url":"http://forums.wolfram.com/mathgroup/archive/2009/Feb/msg00489.html","timestamp":"2014-04-18T11:13:13Z","content_type":null,"content_length":"25885","record_id":"<urn:uuid:3ec11c3f-2176-4102-b5b0-f2c726baa85e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tips on optimizing these functions
Hello everyone,
I wrote a bunch of recursive functions to operate on multi-dimensional
matrices. The matrices are allocated dynamically in a non-contiguous way,
i.e. as an array of pointers pointing to arrays of data, or other pointers
if the matrix has more than 2 dimensions.
The parameters passed to these functions are:
- current_dimension: counts (from 0 to dimensions-1) the matrix
dimension on which the function is working, it's the variable passed on
the stack by the recursion
- dimensions: number of matrix's dimensions
- elem_size: size of the matrix's elements
- dimensions_sizes: a vector containing the 'size' of each dimension
For example, to work on a 10x20 matrix of integers, following the
ordering of above, we would pass:
(0,2,sizeof(int),(unsigned int [2]){10,20})
for a 10x20x15 one, we would pass
(0,3,sizeof(int),(unsigned int [3]){10,20,15})
The functions work fast for allocation and freeing, 'cause calls to
malloc and free take up most of the execution time. They're somewhat slow
at copying or initialising matrices. By initialisation I mean assigning a
scalar value to the elements of the matrix.
I've done some benchmarks with copying and initialisation. Compared to a
specific-nested-loop solution, the functions take up to twice the time.
However, turning on some optimization flags, specifically '-O3' with gcc,
the gap between the recursive and the specific solution reduces to 20%.
So, have you got any advice about optimizing this code?
Other suggestions are welcome as well.
Here follows the copying function. The initialising function is almost identical.
NB: to better understand the code you should imagine working with a bi-dimensional matrix (implemented as a pointer to pointer in the code). The
recursive step casts either the matrix to a vector, if the function has reached the elements' dimension, ending the recursion, or the rows of the
matrix to a bi-dimensional matrix (again, pointer to pointer), continuing the recursion.
typedef unsigned char byte;

// this one copies one row of the matrix. The row is supposed to store the
// values of the elements, not pointers
void _copy_row(void* dest, void* src, unsigned short elem_size, unsigned int n)
{
    unsigned short length;
    byte *d1, *d2;

    d1 = (byte*)dest;
    d2 = (byte*)src;

    // copy the n elements byte by byte
    while (n > 0)
    {
        for (length = 0; length < elem_size; length++)
        {
            (*d1) = (*d2);
            d1++;
            d2++;
        }
        n--;
    }
}
// this is the recursive function
void _vec_copy(byte current_dimension, byte dimensions, unsigned short elem_size,
               unsigned int* dimensions_size, void** restrict dest, void** restrict src)
{
    int i; // row index

    if (current_dimension < dimensions)
    {
        if (current_dimension == dimensions - 1)
            // last dimension: the rows hold the actual elements, so copy them
            _copy_row((void*)dest, (void*)src, elem_size,
                      dimensions_size[current_dimension]);
        else
            // intermediate dimension: the rows hold pointers, so recurse
            for (i = 0; i < dimensions_size[current_dimension]; i++)
                _vec_copy(current_dimension + 1, dimensions, elem_size,
                          dimensions_size, (void**)dest[i], (void**)src[i]);
    }
}
Re: Prediction of local code modifications
Chris F Clark <cfc@shell01.TheWorld.com>
Wed, 02 Apr 2008 11:24:17 -0400
From comp.compilers
From: Chris F Clark <cfc@shell01.TheWorld.com>
Newsgroups: comp.compilers
Date: Wed, 02 Apr 2008 11:24:17 -0400
Organization: The World Public Access UNIX, Brookline, MA
References: 08-03-105 08-03-109 08-04-003
Keywords: code, optimize
Posted-Date: 02 Apr 2008 22:54:48 EDT
Tim Frink <plfriko@yahoo.de> writes:
> I'm not sure if dynamic programming is an approach that I can apply to
> my problem. When I understand the idea of dynamic programming
> correctly, it exploits the idea of "overlapping subproblems" and
> "memoization", i.e. it is assumed that the problem can be divided into
> independent subproblems which can be solved separately and then their
> optimal solution can be used to construct the global optimal solution.
> For my problem with the alignment I could divide the code into smaller
> subproblems where I could try to find an optimal local
> solution. However, these subproblems are not independent. When I move
> some code locally in one place (let's say that's the region of the
> first subproblem), then this might possibly also influence some
> following code in another region that I consider as a further
> subproblem. Thus, calculating separate optimal local solutions and
> them combine them will not work for me.
Actually, that is exactly the kind of problem dynamic programming is
designed to solve. The criteria for dynamic programming is only that
if you have already picked all the other problems with the value that
leads to the optimal solution, then you can pick the value to the last
problem that leads to the optimal solution (and so on working
backwards). Dynamic programming does not require the choices to be
independent (and in fact is designed to solve the case where the
choices are interdependent). If the choices are independent, simpler
solutions can be used.
Thus, dynamic programming is often a backtracking approach, you pick a
solution for each of the problems in some order (often a greedy
algorithm is used to come up with the initial solution) and compute
the final score. Then, you revisit the items in the opposite order
you originally picked them and try a different value for each one,
until you have the best result. It is useful to think of the choices
as a tree, where each choice made constrains the other subsequent
choices (i.e. when you put one block in a regions of memory, you are
then constrained not to put another block in that same memory), and
you are visiting the tree of all possible choices in an organized
fashion. Now, as I recall (and I haven't done any significant dynamic
programming recently), there usually is a method to prune unfruitful
subtree explorations, i.e. a way to determine if changing the value of
some value cannot possibly lead to a better solution than the one that
has already been calculated, but that may or may not be a required
feature of the method.
It is worth noting, that because of the required backtracking, dynamic
programming solutions usually grow exponentially with input problem
size. That is, if your problem adds one more binary decision to the
previous problem, the new problem takes roughly twice as long to
solve, because you have doubled the size of tree you have to explore
to find the correct solution.
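
A minimal sketch of the memoized search described above (my addition; the block/offset model and the cost() function are toy stand-ins for the real alignment problem):

    #include <string.h>
    #include <stdio.h>

    /* Toy model: N code blocks, each of which can be given one of K
     * alignment offsets.  The penalty for block i depends both on its own
     * offset and on the offset chosen for block i-1, so the choices are
     * interdependent, as in the original problem. */
    #define N 8
    #define K 4
    #define INF 1000000000

    /* stand-in cost function; a real one would model cache lines, padding, ... */
    static int cost(int i, int prev_offset, int cur_offset)
    {
        return (cur_offset * (i + 1) + prev_offset) % 7;
    }

    static int  memo[N][K];
    static char seen[N][K];

    /* best total cost for blocks i..N-1, given block i-1 got prev_offset */
    static int best_from(int i, int prev_offset)
    {
        if (i == N)
            return 0;
        if (seen[i][prev_offset])
            return memo[i][prev_offset];

        int best = INF;
        for (int cur = 0; cur < K; cur++) {
            int c = cost(i, prev_offset, cur) + best_from(i + 1, cur);
            if (c < best)
                best = c;
        }
        seen[i][prev_offset] = 1;
        return memo[i][prev_offset] = best;
    }

    int main(void)
    {
        memset(seen, 0, sizeof seen);
        printf("best total cost: %d\n", best_from(0, 0));
        return 0;
    }

The memo table is what keeps the tree walk from revisiting identical subtrees; whether that pays off in practice depends on how compactly the "state" left behind by the earlier choices can be summarized.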
Hope this helps,
Chris Clark Internet: christopher.f.clark@compiler-resources.com
Compiler Resources, Inc. or: compres@world.std.com
23 Bailey Rd Web Site: http://world.std.com/~compres
Berlin, MA 01503 voice: (508) 435-5016
USA fax: (978) 838-0263 (24 hours)
Strength of Materials Lecture Notes Anna University ME2254 SM Notes
Strength of Materials Lecture Notes, Anna University – ME2254 Lecture Notes and Subject Notes for the MECHANICAL, PRODUCTION and AUTOMOBILE 4th Semester
Here we provide the Anna University ME2254 Strength of Materials notes (unit-wise lecture notes and syllabus content) for download, for the fourth semester of the Mechanical, Production and Automobile Engineering programmes.
ME2254 STRENGTH OF MATERIALS LECTURE NOTES ANNA UNIVERSITY - ME2254 NOTES
Subjects : Strength of Materials
Subject Code : ME2254
Department : MECHANICAL, PRODUCTION AND AUTOMOBILE Engineering
Semester : 4th Sem
University : Anna University, Chennai
Content in ME2254 Notes :
ME2254 STRENGTH OF MATERIALS
(Common to Mechanical, Automobile & Production)
UNIT I STRESS, STRAIN AND DEFORMATION OF SOLIDS
Rigid and Deformable bodies – Strength, Stiffness and Stability – Stresses; Tensile,
Compressive and Shear – Deformation of simple and compound bars under axial load –
Thermal stress – Elastic constants – Strain energy and unit strain energy – Strain energy
in uniaxial loads.
UNIT II BEAMS – LOADS AND STRESSES
Types of beams: Supports and Loads – Shear force and Bending Moment in beams –
Cantilever, Simply supported and Overhanging beams – Stresses in beams – Theory of
simple bending – Stress variation along the length and in the beam section – Effect of
shape of beam section on stress induced – Shear stresses in beams – Shear flow
UNIT III TORSION
Analysis of torsion of circular bars – Shear stress distribution – Bars of Solid and hollow
circular section – Stepped shaft – Twist and torsion stiffness – Compound shafts – Fixed
and simply supported shafts – Application to close-coiled helical springs – Maximum
shear stress in spring section including Wahl Factor – Deflection of helical coil springs
under axial loads – Design of helical coil springs – stresses in helical coil springs under
torsion loads
UNIT IV BEAM DEFLECTION
Elastic curve of Neutral axis of the beam under normal loads – Evaluation of beam
deflection and slope: Double integration method, Macaulay Method, and Moment-area
Method –Columns – End conditions – Equivalent length of a column – Euler equation –
Slenderness ratio – Rankine formula for columns
UNIT V ANALYSIS OF STRESSES IN TWO DIMENSIONS
Biaxial state of stresses – Thin cylindrical and spherical shells – Deformation in thin
cylindrical and spherical shells – Biaxial stresses at a point – Stresses on inclined plane
– Principal planes and stresses – Mohr’s circle for biaxial stresses – Maximum shear
stress – Strain energy in bending and torsion.
TEXT BOOKS:
1. Popov E.P, "Engineering Mechanics of Solids", Prentice-Hall of India, New Delhi,
2. Beer F. P. and Johnston R,” Mechanics of Materials”, McGraw-Hill Book Co, Third
Edition, 2002.
REFERENCES:
1. Nash W.A, "Theory and problems in Strength of Materials", Schaum Outline Series,
McGraw-Hill Book Co, New York, 1995
2. Kazimi S.M.A, “Solid Mechanics”, Tata McGraw-Hill Publishing Co., New Delhi,
3. Ryder G.H, “Strength of Materials, Macmillan India Ltd”., Third Edition, 2002
4. Ray Hulse, Keith Sherwin & Jack Cain, “Solid Mechanics”, Palgrave ANE Books,
5. Singh D.K “Mechanics of Solids” Pearson Education 2002.
6. Timoshenko S.P, “Elements of Strength of Materials”, Tata McGraw-Hill, New Delhi,
The ME2254 Strength of Materials lecture notes above cover almost all topics, but some topics may still be missing. If you find a missing topic or have any problem downloading the ME2254 SM notes, kindly leave a comment.
If you need more study materials for ME2254 SM, comment below.
Proportions and word problems
February 16th 2008, 09:35 PM #1
I have a few word problems that are getting the best of me. I can figure out what I think the answer is but I am having problems figuring out the formula to use.
the first one is:
If a person cross-country skis for 35 min, how many calories will be burned? (We are given a chart that shows 700 cal/h.)
I may be wrong, but it sounds as though you can go about it this way:
If there are 60 minutes in an hour, and you can burn 700 calories per hour, divide 700 by 60, then multiply that number by 35. That should give you the correct number of calories burned in 35 minutes.
$x=408\; Calories$
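For reference, the same reasoning written out as a proportion, so the formula is explicit:

$\dfrac{700\ \text{cal}}{60\ \text{min}} = \dfrac{x}{35\ \text{min}} \quad\Longrightarrow\quad x = 700 \times \dfrac{35}{60} \approx 408\ \text{cal}$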
probability question
May 24th 2009, 01:47 PM #1
A community meeting was attended by 37 parents along with 26 high school students. A committee is to be formed to find ways to encourage students to take calculus before graduating high school.
This committee will consist of 8 people: 4 students and 4 parents. If you are one of the students at the meeting, what is the probability that you will be chosen for the committee?
So the 26C4 is obviously 26 students, 4 possible picks, but could you enlighten me on how you get
25C3? Is that the possibility of you getting picked after one pick has been chosen, or is that AFTER you've been picked, and you must divide it by the original amount?
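For reference, the standard counting argument goes like this: of the $\binom{26}{4}$ equally likely ways to pick the four students, the committees that include you are exactly those formed by choosing the other three students from the remaining 25, so

$P(\text{you are chosen}) = \dfrac{\binom{25}{3}}{\binom{26}{4}} = \dfrac{2300}{14950} = \dfrac{2}{13} \approx 0.154$

In other words, 25C3 counts the committees that already contain you, and dividing by the total 26C4 gives the probability; it is not about what happens after one pick has been made.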
At The Intersection Of Texas Avenue And University ... | Chegg.com
At the intersection of Texas Avenue and University Drive, a yellow subcompact car with mass 950 kg traveling east on University collides with a maroon pickup truck with mass 1800 kg that is
traveling north on Texas and ran a red light. The two vehicles stick together as a result of the collision and, after the collision, the wreckage is sliding at 16.0 m/s in the direction 24.0° east
of north. The collision occurs during a heavy rainstorm; you can ignore friction forces between the vehicles and the wet road.
Calculate the speed of the car before the collision and the speed of the truck before the collision. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/intersection-texas-avenue-university-drive-yellow-subcompact-car-mass-950-kg-traveling-eas-q1605786","timestamp":"2014-04-21T00:19:48Z","content_type":null,"content_length":"21661","record_id":"<urn:uuid:6c08bcb3-1ad1-4a86-8cfa-793279f55dc0>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00012-ip-10-147-4-33.ec2.internal.warc.gz"} |
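
A sketch of the standard setup (added here, not part of the original problem statement): take east as $x$ and north as $y$; since the collision is perfectly inelastic and friction is ignored, momentum is conserved in each direction, with the car supplying all of the eastward momentum and the truck all of the northward momentum:

$m_{\text{car}}\,v_{\text{car}} = (m_{\text{car}}+m_{\text{truck}})\,V\sin 24.0^\circ, \qquad m_{\text{truck}}\,v_{\text{truck}} = (m_{\text{car}}+m_{\text{truck}})\,V\cos 24.0^\circ$

With $m_{\text{car}} = 950$ kg, $m_{\text{truck}} = 1800$ kg and $V = 16.0$ m/s, this gives $v_{\text{car}} \approx 18.8$ m/s and $v_{\text{truck}} \approx 22.3$ m/s.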
Most functional
John Tromp
Judges' comments:
To build:
On a 32-bit machine:
make tromp
On a 64-bit machine:
make tromp64
# And mentally substitute ./tromp64 for ./tromp everywhere below
To run:
cat ascii-prog.blc data | ./tromp -b
cat binary-prog.Blc data | ./tromp
(cat hilbert.Blc; echo -n 1234) | ./tromp
(cat oddindices.Blc; echo; cat primes.blc | ./tromp -b) | ./tromp
cat primes.blc | ./tromp -b | ./primes.pl
Selected Judges Remarks:
The judges dare to say that the data files this entry is processing are more obfuscated than the entry itself.
Author’s comments:
This program celebrates the close connection between obfuscation and conciseness, by implementing the most concise language known, Binary Lambda Calculus (BLC).
BLC was developed to make Algorithmic Information Theory, the theory of smallest programs, more concrete. It starts with the simplest model of computation, the lambda calculus, and adds the minimum
amount of machinery to enable binary input and output.
More specifically, it defines a universal machine, which, from an input stream of bits, parses the binary encoding of a lambda calculus term, applies that to the remainder of input (translated to a
lazy list of booleans, which have a standard representation in lambda calculus), and translates the evaluated result back into a stream of bits to be output.
Lambda is encoded as 00, application as 01, and the variable bound by the n'th enclosing lambda (denoted n in so-called De Bruijn notation) as 1^{n}0. That’s all there is to BLC!
For example the encoding of lambda term S = \x \y \z (x z) (y z), with De Bruijn notation \ \ \ (3 1) (2 1), is 00 00 00 01 01 1110 10 01 110 10
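
As an illustration of how mechanical that encoding is (my addition, not part of the entry), the three rules can be written directly as a small recursive encoder in C over a hypothetical syntax-tree type:

    #include <stdio.h>

    /* Hypothetical term representation: 00 = lambda, 01 = application,
     * 1^n 0 = De Bruijn variable n, exactly as described above. */
    typedef struct term {
        enum { LAM, APP, VAR } tag;
        struct term *left, *right;  /* body for LAM; operands for APP */
        int index;                  /* De Bruijn index for VAR (>= 1) */
    } term;

    void encode(const term *t)
    {
        switch (t->tag) {
        case LAM:
            printf("00");
            encode(t->left);
            break;
        case APP:
            printf("01");
            encode(t->left);
            encode(t->right);
            break;
        case VAR:
            for (int i = 0; i < t->index; i++)
                putchar('1');
            putchar('0');
            break;
        }
    }

    int main(void)
    {
        term x  = { VAR, 0, 0, 1 };   /* variable 1          */
        term id = { LAM, &x, 0, 0 };  /* \ 1  (the identity) */
        encode(&id);                   /* prints 0010         */
        putchar('\n');
        return 0;
    }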
In the closely related BLC8 language, IO is byte oriented, translating between a stream of bytes and a list of length-8 lists of booleans.
The submission implements the universal machine in the most concise manner conceivable. It lacks #defines and #includes, and compiles to a (stripped) executable of under 6K in size.
Without arguments, it runs in byte mode, using standard in- and output. With one (arbitrary) argument, it runs in bit mode, using only the least significant bit of input, and using characters ‘0’ and
‘1’ for output.
The program uses the following exit codes: 0: OK; result is a finite list 5: Out of term space 6: Out of memory 1,2,3,4,8,9: result not in list form
The size of the term space is fixed at compile time with -DA
A half byte `cat'
The shortest (closed) lambda calculus term is \x x (\ 1 in De Bruijn notation) which is the identity function. When its encoding 0010 is fed into the universal machine, it will simply copy the input
to the output. (well, not that simply, since each byte is smashed to bits and rebuilt from scratch) Voila: a half byte cat:
echo " Hello, world" | ./tromp
Hello, world
Since the least significant 4 bits of the first byte are just arbitrary padding that is ignored by the program, any character from ASCII 32 (space) through 47 (/) will do, e.g.:
echo "*Hello, world" | ./tromp
Hello, world
Bad programs
If the input doesn’t start with a valid program, that is, if the interpreter reaches end-of-file during program parsing, it will crash in some way:
echo -n "U" | ./tromp
Segmentation fault
Furthermore, the interpreter requires the initial encoded lambda term to be closed, that is, variable n can only appear within at least n enclosing lambdas. For instance the term \ 5 is not closed,
causing the interpreter to crash when looking into a null-pointer environment:
echo ">Hello, world" | ./tromp
Segmentation fault
Since these properties can be checked when creating BLC programs, the interpreter doesn’t bother checking for it.
A Self Interpreter
The BLC universal machine may be small at 650 bytes of C (952 bytes including layout), but written as a self interpreter in BLC it is downright minuscule at 232 bits (29 bytes):
The byte oriented BLC8 version weighs in at 43 bytes (shown in hexadecimal).
0decb f0fc3
9befe 185f7
0b7fb 00cf6
7bb03 91a1a
(cat uni8.Blc; echo " Ni hao") | ./tromp
Ni hao
A prime number sieve
Even shorter than the self-interpreter is this prime number sieve in 167 bits (under 21 bytes):
The n'th bit in the output indicates whether n is prime:
cat primes.blc | ./tromp -b | head -c 70
For those who prefer to digest their primes in decimal, there is oddindices.Blc, which will print the indices of all odd characters (with lsb = 1) separated by a given character:
(cat oddindices.Blc; echo -n " "; cat primes.blc | ./tromp -b) | ./tromp | head -c 70
A Space filling program
Program hilbert.Blc, at 143 bytes, is a very twisty “one-liner” (shown in hexadecimal):
    [143-byte hexadecimal listing of hilbert.Blc omitted here: the original multi-column hex dump did not survive text extraction intact]
It expects n arbitrary characters of input, and draws a space filling Hilbert curve of order n:
(cat hilbert.Blc; echo -n "1") | ./tromp
| |
(cat hilbert.Blc; echo -n "12") | ./tromp
_ _
| |_| |
|_ _|
_| |_
(cat hilbert.Blc; echo -n "123") | ./tromp
_ _ _ _
| |_| | | |_| |
|_ _| |_ _|
_| |_____| |_
| ___ ___ |
|_| _| |_ |_|
_ |_ _| _
| |___| |___| |
(cat hilbert.Blc; echo -n "1234") | ./tromp
_ _ _ _ _ _ _ _
| |_| | | |_| | | |_| | | |_| |
|_ _| |_ _| |_ _| |_ _|
_| |_____| |_ _| |_____| |_
| ___ ___ | | ___ ___ |
|_| _| |_ |_| |_| _| |_ |_|
_ |_ _| _ _ |_ _| _
| |___| |___| |_| |___| |___| |
|_ ___ ___ ___ ___ _|
_| |_ |_| _| |_ |_| _| |_
| _ | _ |_ _| _ | _ |
|_| |_| | |___| |___| | |_| |_|
_ _ | ___ ___ | _ _
| |_| | |_| _| |_ |_| | |_| |
|_ _| _ |_ _| _ |_ _|
_| |___| |___| |___| |___| |_
A BrainFuck interpreter
The smallest known BF interpreter is written in… you guessed it, BLC, coming in at 112 bytes (including 3 bits of padding):
od -t x4 bf.Blc
0000000 01a15144 02d55584 223070b7 00f032ff
0000020 7f85f9bf 956fe15e c0ee7d7f 006854e5
0000040 fbfd5558 fd5745e0 b6f0fbeb 07d62ff0
0000060 d7736fe1 c0bc14f1 1f2eff0b 17666fa1
0000100 2fef5be8 ff13ffcf 2034cae1 0bd0c80a
0000120 e51fee99 6a5a7fff ff0fff1f d0049d87
0000140 db0500ab 3bb74023 b0c0cc28 10740e6c
It expects its input to consist of a Brainfuck program (looking only at bits 0,1,4 to distinguish among ,-.+<>][ ) followed by a ], followed by the input for the Brainfuck program.
more hw.bf
cat bf.Blc hw.bf | ./tromp
Hello World!
Curiously, the interpreter bf.Blc is the exact same size as hw.bf.
A BLC assembler
Writing BLC programs can be made slightly less painful with this parser that translates single-letter-variable lambda calculus into BLC:
echo "\f\x f (f (f x))" > three
cat parse.Blc three | ./tromp
Converting between bits and bytes
The program inflate.Blc and its inverse deflate.Blc allow us to translate between BLC and BLC8. If you assemble a byte oriented program, you’ll need to compact it into BLC8:
So we could assemble an input reversing program as
echo "\a a ((\b b b) (\b \c \d \e d (b b) (\f f c e))) (\b \c c)" > reverse
cat parse.Blc reverse | ./tromp > reverse.blc
and change it to BLC8 with
cat deflate.Blc reverse.blc | ./tromp > rev.Blc
wc rev.Blc
0 1 9 rev.Blc
and then try it out with:
cat rev.Blc - | ./tromp
Hello, world!
!dlrow ,olleH
Symbolic Lambda Calculus reduction
BLC8 program symbolic.Blc shows individual reduction steps on symbolic lambda terms. Here it is used to show the calculation of 2^3 in Church numerals:
echo "(\f\x f (f (f x))) (\f\x f (f x))" > threetwo
cat parse.Blc threetwo | ./tromp > threetwo.blc
cat symbolic.Blc threetwo.blc | ./tromp
(\a \b a (a (a b))) (\a \b a (a b))
\a (\b \c b (b c)) ((\b \c b (b c)) ((\b \c b (b c)) a))
\a \b (\c \d c (c d)) ((\c \d c (c d)) a) ((\c \d c (c d)) ((\c \d c (c d)) a) b)
\a \b (\c (\d \e d (d e)) a ((\d \e d (d e)) a c)) ((\c \d c (c d)) ((\c \d c (c d)) a) b)
\a \b (\c \d c (c d)) a ((\c \d c (c d)) a ((\c \d c (c d)) ((\c \d c (c d)) a) b))
\a \b (\c a (a c)) ((\c \d c (c d)) a ((\c \d c (c d)) ((\c \d c (c d)) a) b))
\a \b a (a ((\c \d c (c d)) a ((\c \d c (c d)) ((\c \d c (c d)) a) b)))
\a \b a (a ((\c a (a c)) ((\c \d c (c d)) ((\c \d c (c d)) a) b)))
\a \b a (a (a (a ((\c \d c (c d)) ((\c \d c (c d)) a) b))))
\a \b a (a (a (a ((\c (\d \e d (d e)) a ((\d \e d (d e)) a c)) b))))
\a \b a (a (a (a ((\c \d c (c d)) a ((\c \d c (c d)) a b)))))
\a \b a (a (a (a ((\c a (a c)) ((\c \d c (c d)) a b)))))
\a \b a (a (a (a (a (a ((\c \d c (c d)) a b))))))
\a \b a (a (a (a (a (a ((\c a (a c)) b))))))
\a \b a (a (a (a (a (a (a (a b)))))))
As expected, the resulting normal form is Church numeral 8.
Taking only the first line of output gives us a sort of BLC disassembler, an exact inverse of the above assembler. The prime number sieve disassembles as follows:
cat symbolic.Blc primes.blc | ./tromp | head -1
\a (\b b (b ((\c c c) (\c \d \e e (\f \g g) ((\f c c f ((\g g g) (\g f (g g)))) (\f \g \h \i i g (h (d f))))) (\c \d \e b (e c))))) (\b \c c (\d \e d) b)
Hardly any less obfuscated…
The last line of cat symbolic.Blc primes.blc | ./tromp | head -16 starts out as \a \b b (\c \d c) (\c c (\d \e d) (\d d (\e \f f) (\e e (\f \g g) ((\f (\g \h \i
The \a is for ignoring the rest of the input (to which the universal machine applies the decoded lambda term). The \b b (..) (…) is the list constructor, usually called cons, applied to a head (a
list element) and a tail (another list). In this case the element is (\c \d c), which represents the boolean true, and which we use to represent a 0 bit. This is the bit that says 0 is not prime. The
next list element (following another cons) is (\d \e d). Another 0 bit, this time saying that 1 is not prime. The third list element is (\e \f f), a 1 bit, confirming our suspicion that 2 is prime.
As is the next number, according to (\f \g g). We can see that the tail after the first 4 elements is still subject to further reduction. The bit for number 4 will show up for the first time in line
30, as (\g \h g), or 0, as the result of zeroing out all multiples of the first prime, 2. Since my computer reaches swap hell before line 40, we can’t see the next bit arriving, at least not in this
symbolic reduction.
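
For reference (my addition, restating in the entry's own notation the conventions used in these reduced terms):

    bit 0  =  true   =  \a \b a
    bit 1  =  false  =  \a \b b
    cons h t         =  \z z h t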
Performance is quite decent, and amazingly good for such a tiny implementation, being roughly 50% slower than a Haskell implementation of the universal machine using so-called Higher Order Abstract
Syntax which relies on the highly optimized Haskell runtime system for evaluation. Of course individual blc programs running under the interpreter perform much worse than that same program written in
Our interpreter copes well with extra levels of interpretation:
time cat primes.blc | ./tromp -b | head -c 210 > /dev/null
real 0m0.043s
time cat uni.blc primes.blc | ./tromp -b | head -c 210 > /dev/null
real 0m0.191s
time cat uni.blc uni.blc primes.blc | ./tromp -b | head -c 210 > /dev/null
real 0m1.919s
time cat uni.blc uni.blc uni.blc primes.blc | ./tromp -b | head -c 210 > /dev/null
real 0m23.514s
time cat uni.blc uni.blc uni.blc uni.blc primes.blc | ./tromp -b | head -c 210 > /dev/null
real 4m52.700s
Obfuscation is due entirely to conciseness. Some questions to ponder:
Which of the term space codes 0,1,2,3 serves multiple purposes?
Why is the environment pointer pointing into the term space?
What does the test u+m&1? do?
How does the program reach exit code 0?
And how do any of those blc programs work?
The program freely (without casting) converts between int and int*, causing many warnings:

    note: expected 'int *' but argument is of type 'int'
    warning: assignment from incompatible pointer type
    warning: assignment makes integer from pointer without a cast
    warning: assignment makes pointer from integer without a cast
    warning: incompatible implicit declaration of built-in function 'calloc'
    warning: incompatible implicit declaration of built-in function 'exit'
    warning: passing argument 1 of 'd' makes pointer from integer without a cast
    warning: passing argument 1 of 'p' makes pointer from integer without a cast
    warning: pointer/integer type mismatch in conditional expression
Avoiding these would make the program substantially longer, and detract from its single minded focus on conciseness.
It implicitly declares functions read, write, exit and calloc, the latter two incompatibly. 32 bit and 64 bit executables are separate Makefile targets, involving a change from int to long and from a
hardcoded sizeof of 4 to 8.
The program has been tested to work correctly on Linux/Solaris/MacOSX both in 32 and 64 bits.
How the program works
See the file how13.
Christopher Hendrie, Bertram Felgenhauer, Alex Stangl, Seong-hoon Kang, and Yusuke Endoh have contributed ideas and suggestions for improvement.
Binary Lambda Calculus http://en.wikipedia.org/wiki/Binary_lambda_calculus
G J Chaitin, Algorithmic information theory, Volume I, Cambridge Tracts in Theoretical Computer Science, Cambridge University Press, October 1987. http://www.cs.auckland.ac.nz/~chaitin/cup.html
Jean-Louis Krivine. 2007. A call-by-name lambda-calculus machine Higher Order Symbol. Comput. 20, 3 (September 2007), 199-207. http://www.pps.univ-paris-diderot.fr/~krivine/articles/lazymach.pdf
© Copyright 1984-2012, Leo Broukhis, Simon Cooper, Landon Curt Noll - All rights reserved
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. | {"url":"http://www.ioccc.org/2012/tromp/hint.html","timestamp":"2014-04-21T12:08:49Z","content_type":null,"content_length":"18708","record_id":"<urn:uuid:aecadf9e-1979-420d-90bf-84198f6bc0fd>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability abused to justify theism
Warning: this post is rather longer than my usual postings.
One popular pseudo-scientific argument used by religious believers to justify their faith in a guiding deity is based on probability, or more correctly, the abuse of probability. The use of
probability in science is common, but unfortunately even some scientists have misconceptions about it.
The believer argument goes something like this: that the existing universe could have come about by chance is so extremely unlikely as to have an infinitesimal probability of having happened. There
must have been a guiding hand in the creation of the universe, in the form of the believer’s favorite anthropomorphic deity.
The primary flaw in this crude attempt at a justification for belief in a supernatural deity is its mischaracterization of the unbeliever’s argument. Yes, if all of the matter in the universe were
simply crammed into one location and was unaffected by anything other than random happenings at the subatomic level, it indeed seems unlikely that anything coherent would emerge. Unfortunately, this
argument hinges on completely random behavior for all the contents of the universe.
The fact is that there’s a guiding “hand” in creating the universe from the “soup” of matter that emerged from the putative “big bang”. But that hand need not
be some conscious being that happens to look like a human being but is actually an infinitely capable deity. The universe’s organization is not completely at the mercy of randomness, and hence, the
“calculations” (Most believer apologists haven’t a clue how to estimate probabilities, of course, so they don’t offer any such calculations at all!) must account for non-random processes. In the real
universe, the behavior of matter and energy isn't completely random and
self-organizing processes
go on all the time, but believers for the most part seem not to know anything about that. For example, the weather self-organizes into storms, rather than being just a chaotic foam of random
For a general audience, I can’t go into too much detail without resorting to mathematical concepts, but the laws of the natural world are generally nonlinear, which makes predicting the evolution of
matter from one state to the next rather challenging – weather forecasting is a perfect example of this. All the natural laws are fundamentally nonlinear because any linear process goes on in a
straight line for infinite time. The equations used to describe atmospheric motion are “deterministic”: for a given set of initial conditions, they make a clear and unambiguous prediction for a
future state. Unfortunately, even a tiny change in the initial conditions (which can’t be known with infinite accuracy) can result in a large change in the predicted future state. This is why the
weather is notoriously difficult to predict far into the future.
The same principle applies to ‘hindcasting” – that is, the task of running the equations backward from some point to try to deduce conditions at an earlier time. Thus, we can’t simply take things as
we know them at present and run the clock backward to calculate what conditions were 15 billion years ago when the universe began. The laws of nature, however imperfectly we understand them at the
moment, describe what amounts to a deterministic control over the natural world, but we can’t employ those laws to account for everything that went on during the last 15 billion years. All we know is
that the development of the universe from the primordial ‘soup’ was distinctly not random. Matter condensed from pure energy as the universe expanded. Under the influence of gravity, matter collected
into clouds and some of them collapsed to form stars. Star groups became galaxies under the influence of rotational inertia. Planets formed within solar systems. Somehow, life began (we don’t yet
understand how) and evolved following the laws of evolution. And so on. It wasn’t random, but it also was unpredictable in its details. Hence, the argument that a completely random assembling of the
universe is probabilistically impossible is pretty much irrelevant – that process wasn't random!
Another logical fallacy in this argument is the assumption that we’re only concerned with the specific path by which the universe as we know it developed. Yes, it certainly could be argued that if we
could follow the exceeding large number of events that led to the present, that exact sequence would be only one amongst a very, very large number of alternative paths, the end result of each of
these different paths would depart from the present to a widely-varying degree.
If we could somehow “rewind the tape” to go back to that primordial soup at some instant in the deep past, and then let the system go forward again, it would indeed be rather unlikely that all those
events would ever occur in exactly the same way as it did to produce the universe we know. But given the laws of the universe, the universe simply would evolve differently. It's a virtual certainty
that some sort of universe would evolve, but it wouldn't be an exact fit to the one we inhabit. Our finite knowledge of the detailed disposition of all the matter and energy in the universe would be
inadequate to describe its state at some moment long ago, so we could only guess at those properties of our universe at the moment it universe began 15 billion years ago. What’s remarkable is that
science has come so far in understanding these processes. It does so by, among other things, by not postulating unnecessary supernatural interventions in that evolution.
Given the “guiding hand” of the natural laws governing the universe, something sort of resembling the present could emerge 15 billion years after rewinding the tape, but it likely wouldn’t be
precisely what things are now, as we know them. Humans might not exist, or if something humanoid was inhabiting a new Earth-like planet, they certainly wouldn’t be us. Under the non-random processes
associated with those natural laws, an exceedingly large number of distinct outcomes is still possible of course. But the fallacy of the believer’s use of the probability argument is its insistence
that we must account quantitatively for the likelihood of the exact state of the universe as it now is known. We can't be sure that all the possible outcomes are equally likely, but this seems like a
plausible assumption, in the absence of information to the contrary. The probability of the exact universe as we know it is indeed exceedingly small, but there's a 100% probability there would be a
universe that would resemble the current one more or less, under the natural laws of the universe! Thus, this "probability argument" fails utterly to establish a need for a deity. I don't care what
theists choose to believe in the absence of logic and compelling evidence, but their use of this argument is simply specious.
Of course, this begs the question of why the laws of the universe are as they are. That’s an entirely different topic and theists have their usual “pat” explanation for that – again, their favorite
deity is their “logical” explanation. I’m not going to go into that one here – it can wait for another time.
4 comments:
Glad you pointed this out, Chuck, because the statistically illiterate miss it every time: True, the odds for this particular arrangement of the universe are infinitesimally small. But the odds
in favor of some arrangement of matter and energy – happily for us – are somewhat better! Great post, Chuck! Ron
The essence of the 'fine tuning' argument isn't so much referring to the probabilities of our universe existing in its exact current form, but the probabilities that the existing laws of nature
themselves would be in such a precise configuration to support life, let alone intelligent life. William Lane Craig's work on this topic is one of the better/complete presentations of it that I'm
aware of.
The "fine tuning" argument is one to which I refer at the end of my blog post here. I'll gladly address that at some future time, but it's off-topic in this thread. The long chain of specific
events by which our universe came to be are the focus for certain "probabilistic" arguments by theists. This is quite distinct from the "fine tuning" argument, as noted.
Also: If we rewound to 15 billion years ago and the random process started again, we might end up with a BETTER world, i.e., there is nothing about the process that indicates that the results are
the optimal ones.
Thus, the small probability of this exact configuration is small partly because there are better configurations possible. | {"url":"http://cadiiitalk.blogspot.com/2012/07/probability-abused-to-justify-theism.html","timestamp":"2014-04-20T03:10:00Z","content_type":null,"content_length":"133518","record_id":"<urn:uuid:e60f0dc1-2e96-43b3-a5d8-b02eead7de77>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |