content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
J Page
J Tutorial and Statistical Package, 2003, 67 pp.
A J tutorial with examples chosen mostly from elementary statistics. The principal J verbs are summarized so that they may be used independently as a statistical package.
Acrobat file: jtsp.pdf
J4.06 file: jtsp.ijs
J4.06 file (Statistical Package only): jsp.ijs
J601 file (Statistical Package only): jsp601.ijs
A Simple Machine Language Simulator, 2003.
A simulation to illustrate machine-language programming for a one-address decimal computer with an order code of 15 instructions.
J4.06 file: comp.ijs
LGP-30 Simulator, 2003
This program simulates machine-language programming for the Royal McBee LGP-30 computer. The 21-page paper, "The LGP-30: The University of Alberta's First Computer", discusses the experience with the
LGP-30 at the University of Alberta and describes the simulator in some detail.
Acrobat file: The LGP-30: ...
J4.06 file: lgp30.ijs
A second LGP-30 simulator completed in February 2006 which uses programmed binary arithmetic operatios rather than the J floating-point operations is also available.
J4.06 file: lgp30mk2.ijs
My Life with Array Languages, 2005, 19 pp.
This paper is a less technical version of A Lecture on Array Languages given above and is intended for the general reader.
Acrobat file: MyLife.pdf
J4.06 file: Not available
|
{"url":"http://webdocs.cs.ualberta.ca/~smillie/Jpage/Jpage.html","timestamp":"2014-04-17T00:49:23Z","content_type":null,"content_length":"9428","record_id":"<urn:uuid:0ac4f978-325c-4ec4-badb-c25b327d563e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How to solve friction problems.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
do u have any specific situation?
Best Response
You've already chosen the best response.
well in general, u solve it like any other force related problem. draw a force-body diagram. resolve all the forces acting on the body in 2 axes. then calculate the net force acting on it. here
the important thing to keep in mind, is the direction of the frictional force. it will always be in the opposite direction to the relative motion between the 2 surfaces in contact.
Best Response
You've already chosen the best response.
hii migo !! methinks you should have some problem to discuss upon. The direction of friction force is not always opposite to direction of motion. Friction sometimes acts along the direction of
motion while sometimes opposite . You'd understand it's real nature when you solve problems !!
Best Response
You've already chosen the best response.
yes, as @meera_yadav pointed out, it does not always oppose the direction of motion. as I said earlier, it always opposes the 'relative motion'. u need to understand the difference between these
two terms.
Best Response
You've already chosen the best response.
remember vaidelhi says " it will always be in the opposite direction to the RELATIVE motion between the 2 surfaces in contact."
Best Response
You've already chosen the best response.
After reading my notes you don't need any more thing to know about friction.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4ff2d5cbe4b03c0c488b34ec","timestamp":"2014-04-20T16:07:59Z","content_type":null,"content_length":"41204","record_id":"<urn:uuid:183a76ba-4098-4afd-a571-9887b2969a43>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
|
6.3.2 Side Length, Volume, and Surface Area of Similar Solids
│ │ Side Length, Volume, and Surface Area of Similar Solids (applet) │
│ │ │
│ │ The user can manipulate the scale factor that links two three-dimensional rectangular prisms and learn about the relationships among edge lengths, surface areas, and volumes. │
Your task is to investigate how changing the lengths of the sides of a rectangular prism affects the volume and surface area of the prism. First notice that the two given rectangular prisms are
congruent (equal angles and equal sides). Now change the size of the purple prism (A) by grabbing the red dot and dragging it diagonally. Are the two prisms still congruent? Are they similar? Click
on "Show Volume." Change the size of prism A again and observe the changes in the measurements. What is being depicted in the graph? What can you say about the relationship between the side lengths
and the volume of a rectangular prism?
Next, click on "Show Surface Area" and "Hide Volume." Again, change the size of prism A and observe the changes in measurement. What is being depicted in the graph? What can you say about the
relationship between the side lengths and the surface area of a rectangular prism?
How to Use the Interactive Applet
To change the size of prism A, adjust the blue slider or drag the upper right-hand vertex (red circle) in a diagonal motion. Click on the Show/Hide Surface Area button to show or hide the graph,
values, and ratio of the surface areas. Click on the Show/Hide Volume button to show or hide the graph, values, and ratio of the volumes.
As students experiment with different ratios of side lengths (different scale factors), they have the opportunity to observe and interpret the changes in the volume and surface-area data. Students
should be encouraged to compare the scale factor to the ratio of the volumes and to the ratio of the surface areas and Side Length and Area of Similar Figures to look for patterns. Teachers can help
students consider the relationships between scale factor, side length, volume, and surface area by asking questions like, What is being depicted in the "Volume" graph? Similar questions can be asked
about the "Surface Area" graph. Creating tables of values for scale factor, side length, surface area, and volume may help students organize their information and more easily examine how a change in
side length affects surface area and volume.
Students may notice a difference in the appearance of the graphs. It is important to focus on why the relationship between side length and volume is cubic whereas the relationship between side length
and surface area is quadratic. It contributes to students' understanding of the measures of length, surface area, and volume, and it can help students learn about scale factors. Teachers can help
students take notice of that difference by asking questions like, Why does the graph depicting the relationship between side length and surface area differ from the graph depicting the relationship
between side length and volume? Teachers can then help students understand that difference by asking questions like, Compare the volume scale factor and the surface-area factor. What is the
relationship between those factors? How are those factors represented in the "Volume" and "Surface Area" graphs?
Take Time to Reflect
The Geometry Standard states that "in grades 6–8 all students should understand relationships among the angles, side lengths, perimeters, areas, and volumes of similar objects."
• How would these activities help students develop an understanding of surface area and volume of similar solids?
• How does the dynamic nature of the figure help support the development of this type of understanding?
• What other concepts related to surface area and volume are important for students in grades 6–8 to understand?
Also see:
• 6.3 Learning about Length, Perimeter, Area, and Volume of Similar Objects Using Interactive Figures
□ 6.3.1 Side Length and Area of Similar Figures
|
{"url":"http://www.nctm.org/standards/content.aspx?id=25097","timestamp":"2014-04-16T13:19:42Z","content_type":null,"content_length":"42605","record_id":"<urn:uuid:cdf210ff-a2f0-419f-8f7e-ddb78666e8e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(technical) Multivariate randomization balance tests in small samples
Posted on May 2, 2011 by Cyrus
In their 2008 Statistical Science article, Hansen and Bowers (link) propose randomization-based omnibus tests for covariate balance in randomized and observational studies. The omnibus tests allow
you to test formally whether differences across all the covariates resemble what might happen in a randomized experiment. Previous to their paper, most researchers tested balance one covariate at a
time, making ad hoc judgments about whether apparent imbalance in one or another covariate suggested deviations from randomization. What’s nice about Hansen and Bowers’s approach is that it
systematizes such judgments into a single test.
To get the gist of their approach, imagine a simple random experiment on $N$ units for which $M = N/2$ [CDS: note, this was corrected from the original; results in this post are for a balanced
design, although the Hansen and Bowers paper considers arbitrary designs.] units are assigned to treatment. Suppose for each unit i we record prior to treatment a $P$-dimensional covariate, $x_i =
(x_{i1},\hdots,x_{iP})'$. Let $\mathbf{X}$ refer to the $N$ by $P$ matrix of covariates for all units. Define $d_p$ as the difference in the mean values of covariate $x_p$ for the treated and control
groups, and let $\mathbf{d}$ refer to the vector of these differences in means. By random assignment, $E(d_p)=0$ for all $p=1,..,P$, and $Cov(\mathbf{d}) = (N/(M^2))S(\mathbf{X})$, where $S(\mathbf
{X})$ is the usual sample covariance matrix [CDS: see update below on the unbalanced case]. Then, we can compute the statistic, $d^2 = \mathbf{d}'Cov(\mathbf{d})^{-1} \mathbf{d}$. In large samples,
Hansen and Bowers explain that randomization implies that this statistic will be approximately chi-square distributed with degrees of freedom equal to $rank[Cov(\mathbf{d})]$ (Hansen and Bowers 2008,
229). The proof relies on standard sampling theory results.
These results from the setting of a simple randomized experiment allow us to define a test for covariate balance in cases where the data are not from an experiment, but rather from a matched
observational study, or where the data were from an experiment, but we might worry that there were departures from randomization that lead to confounding. In Hansen and Bowers’s paper, the test that
they define relies on the large sample properties of $d^2$. Thus, the test consists of computing $d^2$ for the sample at hand, and computing a p-value against the limiting $\chi^2_{rank[Cov(\mathbf
{d})]}$ distribution that should obtain under random assignment.
I should note that in Hansen and Bowers’s paper, they focus not on the case of a simple randomized experiment, but rather on cluster- and block-randomized experiments. It makes the math a bit uglier,
but the essence is the same.
The question I had was, what is the small sample performance of this test? In small samples we can use $d^2$ to define an exact test. Does it make more sense to use the exact test? In order to
address these questions, I performed some simulations against data that were more or less behaved. These included simulations with two normal covariates, one gamma and one log-normal covariate, and
two binary covariates. (For the binary covariates case, I couldn’t use a binomial distribution, since this sometimes led to cases with all 0′s or 1′s. Thus, I fixed the number of 0′s and 1′s for each
covariate and randomly scrambled them over simulations.) In the simulations, the total number of units was 10, and half were randomly assigned to treatment. Note that this implies 252 different
possible treatment profiles.
The results of the simulations are shown in the figure below. The top row is the for the normal covariates, the second row for the gamma and log-normal covariates, and the bottom row for the binary
covariates. I’ve graphed the histograms for the approximate and exact p-values in the left column; we want to see a uniform distribution. In the middle column is a scatter plot of the two p-values
with a reference 45-degree line; we want to see them line up on the 45-degree line. In the right column, I’ve plotted distribution of the computed $d^2$ statistic against the limiting $\chi^2_{rank
[Cov(\mathbf{d})]}$ distribution; we want to see that they agree.
When the data are normal, the approximation is quite good, even with only 10 units. In the two latter cases, approximations do not fare well, as the test statistic distribution deviates substantially
from what would be expected asymptotically. The rather crazy-looking patterns that we see in the binary covariates case is due to the fact that there are a small discrete number of difference in mean
values possible. Presumably in large samples this would smooth out.
What we find overall though is that the approximate p-value tends to be biased toward 0.5 relative to the exact p-value. Thus, when the exact p-value is large, the approximation is conservative, but
as the exact p-value gets small, the approximation becomes anti-conservative. This is most severe in the skew (gamma and log-normal) covariates case. In practice, one may have no way of knowing
whether the relevant underlying covariate distribution is better approximated as normal or skew. Thus, it would seem that one would want to always use the exact test in small samples.
Code demonstrating how to compute Hansen and Bowers’s approximate test, the exact test, as well as code for the simulations and graphics is here (multivariate exact code).
The general expression for $Cov(\mathbf{d})$ covering an unbalanced randomized design is,
$Cov(\mathbf{d}) = \frac{N}{M(N-M)}S(\mathbf{X})$.
Leave a Reply Cancel reply
This entry was posted in Uncategorized. Bookmark the permalink.
|
{"url":"http://cyrussamii.com/?p=749","timestamp":"2014-04-20T18:23:03Z","content_type":null,"content_length":"30269","record_id":"<urn:uuid:6d9e2537-59a2-4441-bfd5-76511bbda6f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A to Z Kids Stuff | Preschool Shapes Theme
Color on Paper
Need: pairs of shapes cut from construction paper, crayons or markers.
Place a shape from each pair on a table. Give each child a paper cut in a shape. Have the child go to the table and find its mate. Children then can draw on their shape papers.
Shape imagination creations
Describe and compare two- and three- dimensional shapes
Need: paper shapes of squares, circles, triangles
Give each child a cutout of a circle, a square, and a triangle. Show examples of how a circle can become a wheel or how a triangle can become a tree. Ask children to use their imaginations and
create pictures by combining a variety of shapes.
Cutting Corners
Provide the children with squares, rectangles and triangles cut from such materials as construction paper, and wallpaper. Let the children use scissors to cut off all the corners. Have them glue
their shapes and corners on sheets of construction paper.
Shape Mobiles
Need: cardboard and paper shapes, crayons, scissors, yarn, tape, hole puncher.
Cut yarn into strings. Knot one end of each piece of string and tape the other to make a needle. Children can punch holes in the shapes and string them for hanging.
Children may wish to use the cardboard cutouts to trace more shapes.
*Bulletin Boards
Remembering The Shapes
Needed:: 8 or more colored (depending on how many shapes you want them to learn) pieces of construction paper. Marker --- to label the shapes and their
Cut out each shape big enough to put up around the room.
Shapes --- circle, oval, rectangle, triangle, square, diamond, heart,
octagon, etc.
Cut out the shapes from large sheets of construction paper.
Label the shapes with names such as (teddy the triangle, olivia the oval, etc)
The children will begin to recognize the shapes by the names.
Contributed by: Ms Kelly
*Group Time
Shape Sort
Need: posterboard in red, blue, yellow (can use construction paper)
From red posterboard cut out:
1 large circle, 1 medium size square, 1 small triangle.
From blue posterboard cut out:
1 large square, 1 medium sized triangle, 1 small circle.
From yellow posterboard cut out:
1 large triangle, 1 medium sized circle, 1 small square.
Mix up the shapes and lay them out on a table or on the floor. Let the children take turns sorting the shapes into piles by color, by size and then by shape.
Felt Material Shapes
A to Z Kids Stuff eBook
Trace the shapes out first and pour rice or beans inside the shape glue or staple and you got a mini bean bag shape. Contributed By: Mary
Folding Shapes
Need: Cutouts of various geometric shapes, cutouts of some shapes folded in half.
Set out all shapes on a table. Then let children examine folded shapes (ask children not to unfold them). Point out that all folded shapes have a straight line and ask children to point to one.
Encourage children to match folded shapes to the complete shapes.
Circle Time Shape
Each month place tape on the circle time rug in a different shape.
One month you sit in a circle, the next month it could be a square...
Note: Masking tape children can pick at and tear. Postal tape works
better and is hard for a child to pick at and tear.
What Shape Is It?
Place objects with distinct shapes in the feely box (such as marbles, dice, pyramid, deck of cards, book, ball, button,etc). Encourage children to reach in and identify the shape of the object
they are feeling before they pull it out.
The Circle
A circle, a circle, (draw in the air)
Draw it round and fat. (use index finger to draw circle in the air)
A circle, a circle, (repeat action)
Draw it for a hat. (draw a circle in the air overhead)
A circle, a circle, (repeat action)
Draw it just for me. (draw in the air)
A circle, a circle, (repeat action)
Now jump and count to three: One! Two! Three!
The Circle
A circle, a circle, (draw in the air)
Draw it round and fat. (use index finger to draw circle in the air)
A circle, a circle, (repeat action)
Draw it for a hat. (draw a circle in the air overhead)
A circle, a circle, (repeat action)
Draw it just for me. (draw in the air)
A circle, a circle, (repeat action)
Now jump and count to three: One! Two! Three!
Draw a Circle
Draw a circle, draw a circle
Made very round.
Draw a circle, draw a circle
No corners can be found.
Suzy Circle
I'm Suzy Circle.
I'm happy as can be.
I go round and round.
Can you draw me?
Circles Four
Children act out actions in the fingerplay
Draw a circle in the air.
Draw a small one, now compare.
Make one big; make one small;
Now draw a short one; now make one tall.
Circle Trees
Draw a tree with bare branches on a large piece of blue paper and attach fringed green construction paper below it for grass.
Let the children glue circles they have punched out of construction paper with a hole punch on the branches and beneath the tree.
To make an autumn scene have the children punch out red, yellow and orange circles; to make a winter scene, white circles; to make a spring scene, pink circles; to make a summer scene green
Circle Round 1
Need: jar and container lids in a variety of sizes.
Invite children to draw small circles inside larger circles. Children start by tracing a large lid and then trace smaller and smaller lids inside. Encourage children to help one another find lids
that fit inside other lids.
Circle Round 2
Need: paper circles in various sizes
Children can glue the circles onto the paper. They can overlap the circles to create designs.
Circle Lollipops
Need: circles cut from posterboard, various-sized precut color construction paper circles, straws.
Give each child a circle cut from posterboard. Have the children design their lollipop using various circles as decorations. These colored circles can be glued onto the posterboard circle. The
handle of the Circle Lollipop is a straw stapled to the posterboard circle.
All of the Circle Lollipops could be displayed on the bulletin board by arranging them in a circular pattern. With the circles point outward and the straws pointing inward.
Purple Circle Prints
Need: circular sponges , paint(purple), newsprint.
Cut sponges into circular shapes (can have various size circles). Have each child select a circle sponge and dip the sponge into a pan of purple paint. Then child presses the sponge onto
newsprint to make a Purple Circle Print. Encourage children to make a sheet full of circular patterns.
*Circle Day
Have the children bring in circular objects to display on a round table. Or they can wear clothes that contain circular designs.
<Next Page>
Follow Us
Click here to include your favorite shapes activity in this theme!
|
{"url":"http://www.atozkidsstuff.com/shapes.html","timestamp":"2014-04-24T02:50:49Z","content_type":null,"content_length":"27625","record_id":"<urn:uuid:5e40f437-4659-41f4-8757-b2373bca62e9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Differencing two equations
Replies: 7 Last Post: Sep 8, 2013 2:56 AM
Messages: [ Previous | Next ]
Re: Differencing two equations
Posted: Feb 13, 2013 4:47 AM
Sometimes I think Mathematica might better be called MetaMathematica, not so
much a tool for doing mathematics but a tool for making the tools to do
Quite often it will be very useful to "combine and repackage" Mathematica
procedures into forms that are more natural and convenient for your
application. Writing some definitions and functions along these lines can be
very useful. You don't have to limit yourself to the hooks and buttons that
are manifest in plain Mathematica.
David Park
From: G B [mailto:g.c.b.at.home@gmail.com]
Maybe there's a mathematical reason why simple operations on equations
aren't handled as I'd expect. Mathematica is clearly a powerful tool, but
the few times I've tried picking it up in the past I wind up getting stymied
by the obtuse syntax for certain simple operations.
In this case, I could probably use the power of the tool all at once by
simply treating my set of equations as a unit and asking Mathematica to
reason about them as a group and solve my problem directly. The problem is
that I, and my audience, could probably gain some insight into the problem
by working through a few of the intermediate results. By treating my
equations lexically, rather than mathematically, I'm forgoing Mathematica's
expertise and only allowing it to ensure I don't make transcription errors.
Date Subject Author
2/11/13 Re: Differencing two equations Ray Koopman
2/12/13 Re: Differencing two equations David Bailey
2/13/13 Re: Differencing two equations David Park
2/13/13 Re: Differencing two equations Alexei Boulbitch
2/14/13 Re: Differencing two equations Noqsi
9/8/13 Re: Differencing two equations Youngjoo Chung
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2434493&messageID=8310896","timestamp":"2014-04-20T08:39:17Z","content_type":null,"content_length":"23285","record_id":"<urn:uuid:34187b40-4ced-47cb-85ee-61f2760f8938>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
floating point values
floating point values
i am trying to do the following:
program that reads two floating-point values representing angles and displays a message stating which angle has the greater tangent value. Note that the trigonometric functions in the math
library expect angles in radians, not degrees. 360 degrees = 2 x pi radians, so 45 degrees would be about 0.785 radians.
I have come up with code that takes in 2 numbers and displays the highest values. What I am stuck on is the calculations part. I am searching online and reading the books. Any help in this area
will be great. Here is the code I have so far:
#include <iostream>
using std::cout;
using std::cin;
using std::endl;
int main()
float x,y;
cout<< "Enter the first angle: ";
cin>> x;
cout<< "Enter the second angle: ";
cin>> y;
if ( x > y )
cout << " The highest angle is "<< x <<endl;
cout<< "The highest angle is "<< y <<endl;
All you have to do is compare tan(angle1) with tan(angle2). If tan(angle1) > tan(angle2) then output angle1 else output angle2.
Note: you should also be #include'ing <cmath> if you are going to be calling the tan function in your code. Also, your angles would be better as doubles and not floats.
i am not sure what you mean. dont i have to put in somewhere the formula to figure out which angle is larger? also i am not sure what you mean by tan(angle)
Originally Posted by jmarsh56
dont i have to put in somewhere the formula to figure out which angle is larger?
Originally Posted by jmarsh56
i am trying to do the following:
program that reads two floating-point values representing angles and displays a message stating which angle has the greater tangent value.
I'm working off of what you said in the original post... which is finding the angle has the greater tangent value, not finding the greater angle. To do that you must call the tangent function
which is called tan and pass into it the angle (in radians). So... as an example to display the tangent of an angle input by the user:
double angle;
cout << "Enter an angle (in radians): ";
cin >> angle;
cout << "The tangent of " << angle << " radians is " << tan(angle) << '.' << endl;
Enter an angle (in radians): .785398
The tangent of .785398 radians is 1.
thanks, i added that in with a few more lines i came up with and received the answer i was looking for. thanks again
The remark about a double being better than a float was because doubles are much more accurate.
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/73896-floating-point-values-printable-thread.html","timestamp":"2014-04-20T07:06:28Z","content_type":null,"content_length":"10699","record_id":"<urn:uuid:8ed9c1a8-f542-4f8f-b2ac-a041ae7c5aa7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Open Wide, Look Inside
Tomorrow, March 14th, is Pi Day. No, that’s not a typo. It is Pi day, as in 3.14159… you get the idea. The first Pi Day celebration was held at the San Francisco Exploratorium in 1988. That means
tomorrow is the 20th anniversary of Pi Day.
What is pi anyway? I’m sure you remember it from math in some formula you memorized, but do you really know what it is? Pi represents the relationship between a circle's diameter (its width) and its
circumference (the distance around the circle). Pi is always the same number, no matter the circle you use to compute it. In school we generally approximate pi to 3.14 in school, but professionals
often use more decimal places and extend the number to 3.14159.
One activity I loved doing with students was to ask them to bring in a can and lid that would soon be recycled. I always brought in a few extras so that there would be a variety of sizes. Each
student was given a lid and directed to measure the diameter and circumference. Students then divided the circumference by the diameter. We recorded the results on the overhead and discussed them.
Most were amazed to find that the results were nearly the same, allowing for some margin of error in measurement. This is a quick and fun and provides a meaningful way to introduce the concept of pi.
What will you be doing for Pi Day? I hope you’ll be celebrating in some small way. Perhaps you could make a pi necklace. If you’re looking for ideas, visit the Exploratorium pi site. Since tomorrow
will be poetry Friday, I just may write some pi poems.
|
{"url":"http://blog.richmond.edu/openwidelookinside/archives/107","timestamp":"2014-04-20T02:18:10Z","content_type":null,"content_length":"30015","record_id":"<urn:uuid:793dcf1c-2221-4964-b1eb-ed2b4b9ae867>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cambridge University puts Newton's papers online
(PhysOrg.com) -- In a project that has long been overdue, Cambridge University, thanks to a hefty gift from the Polonsky Foundation (supporter of education and arts) and a grant from Britain’s Joint
Information Services Committee (JISC), has put some of Isaac Newton’s original papers online for any and all to see. Of particular interest to most will be Newton’s own annotated copy of Philosophiae
Naturalis Principia Mathematica, considered by many to be one of the greatest published works by any scientist ever. For those looking for a little behind the scenes work, the University has also
published Newton’s so-called “Waste Book,” a diary of sorts that Newton inherited from his step-father which he took along with him and used for jotting notes about such things as his ideas on
calculus while away from school due to the Great Plague in 1665.
In viewing the material, which can be paged through in a PDF type format, by clicking arrows, it’s easy to see that the digitization of Newton’s papers have come none too soon, as many of the pages
are tattered, smeared and even burned-looking in some places. Thus, not only has putting the papers online made them accessible to anyone with a computer and an Internet connection, it has also
caused them to be saved for posterity in an electronic form that will ensure they will be accessible to all those who may wish to view them in the future as well.
It was in Principia Mathematica that Newton laid out his theories on the laws of motion and universal gravitation which some suggest laid the groundwork for Einstein’s theories on relativity. And if
that weren’t enough, Newton is also widely credited with “inventing” calculus, a mathematical science without which the modern world would simply not exist.
In all there are more than 4,000 pages of Newton’s work displayed on the site, which took a team of photo copyists the better part of this past summer to capture, though it’s obvious in looking at
the results that there were many slow-downs as pages had to have some restorative efforts made in order to present them. Those working on the project are to be commended as the results show great
care and dedication to a single purpose; namely showcasing one of history’s brightest minds.
It’s intriguing to see the notes Newton himself made on the first edition of Principia Mathematica, in preparing for the second, and happily, the University has announced that they will be adding
translations for all of the text and notes as early as next year.
The University has also announced plans to make the works of other famous scientists available as the future unfolds and hopefully will continue to add more of the Newton library too, as thus far
only about 20% of their collection has been made available online.
1.7 / 5 (6) Dec 12, 2011
The triagonal shape of Newton's head indicates, he suffered with Asperger's syndromme, a mild form of autism, which manifests with hypertrophy of prefrontal cortex:
Children with autism have broader upper face, including wider eyes, shorter middle region of the face, including the cheeks and nose, broader or wider mouth and philtrum above the top lip.
5 / 5 (1) Dec 12, 2011
Wow, thank you Cambridge !!
5 / 5 (5) Dec 12, 2011
The triagonal shape of Newton's head indicates, he suffered with Asperger's syndromme, a mild form of autism, which manifests with hypertrophy of prefrontal cortex:
Phrenology was debunked in the 19th Century champ. Although I'm sure you'll find somewhere to fit it into your wacky Dense Aether Theory....
1 / 5 (6) Dec 12, 2011
Phrenology was debunked in the 19th Century champ..
Well, many 19th century ideas are returning by now. At the case of autists this connection to phrenology is rather straightforward, because their brains grow faster http://news.bbc.c...7149.stm
Who occupies the religious stance by now? The fact something has been debunked before years doesn't mean, it cannot have its bit of truth later.
5 / 5 (6) Dec 12, 2011
1. Why did you post as Rawa1 and then respond to my post as Callippo? Are you trying to make people think you actually have real friends? Just like you are trying to convince people that Dense Aether
theory is a real theory?
2. What religious stance? Where did religion come into this?
3. The New Scientist article you linked was intersting. But it simply says that there seems to be a correlation between rapid brain growth as an infant and Autism. The article also clearly states you
can't measure someone's head to diagnose Autism.
Is there anything else you'd like to be wrong about today?
1.1 / 5 (8) Dec 12, 2011
The article also clearly states you can't measure someone's head to diagnose Autism.
So why it names "Head size gives autism early warning?"
What religious stance? Where did religion come into this?
Religion is in stance, there is absolutely nothing on the phrenology.
Why did you post as Rawa1 and then respond to my post as Callippo?
Because I'm using different computer at work and at home. I'm not trying to convince people about anything of my privacy.
5 / 5 (4) Dec 12, 2011
If you read entire articles instead of just the titles you may start to understand things better. The title provides a snapshot of the content of an article. The detail is contained in the body of
the article itself.
The headline "Head size gives autism early warning" refers to the fact that a correlation has been found between rapid growth of an infants brain size under 1 year old and Autism. If you read to the
end of the artcile you would have found the following:
"Janet Lainhart of the University of Utah told New Scientist that the new study sheds important new light on the developmental origins of autism, but she cautions that head size measurements alone
cannot be used to screen children for the disorder: "You certainly wouldn't want to be taking head circumference measures and telling parents, 'Your child is at risk for autism.'"
5 / 5 (6) Dec 12, 2011
Religion is in stance, there is absolutely nothing on the phrenology.
That sentence doesn't actually make sense.
Because I'm using different computer at work and at home. I'm not trying to convince people about anything of my privacy.
Physorg requires different user accounts for different computers do they? hmm must be a new security policy they have brought in. Except that its not because I use the same account at home and work.
Of course I have real friends and don;t need to invent them...
not rated yet Dec 13, 2011
Rawa/Callipo is a nut case...let's not encourage him
1 / 5 (2) Dec 13, 2011
Einstein, Newton displayed autistic traits
Accelerated Head Growth Can Predict Autism Before Behavioral Symptoms Start http://www.scienc...0127.htm
Why to deny the obvious things? If you didn't realize already, I'm not interested about opinions of other people - I'm just announcing the new obvious things here. If I wouldn't sure with it, I
wouldn't waste my time with talking about it here.
3.7 / 5 (3) Dec 13, 2011
Perhaps Zephir thinks he is a modern analog of Newton.
Though his picture isn't a good match his behavior is.
not rated yet Dec 13, 2011
Because I'm using different computer at work and at home. I'm not trying to convince people about anything of my privacy.
Physorg requires different user accounts for different computers do they? hmm must be a new security policy they have brought in. Except that its not because I use the same account at home
and work. Of course I have real friends and don;t need to invent them...
I think he means to say he doesn't want his work to spy on his rantings or figure out what he's posting... of course, posting a statement like that in a thread such as this does make a connection
that someone closely discerning his posts could use to figure it out.... But that's his problem.
1 / 5 (1) Dec 13, 2011
Perhaps Zephir thinks he is a modern analog of Newton
Rather counter-analogy? Newton introduced formal view into existing nonformal ideas about reality, whereas I'm introducing nonformal view into existing formal ideas about reality.
4 / 5 (4) Dec 13, 2011
Either way you think you have all the answers.
Newton's were testable because he formalized the ideas. You evade testing by turning your, whatever they are, into word wooze that you never take a stand on. Except of course that AWITBS is the
answer to the Life, The Universe and Everthing including politics and finance.
1 / 5 (3) Dec 13, 2011
You evade testing by turning your, whatever they are, into word wooze that you never take a stand on
AWT is defined with its postulates, I cannot change its meaning even if I would want to... It's not the fuzzy landscape of string theories (bosonic string theory, I string theory, IIA string theory,
IIB string theory, HO string theory, HE string theory, K-theory, F-theory, L-theory, M-theory, string field theory, little string theory, N=2 superstring theory, ...)
3.7 / 5 (3) Dec 14, 2011
AWT is defined with its postulates, I cannot change its meaning even if I would want to...
Of course you can. Its yours, even when you pretended to be several different people at least one male and one female, Alexa and Alizee. And the postulates you post tend to be word wooze when I see
them. Try a formal presentation of the postulates. All in one place and nicely labeled. A link to something in English will do. Google translation isn't good enough for that on the other blog you
. It's not the fuzzy landscape of string theories
Like or not it isn't fuzzy. It has 30 years of mathematical work behind it. That the landscape produces a vast array of possible universes doesn't bother me a bit as I think the idea that there MUST
be a single theory that will exactly produce one universe and that being ours is silly. That the math is unfinished does bother me. Thirty years and it still isn't finished. Still that is more than
you have.>>
3.7 / 5 (3) Dec 14, 2011
You left out a few. Kaluza-Klein theory, Quantum Loop Theory, Twistor theory and more but they all could be true in some universe IF they are mathematically valid. So far String HYPOTHESIS doesn't
have a fully valid math.
You however just have word wooze and hand waving.
|
{"url":"http://phys.org/news/2011-12-cambridge-university-newton-papers-online.html","timestamp":"2014-04-16T07:29:08Z","content_type":null,"content_length":"89037","record_id":"<urn:uuid:957bb2cf-3d09-4204-8e47-5c341e223128>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Volume of a Pentagonal Pyramid
Pentagonal Pyramid
Pyramid is a structure of a polygon with the upper part converging to a fixed point. The converging fixed point is called apex.
There are mainly two parts for a pyramid. They are apex and base. Apex is the converging fixed point and the bottom part of the pyramid is called base of the pyramid.
Pyramids are usually named according to their bases. The important types of pyramids are
1) Triangular Pyramid
2) Square pyramid
3) Pentagonal Pyramid
4) Hexagonal Pyramid
Pyramids are again divided into oblique and regular based on the shape of the base.
Lets consider Pentagonal pyramid.
Pentagonal Pyramid
A pyramid with a Pentagonal base is called Pentagonal pyramid.
The figure of a triangular pyramid is shown below
Pentagonal Pyramid Faces
A pentagonal pyramid has 6 faces. Top you can see 5 sides faces which are triangular in shape and base which is a pentagon.
What is a Pentagonal Pyramid?
A Pentagonal pyramid has 6 faces. It has 5 sides faces, which are triangular in shape and a bottom, base which is a Pentagon. Since the bottom face is Pentagon in shape, the name of the solid is
Pentagonal pyramid. There are 6 vertices and 10 edges to a Pentagonal pyramid.
Pentagonal Pyramid Net
A net is a pattern that we can cut and fold to make a model of the solid. The net of Pentagonal pyramid consists of 5 triangles and a Pentagon. The base of the pyramid is a Pentagonal and the five
lateral faces are triangles.
The net of the Pentagonal pyramid is given below.
We can fold the outer five triangles to form the Pentagonal pyramid.
Volume of a Pentagonal Pyramid
Volume is the total area in the inner part. We can say that it is the amount of water the solid will replace when immersed in a container of water.
Volume of a Pentagonal Pyramid Formula
V = ($\farc{1}{3}$) x Base area x h
where and h is the height of the Pentagonal pyramid.How do you Find the Volume of a Pentagonal Pyramid
Surface Area of a Pentagonal Pyramid
Surface area is the total area on the outer part of the pyramid. Surface area is divided into two. 1) Lateral surface area 2) Base area
Lateral surface area is the area of the lateral faces of the pyramid. That is, in the case of the pentagonal pyramid, the area of the five lateral faces constitutes the lateral surface area.
Base area is the area of the base of the pyramid.
Surface Area of a Pentagonal Pyramid Formula
Surface area = Base area + Lateral surface area
S = Base area + $\frac{1}{2}$ x perimeter of the base x slant height.How do you Find the Surface Area of a Pentagonal pyramid
|
{"url":"http://math.tutornext.com/geometry/pentagonal-pyramid.html","timestamp":"2014-04-19T04:28:56Z","content_type":null,"content_length":"19968","record_id":"<urn:uuid:b4aef750-941e-4a77-9d99-5404081736ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cambridge University puts Newton's papers online
(PhysOrg.com) -- In a project that has long been overdue, Cambridge University, thanks to a hefty gift from the Polonsky Foundation (supporter of education and arts) and a grant from Britain’s Joint
Information Services Committee (JISC), has put some of Isaac Newton’s original papers online for any and all to see. Of particular interest to most will be Newton’s own annotated copy of Philosophiae
Naturalis Principia Mathematica, considered by many to be one of the greatest published works by any scientist ever. For those looking for a little behind the scenes work, the University has also
published Newton’s so-called “Waste Book,” a diary of sorts that Newton inherited from his step-father which he took along with him and used for jotting notes about such things as his ideas on
calculus while away from school due to the Great Plague in 1665.
In viewing the material, which can be paged through in a PDF type format, by clicking arrows, it’s easy to see that the digitization of Newton’s papers have come none too soon, as many of the pages
are tattered, smeared and even burned-looking in some places. Thus, not only has putting the papers online made them accessible to anyone with a computer and an Internet connection, it has also
caused them to be saved for posterity in an electronic form that will ensure they will be accessible to all those who may wish to view them in the future as well.
It was in Principia Mathematica that Newton laid out his theories on the laws of motion and universal gravitation which some suggest laid the groundwork for Einstein’s theories on relativity. And if
that weren’t enough, Newton is also widely credited with “inventing” calculus, a mathematical science without which the modern world would simply not exist.
In all there are more than 4,000 pages of Newton’s work displayed on the site, which took a team of photo copyists the better part of this past summer to capture, though it’s obvious in looking at
the results that there were many slow-downs as pages had to have some restorative efforts made in order to present them. Those working on the project are to be commended as the results show great
care and dedication to a single purpose; namely showcasing one of history’s brightest minds.
It’s intriguing to see the notes Newton himself made on the first edition of Principia Mathematica, in preparing for the second, and happily, the University has announced that they will be adding
translations for all of the text and notes as early as next year.
The University has also announced plans to make the works of other famous scientists available as the future unfolds and hopefully will continue to add more of the Newton library too, as thus far
only about 20% of their collection has been made available online.
1.7 / 5 (6) Dec 12, 2011
The triagonal shape of Newton's head indicates, he suffered with Asperger's syndromme, a mild form of autism, which manifests with hypertrophy of prefrontal cortex:
Children with autism have broader upper face, including wider eyes, shorter middle region of the face, including the cheeks and nose, broader or wider mouth and philtrum above the top lip.
5 / 5 (1) Dec 12, 2011
Wow, thank you Cambridge !!
5 / 5 (5) Dec 12, 2011
The triagonal shape of Newton's head indicates, he suffered with Asperger's syndromme, a mild form of autism, which manifests with hypertrophy of prefrontal cortex:
Phrenology was debunked in the 19th Century champ. Although I'm sure you'll find somewhere to fit it into your wacky Dense Aether Theory....
1 / 5 (6) Dec 12, 2011
Phrenology was debunked in the 19th Century champ..
Well, many 19th century ideas are returning by now. At the case of autists this connection to phrenology is rather straightforward, because their brains grow faster http://news.bbc.c...7149.stm
Who occupies the religious stance by now? The fact something has been debunked before years doesn't mean, it cannot have its bit of truth later.
5 / 5 (6) Dec 12, 2011
1. Why did you post as Rawa1 and then respond to my post as Callippo? Are you trying to make people think you actually have real friends? Just like you are trying to convince people that Dense Aether
theory is a real theory?
2. What religious stance? Where did religion come into this?
3. The New Scientist article you linked was intersting. But it simply says that there seems to be a correlation between rapid brain growth as an infant and Autism. The article also clearly states you
can't measure someone's head to diagnose Autism.
Is there anything else you'd like to be wrong about today?
1.1 / 5 (8) Dec 12, 2011
The article also clearly states you can't measure someone's head to diagnose Autism.
So why it names "Head size gives autism early warning?"
What religious stance? Where did religion come into this?
Religion is in stance, there is absolutely nothing on the phrenology.
Why did you post as Rawa1 and then respond to my post as Callippo?
Because I'm using different computer at work and at home. I'm not trying to convince people about anything of my privacy.
5 / 5 (4) Dec 12, 2011
If you read entire articles instead of just the titles you may start to understand things better. The title provides a snapshot of the content of an article. The detail is contained in the body of
the article itself.
The headline "Head size gives autism early warning" refers to the fact that a correlation has been found between rapid growth of an infants brain size under 1 year old and Autism. If you read to the
end of the artcile you would have found the following:
"Janet Lainhart of the University of Utah told New Scientist that the new study sheds important new light on the developmental origins of autism, but she cautions that head size measurements alone
cannot be used to screen children for the disorder: "You certainly wouldn't want to be taking head circumference measures and telling parents, 'Your child is at risk for autism.'"
5 / 5 (6) Dec 12, 2011
Religion is in stance, there is absolutely nothing on the phrenology.
That sentence doesn't actually make sense.
Because I'm using different computer at work and at home. I'm not trying to convince people about anything of my privacy.
Physorg requires different user accounts for different computers do they? hmm must be a new security policy they have brought in. Except that its not because I use the same account at home and work.
Of course I have real friends and don;t need to invent them...
not rated yet Dec 13, 2011
Rawa/Callipo is a nut case...let's not encourage him
1 / 5 (2) Dec 13, 2011
Einstein, Newton displayed autistic traits
Accelerated Head Growth Can Predict Autism Before Behavioral Symptoms Start http://www.scienc...0127.htm
Why to deny the obvious things? If you didn't realize already, I'm not interested about opinions of other people - I'm just announcing the new obvious things here. If I wouldn't sure with it, I
wouldn't waste my time with talking about it here.
3.7 / 5 (3) Dec 13, 2011
Perhaps Zephir thinks he is a modern analog of Newton.
Though his picture isn't a good match his behavior is.
not rated yet Dec 13, 2011
Because I'm using different computer at work and at home. I'm not trying to convince people about anything of my privacy.
Physorg requires different user accounts for different computers do they? hmm must be a new security policy they have brought in. Except that its not because I use the same account at home
and work. Of course I have real friends and don;t need to invent them...
I think he means to say he doesn't want his work to spy on his rantings or figure out what he's posting... of course, posting a statement like that in a thread such as this does make a connection
that someone closely discerning his posts could use to figure it out.... But that's his problem.
1 / 5 (1) Dec 13, 2011
Perhaps Zephir thinks he is a modern analog of Newton
Rather counter-analogy? Newton introduced formal view into existing nonformal ideas about reality, whereas I'm introducing nonformal view into existing formal ideas about reality.
4 / 5 (4) Dec 13, 2011
Either way you think you have all the answers.
Newton's were testable because he formalized the ideas. You evade testing by turning your, whatever they are, into word wooze that you never take a stand on. Except of course that AWITBS is the
answer to the Life, The Universe and Everthing including politics and finance.
1 / 5 (3) Dec 13, 2011
You evade testing by turning your, whatever they are, into word wooze that you never take a stand on
AWT is defined with its postulates, I cannot change its meaning even if I would want to... It's not the fuzzy landscape of string theories (bosonic string theory, I string theory, IIA string theory,
IIB string theory, HO string theory, HE string theory, K-theory, F-theory, L-theory, M-theory, string field theory, little string theory, N=2 superstring theory, ...)
3.7 / 5 (3) Dec 14, 2011
AWT is defined with its postulates, I cannot change its meaning even if I would want to...
Of course you can. Its yours, even when you pretended to be several different people at least one male and one female, Alexa and Alizee. And the postulates you post tend to be word wooze when I see
them. Try a formal presentation of the postulates. All in one place and nicely labeled. A link to something in English will do. Google translation isn't good enough for that on the other blog you
. It's not the fuzzy landscape of string theories
Like or not it isn't fuzzy. It has 30 years of mathematical work behind it. That the landscape produces a vast array of possible universes doesn't bother me a bit as I think the idea that there MUST
be a single theory that will exactly produce one universe and that being ours is silly. That the math is unfinished does bother me. Thirty years and it still isn't finished. Still that is more than
you have.>>
3.7 / 5 (3) Dec 14, 2011
You left out a few. Kaluza-Klein theory, Quantum Loop Theory, Twistor theory and more but they all could be true in some universe IF they are mathematically valid. So far String HYPOTHESIS doesn't
have a fully valid math.
You however just have word wooze and hand waving.
|
{"url":"http://phys.org/news/2011-12-cambridge-university-newton-papers-online.html","timestamp":"2014-04-16T07:29:08Z","content_type":null,"content_length":"89037","record_id":"<urn:uuid:957bb2cf-3d09-4204-8e47-5c341e223128>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to make a proper guess at a particular solution
October 17th 2010, 10:32 PM #1
Nov 2008
How to make a proper guess at a particular solution
]My question is how do you from a proper guess at the particuliar solution of a differential equation for example.
Say this is the right hand of the equation
$<br /> sin(2t)+te^t+4<br />$
I know it helps to break it up into seperate parts, but I am working with a book solution here and my guess are just off it seems from the books proper guess.
My guess for
$<br /> Acos(2t)+Bsin(2t)<br />$
Book's guess
$<br /> t(Acos(2t)+Bsin(2t)<br />$
and the for the term of 4 the book's guess was
$<br /> Dt^2$
How do you properly determine a guess like this?
]My question is how do you from a proper guess at the particuliar solution of a differential equation for example.
Say this is the right hand of the equation
$<br /> sin(2t)+te^t+4<br />$
I know it helps to break it up into seperate parts, but I am working with a book solution here and my guess are just off it seems from the books proper guess.
My guess for
$<br /> Acos(2t)+Bsin(2t)<br />$
Book's guess
$<br /> t(Acos(2t)+Bsin(2t)<br />$
Look at the roots of the characteristic equation, or the general solution to the homogeneous equation. In the first case I bet (but cant guarantee since you have not posted the left hand side of
the problem) that 2i is a root, and in the second case the general solution includes sin and cos functions of 2t.
Why , Yes, it is (+,-) 2i and 0,0
Say, could you take the time to post the left hand side? I'm a newbie in a class, and I would be very interested in seeing the whole problem.
Sorry for a late reply, very busy.
Thanks for the informative reply Captain Black.
So it would seem so to take a look at the complimentary/homogenous solution to perform your guess on the particuliar.
My guess would not be correct because, let me try to see why. But you can help explain.
1. The obvious you stated, a term in the complimentary solution cannot be a term in the particuliar solution
hence why the addition of the variable, t, in the particuliar solution ?
Now another question as well. For the term of (4) in the right-hand side of the differiential equation's solution.
Why not an undetermined coeff. as such would not be suitable?
$<br /> Dt<br />$
Is that because it would conflict at our guess in the particuliar solution of the term
$<br /> t(Asin(2t)+Bcos(2t))<br />$
October 17th 2010, 10:57 PM #2
Grand Panjandrum
Nov 2005
October 18th 2010, 05:56 AM #3
Nov 2008
October 19th 2010, 01:46 PM #4
Sep 2010
October 19th 2010, 07:27 PM #5
Grand Panjandrum
Nov 2005
October 25th 2010, 09:33 PM #6
Nov 2008
|
{"url":"http://mathhelpforum.com/differential-equations/160051-how-make-proper-guess-particular-solution.html","timestamp":"2014-04-20T01:09:30Z","content_type":null,"content_length":"47615","record_id":"<urn:uuid:0bdfc731-ebb9-4cd7-81b7-da5e4aacc73d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Functional Graph Library
FGL - A Functional Graph Library
The functional graph library provides a collection of graph operations to be used in functional languages, such as ML or Haskell.
The library is based on the idea of inductive graphs. An inductive view of graphs is given by the following description: a graph is either empty, or it is extended by a new node together with edges
from its predecessors and to its successors. This idea is explained the paper Inductive Graphs and Functional Graph Algorithms. The library is in an intermediate stage and is available in two
• Standard ML (1997 Standard). The focus is on providing a variety of modules containing alternative implementations of functional graphs such that for a specific application the most efficient
implementation can be chosen.
Go to FGL/ML.
• Haskell (1998 Standard). This is the second, but still preliminary version. In particular, currently only the binary tree implementation of functional graphs is provided (all the advanced
implementations of the ML version make use of imperatively updatable arrays).
Go to FGL/Haskell.
New version available!
|
{"url":"http://web.engr.oregonstate.edu/~erwig/fgl/","timestamp":"2014-04-19T09:26:10Z","content_type":null,"content_length":"3143","record_id":"<urn:uuid:209ddaaf-0711-4507-9e0d-e8f1324a8ca6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Learning (by) Teaching
This Monday, a very basic investigation of trig functions/reinforcement of function transformation skills with a standard/honors junior class. I was toying with the idea of just throwing a matching
activity to the students, but I'm still new to using those and besides they take so much work cutting and organizing.
The activity below is
characteristic of how I teach and what I'm aiming for is student ownership (discovery, confidence, etc) and connection to previous materials.
Students will be doing this in pairs, each pair doing either the sine or cosine activity. Afterwards, pairs will combine into groups of four so that each group has one pair which has done the sine
and one which has done the cosine. They will then compare and discuss the question at the bottom.
Any suggestions for improvement are, of course, welcome.
Investigation Transformations of Trigonometric Functions
3 comments:
1. I'm guessing this is a review?
I might use a smaller angle, say pi/6. But I'm not at all sure that would work better.
How did you make the graph paper? (Or, where did you find it?) I can't seem to get multiples of pi on the x-axis. (My latest attempt was for a graph I drew in Geogebra.)
2. Sue, this is kinda a review of function transformations, by applying it to understanding trigonometric functions (amplitude, period, principal axis). As such, it's both a review and an
The graph paper. Oh my. I made it in geogebra, then made the labels out of text-boxes in Microsoft Word. I'm sure there is good graph paper available online somewhere, but I wasn't able to find
it quickly. By the way that's also why I didn't use smaller intervals on the angle... it would have been too cluttered and too much work.
3. Oh and I just noticed a typo, corrected for today's class: general form should be y = A*sin(B(x-C))+D, not -D. Not that it matters...
|
{"url":"http://juliatsygan.blogspot.com/2011/03/features-of-trigonometric-functions.html","timestamp":"2014-04-18T13:06:26Z","content_type":null,"content_length":"82965","record_id":"<urn:uuid:bdcfbe93-9b0a-4821-a580-35a21259f7d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Movies illustrating the Dehn twists about the rabbit's ears
These movies illustrate the "rabbit trick", i.e. Douady's question of determining the Thurston equivalence class of the topological polynomial obtained by applying a Dehn twist to the rabbit
polynomial. The answer is given in the paper Thurston equivalence of topological polynomials, with Volodymyr V. Nekrashevych.
In that paper, we constructed a branched covering on the moduli space of topological polynomials with the same post-critical dynamics as the rabbit polynomial. The movies illustrate the Julia set of
these topological polynomials as a point moves in moduli space; they should be thought of as an exploration of a fractal in C^2, which is a bundle over a moduli space fractal (in grey) with fibres
the Julia sets of the corresponding topological polynomial.
I wrote in PDF this brief explanation of the movies, with active hyperlinks to them.
These movies are best understood in conjunction with Nekrashevych's preprint An uncountable family of 3-generated groups acting on the binary tree.
The movies are produced by the following C++ code, run on PC/Linux: movie.C, Makefile.
AVI movies
MPG movies
MOV movies
|
{"url":"http://www.uni-math.gwdg.de/laurent/pub/rabbit/","timestamp":"2014-04-16T19:27:25Z","content_type":null,"content_length":"4058","record_id":"<urn:uuid:cee43523-68c0-4e74-aaa9-e5bf0e4a8fa6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fearless Symmetry: Exposing the Hidden Patterns of Numbers by Avner Ash
Amazon.com Product Description (ISBN 0691124922, Hardcover)
Mathematicians solve equations, or try to. But sometimes the solutions are not as interesting as the beautiful symmetric patterns that lead to them. Written in a friendly style for a general
audience, Fearless Symmetry is the first popular math book to discuss these elegant and mysterious patterns and the ingenious techniques mathematicians use to uncover them.
Hidden symmetries were first discovered nearly two hundred years ago by French mathematician Évariste Galois. They have been used extensively in the oldest and largest branch of mathematics--number
theory--for such diverse applications as acoustics, radar, and codes and ciphers. They have also been employed in the study of Fibonacci numbers and to attack well-known problems such as Fermat's
Last Theorem, Pythagorean Triples, and the ever-elusive Riemann Hypothesis. Mathematicians are still devising techniques for teasing out these mysterious patterns, and their uses are limited only by
the imagination.
The first popular book to address representation theory and reciprocity laws, Fearless Symmetry focuses on how mathematicians solve equations and prove theorems. It discusses rules of math and why
they are just as important as those in any games one might play. The book starts with basic properties of integers and permutations and reaches current research in number theory. Along the way, it
takes delightful historical and philosophical digressions. Required reading for all math buffs, the book will appeal to anyone curious about popular mathematics and its myriad contributions to
everyday life.
(retrieved from Amazon Mon, 30 Sep 2013 13:42:05 -0400)
|
{"url":"https://www.librarything.com/work/1062406","timestamp":"2014-04-20T08:06:02Z","content_type":null,"content_length":"76560","record_id":"<urn:uuid:956df129-9afe-4f87-a34d-a5e84ea56b2d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Wavelength Assignment in Optical Networks
with Fixed Fiber Capacity
Matthew Andrews and Lisa Zhang
Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974
Abstract. We consider the problem of assigning wavelengths to de
mands in an optical network of m links. We assume that the route of
each demand is fixed and the number of wavelengths available on a fiber
is some parameter µ. Our aim is to minimize the maximum ratio be
tween the number of fibers deployed on a link e and the number of fibers
required on the same link e when wavelength assignment is allowed to
be fractional.
Our main results are negative ones. We show that there is no constant
factor approximation algorithm unless NP#ZPP. No such negative re
sult is known if the routes are not fixed. In addition, unless all lan
guages in NP have randomized algorithms with expected running time
O(n polylog(n) ), we show that there is no log # µ approximation for any
# # (0, 1) and no log # m approximation for any # # (0, 0.5). Our anal
ysis is based on hardness results for the problem of approximating the
chromatic number in a graph.
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/833/3912930.html","timestamp":"2014-04-17T07:46:08Z","content_type":null,"content_length":"8419","record_id":"<urn:uuid:01646fae-7448-4d57-b760-3e32d2d4d944>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the possible minimum
June 9th 2009, 12:08 PM
Find the possible minimum
What is the possible minimum for the expression with the range of x and y given as |SQRT X| / (2y +1) when x and y are given as:
1<=x<=4 0<=y<=8
(That's absolute value of the square root of x in the first part.)
June 9th 2009, 12:29 PM
If you are looking for the minimum in a fraction,
then you want the smallest possible numerator and the largest possible denominator.
$\sqrt{1} = 1$
$\sqrt{4} = 2$
use 1 for the numerator, x=1
$(2\times 0+1) = 1$
$(2\times 8+1) = 17$
use 17 for the denominator, y=8
$\frac {\sqrt{1}}{2\times 8+1} =\frac {1}{17} =$ the minimum
June 9th 2009, 12:32 PM
make denominator as large as possible and the numerator as small as possible
what is the max value for denominator in the interval that they gave you it is 1 or 4 ..or ??
and the numerator as small as possible it is 4 or 3 or ??
did recognize what I mean ?
someone reply before me
June 9th 2009, 12:43 PM
Thanks...I knew I had to use the largest X and smallest Y, but I went blank on words for the explanation.
|
{"url":"http://mathhelpforum.com/algebra/92334-find-possible-minimum-print.html","timestamp":"2014-04-16T21:05:02Z","content_type":null,"content_length":"7382","record_id":"<urn:uuid:15565d85-b7bd-4b72-8ae4-4083a771b721>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
A paraglider of mass 90 kg is pulled by a rope attached to a speedboat. With the rope making an angle of 20 degrees, to the horizontal the paragilder is moving in a straight line parellel to the
surface of the water with an acceleration of 1.2ms^-2. The tension in the rope is 250N. Calculate the magnitude of the vertical lift force acting on the glider.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5058b14ce4b0cc122892f5bb","timestamp":"2014-04-18T20:50:38Z","content_type":null,"content_length":"79733","record_id":"<urn:uuid:269acbce-4311-477e-84c8-d0617ed28121>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Verona, NJ Science Tutor
Find a Verona, NJ Science Tutor
You have important goals for your education. I can help you get the test score you need to accomplish your goals. I scored a perfect 800 on the SAT Critical Reading, Mathematics, and Writing
tests, and I've been helping students improve their scores for over ten years.
10 Subjects: including ACT Science, SAT math, SAT reading, SAT writing
...The qualities that I have as a tutor are that I'm very knowledgeable and enthusiastic with the subject matter,I can deliver it in a very simple and understanding fashion for the student, and
most importantly, I'm very patient because I understand the many frustrations the student is already facin...
18 Subjects: including biostatistics, calculus, statistics, algebra 1
For the past decade, I have worked with students in a variety of academic settings. While working on my MSED in Childhood Mathematics Education (Grades 1-6), I worked in New York City public
schools as a substitute teacher, and as a tutor both privately as well as at Sylvan Learning Center. I have tutored k-12th grade students in all subject areas and provided homework support as
18 Subjects: including physics, writing, algebra 1, elementary (k-6th)
...The SAT tests skills in arithmetic, number theory, algebra and geometry. Unlike the ACT, many of the most-used formulas are printed right on the test booklet, so students need to do less
memorization. I can help students re-learn those math skills that are weak, or have been forgotten, and I can teach them test-taking strategies and shortcuts to optimize their math scores.
9 Subjects: including ACT Science, SAT math, SAT writing, GMAT
...My greatest strength is understanding how my pupil thinks and knowing how to connect with him or her and pass along knowledge. I seek to empower learners, not only to understand what I am
teaching, but also to understand how to learn and succeed in whatever they attempt. I am experienced in tea...
43 Subjects: including chemistry, physical science, English, writing
Related Verona, NJ Tutors
Verona, NJ Accounting Tutors
Verona, NJ ACT Tutors
Verona, NJ Algebra Tutors
Verona, NJ Algebra 2 Tutors
Verona, NJ Calculus Tutors
Verona, NJ Geometry Tutors
Verona, NJ Math Tutors
Verona, NJ Prealgebra Tutors
Verona, NJ Precalculus Tutors
Verona, NJ SAT Tutors
Verona, NJ SAT Math Tutors
Verona, NJ Science Tutors
Verona, NJ Statistics Tutors
Verona, NJ Trigonometry Tutors
Nearby Cities With Science Tutor
Bloomfield, NJ Science Tutors
Caldwell, NJ Science Tutors
Cedar Grove, NJ Science Tutors
Essex Fells Science Tutors
Fairfield, NJ Science Tutors
Glen Ridge Science Tutors
Little Falls, NJ Science Tutors
Livingston, NJ Science Tutors
Montclair, NJ Science Tutors
North Caldwell, NJ Science Tutors
Roseland, NJ Science Tutors
Totowa Science Tutors
Wallington Science Tutors
West Orange Science Tutors
Woodland Park, NJ Science Tutors
|
{"url":"http://www.purplemath.com/verona_nj_science_tutors.php","timestamp":"2014-04-17T15:33:21Z","content_type":null,"content_length":"24166","record_id":"<urn:uuid:3c31ac48-d570-4fc3-b8ab-acda818deb5b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
215(9): Self Consistent Calculation of the Force Law for Orbital Precession
This note summarizes the important calculations that lead to the classical force law for orbital precession. The calculation is done in two entirely different ways, giving the same result. The
lagrangian method is very elegant and simple, and starts from eq. (22). The kinematic method is more complicated but gives the same result. These important calculations mean that:
1) Orbital precession can be explained classically and cannot be used as a test for general relativity.
2) The correct force law is a sum of inverse square and cubed terms in r. The force law of general relativity of the Einstein type is incorrect, it gives a sum of inverse square and fourth terms
using the same lagrangian method. There are several big mistakes in Einsteinian general relativity (EGR) and this is one of the worst ones. The EGR theory is complete nonsense for this reason alone.
The same force law is true for any conical section, so is true also for gravitational deflection. For the hyperbola the eccentricity epsilon is greater than unity, and as Horst Eckardt showed
graphically yesterday, the conical section can be made into a circle by varying x. If the photon has mass m light deflection due to gravitation is governed by the same conical section and the same
force law.
|
{"url":"http://drmyronevans.wordpress.com/2012/04/13/2159-self-consistent-calculation-of-the-force-law-for-orbital-precession/","timestamp":"2014-04-20T19:28:00Z","content_type":null,"content_length":"26535","record_id":"<urn:uuid:ebd8670f-6606-44ab-aea3-8099575e6930>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is it possible to create this image with one continuous line
General Question
Is it possible to create this image with one continuous line and without going through the same line twice?
Also how can I easily show if any variation of this type of question is true or not? Or is it a NP hard problem?
Observing members: 0 Composing members: 0
6 Answers
You can do trial and error for the next 24 hrs. or find the mathematical proof. It should be on google, as this is a common type of puzzle.
It’s impossible. This is covered by Euler’s Theorem in graph theory. It goes back to the 1700s. (Here and here). A graph (consisting of vertex points connected by line segments) cannot be traced
without lifting the pencil if there are more than two odd vertices. A vertex is odd or even depending on how many line segments connect to it. In your graph, all three bottom vertices are odd (3, 5,
3). That makes it impossible.
It makes intuitive sense: An even vertex can be entered and exited again an integral number of times. An odd, vertex, however, must represent either the start or finish of thbe path. Where does that
leave the 3rd vertex?
No, it’s not possible. I just worked on it for about 20 minutes. You can do the first house very easily, but not with the second one attatched.
It is not possible and here is why:
If the graph is traceable, all vertices must have an even number of connections from it, except for possibly two of them. This is because you must both arrive and leave at every point except for the
start and the end. You may visit each point any number of times, but since you must both arrive and leave (2 connections) each time you visit a vertex, the total number of connections for a
transversed vertex is
2*(# of visits) = # of connections, an even number
If the start and the end are on different vertices, those two would have an odd number. This is a quick method of finding the start and finish vertices, by locating the two vertices with an odd
number of connections. If the graph has more than two vertices with an odd number, then is not traceable. If one vertex is the start and a second is the end, then the others must be transversed and
the above formula applies to the number of connections. For the formula to give the correct number of connections for those other odd vertices, we must have visited them a fractional number of times,
which is not possible.
In the example, we have four vertices with an odd number of connections. These four points have 3, 3, 7, and 5 connections. If the two 3’s are the start and end, then we must have visited the vertex
with 7 connections 3.5 times and the vertex with 5 connections 2.5 times, a logical impossibility.
Here’s a more fluffy answer for this specific drawing:
Three of the vertexes are odd. (the one connecting the two roofs, and the two outside bottom corners of the drawing)
So even if you chose one of them as your start point, and the other as your end point, you’re still gonna have that third one to deal with. And since it has an odd number of lines connecting to it,
you will not be able to draw all those lines and still continue to your chosen end point.
Answer this question
This question is in the General Section. Responses must be helpful and on-topic.
|
{"url":"http://www.fluther.com/121290/is-it-possible-to-create-this-image-with-one-continuous-line/","timestamp":"2014-04-19T03:35:54Z","content_type":null,"content_length":"38513","record_id":"<urn:uuid:5916c31e-3158-49d6-80bf-c631d9ce448e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] cumsum much slower than simple loop?
Pauli Virtanen pav@iki...
Fri Feb 10 02:55:26 CST 2012
10.02.2012 05:39, Dave Cook kirjoitti:
> Why is numpy.cumsum (along axis=0) so much slower than a simple loop?
> The same goes for numpy.add.accumulate
The reason is loop ordering. The reduction operator when using `cumsum`
or `add.reduce` does the summation in the inmost loop, whereas the
`loopcumsum` has the summation in the outmost loop.
Although both algorithms do the same number of operations, the latter is
more efficient with regards to CPU cache (and maybe memory data
dependency) --- the arrays are in C-order so summing along the first
axis is wasteful as the elements are far from each other in memory.
The effect goes away, if you use a Fortran-ordered array:
a = np.array(a, order='F')
print a.shape
Numpy does not currently have heuristics to determine when swapping the
loop order would be beneficial in accumulation and reductions. It does,
however, have the heuristics in place for elementwise operations.
Pauli Virtanen
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-February/060298.html","timestamp":"2014-04-17T01:47:53Z","content_type":null,"content_length":"3695","record_id":"<urn:uuid:80c3fda8-7496-4d21-82f3-03e6a46cf3bd>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rodeo, CA Algebra 2 Tutor
Find a Rodeo, CA Algebra 2 Tutor
...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I
was trained in and provided materials for each of these topics. I often find, when working with my stu...
20 Subjects: including algebra 2, calculus, geometry, biology
GENERAL EXPERIENCE: As an undergraduate student at Florida International University, I often found myself tutoring my study groups in several subject areas, from Biochemistry to advanced Calculus,
which greatly helped my performance in each class. By my senior year, I obtained a teaching assistant...
24 Subjects: including algebra 2, chemistry, calculus, physics
I have been helping others to learn since I was in school. I have tutored or developed lessons to help people learn topics such as algebra, electronics, and the Bible. My occupation has been as a
technical trainer over the past 30 years.
21 Subjects: including algebra 2, physics, calculus, algebra 1
...While I was there, I also took various Calculus courses and courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways
the world works, and the value of a good education. That said, I’ve been through the education system, and have seen its flaws, and places where it could work better.
6 Subjects: including algebra 2, physics, calculus, algebra 1
...I have tutored others, off the record, in various K-6 subject matter. All in all, I have had plenty of experience tutoring elementary level students and have helped them better understand their
subject material. I started playing the clarinet in 5th grade.
34 Subjects: including algebra 2, chemistry, writing, calculus
|
{"url":"http://www.purplemath.com/Rodeo_CA_Algebra_2_tutors.php","timestamp":"2014-04-18T05:41:38Z","content_type":null,"content_length":"23937","record_id":"<urn:uuid:d024fd1c-1ab4-4e6a-89d4-6f8bf9f51dce>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guy Medal in Gold
Royal Statistical Society Guy Medal in Gold
The Royal Statistical Society Guy Medal in Gold is named after the distinguished statistician, William Guy FRS. The medal is:-
... intended to encourage the cultivation of statistics in their scientific aspects and promote the application of numbers to the solution of important problems in all the relations of life in
which the numerical method can be employed, with a view to determining the laws which regulate them.
Fellows of the Royal Statistical Society who have made innovative contributions to the theory or application of statistics are considered for Gold Medals.
1892 Charles Booth 1953 A Bradford Hill 1981 D G Kendall
1894 Robert Giffen 1955 E S Pearson 1984 H E Daniels
1900 J Athelsten Baines 1960 F Yates 1986 B Benjamin
1907 F Y Edgeworth 1962 Harold Jeffreys 1987 R L Plackett
1908 P G Craigie 1966 J Neyman 1990 P Armitage
1911 G Udny Yule 1968 M G Kendall 1993 G E P Box
1920 T H C Stevenson 1969 M S Bartlett 1996 P Whittle
1930 A W Flux 1972 H Cramer 1999 Michael Healy
1935 A L Bowley 1973 David Cox 2002 D Lindley
1945 M Greenwood 1975 G A Barnard 2005 John Nelder
1946 R A Fisher 1978 Roy Allen 2008 James Durbin
MacTutor links:
Royal Statistical Society, etc:
Royal Statistical Society Guy Gold Medal
Royal Statistical Society Guy Silver Medal
Royal Statistical Society Guy Bronze Medal
Other Web site:
Royal Statistical Society Web site
JOC/EFR August 2009
The URL of this page is:
|
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Societies/RSSGuyGold.html","timestamp":"2014-04-19T06:52:13Z","content_type":null,"content_length":"6720","record_id":"<urn:uuid:16da6fa9-4da0-4776-b272-7a227970dbdd>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guttenberg, NJ Math Tutor
Find a Guttenberg, NJ Math Tutor
...I also recently taught a physics course at a local university. My student evaluations said that I was an excellent one-on-one teacher. Based on teaching in the classroom and one-on-one
tutoring session, I think that anyone can learn math and science.
10 Subjects: including calculus, differential equations, algebra 1, algebra 2
...These office hours were for any math students, so I quickly became adept at answering questions about almost any math-related problem, be it a problem with an integral for a calculus student,
or a misunderstanding about trigonometry. I try to guide students to understanding the material by tryin...
18 Subjects: including differential equations, probability, discrete math, SAT math
...For the math whiz, I've got some material that'll knock their socks off. As a graduate student in a biomedical science field, I'm a huge biology dork. I've taken too many graduate biology
courses to count.
17 Subjects: including calculus, GRE, SAT writing, Regents
...To guarantee that each of my students receive my absolute focus and attention, I take only a few clients per year. Regarding my educational background, I have attended medical, law, and
graduate school and tutored students throughout - to say that I am passionate about learning and education wou...
34 Subjects: including algebra 1, grammar, precalculus, trigonometry
...I tutor many subjects in math, including algebra 1, algebra 2, prealgebra, and pre-calculus. I also teach high school biology and proofreading and writing skills of all levels, having had
extensive training and experience in professional and academic writing. I have backgrounds in test prep in LSAT and ACT, and I am trained in and have taught classical and contemporary singing.
24 Subjects: including calculus, ACT Math, reading, linear algebra
Related Guttenberg, NJ Tutors
Guttenberg, NJ Accounting Tutors
Guttenberg, NJ ACT Tutors
Guttenberg, NJ Algebra Tutors
Guttenberg, NJ Algebra 2 Tutors
Guttenberg, NJ Calculus Tutors
Guttenberg, NJ Geometry Tutors
Guttenberg, NJ Math Tutors
Guttenberg, NJ Prealgebra Tutors
Guttenberg, NJ Precalculus Tutors
Guttenberg, NJ SAT Tutors
Guttenberg, NJ SAT Math Tutors
Guttenberg, NJ Science Tutors
Guttenberg, NJ Statistics Tutors
Guttenberg, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/guttenberg_nj_math_tutors.php","timestamp":"2014-04-19T04:56:57Z","content_type":null,"content_length":"23973","record_id":"<urn:uuid:6e60d1df-3629-4818-bc7d-fe8a7318277c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thomas J. Sargent
with Martin Ellison
July 2012
The welfare cost of random consumption fluctuations is known from De Santis (2007) to be increasing in the level of individual consumption risk in the economy. It is also known from Barillas et al.
(2009) to increase if agents in the economy care about robustness to model misspecification. In this paper, we combine these two effects and calculate the cost of business cycles in an economy with
consumers who face individual consumption risk and who fear model misspecification. We find that individual risk has a greater impact on the cost of business cycles if agents already have a
preference for robustness. Correspondingly, we find that endowing agents with concerns about a preference for robustness is more costly if there is already individual risk in the economy. The
combined effect exceeds the sum of the individual effects.
with Lars Peter Hansen
July 2012
For each of three types of ambiguity, we compute a robust Ramsey plan and an associated worst-case probability model. Ex post, ambiguity of type I implies endogenously distorted homogeneous beliefs,
while ambiguities of types II and III imply distorted heterogeneous beliefs. Martingales characterize alternative probability specifications and clarify distinctions among the three types of
ambiguity. We use recursive formulations of Ramsey problems to impose local predictability of commitment multipliers directly. To reduce the dimension of the state in a recursive formulation, we
transform the commitment multiplier to accommodate the heterogeneous beliefs that arise with ambiguity of types II and III. Our formulations facilitate comparisons of the consequences of these
alternative types of ambiguity.
with Lars Peter Hansen
January 2011
We formulate two continuous-time hidden Markov models in which a decision maker distrusts both his model of state dynamics and a prior distribution of unobserved states. We use relative entropy's
role in statistical model discrimination % using historical data, we use measures of statistical model detection to modify Bellman equations in light of model ambiguity and to calibrate parameters
that measure ambiguity. We construct two continuous time models that are counterparts of two discrete-time recursive models of \cite{hansensargent07}. In one, hidden states appear in continuation
value functions, while in the other, they do not. The formulation in which continuation values do not depend on hidden states shares features of the smooth ambiguity model of Klibanoff, Marinacci,
and Mukerji. For this model, we use our statistical detection calculations to guide how to adjust contributions to entropy coming from hidden states as we take a continuous time limit.
June 28, 2010
Keynote address at ICORES10, Prague, June 28, 2010 given by Professor Stephen Stigler of the University of Chicago.
May 2010
This paper with Lars Hansen corrects typos that appeared in the version that was published in 2007 in the Journal of Economic Theory. The corrections appear in blue.
May 2010
This paper with Lars Hansen corrects typos that appeared in the version that was published in 2005 in the Journal of Economic Theory. The corrections appear in blue.
with Martin Ellison
July 2010
In this thoroughly revised version, we defend the forecasting performance of the FOMC from the recent criticism of Christina and David Romer. One argument is just to graph the data and note that the
discrepancies spotted by Romer and Romer are small, expecially after Greenspan took over from Volcker. We spend most of our time on another more sophisticated argument. This argument is that the FOMC
forecasts a worst-case scenario that it uses to design decisions that will work well enough (are robust) despite possible misspecification of its model. Because these FOMC forecasts are not
predictions of what the FOMC expects to occur under its model, it is inappropriate to compare their performance in a horse race against other forecasts. Our interpretation of the FOMC as a robust
policymaker can explain all the findings of the Romers and rationalises differences between FOMC forecasts and forecasts published in the Greenbook by the staff of the Federal Reserve System.
with Lars Peter Hansen
May 2010
This is a survey paper about exponential twisting as a model of model distrust. We feature examples from macroeconomics and finance. The paper is for a handbook of Monetary Economics edited by
Benjamin Friedman and Michael Woodford.
by Anastasios G. Karantounias (with Lars Peter Hansen and Thomas J. Sargent)
October 2009
This paper studies an optimal fiscal policy problem of Lucas and Stokey (1983) but in a situation in which the representative agent's distrust of the probability model for government expenditures
puts model uncertainty premia into history-contingent prices. This gives rise to a motive for expectation management that is absent within rational expectations and a novel incentive for the planner
to smooth the shadow value of the agent's subjective beliefs in order to manipulate the equilibrium price of government debt. Unlike the Lucas and Stokey (1983) model, the optimal allocation, tax
rate, and debt all become history dependent despite complete markets and Markov government expenditures.
with Lars Peter Hansen
January 2009
This paper is a comprehensive overhaul of our earlier paper ``Fragile Beliefs and the Price of Model Uncertainty’. A representative consumer uses Bayes' law to learn about parameters and to construct
probabilities with which to perform ongoing model averaging. The arrival of signals induces the consumer to alter his posterior distribution over parameters and models. The consumer copes with
specification doubts by slanting probabilities pessimistically. One of his models puts long-run risks in consumption growth. The pessimistic probabilities slant toward this model and contribute a
counter-cyclical and signal-history-dependent component to prices of risk We use detection error probabilities to discipline risk-sensitivity parameters.
with Lars Peter Hansen
December 2008
We use two risk-sensitivity operators to construct the stochastic discount factor for a representative consumer who evaluates consumption streams in light of parameter estimation and model selection
problems that present long run risks. The arrival of signals induces the consumer to alter his posterior distribution over models and parameters. The consumer expresses his doubts about model
specifications and priors by slanting them in directions that are pessimistic in terms of value functions. His twistings over model probabilities give rise to time-varying model uncertainty premia
that contribute a volatile time-varying component to the marketprice of model uncertainty.
with Lars Peter Hansen and Ricardo Mayer
October 30, 2008
For linear quadratic Gaussian problems, this paper uses two risk-sensitivity operators defined by Hansen and Sargent to construct decision rules that are robust to misspecifications of (1) transition
dynamics for possibly hidden state variables, and (2) a probability density over hidden states induced by Bayes' law. Duality of risk-sensitivity to the `multiplier preferences’ min-max expected
utility theory of Hansen and Sargent allows us to compute risk-sensitivity operators by solving two-player zero-sum games. That the approximating model is a Gaussian joint probability density over
sequences of signals and states gives important computational simplifications. We exploit a modified certainty equivalence principle to solve four games that differ in continuation value functions
and discounting of time t increments to entropy. In Games I, II, and III, the minimizing players' worst-case densities over hidden states are time inconsistent, while Game IV is an LQG version of a
game of \citet{hs2005a} that builds in time consistency. We describe how detection error probabilities can be used to calibrate the risk-sensitivity parameters that govern fear of model
misspecification in hidden Markov models.
with Timothy Cogley, Riccardo Colacito, and Lars Peter Hansen
January 11, 2008
We study how a concern for robustness modifies a policy maker's incentive to experiment. A policy maker has a prior over two submodels of inflation-unemployment dynamics. One submodel implies an
exploitable trade-off, the other does not. Bayes' law gives the policy maker an incentive to experiment. The policy maker fears that both submodels and his prior probability distribution over them
are misspecified. We compute decision rules that are robust to misspecifications of each submodel and of a prior distribution over submodels. We compare robust rules to ones that Cogley, Colacito,
and Sargent (2007) computed assuming that the models and the prior distribution are correctly specified. We explain how the policy maker's desires to protect against misspecifications of the
submodels, on the one hand, and misspecifications of the prior over them, on the other, have different effects on the decision rule.
with Lars Peter Hansen
November 22, 2006
Responding to criticisms of Larry Epstein and his coauthors, this paper describes senses in which various representations of preferences from robust control are or are not time consistent. We argue
that the senses in which preferences are not time consistent do not hinder applications.
with Francisco Barillas and Lars Peter Hansen
July 2008
Reinterpreting most of the market price of risk as a market price of model uncertainty eradicates the link between asset prices and measures of the welfare costs of aggregate fluctuations that were
proposed by Hansen, Sargent, and Tallarini (1999), Tallarini (2000), and Alvarez and Jermann (2004). Market prices of model uncertainty contain informationabout compensation for removing model
uncertainty, not the consumption fluctuations that Lucas (1987, 2003) studied. By using the preference specification of Kreps and Porteus with intertemporal elasticity of one put the mean and
standard deviation of the stochastic discount factor close to the bounds of Hansen and Jagannathan (1991), but only for very high values of a risk aversion parameter, and he needed a substantially
higher risk aversion parameter for a trend-stationary model of consumption than for a random walk model. A max-min expected utility theory lets us reinterpret Tallarini's risk-aversion parameter as
measuring a representative consumer's doubts about the model specification. We use model detection error probabilities instead of risk-aversion experiments to calibrate that parameter. Values of
detection error probabilities that imply a somewhat but not overly cautious representative consumer give market prices of model uncertainty that approach the Hansen-Jagannathan bounds. Fixed
detection error probabilities give rise to virtually identical asset prices for Tallarini's two models of consumption growth. We calculate the welfare costs of removing model uncertainty and find
that they are large.
with Lars Peter Hansen
June 2005
In a Markov decision problem with hidden state variables, a decision maker expresses fears that his model is misspecified by surrounding it with a set of alternatives that are nearby as measured by
their expected log likelihood ratios (entropies).Sets of martingales represent alternative models. Within a two-player zero-sum game under commitment, a minimizing player chooses a martingale at time
$0$.Probability distributions that solve distorted filtering problems serve as state variables, much like the posterior in problems without concerns about misspecification. We state conditions under
which an equilibrium of the zero-sum game with commitment has a recursive representation that can be cast in terms of two risk-sensitivity operators. We apply our results to a linear quadratic
example that makes contact with the analysis of Basar and Bernhard (1995) and Whittle (1990).
with Lars Peter Hansen
May 2006
In a Markov decision problem with hidden state variables, a posterior distribution serves as a state variable and Bayes' law under the approximating model gives its law of motion. A decision maker
expresses fear that his model is misspecified by surrounding it with a set of alternatives that are nearby as measured by their expected log likelihood ratios (entropies). Sets of martingales
represent alternative models. A decision maker constructs a sequence of robust decision rules by pretending that there is a sequence of minimizing players who choose increments to a martingale from
within this set. One risk sensitivity operator induces robustness to perturbations of the approximating model conditioned on the hidden state. Another risk sensitivity operator induces robustness
with respect to a prior distribution over the hidden state. We thereby extend the approach of Hansen and Sargent (IEEE Transactions on Automatic Control, 1995) to problems that contain hidden states.
We study linear quadratic examples.
with Lars Peter Hansen, Gauhar Turmuhambetova, and Noah Williams
September 2005
This paper integrates a variety of results in robust control theory in the context of an approximating model that is a diffusion. The paper is partly a response to some criticisms of Anderson,
Hansen, and Sargent (see below) by Chen and Epstein. It formulates two robust control problems -- a multiplier problem from the literature on robust control and a constraint formulation that looks
like Gilboa-Schmeidler's min-max expected utility theory. The paper studies the connection between the two problems, states an observational equivalence result for them, links both problems to `risk
sensitive' optimal control, and discusses time consistency of the preference orderings associated with the two robust control problems.
with Lars Hansen
Prepared for a Fed conference in honor of Dale Henderson, Richard Porter, and Peter Tinsley
The paper reviews how the structure of the Simon-Theil certainty equivalence result extends to models that incorporate a preference for robustness to model uncertainty. A model of precautionary
savings is used an example.
with Evan Anderson and Lars Hansen
April 2003
This paper supersedes `Risk and Robustness in Equilibrium’, also on this web page. A representative agent fears that his model, a continuous time Markov process with jump and diffusion components,is
misspecified and therefore uses robust control theory to make decisions. Under the decision maker's approximating model, that cautious behavior puts adjustments for model misspecification into market
prices for risk factors. We use a statistical theory of detection to quantify how much model misspecification the decision maker should fear, given his historical data record. A semigroup is a
collection of objects connected by something like the law of iterated expectations. The law of iterated expectations defines the semigroup for a Markov process, while similar laws define other
semigroups. Related semigroups describe (1) an approximating model; (2) a model misspecification adjustment to the continuation value in the decision maker's Bellman equation;(3) asset prices; and
(4) the behavior of the model detection statistics that we use to calibrate how much robustness the decision maker prefers. Semigroups 2, 3, and 4 establish a tight link between the market price of
uncertainty and a bound on the error in statistically discriminating between an approximating and a worst case model.
with Lars Hansen
November 19, 2002
This is a comprehensive revision of an earlier paper with the same title. We describe an equilibrium concept for models with multiple agents who, as under rational expectations share a common model,
but all of whom doubt their model, unlike rational expectations. Agents all fear model misspecification and perform their own worst-case analyses to construct robust decision rules. Although the
agents share the approximating models, their differing preferences cause their worst-case models to diverge. We show how to compute Stackelberg (or Ramsey) plans where both leaders and followers fear
model misspecification.
with Lars Peter Hansen
January 22, 2001
Paper prepared for presentation at the meetings of the American Economic Association in New Orleans , Jan 5, 2001 . This paper is a summary of results presented in more detail in Hansen, Sargent,
Turmuhambetova, and Williams (2001) -- see below. That paper formulates two robust control problems -- a multiplier problem from the literature on robust control and a constraint formulation that
looks like Gilboa-Schmeidler's min-max expected utility theory.
with Marco Cagetti, Lars Peter Hansen, and Noah Williams
January 2001
A continuous time asset pricing model with robust nonlinear filtering of a hidden Markov state.
with Lars Peter Hansen
December 2000
The text of Sargent's Frisch lecture at the 2000 World Congress of the Econometric Society; also the basis for Sargent's plenary lecture at the Society for Economic Dynamics in Costa Rica, June 2000.
with Lars Peter Hansen and Neng Wang
August 25, 2000
This paper reformulate Hansen, Sargent, and Tallarini's 1999 (RESTud) model by concealing elements of the state from the planner and the agents, forcing them to filter. The paper describes how
jointly to do robust filtering and control, then computes the appropriate `market prices of Knightian uncertainty.' Detection error probabilities are used to discipline the one free parameter that
robust decision making adds to the standard rational expectations paradigm.
Risk and Robustness in General Equilibrium
with Evan Anderson and Lars Hansen
March 1998
This paper describes a preference for robust decision rules in discrete time and continuous time models. The paper extends earlier work of Hansen, Sargent, and Tallarini in several ways. It permits
non-linear-quadratic Gaussian set ups. It develops links between asset prices and preferences for robustness. It links premia in asset prices from Knightian uncertainty to detection error statistics
for discriminating between models.
January 1998
Discussion of Laurence Ball
with Lars Peter Hansen and Thomas Tallarini
April 1997
|
{"url":"https://files.nyu.edu/ts43/public/robustness.html","timestamp":"2014-04-16T04:54:56Z","content_type":null,"content_length":"29544","record_id":"<urn:uuid:f4f1b017-1b4a-4227-809c-3e99be15c5e5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
|
First-Order Deformations
by hilbertthm90 5 Comments
Today we actually get to some deformations. Let ${X_0}$ be a scheme of finite type over ${k}$. First, we’ll be working with the “ring of dual numbers” a lot, so we’ll just define it to be ${R=k[\
epsilon]/(\epsilon^2)}$. Let’s recall a few useful properties first.
To give a map in ${k}$-schemes: Spec ${R\rightarrow X_0}$ is equivalent specifying a ${k}$-point and an element of the Zariski tangent space at that point.
Flatness is another important concept. An ${R}$-module, ${M}$, is flat if and only if the map ${M/\epsilon M\rightarrow M}$ by multiplication by ${\epsilon}$ is an injective. The proof is quite
straightforward: Consider the exact sequence ${0\rightarrow k\stackrel{\epsilon}{\rightarrow} R\rightarrow k \rightarrow 0}$. If ${M}$ flat, tensor this with ${M}$ and you get ${0\rightarrow M/\
epsilon M\stackrel{\epsilon}{\rightarrow} M\rightarrow M/\epsilon M\rightarrow 0}$, and hence injectivity. If the map is injective, then ${Tor^1(M,k)=0}$, so ${M}$ is flat.
Let’s move on now that those are out of the way. What exactly should these deformations be? Let’s say we have a nice family of schemes. This means that there is a map ${f:X\rightarrow T}$, and nice
in our case means flat. ${T}$ parametrizes this family, since the fiber over any point gives a scheme ${X_t}$. There is a special fiber ${X_0}$, and deformations of ${X_0}$ are the schemes that occur
in this family in a neighborhood of ${X_0}$. (Recall, this is still a “touchy-feely” idea of what a deformation is, don’t take this to be the definition).
For instance, you could parametrize some curves over ${\mathbb{A}^1_k}$ by ${Spec\left(k[x,y,t]/(xy-t)\right)\rightarrow Spec(k[t])}$. We could think about this family as hyperbolas $X_t=\{(x,y): xy=
t\}$. As $t$ approaches 0, the hyperbolas degenerate into the coordinate axes. Deformations in this family of the special fiber ${X_0}$ are all irreducible, yet the special fiber is reducible.
Now for what we really care about in this post. A first-order deformation of ${X_0}$ is a scheme ${X'}$, flat over ${R}$ such that ${X'\otimes_R k\simeq X_0}$. Note the terminology comes from the
fact that the family is over Spec ${R}$ which remembers “tangent” information. A second-order deformation would keep track of information via a family over Spec ${k[\epsilon]/(\epsilon^3)}$, etc
(anyone see a completion happening in the near future?).
A first-order deformation ${X'}$ is something that completes the fiber diagram:
${\begin{matrix} X_0 & \rightarrow & X'\\ \downarrow & & \downarrow f\\ Spec k & \rightarrow & Spec R \end{matrix}}$
where ${f}$ is flat. Maybe a notation will be useful later: Def(${X_0/k}$).
Let’s use the last couple of posts to classify the first-order deformations of a non-singular scheme over an algebraically closed field. The claim is that Def(${X/k}$) (where these are up to
isomorphism) is in bijective correspondence with ${H^1(X, \mathcal{T}_X)}$. So by the last post it is enough to show bijective correspondence with the infinitesimal extensions of ${X}$ by ${\mathcal
The proof is just adapting what we said at the start of the post. If ${X'\in Def(X/k)}$, then by flatness we can tensor with ${0\rightarrow k\rightarrow R\rightarrow k\rightarrow 0}$ and it remains
exact: ${0\rightarrow \mathcal{O}_X\rightarrow \mathcal{O}_{X'}\rightarrow \mathcal{O}_X}$. Thus ${X'}$ is an infinitesimal extension of ${X}$ by ${\mathcal{O}_X}$. Conversely, any such extension is
flat and hence a first-order deformation.
5 thoughts on “First-Order Deformations”
1. Is Hartshorne your source for this material? I vaguely remember seeing it in the exercises some time back, but I wasn’t able to solve it then.
Also, do we need $M$ to be finitely generated for the statement about flatness over the ring of dual numbers to hold? The proof I know of the statement that $Tor_1(M,k)=0$ implies flatness
requires finite generation (via Nakayama’s lemma), but it is possible things are different for this particular ring (or that the result I know is weaker than necessary).
2. hilbertthm90
September 27, 2010 at 9:05 am
The past two posts were a set of three exercises that I solved. This post was extrapolated from the flatness section of III in Hartshorne. Overall, I’ve been glancing through Hartshorne’s GTM on
Deformation Theory and Ravi Vakil’s notes to get some more of an overview of the subject.
The more general statement of the Tor result is: If M is any module over a Noetherian ring, then it is flat if and only if for every prime $p\subset A$, $Tor_1^A(M, A/p)=0$.
A quick sketch: $Tor_1(M, N)=0$ for all N if and only if the functor $- \otimes_A M$ is exact and hence $M$ flat. But Tor commutes with limits, so $Tor_1(M, N)=0$ for all N if and only if $Tor_1
(M,N)=0$ for all finitely generated N. But finitely generated modules have a filtration with quotients of the form $A/p_i$ for some primes $p_i$. So take the long exact sequence associated to $0\
to p_i \to A \to A/p_i \to 0$ to see that if $Tor_1(M, p)=0$ for all primes, then $Tor_1(M, N)=0$ for all finitely generated modules.
3. That’s a nice argument; thanks for explaining.
Leave a Reply Cancel reply
|
{"url":"http://hilbertthm90.wordpress.com/2010/09/26/first-order-deformations/","timestamp":"2014-04-16T22:05:56Z","content_type":null,"content_length":"93553","record_id":"<urn:uuid:16263753-6578-490c-8c14-43f6bebac3a7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Expand all Collapse all Results 1 - 5 of 5
1. CJM Online first
Addendum to ``Nearly Countable Dense Homogeneous Spaces'
This paper provides an addendum to M. Hrušák and J. van Mill ``Nearly countable dense homogeneous spaces.'' Canad. J. Math., published online 2013-03-08 http://dx.doi.org/10.4153/CJM-2013-006-8.
Keywords:countable dense homogeneous, nearly countable dense homogeneous, Effros Theorem, Vaught's conjecture
Categories:54H05, 03E15, 54E50
2. CJM Online first
Non-tame Mice from Tame Failures of the Unique Branch Hypothesis
In this paper, we show that the failure of the unique branch hypothesis (UBH) for tame trees implies that in some homogenous generic extension of $V$ there is a transitive model $M$ containing $Ord
\cup \mathbb{R}$ such that $M\vDash \mathsf{AD}^+ + \Theta \gt \theta_0$. In particular, this implies the existence (in $V$) of a non-tame mouse. The results of this paper significantly extend J.
R. Steel's earlier results for tame trees.
Keywords:mouse, inner model theory, descriptive set theory, hod mouse, core model induction, UBH
Categories:03E15, 03E45, 03E60
3. CJM Online first
Nearly Countable Dense Homogeneous Spaces
We study separable metric spaces with few types of countable dense sets. We present a structure theorem for locally compact spaces having precisely $n$ types of countable dense sets: such a space
contains a subset $S$ of size at most $n{-}1$ such that $S$ is invariant under all homeomorphisms of $X$ and $X\setminus S$ is countable dense homogeneous. We prove that every Borel space having
fewer than $\mathfrak{c}$ types of countable dense sets is Polish. The natural question of whether every Polish space has either countably many or $\mathfrak{c}$ many types of countable dense sets,
is shown to be closely related to Topological Vaught's Conjecture.
Keywords:countable dense homogeneous, nearly countable dense homogeneous, Effros Theorem, Vaught's conjecture
Categories:54H05, 03E15, 54E50
4. CJM 2012 (vol 64 pp. 1378)
On Weakly Tight Families
Using ideas from Shelah's recent proof that a completely separable maximal almost disjoint family exists when $\mathfrak{c} \lt {\aleph}_{\omega}$, we construct a weakly tight family under the
hypothesis $\mathfrak{s} \leq \mathfrak{b} \lt {\aleph}_{\omega}$. The case when $\mathfrak{s} \lt \mathfrak{b}$ is handled in $\mathrm{ZFC}$ and does not require $\mathfrak{b} \lt {\aleph}_{\
omega}$, while an additional PCF type hypothesis, which holds when $\mathfrak{b} \lt {\aleph}_{\omega}$ is used to treat the case $\mathfrak{s} = \mathfrak{b}$. The notion of a weakly tight family
is a natural weakening of the well studied notion of a Cohen indestructible maximal almost disjoint family. It was introduced by Hrušák and GarcÃa Ferreira, who applied it to the Katétov order
on almost disjoint families.
Keywords:maximal almost disjoint family, cardinal invariants
Categories:03E17, 03E15, 03E35, 03E40, 03E05, 03E50, 03E65
5. CJM 1999 (vol 51 pp. 309)
Symmetric sequence subspaces of $C(\alpha)$, II
If $\alpha$ is an ordinal, then the space of all ordinals less than or equal to $\alpha$ is a compact Hausdorff space when endowed with the order topology. Let $C(\alpha)$ be the space of all
continuous real-valued functions defined on the ordinal interval $[0, \alpha]$. We characterize the symmetric sequence spaces which embed into $C(\alpha)$ for some countable ordinal $\alpha$. A
hierarchy $(E_\alpha)$ of symmetric sequence spaces is constructed so that, for each countable ordinal $\alpha$, $E_\alpha$ embeds into $C(\omega^{\omega^\alpha})$, but does not embed into $C(\
omega^{\omega^\beta})$ for any $\beta < \alpha$.
Categories:03E13, 03E15, 46B03, 46B45, 46E15, 54G12
|
{"url":"http://cms.math.ca/cjm/msc/03E15?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-19T04:37:32Z","content_type":null,"content_length":"32912","record_id":"<urn:uuid:c5314916-65ab-4ce3-bba5-045e4db25b8b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
4.5.1 Search, Insert, Delete in Bottom-up Splaying
Next: 4.6 Amortized Algorithm Analysis Up: 4.5 Splay Trees Previous: 4.5 Splay Trees
Search (i, t)
If item i is in tree t, return a pointer to the node containing i; otherwise return a pointer to the null node.
• Search down the root of t, looking for i
• If the search is successful and we reach a node x containing i, we complete the search by splaying at x and returning a pointer to x
• If the search is unsuccessful, i.e., we reach the null node, we splay at the last non-null node reached during the search and return a pointer to null.
• If the tree is empty, we omit any splaying operation.
Example of an unsuccessful search: See Figure 4.23.
Insert (i, t)
• Search for i. If the search is successful then splay at the node containing i.
• If the search is unsuccessful, replace the pointer to null reached during the search by a pointer to a new node x to contain i and splay the tree at x
For an example, See Figure 4.24.
Delete (i, t)
• Search for i. If the search is unsuccessful, splay at the last non-null node encountered during search.
• If the search is successful, let x be the node containing i. Assume x is not the root and let y be the parent of x. Replace x by an appropriate descendent of y in the usual fashion and then splay
at y.
For an example, see Figure 4.25.
Next: 4.6 Amortized Algorithm Analysis Up: 4.5 Splay Trees Previous: 4.5 Splay Trees eEL,CSA_Dept,IISc,Bangalore
|
{"url":"http://lcm.csa.iisc.ernet.in/dsa/node94.html","timestamp":"2014-04-17T06:43:33Z","content_type":null,"content_length":"6110","record_id":"<urn:uuid:a7defbea-9a20-4a8e-88b1-d9d7b3945a25>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Conserved quantities in the Doran Metric?
I've been doing some amateur simulations of particle trajectories in a few well-known metrics (using GNU Octave and Maxima), and in the case of the Schwarzschild and Gullstrand-Painleve metrics I
have the ability to check my results using freely available equations for conservation of energy and angular momentum.
In the case of the Doran metric however, the geodesic equations are far messier(!), but I believe I now have a "correct" simulation. I would like to check this also, but have not seen any reference
to equations for conserved quantities for this metric.
Has anyone here looked into this, or does anyone know any useful links, or are there any computer algebra wizards that can help?
|
{"url":"http://www.physicsforums.com/showpost.php?p=3802904&postcount=1","timestamp":"2014-04-19T04:47:34Z","content_type":null,"content_length":"9033","record_id":"<urn:uuid:8aab2583-404a-45fa-adf5-2cd42a375239>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exact and Approximate
Posted Wednesday, January 04, 2012 5:39 AM
SSCommitted Frank Hamersley (1/4/2012)How weird stuff like this survives in a commercial offering beggars belief (well mine if no-one elses)!
Group: General I'm with you! It baffled me when I first came across it, and it baffles me still.
Forum Members
Last Login:
April 17, 2014
7:26 AM
Points: 1,658,
Visits: 6,002
Posted Wednesday, January 04, 2012 5:47 AM
Frank Hamersley (1/4/2012)Whomever the bright sparks were that decided an arbitary 6 digit minimum is to be forced on the resultant without any other consideration except adding up
the number of significant digits on the input side....I am still gobsmacked that anyone would think this is viable. Slack in my book.
Two points: first DECIMAL(77,40) into 38 precision just won't go. Second, a strongly-typed language requires a type that has a defined precision and scale before the computation
occurs. As far as the decision to choose 6 is concerned, here's a quote from the SQL Server Programmability Team:
Group: General
Forum Members ...we try to avoid truncating the integral part of the value by reducing the scale (thus truncating the decimal part of the value instead). How much scale should be sacrificed? There
Last Login: is no right answer. If we preserve too much, and the result of the multiplication of large numbers will be way off. If we preserve too little, multiplication of small numbers becomes
Yesterday @ an issue.
5:09 PM
Points: http://blogs.msdn.com/b/sqlprogrammability/archive/2006/03/29/564110.aspx
Visits: 10,932
Paul White
SQL Server MVP
Posted Wednesday, January 04, 2012 9:47 AM
Frank Hamersley (1/4/2012)How weird stuff like this survives in a commercial offering beggars belief (well mine if no-one elses)!
SSCrazy Eights My implicit expectation - formed in earlier days (aka Fortran IV and CDC ASM albeit with some gray cells "preserved" by vitamin B) - that whilst decimal precision would not
neccesarily be increased by multiplication ... it would _never_ be diminished.
Whomever the bright sparks were that decided an arbitary 6 digit minimum is to be forced on the resultant without any other consideration except adding up the number of significant
Group: General digits on the input side....I am still gobsmacked that anyone would think this is viable. Slack in my book.
Forum Members
Last Login: I guess I am living in earlier days where the computation unit used accumulators that greatly exceeded the accuracy of the inputs for precisely this reason - and this has not been
Today @ 9:56 encoded in SQL Server. That said I accept no apologists - as how hard would it be to lock in the decimals and trap any digits overflow occuring (in the significant digits)?
AM </rant>
Points: 8,288,
Visits: 8,739
I too regard it as crazy nonsense, but there are some better ways of addressing it than the one you suggest:
(1) for multiplication, compute exact result (allowing up to 77 digits in the result); eliminate leading zeroes; if integer part of the result won't fit in 28 digits, throw an error;
if it will, and there are fewer than 38 digits including fractional part, return exact result; if there are more than 38 digits, round the fractional part to get down to 38 and if
this results in 0 throw an underflow error. Probably "lost significance" errors should be introduced for cases where the integer part is 0 and such a long string of initial zeroes is
required after the decimal point that rounding causes a significant proportional error. For division, calculate to 38 significant digits, detecting whether this includes the whole of
the integer part (if it doesn't, throw an overflow error) or there are 38 zeroes immediately after the decimal point (if so throw an underflow error). This is rather like your
solution but avoids throwing an error when trivial rounding is needed to fit into 38 digits. Of course some programs may rely on the bizarre rounding behaviour of the current numeric
type(s) so there would have to be an option to switch that behavious on (a per connection option, not a global one), but of course that option should be deprecated immediately.
(2) throw away the whole numeric nonsense and introduce base 10 floating point (mantissa and exponent are both binary, but the exponent indicates a power of 10 not of 2) as per the
version of the IEEE floating point standard ratified in 2008 as its replacement. Of course this only works if all the hardware is going to support IEEE 754-2008, or there is software
support of that standard whenever the hardware doesn't support it. It also means that SQL has to wake up and decide to treat the exception conditions (including lost significance
error as well as overflow and underflow), infinities, and NaN of the standard instead of just ignoring them (and the platform below the RDBMS's data engine has to allow it to do so).
So it probably can't happen any time soon, which is a great pity. In any case, converting to it could potentially make certain programs which rely on "rounding as you go" (usually
with the money datatype, rather than with the numeric type on which it is based) stop working {because, for example, they rely on inequalities like (10.00/3)*5.00 <> (10.00*5.00)/
3.00} unless they were rewritten to do any "premature" rounding they want explicitly (or we could have a "rounded decimal float" type, which used decimal float format but did the
extra rounding behind the scenes); although I would like to think that no-one writes code like that I suspect people probably do.
(3) introduce a binary floating point based on the binary128 format of IEEE 754-2008 and get rid of the numeric nonsense, without initially introducing proper exception, NaN, and
infinity handling. This would vastly reduce the representation error caused in converting from decimal string format to binary floating point format, and would be vastly less
development that 2 above. As with 2, converting to it from exact numerics might involve work to reproduce the strange rounding behaviour of numeric types for sufficiently bizarre
programs. This should of course be seen as a first step on the road to 2.
In everything above, "get rid of" should of course be read as "deprecate for a couple of releases and then remove support".
Posted Wednesday, January 04, 2012 10:17 AM
SQL Kiwi (1/4/2012)Two points: first DECIMAL(77,40) into 38 precision just won't go.
SSCrazy Eights Clearly not.
Second, a strongly-typed language requires a type that has a defined precision and scale before the computation occurs.
Group: General
Forum Members There I disagree with you on two counts. Firstly, strong typing no requirement that the precision and scale be fixed before computation occurs (indeed all the languages I know that
Last Login: impose such a requirement are at best weakly typed). Secondly, you appear to be claiming that SQL is strongly typed.
Today @ 9:56
AM As far as the decision to choose 6 is concerned, here's a quote from the SQL Server Programmability Team:
Points: 8,288,
Visits: 8,739 ...we try to avoid truncating the integral part of the value by reducing the scale (thus truncating the decimal part of the value instead). How much scale should be sacrificed? There
is no right answer. If we preserve too much, and the result of the multiplication of large numbers will be way off. If we preserve too little, multiplication of small numbers becomes
an issue.http://blogs.msdn.com/b/sqlprogrammability/archive/2006/03/29/564110.aspx
Well, they got that wrong. If they ever truncate the integral part of the value instead of raising an error, which seems to me to be implied by their statement that preserving too
much scale makes the multiplication of large numbers produce way off results, the have a hopelessly broken type. If they think that they have to determine the scale before carrying
out computation, they are hopelesly deluded; and if they don't think that, what is the basis for picking a fixed scale at all in the cases where the precision as calculated by their
rules (the only basis I can think of is a decision that doing it right would be too much work)?
In previous discussions on this topic we've noticed that with the utterly bizarre rules we have here it's possible to obtain an overflow error by multiplying a numeric value by 1, and
also by dividing a numeric value by 1. We can do this even if the 1 we are using is typed decimal(p,0). Surely no one can imagine that such bizarre results are remotely acceptable?
edit: If anyone wants a good survey of modern type theory, there's an excellent paper by Luca Cardelli and Peter Wegner which, although it was written 26 years ago, is still very
highly regarded. You can get it from Luca's website, PDF at either here for A4 format or here for American Letter format.
Posted Wednesday, January 04, 2012 10:24 AM
SSC Rookie SQL Kiwi (1/3/2012)
steven.malone (1/3/2012)Very interesting.
Declaring the decimal variables as DECIMAL(35,20) gives the correct result.
Group: General But declaring the decimal variables as DECIMAL(35,19) gives the rounded result.
Forum Members
Last Login: Two DEC(35,20) multiplied gives a result of (71,40). Precision 71 exceeds the available 38 by 33, so scale is reduced by 33 to 7. Result is DEC(38,7) and the result is correct.
March 26, 2014 Two DEC(35,19) multiplied gives a result of (71,38). Precision 71 exceeds the available 38 by 33, so scale is reduced by 33 to 5. However, minimum scale is 6, so the result is DEC
2:52 PM (38,6) with a risk of very large values causing an error, and the result is rounded to 6 decimal places.
Points: 46,
Visits: 87
I need a clarification here.
Microsoft article states that the minimum scale of a decimal can be 0.
So I don't get the point where the minimum scale is defined as 6
Posted Wednesday, January 04, 2012 10:35 AM
very good question!!!
thanks Paul!!!
Ten Centuries
DBA - SQL Server 2008
MCITP | MCTS
Group: General
Forum Members remember is live or suffer twice!
Last Login:
Tuesday, April
15, 2014 2:25
Points: 1,253,
Visits: 13,546
Posted Wednesday, January 04, 2012 10:37 AM
prudhviraj.sql (1/4/2012)@Paul
SSCrazy Eights I need a clarification here.
Microsoft article states that the minimum scale of a decimal can be 0.
Group: General http://msdn.microsoft.com/en-us/library/ms187746.aspx
Forum Members
Last Login: So I don't get the point where the minimum scale is defined as 6
Today @ 9:56
Points: 8,288, 6 isn't the minimum scale of a numeric type, it is the minimum scale of the type of a numeric value which is the direct result of the multiplication of two numeric values whose scales
Visits: 8,739 add up to 6 or more. Of course if the result is placed immediately into a variable or a column with a predefined type you never see the type of the direct result. It's also the
minimum scale of the type of a numeric which is the direct result of a division where the precision of the divisor plus the scale of the dividend is greater than 4 and also of the
type of teh direct result of an addition or subtraction or modulus where either of the two scales of the things being added/subtracted/remaindered is greater than 5.
See http://msdn.microsoft.com/en-us/library/ms190476.aspx
Posted Wednesday, January 04, 2012 6:59 PM
SSC Rookie Toreador (1/4/2012)
... It baffled me when I first came across it, and it baffles me still.
Group: General
Forum Members I put it down to expediency prevailing over sensibility.
Last Login:
Sunday, March It was seen as too hard to build an ALU (in code) to support the SQL numeric data type that manages overflow and underflow in a consistent and intuative way.
16, 2014 4:41
AM They were probably concerned about performance first (accuracy second) and so limited it to simple arithmetic on the total number of significant digits - with a minima of 6 decimals
Points: 26, thown in "because we reckon no-one will care/notice/perhaps it will get fixed by a later release".
Visits: 102
Posted Wednesday, January 04, 2012 7:44 PM
SSC Rookie Just to jazz things up ... I have refactored the code to make it work on another SQL dialect as follows ...
DECLARE @n1 DECIMAL(38,20)
Group: General , @n2 DECIMAL(38,20)
Forum Members , @n3 REAL
Last Login: , @n4 REAL
Sunday, March , @n5 DOUBLE PRECISION
16, 2014 4:41 , @n6 DOUBLE PRECISION
Points: 26, SELECT @n1 = 123.4567
Visits: 102 , @n2 = 0.001
SELECT @n3 = @n1
, @n4 = @n2
, @n5 = @n1
, @n6 = @n2
SELECT n_decimal = CONVERT(VARCHAR, @n1 * @n2)
, n_real = CONVERT(VARCHAR, @n3 * @n4)
, n_double = CONVERT(VARCHAR, @n5 * @n6)
.... which produces ....
(1 row affected)
(1 row affected)
n_decimal n_real n_double
------------------------------ ------------------------------ ------------------------------
0.1234567000000000000000000000 .12345670908689499 .1234567
(1 row affected)
... which is what I expected and suggests that a competent solution is possible if you care enough!
FWIW if I get a chance I will run it on some other (mainstream) platforms at hand.
Posted Wednesday, January 04, 2012 7:56 PM
The various schemes for encoding numbers all have advantages and disadvantages. SQL Server uses a decimal type that has fixed precision and scale. All expressions have a well-defined
type, and for SQL Server that means fixed precision and scale if that type is decimal. For example, the computed column in the following table has a type of DECIMAL(19,8):
CREATE TABLE dbo.Example
Group: General col1 DECIMAL(9,4) NULL,
Forum Members col2 DECIMAL(9,4) NULL,
Last Login: col3 AS col1 * col2
Yesterday @ )
5:09 PM GO
Points: EXECUTE sys.sp_columns
11,168, @table_owner = N'dbo',
Visits: 10,932 @table_name = N'Example',
@column_name = N'col3'
There are many quirks to SQL Server, including the way it handles rounding, truncation, and conversions. In many cases these quirks are preserved to avoid breaking existing
applications. That's a purely practical matter, and doesn't imply that everyone is happy about the state of affairs, or wouldn't approach things differently if done again. On that
note, the proper place to suggest improvements or alternatives is Connect. By and large, the people that make decisions about future product directions do not carefully read QotD
The point of this QotD is very much to emphasise that using excessive precision or scale can have unintended consequences. Very few real-world uses would require anything like the
DECIMAL(38,20) types specified in the question. Using appropriate types (which in SQL Server can include both precision and scale) is important.
As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.
Paul White
SQL Server MVP
|
{"url":"http://www.sqlservercentral.com/Forums/Topic1229027-2669-4.aspx","timestamp":"2014-04-20T17:51:38Z","content_type":null,"content_length":"188563","record_id":"<urn:uuid:1901e1e2-f437-47e7-8d08-714ec4b2cfed>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
August 31st 2007, 11:38 AM
If the revenue for selling x units of a product is given by R=x(25-1/2x), find the number of units sold if the revenue is $294.50
August 31st 2007, 11:43 AM
August 31st 2007, 11:51 AM
actually i got stuck...
we get x^2-50x+589=0
now solving this quadratic equation we get x1=19 and x2=31
now hat should i choose the value of x?
August 31st 2007, 11:59 AM
|
{"url":"http://mathhelpforum.com/business-math/18274-revenue-print.html","timestamp":"2014-04-17T14:48:34Z","content_type":null,"content_length":"5379","record_id":"<urn:uuid:8c465a26-951f-46bc-9bf0-a9e543383d64>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Do hyperKahler manifolds live in quaternionic-Kahler families?
up vote 19 down vote favorite
A geometry question that I thought about more seriously a few years ago... thought it'd be a good first question for MO.
I'm aware that there are a number of Torelli type theorems now proven for compact HyperKahler manifolds. Also, I think that Y. Andre has considered some families of HyperKahler (or holomorphic
symplectic) manifolds in some paper.
But, when I see such a moduli problem studied, the data of a HyperKahler manifold seems to include a preferred complex structure. For example, a HyperKahler manifold is instead viewed as a
holomorphic symplectic manifold. I'm aware of various equivalences, but there are certainly different amounts of data one could choose as part of a moduli problem.
I have never seen families of HyperKahler manifolds, in which the distinction between hyperKahler rotations and other variation is suitably distinguished. Here is what I have in mind, for a
"quaternionic-Kahler family of HyperKahler manifolds:
Fix a quaternionic-Kahler base space $X$, with twistor bundle $Z \rightarrow X$. Thus the fibres $Z_x$ of $Z$ over $X$ are just Riemann spheres $P^1(C)$, and $Z$ has an integrable complex structure.
A family of hyperKahler manifolds over $X$ should be (I think) a fibration of complex manifolds $\pi: E \rightarrow Z$, such that:
1. Each fibre $E_z = \pi^{-1}(z)$ is a hyperKahler manifold $(M_z, J_z)$ with distinguished integrable complex structure $J_z$.
2. For each point $x \in X$, let $Z_x \cong P^1(C)$ be the twistor fibre. Then the family $E_x$ of hyperKahler manifolds with complex structure over $P^1(C)$ should be (isomorphic to) the family $
(M, J_t)$ obtained by fixing a single hyperKahler manifold, and letting the complex structure vary in the $P^1(C)$ of possible complex structures. (I think this is called hyperKahler rotation).
In other words, the actual hyperKahler manifold should only depend on a point in the quaternionic Kahler base space $X$, but the complex structure should "rotate" in the twistor cover $Z$.
This sort of family seems very natural to me. Can any professional geometers make my definition precise, give a reference, or some reason why such families are a bad idea? I'd be happy to see such
families, even for hyperKahler tori (which I was originally interested in!)
dg.differential-geometry ag.algebraic-geometry moduli-spaces complex-geometry
It is rather difficult to write an $\textit{explicit}$ hyper-Kahler metric (for reasons that I won't go into), whereas holomorphic symplectic structure is often manageable and by Calabi –
Yau, in the compact case it guarantees the existence of the h-K metric. In fact, my recollection from 15 years ago is that the only general method was hyper-Kahler reduction (ADHM and
generalizations). It is symmetric in the sense you indicated and you can do it in a family if you like. Is this in the direction that you wanted? – Victor Protsak May 25 '10 at 8:49
I understand that one can find a family of holomorphic symplectic compact manifolds, parameterized by a base space $P^1(C)$, by hyperKahler rotation. But I'm looking for a quaternionic-Kahler base
manifold $X$, such that each point $x$ of this base manifold corresponds to such a $P^1(C)$-family of holomorphic symplectic manifolds (where the $P^1(C)$ is the fibre of the twistor cover $Z_x$).
Can you explain further "you can do it in a family"? In a quaternionic-Kahler family? – Marty May 25 '10 at 15:53
add comment
1 Answer
active oldest votes
What you suggested makes sense. You propose to replace the $P^1$ fibre by the twistor space of an HK manifold M, so that the big total space would not only display separately the complex
structures of M, but allow deformations of M to be parametrized by X. I think the real question is whether there exist sensible examples over a compact QK base like X$=S^4$ in which a
consistent choice of complex structure on the varying HK manifolds is therefore not possible. I am not sure. The problem is that the construction looks a bit unwieldly, and experience
dictates that it is more natural to look for bundles whose fibres are HK. In this sense, your idea is very close to a known (but in some sense simpler) construction that goes under the
heading "Swann bundle" or "C map".
up vote
8 down Let me add two comments in support of your question. First, the concept of a manifold foliated by HK manifolds (like $T^4$ or K3) is very powerful. This is most familiar in work on special
vote holonomy, but here's a more classical construction: the curvature tensor at each point of a Riemannian 4-manifold can be used to construct a singular Kummer surface and an associated K3 (the
intersection of 3 quadrics in $P^5$), but the complex structure is fixed so not twistorial. Second, escaping from quaternions, one sees twistor space fibres in the following situation: each
fibre of the twistor space $SO(2n+1)/U(n)$ parametrizing a.c.s.'s on the sphere $S^{2n}$ can be identified with the twistor space of $S^{2n-2}$!
add comment
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry ag.algebraic-geometry moduli-spaces complex-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/13188/do-hyperkahler-manifolds-live-in-quaternionic-kahler-families/33846","timestamp":"2014-04-16T13:28:30Z","content_type":null,"content_length":"57201","record_id":"<urn:uuid:001b9824-bc68-49fb-aa0d-1d2f93993ba0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Unknown Coordinates
November 12th 2009, 05:47 AM #1
Nov 2009
Unknown Coordinates
Well, I have a problem, We're given 3 points(sorry if i've defined/called it wrong) which is A B and C, which also has each a coordinates, but the problem is, only A and B has, C doesn't have
any, and we're tasked to find it, but I don't know the formula in finding it, so I can't proceed in measuring the distance between AC and BC, only AB.
A(-1, 2)
Could someone please tell me how to find the C coordinates?
NOTE: My professor said it is an equilateral triangle when we're going to measure it.
Well, I have a problem, We're given 3 points(sorry if i've defined/called it wrong) which is A B and C, which also has each a coordinates, but the problem is, only A and B has, C doesn't have
any, and we're tasked to find it, but I don't know the formula in finding it, so I can't proceed in measuring the distance between AC and BC, only AB.
A(-1, 2)
Could someone please tell me how to find the C coordinates?
NOTE: My professor said it is an equilateral triangle when we're going to measure it.
The information that the triangle is equilateral is very important here.
We know that the distance from point A to point B (which is a side of a triangle here) can be calculated using the following formula:
AB = sqrt( (x2-x1)² + (y2-y1)² )
By applying the formula we can calculate the distance between points A(-1,2) and B(4,2)
AB = sqrt ( (-1-4))² + (2-2)² ) = sqrt(25+0) = 5
Since the triangle ABC is equilateral, we know that AB = BC = AC = 5
If we mark the unknown point as C(a,b), we'll get two equations that we can use to determine the coordinates a and b.
Here are a few tips to get you started. I'll be posting the full solution a bit later.
Isn't a,b somehow related to points AB? couldn't really get the equations.
Okay, to continue from where I left:
The points were A(-1,2), B(4,2) and C(a,b)
We'll get two equations
The length of AC
5 = sqrt( (-1 -a)² + (2-b)² )
The length of BC
5= sqrt( (4-a)² + (2-b)² )
From this, we'll get the equation AC = BC
sqrt( (-1 -a)² + (2-b)² ) = sqrt( (4-a)² + (2-b)² )
(-1 -a)² + (2-b)² = (4-a)² + (2-b)²
We're a bit lucky here that the y-coordinate for both A and B is the same,
so we'll notice that the (2-b)² on both sides will cancel each other out, and
we'll be left with a simple equation to calculate a
(-1 -a)² = (4-a)² <=> 1+2a+a² = 16 -8a + a² <=> 10a = 15 <=> a= 3/2
After we've solved a, you can use either one of the equations AC = 5 or BC = 5 to solve b by replacing a with 3/2.
Let's use BC for example:
5 = sqrt( (4-3/2)² + (2-b)² ) <=> 25 = (5/2)² + (2-b)² <=> 25 = 25/4 + 4 -4b + b² <=> b² - 4b - 59/4 = 0
Solve the equation and you'll get b= 1/2 * (4 ± 5 sqrt(3))
This means that there are actually two possible y-coordinates for point C, which makes sense if you think about it geometrically.
I hope this helped!
You could really give the x and y coordinates in C(a,b) any names. It doesn't have to be C(a,b), it could be C(x0,y0), C(u,v) or whatever.
The equation to calculate the distance of two points is really just the Pythagoran theorem c² = a² + b².
A very brilliant mind of yours, though it took me some minutes to understand what you've wrote. Thanks a lot, this really helps.
Just a question bro, how did you manage to get off that b² here(b² - 4b - 59/4 = 0 ) so that you can get the value of b?
Since we have a quadratic equation, the simplest way to solve it is to use the quadratic formula.
The solutions the polynom ax² + bx + c = 0 are
x= ( -b±sqrt(b² - 4ac) ) / 2a
With the equation b² - 4b - 59/4 = 0
a=1 , b=4 and c = 59/4
For more info, see this link:
The Quadratic Formula Explained
You're cooler than I thought. Thanks again bro.
unknown coordinates
posted by dissidia
Given two coordinates find the third assuming you meant to say AB is one side of an equilateral triangle.The answers thus far only define the lenght of a side of the triangle .The solution to
either question can be derived more simply.If the midpoint of an equalateral is 2.5 the side is 5 and the altitude is 2.5 times radial 3 so the latter dimension is used to get the two new values
of y and x is 2.5 in both cases
unknown coordinates
Hi isosky
I do not understand your message. I have supplied a method to find the two unknown coordinates Tell me how i can help you to understand this.Remember that the question is to supply the missing
November 12th 2009, 06:27 AM #2
Junior Member
Nov 2009
November 12th 2009, 06:52 AM #3
Nov 2009
November 12th 2009, 06:55 AM #4
Junior Member
Nov 2009
November 12th 2009, 07:01 AM #5
Junior Member
Nov 2009
November 12th 2009, 07:16 AM #6
Nov 2009
November 12th 2009, 07:37 AM #7
Nov 2009
November 12th 2009, 07:43 AM #8
Junior Member
Nov 2009
November 12th 2009, 07:55 AM #9
Nov 2009
November 12th 2009, 11:17 AM #10
Super Member
Nov 2007
Trumbull Ct
November 17th 2009, 06:11 PM #11
Super Member
Nov 2007
Trumbull Ct
|
{"url":"http://mathhelpforum.com/geometry/114084-unknown-coordinates.html","timestamp":"2014-04-16T19:13:17Z","content_type":null,"content_length":"56385","record_id":"<urn:uuid:f5a00ec9-e76e-4a86-95c4-3cce4255be24>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Methane Symmetry Operations
Methane Symmetry Operations
4.2 Proper rotations
Consider first a proper-rotation point-group operation C[n ], since proper rotations represent a slightly easier case than sense-reversing point-group operations. Following the commonly accepted
prescription [4], we must rotate the vibrational displacement vectors d[i], but leave the equilibrium position labels unchanged. This geometrical operation can be represented algebraically as
where M is the 3 x 3 proper rotation matrix D(C[n]) associated with the operation C[n] in (eq. 1). The index j is chosen for given i such that the equation
involving the equilibrium positions, is satisfied.
New Eulerian angles are chosen such that
is satisfied. It is always possible to do this, since the product of two rotations, e.g., M and S(χ , θ , φ), can always be represented as a third rotation.
R[new] is set equal to R for proper-rotation point-group operations.
Replacing d[i] by (d[i])[new] , etc., on the right-hand side of (eq. 9), we obtain the new expression
This is consistent with a left-hand side obtained by replacing R[i] by +R[j]. Thus, proper rotations correspond to pure permutation operations, with the permuted indices related by equation (eq. 13).
Figure 3 illustrates: (a) an arbitrary instantaneous configuration of the methane molecule, (b) the transformation of vibrational displacement vectors required for the point group operation C[3]
(111), and (c) the transformation of rotational angles required for C[3](111). It can be seen that the complete transformation consists of a rotation of the vibrational displacement vectors through
120° in a left-handed sense about the (1,1,1) direction, followed by a rotation of the molecule-fixed axis system (containing the equilibrium positions and attached displacement vectors) through 120°
in a right-handed sense about the (1,1,1) direction. The final result corresponds to the permutation (132) as defined in Section 3.
|
{"url":"http://physics.nist.gov/Pubs/Methane/chap042.html","timestamp":"2014-04-17T03:49:52Z","content_type":null,"content_length":"4930","record_id":"<urn:uuid:2c03de6a-8d14-43b7-9126-65ae376fb508>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
|
15 ridiculous Japanese Products
Check This Out!
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: 15 ridiculous Japanese Products
Unbelievable how much junk can be produced by the human race! That is why I do math. When I get a creative junk idea it fits on one or two sheets of paper and is easily disposable.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Super Member
Re: 15 ridiculous Japanese Products
the nail-cutter was much more ridiculous than any other items
Jake is Alice's father, Jake is the ________ of Alice's father?
Why is T called island letter?
think, think, think and don't get up with a solution...
Re: 15 ridiculous Japanese Products
More ridiculous then the chin holder? Or the one where she is keeping her hair out of the noodles?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 15 ridiculous Japanese Products
That is where I tricked you! Those are excellent ideas!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 15 ridiculous Japanese Products
Tricked whom? How?
10 Worst Nail Arts
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: 15 ridiculous Japanese Products
I did not mean you.
Those are awful. One worse than the other. The one with the lawn on her fingernail!!!!!!!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 15 ridiculous Japanese Products
I don't think they are really bad though. I think it was phan's post which you answered
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: 15 ridiculous Japanese Products
Which post?
The one with the creepy eye will give me nightmares for the rest of my life.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 15 ridiculous Japanese Products
The one which you deleted.
Żthat was not creepy enough
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: 15 ridiculous Japanese Products
Only my posts are creepier than those nails are.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 15 ridiculous Japanese Products
I would like a nail art showing binary digits
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: 15 ridiculous Japanese Products
Here is a really weird suggestion. How about short clean nails. Not manicured, not painted, no artwork, or hideous symbols.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 15 ridiculous Japanese Products
I think we both have that
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: 15 ridiculous Japanese Products
Hi Agnishom;
I meant instead of those nails.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://mathisfunforum.com/viewtopic.php?pid=276652","timestamp":"2014-04-17T06:42:15Z","content_type":null,"content_length":"25652","record_id":"<urn:uuid:c31d4021-1cd2-4aec-a9ff-a2e6d6d143f6>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Semidirect products and PROPs
up vote 6 down vote favorite
$\newcommand{\p}{\mathcal{P}}$Let G be a group, and let $G_n$ denote the wreath product $G^n \rtimes \Sigma_n$.
There seems to be a notion of a PROP $\p$ where the role of the symmetric groups $\Sigma_n$ is instead played by the groups $G_n$. An example would be $G=SO(N)$ and $\p$ the PROP associated to the
framed little N-disc operad. Here $\p(1,n)$ has an action of $G_1^{op} \times G_n$ -- the copy of $G_1^{op}$ rotates the entire disc "counterclockwise" (i.e. an element $g \in SO(N)$ acts via $g^{-1}
$), and the action of $G_n$ on $\p(1,n)$ is the evident one. The gluing maps are then suitably equivariant under this simultaneous action of $G$ on the input/output legs, so to speak.
Is there a standard name for this kind of PROP/operad? Cf. how one calls an operad where $\Sigma_n$ has been replaced by $B_n$ a braided operad. I would also be happy to hear of any paper where this
kind of gadget has been defined and/or studied.
Addendum, Jan 23 2013. Sorry if it is poor form to bump an inactive question only to advertise your own work, but I just ran across this old question. When I had thought some more about this I
realized eventually that what I described above is really a special case of the notion of a colored PROP/operad, except the collection of colors do not form a set but a category. In this case there
is only one color but its automorphism group is $G$, that is, the collection of colors is exactly the one-object category corresponding to the group $G$.
The notion of a PROP/operad which is colored by a category is defined in my paper http://arxiv.org/abs/1205.0420 . This is actually somewhat more general than what Salvatore and Wahl define, even
when the category in question is a group.
operads terminology
add comment
1 Answer
active oldest votes
Dear Dan,
up vote 3 down You can find this notion defined in the paper "Framed discs operads and the equivariant recognition principle" by Nathalie Wahl and Paolo Salvatore: http://arxiv.org/abs/math/
vote 0106242 . .
add comment
Not the answer you're looking for? Browse other questions tagged operads terminology or ask your own question.
|
{"url":"http://mathoverflow.net/questions/61732/semidirect-products-and-props/66113","timestamp":"2014-04-19T15:03:58Z","content_type":null,"content_length":"50889","record_id":"<urn:uuid:86621ce0-d1a0-4e82-a81b-1f2eedf3f2ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Saint Albans, NY Trigonometry Tutor
Find a Saint Albans, NY Trigonometry Tutor
...I personalize my instruction to the students' interests to make the subject matter more relevant to their point of reference. I am passionate about learning and enjoy when the individuals pick
up on this enthusiasm and revisit the subject matter we are studying with a new perspective.Algebra con...
45 Subjects: including trigonometry, English, chemistry, GED
...It is easy for me to get along with people and help them with any problems they have, in education and even personally if necessary. A little about me, I grew up in New York with great family
and friends. I am responsible, hardworking, caring, and a great listener.
29 Subjects: including trigonometry, chemistry, reading, English
...I can also help with discrete math and computer architecture classes. I have basic level exerience with EAGLE CAD circuit board layout software. I have a Bachelor of Science in mechanical
engineering with a concentration in vehicle engineering and minor in electrical engineering from Cornell Un...
26 Subjects: including trigonometry, chemistry, calculus, geometry
...I have also taken summer courses in Buenos Aires, Argentina. My work experience includes internships at a mental health institute in Manizales, Colombia, and at a European-Union financed
organization in El Alto, Bolivia. I know how frustrating foreign language learning can be, and with my students I feel that mutual goal setting and patience are of paramount importance.
13 Subjects: including trigonometry, Spanish, English, algebra 1
Hello! My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one.
9 Subjects: including trigonometry, calculus, geometry, algebra 1
Related Saint Albans, NY Tutors
Saint Albans, NY Accounting Tutors
Saint Albans, NY ACT Tutors
Saint Albans, NY Algebra Tutors
Saint Albans, NY Algebra 2 Tutors
Saint Albans, NY Calculus Tutors
Saint Albans, NY Geometry Tutors
Saint Albans, NY Math Tutors
Saint Albans, NY Prealgebra Tutors
Saint Albans, NY Precalculus Tutors
Saint Albans, NY SAT Tutors
Saint Albans, NY SAT Math Tutors
Saint Albans, NY Science Tutors
Saint Albans, NY Statistics Tutors
Saint Albans, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Saint_Albans_NY_Trigonometry_tutors.php","timestamp":"2014-04-18T21:54:31Z","content_type":null,"content_length":"24410","record_id":"<urn:uuid:3f73e891-29da-4906-85f6-d25770c5a83f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Latest posts of: Spine - Arduino Forum
Ok, It turns out I am an idiot!
I had a slight error in my Y position calculation, + instead of - (comes from c^2 - a^2 = b^2). I was so involved with thinking of all these other possibilities I overlooked a simple pythagorean
theorem typo.
I re-ran my simulations with comparing to microsoft excel and the arduino answers are on average about 0.5mm off either direction with the worst case around 1mm error. This is well within the error
of other parameters in my system so It is good enough for this application.
Thanks to all that answered and sorry for my massive loss of brain function!
|
{"url":"http://forum.arduino.cc/index.php?action=profile;u=49424;sa=showPosts","timestamp":"2014-04-20T14:25:35Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:41ceec1f-4215-4833-8f31-da2cfc5268d3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gaussian Mixture Model (GMM) - Gaussian Mixture Regression (GMR) - File Exchange - MATLAB Central
Please login to add a comment or rating.
Comments and Ratings (16)
20 Feb
17 Jan good!
Thanks for you code. It is great! But to address my problem, I need every Gaussion function shares exactly the same width. Is that possible or which part I need to amend?
07 Aug
2011 Best
I'm trying to implement this for my data but at the first stage, I'm getting the following error:
Warning: Matrix is singular, close to singular or badly scaled.
10 Jun Results may be inaccurate. RCOND = NaN.
2011 > In gaussPDF at 21
In EM at 94
Can you please help me?
Nice implementation.
Just a tiny little thing: I was wondering why there is no function that computes the output (i.e. the probability) of a learned GMM for a set of data points. I know that it can easily be
31 May done by summing the PDFs of all GMM components (using the function gaussPDF(...)) multiplied by their respective prior.
2011 Also, some of the dimensions of the vectors and matrices in the help-text of the function gaussPDF(...) are not correct. It should be:
Mu: D x 1 array
Sigma: D x D array
26 Apr you shoud change your parameters !! but i can't interpret the resualt !!
12 Feb I had such problem also. The reason was that some other kmeans.m (with other order of inputs) overrides original one. In my example it was kmeans from MatlabArsenal. So I have deleted its
2011 path from Matlab's directories ("File->Set Path...")
08 Dec
If I run demo1, demo2 or demo3 I always get the same error
??? Error using ==> kmeans at 46
Data dimension does not match dimension of centres
Error in ==> EM_init_kmeans at 27
25 Nov [Data_id, Centers] = kmeans(Data', nbStates);
Error in ==> demo1 at 52
[Priors, Mu, Sigma] = EM_init_kmeans(Data, nbStates);
What I am doing wrong?
This also does not work for 3D data, line 28 of plotGMM explicitly considers only the 2D case since the [cos(t) sin(t)] constrains the result to be a length(t) x 2 matrix:
08 Feb X = [cos(t) sin(t)] * real(stdev) + repmat(Mu(:,j)',nbDrawingSeg,1);
The documentation appears great, but suggests that this can handle arbitrarily high dimensional cases.
04 Dec
18 Nov I guess nobody tried to plot the GMM for the 1-D data. I am getting "Matrix dimensions must agree" error in plotGMM line 28.
05 Mar Excellent!
12 Nov There is also a kmeans function in LeSage's toolbox that you need to avoid when trying to use EM_init_kmeans. The demos ran as expected and the code seems pretty well documented.
18 Oct good
15 Jul this is good
|
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/19630-gaussian-mixture-model--gmm--gaussian-mixture-regression--gmr-","timestamp":"2014-04-25T01:44:47Z","content_type":null,"content_length":"39594","record_id":"<urn:uuid:e82ebaed-235f-4098-aeb7-27667de355aa>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector Algebra
February 18th 2010, 01:06 AM
Vector Algebra
V (x , y , z) = 10xyz - (x^2)z + 10y(z^2) -20
find the grad of V.
if i transfer to the form below the how do i associate the -20 with ?
10xyz ax - (x^2)z ay + 10y(z^2) az -20
anyone able to help?
is the answer 10yz - 0 + 20yz ?
many thanks (Happy)
February 18th 2010, 01:43 AM
Please explain how you got this form '10xyz ax - (x^2)z ay + 10y(z^2) az -20'?
and how you get 10yz +20yz
If you are trying to get the gradient of V(x,y,z) you will need to express the gradient in the direction of x, the direction of y and the direction of z.
Thus you should be looking at how you might express this gradient as a vector with partial derivatives.
February 18th 2010, 03:57 AM
The derivative of a constant is 0.
However, your whole concept here is wrong. The grad of a function is a vector:
$abla f(x,y,z)= \frac{\partial f}{\partial x}\vec{i}+ \frac{\partial f}{\partial y}\vec{j}+ \frac{\partial f}{\partial z}\vec{k}$
not another real valued function as you have written it.
|
{"url":"http://mathhelpforum.com/calculus/129422-vector-algebra-print.html","timestamp":"2014-04-16T20:56:56Z","content_type":null,"content_length":"5630","record_id":"<urn:uuid:e781e069-828c-4bf6-af99-b2e6115e186a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Urgent! Please, Trig Help!
Hello, I have had quite a bit of troule with three of the trig problems that i had for homework. If you could show me the steps to get the answer that would be great because i'm sure there's
going to be a problem like these on a quiz we will have pretty soon.
1. sec(x)+csc(x)
-------------- Express in 1 trig function
2. cos(x)
----------- + tan(x) Express in 1 trig function
3. (sin(x))(cox(x)+sin(x)tan(x))
Thanks ALOT for the help
|
{"url":"http://mathhelpforum.com/trigonometry/1835-urgent-please-trig-help.html","timestamp":"2014-04-17T01:10:59Z","content_type":null,"content_length":"41924","record_id":"<urn:uuid:80c7754f-827c-4ad5-8ce5-f3097e0bbe8e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear algebra question
May 5th 2007, 07:45 AM #1
Junior Member
Dec 2006
Linear algebra question
If anyone could explain how the following is done, it would be greatly appreciated.
Give an example of the following. If no example exists, explain why.
- A set of vectors in M sub 3,4 which are linearly independent but do not span M sub 3, 4
Assuming that M_{3,4} denotes the set of all 3x4 matrices (over R?)
are linearly independent and do not span M_{3,4} and so are the elements
of a set with the desired property.
Last edited by CaptainBlack; May 6th 2007 at 01:44 AM. Reason: to avoid anyone knowing that he made a mistake!
would not those matrices need a third row? since they have to be elements of M_{3,4}?
May 5th 2007, 07:59 AM #2
Grand Panjandrum
Nov 2005
May 5th 2007, 08:20 AM #3
May 6th 2007, 01:42 AM #4
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/advanced-algebra/14579-linear-algebra-question.html","timestamp":"2014-04-17T14:09:03Z","content_type":null,"content_length":"41950","record_id":"<urn:uuid:e2e1403f-3ac0-475c-ab5c-0e42d532baff>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are there smooth bodies of constant width?
up vote 12 down vote favorite
The standard Reuleaux triangle is not smooth, but the three points of tangential discontinuity can be smoothed as in the figure below (left), from the Wikipedia article. However, it is unclear (to
me) from this diagram whether the curve is $C^2$ or $C^\infty$.
Meissner’s tetrahedron is a 3D body of constant width, but it is not smooth, as is evident in the right figure below.
My question is:
Are there $C^\infty$ constant-width bodies in $\mathbb{R}^d$ (other than the spheres)?
The image of Meissner’s tetrahedron above is taken from the impressive work of Thomas Lachand–Robert and Edouard Oudet, "Bodies of constant width in arbitrary dimension" (Math. Nachr. 280, No. 7,
740-750 (2007); pre-publication PDF here).
I suspect the answer to my question is known, in which case a reference would suffice. Thanks!
Addendum. Thanks to the knowledgeable (and rapid!) answers by Gerry, Anton, and Andrey, my question is completely answered—I am grateful!!
mg.metric-geometry reference-request
I trust you mean, other than the sphere. – Gerry Myerson Feb 3 '11 at 22:40
@Gerry: Yes, that's what I meant, thanks; updated the question. – Joseph O'Rourke Feb 3 '11 at 22:48
You ask whether the curve on the diagram is smooth. It isn't: it's made from six circle arcs. – Zsbán Ambrus Feb 4 '11 at 10:25
Thanks, Zsbán. So I guess just $C^1$? – Joseph O'Rourke Feb 4 '11 at 10:45
add comment
5 Answers
active oldest votes
Fillmore showed that there are sets of constant width in $\mathbb R^d$ with analytic boundaries which have a trivial symmetry group (so these are very different from spheres; see
"Symmetries of surfaces of constant width", J. Differential Geom., Vol 3, (1969), pp. 103-110).
up vote 22
down vote Moreover, the set of bodies of constant width with analytic boundaries is dense in the space of all convex bodies of constant width in $\mathbb R^d$ with respect to the Hausdorff metric
accepted (see e.g. "Smooth approximation of convex bodies" by Schneider).
1 In the second paragraph, a "of constant width" is missing; it is implicit, but it disturbed me for a second. – Benoît Kloeckner Feb 11 '11 at 12:11
@Benoît Kloeckner: Edited, thanks! – Andrey Rekalo Feb 11 '11 at 12:16
2 For an ignoramus like me, these results are quite impressive and highly counter-intuitive. – Olivier Dec 11 '11 at 14:19
I took a look at the Fillmore paper, and just before his Corollary to Theorem 2 -- which reads "Corollary. There exists an analytic hypersurface of constant width in E^n having the
same group of symmetries as a regular n-simplex." -- he writes "If we imitate the construction of a Reuleux triangle . . .. Thus:" This seems to imply that he is assuming that [the
intersection of four balls in 3-space, centered at the vertices of a regular tetrahedron and each with radius = the side-length of the tetrahedron] is a body of constant width. But
this is known to be false. – Daniel Asimov Jan 11 '13 at 4:50
add comment
Jay P Fillmore, Symmetries of surfaces of constant width, J Differential Geometry 5 (1969) 103-110, says: the curve $$x_1=h\cos\theta-{dh\over d\theta}\sin\theta,\qquad x_2=h\sin\theta+{dh
\over d\theta}\cos\theta$$ where $h=a+b\cos3\theta$, $0\lt8b\lt a$, is analytic and of constant width. If we rotate this curve in Euclidean space ${\bf E}^n$ about an $(n-2)$-dimensional
axis perpendicular to the line $\theta=0$, we obtain an analytic surface, not a sphere, of constant width in ${\bf E}^n$.
up vote 13
down vote The paper may be available at http://www.intlpress.com/JDG/archive/1969/3-1&2-103.pdf
add comment
Take any odd $C^\infty$-function $f$ on the sphere. Consider convex set $$R_\epsilon=\{\\,x\in\mathbb R^n\mid\langle x,u\rangle\le 1+\epsilon{\cdot}f(u)\ \ \text{for any}\ \ u\in\mathbb
up vote 13 {S}^{n-1}\\,\}.$$ Clearly for all sufficiently small $\epsilon>0$, $R_\epsilon$ is a smooth body of constant width.
down vote
add comment
Take any odd smooth function h on the unit (d-1)-sphere and take a constant r>0 large enough to ensure that h+r is the support function of a convex body K
(the condition for h+r to be the support function of a smooth convex body whose boundary has positive Gaussian curvature is that the eigenvalues of Hess(h)+(h+r).Id be positive).
This convex body K is of constant width 2r.
Moreover, any smooth convex body with constant width 2r whose boundary has positive Gaussian curvature can be constructed in this way :
up vote 4
down vote If S is a closed convex hypersurface of constant width 2r, then S is the sum of a sphere of radius r with a "projective hedgehog" H whose support function h is the odd part of the
support function of S (and which can be regarded as the locus of the middles of S's diameters)." ;
See for instance:
Y. Martinez-Maure, Arch. Math., Vol. 67, 156-163 (1996), page 157.
add comment
Michael Kallay characterized the set of all planar sets with a given width functions: See M. Kallay, Reconstruction of a plane convex body from the curvature of its boundary. Israel J.
up vote 2 Math. 17 (1974), 149–161. and M. Kallay, The extreme bodies in the set of plane convex bodies with a given width function. Israel J. Math. 22 (1975), no. 3-4, 203–207.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged mg.metric-geometry reference-request or ask your own question.
|
{"url":"http://mathoverflow.net/questions/54252/are-there-smooth-bodies-of-constant-width/83173","timestamp":"2014-04-20T06:41:02Z","content_type":null,"content_length":"78028","record_id":"<urn:uuid:e160f7ce-b757-4763-89a8-b0e8c7d2340b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
flipcode - Merging Polygons And Sub-Pixel Gaps
Hi Dennis,
Two excellent questions! The first question, in particular, I've been waiting for somebody to ask (I even hinted around about it in a previous response.) The second question is actually the last
phase to the solution for the first. Confused? :)
To clarify your first question, what you're really asking is how to automatically "optimize" a set of polygons. The answer to this is non-trivial. Fortunately, it's on my list of "documents to write
when time allows." Unfortunately, time doesn't look very forgiving for me in the near future, so I'll run through the important points here, and skip the details. It's been a long time since I had to
do this (before my TRI days), so I beg your pardon in advance for anything I may overlook.
Don't get lost in your problem of "merging polygons" - that's only a part of the problem. I'm going to attack it in terms of optimization. This is not to be confused with LOD reduction. We want to
keep the exact same topography for the scene, but we just want to do it in fewer polygons.
When I originally set out to do this, my goal was to find "the most optimal set of convex n-gons from a set of polygons." Let's pull that statement apart. I realize this sounds over-dramatic, but
please bear with me. The three parts are "most optimal set" and "convex n-gons" and "set of polygons". I'll cover the last two first, because they're quickest.
Obviously, "convex n-gons" means just what it says. I chose to go with them because, at the time, it was best (and I'm still convinced that they're good to go for the current state of things.)
When I say "set of polygons", I mean set of any kind of polygon. I didn't actually intend on handling any kind of input polygon, but as it turns out the technique allows for complex input polygons.
Now about that "most optimal set" bit. What does this mean, and to whom? No I'm not trying to act like a psychologist, I really mean it. For example, I might optimize the set into the fewest possible
polygons, which would speed up my pipeline. However, this doesn't necessarily always get you the best frame rate.
In figure 1,we see a standard wall with an archway. This shape is represented with 9 polygons (12 triangles.) Take special notice of the red square. This is the portion of the surface that is within
the frustum (or not occluded by something.) In this case, I count a minimum of 5 polygons (6 triangles) inside the frustum.
In figure 2, we see a much different approach. This shape is also represented by 9 polygons, but 18 triangles (as opposed to 12 triangles in figure 1.) However, there is only one polygon (2
triangles) in the frustum.
Obviously, it depends on what is being viewed as to whether or not there is a savings in going with the way figure 2 has things laid out, but in practice, I found this way to be better than the
standard solution. This is especially true for highly occluded scenes.
In short, you have to completely rebuild your scene. Anything less would simply be an incomplete solution. The advantage to doing so is that your scene gets completely optimized.
The steps are as follows:
1. Completely rebuild the scene into complex polygons
2. Split the whole scene up using whichever "best fit" solution you want
3. Remove t-junctions
STEP 1: HOMOGENIZE
Step 1 is really the most involved. Notice that step 1 uses complex polygons, not convex polygons. This means that polygons can be concave and can also have holes. The only limitation I place on
this, is that they cannot have crossing edges. Also, note that they will need to be correct complex polygons (i.e. the winding order of the vertices in the holes go opposite to the winding order of
the exterior vertices.) This may sound difficult, but its not too bad, really, if you're careful.
Step one actually starts by removing t-junctions. But since I'm going to cover them in step 3, I'll not waste the space here. Just know that the whole process starts and ends with t-junction removal.
Start by choosing a polygon (it doesn't matter which.) Put the vertices into an ordered list. If your input polygon is a triangle, you'll have three vertices in your list: A->B->C. Remember, this
list represents vertices, so it should be treated like a circular queue (i.e. A -> B -> C -> A ...) Now throw that polygon away; consider it "processed."
We're going to be building on this list of vertices, adding vertices to it until we have a complete polygon and we can find no more polygons that share edges with it. In order for a polygon to be
considered for merging, we need to make sure that the polygons have the same properties. For example, the two polygons must be coplanar. They must have the same material applied to them, etc.
So let's start merging. Find a polygon that shares an edge with our vertex list (ABC.) Remember, sharing edges go the opposite direction. You would naturally think that two polygons with the same
winding order would be shared in the order they appear. For example, polygon ABC and polygon DEF might share BC with EF. In reality, it's BC shared with FE. Here's an example of two polygons sharing
an edge:
When you find a polygon that shares an edge (and shares the same surface properties), it's time to merge them. Simply delete the two vertices from polygon DEF that were shared with the vertex list.
This would delete E and F, leaving us with D. If your input set is more than triangles, you will have more vertices left over. Insert the remaining vertices (in this case, just D) into the list
between the two vertices that were shared (in this case, between B and C.) Now throw away the polygon that was just merged and consider it "processed." Our new list now is A->B->D->C.
Continue this process until you find no more shared edges. At that point, you'll have your first (potentially complex) polygon. Pick another polygon (any polygon will do) and start over. Lather,
rinse, repeat.
Before I go on to step 2, let me briefly point out a few common gotchas (at least, the ones I can remember.) First, if your input data has a lot of garbage in it (as was most of the archaic 3DS file
stuff I was working with), you might run into some degenerate polygons (i.e. polygons that are infinitely thin) or polygons that are multiply defined (i.e. three vertices used to define two identical
polygons with the same normal.) These can really trip you up, so be careful, and use lots of ASSERTS.
STEP 2: SPLIT IT UP
At this point, you have no more polygons; they've all been merged into a bunch of vertex lists (which, in effect are still polygons, but lets think of them as lists for now.) How we proceed from
here, depends on which best fit solution we want to go with. I'm going to stick to what I know, Monty, and go with what's behind figure #2. After reciting a short prayer to the deity of computational
geometry, we're ready to begin (no, that doesn't actually help, but the chicks dig it. =)
We're going to be working with one list at a time, splitting them each up into usable polygons. For this, we'll need a concavity test and an area calculation that works with complex polygons. You'll
find them in the appendices (?!) at the end of this document. We want to split up these polygons until we get a "best set" of convex n-gons. If you want triangles in the end, then you'll have to take
that last step on your own (there are plenty of references out there that cover this.)
If the list of vertices defines a convex polygon, consider it finished. Otherwise, we'll need to split it up. This process begins by selecting an edge (two consecutive entries in the list) from the
list of vertices. We'll attempt to split the polygon by that edge. Here's an example:
In this example, we have two possible splits. Notice that if we tried to split by any other edge, the polygon would remain whole. We'll be testing each possible edge for a split. We don't want to
stop when we find one that works; remember, we want a "best fit" solution. What we're looking for is the split that results in the single, largest, valid fragment. Looking at figure four, we see two
potential splits. On the left, we find a split that results in a very small fragment and a larger fragment; we'll keep track of the area of the larger fragment. On the right, we see another split. If
we calculate the areas, we'll find that the area of the larger fragment is not as big as the larger area of the previous split. I sure hope that's not too confusing.
Once we've tried all possible splits for this polygon, we go back to the one that resulted in the largest fragment. It is now safe to throw away our current list of vertices and replace it by the two
smaller fragments.
Next, go back up a few paragraphs to the point where I said, "If the list of vertices defines a convex polygon..." and (you know the drill) - lather, rinse, repeat.
So, how do you split a complex polygon? This actually goes beyond the scope of this document, but there are plenty of references on the net. Paul Bourke's home page (http://www.swin.edu.au/astronomy/
pbourke/) is a good place to start.
STEP 3: REMOVE T-JUNCTIONS
If you've made it this far, you've probably noticed that we've completely munged our world geometry (twice!) in order to completely reconstruct it. At this point, we've got a terribly optimized
database, but filled with a lot of t-junctions, because of the way we've split up the polygons. No problem, step 3 to the rescue!
I've done t-junction removal two ways in the past. Take a peek at figure 5:
On the left, we have the standard "split it up" and on the right, we have the "slide". These aren't dance moves, they're just lame names I come up with for stuff when I'm bored.
Splitting it up is really straight forward, but ends up with an extra polygon. Occasionally, however, you can slide a vertex to collapse two polygons into one. It's usually a good idea to do this
whenever possible, because in this case, fewer polygons is always the best. Note that you can only slide a vertex when the two polygons being collapsed are coplanar (and share the same surface
properties, don't forget!)
We must first detect when a vertex is the source of a t-junction. Looking closely at either half of figure 5, notice how the two polygons share two vertices, but not an edge. This is a great trivial
rejection test. If you don't find this to be the case, then you're guaranteed not to have a t-junction with these two polygons. Otherwise, we need to test for a t-junction. Testing for a t-junction
involves ignoring all vertices that are shared and testing the remaining vertices to see if they lie on the edge defined by the opposite polygon.
How do you determine if a point lies on the edge? Here's a simple way (there are more efficient solutions, but this one, at least, is pretty simple): Calculate the vector of the edge and then
calculate the vector from one point on the edge to the vertex being tested. Normalize them and then take the absolute value of their dot products. If that value is within a tolerance to 1.0, then the
vertex lies on the line defined by the edge. The only thing left to do, is to determine if the vertex is between both edge points. You can do this, by creating two vectors (one from each endpoint of
the edge, pointing at the vertex) and see if they point at one another. For those purists out there, you'll find a more efficient solution on Paul Bourke's page.
Once we determine that there is, in fact, a t-junction, we can split the opposing polygon. This is as simple as inserting the source t-junction vertex into the opposing polygon (between the two edge
vertices) and replacing that polygon with the two pieces. This is all just simple vertex manipulation, but what about sliding?
Sliding involves a little bit of extra work. As a matter of fact, it's a lot trickier than I expected to explain, and since I'm already on page four or five of this response, I'm going to totally
bail on this one and leave it up as an exercise to the reader. :) Actually, if somebody is interested, send in another question and I'll cover it.
That pretty much covers the topic, but we've still got some loose ends to clean up...
The way I've always done concavity testing was quite simple. I would build a plane out of each edge. You can do this, by taking the cross product of the polygon's normal and the edge-vector. This
gives you an "edge-plane" normal. An edge plane is that plane, which is defined by the two endpoints of the edge, and perpendicular to the polygon.
If you were to attempt to bisect the polygon on all of its edge planes, what would happen? If the polygon was convex, nothing would happen. So we test for this. The quick and dirty answer is to
simply visit each edge-plane and test each vertex to make sure they're all on the same side. If not, then the polygon is not convex.
We need an area routine that works for complex polygons. I found a fantastic little ditty in CGP&P that worked for 2D polygons, and extended that to 3D. It's fast, and very accurate:
areaXY = 0;
areaYZ = 0;
areaZX = 0;
for (each edge)
areaXY += (edge1.x + edge0.x) * (edge0.y - edge1.y) / 2;
areaYZ += (edge1.y + edge0.y) * (edge0.z - edge1.z) / 2;
areaZX += (edge0.x + edge1.x) * (edge1.z - edge0.z) / 2;
area = sqrt(areaXY * areaXY + areaYZ * areaYZ + areaZX * areaZX);
I should warn you that this is not trivial code to write. This is the kind of code that can be pretty hairy without good debugging tools and techniques. I remember actually writing a mini-debugger
for this that would render the polygons (perpendicular to their plane) to a series of image files, so I could investigate what was happening along each step of the process. When I eventually tackle
this again (and I will -- it's already been scheduled), I'll be sure to publish the source code to it.
In my experience, I've seen this algorithm (or something very similar to it, depending on how true my recollection really is) generate, on average, greater than 50% polygon reduction, with similar
improvements to my frame rates. Personally, I would consider that well worth the effort.
Response provided by Paul Nettle
|
{"url":"http://www.flipcode.com/archives/Merging_Polygons_And_Sub-Pixel_Gaps.shtml","timestamp":"2014-04-21T07:17:25Z","content_type":null,"content_length":"22915","record_id":"<urn:uuid:94c39d76-a427-4a02-92fa-cca1041d207c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Little problem I created for myself
July 22nd 2013, 02:13 PM #1
Oct 2011
Little problem I created for myself
I want to have some algebra expression so that once it is solved the answer becomes $E=mc^2$. I want it to be a little complicated though where the answer isn't obvious and you do actually have
to reduces, cancel ect. Something like this for example. I know it doesn't mean anything but to give you an idea, once solved and everything cancels ect the solution is $E=mc^2$
$ac^2+\frac{E^2}{6-m^2} = -6-a \frac{1}{E^2}$
Re: Little problem I created for myself
I want to have some algebra expression so that once it is solved the answer becomes $E=mc^2$. I want it to be a little complicated though where the answer isn't obvious and you do actually have
to reduces, cancel ect. Something like this for example. I know it doesn't mean anything but to give you an idea, once solved and everything cancels ect the solution is $E=mc^2$
$ac^2+\frac{E^2}{6-m^2} = -6-a \frac{1}{E^2}$
Well, you could start with E = mc^2 and add a term to both sides, divide both sides by something, square both sides, etc. If you want to have real fun you could do something like...
$E = mc^2$
$E + 3ac = mc^2 + 10ac^3$
or something and make them solve for a such that E = mc^2.
Of course you could have some real fun and use $E^2 = p^2c^2 + (mc^2)^2$, where p is the momentum. That's the general form for E = mc^2.
Re: Little problem I created for myself
Thanks topsquark, I had something like that in mind but I wanted to make it a little more complicated and not make it apparent that the answer is $E=mc^2$
Re: Little problem I created for myself
I want to have some algebra expression so that once it is solved the answer becomes $E=mc^2$. I want it to be a little complicated though where the answer isn't obvious and you do actually have
to reduces, cancel ect. Something like this for example. I know it doesn't mean anything but to give you an idea, once solved and everything cancels ect the solution is $E=mc^2$
$ac^2+\frac{E^2}{6-m^2} = -6-a \frac{1}{E^2}$
As an exercise for someone you can ask him to solve for E and x(both real number) and for $m e 0$ and $c e 0$ the following:
$2{E^2} + {(mxc)^2} + {m^2}{c^4} - 2mxcE - 2{c^2}Em \le 0$
or even better the equivalent inequality:
$2{\left( {\frac{E}{c}} \right)^2} + {(mc)^2} + {(mx)^2} - 2\frac{{mxE}}{c} - 2mE \le 0$
Last edited by ChessTal; July 23rd 2013 at 03:45 AM.
Re: Little problem I created for myself
As an exercise for someone you can ask him to solve for E and x(both real number) and for $m e 0$ and $c e 0$ the following:
$2{E^2} + {(mxc)^2} + {m^2}{c^4} - 2mxcE - 2{c^2}Em \le 0$
or even better the equivalent inequality:
$2{\left( {\frac{E}{c}} \right)^2} + {(mc)^2} + {(mx)^2} - 2\frac{{mxE}}{c} - 2mE \le 0$
When I paste these into Wolfram it tells me it doesn't understand the equation and the closest answer it gives is $c^4m^2$
Re: Little problem I created for myself
July 22nd 2013, 05:26 PM #2
July 23rd 2013, 02:12 AM #3
Oct 2011
July 23rd 2013, 03:41 AM #4
Junior Member
Jan 2010
July 23rd 2013, 05:15 AM #5
Oct 2011
July 23rd 2013, 10:18 AM #6
Junior Member
Jan 2010
|
{"url":"http://mathhelpforum.com/algebra/220748-little-problem-i-created-myself.html","timestamp":"2014-04-16T14:27:35Z","content_type":null,"content_length":"53667","record_id":"<urn:uuid:807dbf90-4d64-49fd-9e11-31103b5ac9c1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cheyney Precalculus Tutor
Find a Cheyney Precalculus Tutor
...Taught high school math and have extensive experience tutoring in SAT Math. Able to help students improve their math skills and also learn many valuable test-related shortcuts and strategies.
Scored 770/800 on SAT Reading in high school and 790/800 on January 26, 2013 test.
19 Subjects: including precalculus, calculus, statistics, geometry
...Chemistry in its purest form is just about relationships. When people think of chemistry, they get caught up in numbers and complex science, but when the relationships between materials are
shown, it all fits together like a puzzle. When it "clicks," it's so incredibly useful in daily life.
14 Subjects: including precalculus, chemistry, algebra 1, algebra 2
...Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design. In between formal
tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems...
14 Subjects: including precalculus, physics, calculus, geometry
...I am passionate about Math in the early years, from Pre-Algebra through Pre-Calculus. Middle school and early High School are the ages when most children develop crazy ideas about their
abilities regarding math. It upsets me when I hear students say, 'I'm just not good in math!' Comments like ...
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I have been an SAT, ACT, PSAT tutor for over 10 years. My first career was in business as a Vice President, consultant, and trainer. During this time I taught Business Management at the
University of Wisconsin and Chestnut Hill College.
35 Subjects: including precalculus, chemistry, English, geometry
|
{"url":"http://www.purplemath.com/Cheyney_Precalculus_tutors.php","timestamp":"2014-04-16T07:31:29Z","content_type":null,"content_length":"23896","record_id":"<urn:uuid:a894e73b-2dbb-4b0e-9119-875f45b91200>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Talk Tennis - View Single Post - could the numbering system be a loop?
No... The easiest example off the top of my head is the following: the limit of e^x as x→∞ = ∞. The limit of e^x as x→-∞ = 0. If +∞ = -∞, then the two limits would be equal, and no one is arguing the
possibility of 0 = ∞.
If you then want to ask if 0 does indeed equal infinity, we could just use a similar argument: the the limit of e^x as x→0 = 1. By definition, 1 (a finite number) cannot equal infinity.
Many people would rather die than think; in fact, most do. - Bertrand Russell
|
{"url":"http://tt.tennis-warehouse.com/showpost.php?p=6993231&postcount=6","timestamp":"2014-04-20T10:32:49Z","content_type":null,"content_length":"17340","record_id":"<urn:uuid:43fc4c8e-cb4a-4c69-9e79-4d42ee73cb83>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-tickets] [NumPy] #601: function for computing powers of a matrix
[Numpy-tickets] [NumPy] #601: function for computing powers of a matrix
NumPy numpy-tickets@scipy....
Sun Oct 28 13:37:24 CDT 2007
#601: function for computing powers of a matrix
Reporter: LevGivon | Owner: somebody
Type: enhancement | Status: new
Priority: normal | Milestone:
Component: numpy.linalg | Version: devel
Severity: normal | Resolution:
Keywords: |
Comment (by charris):
''Is the power of a defective matrix mathematically defined?''
Arbitrary powers? No, try to find the square root of [[0, 1],[0,0]].
Ticket URL: <http://scipy.org/scipy/numpy/ticket/601#comment:3>
NumPy <http://projects.scipy.org/scipy/numpy>
The fundamental package needed for scientific computing with Python.
More information about the Numpy-tickets mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-tickets/2007-October/001421.html","timestamp":"2014-04-19T12:27:29Z","content_type":null,"content_length":"3771","record_id":"<urn:uuid:8e86c914-8286-4dc9-9b2c-84b63b25f422>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Irrational Decimals
Date: 08/04/98 at 08:54:13
From: Scott Bennett
Subject: Decimal representations of rational numbers
I was recently shown a proof stating that 0.9 (repeating) is actually
equal to one. I have come to accept this proof, but I'm wondering from
a representation standpoint whether they are actually considered to be
mathematically equal.
The proof involves using an sum of 9/(10^n) for n = 1 to infinity,
which is an expression for 0.9 repeating. I was actually able to
justify this by pulling the 9 out of the sum and multiplying it by the
sum of 1/(10^n) for n = 1 to infinity.
This opens up a whole new can of worms, specifically that any
irrational number could possibly be expressed as an infinite sum of
rational numbers. And since the sum of two or more rational numbers is
another rational number, there may not be any irrational numbers as the
term is currently defined.
My question is, are the decimal representations of rational numbers and
the rational numbers themselves mathematically equal, or would they
more appropriately be called equivalent?
Date: 08/04/98 at 16:59:58
From: Doctor Rob
Subject: Re: Decimal representations of rational numbers
You are beginning to discover some of the beauties and mysteries of
It is true that the sum of any finite number of rational numbers is
again a rational number, but the sum of an infinite number of rational
numbers may be irrational. After all, any decimal expansion, such as:
Pi = 3.14159265358979323846...
sqrt(2) = 1.414213562...
and so on, are actually infinite sums of rational numbers:
3.14159... = 3 + 1/10 + 4/10^2 + 1/10^3 + 5/10^4 + 9/10^5 + ...
Yes, every irrational number is the sum of an infinite number of
rational numbers. That does not preclude them from existing. The
rational numbers are just those whose decimal expansions are
eventually periodic. All the others are irrational.
In answer to your last question, the decimal representations and the
rational numbers are mathematically equal. Some rational numbers have
more than one decimal expansion, as you have come to accept:
1 = 1.00000000... = 0.99999999...
for example. That is not an impediment.
Keep asking good questions!
- Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 08/05/98 at 11:39:15
From: Anonymous
Subject: Re: Decimal representations of rational
How can you say that an infinite sum of rational numbers can be an
irrational number? If you can add two rational numbers and get a
rational number, then by the associative property of addition, you
could add up any number of rational numbers, even an infinite number,
and get a rational number.
It defies logic to say that you can show irrationality by demonstrating
that a number cannot be shown to be the root of Qx+P = 0. There are an
infinite number of integers, and no one can know whether there are
numbers out there for which this method will or will not work. Maybe
our computing power just is not enough yet to find these numbers, but
this does not mean they don't exist, they're just beyond our current
To me, you could prove irrationality by showing that there is not a
repeating pattern. But what if the repeating pattern is infinitely
long? And what if the pattern is not necessarily (I don't know the
right word for it) linear? Maybe something like:
1.32435465768798109111012 ...
There is a repeating pattern, but it's not linear, like:
1.35246135246 ...
(like I said, I don't know the right word for it, but linear seems
Thanks, this has been fun and educational, and has definitely made me
think. This revelation has only come to me in the past few days through
some research and just plain thinking about things, and it has shaken
me up a little bit, but I think that's good now and then.
Date: 08/05/98 at 16:09:16
From: Doctor Rob
Subject: Re: Decimal representations of rational
When you pass from a finite number of terms to an infinite number of
terms, your proof breaks down, and no substitute proof is available.
You may get for your sum either a rational or an irrational number.
See below.
An irrational number is defined as one which cannot be written in the
form a/b for integers a and b with b nonzero. (You might as well assume
that b > 0, since a/[-b] = [-a]/b.) Those are precisely the solutions
of b*x + (-a) = 0, so any number which is not the solution of such a
linear equation with integer coefficients is by the definition an
irrational number.
Every rational number a/b has a finite repeating sequence of digits
in its decimal expansion. The proof goes like this. Use long division
to divide b into a. Look at the sequence of remainders you get.
All of them are smaller than b (if you have done the long division
correctly!). After at most b steps, you will either get a remainder
of 0 (and the decimal expansion terminates) or you will get a nonzero
remainder you have already seen previously. This is an application of
the Pigeonhole Principle. (If you put n+1 things in n boxes, at least
one box must contain two of them.) From that point on, the division
will exactly duplicate the steps following that previously seen point,
including getting that same remainder yet again, after the same number
of steps. If you are getting the same sequence of remainders, you will
be getting the same sequence of quotient digits, so that will repeat
with the same period as the remainder sequence.
The above argument is independent of the size of a and b, so is valid
for every integer choices for those numbers. If b is large, the period
may be long, but it is finite, since its length is at most b-1. (Why
can I use "b-1" here, instead of "b"? Think about it.)
Here is an example: 16/37.
37 ) 16.0000000000000
Sequence of remainders: 16, 12, 9, 16, 12, 9, 16, 12, 9, 16, ...
--------- --------- --------- --------
If two remainders are equal, then when you bring down the 0, the result
will also be the same, and the quotient digit will be the same, and the
new remainder will be the same, so all the steps afterwards will look
like a repetition of earlier steps.
Another example: 93/176.
176 ) 93.000000000000...
Sequence of remainders: 93, 50, 148, 72, 16, 160, 16, 160, 16, ...
------- ------- -------
There is a certain pattern to the decimal digits, but they do not form
a periodic sequence of digits. All those decimals cannot be numbers of
the form a/b, for any a or b, by the above argument. Thus they are
irrational. Here is another kind:
1/10^1 + 1/10^4 + 1/10^9 + 1/10^16 + 1/10^25 + ... + 1/10^(n^2) + ...
= 0.1001000010000001000000001000000000010000000000001000...
The gaps between the 1's get longer and longer, so it is not a periodic
repeating decimal. That means it must be irrational.
Keep thinking and asking good questions!
- Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/52829.html","timestamp":"2014-04-17T12:31:03Z","content_type":null,"content_length":"12922","record_id":"<urn:uuid:7e0348ee-b746-4031-819b-af3a0d78c2ac>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Niwot Algebra Tutor
Find a Niwot Algebra Tutor
...I am happy to help with any Algebra 2 topics including: complex numbers, transformations on functions (polynomial, exponential, logarithmic, and trigonometric), operations on polynomial
expressions, equations and functions, circular and triangular trigonometry, rational and radical expressions an...
14 Subjects: including algebra 1, algebra 2, reading, geometry
...I have mentored many junior level engineers, introducing them to test driven development. domain driven design, and iterative software development. I have been a professional Software Engineer
since 1978. I started my programming career using Pascal but have programmed primarily in Java since 1997.
17 Subjects: including algebra 1, algebra 2, geometry, statistics
...I specialize in tutoring students in economics, SAT/ACT prep, and math. I embrace the 'flipped classroom model,' which means I typically assign students material to read and problems to try
before our session. In my sessions, we follow an agenda: diagnose the problem, test the student's learning, review the answer, and repeat.
41 Subjects: including algebra 1, algebra 2, Spanish, reading
...I believe that anyone can learn, depending on their interest and what they want to do with the knowledge. If one wants to learn, anything is possible! There are no limits and with that mindset
anything is achievable.
20 Subjects: including algebra 1, algebra 2, reading, English
I am a young, energetic teacher at 36 and i love what I do! I have been teaching with a dynamic philosophy for 10 years and love working with 6th grade and up. I have extensive curriculum
knowledge and a creative approach to teaching.
21 Subjects: including algebra 2, algebra 1, reading, elementary math
|
{"url":"http://www.purplemath.com/niwot_algebra_tutors.php","timestamp":"2014-04-21T00:02:51Z","content_type":null,"content_length":"23507","record_id":"<urn:uuid:219e2143-445c-4f64-84b2-4bbbfb891833>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This page uses JavaSketchpad. You may drag any of the red points, and the construction will adjust accordingly.
Explanation of the construction: We need a length s such that s^2 = AB*BC. This can be written s/AB = BC/s, which suggests similar triangles. The triangle AGE has a right angle at G, being inscribed
in a semicircle, and triangles ABG and GBE are right triangles similar to it and thus each other. We have thus equal ratios of short leg to long leg, BG/AB = BE/BG, and since BE = BC, we can use BG
for the side s.
Back to ruler and compass constructions main page.
Back to Ken Brakke's home page.
|
{"url":"http://www.susqu.edu/brakke/constructions/24-squarerectangle.htm","timestamp":"2014-04-20T03:46:21Z","content_type":null,"content_length":"5112","record_id":"<urn:uuid:3401dacd-b752-468d-9651-ed8e29ba6139>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interger.parseInt Question
April 5th, 2013, 05:21 PM #1
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
I copied this code from a book, any while it will compile, I get an exception error when I try to run it. Any idea what the issue is?
class Power
public static void main ( String[] args )
int num = Integer.parseInt( args[0] ) ;
int square = (int) Math.pow( num, 2 ) ;
int cube = (int) Math.pow( num, 3 ) ;
int sqrt = (int) Math.sqrt( num ) ;
System.out.println( num + "squared is " + square ) ;
System.out.println( num + " cubed is " + cube ) ;
System.out.println( "Square root of " +num+" is " + sqrt ) ;
I get an exception error
Please copy the full text of the error message and paste it here.
If you don't understand my answer, don't ignore it, ask a question.
at Power.main(power.java:5)
The error message says that the array: args is empty (it does not have an element at index 0)
The code should test the length of the args array and make sure it has some content before trying to get at any of its elements.
The user executing the program needs to have an arg on the commandline following the classname for the java program to pass to the main() method in the args array.
If you don't understand my answer, don't ignore it, ask a question.
Thanks Norm,
I appreciate the response, but could you explain that in laymans terms? That would really help. Sorry, but I'm just starting out with Java.
When an application is launched, the runtime system passes the command-line arguments to the application's main method via an array of Strings. You need to pass arguments while you run this code
as on this line no 5 : int num = Integer.parseInt( args[0] ) ; you are fetching arguments from the command line.
so when you run the code using the command : java Power 2
you will get the output as below:
2squared is 4
2 cubed is 8
Square root of 2 is 1
can you please show me the exact code you used to correct it? That would really help me understand what I need to do. Thanks in advance.
The user needs to enter:
java Power 2
to execute the program without any errors.
If you want the program to catch errors
Pseudo code:
if length of the array passed to main() is < 1
then print error message to user and exit the program
else use the first arg of the array
If you don't understand my answer, don't ignore it, ask a question.
Thanks for the reply Norm, Where exactly do I insert the line Java Power 2?
That is the command line that should be used when the program is executed.
How do you execute the program?
Here's a sample of the console when I execute the java command:
D:\JavaDevelopment\Testing\ForumQuestions9>java TestCode13 args-here
If you don't understand my answer, don't ignore it, ask a question.
I'm using Bluej
Sorry, I have no idea how to use your IDE.
Once the program is finished being developed, move it out of the IDE to execute it.
If you don't understand my answer, don't ignore it, ask a question.
Ok, that makes sense, I'll do that. Thanks for all your help Norm. I'm sure I'll be posting many, many more times in this forum, hehe.
April 5th, 2013, 05:29 PM #2
Super Moderator
Join Date
May 2010
Eastern Florida
Thanked 1,953 Times in 1,927 Posts
April 5th, 2013, 06:21 PM #3
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
April 5th, 2013, 06:59 PM #4
Super Moderator
Join Date
May 2010
Eastern Florida
Thanked 1,953 Times in 1,927 Posts
April 5th, 2013, 10:35 PM #5
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
April 6th, 2013, 12:58 AM #6
Junior Member
Join Date
Apr 2013
Thanked 3 Times in 3 Posts
April 6th, 2013, 10:54 AM #7
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
April 6th, 2013, 11:02 AM #8
Super Moderator
Join Date
May 2010
Eastern Florida
Thanked 1,953 Times in 1,927 Posts
April 6th, 2013, 11:22 AM #9
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
April 6th, 2013, 11:38 AM #10
Super Moderator
Join Date
May 2010
Eastern Florida
Thanked 1,953 Times in 1,927 Posts
April 6th, 2013, 11:40 AM #11
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
April 6th, 2013, 11:43 AM #12
Super Moderator
Join Date
May 2010
Eastern Florida
Thanked 1,953 Times in 1,927 Posts
April 6th, 2013, 11:46 AM #13
Join Date
Mar 2012
Thanked 0 Times in 0 Posts
|
{"url":"http://www.javaprogrammingforums.com/whats-wrong-my-code/27114-interger-parseint-question.html","timestamp":"2014-04-16T10:19:38Z","content_type":null,"content_length":"102770","record_id":"<urn:uuid:f2b07ccf-d4ed-4bd9-9191-7c762bdd18d7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fords ACT Tutor
...Also I am equipped with the Algebra resources like online tutoring videos, power points, worksheets, online resources of preparing question papers for tests and quizzes. Whatever the needs of
the students in Algebra-2 I can provide them appropriately and teach satisfactorily. Since I have exper...
10 Subjects: including ACT Math, calculus, geometry, algebra 1
...I hold a BA in Math. I am also a native speaker of French and Spanish, which I tutor on a regular basis. finally I have a Master of Science in Computer Sciences. I am a hands-on facilitator.
14 Subjects: including ACT Math, Spanish, calculus, ASVAB
...I'm sharing my test scores here. Hopefully they give a sense of my mastery of standardized testing strategy. Of course, the numbers don't tell the whole story, and I think what I really bring
to the table is the patience and experience to bring students towards mastery themselves.
36 Subjects: including ACT Math, English, chemistry, calculus
...I am comfortable and experienced with all levels of students. Because many of my students have achieved dramatic increases in their scores, some people get the impression that I am mainly an
SAT coach. Such is NOT the case.
23 Subjects: including ACT Math, English, calculus, geometry
...My fascination of science and the universe is a passion I've had my entire life. A graduate of Rutgers University with a Bachelor of Science in mechanical engineering, I am very familiar with
the topics covered in the math section of the SAT. I have taken both the PSAT and SAT tests as well as courses designed to prepare students for the tests.
21 Subjects: including ACT Math, chemistry, physics, calculus
|
{"url":"http://www.purplemath.com/Fords_ACT_tutors.php","timestamp":"2014-04-18T13:39:51Z","content_type":null,"content_length":"23254","record_id":"<urn:uuid:1588afb9-b790-4b97-a4a0-656e21428de8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Protein Structure from a Percolation Perspective
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Biophys J. Sep 16, 2009; 97(6): 1787–1794.
Understanding Protein Structure from a Percolation Perspective
Underlying the unique structures and diverse functions of proteins are a vast range of amino-acid sequences and a highly limited number of folds taken up by the polypeptide backbone. By investigating
the role of noncovalent connections at the backbone level and at the detailed side-chain level, we show that these unique structures emerge from interplay between random and selected features.
Primarily, the protein structure network formed by these connections shows simple (bond) and higher order (clique) percolation behavior distinctly reminiscent of random network models. However, the
clique percolation specific to the side-chain interaction network bears signatures unique to proteins characterized by a larger degree of connectivity than in random networks. These studies reflect
some salient features of the manner in which amino acid sequences select the unique structure of proteins from the pool of a limited number of available folds.
Anfinsen's landmark discovery (1) that the three-dimensional structure of protein is encoded in the amino acid sequence was made more than three decades ago. Although enormous progress has taken
place in decoding the principles of protein folding, a definite scenario, as in the case of the identification of triplet genetic code for amino acid sequence in proteins (2–4) has not yet emerged.
This is due to the fact that several factors such as the random and the selective behavior of the poly-peptide chain, optimization of geometry and energy play a role in the folding of proteins to
their unique native state (5,6). Additionally, evolution has played a major role in selecting proteins, whose structures are optimized for functioning in their environment. Hence, the optimization of
any specific parameter could have taken place to the extent of necessary and sufficient level and not necessarily to the maximum extent. Many important investigations have been carried out for
several decades addressing different aspects. The selection of secondary structures due to geometric constraints (7), the geometry optimization model (5) and the energy landscape model (6) are a few
examples. Furthermore, the availability of a large number of protein structures has aided in formulating and testing the proposed hypotheses. In this study, we have investigated the network of
connections made by noncovalent interactions within the proteins, with a focus of identifying random as well as selective regimes in the network.
It is well known that proteins respect severe constraints imposed by folding entropy (7) and their backbone is arranged in regular arrays of secondary structures such as helices and sheets (8). The
backbone endows the protein a robust skeletal structure composed of optimally packed, immutable folds (8–11) that are resilient to local variations and mutations (12,13). Furthermore, extensive
sequence-structure correlation studies have shown a diversity of sequences for a given backbone structure. However, the underlying global structure of amino acid linkages formed via noncovalent
side-chain interactions, which are also known to be crucial for the stability and uniqueness of protein structure, has received much less attention (14). The element of randomness at the noncovalent
interaction level has been investigated at a preliminary level by considering the protein structures as networks (15) (K. V. Brinda, S. Vishveshwara, and S. Vishveshwara, unpublished data).
In this study, we have constructed structure networks (graphs) of several proteins based on the noncovalent interactions, both at the backbone level as well as including all the atoms of the side
chains. The network parameters obtained from such graphs are compared with different random models, ranging from the most basic, unconstrained random model (Erdős-Rényi (ER)) to the ones constrained
to mimic the protein topology. We specifically compare the percolation behavior of the protein with those of the random graphs by investigating the percolation of basic connections (bond percolation)
(16) as well as higher order connections (clique percolation) (17). We find a striking resemblance between the bond percolation of the protein and all the random models. Additionally, we also find
that the clique-percolation profile of the protein backbone connection graph resembles those of the random graphs. Interestingly, the protein side-chain connectivity graph exhibits clique
percolation, which does not take place in any of the random models. Furthermore, we also observe such a percolating clique in decoy structures, which are poor in secondary structures and represent
the molten globule state (18,19). By our study, we have been able to distinguish the side-chain connectivity in well packed secondary structures as the selective feature unique to folded proteins in
their native state. Thus, the protein adopts the unique fold/structure in which the sequence is capable of making a percolating clique. In other words, the side chains interact in a highly connected
fashion, stitching different secondary, super-secondary structures and stabilizing the protein structure at the global level. Our results are consistent with the fact that diverse sequences carrying
out a variety of functions can adopt the same fold. We have considered the ubiquitous fold of TIM barrel (α/β fold), which is taken up by a large number of dissimilar sequences carrying out diverse
functions, the Helix bundles (all-α) and the Lectins (all-β). We show that the commonality between them is a percolating clique of side-chain connectivity, which link different secondary and
super-secondary structures.
Data set
The data set used for this analysis on the general features consists of a set of 50 single-chain proteins (10 proteins for each size of 200, 400, 600, 800, and 1000 amino acids) with known structures
obtained from the Protein Data Bank (20) (Table S4 in the Supporting Material). To investigate the fold specific features we have considered a data set of 15 proteins (five proteins for each of the
folds: α/β, all-α and all-β) obtained from the Protein Data Bank (Table S5). The decoy structures were taken from Decoys ‘R’ Us database (18).
Networks and percolation theory
Much of the analysis of the protein network is based on key concepts borrowed from complex network theory and percolation studies. Broadly, a network (graph) consists of a collection of points
(nodes) connected to one another by bonds (links). The nature of the network and the degree to which it is connected largely depends on the guiding principles governing the formation of links; for a
class of random networks the formation of a link depends on a given probability of connection. The links, for instance, depend on the noncovalent connections in the case of protein structures and on
the interacting proteins in protein-protein interaction network. A signature feature identifying properties of a network is the degree distribution, the degree being the number of links connected to
a node. For example, a large class of random networks is known to exhibit degree distributions that peak around a specific value. On the other hand, some of the real-world networks such as the
protein-protein interaction network or the spread of diseases (21,22), exhibit scale free networks or small-world network behavior in which certain nodes are highly connected.
The hallmark of a broad class of random networks is the presence of a transition point at which a giant connected cluster percolates the system whereas below this threshold (critical point), only
smaller clusters are present. At the simplest level, the giant cluster may consist of connected bonds and the transition point can be identified by the size of the largest cluster as a function of
the probability of connections. Instead of a simple bond percolation, we can envisage the percolation of more densely connected object-clique percolation. A clique, in a network, is a cluster where
each node is connected to every other node. If the number of nodes in a clique is k, a community is defined as the collection of adjacent k-cliques where each clique shares k-1 nodes with the
adjacent clique (17). Hence the largest community, which spans over the entire network, is a percolated clique and we use the terminology of “largest community” for clique percolation.
Representation of protein structures as networks
Protein side-chain network (PScN) is constructed on the basis of the details of the side-chain interactions, which is quantified in terms of the extent of interaction (23). Protein backbone network
(PBN) is constructed by considering the Cα atom of each residue in the protein as a node and any two Cα atoms (excluding the sequence neighbors) situated at a distance less than a cut-off distance
are connected by an edge (24). A brief description of this method is provided in the Supporting Material. The principle behind construction of PScN and PBN is pictorially depicted in Fig. 1. In this
study, we identify the number of connections in PBN as a function of Cα-Cα distance ranging from 4.5 Å to 10 Å and I[min] ranging from 1% to 9% in PScN.
Representation of noncovalent connections for the protein backbone (PBN) and the side-chain (PScN) graphs. Two amino acids (ARG255 and ASP56) are shown in ball and stick model in the protein
dihydropteroate synthase from Escherichia coli (Protein Data ...
Random network models
Three types of random graphs are used for comparison with the protein graphs. One of the models (RM1) is a simple unconstrained model similar to that of ER. The second one (RM2) is constrained to the
topology of the protein, which obeys the rule of excluded volume. The third one (RM3) is the same as RM2, except that the node (amino acid) position is also constrained to that of the protein.
ER random network model (RM1) and mapping of connection to probability
The ER model is arguably the best studied model for random networks. It has the simple feature that any node can be linked to any other with some probability p. Several features of this model are
known analytically. In particular, its degree distribution for a number of links k follows a Poisson curve n(k) = N (pN)^k e^−pN /k!, where N is the total number of nodes, and the critical
probability for the bond percolation transition is at p =1/N. For the k-clique percolation transition, critical probability is at p(k) = 1/[(k − 1)N]^1/(k−1) . Based on compelling trends that we
observed in protein structure, we have used the ER model and variants thereof to compare with the network properties of proteins.
We selected random graphs of the node sizes 200, 400, 600, 800, and 1000 to represent proteins of different sizes. Several realizations of ER random graphs were generated for the given node size with
varying probability of edges. The number of edges (the average of 10 proteins of chosen size and obtained under a given condition) in the protein graph is matched to the corresponding probability of
connection in the ER graph. Thus, the number of edges is matched to the probability of connection.
Constrained random network models
Finite size random node-constrained random edge model (RM2)
Proteins are of finite size and the RM1 model, which is not constrained in space, is not the best random model to compare the protein structure networks. Hence we have constructed random models,
which are constrained to finite size, idealized to spherical shape to mimic the shape approximately taken up by globular proteins. In this model, the nodes are generated randomly within a sphere, the
radius of which is chosen as approximately the average radius of gyration (R) from the data set of globular proteins of selected size. Hence each of the node coordinate (x,y,z) is within the
spherical limit of R. The random model thus constructed, exhibits a compactness similar to real proteins, as the radius of gyration is a measure of compactness of protein (25). The specified numbers
of edges (corresponding to the number found in protein of the selected size in both the PBN and PScN) are distributed randomly among a pair of nodes, which are within a distance of 6.5 Å or 7.5 Å, or
8.5 Å in three-dimensional space. A distance of 6.5 Å corresponds to the first peak in the radial distribution of residues in the interior of proteins (26,27). However, 7.5 Å, or 8.5 Å distances are
also used not to ignore any atom-atom contact (see Fig. S3). Second, stearic contact is avoided by not connecting the nodes, which are within 4.5 Å of each other. Such a model is protein-like in its
size, has realistic connections in space, and respects the excluded volume criterion. This model is averaged over 20 random realizations.
Protein nodes constrained random edge model (RM3)
The RM2 model mentioned above captures many features of proteins and is a generalized model applicable to a large number of globular proteins. However, it deviates from the exact size and does not
follow the chain connectivity. These features can be incorporated in a protein specific model, by keeping the nodes of the random graph identical to that of the selected protein and randomly rewiring
only the edges. To make realistic edges, the specified number of connections (corresponding to the number found in protein of the selected size in both the PBN and PScN) are randomly distributed
within a physical distance (4.5 Å < distance < 6.5 Å or 7.5 Å or 8.5 Å) of each amino acid in the protein structure. Because the number of edges within a sphere of 6.5 Å is much greater than the
maximum number found in the PScN for a given node size (see Table S2), it is possible to randomly distribute the edges of smaller number. In the case of PBN, the number of edges corresponding to a
lower cutoff (4–9 Å) is selected randomly from the repertoire of edges obtained from a cutoff of 10 Å. In this way, 10 realizations for each protein in the data set are created and finally evaluated
parameters are averaged over each of the 10 proteins in the data set. We denote this model as RM3 model. If proteins are optimally packed with secondary and super-secondary structures, irrespective
of the side chain (5), this model provides a reference point to test the exclusive role played by side-chain interaction because the topology of the model is strictly constrained to that of the
Community identification
For community identification, we have used the program CFinder (v.1.21) (28). An example of k-clique (k = 3) community in the PScN (protein dihydropteroate synthase from Escherichia coli at I[min] =
3%) is shown in Fig. 2.
Largest k-clique (k = 3) community in the dihydropteroate synthase from Escherichia coli (PDB ID = 1AJ0) at I[min] = 3%.
Protein structure and the random networks
Two types of protein structure graphs have been investigated in this study. The PBN represents the polypeptide chain packing and the PScN focuses on the details of side-chain interactions in the
proteins. From the network point of view, the number of connections for a given node size differs depending on the criteria used for connections. For example, proteins of the size of ~400 amino acids
make 396–3679 number of Cα-Cα connections, when the residues with in a range of 4.5–10 Å are considered to be connected in PBN. Similarly, the number of connections for a 400 residue protein varies
from 798 to 133 in PScN, depending on the side-chain connection strengths ranging from I[min] of 1–9% (see Table S2). An important difference to notice between PBN and PScN is that the PBNs
accommodate more number of edges than the PScNs. There is very little overlap between the number of connections of backbone and the side-chain regimes. The number of edges plays a significant role in
the corresponding random graphs because the likelihood of percolation increases with an increase in the probability of connections and one can comfortably separate the random graphs as PBN or PScN
like. For the sake of brevity, we have presented the results pertaining to the node size of 400, although qualitatively the same results are obtained for other sizes. (Some important results for
other sizes are presented in the Supporting Material.) We characterize the PBN and PScN in terms of their degree distribution and compare them with the three random models. Next, we examine the
percolation behavior at the simple bond-connection level and then at the clique-connection level.
Degree distribution
It is noteworthy that the degree distribution of PBN and PScN follow approximately the same behavior as that of the RM1 model at different levels of connections (see Fig. S1). The degree distribution
plots of PScN fit best to the Poisson distribution (see Fig. S2) and this rules out scale-free behavior in protein structure networks. They do differ slightly from RM1 model. For example, the Poisson
fitting parameters are different for RM1 and PScN (see Table S1). Additionally, the number of orphan nodes, which are not connected to any other node in the network, is higher in protein structure
network than RM1 (see Fig. S1). The RM2 and RM3 models as expected exhibit the degree distribution behavior closer to the protein case, with increased number of orphan nodes compared to RM1. Thus,
there is an element of randomness in the noncovalent interactions within proteins. However, a larger number of orphan nodes in the protein case imply more connections in the connected regions, as the
total number of nodes and edges are comparable for the protein and the random graphs. Although this effect does not cause any drastic change at the degree distribution level, the effect of this can
be seen in clique percolation, as discussed in a later section.
Bond percolation
In this study, we characterize the percolation properties of proteins based on our reference random networks. We compare the sizes of the largest clusters in protein structure networks to those of
the reference networks as a function of probability of edge formation.
As mentioned earlier, the key factor is the number of edges that a protein can make, depending on the definition of contact. There is an inherent limitation to connections in proteins, due to factors
like excluded volume, the nodes being connected as a polymer chain, and the geometry adopted by proteins. We adhere to the number of connections in protein graphs while constructing the random
graphs. (However, the number of connections is expressed as the probability of connection as given for 400 node graphs in Table S2.) The only freedom we exercise is to distribute the nodes and the
edges randomly or in a constrained manner as described in the Methods section.
Bond percolation behavior is examined by plotting the size of the largest cluster as a function of the probability of connection. In the PBN, the size of the largest cluster reaches a maximum (size
of the number of nodes) at a probability of connection being 0.006 (corresponding to Cα distance cutoff of 5 Å) as shown in Fig. 3 b. Even at the minimum possible probability of connection (Cα
distance cutoff of 4.5 Å), the size of the largest cluster is very close to that of the maximum. This implies that the percolation at the backbone level is almost complete at the minimum realistic
probability of connection. Strikingly, the size of the largest cluster is obtained at around the same probability of connection in RM2 and RM3, indicating that the backbone connections in a random
model obeying the constraints of protein topology and excluded volume exhibits the features of the protein graph. The size of the largest cluster in RM1, however, reaches the maximum at an increased
probability of connection of 0.02 and the percolation transition also starts at a higher probability of connection than that of the protein and in the random models RM2 and RM3. The side-chain graph
(PScN) on the other hand can take up much less number of connection, Here the maximum size of the largest cluster is slightly smaller than that of the node size, due to the existence of orphan nodes
at all levels of probability (I[min]) (Fig. 3 a). This is achieved around a probability of 0.01 (I[min] = 1%) and the bond percolation transition takes place around the probability of 0.005 (I[min] ~
4%). As expected, the behavior of the constrained random models RM2 and RM3 is very close to that of the protein. The onset of percolation transition and the attainment of the largest cluster on the
other hand are shifted to higher probabilities connections in RM1. Thus, the proteins behave random-like in their bond percolation feature, which is quite evident by almost identical behavior of
random models constrained to protein geometry.
Largest cluster profile (averaged over 10 realizations for size 400 nodes) of (a) PScN and corresponding random models: RM1, RM2, and RM3, (b) PBN and corresponding random models: RM1, RM2, and RM3.
In the side-chain profile, both PScN and RM1 show transition ...
Clique percolation
In recent years, clique percolation transition is being used to uniquely identify local structural units of the real-world networks where more densely connected regions are considered to be essential
in making predictions about yet unknown functions of proteins (28). Here too, such a percolation study serves to pinpoint the denser connectivity of the largest cluster of the protein structure
network. We observe the behavior of the largest community of k-clique as a function of probability where k = 3 (Although we obtain cliques of larger sizes, a large percolating community is obtained
only for k = 3 in proteins. Therefore, all the clique percolation studies are carried out at k = 3 and the largest community is defined only for this case). In the backbone profile (Fig. 4 b), the
probability range captures complete clique percolation transition of PBN and partial transition for RM1. Obviously, an uncorrelated random network requires more number of edges to attain a saturated
community, which falls out of the backbone probability range. The RM2 and RM3 models with the protein geometry and topology constraints move closer to the PBN than the RM1 model as anticipated. The
side-chain profile (Fig. 4 a), however, quite strikingly distinguishes PScN from all other reference networks. At a probability of 0.01 (I[min] ~ 1%), the largest community for PScN shows beginning
of percolation transition with a steep increase in the community size. (It is to be noted that the community size in PScN will not reach the maximum of node size as in the case of PBN, even at the
maximum possible probability of side-chain connections in proteins.) In contrast, the RM1 and even the constrained models RM2 and RM3 do not start percolating at all even at the maximum possible
connection (atom-atom connection) level. An increase in the constraint by significantly decreasing outer topological boundary of nodes (from 8.5 Å to 6.5 Å, which effectively reduces the random
selection of edges) also does not result in the onset of clique percolation. The result discussed here for the 400 node size is a general phenomenon common to proteins of all sizes. Relevant results
for 200 and 600 node sizes are presented in Fig. S5.
Clique percolation profile (averaged over 10 realizations for size 400 nodes) of (a) PScN, corresponding RM1 and constrained random networks, and (b) PBN, corresponding RM1 and constrained random
networks. Number of nodes in the largest community is plotted ...
The decoy structures simulated from the native structures have been generally associated with the molten globule state (18,19). We have examined the side-chain percolating communities in a set of 10
decoys for each of the 10 proteins (see Table S6). We observe that they have features common to those of native structures and they differ mainly by their reduction in the secondary structural
content. The relevance of this result is discussed in the Discussion.
Clique percolation in proteins of different folds
The fact that amino acid sequence dictates the structure of proteins is well accepted in molecular and structural biology. The structures of >50,000 proteins have been resolved (20) and it has been
possible to model the structures of new sequences using the available structures as templates (29,30). The success rate of modeling is high when there is high sequence similarity (>30%) with proteins
of known structure. There are many structures (folds), however, that are taken up by a large number of sequences with a similarity as low as one can get by chance. The conventional methods of
modeling fail in such a situation because there is no unifying principle. From this study, we believe that the possibility of a percolating clique can be a common phenomenon to stabilize a given fold
adopted by diverse sequences. Hence, in this section, we have elucidated the details of the percolating cliques, which stabilize all-α, all-β, and one of the widely adopted α/β folds (TIM barrel is
adopted by a large number of protein sequences with low similarity). This observation also provides a rationale for the fact that a vast range of amino acid sequences take up a highly limited number
of folds.
The α/β barrel fold, or known more commonly as TIM barrel fold, first discovered in the structure of the protein triose phosphate isomerase, is one of the most ubiquitous folds in nature and has been
extensively studied for the understanding it provides of protein structure, function and folding (31–35). We observe two or more large percolating cliques (at I[min] = 3%) for the proteins of the TIM
fold (Fig. 5 and Fig. S4). These communities become further connected when the probability of connection increases at the maximum possible side-chain connection (I[min] = 1%) (see Table S5). The
resulting giant community spans over the whole protein connecting several secondary structural elements. We notice the diversity of residues taking part in the clique formation in different proteins
of the same TIM barrel fold. Consequently, the overall size of the community is similar in each of the TIM barrel proteins though it differs significantly in its residue arrangements. Furthermore,
the location of the percolating cliques in different proteins is different with respect to the overall geometry. Thus, the only feature common in all the TIM barrel folds is the occurrence of
percolating side-chain cliques that stitch different secondary and super-secondary structures.
Clique percolation in TIM barrel protein dihydropteroate synthase (PDB ID = 1AJ0) (left), helix bundle protein cobalamin adenosyltransferase (PDB ID = 1NOG) (center), and lectin protein manganese
concanavalin A (PDB ID = 1DQ6) (right) at I[min] = 3%. The ...
The helix bundle fold consists of several parallel or anti-parallel α helices. In our study, we notice, unlike TIM barrel fold, five or six small percolating cliques (at I[min] = 3%) in all the
proteins of helix bundle fold. With the increase in the probability of connection (at I[min] = 1%), these small communities get connected to each other resulting in a giant community (Fig. 5 and
Table S5). In accordance with the results for TIM barrel fold, the giant community in helix bundle proteins spans over the whole structure linking the secondary structural elements.
The third fold we have studied is lectin, a well-known example of all-β fold. The communities observed in lectins (at I[min] = 3%) have varying sizes. We observe two or three large communities and
several small communities (Fig. 5). As in the case of other two folds mentioned above, these communities connect each other to give rise to a giant community at the maximum possible side-chain
connection (I[min] = 1%), which in turn, spans over the whole protein stitching the secondary structural elements.
In all the three folds, we observe diversity in residue type and arrangement involved in the formation of percolating cliques (see Table S5). The diversity in sequences is reflected in the
composition and the architecture of these percolating cliques, thus accounting for the same fold adopted by dissimilar sequences and providing a rationale for limited conformational space.
The natural tendency of the polypeptide chain for the formation of secondary structures and their optimal packing limits the number of protein folds (5). On the other hand, the amino acid sequence in
a protein uniquely determines the structure and hence the side chains and the order of their appearance in the chain ought to play a crucial role in selecting the unique structure. In other words,
the folded structure of proteins is a result of the combination of certain statistically probable events and some selective events. In this study, we have addressed this issue by comparing the
protein structure networks (made by the noncovalent connection both at the backbone level (PBN) and at the level including the details of the side chain (PScN)) with random models with and without
realistic constraints such as protein topology and excluded volume. A simple bond percolation and an intricate connection of clique percolation are studied. The bond percolation at all levels of
protein structure network resembles that of random networks. The clique percolation at the backbone level also resembles those of random models. On the other hand, only the protein side-chain network
at the high level of connections (low I[min]) is capable of clique percolation and none of the random models (including the one very similar to that of proteins) exhibited clique percolation. In
general, clique percolation can take place in any system, given a large number of connections (17). The special feature of proteins is the existence of a percolating clique with a limited number of
realistically possible connections, specifically atom-atom contact of noncovalently interacting side chains.
Optimal packing of secondary structures is also required for the uniqueness of proteins and it has been argued (36) that the polypeptide backbone inherently posses this feature. The percolating
cliques of side chains, in addition to the packed secondary structures due to the backbone, confer uniqueness to the protein structure. An important issue with this regard is the manner in which
molten globule structures differ from those of the native structures (37–42). The loss of secondary structures and a slight increase in the radius of gyration are considered to be the properties of
molten globules. Computationally, decoy structures generated from the native structures have been considered to be equivalent of molten globule state. In this study, we have considered 10 decoy
structures (18) of each of 10 different proteins (see Table S6) and compared the size of the largest community with those of the native structures. In most cases there is not much significant
difference in terms of the size and there are substantial overlaps between the residues in the largest community of the decoys and their native states. (This may be due to the fact that the decoy
structures are still in the conformational space close to that of the native.) However, the percentage of secondary structures in the decoys has reduced significantly. Thus, it is clear that the
uniqueness of the native states is due to both the optimal packing of secondary structures and their intactness preserved by a percolating community made up of the interactions of side chains.
Correlating the structure of proteins to their functions is an important goal of structural biologists. Experimentally, this aspect is probed by obtaining different complex structures of a given
protein from x-ray crystallography and the dynamical structures are captured by NMR spectroscopy. Computationally, molecular dynamics simulations provide information by spanning the equilibrium
conformational space. Because it is computationally expensive to carry out long time simulations, normal mode analysis (43–48) and elastic network models (ENM) (49–51) have been developed to extract
meaningful dynamical modes from the static x-ray structures. ENM uses simplified potentials in which the C[α] atom represents the residue, making the investigations of large system computationally
accessible. ENM, which considers both the sequential and the special neighbors of a chosen residue in the polypeptide chain in their formalism, has done exceedingly well in characterizing complicated
systems (52–57) due to the simplicity of the potential it uses. From this study, it seems that there is an important role played by the collective interaction of side-chain atoms. Further analyses
would need to investigate whether the incorporation of an additional term in the ENM potential to represent the collective interactions of side chains in a simplified manner would further push ENM
toward enhancing the accuracy of the model. Similarly, the concept of side-chain clique percolation can be incorporated in protein structure prediction methods to see if it improves the accuracy and/
or the efficiency of the prediction.
In summary, it seems that the uniqueness of the protein structure is brought out by extremely specific side-chain interactions, along with well packed secondary structures. Our results are consistent
with the sequence based statistical coupling analysis on evolutionary data on proteins (58,59). The nonbonded connections between side-chain atoms pervade the protein structure and stitch the
secondary and super-secondary structures, stabilizing the fold taken up by the packing of the polypeptide chain. We have shown this feature in proteins belonging to three different folds. Thus, the
key to the unique structure is indeed in the amino acid sequence, whereas the polypeptide backbone has given myriad structures to choose from. Although the protein sequence has the information to the
protein fold in the form of percolating cliques of side-chain interactions, many sequences can hold the key to the same fold as shown in the case of diverse sequences belonging to the ubiquitous TIM
barrel fold. Specifically, different combinations of the amino acid type and its position in the sequence, which can interact at the atomic level in a correlated fashion, are likely to stabilize the
unique structure. This also provides a rationale for the fact that a vast range of amino acid sequences take up a highly limited number of folds.
This work was supported by the National Science Foundation (DMR 06-44022 CAR) and Mathematical Biology project (DSTO773) funded by the Department of Science and Technology, India, for computational
Supporting Material
Document S1. Methods, five figures, and six tables:
Anfinsen C.B. Principles that govern the folding of protein chains. Science. 1973;181:223–230. [PubMed]
Watson J.D., Crick F.H. Molecular structure of nucleic acids; a structure for deoxyribose nucleic acid. Nature. 1953;171:737–738. [PubMed]
Crick F.H., Barnett L., Brenner S., Watts-Tobin R.J. General nature of the genetic code for proteins. Nature. 1961;192:1227–1232. [PubMed]
Khorana H.G., Buchi H., Ghosh H., Gupta N., Jacob T.M. Polynucleotide synthesis and the genetic code. Cold Spring Harb. Symp. Quant. Biol. 1966;31:39–49. [PubMed]
Trinh X.H., Trovato A., Seno F., Banavar J.R., Maritan A. Geometrical model for the native-state folds of proteins. Biophys. Chem. 2005;115:289–294. [PubMed]
Miyashita O., Wolynes P.G., Onuchic J.N. Simple energy landscape model for the kinetics of functional transitions in proteins. J. Phys. Chem. B. 2005;109:1959–1969. [PubMed]
Ramachandran G.N., Ramakrishnan C., Sasisekharan V. Stereochemistry of polypeptide chain configurations. J. Mol. Biol. 1963;7:95–99. [PubMed]
Przytycka T., Aurora R., Rose G.D. A protein taxonomy based on secondary structure. Nat. Struct. Biol. 1999;6:672–682. [PubMed]
Chothia C. Proteins. One thousand families for the molecular biologist. Nature. 1992;357:543–544. [PubMed]
Denton M., Marshall C. Protein folds: laws of form revisited. Nature. 2001;410:417. [PubMed]
Hoang T.X., Trovato A., Seno F., Banavar J.R., Maritan A. Geometry and symmetry presculpt the free-energy landscape of proteins. Proc. Natl. Acad. Sci. USA. 2004;101:7960–7964. [PMC free article] [
Kimura M. Evolutionary rate at the molecular level. Nature. 1968;217:624–626. [PubMed]
King J.L., Jukes T.H. Non-Darwinian evolution. Science. 1969;164:788–798. [PubMed]
Greene L.H., Higman V.A. Uncovering network systems within protein structures. J. Mol. Biol. 2003;334:781–791. [PubMed]
Brinda K.V., Vishveshwara S. A network representation of protein structures: implications for protein stability. Biophys. J. 2005;89:4159–4170. [PMC free article] [PubMed]
16. Stauffer D. Taylor and Francis; London, UK: 1985. Introduction to Percolation Theory.
Derényi I., Palla G., Vicsek T. Clique percolation in random networks. Phys. Rev. Lett. 2005;94:1–4. [PubMed]
Samudrala R., Levitt M. Decoys ‘R’ Us: a database of incorrect protein conformations to improve protein structure prediction. Protein Sci. 2000;9:1399–1401. [PMC free article] [PubMed]
Yang J.S., Chen W.W., Skolnick J., Shakhnovich E.I. All-atom ab initio folding of a diverse set of proteins. Structure. 2007;15:53–63. [PubMed]
Berman H.M., Westbrook J., Feng Z., Gilliland G., Bhat T.N. The Protein Data Bank. Nucleic Acids Res. 2000;28:235–242. [PMC free article] [PubMed]
21. Albert R., Barabasi A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002;74:47–97.
Amaral L.A., Scala A., Barthelemy M., Stanley H.E. Classes of small-world networks. Proc. Natl. Acad. Sci. USA. 2000;97:11149–11152. [PMC free article] [PubMed]
Kannan N., Vishveshwara S. Identification of side-chain clusters in protein structures by a graph spectral method. J. Mol. Biol. 1999;292:441–464. [PubMed]
Patra S.M., Vishveshwara S. Backbone cluster identification in proteins by a graph theoretical method. Biophys. Chem. 2000;84:13–25. [PubMed]
Sistla R.K., Brinda K.V., Vishveshwara S. Identification of domains and domain interface residues in multidomain proteins from graph spectral method. Proteins. 2005;59:616–626. [PubMed]
26. Miyazawa S., Jernigan R.L. Estimation of effective interresidue contact energies from protein crystal structures: quasi-chemical approximation. Macromolecules. 1985;18:534–552.
Miyazawa S., Jernigan R.L. Residue-residue potentials with a favorable contact pair term and an unfavorable high packing density term, for simulation and threading. J. Mol. Biol. 1996;256:623–644. [
Adamcsek B., Palla G., Frakas I., Derényi I., Vicsek T. CFinder: locating cliques and overlapping modules in biological networks. Bioinformatics. 2006;22:1021–1023. [PubMed]
Sali A., Blundell T.L. Comparative protein modelling by satisfaction of spatial restraints. J. Mol. Biol. 1993;234:779–815. [PubMed]
Rohl C.A., Baker D. De novo determination of protein backbone structure from residual dipolar couplings using Rosetta. J. Am. Chem. Soc. 2002;124:2723–2729. [PubMed]
Kannan N., Selvaraj S., Gromiha M.M., Vishveshwara S. Clusters in alpha/beta barrel proteins: implications for protein structure, function, and folding: a graph theoretical approach. Proteins. 2001;
43:103–112. [PubMed]
Lesk A.M., Branden C.I., Chothia C. Structural principles of alpha/beta barrel proteins: the packing of the interior of the sheet. Proteins. 1989;5:139–148. [PubMed]
Wodak S.J., Lasters I., Pio F., Claessens M. Basic design features of the parallel alpha beta barrel, a ubiquitous protein-folding motif. Biochem. Soc. Symp. 1990;57:99–121. [PubMed]
Murzin A.G., Lesk A.M., Chothia C. Principles determining the structure of beta-sheet barrels in proteins. II. The observed structures. J. Mol. Biol. 1994;236:1382–1400. [PubMed]
Bharat T.A., Eisenbeis S., Zeth K., Hocker B. A beta alpha-barrel built by the combination of fragments from different folds. Proc. Natl. Acad. Sci. USA. 2008;105:9942–9947. [PMC free article] [
Banavar J.R., Maritan A. Physics of proteins. Annu. Rev. Biophys. Biomol. Struct. 2007;36:261–280. [PubMed]
Kuwajima K., Yamaya H., Miwa S., Sugai S., Nagamura T. Rapid formation of secondary structure framework in protein folding studied by stopped-flow circular dichroism. FEBS Lett. 1987;221:115–118. [
Goto Y., Fink A.L. Phase diagram for acidic conformational states of apomyoglobin. J. Mol. Biol. 1990;214:803–805. [PubMed]
Chyan C.L., Wormald C., Dobson C.M., Evans P.A., Baum J. Structure and stability of the molten globule state of guinea-pig alpha-lactalbumin: a hydrogen exchange study. Biochemistry. 1993;32
:5681–5691. [PubMed]
40. Luthey-Schulten Z., Ramirez B.E., Wolynes P.G. Helix-coil, liquid crystal and spin glass transitions of a collapsed heteropolymer. J. Phys. Chem. 1995;99:2177–2185.
Levitt M., Gerstein M., Huang E., Subbiah S., Tsai J. Protein folding: the endgame. Annu. Rev. Biochem. 1997;66:549–579. [PubMed]
Ferreiro D.U., Hegler J.A., Komives E.A., Wolynes P.G. Localizing frustration in native proteins and protein assemblies. Proc. Natl. Acad. Sci. USA. 2007;104:19819–19824. [PMC free article] [PubMed]
43. Goldstein H. Addison-Wesley; Reading, MA: 1950. Classical Mechanics.
Go N., Noguti T., Nishikawa T. Dynamics of a small globular protein in terms of low-frequency vibrational modes. Proc. Natl. Acad. Sci. USA. 1983;80:3696–3700. [PMC free article] [PubMed]
Brooks B., Karplus M. Normal modes for specific motions of macromolecules: application to the hinge-bending mode of lysozyme. Proc. Natl. Acad. Sci. USA. 1985;82:4995–4999. [PMC free article] [PubMed
Levitt M., Sander C., Stern P.S. Protein normal-mode dynamics: trypsin inhibitor, crambin, ribonuclease and lysozyme. J. Mol. Biol. 1985;181:423–447. [PubMed]
Ma J. Usefulness and limitations of normal mode analysis in modeling dynamics of biomolecular complexes. Structure. 2005;13:373–380. [PubMed]
Van Wynsberghe A.W., Cui Q. Interpreting correlated motions using normal mode analysis. Structure. 2006;14:1647–1653. [PubMed]
Tirion M.M. Large amplitude elastic motions in proteins from a single-parameter, atomic analysis. Phys. Rev. Lett. 1996;77:1905–1908. [PubMed]
Bahar I., Atilgan A.R., Erman B. Direct evaluation of thermal fluctuations in proteins using a single-parameter harmonic potential. Fold. Des. 1997;2:173–181. [PubMed]
Yang L.-W., Chng C.-P. Coarse-grained models reveal functional dynamics - I. Elastic network models—theories, comparisons and perspectives. Bioinform. Bio. Insights. 2008;2:25–45. [PMC free article]
Hinsen K. Analysis of domain motions by approximate normal mode calculations. Proteins. 1998;33:417–429. [PubMed]
Bahar I., Erman B., Jernigan R.L., Atilgan A.R., Covell D.G. Collective motions in HIV-1 reverse transcriptase: examination of flexibility and enzyme function. J. Mol. Biol. 1999;285:1023–1037. [
Keskin O., Bahar I., Flatow D., Covell D.G., Jernigan R.L. Molecular mechanisms of chaperonin GroEL-GroES function. Biochemistry. 2002;41:491–501. [PubMed]
Xu C., Tobi D., Bahar I. Allosteric changes in protein structure computed by a simple mechanical model: hemoglobin T<-->R2 transition. J. Mol. Biol. 2003;333:153–168. [PubMed]
Cui Q., Li G., Ma J., Karplus M. A normal mode analysis of structural plasticity in the biomolecular motor F(1)-ATPase. J. Mol. Biol. 2004;340:345–372. [PubMed]
Wang Y., Rader A.J., Bahar I., Jernigan R.L. Global ribosome motions revealed with elastic network model. J. Struct. Biol. 2004;147:302–314. [PubMed]
Lockless S.W., Ranganathan R. Evolutionarily conserved pathways of energetic connectivity in protein families. Science. 1999;286:295–299. [PubMed]
Dima R.I., Thirumalai D. Determination of network of residues that regulate allostery in protein families using sequence analysis. Protein Sci. 2006;15:258–268. [PMC free article] [PubMed]
Articles from Biophysical Journal are provided here courtesy of The Biophysical Society
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2749797/?tool=pubmed","timestamp":"2014-04-21T12:35:06Z","content_type":null,"content_length":"118518","record_id":"<urn:uuid:f659d21c-4879-4e6b-9e82-44f8155661f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mehryar Mohri - Foundations of Machine Learning -- Errata
Foundations of Machine Learning as well as their corresponding corrections. We are grateful to all readers who kindly bring those to our attention.
• Page 34, Definition 3.1: it should be noted that "Z" is an arbitrary input space.
• Page 36, first inline equation: all instances of "$g \in H$" (which the supremum is taken over) should be "$g \in G$".
• Page 43, proof of Theorem 3.4: The definition of $I_2$ should have the condition $\beta_i \leq 0$ (instead of $\beta_i < 0$).
• Page 44, Example 3.3: "figure 3.2(a)" should read "figure 3.3(a)" and "figure 3.2(b)" should read "figure 3.3(b)".
• Page 46, proof of Theorem 3.5, second paragraph: "$H$ includes by restriction to $S'$" should read "$H$ induces by restriction to $S'$".
• Page 49, proof of Theorem 3.6: The line above eq. (3.34) should read "with very high probability $(1 - 8 \epsilon)$" (instead of $(1 - \epsilon)$).
• Page 50, first second to last paragraph: "...PAC-learning in the non-realizable case is not possible..." should read "..PAC-learning in the realizable case is not possible...".
• Page 57, Exercise 3.12: The hint should read "show that $\{2^{-i} : i < m\}$ can be fully shattered for any $m \in \mathbb{N}$"
• Page 66, first paragraph: "$\nabla_w (F)" should instead be written "$\nabla F(w)$" and "its Hessian the" should read "its Hessian is the"
• Page 78, Lemma 4.2: The word "function" should appear at the end of the first sentence.
• Page 79, proof of Lemma 4.2, third inline equation: in the first equality, parentheses are missing around the two terms that are preceded by the factor 1/2.
• Page 80, proof of Theorem 4.4: there is no need to resort to $\Phi_\rho - 1$, the proof holds directly with $\Phi_\rho$.
• Page 94, second inline equation: the "$\sigma^n" in the denominator of the expression should instead be "$\sigma^{2n}$".
• Page 95, first line of the proof: $\Phi(x) \colon {\cal X} \to \Rset$ should read $\Phi(x) \colon \cX \to \Rset^{\cal X}$.
• Page 95, proof of Theorem 5.2, following third inline equation: $g$ should be defined in terms of $x_j'$ (instead of $x_j$).
• Page 99, second line to last, "$(x_1, x_1', x_2, x_2') \mapsto K'(y_1, y_2)" should read "$(x_1, x_1', x_2, x_2') \mapsto K'(x_1', x_2')".
• Page 100, last two sentences in proof of Theorem 5.3: "$K_n$" should instead be "$K^n$" (in two places).
• Page 102, last line of Theorem 5.5: "$w \cdot \Phi(x)$" should be "$\langle w, \Phi(x) \rangle$" (for consistency of notation).
• Page 121, definition 6.1: "$\epsilon > 0$ and" should be removed.
• Page 121, definition 6.1: "$poly(1/\epsilon, 1/\delta, n, size(c)$" should instead read "$poly(1/\delta, n, size(c))$".
• Page 121, definition 6.1: it should be added that "$O(n)$ is a bound on the cost of representing $x \in \cal X$".
• Page 125, last sentence of proof of Theorem 6.1: "follows from the identity" should read "follows from the inequality".
• Page 135, Theorem 6.3: Instead of "$a_t > 0$" it should read "$\alpha_t > 0$".
• Page 167, pseudocode of Dual Perceptron algorithm: $\alpha_{t + 1}$ should be replaced by $\alpha_t$.
• Page 168, pseudocode of Kernel Perceptron algorithm: $\alpha_{t + 1}$ should be replaced by $\alpha_t$.
• Page 181, exercise 7.10, second paragraph: the definition of $m_i$ in the first sentence of that paragraph is given in the special case of the zero-one loss. For the general case, the sentence
should be replaced by: "Let $m_i$ be the cumulative loss of hypothesis $h_i$ on the points $(x_i, \ldots, x_T)$, that is $m_i = \sum_{t = i}^T L(h_i(x_t), y_t)$".
• Page 181, exercise 7.10, in the text following the inline equation: $i^* = argmin_i m_i / (T - i)$ should be replaced by $i^* = argmin_i m_i / (T - i + 1)$ .
• page 189, third paragraph: $W=(w_1^\top, \ldots, w_k^\top)^\top$ should read $W=(w_1, \ldots, w_k)^\top$.
• Page 190, line 5: the empirical Rademacher complexity symbol should be replaced by that of Rademacher complexity.
• Page 191, equation 8.12: the factor 4k^2 should be 2k^2 instead.
• Page 191, optimization problem: the constraints $\xi_i \geq 0$ should be added.
• Page 192, section 8.3.2 first paragraph: "exercise 9.5" should read "exercise 8.4".
• Page 207, exercise 8.4: "family of base hypothesis" should read "family of base hypotheses".
• Page 283, inline after (12.2): $U^\top X X^\top U$ should read $Tr[U^\top X X^\top U]$.
• Page 354, paragraph preceding Definition B.7: all "$x$" should be bolded.
• Page 357, equation (B.13): "$g(\bar x_i)$" should instead be "$g_i(\bar x)$".
• Page 357, proof of Theorem B.8: "$\bar x$" should be bolded throughout.
• Page 357, proof of Theorem B.8, inline equation: the second and fourth inequalities should be equalities. Also, the final inequality holds due to the "(third condition and g(x) \leq 0)".
• Page 369, first line of section D.1: extra space before the comma should be removed.
• Page 370, line 5: $\phi'(t) \leq \frac{(b-a)^2}{4}$ should read $\phi''(t) \leq\frac{(b-a)^2}{4}$.
• Page 371, last line of lemma D.2: $\E[e^{sV} | Z ]$ should read $\E[e^{tV} | Z ]$.
|
{"url":"http://www.cs.nyu.edu/~mohri/mlbook/errata_p2.html","timestamp":"2014-04-18T10:36:43Z","content_type":null,"content_length":"6072","record_id":"<urn:uuid:7345c56c-9e22-4e13-ae23-1d3331aaf938>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Journal of the Brazilian Society of Mechanical Sciences and Engineering
Services on Demand
Related links
Print version ISSN 1678-5878
J. Braz. Soc. Mech. Sci. & Eng. vol.33 no.1 Rio de Janeiro Jan./Mar. 2011
TECHNICAL PAPERS
DYNAMICS VIBRATION AND ACOUSTICS
A review on extension of Lagrangian-Hamiltonian mechanics
Vikas Rastogi^I; Amalendu Mukherjee^II; Anirban Dasgupta^III
^Irastogivikas@yahoo.com, Sant Longowal Institute of Engineering and Technology, Mechanical Engineering Department 148106 Longowal, Punjab, India
^IIIanir@iitkgp.ernet.in, Indian Institute of Technology, Mechanical Engineering Department 721302 Kharagpur, W. Bengal, India
This paper presents a brief review on Lagrangian-Hamiltonian Mechanics and deals with the several developments and extensions in this area, which have been based upon the principle of D'Alambert or
the other. It is not the intention of the authors to attempt to provide a detailed coverage of all the extensions of Lagrangian-Hamiltonian Mechanics, whereas detailed consideration is given to the
extension of Noether's theorem for nonconservative systems only. The paper incorporates a candid commentary on various extensions including extension of Noether's theorem through differential
variational principle. The paper further deals with an extended Lagrangian formulation for general class of dynamical systems with dissipative, non-potential fields with an aim to obtain invariants
of motion for such systems. This new extension is based on a new concept of umbra-time, which leads to a peculiar form of equations termed as 'umbra-Lagrange's equation'. This equation leads to a
simple and new fundamental view on Lagrangian Mechanics and is applied to investigate the dynamics of asymmetric and continuous systems. This will provide help to understand physical interpretations
of various extensions of Lagrangian-Hamiltonian Mechanics.
Keywords: Lagrangian-Hamiltonian Mechanics, Umbra-Lagrangian, Noether's theorem
From the late seventeenth century to the nineteenth century classical mechanics (Goldstein, 1980; Sudarshan and Mukunda, 1974) was one of the main driving forces in the development of physics,
interacting strongly with developments in mathematics, both by borrowing and lending. In fact, mechanics and indeed all theoretical science is a game of mathematical make-believe. The topics
developed by its main protagonists, Newton, Lagrange, Euler, Hamilton and Jacobi among several others form the basis of classical mechanics.
Since the last few decades, the subject of classical mechanics itself was undergoing a rebirth and expansion with strong developments in mathematics. There has been an explosion of research in the
classical dynamical systems, focused on the discovery of advanced mathematics (e.g. Lie Algebra, differential geometry, etc.) (Sattinger and Weaver, 1986; Bluman and Kumei, 1989; Gilmore, 1974). The
aforementioned occurrences in the second part of the 20^th century have radically changed the nature of the field of classical mechanics. The first development has led to modeling and analysis of
complex, multi-bodied (often elastic bodied) structures, such as satellites, robot manipulators, turbo machinery and vehicles. The second has led to the development of numerical techniques to derive
the describing equations of motion of a dynamical system, integration, simulation and obtaining the response. This new computational capability has encouraged scientists and engineers to model and
numerically analyze complex dynamical systems, which in past either they could not be analyzed, or were analyzed using gross simplifications.
The prospect of using computational techniques to model a dynamical system has also led dynamicists to reconsider existing methods of obtaining equations of motion. The methods of Lagrange (Lagrange,
1788) and Hamilton (Baruh, 1999) are used to carry out the primary task of deriving the equations of motion. Generalized coordinates which do not necessarily have to be physical coordinates are used
as motion variables in these methods. This makes the Lagrangian-Hamiltonian approach more flexible than the Newtonian, as Newtonian approach is implemented using physical coordinates. The use of
Lagrange's formulation of dynamics offers the quickest way of deriving system equations for complex physical systems, and are preferable compared to the Newtonian approach. However, the Lagrangian
approach has certain limitations. The elimination of the constraint forces from the Lagrange's formulation does not allow one to directly calculate these forces. They can, however, be determined
using an indirect approach. Besides this, Lagrange's equation suffers heavily in the presence of time fluctuating parameters, non-potential fields, general dissipation and gyroscopic forces.
Derivation of the Lagrange's equations of motion for nonconservative and dissipative system (Rosenberg, 1977; Meirovitch, 1970; Whitaker, 1959) is essentially patchwork. This hinders the analysis of
such systems, which the Lagrangian can afford. Nevertheless, the greatest advantage of Lagrangian formulation is that it brings out the connection between conservation laws and important symmetry
properties of dynamical systems. Knowledge of conservation laws is of great importance in the analysis of dynamical systems as they lead to a complete integrability of dynamical system. The
fundamental symmetries motivated the study of conservation laws from geometrical and group-theoretical point of view. The theorem of Emmy Noether (Noether, 1918) is one of the most fundamental
justifications for conservation laws. Her theorem tells us that conservation laws follow from the symmetry property of nature. From the literature (Goldstein, 1980), (Sudarshan and Mukunda, 1974), it
is found that translational symmetry implies momentum conservation, time translational symmetry implies energy conservation and rotational symmetry implies conservation of angular momentum. There
exists a fundamental theorem called Noether's theorem (Noether, 1918 ), which shows that indeed, for every spatial continuous symmetry of a system, which can be described by a Lagrangian, some
physical quantity is conserved and the theorem also allows us to find that quantity.
The objective of the paper is to present the developments in the field of Lagrangian-Hamiltonian Mechanics with particular regard to extension of Noether's theorem. In recent years, the authors have
attempted to develop alternative method to the construction of the first integrals of dynamical systems by means of extended Noether's theorem. Typical contributions in this area are given in
reference (Rastogi, 2005; Mukherjee et al., 2006; Mukherjee et al., 2007; Mukherjee et al., 2009). Investigating such alternatives has been applied to analyze the dynamics through invariance of the
action integral for some engineering applications, which are rarely applied. Moreover, such research will open new horizons for the physics students, who are conversant with this theorem and its
applications. A brief review on such major extensions has been presented in this paper through variational principle and group-theoretical approach or the other. This paper is divided into different
sections, each dealing with the various aspects of the subjects. It begins with a summary of the evolution of classical Lagrangian-Hamiltonian Mechanics followed by a general overview of extension
through variational or group theory. The fourth section of this work presents the alternative method of extension of Lagrangian-Hamiltonian Mechanics through umbra time. Few examples have been
provided to elucidate the concept in brief.
F(t = external force with time fluctuation
= umbra-Hamiltonian of the system
K = stiffness of the spring in N/m
= Lagrangian of the system
= umbra-Lagrangian of the system
R = damping coefficient of the damper in N-s/m
= infinitesimal generator of rotational SO (2) group
= real time component of
= umbra time component of
= real-time potentials for resistive elements
= total-umbra potential
= umbra-potential for compliance elements
= umbra-potential for external forces
= umbra-potential for resistive elements
= umbra-kinetic energy
= umbra-co-kinetic energy
= generalized force
= generalized velocity
= mass of the body in Kg= real-time momentum
= umbra-time momentum
= generalized displacement in real time
= generalized displacement in umbra-time
= generalized velocity in real time
= generalized velocity in umbra-time
= linear displacements in real time or umbra-time, where
= linear velocity in real time or umbra-time, where
t = real-time in s
= umbra-time in s
Evolution of Classical Lagrangian-Hamiltonian Mechanics
The major contribution in classical mechanics came from Lagrange (1788). The contributions of Lagrange put the field of analytical mechanics into a structured form now known as Lagrangian mechanics.
In the original derivation, Lagrange's equations were written for conservative systems only, and applicable when the system is closed, constraints are integrable, and there are no gyroscopic forces.
Hamilton (Baruh, 1999; Gantmachar, 1970; Calkin, 2000) has developed the most general principle of least action and showed that the Lagrangian with time integration provided the definition of action
and minimization of this action integral established the Lagrange's generalized equation. The main advantage of this new formulation is that it holds for any system subject to constraints and
independently of the co-ordinates, which are chosen to represent the motion. However, the problem of dissipation was handled by Rayleigh (Gantmachar, 1970; Jose and Saletan, 1998), who attempted to
enlarge the scope of Lagrange's equation to incorporate dissipative forces in this generalized equation. He added velocity dependent potential through virtual work done by the dissipative elements
and then re-encapsulated in an extended formula. In this formulation, the velocity's dependent potential should not be brought inside the scope of total derivative with respect to time, otherwise an
unrealistic momentum and inertia would enter in the equation. This is the reason why the velocity's dependent Rayleigh potential fails in the case of gyroscopic forces.
The next problem is to deal with Nonholonomic systems in classical mechanics as to determine the equation of motion for constrained systems. When physical constraints are imposed on an unconstrained
set of particles, forces of constraints are engendered, which ensure the satisfaction of the constraints. The equation of motion developed for such constrained systems is based on the principle of
D'Alambert, and later elaborated by Lagrange (1788) through Lagrange multipliers. Since its initial formulation by Lagrange, the problem of constrained motion has been vigorously and continuously
worked on by various scientist including Volterra, Boltzmann, Hamel, Whittaker and Synge to name a few. Gauss (1829) explained a new general principle for the motion of constrained mechanical systems
referred to as Gauss's principle, by making use of acceleration. Gibbs (1879) and Appell (1899) independently discovered a new equation, which is known as the Gibbs-Appell equations of motion
(Appell, 1899). Pars (1979) also referred to the Gibbs-Appell equations as the most comprehensive equations of motion so far discovered. Routh Gantmachar (1970) proposed the equations of motion in a
potential field taking a part of the Lagrangian variables and a part of the Hamiltonian variables, called as Routh variables. Lie (Hassani, 1999; Olver, 1986) introduced the group theory for
canonical transformations by considering infinitesimal transformations.
Extensions of Lagrangian-Hamiltonian Mechanics through Variational Principle and Group-theoretical Approach
Apparently, the first to notice the connection of conservation laws to invariance properties of dynamical systems was Jacobi (1884), who has derived the conservation law for linear and angular
momentum from the Euclidean invariance of the Lagrangian. Emmy Noether (1918) formulated a theorem to find the invariants of the dynamical system and showed a relationship between symmetry aspects of
conservation laws and invariance properties of space and time, i.e., their homogeneity and isotropy. These fundamental symmetries motivated the study of conservation laws from geometrical and
group-theoretical point of view. Most of the results of conservation laws of classical mechanics based on Noetherian approach could be found in the research papers of Hill (1951), and Desloge and
Karch (1977), where it has been applied as a reliable tool to find new conservation laws of dynamical systems.
The physics associated with the classical conservation laws widely attracted the investigations in this field, intriguing problems of classical mechanics by engineers and theoretical physicists, who
formulated newer types of constant of motion. In their several papers, Vujanovic (1970), Djukic and Vujanovic (1975) and Vujanovic (1978) have investigated this field of analytical mechanics and
developed a new approach to obtain constants of motion. Vujanovic (1970) has established a group-variational procedure for finding first integral of dynamical systems. Djukic and Vujanovic (1975)
have proposed a Noether's theorem for mechanical system with non-conservative forces. Primarily, this theory was based on the idea that the transformations of time and generalized coordinates
together with dissipative forces determine the transformations of generalized velocities. Vujanovic (1978) has reported a method for finding the conserved quantities of non-conservative holonomic
systems based on the differential variational principle of D'Alembert, which was equally valid for both conservative and non-conservative systems. His research work has shown that the existence of
first integrals mainly depends on the existence of solutions of partial differential equations, known as Killing equations (Hassani, 1999; Olver, 1986).
The above procedures, however, do not have generality of the Noether's theorem, as it mainly depends on the particular structure of the special class of problems being attempted. However, our choice
to relate the alternative method of umbra Lagrangian mechanics is motivated by the fact that Noether's theorem, extended by Bahar et al. (1987) tackles both the aspects which are of considerable
importance in the study of conservation laws. On phenomenological level, it shows the connection of conservation laws of some non-conservative system to the symmetries of space and time. On the other
hand, it also possesses a pragmatic value as it could be used in engineering applications.
The significant work in this direction was reported by Bahar and Kwatny (1987), who provided a useful method based on a differential variational principle (Vujanovic, 1978) in order to extend
Noether's theorem to constrained-nonconservative dynamical systems, which includes the influence of dissipation and constraints, and thus making it suitable for use in engineering applications. The
main focus of their research work was primarily concerned with the extension of the notion of variation, which also included variation in time, thus leading to non-contemporaneous variation (NCV).
Here the use of NCV is limited to first order terms, and was denoted by Δ as the convention adopted in Vujanovic (1978), whereas δ is the contemporaneous variation (CV). The symbol δ also defines a
simultaneous or Lagrange's variation. A representative point A that is on the actual path at time t and an infinitesimal point B on the varied path at the same time t are correlated by
where Fig. 1. The following definition was used
The Non-contemporaneous variation of any function
Putting Eq. (2) in (3) and following the procedure as given in reference (Bahar and Kwatny, 1987; Vujanovic, 1978), one may obtain
Equation (4) demonstrates that the usual commutatively rule does not extend to NCV. For the derivation of the Noether's theorem, one may consider the variational expression as L can be replaced by
Thus expression (5) has been defined by authors (Bahar and Kwatny, 1987) as physical motivation and simply considered this equation to undergo two different transformations given as follows:
(a) Non-Contemporaneous Transformation (NCT): The variational expression given in Expression (5) may be written in the form
Expression (6) in non-contemporaneous may be written as
Expression (7) will finally reduce to
The last two terms of expression (8) may also be written in NCV as well.
(b) Contemporaneous Transformation (CT): Following the usual process to rewrite expression (6) as
and following the procedure as in reference (Bahar and Kwatny, 1987; Vujanovic, 1978), one may obtain the following equation:
The first integrals or conserved quantities may be obtained if right hand side of Eq. (10) can be made to vanish, then bracketed quantity under time derivative sign become a constant
Then, one may get
Equation (13) is a conserved quantity. Now considering the linear infinitesimal one parameter transformation as followed in Bahar [29], one may obtain the conservation law
where ξ[i], ô[i] and ρ[i] are the linear infinitesimal one-parameter transformations. This may also be obtained by following usual approach and generalized killing equations, which has been obtained
in reference (Djukic and Vujanovic (1975). Many such examples of the Noether's theorem are contained in Vujanovic and Jones (1989). A variety of methods have been developed for the search of
conservation laws such as methods of integrating factors, also termed as direct or ad hoc procedure as reported by Sarlet and Bahar (1980), and Djukic and Sutela (1984). Other methods were based on
similarity variables (Jones and Ames, 1967) and transformation approach as presented by Crespo da Silva (1974). In this way, some procedures of group-theoretical approach with considerable generality
have been established, which related the existence of first integrals to the symmetries of certain mathematical objects and served for describing the dynamical systems.
Several other studies concerned with the symmetry aspects of Lagrangian and Hamiltonian formalism have been considered in the review papers of Katzin and Levine (1976), and Fokas (1979). A
generalization of Noether Theorem in classical mechanics has been attempted by Sarlet and Cantrijn (1981). Another class of methods, in the spirit of finding invariants of motion for time-dependent
parameters, are primarily established by few researchers such as Lewis and Leach (1982), who have reported an approach of finding exact invariants for one-dimensional time-dependent classical
Hamiltonians, and as Sarlet (1983), who has developed a methodology of finding first integrals for one-dimensional particle motion in a non-linear, time-dependent potential field. Motivated by the
research works of Vujanovic (1970), Vujanovic (1978) and Djukic and Vujanovic (1975), Simic (2002) has analyzed polynomial conservation laws of one-dimensional non-autonomous Lagrangian dynamical
system and demonstrated that final form of dynamical system and corresponding conservation law depends on the solution of the so-called potential equation, which will be presented as
In Eq. (15),
Variational principles (Gelfend and Fomin, 1963) and principle of virtual work continued to attract interest of the researchers and have great importance in physics and mathematics. These principles
helped in establishing connections and applications of these disciplines, and in devising diverse approximation techniques. Arizmendi et al. (2003) developed a variant of the usual Lagrangian, which
describes both the equations of motion and the variational equations of the system. The required Lagrangian is defined in an extended configuration space comprising both the original configurations
of the system and all virtual displacements joining any two integral curves. An extremal principal for obtaining the variational equations of a Lagrangian system is reviewed and formalized by Delgado
et al. (2004) by relating the new Lagrangian function (Arizmendi et al., 2003) needed in such scheme to a prolongation (Hassani, 1999; Olver, 1986) of the original Lagrangian. In their work, they
considered an N-degree of freedom dynamical system described by an autonomous non-singular Lagrangian function a = 1.2...N defined in the tangent bundle TQ of its configuration Manifold Q. Now, an
extended configuration space D (D'Alambert's configuration manifold) was considered, comprising of both the original configuration of the system plus all possible "virtual displacements" joining, in
a first approximation, any two of the extremal paths of the original system. With the help of L, they defined new Lagrangian
where ε is virtual displacement and
Alternative Method for Extending Lagrangian-Hamiltonian Mechanics
As detailed in the previous section, the procedure and methods developed by various researchers did not consider the generality of Noether's theorem, as it was mainly focused on the particular
structure of the special class of dynamical problems being studied. So, it is necessary to extend the scope of Lagrangian and Noether's theorem, which includes the influence of dissipation and
sometimes constraints, thus making it suitable for the larger and complex engineering applications. To overcome the limitations and enlarging the scope of Lagrangian-Hamiltonian mechanics, a new
proposal of additional time like variable 'umbra-time' was made by Mukherjee (1994) and this new concept of umbra-time leads to a peculiar form of equation, which is termed as umbra-Lagrange's
equation. A brief and candid commentary on idea of umbra Lagrangian is given by Brown (2007). This idea was further consolidated by presenting an important issue of invariants of motion for the
general class of system by extending Noether's theorem (Mukherjee, 2001). This notion of umbra-time is again used to propose a new concept of umbra-Hamiltonian, which is used along with the extended
Noether's theorem to provide an insight into the dynamics of systems with symmetries. The advantages of using such Lagrangian are many ways as one may get the both aspects of the problem. It provides
a great insight of the dynamical system through extended Noether's theorem and on the other hand, it gives a pragmatic value since it could be used as a reliable tool for derivation of new
conservation laws for many engineering problems, where the physicist can play a leading role. One of the most important insights gained from the umbra-Lagrangian formalism is that its underlying
variational principle (Rastogi, 2005) is possible, which is based on the recursive minimization of functionals. In this direction, Rastogi (2005) also defined all these notions in an extended
manifold comprising of real time, and umbra and real time displacements and velocities. The umbra Lagrangian theory has been used successfully to study invariants of motion for non-conservative
mechanical and thermo-mechanical systems [48]. In another paper, the authors applied umbra Lagrangian to study dynamics of an electro-mechanical system comprising of an induction motor driving an
elastic rotor (Mukherjee et al., 2009). This system was symmetric in two sets of coordinates, one set was mechanical or geometrical, and the other symmetry was in electrical domain. Recently,
Mukherjee et al. (2009) presented the extension for Lagrangian-Hamiltonian Mechanics for continuous systems and investigated the dynamics of an internally damped rotor through dissipative coupling.
Some basic concepts of umbra-Hamiltonian theory may be given in Appendix A for ready reference. The concept of umbra-Lagrangian may be represented as shown in Fig. 2 and briefly expressed as follows:
(a) D'Alembert's basic idea of allowing displacements, when the real time is frozen, is conveniently expressed in terms of umbra-time.
(b) Umbra-time may be viewed as the interior time of a system.
(c) Potential, kinetic and co-kinetic energies stored in storage elements like symmetric compliant and inertial fields can be expressed as functions in umbra-time (umbra-displacements and
(d) The effort of any external force, resistive element or field, gyroscopic element (treated as anti-symmetric resistive field), transformer or lever element, anti-symmetric compliant field and
sensing element depends on displacements and velocities in real time. The potentials associated with them are obtained by evaluation of work-done through umbra-displacements.
In formulating the umbra-Lagrangian for a system, two classes of elements are generally required: (a) storage elements, whose energies are defined in terms of umbra-displacements and
umbra-velocities, and (b) rest of the elements for which the efforts returned are evaluated entirely in terms of real time and their umbra-potential are obtained by umbra-displacement of the
corresponding element. These two categories of elements can be identified through breaking the system into its basic entities or dynamical units. Bond graphs (Mukherjee and Karmakar, 2000; Karnopp et
al., 1990) may be one of the tools for representing the dynamics of the system and obtaining the expressions for either the classical or the umbra-Lagrangian as provided in details in Appendix B.
The broad principle, on which the creation of umbra-Lagrangian and other relevant energies (Mukherjee, 2001; Rastogi, 2005; Mukherjee et al., 2009) are based, can be summarized as follows:
(a) All temporal fluctuations of parameters are in real time.
(b) The co-kinetic and potential energies would normally be evaluated taking generalized velocities and displacements as function of umbra time and it is assumed that there are n generalized
co-ordinates. All definitions are separately given in Nomenclature.
The umbra-potential (for potential forces only) is defined as
where a bold face letter represents a vector quantity. As an example, the umbra-potential energy for a spring with time varying stiffness can be written as
Likewise, the umbra-kinetic energy is defined as
and the umbra co-kinetic energy as
For instance, the umbra co-kinetic energy for a time varying mass can be represented as
(c) The umbra-potential associated with generalized resistive fields is evaluated, which is based on the philosophy that resistive fields open the system, and thus they observe the states of motion
in real time as an external observer. The force generated by them does work on the system through umbra generalized displacements
For example, umbra-potential for a damper with time varying damping coefficient may be written as
It is significant to note that in classical approach, one may incorporate dissipative forces through Rayleigh potentials, which in linear case can be defined as
In such case, the anti-symmetric part of [R] in Eq. (24), if present, has no contribution to
(d) The umbra-potential associated with external generalized forces may be incorporated as
To illustrate, one may find the umbra-potential for any external force
The total umbra-potential may be obtained by summing-up all the potentials represented by Eqs. (17), (22) and (26) and expressed as
and the umbra-Lagrangian would, therefore, be
New Lagrange's equations for a general class of systems may be given as
Noether's theorem (Noether, 1918) states that, if the Lagrangian of a system is invariant under a family of single parameter groups, then each such group renders a constant of motion. The extended
Noether's theorem, as discussed in paper (Mukherjee et al., 2009) may lead to a constant of motion, or trajectories, on which some dynamical quantity remains conserved.
The umbra-Lagrangian may be defined on extended manifold, which consists of real displacements and velocities as well as umbra-displacements and velocities and real time (Mukherjee et al., 2009),
Here, the super dot j^th parameter (or group) may be decomposed as follows:
The general forms for
where j^th transformation may then be expressed as
Using Eq. (30) along with Eqs. (33) and (34) in the previous equation (35), the extended Noether's theorem may be obtained and written as
In terms of the differential one-forms
The term on the left-hand side is the classical Noether term while the term on the right-hand side is additional and termed here as modulatory convection term. This modulatory convection term is made
zero to obtain the conserved quantity. So, whenever the extended Lagrangian is found invariant, there is either the general conserved quantity or a trajectory on which such quantity remains
conserved. The aforementioned methodology may be explained with two simple examples provided in the next subsections.
Example 1. Simple mass-spring-damper system with time fluctuating parameters
Let us consider a simple mass spring damper system as shown in Fig. 3. Using the aforementioned procedure, the umbra-Lagrangian for this system may be expressed as
The equation of motion obtained from the above umbra-Lagrangian through Eq. (30) may be written as
Example 2. Two oscillators with gyroscopic coupling
Let us consider an example of a system with two similar oscillators having mass m and stiffness K, and with a gyroscopic coupling of strength γ as shown in Fig. 4(a). The umbra-Lagrangian of the
system may be written as
If the umbra-Lagrangian admits the one-parameter rotational group, then the infinitesimal generators of the rotational SO (2) group may be written as
and the symmetry (invariance) condition for umbra-Lagrangian may be expressed as
Through Eq. (36), one obtains
where C is a constant of integration. The first term is the moment of momentum, and the second term is contributed by the gyroscopic coupling.
The umbra-Hamiltonian (discussed in Appendix A) of the system may be expressed as
Applying the second theorem of umbra-Hamiltonian and finding
Again, the corollary of the second theorem of umbra-Hamiltonian gives
and on substituting the expression of
Both Examples illustrated in this paper provide an overview of the whole concept. It is apparent throughout the paper that the proposed extension of Lagrangian-Hamiltonian mechanics in terms of umbra
philosophy gives a new dimension for analyzing the dynamical systems with non-conservative and non-potential forces.
The paper presented a brief review on the literature in Lagrangian-Hamiltonian Mechanics. Most of the research papers and books available in this field are incorporated, which undoubtedly enhanced
and enriched the field of Mechanics. In this paper, authors have presented a brief review on extension of Lagrangian-Hamiltonian Mechanics. Various previous extensions on the subject matter were
discussed with a particular regard to extension of Noether's theorem with nonconservative and non-holonomic systems for general class of systems. After review on literature on this subject matter,
the following points are concluded:
(i) The procedure and methodology developed by other researchers don't have any generalization of Noether's theorem, as it has been mainly applied on the particular structure of the problems, which
were rather mathematical without much physical interpretations of the real system. In this way, there is a substantial loss of generalization of the theorem, which may be applied to any engineering
problems. However, in recent years, few researchers have applied the generalized Noether's theorem in few engineering applications.
(ii) In contrast to all the previous extensions, the philosophy developed by the authors has addressed the issue of nonconservative and dissipative forces by assuming a new Lagrangian, which find
wider applications for engineering problems. The authors have devised a new methodology to find invariants of motions of the dynamical systems. Gauge transformations [48], bi-symmetric rotor-motor
system [49], dynamics of the rotor with internal damping [50] and few others are the applications already published in archival literature.
(iii) It is noteworthy to say that the alternative methods developed by the authors give more transparent physical interpretations, which enable the analyst to make further use of these first
integrals in stability analysis.
(iv) In this article, the authors intended to provide critical evaluations of other extensions, which are rarely applied in the real-world problems.
Appell, P., 1899, "Su rune Forme Genrale des Equations de la Dynamique", C. R. Acad. Sci., Paris, Vol. 129, pp. 459-460. [ Links ]
Arizmendi, C.M., Delgado, J., Nunez-Yepez, H.N. and Salas-Brito, A. L., 2003, "Lagrangian Description of the Variational Equations", Chaos, Solutions and Fractals, Vol. 18, pp. 1065-1073. [ Links ]
Bahar, L.Y. and Kwatny, H.G., 1987, "Extension of Noether's Theorem to Constrained Non-Conservative Dynamical Systems", Inter. J. Non-Linear Mechanics, Vol. 22, No. 2, pp.125-138. [ Links ]
Baruh, H., 1999, "Analytical Dynamics", WCB and McGraw-Hill, Singapore. [ Links ]
Bluman, G.W. and Kumei, S., 1989, "Symmetries and Differential Equations", Applied Mathematical Science, Springer-Verlag, New York. [ Links ]
Brown, F.T., 2007, Engineering System Dynamics, 2^nd ed., CRC, Taylor & Francis. [ Links ]
Calkin, M.G., 2000, "Lagrangian and Hamiltonian Mechanics", Allied Publisher Limited, First Indian Reprint, New Delhi. [ Links ]
Cantrijn, F. and Sarlet, W., 1980 "Note on Symmetries and Invariants for Second Order Ordinary Differential Equations", Physics Letters A, Vol. 77, No. 6, pp.404-406. [ Links ]
Corben, H.C. and Stehle, P., 1960, "Classical Mechanics", Wiley, New York. [ Links ]
Crespo da Silva, M.R.M., 1974, "A Transformation Approach for Finding First Integrals of Motion of Dynamical Systems", Inter. J. Non-Linear Mechanics, Vol. 9, pp. 241-250. [ Links ]
Desloge, E.A. and Karch, R.I., 1977, "Noether Theorem in Classical Mechanics", Am. J. Physics, Vol. 45, No. 4, pp. 336-339. [ Links ]
Djukic, Dj. and Vujanovic, B., 1975, "Noether Theory in Classical Mechanics", Acta Mech., Vol. 23, pp. 17-27. [ Links ]
Djukic, Dj. and Sutela, T., 1984, "Integrating Factors and Conservation Laws of Non-conservative Dynamical Systems", Inter. J. Non-Linear Mechanics, Vol. 19, No. 4, pp. 331-33. [ Links ]
Fokas, A.S., 1979, "Group Theoretical Aspects of Constants of Motions and Separable Solutions in Classical Mechanics", J. Math. Anal., Vol. 68, pp. 347-370. [ Links ]
Gantmachar, F., 1970, "Lectures in Analytical Mechanics", Mir Publication, Moscow. [ Links ]
Gauss, C.F., 1829, "Ube rein neues allgemeines Grundegesetz der Mechanik", J. Reine Ang. Math., Vol. 4, pp. 232-235. [ Links ]
Gelfend, I.M. and Fomin, S.V., 1963, Calculus of Variations, Prentice Hall Inc, Englewood Cliffs, N.J. [ Links ]
Gibbs, J.W., 1879, "On Fundamental Formulae of Dynamics", Am. J. Math., Vol. 2, pp. 49-64. [ Links ]
Gilmore, R., Lie Groups, 1974, "Lie Algebra and Some of their Applications", John Wiley, New York. [ Links ]
Goldstein, H., 1980, "Classical Mechanics", Addison-Wesley, MA, 1980. [ Links ]
Hassani, S., 1999, "Mathematical Physics", Springer-Verlag, New York-Heidelberg-Berlin, pp. 936-1000. [ Links ]
Hill, E.L., 1951, "Hamilton Principle and the Conservation Theorems of Mathematical Physics", Rev. Modern Physics, Vol. 23, No. 3, pp. 253-260. [ Links ]
Jacobi, C.G.J., 1884, "Vorlesungenuber Dynamik", Werke, Supplementband, Berlin. [ Links ]
Jones, S.E. and Ames, W.F., 1967, "Similarity Variables and First Integrals for Ordinary Differential Equations", Inter. J. Non-Linear Mechanics, Vol. 2, pp. 257-260. [ Links ]
Jose, J.V. and Saletan, E.J., 1998, "Classical Dynamics-A Contemporary Approach", Cambridge University Press, U.K. [ Links ]
Karnopp, D.C., 1977, "Lagrange's Equations for Complex Bond Graph Systems", Journal of the Dynamic Systems, Measurement, and Control, ASME Trans., Vol. 99, (Dec.), pp.300-306. [ Links ]
Karnopp, D.C., Rosenberg, R.C. and Margolis, D.L., 1990, "System Dynamics: A Unified Approach", John-Wiley and Sons Inc., USA. [ Links ]
Katzin, G.H. and Levine, J., 1976, "A Gauge Invariant Formulation of Time-Dependent Dynamical Symmetry Mappings and Associated Constants of Motion for Lagrangian Particle Mechanics", Inter. J. Math.
Physics, Vol. 17, No. 7, pp. 1345-1350. [ Links ]
Lagrange, J.L., 1788, Mechanique Analytique, Paris. [ Links ]
Lewis, H.R. and Leach, P.G.L., 1982, "A Direct Approach to Finding Exact Invariants for One-Dimensional Time-Dependent Classical Hamiltonians", Inter. J. Math. Physics, Vol. 23, No. 12, pp.
2371-2374. [ Links ]
Meirovitch, L., 1970, "Methods of Analytical Dynamics", McGraw-Hill Book Company, New York. [ Links ]
Mukherjee, A., 1994, "Junction Structures of Bondgraph Theory from Analytical Mechanics Viewpoint", Proceeding of CISS-1^st Conference of International Simulation Societies, Zurich, Switzerland,
pp.661-666. [ Links ]
Mukherjee, A. and Karmakar, R., 2000, "Modelling and Simulation of engineering System through Bond Graph", Narosa publishing House, New Delhi, reprinted by CRC press for North America and by Alpha
Science for Europe. [ Links ]
Mukherjee, A., 2001, "The Issue of Invariants of Motion for General Class of Symmetric Systems through Bond Graph and Umbra-Lagrangian", Proceedings of International Conference on Bond Graph Modeling
and Simulation (ICBGM), Phoenix, Arizona, USA, pp. 295-304. [ Links ]
Mukherjee, A. , Rastogi, V. and Das Gupta, A., 2006, "A Methodology for finding Invariants of Motion for Asymmetric Systems with Gauge-transformed Umbra Lagrangian Generated by Bond Graphs",
Simulation, Vol. 82(4), pp. 207-226. [ Links ]
Mukherjee, A., Rastogi, V. and Das Gupta, A., 2007, "A Study of a Bi-Symmetric Electro-mechanical System through Umbra Lagrangian Generated by Bond Graphs, and Noether's Theorem", Simulation, Vol. 83
(9), pp. 611-630. [ Links ]
Mukherjee, A., Rastogi, V. and Das Gupta, 2009, "Extension of Lagrangian-Hamiltonian Mechanics for Continuous Systems - Investigations of Dynamic Behaviour of One-dimensional Internally Damped Rotor
Driven Through Dissipative Coupling", Non-linear Dynamics, Springer, 58(1), pp. 107-127. [ Links ]
Noether, E., 1918, "Invariante Variationsprobleme", Ges. Wiss. Gottington, Vol. 2, pp. 235-237. [ Links ]
Olver, P., 1986, "Application of Lie groups to Differential Equations", Springer-Verlag. [ Links ]
Rosenberg, R.M., 1977, "Analytical Dynamics of Discrete Systems", Plenum Press, New York. [ Links ]
Sarlet, W. and Bahar, L.Y., 1980, "A Direct Construction of First Integrals for Certain Non-linear Dynamical Systems", Inter. J. Non-Linear Mechanics, Vol. 15, pp. 133-146. [ Links ]
Sarlet, W. and Cantrijn, F., 1981, "Generalizations of Noether's theorem in Classical Mechanics", SIAM Rev., Vol. 23, No. 4, pp. 467-494. [ Links ]
Sarlet, W., 1983, "First Integrals for One-dimensional Particle Motion in a Non-linear, Time-dependent Potential Field", Inter. J. Non-Linear Mechanics, Vol. 18 No. 4, pp. 259-268. [ Links ]
Sattinger, D.H. and Weaver, O.L., 1986, "Lie Groups and Algebras with Applications to Physics Geometry and Mechanics", Springer Verlag, New York-Heidelberg-Berlin. [ Links ]
Simic, S.S., 2002, "On the Symmetry Approach to Polynomial Conservation Laws of One-dimensional Lagrangian Systems", Inter. J. Non-Linear Mechanics, Vol. 37, pp. 197-211. [ Links ]
Sudarshan, E.C.G. and Mukunda, N., 1974, "Classical Dynamics: A Modern Perspective", John Wiley & Sons, New York. [ Links ]
Synge, J.L. and Griffith, B.A., 1959, "Principles of Mechanics", McGraw-Hill, New York. [ Links ]
Rastogi, V., 2005, "Extension of Lagrangian-Hamiltonian Mechanics, Study of Symmetries and Invariants", Ph.D. Thesis, Indian Institute of Technology, Kharagpur, India. [ Links ]
Vujanovic, B., 1970, "A Group-Variational Procedure for Finding First Integrals of Dynamical Systems", Inter. J. Non-Linear Mechanics, Vol. 5, pp. 269-278. [ Links ]
Vujanovic, B., 1978, "Conservation Laws of Dynamical Systems via d'Alembert's Principle", Inter. J. Non-Linear Mechanics, Vol. 13, pp.185-197. [ Links ]
Vujanovic, B. and Jones, S.E., 1989, "Variational Methods in Nonconservative Phenomena", Academic press, Boston. [ Links ]
Whitaker, E.T., 1959, "A Treatise on the Analytical Dynamics of particles and Rigid Bodies", Fourth Edition, Cambridge University Press, New York. [ Links ]
Paper accepted November, 2010.
Technical Editor: Domingos Rade(Footnotes)
Appendix A
Concept of umbra-Lagrangian and umbra-Hamiltonian
Mukherjee (1994) introduced a concise and modified form of Lagrange's equation and manifested the use of this new scheme to arrive at system models in the presence of time fluctuating parameters,
general dissipation and gyroscopic couplings, etc. In this scheme, real and virtual energies (or work) are separated by introduction of an additional time like parameter, which is termed as
'umbra-time'. The prefix 'umbra' was appended to all type of energies, and corresponding Lagrangian was termed as the "umbra Lagrangian". The basic idea presented in reference (Mukherjee, 1994; 2001)
leading to umbra-Lagrangian and umbra-Lagrange's equation may be briefly expressed as follows:
(a) Umbra-time is the beholder of D'Alembert's basic idea of allowing displacements, when the real time is frozen.
(b)Umbra-time may be viewed as the interior time of a system.
(c) Potential, kinetic and co-kinetic energies stored in storage elements like symmetric compliant and inertial fields can be expressed as functions in umbra-time (umbra-displacements and
(d) The effort of any external force, resistive element or field, gyroscopic element (treated as anti-symmetric resistive field), transformer or lever element, anti-symmetric compliant field and
sensing element depends on displacements and velocities in real time. The potentials associated with them are obtained by evaluation of work-done through umbra-displacements.
The broad principle on which the creation of umbra-Lagrangian and other relevant energies (Mukherjee, 1994; Mukherjee, 2001; Rastogi, 2005; Mukherjee et al., 2006; Mukherjee et al., 2007; Mukherjee
et al., 2009) are summarized in the section IV. However, the umbra-Hamiltonian (Mukherjee, 2001) may be represented as
The umbra-Hamiltonian
The two theorems of the umbra-Hamiltonian may be given as
Corollary of Theorem 2
If for a system
Appendix B
Generation of umbra-Lagrangian through Bond graphs
Karnopp [51] proposed an algorithm to arrive at Lagrange's equations for complex systems through its bondgraph model. The steps of Karnopp algorithm may be briefed as
(1) Apply the required causality at all effort and flow sources and use the junction structure elements (only) to extend the causality as far as possible within the bondgraph. If causal conflicts
arise at this stage, there is a fundamental contradiction within the model and it must be reformulated.
(2) Choose a '1' junction for which the flow is not yet causally determined or insert a '1' junction into any causally undetermined bond and attach an artificial flow source to '1' junction.
(3) Apply the required causality to the artificial source and extend the causality as far as possible into the bondgraph using junction structure element.
(4) Return to step (2) and continue until all bonds have been causally oriented.
Now at this stage, an extension of Karnopp's algorithm is presented with a detailed procedure and may appear more elaborate for generation of umbra-Lagrangian of the system. The models may be
classified as follows:
(a) System with no modulated two-port transformers. Such bondgraph models may be called holonomic.
(b) System with modulated two-port transformers. Such bondgraph may be called as non-holonomic.
To explain the procedure, one may take an example as shown in Fig. 5(a) with its bondgraph model in Fig. 5(b). Now, the additional steps for generation of umbra-Lagrangian may be given as:
(i) Create two copies of each junction excluding two-port elements side by side; associate one with t-variable (the real time). The space between these two may be designated as trans-temporal space.
(ii) Insert the artificial sources to their corresponding junctions. Those inserted in the t-component are designated as function of t (see Fig. 6(a)).
(iii) Insert the original flow sources at their respective junctions on the t; insert the effort sources in
(iv) Insert all I- and C-elements and fields at their respective junctions on
(v) R-elements and fields (including gyrators) observe the motion in real time t and apply the force on the system, the corresponding umbra-potentials associated with them is generated through work
done by these forces undergoing umbra-displacements. These features may be incorporated by inserting them in trans-temporal space and adding suitable trans-temporal bonds; 0-junction and suitable
activations as shown in Fig.6 (b). Such bondgraph may be termed as umbra-Lagrangian generator bond graphs.
The umbra-Lagrangian for Fig. 6(b) may be expressed as
Now, it is easy to verify that umbra-Lagrangian of Eq. (B1) renders right equation of motion through Eq. (30) for the system.
|
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1678-58782011000100004&lng=en&nrm=iso","timestamp":"2014-04-20T00:40:46Z","content_type":null,"content_length":"111999","record_id":"<urn:uuid:2586982f-efde-4542-b0c5-75b8d9a9964e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topology and Geometry in Open CASCADE. Part 3
OK, let's continue to eat our elephant bit by bit. The next bit is edge. Hope you won't get difficulties with it.
Edge is a topological entity that corresponds to 1D object – a curve. It may designate a face boundary (e.g. one of the twelve edges of a box) or just a ‘floating' edge not belonging to a face
(imagine an initial contour before constructing a prism or a sweep). Face edges can be shared by two (or more) faces (e.g. in a stamp model they represent connection lines between faces) or can only
belong to one face (in a stamp model these are boundary edges). I'm sure you saw all of these types – in default viewer, in wireframe mode, they are displayed in red, yellow and green respectively.
Edge contains several geometric representations (refer to the diagram in Part1):
- Curve C(t) in 3D space, encoded as Geom_Curve. This is considered as a primary representation;
- Curve(s) P(t) in parametric 2D space of a surface underlying each face the edge belongs to. These are often called pcurves and are encoded as Geom2d_Curve;
- Polygonal representation as an array of points in 3D, encoded as Poly_Polygon3D;
- Polygonal representation as an array of indexes in array of points of face triangulation, encoded as Poly_PlygonOnTriangulation.
The latter two are tessellation analogues of exact representations with the help of former two.
These representations can be retrieved using already mentioned BRep_Tool, for instance:
Standard_Real aFirst, aLast, aPFirst, aPLast;
Handle(Geom_Curve) aCurve3d = BRep_Tool::Curve (anEdge, aFirst, aLast);
Handle(Geom2d_Curve) aPCurve = BRep_Tool::CurveOnSurface (anEdge, aFace, aPFirst, aPLast);
The edge must have pcurves on all surfaces, the only exception is planes where pcurves can be computed on the fly.
The edge curves must be coherent, i.e. go in one direction. Thus, a point on the edge can be computed using any representation - as C(t), t from [first, last]; S1 (P1x (u), P1y(u)), u from [first1,
last1], where Pi – pcurve in parametric space of surface Si
Edge flags
Edge has two special flags:
- "same range" (BRep_Tool::SameRange()), which is true when first = first_i and last = last_i, i.e. all geometric representations are within the same range;
- "same parameter" (BRep_Tool::SameParameter()), which is true when C(t) = S1(P1x(t), P1y(t)), i.e. any point along the edge corresponds to the same parameter on any of its curves.
Many algorithms assume that they are both set, therefore it is recommended that you ensure that these conditions are respected and flags are set.
Edge's tolerance is a maximum deviation between its 3D curve and any other representation. Thus, its geometric meaning is a radius of a pipe that goes along its 3D curve and encompass curves restored
from all representations.
Special edge types
There are two kinds of edges that are distinct from others. These are:
- seam edge – one which is shared by the same face twice (i.e. has 2 pcurves on the same surface)
- degenerated edge – one which lies on a surface singularity that corresponds to a single point in 3D space.
The sphere contains both of these types. Seam-edge lies on pcurves corresponding to surface U iso-lines with parameters 0 and 2*PI. Degenerated edges lies on North and South poles and correspond to V
iso-lines with parameters –PI/2 and PI/2.
Other examples - torus, cylinder, and cone. Torus has two seam-edges – corresponding to its parametric space boundaries; cylinder has a seam-edge. Degenerated edge represents on a cone apex.
To check if the edge is either seam or degenerated, use BRep_Tool::IsClosed(), and BRep_Tool::Degenerated().
Edge orientation
Forward edge orientation means that its logical direction matches direction of its curve(s). Reversed orientation means that logical direction is opposite to curve's direction. Therefore, seam-edge
always has 2 orientations within a face – one reversed and one forward.
To be continued...
P.S. As usual, many thanks to those who voted and sent comments. Is this series helpful ?
14 comments:
1. Svetlozar KostadinovFebruary 13, 2009 at 12:16 AM
Very useful article even for non-beginners. Roman, keep going and go deeper into the geometry/topology stuff.
Beside this, the rate buttons don't work on Opera. I voted in IE.
2. Are you kidding me?
Don't doubt more if it is useful or not. This is ESSENTIAL.
I am very faithful to your blog.
Come on, let's go through the faces because I am getting several issues to solve!
As far as possible, you are getting my project alive!
Thank you Roman
3. OK, folks, thanks for continued support. Yes, face is a next target. Stay tuned ;-)
4. Great and interesting articles! With such background theory get easier to read throught the OCC code!
5. Rob BachrachFebruary 13, 2009 at 4:03 PM
Roman, thanks for all of this. I've understood OCC for years, but now I start to really understand it.
6. Ever thought about writing "Open CASCADE for Dummies" :)
7. Great article Roman, much enjoying the topology series.
A little surprised that the face is the following article, I thought Wire was the node up the topology graph? Thanks for your efforts!
8. Great blog, thanks for posting all this fun stuff.
It would be great if you could elaborate on orientation. It seems to me, that sometimes, the orientation attribute of a TopoDS_Edge refers the orientation of the TopoDS_Edge with respect to the
underlying TopoDS_TEdge (or BRepCurve) and other times the orientation of the TopoDS_Edge refers to the orientation of the TopoDS_Edge with respect to the TopoDS_Face that it bounds. I am always
confused as to which one I actually have at any given moment.
9. Thanks for continuous feedback, guys, and high ratings. Appreciate your interest and support.
OK to include more details on wires and orientation.
We are working hard here at Intel to release new Beta Update for Parallel Amplifier and Inspector (www.intel.com/go/parallel, fighting with remaining bugs and polishing GUI. In addition to other
personal projects, this makes it more challenging to quickly issue posts. But I will keep on, I promise...
Thanks again.
10. As productive as you are, how the heck did you manage to escape OpenCascade? It's a wonder that they didn't break out handcuffs when you announced you were leaving! :)
A Dummies book is not a bad idea. If not, could you at least take your blog posts and put them in a PDF some time? A PDF would be easier if someone wants to print it out.
11. Hi Roman,
first of all thanks for the series - it's really essential!
However, there's an issues I didn't get. It's about orientation of seam-edges. Why do these always have two different orientations? If there are two separate pcurves for each seam-edge it is
possible to decide how those are parameterized, isn't it? Additionally the rule you introduced in part 4 (material on the left for forward edges or on the right for reversed edges) will not be
violated if the edges are all forward (or reversed).
If the solution comes in one of the next articles you mentioned in part 4 don't bother answering this post.
12. Mark, I left the company quite gracefully and continue to maintain relationships with many folks there. This is a great team.
Pawel, remember that curve representations (3D and all pcurves) must be consistent – see Part3. Thus, in 2D parametric space they will be parallel and co-directional (see sphere pcurves in
figures in Parts 3 and 4). Thus, to keep material on the left on the one and on the right of the other, one must be forward and the other – reversed.
13. Ok I finally got it! For some reason I misinterpreted the arrows as the edge orientation (not the parameterization direction!).
14. Good Article
|
{"url":"http://opencascade.blogspot.com/2009/02/topology-and-geometry-in-open-cascade_12.html","timestamp":"2014-04-23T11:46:24Z","content_type":null,"content_length":"91971","record_id":"<urn:uuid:44759446-a787-4c20-843f-9ed38e701d1c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wauna Math Tutor
Find a Wauna Math Tutor
...I am convinced that these students, in concert with the school and parents, can find the success they want and are capable of in any academic environment. I served as a missionary for two years
in Colombia, South America prior to finishing college. There I learned to speak fluent Spanish, learn...
48 Subjects: including ACT Math, prealgebra, algebra 1, Spanish
...I would love to be in a position to help others come to appreciate the mysteries and wonders science and math reveal about our world. My past teaching experience includes four years instructing
beginning and intermediate college astronomy laboratories, as well as individual student tutoring for ...
5 Subjects: including prealgebra, precalculus, algebra 1, geometry
...In my journey so far I have earned: - Associate of Arts and Sciences degree from Tacoma Community College - Bachelors of Science in Biology from Washington State University - I am very excited
to say I have been accepted to medical school and will begin in July of 2014!! While tutoring childr...
25 Subjects: including statistics, geometry, trigonometry, probability
...I want to thank you for considering me as your tutor, and I sincerely hope I will be able to assist you in your educational journey.I am a college graduate with a minor in Biology. As such I
have taken numerous math and science classes such as algebra, geometry, chemistry, precalculus, physics, anatomy, and calculus. I have previous experience tutoring math up to the precalculus
22 Subjects: including geometry, ACT Math, SAT math, algebra 2
...I was often the go-to girl for helping my friends understand math concepts they were struggling with. I was able to figure out how to explain the concepts or problems in a way that made sense
to them. Later, I used those same skills in a math tutoring program I helped set up at my high school through our school's chapter of the National Honor Society.
35 Subjects: including statistics, linear algebra, English, algebra 1
|
{"url":"http://www.purplemath.com/wauna_math_tutors.php","timestamp":"2014-04-18T11:18:24Z","content_type":null,"content_length":"23647","record_id":"<urn:uuid:4afd785f-7bea-409f-8621-83a1765b5ccb>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Best Response
You've already chosen the best response.
Question* im a dumb |:
Best Response
You've already chosen the best response.
I can answer from least right off -4, 4/3, square root of 10... Now to help you figure out the subsets I suggest this site http://www.studyzone.org/mtestprep/math8/a/number_subsets7l.cfm -4 is an
interger, 4/3 is a rational number, and the square root of 105 is irrational....Does this help?
Best Response
You've already chosen the best response.
Yes thank you
Best Response
You've already chosen the best response.
:) No problem
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50695007e4b0e3061a1d95aa","timestamp":"2014-04-18T08:29:05Z","content_type":null,"content_length":"35071","record_id":"<urn:uuid:5342b4b3-c1b3-434a-a1e3-bbacdf90cd1b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Fibonacci Series
Because division or growth by the Golden Mean has no beginning or ending, it is not a good candidate for constructing forms of any kind. It is necessary to have a definable starting point if you want
to build anything.
However, there is an approximation to the Golden Mean that nature uses, called the Fibonacci Sequence. Leonardo Fibonacci was a monk who noticed that branches on trees, leaves on flowers, and seeds
in pine cones and sunflower seeds arranged themselves in this sequence.
The Fibonacci Sequence is based on the golden mean ratio, or Phi (). To the left of the equal sign is raised to a power, and to the right of the equal sign is the Fibonacci Sequence:
Each digit in the second column to the left of the symbol is the sum of the 2 before it in the previous row:
1 + 0 = 1, 1 + 1 = 2, 2+1 = 3, 3 + 2 = 5, 5 + 3 = 8, ..... and so on.
Notice that when one divides the digit by the one before it in the sequence, the result approaches Ø:
1 / 1 = 1.0
2 / 1 = 2.0
3 / 2 = 1.5
5 / 3 = 1.67
8 / 5 = 1.60
13 / 8 = 1.625
21 / 13 = 1.6153846
34 / 21 = 1.6190476
55 / 34 = 1.617647
Here is a chart that shows this:
Notice that is approached both from above and below. This is not an asymptotic approach, but one which affords a full view of both sides. Like a signal locking in on the target, the Fibonacci
sequence homes in on .
The ratio is attained from the division of integers. Integers, or whole numbers, represent things that can be designed and built in the 'real world.' Even though is unattainable, nature can closely
approximate it, as we can see from the following photographs:
A Nautilus shell (1)
Echeveria Agavoides (2)
Helianthus Annus (2)
human (3)
The human body has many Phi/Fibonacci relationships.
The leg: The distance from the hip to the knee, and from the knee to the ankle.
The face: The distance from the top of the head to the nose, and from the nose to the chin.
The arm: The distance between the shoulder joint to the elbow, and from the elbow to the finger tips.
The hand: The distance between the wrist, the knuckles, the first and second joints of the fingers, and the finger tips.
Of course every body is different and these relationships are only approximate. Once you become aware of the Fibonacci sequence / Phi ratio, however, you begin to see it in many life forms.
(As an aside, note that in the Fibonacci sequence, you don't have to start with 1. Begin with any number and proceed
with the rule "the current number is the sum of the 2 previous numbers." Do the division of the current number by the prior number in the sequence and will always be the result.)
On to The Properties of Phi
(1) The Geometry of Art and Life -- Matila Ghyka
(2) The Curves of Life -- Theodore Andreas Cook
(3) The Power Of Limits -- Gyorgy Doczi
Return to Geometry Home Page The Big Picture Home
|
{"url":"http://www.kjmaclean.com/Geometry/Fibonacci.html","timestamp":"2014-04-18T23:16:21Z","content_type":null,"content_length":"15719","record_id":"<urn:uuid:ac6a4d0e-1cef-42bf-af7e-7d574f6a2114>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Worked example 11.6: Oscillating disk
Next: Orbital motion Up: Oscillatory motion Previous: Worked example 11.5: Gravity
Question: A uniform disk of radius
Answer: The moment of inertia of the disk about a perpendicular axis passing through its centre is
The angular frequency of small amplitude oscillations of a compound pendulum is given by
Hence, the answer is
Richard Fitzpatrick 2006-02-02
|
{"url":"http://farside.ph.utexas.edu/teaching/301/lectures/node148.html","timestamp":"2014-04-16T04:11:50Z","content_type":null,"content_length":"4446","record_id":"<urn:uuid:4485a423-3332-45c9-916d-35386423c71d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Compact Hausdorff spaces without isolated points in ZF
up vote 6 down vote favorite
S is uncountable := |$\mathbb{N}$| < |S|
S is noncountable := |S| $\not\leq |\mathbb{N}|$
(X,$T$) is a nice space := (X,$T$) is a compact Hausdorff space without isolated points
Does [ ZF / ZF + Countable Choice ] prove that every nice space is [ uncountable / noncountable ] ?
If not, is it known to prove that the statement implies some choice principle?
What if the spaces are additionally assumed to be metrizable?
Now, that's basically 12 questions, so I certainly don't anticipate answers for all of them.
If it matters, the one I'm most curious about is "Does ZF prove that every nice space is noncountable?".
set-theory gn.general-topology axiom-of-choice
add comment
2 Answers
active oldest votes
One of the usual ways of proving in ZFC that every compact Hausdorff space $X$ without isolated points (perfect space) is uncountable is by proving that there is a copy of the Cantor space
$2^\omega$ inside it, as follows. Pick two points and separate them with neighborhoods $U_0, U_1$ having disjoint closures. Inside each of these neighborhoods, pick two points and
separating neighborhoods $U_{00},U_{01}\subset U_0$ and $U_{10},U_{11}\subset U_1$ having disjoint closures inside those neighborhoods, and so on proceeding inductively. Every infinite
binary sequence $s\in 2^{\omega}$ determines a unique nesting sequence of these sets, which must be nonempty. And so we have continuum many points in $X$, so it is uncountable.
This proof, however, makes several uses of the axiom of choice. First, we have the choices involved with picking the points to be separated, and second, the choices involved with picking
the separating neighborhoods. Although there are only countably many choices being made here, this is an instance of Dependent Choice, a stronger principle than mere countable choice, since
the choices are being made in succession. Finally, third, a subtle point, we have the choices involved in picking for each binary sequence a single point from the intersection of the
corresponding nested neighborhoods. After all, there could be many points in that intersection.
With some additional assumptions on $X$, however, we can get around these uses of choice, and thereby obtain answers to some of your questions. For example, if we only aim to prove that $X$
is noncountable, rather than uncountable, then we may assume towards contradiction that $X$ is countable, which provides for us a canonical way of picking points from the space. (In the
case of the first use of choice, it would suffice if $X$ were separable, since we could just pick points from a fixed countable dense set.) If $X$ were a metric space, then we have a
canonical way to pick neighborhoods of any given point. Also, by making these neighborhoods shrink to $0$ as the construction proceeds, we ensure that the intersection of the nested sets
contains a single point.
up vote 8
down vote Thus, this argument shows in ZF, without any choice, that every compact Hausdorff metric space having no isolated points is noncountable. More generally, it shows, again without any choice,
accepted that every separable compact Hausdorff metric space without isolated points has uncountable size at least continuum.
If we have Dependent Choice, then we can prove that every compact Hausdorff metric space is uncountable of size at least continuum, since DC allows us to overcome the first two uses of
choice (picking the points and the neighborhoods), and by shrinking the neighborhoods we avoid the need for choice in the last step.
A clever person may be able to improve these arguments to cover additional cases.
Meanwhile, let me mention an interesting example on the other side of the question. This example illustrates that several of the usual equivalent formulations of compactness are no longer
equivalent in the non-AC context. Namely, it is consistent with ZF that there is an infinite but Dedekind finite set $D$ of real numbers. That is, $D$ is infinite, but has no countably
infinite subset. It follows that $D$ has at most finitely many isolated points, since otherwise we could enumerate the rational intervals and find these isolated points, thereby enumerating
a countably infinite subset of $D$, which is impossible. Let us simply omit these finitely many isolated points and thereby assume without loss of generality that $D$ is an infinite
Dedekind-finite set of reals having no isolated points. Since $D$ is Dedekind-finite, every sequence in $D$ has only finitely many values and hence has a convergent (constant) subsequence.
Thus, $D$ is a sequentially compact set of reals. In other words, $D$ is a sequentially compact metrizable space with no isolated points. However, $D$ is not uncountable in the sense you
mentioned, since we don't even have $|\mathbb{N}|\leq D$, as there is no countably infinite subset of $D$. Nevertheless, $D$ is noncountable.
Joel, I'm not following you when you say, "which provides for us a canonical way of picking points from the space". It sounds like you need to choose a bijection with $\mathbb{N}$ in
order to proceed. – Todd Trimble♦ Sep 12 '10 at 13:44
If $X$ is countable, you can fix an enumeration of $X$ for the rest of the proof, and use this to select points from $X$. – Joel David Hamkins Sep 12 '10 at 13:47
Yes. My head must be in topos theory too much, where we sometimes have difficulty making even just one choice. ;-) – Todd Trimble♦ Sep 12 '10 at 14:00
Everything Joel said here is correct, but a warning for others about something that confused me once. We have to be careful about the term "perfect space". Sometimes people use the term
1 for any space with no isolated points, like the Wikipedia article does. Without a compactness or completeness condition you can't prove in ZFC that every uncountable, second countable
space $U$ contains a copy of Cantor space, or even has cardinality continuum, but you can prove $U$ has an uncountable closed subset with no isolated points. Proof sketch: throw out
every countable basic open neighborhood of $U$. – Carl Mummert Sep 12 '10 at 23:44
add comment
I'd like to extend on Joel's answer, and point that ZF itself cannot prove that every "nice space" is uncountable.
It is consistent that the axiom of choice fails and there exists a Dedekind-finite set $X$ which can be topologized as follows:
1. $X$ is Hausdorff compact;
2. $X$ is strongly connected (every real valued function is constant);
3. The topology is an order topology of a dense linear order with endpoints (if we remove the endpoints then this is a locally-compact space, and every closed interval is compact).
It follows that there are no isolated points, so this is a nice space. However the set itself is Dedekind-finite and so not uncountable (but it is still noncountable). Such
up vote 3 down topological space is called a Läuchli continuum.
For example if we consider Mostowski's ordered model, by adding two endpoints to the atoms the result is a Dedekind-finite $X$ with the above properties.
I elaborated on this in a recent math.SE answer which includes all the relevant references and more.
It should be pointed that in the great book of choice principles the existence of non-trivial Läuchli continua is the negation of Form 155; whereas countable choice is Form 8. I could
not find much connection between the two forms in the site.
add comment
Not the answer you're looking for? Browse other questions tagged set-theory gn.general-topology axiom-of-choice or ask your own question.
|
{"url":"http://mathoverflow.net/questions/38450/compact-hausdorff-spaces-without-isolated-points-in-zf","timestamp":"2014-04-18T21:00:19Z","content_type":null,"content_length":"66644","record_id":"<urn:uuid:fcf99131-446d-4b09-bc67-ff9ea2d31534>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Table of Contents
gluNurbsCurve - define the shape of a NURBS curve
void gluNurbsCurve( GLUnurbs* nurb,
GLint knotCount,
GLfloat *knots,
GLint stride,
GLfloat *control,
GLint order,
GLenum type )
eqn not supported
Specifies the NURBS object (created with gluNewNurbsRenderer).
Specifies the number of knots in knots. knotCount equals the number of control points plus the order.
Specifies an array of knotCount nondecreasing knot values.
Specifies the offset (as a number of single-precision floating-point values) between successive curve control points.
Specifies a pointer to an array of control points. The coordinates must agree with type, specified below.
Specifies the order of the NURBS curve. order equals degree + 1, hence a cubic curve has an order of 4.
Specifies the type of the curve. If this curve is defined within a gluBeginCurve/gluEndCurve pair, then the type can be any of the valid one-dimensional evaluator types (such as GL_MAP1_VERTEX_3
or GL_MAP1_COLOR_4). Between a gluBeginTrim/gluEndTrim pair, the only valid types are GLU_MAP1_TRIM_2 and GLU_MAP1_TRIM_3.
Use gluNurbsCurve to describe a NURBS curve.
When gluNurbsCurve appears between a gluBeginCurve/gluEndCurve pair, it is used to describe a curve to be rendered. Positional, texture, and color coordinates are associated by presenting each as a
separate gluNurbsCurve between a gluBeginCurve/gluEndCurve pair. No more than one call to gluNurbsCurve for each of color, position, and texture data can be made within a single gluBeginCurve/
gluEndCurve pair. Exactly one call must be made to describe the position of the curve (a type of GL_MAP1_VERTEX_3 or GL_MAP1_VERTEX_4).
When gluNurbsCurve appears between a gluBeginTrim/gluEndTrim pair, it is used to describe a trimming curve on a NURBS surface. If type is GLU_MAP1_TRIM_2, then it describes a curve in two-dimensional
(u and v) parameter space. If it is GLU_MAP1_TRIM_3, then it describes a curve in two-dimensional homogeneous (u, v, and w) parameter space. See the gluBeginTrim reference page for more discussion
about trimming curves.
The following commands render a textured NURBS curve with normals:
gluBeginCurve(nobj) ; gluNurbsCurve(nobj, ..., GL_MAP1_TEXTURE_COORD_2);
gluNurbsCurve(nobj, ..., GL_MAP1_NORMAL);
gluNurbsCurve(nobj, ..., GL_MAP1_VERTEX_4);
gluEndCurve(nobj) ;
To define trim curves which stitch well, use gluPwlCurve. gluBeginCurve(3G) , gluBeginTrim(3G) , gluNewNurbsRenderer(3G) , gluPwlCurve(3G)
|
{"url":"http://www.xfree86.org/4.8.0/gluNurbsCurve.3.html","timestamp":"2014-04-18T08:24:12Z","content_type":null,"content_length":"5039","record_id":"<urn:uuid:9c88cfa4-8741-4ae9-86ef-4a34c23803ab>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Applications of Umbral Calculus Associated with
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 865721, 12 pages
Research Article
Applications of Umbral Calculus Associated with -Adic Invariant Integrals on
^1Department of Mathematics, Sogang University, Seoul 121-742, Republic of Korea
^2Department of Mathematics, Kwangwoon University, Seoul 139-701, Republic of Korea
Received 8 November 2012; Accepted 22 November 2012
Academic Editor: Gaston N'Guerekata
Copyright © 2012 Dae San Kim and Taekyun Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
Recently, Dere and Simsek (2012) have studied the applications of umbral algebra to some special functions. In this paper, we investigate some properties of umbral calculus associated with -adic
invariant integrals on . From our properties, we can also derive some interesting identities of Bernoulli polynomials.
1. Introduction
Let be a fixed prime number. Throughout this paper, , and denote the ring of -adic integers, the field of -adic rational numbers, and the completion of algebraic closure of , respectively.
Let . Let be space of uniformly differentiable functions on . For , the -adic invariant integral on is defined by see [1, 2].
From (1.1), we have where (see [1–6]). Let be the set of all formal power series in the variable over with Let and let denote the vector space of all linear functional on .
The formal power series, defines a linear functional on by setting see [7, 8].
In particular, by (1.4) and (1.5), we get where is the Kronecker symbol (see [7]). Here, denotes both the algebra of formal power series in and the vector space of all linear functional on , so an
element of will be thought of as both a formal power series and a linear functional. We shall call the umbral algebra. The umbral calculus is the study of umbral algebra.
The order of power series is the smallest integer for which does not vanish. We define if . From the definition of order, we note that and .
The series has a multiplicative inverse, denoted by or , if and only if .
Such a series is called invertible series. A series for which is called a delta series (see [7, 8]). Let . Then, we have By (1.5) and (1.6), we get see [7].
Notice that for all in , and for all polynomials , see [7, 8].
Let . Then, we have where the sum is over all nonnegative integers such that (see [8]).
By (1.10), we get Thus, from (1.12), we have see [7].
By (1.13), we get Thus, by (1.14), we see that Let us assume that is a polynomial of degree . Suppose that with and . Then, there exists a unique sequence of polynomials satisfying for all .
The sequence is called the Sheffer sequence for , which is denoted by .
The Sheffer sequence for is called the Appell sequence for , or is Appell for , which is indicated by .
For , it is known that see [7, 8].
Let . Then, we have where is the compositional inverse of , and see [7, 8].
We recall that the Bernoulli polynomials are defined by the generating function to be with the usual convention about replacing by (see [1–16]).
In the special case, are called the th Bernoulli numbers. By (1.21), we easily get Thus, by (1.22), we see that is a monic polynomial of degree . It is easy to show that see [13–15].
From (1.2), we can derive the following equation: Let us take . Then, from (1.21), (1.22), (1.23), and (1.24), we have where (see [1, 2]). Recently, Dere and simsek have studied applications of
umbral algebra to some special functions (see [7]). In this paper, we investigate some properties of umbral calculus associated with -adic invariant integrals on . From our properties, we can derive
some interesting identities of Bernoulli polynomials.
2. Applications of Umbral Calculus Associated with -Adic Invariant Integrals on
Let be an Appell sequence for . By (1.19), we get Let us take . Then, is clearly invertible series. From (1.21) and (2.1), we have Thus, by (2.2), we get From (1.21), (2.1), and (2.3), we note that
is an Appell sequence for .
Let us take the derivative with respect to on both sides of (2.2). Then, we have Thus, by (2.4), we get where . Thus, by (2.6), we get From (1.25) and (2.7), we have By (2.5), we see that Thus, by (
2.9), we have and we can derive the following equation.
From (2.3) and (2.10), By (2.8) and (2.11), we see that Therefore, by (2.5), we obtain the following theorem.
Theorem 2.1. For , one has where .
Corollary 2.2. For , one has
Let us consider the linear functional that satisfies for all polynomials . It can be determined from (1.9) that By (1.24) and (2.16), we get Therefore, by (2.17), we obtain the following theorem.
Theorem 2.3. For , one has That is In particular, one has
From (1.24), one has By (1.25) and (2.21), we get where .
Therefore, by (2.22), we obtain the following theorem.
Theorem 2.4. For , we have In particular, one obtains
The higher order Bernoulli polynomials are defined by In the special case, , are called the th Bernoulli numbers of order (). From (2.25), we note that By (2.25) and (2.26), we get From (2.26) and (
2.27), we note that is a monic polynomial of degree with coefficients in . For , let us assume that By (2.28), we easily see that is an invertible series. From (2.25) and (2.28), we have From (2.29),
we note that is an Appell sequence for . Therefore, by (2.29), we obtain the following theorem.
Theorem 2.5. For and , one has In particular, the Bernoulli polynomials of order are given by That is
Let us consider the linear functional that satisfies for all polynomials . It can be determined from (1.9) that Therefore, by (2.34), we obtain the following theorem.
Theorem 2.6. For , one has That is In particular, one gets
Remark 2.7. From (1.11), we note that By Theorems 2.3 and 2.6 and (2.38), we get Let be the Sheffer sequence for .
Then the Sheffer identity is given by see [7, 8], where . From Theorem 2.5 and (2.40), we have
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology 2012R1A1A2003786.
1. T. Kim, “Symmetry $p$-adic invariant integral on ${\mathbf{Z}}_{p}$for Bernoulli and Euler polynomials,” Journal of Difference Equations and Applications, vol. 14, no. 12, pp. 1267–1277, 2008.
View at Publisher · View at Google Scholar
2. T. Kim, “$q$-Volkenborn integration,” Russian Journal of Mathematical Physics, vol. 9, no. 3, pp. 288–299, 2002. View at Zentralblatt MATH
3. T. Kim, “An identity of the symmetry for the Frobenius-Euler polynomials associated with the fermionic $p$-adic invariant $q$-integrals on ${ℤ}_{p}$,” The Rocky Mountain Journal of Mathematics,
vol. 41, no. 1, pp. 239–247, 2011. View at Publisher · View at Google Scholar
4. S.-H. Rim and J. Jeong, “On the modified $q$-Euler numbers of higher order with weight,” Advanced Studies in Contemporary Mathematics, vol. 22, no. 1, pp. 93–98, 2012.
5. S.-H. Rim and S.-J. Lee, “Some identities on the twisted $\left(h,q\right)$-Genocchi numbers and polynomials associated with $q$-Bernstein polynomials,” International Journal of Mathematics and
Mathematical Sciences, vol. 2011, Article ID 482840, 8 pages, 2011. View at Publisher · View at Google Scholar
6. H. Ozden, I. N. Cangul, and Y. Simsek, “Remarks on $q$-Bernoulli numbers associated with Daehee numbers,” Advanced Studies in Contemporary Mathematics, vol. 18, no. 1, pp. 41–48, 2009.
7. R. Dere and Y. Simsek, “Applications of umbral algebra to some special polynomials,” Advanced Studies in Contemporary Mathematics, vol. 22, no. 3, pp. 433–438, 2012.
8. S. Roman, The Umbral Calculus, Academic Press, New York, NY, USA, 2005.
9. J. Choi, D. S. Kim, T. Kim, and Y. H. Kim, “Some arithmetic identities on Bernoulli and Euler numbers arising from the $p$-adic integrals on ${ℤ}_{p}$,” Advanced Studies in Contemporary
Mathematics, vol. 22, no. 2, pp. 239–247, 2012.
10. D. Ding and J. Yang, “Some identities related to the Apostol-Euler and Apostol-Bernoulli polynomials,” Advanced Studies in Contemporary Mathematics, vol. 20, no. 1, pp. 7–21, 2010. View at
Zentralblatt MATH
11. G. Kim, B. Kim, and J. Choi, “The DC algorithm for computing sums of powers of consecutive integers and Bernoulli numbers,” Advanced Studies in Contemporary Mathematics, vol. 17, no. 2, pp.
137–145, 2008. View at Zentralblatt MATH
12. C. S. Ryoo, “On the generalized Barnes type multiple $q$-Euler polynomials twisted by ramified roots of unity,” Proceedings of the Jangjeon Mathematical Society, vol. 13, no. 2, pp. 255–263,
13. Y. Simsek, “Generating functions of the twisted Bernoulli numbers and polynomials associated with their interpolation functions,” Advanced Studies in Contemporary Mathematics, vol. 16, no. 2, pp.
251–278, 2008. View at Zentralblatt MATH
14. Y. Simsek, “Special functions related to Dedekind-type DC-sums and their applications,” Russian Journal of Mathematical Physics, vol. 17, no. 4, pp. 495–508, 2010. View at Publisher · View at
Google Scholar
15. K. Shiratani and S. Yokoyama, “An application of $p$-adic convolutions,” Memoirs of the Faculty of Science, Kyushu University Series A, vol. 36, no. 1, pp. 73–83, 1982. View at Publisher · View
at Google Scholar
16. Z. Zhang and H. Yang, “Some closed formulas for generalized Bernoulli-Euler numbers and polynomials,” Proceedings of the Jangjeon Mathematical Society, vol. 11, no. 2, pp. 191–198, 2008. View at
Zentralblatt MATH
|
{"url":"http://www.hindawi.com/journals/aaa/2012/865721/","timestamp":"2014-04-19T18:49:57Z","content_type":null,"content_length":"483205","record_id":"<urn:uuid:1cf6ac63-e740-4e36-9305-3a9a68cbfd0a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prisms are solids (three‐dimensional figures) that, unlike planar figures, occupy space. They come in many shapes and sizes. Every prism has the following characteristics:
• Bases: A prism has two bases, which are congruent polygons lying in parallel planes.
• Lateral edges: The lines formed by connecting the corresponding vertices, which form a sequence of parallel segments.
• Lateral faces: The parallelograms formed by the lateral edges.
A prism is named by the polygon that forms its base, as follows:
• Altitude: A segment perpendicular to the planes of the bases with an endpoint in each plane.
• Oblique prism: A prism whose lateral edges are not perpendicular to the base.
• Right prism: A prism whose lateral edges are perpendicular to the bases. In a right prism, a lateral edge is also an altitude.
In Figure 1, prism (a) is a right triangular prism, prism (b) is a right rectangular prism, and prism (c) is an oblique pentagonal prism. The altitude in prism (c) is called h.
Figure 1 Different types of prisms.
|
{"url":"http://www.cliffsnotes.com/math/geometry/geometric-solids/prisms","timestamp":"2014-04-17T21:51:13Z","content_type":null,"content_length":"121356","record_id":"<urn:uuid:dee0d89f-b69e-4afa-9e07-1a0e503a72d7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: December 2005 [00089]
[Date Index] [Thread Index] [Author Index]
Re: A question about algebraic numbers using Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg62804] Re: A question about algebraic numbers using Mathematica
• From: dh <dh at metrohm.ch>
• Date: Mon, 5 Dec 2005 13:41:12 -0500 (EST)
• References: <dn0ug5$8b7$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Hi Kent,
I assume by Q[r] you mean the field extension of the rationals by r.
Because of the extension field minimal polynomial p[x]:
x^4 - 2c x^3 + (c^2 - 2a^2) x^2 + 2a^2 c x - a^2 c^2
r^4 can be written as
r^4== -(- 2c x^3 + (c^2 - 2a^2) x^2 + 2a^2 c x - a^2 c^2)
This shows that all numbers can be written as polynomials in r with
dgeree <=3.
Therefore, to get the inverse of 2-r we need to determine a number y:
y= y0 + y1 r + y2 r^2 + y3 r^3
so that
y (2-r) ==1
y(2-r)-1 ==0
Towards this aim, we multiply and expand y(2-r)-1, then replace r^4 by
lower powers and finally collect coefficients of factors of r:
y= y0 + y1 r + y2 r^2 + y3 r^3;
rep= r^4-> -(- 2c r^3 + (c^2 - 2a^2) r^2 + 2a^2 c r - a^2 c^2);
res1= CoefficientList[Expand[y (2-r)-1] /. rep ,r]
Now, because of y(2-r)-1 ==0, all these coefficients must er zero:
Solve[ Thread[res1=={0,0,0,0}], {y0,y1,y2,y3} ]
this gives the inverse of 2-r
Kent Holing wrote:
> I want to find the inverse of 2 - r in Q[r] where r is a root of the equation
> x^4 - 2c x^3 + (c^2 - 2a^2) x^2 + 2a^2 c x - a^2 c^2 = 0 for a, b and c integers.
> Can this be done for general a, b and c? (I know how to do it for specific given numerical values of a, b and c.)
> Kent Holing
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Dec/msg00089.html","timestamp":"2014-04-18T19:01:49Z","content_type":null,"content_length":"35533","record_id":"<urn:uuid:d9914c2e-6e3d-4dc9-ae5d-74f1b01fc26c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the equation of the exponential function, am I doing it wrong?
December 4th 2012, 02:55 PM #1
Junior Member
Oct 2012
Find the equation of the exponential function, am I doing it wrong?
If I'm doing it wrong, where am I doing it wrong?
Here is my work
f(3) = 10 ; f(9) = 40
10a^6 = 40
a^6 = 4
a = 4^(1/6)
a = 1.25992105
10 = c(1.25992105)^3
1.25992105^3 ≈ 2
10/2 = C
5 = C
f(x) = 2(4)^x
is this the correct equation?
Re: Find the equation of the exponential function, am I doing it wrong?
I know I'm doing it wrong, because when I check the answer it's way off.
Re: Find the equation of the exponential function, am I doing it wrong?
I have absolutely no idea what you are doing there.
This is a perfect example of just 'jumping' into the middle of some thought process that we have no clue about.
So start over. Post the exact original wording of the question.
Re: Find the equation of the exponential function, am I doing it wrong?
the f(3) = 10 and f(9) = 40 are the two points we are given to find the equation.
the 10a^6 = 40 is the equation to find what a is for y = c*a^x
Re: Find the equation of the exponential function, am I doing it wrong?
The exact wording of the problem is.
Find the equation of the exponential function with f(3) = 10 and f(9) = 40
Re: Find the equation of the exponential function, am I doing it wrong?
That is an oddly worded question.
So you want $f(x)=ca^x$ so $10=ca^3~\&~40=ca^9~.$
From which we can see that $\frac{10}{a^3}=\frac{40}{a^9}$
So $a^6=4$ or $a=\sqrt[6]{4}$.
Can you find $c~?$
Re: Find the equation of the exponential function, am I doing it wrong?
If I were to plug that into the formula
f(x) = ca^x
10 = c((4^1/6)^3)
I must need to review my exponents.
Would my next step be 10 = c(4)^3/6
trying to simplify my problem.
Re: Find the equation of the exponential function, am I doing it wrong?
Re: Find the equation of the exponential function, am I doing it wrong?
Then C = 5
Re: Find the equation of the exponential function, am I doing it wrong?
I did find the correct equation. What I was doing wrong was breaking 4^1/6 down to decimal form when I should have left it as 2^1/3.
December 4th 2012, 02:56 PM #2
Junior Member
Oct 2012
December 4th 2012, 03:05 PM #3
December 4th 2012, 03:08 PM #4
Junior Member
Oct 2012
December 4th 2012, 03:10 PM #5
Junior Member
Oct 2012
December 4th 2012, 03:28 PM #6
December 4th 2012, 03:40 PM #7
Junior Member
Oct 2012
December 4th 2012, 03:52 PM #8
December 4th 2012, 03:58 PM #9
Junior Member
Oct 2012
December 4th 2012, 06:00 PM #10
Junior Member
Oct 2012
|
{"url":"http://mathhelpforum.com/algebra/209084-find-equation-exponential-function-am-i-doing-wrong.html","timestamp":"2014-04-18T11:10:18Z","content_type":null,"content_length":"60210","record_id":"<urn:uuid:4124ea23-6547-4b94-a4f3-f89db1a881f3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question about generating rays from camera. - Graphics Programming and Theory
Hey. I just finished doing my ray marcher with distance fields. It supports ambient occlusion, shadows, diffuse lighting. It's pretty basic. Here's render from it: (1024 x 1024)
But ok, let's get to the question. How do you generate eye rays? What's the easiest way to do this?
I'm using this method:
But it distorts my image even at field of view of 45... The fov on render above me is 22.5.
What's the proper method to generate eye rays?
|
{"url":"http://www.gamedev.net/topic/630406-question-about-generating-rays-from-camera/","timestamp":"2014-04-18T20:49:53Z","content_type":null,"content_length":"105042","record_id":"<urn:uuid:0c438f55-07ba-4743-a352-ef06908746a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wauwatosa, WI Calculus Tutor
Find a Wauwatosa, WI Calculus Tutor
...Ever since I was little math has intrigued me. I remember once when I was in Algebra 2 I was having a really difficult time comprehending the principle of "collecting like terms". I'll never
forget how frustrating not being able to understand this was.
18 Subjects: including calculus, algebra 1, ACT Math, precalculus
...I, myself, am a visual learner. If I don't see a math problem worked out step-by-step, it's hard for me to understand how to navigate from the question to the correct answer. My greatest
strength, as a tutor, is that I'm able to relate to the student; I'm able to get down to their level and understand what it is they are not understanding.
32 Subjects: including calculus, Spanish, English, chemistry
...I have 10 plus years as a Math classroom teacher using best practices teaching methods. Algebra 2 is a continuation of Algebra 1 and can be overwhelming for some students. With a little help
of an experienced tutor, most anxieties can be calmed.
9 Subjects: including calculus, geometry, algebra 1, algebra 2
...I am patient, positive, and here to help! If you are frustrated working on a problem, I will be right there with you until you fully understand it. I am passionate about education and I want
to help make your classes easier for you.I have taken Calculus I through Calculus III and I tutored Calc III at Marquette University.
12 Subjects: including calculus, geometry, algebra 1, GRE
...Your eventual goal is to be able to use concepts and skills that you see and hear and repeat specific processes to get the right answer. For this to happen and become natural, you must
actually do the work yourself and practice. I understand this and most of what I do is to help you improve your understanding of the subject from these perspectives.
22 Subjects: including calculus, Spanish, writing, geometry
Related Wauwatosa, WI Tutors
Wauwatosa, WI Accounting Tutors
Wauwatosa, WI ACT Tutors
Wauwatosa, WI Algebra Tutors
Wauwatosa, WI Algebra 2 Tutors
Wauwatosa, WI Calculus Tutors
Wauwatosa, WI Geometry Tutors
Wauwatosa, WI Math Tutors
Wauwatosa, WI Prealgebra Tutors
Wauwatosa, WI Precalculus Tutors
Wauwatosa, WI SAT Tutors
Wauwatosa, WI SAT Math Tutors
Wauwatosa, WI Science Tutors
Wauwatosa, WI Statistics Tutors
Wauwatosa, WI Trigonometry Tutors
Nearby Cities With calculus Tutor
Brookfield, WI calculus Tutors
Brown Deer, WI calculus Tutors
Butler, WI calculus Tutors
Elm Grove, WI calculus Tutors
Glendale, WI calculus Tutors
Greenfield, WI calculus Tutors
Menomonee Falls calculus Tutors
Milwaukee, WI calculus Tutors
New Berlin, WI calculus Tutors
River Hills, WI calculus Tutors
Saint Francis, WI calculus Tutors
Waukesha calculus Tutors
West Allis, WI calculus Tutors
West Milwaukee, WI calculus Tutors
Whitefish Bay, WI calculus Tutors
|
{"url":"http://www.purplemath.com/Wauwatosa_WI_Calculus_tutors.php","timestamp":"2014-04-20T06:37:26Z","content_type":null,"content_length":"24255","record_id":"<urn:uuid:3d5ad3cf-3157-4264-a66a-34d29980c82a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
|
After tax returns, constant growth stock, NPV, cash budget,
1. 365178
After tax returns, constant growth stock, NPV, cash budget,
EQ 2. (TCO D) After-tax returns
West Corporation has $50,000, which it plans to invest in marketable securities. The corporation is choosing between the following three equally risky securities: Alachua County tax-free municipal
bonds yielding 6 percent; Exxon bonds yielding 9.5 percent; GM preferred stock with a dividend yield of 9 percent. West's corporate tax rate is 35 percent. What is the after-tax return on the best
investment alternative? (Assume the company chooses on the basis of after-tax returns.)
EQ 2. (TCO D) After-tax returns
The XYZ Corporation has $1000,000 which it plans to invest in marketable securities. The corporation is choosing between the following three equally risky securities: Greenville County tax-free
municipal bonds yielding 7 percent; AB corp. bonds yielding 11.5 percent; XZ corp. preferred stock with a dividend yield of 10 percent. XYZ's corporate tax rate is 35 percent. What is the after-tax
return on the best investment alternative? (Assume the company chooses on the basis of after-tax returns.)
EQ 4. (TCO E) Constant growth stock
The last dividend paid by ABC Company was $2.00. ABC's growth rate is expected to be a constant 4 percent. ABC's required rate of return on equity (ks) is 9 percent. What is the current price of
ABCs common stock?
EQ 4. (TCO E) Constant growth stock
The last dividend paid by XYZ Company was $1.00. XYZs growth rate is expected to be a constant 5 percent. XYZ's required rate of return on equity (ks) is 10 percent. What is the current price of
XYZ's common stock?
EQ 4. (TCO E) Constant growth stock
The last dividend paid by Klein Company was $1.00. Klein's growth rate is expected to be a constant 4 percent. Klein's required rate of return on equity (ks) is 12 percent. What is the current price
of Klein's common stock?
EQ 5. (TCO B, F) NPV
As the director of capital budgeting for Bingo Corporation, you are evaluating two mutually exclusive projects with the following net cash flows:
A B
-$150,000 -$225,000
1 $55,000 $85,000
2 $70,000 $55,000
3 $70,000 $65,000
4 $75,000 $55,000
5 $80,000 $65,000
If Bingo Corporation's cost of capital is 10 percent, defend which project would you choose.
EQ 5. (TCO B, F) NPV
As the director of capital budgeting for ABC Corporation, you are evaluating two mutually exclusive projects with the following net cash flows:
A B
-$200,000 -$125,000
1 $65,000 $60,000
2 $60,000 $40,000
3 $50,000 $40,000
4 $65,000 $35,000
5 $50,000 $45,000
EQ 5. (TCO B, F) NPV
As the director of capital budgeting for Denver Corporation, you are evaluating two mutually exclusive projects with the following net cash flows:
A B
-$100,000 -$125,000
1 $25,000 $25,000
2 $30,000 $35,000
3 $30,000 $35,000
4 $25,000 $35,000
5 $30,000 $45,000
If Denver's cost of capital is 15 percent, defend which project would you choose.
EQ 6. (TCO G) Cash budget
XYZ Corporation's budgeted monthly sales are $4,500. Forty percent of its customers pay in the first month and take the 1 percent discount. The remaining 60 percent pay in the month following the
sale and do not receive a discount. XYZ's bad debts are very small and are excluded from this analysis. Purchases for next month's sales are constant each month at $1,200. Other payments for wages,
rent, and taxes are constant at $800 per month. Construct a single month's cash budget with the information given. What is the average cash gain or (loss) during a typical month for XYZ Corporation?
EQ 6. (TCO G) Cash budget
ABC Corporation's budgeted monthly sales are $4,000. Forty percent of its customers pay in the first month and take the 3 percent discount. The remaining 60 percent pay in the month following the
sale and do not receive a discount. ABC's bad debts are very small and are excluded from this analysis. Purchases for next month's sales are constant each month at $2,000. Other payments for wages,
rent, and taxes are constant at $500 per month. Construct a single month's cash budget with the information given. What is the average cash gain or (loss) during a typical month for ABC Corporation?
EQ 9. (TCO D)
Which of the following Treasury bonds will have the largest amount of interest rate risk (price risk) and why?
A. A 7% coupon bond which matures in 12 years.
B. A 9% coupon bond which matures in 10 years.
C. A 12% coupon bond which matures in 7 years.
D. A 7% coupon bond which matures in 9 years.
E. A 10% coupon bond which matures in 10 years.
EQ 9. (TCO D)
All treasury securities have a yield to maturity of 7%-- so the yield curve is flat. If the yield to maturity on all Treasuries were to decline to 6%, which of the following bonds would have the
largest percentage increase in price and why?
A. 15 year zero coupon Treasury bond.
B. 12 year Treasury bond with a 10% annual coupon.
C. 15 year Treasury bond with a 12 percent annual coupon.
D. 2 year zero coupon Treasury bond.
E. 2 year Treasury bond with a 15% annual coupon.
EQ 9. (TCO D)
Which of the following statements is most correct and why?
A. If a bond sells for less than par, and then its yield to maturity is less than its coupon rate.
B. If a bond sells at par, and then its current yield will be less than its yield to maturity.
C. Assuming that both bonds are held to maturity and are of equal risk, a bond selling for more than par with ten years to maturity will have a lower current yield and higher capital gain relative to
a bond that sells at par.
D. Answers A and C are correct.
E. None of the answers above is correct.
EQ 10. (TCO C) Payback period
The ABC Corporation is considering a project which has an up-front cost paid today at t = 0. The project will generate positive cash flows of $70,000 a year at the end of each of the next five years.
The project's NPV is $90,000 and the company's WACC is 12 percent. What is the project's simple, regular payback?
EQ 10. (TCO C) Payback period
The Bingo Corporation is considering a project which has an up-front cost paid today at t = 0. The project will generate positive cash flows of $85,000 a year at the end of each of the next five
years. The project's NPV is $100,000 and the company's WACC is 10 percent. What is the project's simple, regular payback?
EQ 10. (TCO C) Payback period
Haig Aircraft is considering a project which has an up-front cost paid today at t = 0. The project will generate positive cash flows of $60,000 a year at the end of each of the next five years. The
project's NPV is $75,000 and the company's WACC is 10 percent. What is the project's simple, regular payback?
EQ 11. (TCO H) WACC
A company has determined that its optimal capital structure consists of 30 percent debt and 70 percent equity. Given the following information, calculate the firm's weighted average cost of capital.
Rd = 6%
Tax rate = 35%
P0 = $35
Growth = 0%
D0 = $3.00
EQ 11. (TCO H) WACC
A company has determined that its optimal capital structure consists of 50 percent debt and 50 percent equity. Given the following information, calculate the firm's weighted average cost of capital.
Rd = 7%
Tax rate = 40%
P0 = $30
Growth = 0%
D0 = $2.50
EQ 11. (TCO H) WACC
A company has determined that its optimal capital structure consists of 40 percent debt and 60 percent equity. Given the following information, calculate the firm's weighted average cost of capital.
rd= 6%
Tax rate = 40%
P0 = $25
Growth = 0%
D0 = $2.00
(12.3) Sunk costs
Which of the following statements is CORRECT?
a. A sunk cost is any cost that must be expended in order to complete a project and bring it into operation.
b. A sunk cost is any cost that was expended in the past but can be recovered if the firm decides not to go forward with the project.
c. A sunk cost is a cost that was incurred and expensed in the past and cannot be recovered if the firm decides not to go forward with the project.
d. Sunk costs were formerly hard to deal with, but once the NPV method came into wide use, it became possible to simply include sunk costs in the cash flows and then calculate the PV.
e. A good example of a sunk cost is a situation where a retailer opens a new store, and that leads to a decline in sales of some of the firm's existing stores.
(12.3) Externalities
Which of the following statements is CORRECT?
a. An externality is a situation where a project would have an adverse effect on some other part of the firm's overall operations. If the project would have a favorable effect on other operations,
then this is not an externality.
b. An example of an externality is a situation where a bank opens a new office, and that new office causes deposits in the bank's other offices to decline.
c. The NPV method automatically deals correctly with externalities, even if the externalities are not specifically identified, but the IRR method does not. This is another reason to favor the NPV.
d. Both the NPV and IRR methods deal correctly with externalities, even if the externalities are not specifically identified. However, the payback method does not.
e. The identification of an externality can never lead to an increase in the calculated NPV.
(Comp: 12.1-12.4) Ann. op. CFs, depr'n and int. given
As a member of Midwest Corporation's financial staff, you must estimate the Year 1 operating cash flow for a proposed project with the following data. What is the Year 1 operating cash flow?
Sales revenues, each year $35,000
Depreciation $10,000
Other operating costs $17,000
Interest expense $4,000
Tax rate 35.0%
a. $12,380
b. $13,032
c. $13,718
d. $14,440
e. $15,200
(Comp: 12.1-12.4) Ann. op. CFs, depr'n and int. given
You work for the Sing Oil Company, which is considering a new project whose data are shown below. What is the project's operating cash flow for Year 1?
Sales revenues, each year $55,000
Depreciation $8,000
Other operating costs $25,000
Interest expense $8,000
Tax rate 35.0%
a. $21,185
b. $22,300
c. $23,415
d. $24,586
e. $25,815
(Comp: 12.1-12.4) NPV, SL, constant CFs, cannibalization
TexMex Products is considering a new salsa whose data are shown below. The equipment that would be used would be depreciated by the straight-line method over its 3-year life, would have zero salvage
value, and no new working capital would be required. Revenues and other operating costs are expected to be constant over the project's 3-year life. However, this project would compete with other
TexMex products and would reduce their pre-tax annual cash flows. What is the project's NPV? (Hint: Cash flows are constant in Years 1-3.)
WACC 10.0%
Pre-tax cash flow reduction in other products (cannibalization) $5,000
Investment cost (depr'ble basis) $65,000
Straight-line depr'n rate 33.333%
Sales revenues, each year $75,000
Annual operating costs, ex. depr'n $25,000
Tax rate 35.0%
a. $25,269
b. $26,599
c. $27,929
d. $29,325
e. $30,792
(14.3) Additional funds needed
Jefferson City Computers has developed a forecasting model to estimate its AFN for the upcoming year. All else being equal, which of the following factors is most likely to lead to an increase of the
additional funds needed (AFN)?
a. A sharp increase in its forecasted sales.
b. A sharp reduction in its forecasted sales.
c. The company reduces its dividend payout ratio.
d. The company switches its materials purchases to a supplier that sells on terms of 1/5,
net 90, from a supplier whose terms are 3/15, net 35.
e. The company discovers that it has excess capacity in its fixed assets.
26 (14.3) Additional funds needed
Which of the following statements is CORRECT?
a. Since accounts payable and accrued liabilities must eventually be paid off, as these
accounts increase, AFN as calculated by the AFN equation must also increase.
b. Suppose a firm is operating its fixed assets at below 100% of capacity, but it has no excess current assets. Based on the AFN equation, its AFN will be larger than if it had been operating with
excess capacity in both fixed and current assets.
c. If a firm retains all of its earnings, then it cannot require any additional funds to support sales growth.
d. Additional funds needed (AFN) are typically raised using a combination of notes payable,
Long-term debt and common stock. Such funds are non-spontaneous in the sense that they
require explicit financing decisions to obtain them.
e. If a firm has a positive free cash flow, then it must have either a zero or a negative AFN.
(14.3) Additional funds needed--positive AFN
Clayton Industries is planning its operations for next year, and Ronnie Clayton, the CEO, wants you to forecast the firm's additional funds needed (AFN). Data for use in your forecast are shown
below. Based on the AFN equation, what is the AFN for the coming year? Dollars are in millions.
Last year's sales = S0 $350 Last year's accounts payable $40
Sales growth rate = g 30% Last year's notes payable (to bank) $50
Last year's total assets = A0 $500 Last year's accruals $30
Last year's profit margin = M 5% Target payout ratio 60%
a. $102.8
b. $108.2
c. $113.9
d. $119.9
e. $125.9
(14.5) Finding the target fixed assets/sales ratio
Last year Emery Industries had $450 million of sales and $225 million of fixed assets, so its FA/Sales ratio was 50%. However, its fixed assets were used at only 65% of capacity. If the company had
been able to sell off enough of its fixed assets at book value so that it was operating at full capacity, with sales held constant at $450 million, how much cash (in millions) would it have
a. $74.81
b. $78.75
c. $82.69
d. $86.82
e. $91.16
(14.5) Forecasting financial requirements
Which of the following statements is CORRECT?
a. When we use the AFN formula, we assume that the ratios of assets and liabilities to
sales (A*/S0 and L*/S0) vary from year to year in a stable, predictable manner.
b. When fixed assets are added in large, discrete units as a company grows, the
c. assumption of constant ratios is more appropriate than if assets are relatively small and can be added in small increments as sales grow.
c. Firms whose fixed assets are 'lumpy' frequently have excess capacity, and this should be accounted for in the financial forecasting process.
d. For a firm that uses lumpy assets, it is impossible to have small increases in sales without expanding fixed assets.
e. A graph showing the relationship between assets and sales is always linear if economies of scale exist.
(14.3) AFN formula method
Which of the following statements is CORRECT?
a. Inherent in the basic, unmodified AFN formula are these two assumptions: (1) each asset item must grow at the same rate as sales, and (2) spontaneous liability accounts must also grow at the same
rate as sales.
b. If a firm's assets are growing at a positive rate, but its retained earnings are not increasing, then it would be impossible for the firm's AFN to be negative.
c. If a firm increases its dividend payout ratio in anticipation of higher earnings, but sales and earnings actually decrease, then the firm's actual AFN must, mathematically, exceed the previously
calculated AFN.
d. Higher sales usually require higher asset levels, and this leads to what we call AFN. However, the AFN will be zero if the firm chooses to retain all of its profits, i.e., to have a zero dividend
payout ratio.
e. Dividend policy does not affect the requirement for external funds based on the AFN formula method.
The solution computes After tax returns, constant growth stock, NPV, cash budget, payback, WACC in 30 problems
|
{"url":"https://brainmass.com/business/weighted-average-cost-of-capital/365178","timestamp":"2014-04-21T14:58:28Z","content_type":null,"content_length":"45897","record_id":"<urn:uuid:47526113-26ea-4e5a-a753-32665414c601>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recursive unsolvability of Post’s problem of ‘tag’ and other topics in the theory of Turing machines
Results 1 - 10 of 76
, 1990
"... Linear logic, introduced by Girard, is a refinement of classical logic with a natural, intrinsic accounting of resources. We show that unlike most other propositional (quantifier-free) logics,
full propositional linear logic is undecidable. Further, we prove that without the modal storage operator, ..."
Cited by 90 (17 self)
Add to MetaCart
Linear logic, introduced by Girard, is a refinement of classical logic with a natural, intrinsic accounting of resources. We show that unlike most other propositional (quantifier-free) logics, full
propositional linear logic is undecidable. Further, we prove that without the modal storage operator, which indicates unboundedness of resources, the decision problem becomes pspace-complete. We also
establish membership in np for the multiplicative fragment, np-completeness for the multiplicative fragment extended with unrestricted weakening, and undecidability for certain fragments of
noncommutative propositional linear logic. 1 Introduction Linear logic, introduced by Girard [14, 18, 17], is a refinement of classical logic which may be derived from a Gentzen-style sequent
calculus axiomatization of classical logic in three steps. The resulting sequent system Lincoln@CS.Stanford.EDU Department of Computer Science, Stanford University, Stanford, CA 94305, and the
Computer Science Labo...
- Annals of Pure and Applied Logic , 1998
"... Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design,
respectively. Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions ..."
Cited by 58 (4 self)
Add to MetaCart
Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design, respectively.
Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions of typability and type checking . Typability asks for a term whether there exists some type it can
be given. Type checking asks, for a particular term and type, whether the term can be given that type. The decidability of these problems has been settled for restrictions and extensions of F and
related systems and complexity lower-bounds have been determined for typability in F, but this report is the rst to resolve whether these problems are decidable for System F. This report proves that
type checking in F is undecidable, by a reduction from semiuni cation, and that typability in F is undecidable, by a reduction from type checking. Because there is an easy reduction from typability
to typ...
, 1992
"... this paper we will restrict attention to propositional linear logic. The sequent calculus notation, due to Gentzen [10], uses roman letters for propositions, and greek letters for sequences of
formulas. A sequent is composed of two sequences of formulas separated by a `, or turnstile symbol. One may ..."
Cited by 24 (1 self)
Add to MetaCart
this paper we will restrict attention to propositional linear logic. The sequent calculus notation, due to Gentzen [10], uses roman letters for propositions, and greek letters for sequences of
formulas. A sequent is composed of two sequences of formulas separated by a `, or turnstile symbol. One may read the sequent \Delta ` \Gamma as asserting that the multiplicative conjunction of the
formulas in \Delta together imply the multiplicative disjunction of the formulas in \Gamma. A sequent calculus proof rule consists of a set of hypothesis sequents, displayed above a horizontal line,
and a single conclusion sequent, displayed below the line, as below: Hypothesis1 Hypothesis2 Conclusion 4 Connections to Other Logics
"... . In this paper we consider lower bounds for external-memory computational geometry problems. We find that it is not quite clear which model of computation to use when considering such problems.
As an attempt of providing a model, we define the external memory Turing machine model, and we derive low ..."
Cited by 23 (4 self)
Add to MetaCart
. In this paper we consider lower bounds for external-memory computational geometry problems. We find that it is not quite clear which model of computation to use when considering such problems. As
an attempt of providing a model, we define the external memory Turing machine model, and we derive lower bounds for a number of problems, including the element distinctness problem, in this model.
For these lower bounds we make the standard assumption that records are indivisible. Waiving the indivisibility assumption we show how to beat the lower bound for element distinctness. As an
alternative model, we briefly discuss an external-memory version of the algebraic computation tree. 1. Introduction The Input/Output (or just I/O) communication between fast internal memory and
slower external storage is the bottleneck in many large-scale computations. The significance of this bottleneck is increasing as internal computation gets faster, and as parallel computation gains
popularity. Currently,...
- Advances in Linear Logic , 1994
"... Introduction There are many interesting fragments of linear logic worthy of study in their own right, most described by the connectives which they employ. Full linear logic includes all the
logical connectives, which come in three dual pairs: the exponentials ! and ?, the additives & and \Phi, and ..."
Cited by 21 (0 self)
Add to MetaCart
Introduction There are many interesting fragments of linear logic worthy of study in their own right, most described by the connectives which they employ. Full linear logic includes all the logical
connectives, which come in three dual pairs: the exponentials ! and ?, the additives & and \Phi, and the multiplicatives\Omega and . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ........ . SRI International Computer Science Laboratory, Menlo Park CA 94025 USA. Work
supported under NSF Grant CCR-9224858. lincoln@csl.sri.com http://www.csl.sri.com/lincoln/lincoln.html Patrick Lincoln For the most part we will consider fragments of linear logic built up using
these connectives in any combination. For example, full linear logic formulas may employ any connective, while multiplic
- THEORY OF COMPUTING SYSTEMS , 1995
"... Splicing systems are generative mechanisms based on the splicing operation introduced by Tom Head as a model of DNA recombination. We prove that the generative power of finite extended splicing
systems equals that of Turing machines, provided we consider multisets or provided a control mechanism is ..."
Cited by 19 (3 self)
Add to MetaCart
Splicing systems are generative mechanisms based on the splicing operation introduced by Tom Head as a model of DNA recombination. We prove that the generative power of finite extended splicing
systems equals that of Turing machines, provided we consider multisets or provided a control mechanism is added. We also show that there exist universal splicing systems with the properties above, i.
e. there exists a universal splicing system with fixed components which can simulate the behaviour of any given splicing system, when an encoding of the particular splicing system is added to its set
of axioms. In this way the possibility of designing programmable DNA computers based on the splicing operation is proved.
- Natural Computing , 2008
"... Abstract. A highly desired part of the synthetic biology toolbox is an embedded chemical microcontroller, capable of autonomously following a logic program specified by a set of instructions,
and interacting with its cellular environment. Strategies for incorporating logic in aqueous chemistry have ..."
Cited by 19 (5 self)
Add to MetaCart
Abstract. A highly desired part of the synthetic biology toolbox is an embedded chemical microcontroller, capable of autonomously following a logic program specified by a set of instructions, and
interacting with its cellular environment. Strategies for incorporating logic in aqueous chemistry have focused primarily on implementing components, such as logic gates, that are composed into
larger circuits, with each logic gate in the circuit corresponding to one or more molecular species. With this paradigm, designing and producing new molecular species is necessary to perform larger
computations. An alternative approach begins by noticing that chemical systems on the small scale are fundamentally discrete and stochastic. In particular, the exact molecular counts of each
molecular species present, is an intrinsically available form of information. This might appear to be a very weak form of information, perhaps quite difficult for computations to utilize. Indeed, it
has been shown that error-free Turing universal computation is impossible in this setting. Nevertheless, we show a design of a chemical computer that achieves fast and reliable Turing-universal
computation using molecular counts. Our scheme uses only a small number of different molecular species to do computation of arbitrary complexity. The total probability of error of the computation can
be made arbitrarily small (but not zero) by adjusting the initial molecular counts of certain species. While physical implementations would be difficult, these results demonstrate that molecular
counts can be a useful form of information for small molecular systems such as those operating within cellular environments. Key words. stochastic chemical kinetics; molecular counts;
Turing-universal computation; probabilistic computation 1. Introduction. Many
- Journal of Computer and System Sciences , 1995
"... We settle an open problem, the inclusion problem for pattern languages [1, 2]. This is the first known case where inclusion is undecidable for generative devices having a trivially decidable
equivalence problem. The study of patterns goes back to the seminal work of Thue [16] and is important also, ..."
Cited by 17 (3 self)
Add to MetaCart
We settle an open problem, the inclusion problem for pattern languages [1, 2]. This is the first known case where inclusion is undecidable for generative devices having a trivially decidable
equivalence problem. The study of patterns goes back to the seminal work of Thue [16] and is important also, for instance, in recent work concerning inductive inference and learning. Our results
concern both erasing and nonerasing patterns. Categories and Subject Descriptors: F.4.3 [Mathematical Logic and Formal Languages ]: Formal Languages --- Decision problems, Algebraic language theory;
F.4.1 [Mathe- matical Logic and Formal Languages]: Mathematical Logic --- Computability theory. General Terms: Theory, Formal Languages Additional Key Words and Phrases: Patterns, Inclusion problems,
Equivalence problems, Descriptive patterns, Unavoidable patterns 1 Introduction. The main result Instead of an exhaustive definition for a language, [7], it is sometimes better to give more leeway in
the defi...
, 1996
"... The multiplicative fragment of second order propositional linear logic is shown to be undecidable. Introduction Decision problems for propositional (quantifier-free) linear logic were first
studied by Lincoln et al. [LMSS]. In referring to linear logic fragments, let M stand for multiplicatives, A ..."
Cited by 14 (3 self)
Add to MetaCart
The multiplicative fragment of second order propositional linear logic is shown to be undecidable. Introduction Decision problems for propositional (quantifier-free) linear logic were first studied
by Lincoln et al. [LMSS]. In referring to linear logic fragments, let M stand for multiplicatives, A for additives, E for exponentials (or modalities), 1 for first order quantifiers, 2 for second
order propositional quantifiers, and I for "intuitionistic" version. In [LMSS] it was shown that full propositional linear logic is undecidable and that MALL is PSPACEcomplete. The main problems left
open in [LMSS] were the NP-completeness of MLL, the decidability of MELL, and the decidability of various fragments of propositional linear logic without exponentials but extended with second order
propositional quantifiers. The decision problem for MELL is still open, but almost all the other problems have been solved: ffl The NP-completeness of MLL has been obtained by Kanovich [K1].
Moreover, Linco...
- Journal of Symbolic Logic , 1995
"... . Recently, Lincoln, Scedrov and Shankar showed that the multiplicative fragment of second order intuitionistic linear logic is undecidable, using an encoding of second order intuitionistic
logic. Their argument applies to the multiplicative-additive fragment, but it does not work in the classical c ..."
Cited by 12 (3 self)
Add to MetaCart
. Recently, Lincoln, Scedrov and Shankar showed that the multiplicative fragment of second order intuitionistic linear logic is undecidable, using an encoding of second order intuitionistic logic.
Their argument applies to the multiplicative-additive fragment, but it does not work in the classical case, because second order classical logic is decidable. Here we show that the
multiplicative-additive fragment of second order classical linear logic is also undecidable, using an encoding of two-counter machines originally due to Kanovich. The faithfulness of this encoding is
proved by means of the phase semantics. In this paper, we write LL for the full propositional fragment of linear logic, MLL for the multiplicative fragment, MALL for the multiplicative-additive
fragment, and MELL for the multiplicative-exponential fragment. Similarly, we write ILL, IMLL, etc. for the fragments of intuitionistic linear logic, LL2, MLL2, etc. for the second order fragments of
linear logic, and ILL2, IML...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=72629","timestamp":"2014-04-18T22:07:53Z","content_type":null,"content_length":"40438","record_id":"<urn:uuid:9b08d521-82e8-4f78-9fdd-b6d1e28c998a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Michael J. Mossinghoff
Department of Mathematics and Computer Science Office: Chambers 3039
Davidson College Office hours: M 3:00-3:45, TWTh 3:00-4:00
Box 6996 Phone: 704-894-2238
Davidson, NC 28035-6996 Fax: 704-894-2005
My vita is available, and so are titles and abstracts of many mathematical talks of mine.
You can view my academic family tree.
• Spring 2014: Math 150, Linear Algebra.
• Spring 2014: Math 360, Topology.
• During the summer of 2014, I will co-host the Summer@ICERM program on Polygons and Polynomials at the Institute for Computational and Experimental Research in Mathematics (ICERM) at Brown
University, together with Sinai Robins of Nanyang Technological University.
• The Palmetto Number Theory Series (PANTS) XX conference on number theory was held at Davidson College on September 7-8, 2013.
• I am an associate editor of Mathematics of Computation.
• My research interests lie in number theory, including analytic, computational, combinatorial, and transcendental aspects. I am particularly interested in extremal problems concerning integer
polynomials, especially problems dealing with Mahler's measure of a polynomial, and problems regarding factors of polynomials with restricted coefficients. I am also interested in some problems
in discrete geometry.
• A nice introduction to many of the problems I work on can be found in the book of P. Borwein, Computational Excursions in Analysis and Number Theory (Springer-Verlag, 2002).
• I maintain a site on Lehmer's problem concerning Mahler's measure of polynomials with integer coefficients.
• I also maintain a site on Wieferich pairs, Barker sequences, and circulant Hadamard matrices.
Selected Publications
• The second edition of my textbook Combinatorics and Graph Theory, written with John Harris and Jeffry Hirst, appeared in 2008. It is published by Springer, in the series Undergraduate Texts in
• The first edition of Combinatorics and Graph Theory, also published by Springer, appeared in 2000.
• I am co-author of the Maple V Quick Reference (Brooks/Cole, 1994), with Nancy Blachman.
Preprints and Recent Reprints of Research Papers (PDF)
Information on published papers, and some more reprints, are available on my vita.
Upcoming Conferences
You can find me at the following upcoming meetings and conferences.
• Southeast Regional Meeting on Numbers (SERMON), Wofford College, Spartanburg, South Carolina, April 26-27, 2014.
• Polygons and Polynomials, the Summer@ICERM 2014 research program for undergraduates, held at the Institute for Computational and Experimental Research in Mathematics (ICERM), Brown University,
Providence, Rhode Island, June 16 - August 8, 2014.
Michael Mossinghoff
mimossinghoff at davidson dot edu
Last modified April 14, 2014.
|
{"url":"http://academics.davidson.edu/math/mossinghoff/","timestamp":"2014-04-16T07:47:01Z","content_type":null,"content_length":"22578","record_id":"<urn:uuid:a08893fd-5276-45c9-a616-c66e80ac38db>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Polyhedron Project
Date: 10/29/2003 at 00:25:39
From: Alan
Subject: project ideas on a polyhedron
Dear Dr Math,
I read about the tetrahedron and it fascinated me. I would like to do
a science project having someting to do with this. What can I try?
Date: 10/29/2003 at 06:43:53
From: Doctor Korsak
Subject: Re: project ideas on a polyhedron
Hello Alan,
For your science project you can build a tetrahedron from cardboard or
perhaps clear plastic sheets cut into four triangles that can be glued
together. I gather that if you read about it, you know the
arrangement of the four sides and six edges forming a so-called closed
2-D surface in 3-D space.
Now, if you want something more exciting to add to that, make out of
cardboard the only other known polyhedron (as far as I know) having
the same property as the tetrahedron: every possible edge made by
joining any two vertices of the polyhedron is already in the
polyhedron. It is a 7 vertex polyhedron, as described in the
following book:
"Excursions Into Mathematics" by Anatole Beck, Michael N. Bleicher,
Donald W. Crowe, Worth Publishers, 1969, pp.31-39.
This polyhedron, therefore, could have 7 ants sitting at its vertices
such that each ant can see every other ant. It is topologically
equivalent to a doughnut, or a coffee cup.
In case you have trouble finding the book, here is a description
using coordinate geometry:
Vertex x y z
2 3 -3 1
3 1 -2 3
4 -1 2 3
5 -3 3 1
6 -3 -3 0
The 14 faces of the polyhedron are the triangles comprised by the
following triples of vertices:
Using 3-D coordinate geometry, you can compute the lengths of the
triangle sides and cut them out of some cardboard, leaving tabs along
edges for gluing them together. Better yet, you could carve a very
intriguing figure out of a solid block of plexiglass, or perhaps
wood. The above mentioned book displays a layout of partially joined
triangles in Illustration 33 and actually shows the finished
polyhedron in Illustration 32.
Of course, if you have access to Mathematica or some other graphics
software, you could do all kinds of fancy things like rotating this
polyhedron in 3-D to view it!
Please contact Dr. Math if you need further help.
- Doctor Korsak, The Math Forum
Date: 10/30/2003 at 01:37:04
From: Alan
Subject: Thank you
Thank you! I will consult you again when I encounter other problems.
|
{"url":"http://mathforum.org/library/drmath/view/64652.html","timestamp":"2014-04-20T01:17:42Z","content_type":null,"content_length":"7573","record_id":"<urn:uuid:f43f95d6-2ab9-41a1-b566-bc74b53e658b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hamilton Saturated Hypergraphs of Essentially Minimum Size
For $1\leq \ell< k$, an $\ell$-overlapping cycle is a $k$-uniform hypergraph in which, for some cyclic vertex ordering, every edge consists of $k$ consecutive vertices and every two consecutive
edges share exactly $\ell$ vertices. A $k$-uniform hypergraph $H$ is $\ell$-Hamiltonian saturated, $1\le \ell\le k-1$, if $H$ does not contain an $\ell$-overlapping Hamiltonian cycle $C^{(k)}_n(\ell)
$ but every hypergraph obtained from $H$ by adding one more edge does contain $C^{(k)}_n(\ell)$. Let $sat(n,k,\ell)$ be the smallest number of edges in an $\ell$-Hamiltonian saturated $k$-uniform
hypergraph on $n$ vertices. Clark and Entringer proved in 1983 that $sat(n,2,1)=\lceil \tfrac{3n}2\rceil$ and the second author showed recently that $sat(n,k,k-1)=\Theta(n^{k-1})$ for every $k\ge2$.
In this paper we prove that $sat(n,k,\ell)=\Theta(n^{\ell})$ for $\ell=1$ as well as for all $k\ge5$ and $\ell\ge0.8k$.
Saturation number; Hamiltonian cycles; Hypergraphs
Full Text:
|
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v20i2p25/0","timestamp":"2014-04-21T03:03:43Z","content_type":null,"content_length":"16034","record_id":"<urn:uuid:d1021b65-0c08-482e-b9b3-547523c9a4c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Impedance of a parallel RC circuit?
thanks but is it ok if i square the equation and then finally square root to get the Xc back?
Xc=-1/wc ???? is it right???
No. You need to keep the complex part of the equation. You can use the trick that Phrak described to get the more normal form of A + jB for the complex number. Unless you are talking about DC, the
capacitor will affect the voltage division, so you can't just get rid of the complex part of the voltage divider.
|
{"url":"http://www.physicsforums.com/showthread.php?t=446802","timestamp":"2014-04-17T21:34:27Z","content_type":null,"content_length":"48284","record_id":"<urn:uuid:4897f6fb-58b8-4e99-8e9f-90e44ea4c878>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bending and Torsion Minimization of Toroidal Loops
Avik Das
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2012-165
June 8, 2012
We focus on an optimization problem on parameterized surfaces of genus one. In particular we trade off the penalty functions for bending a toroidal path and for applying a twist to it and aim to find
local minima of this cost function. This analysis forms a key element in demonstrating the different regular homotopy classes of tori. A generalization of this surface optimization, which considers
curvature as well as any shearing of its parameter grid, may be used to find the most optimal direct path from an arbitrary closed manifold of genus one into one of the four basic representatives of
the four regular homotopy classes of tori.
BibTeX citation:
Author = {Das, Avik},
Title = {Bending and Torsion Minimization of Toroidal Loops},
Institution = {EECS Department, University of California, Berkeley},
Year = {2012},
Month = {Jun},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-165.html},
Number = {UCB/EECS-2012-165},
Abstract = {We focus on an optimization problem on parameterized surfaces of genus one. In particular we trade off the penalty functions for bending a toroidal path and for applying a twist to it and aim to find local minima of this cost function. This analysis forms a key element in demonstrating the different regular homotopy classes of tori. A generalization of this surface optimization, which considers curvature as well as any shearing of its parameter grid, may be used to find the most optimal direct path from an arbitrary closed manifold of genus one into one of the four basic representatives of the four regular homotopy classes of tori.}
EndNote citation:
%0 Report
%A Das, Avik
%T Bending and Torsion Minimization of Toroidal Loops
%I EECS Department, University of California, Berkeley
%D 2012
%8 June 8
%@ UCB/EECS-2012-165
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-165.html
%F Das:EECS-2012-165
|
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-165.html","timestamp":"2014-04-21T14:42:25Z","content_type":null,"content_length":"5831","record_id":"<urn:uuid:db1877c2-98e4-449e-aa4e-04be72eff1b0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Programming
Linear Programming
Linear programming, or so-called “solver” PC software, can be used to figure out the best answer to an assortment of questions expressed in terms of functional relationships. In a fundamental
sense, linear programming is a straightforward development from the more basic “what if” approach to problem solving. In a traditional “what-if” approach, one simply enters data or a change in
input values in a computer spreadsheet and uses spreadsheet formulas and macros to calculate resulting output values. A prime advantage of the “what if” approach is that it allows managers to
consider the cost, revenue, and profit implications of changes in a wide variety of operating conditions.
An important limitation of the “what if” method is that it can become a tedious means of searching for the best answer to planning and operating decisions. Linear programming can be thought of as
performing “what-if in reverse.” All you do is specify appropriate objectives and a series of constraint conditions, and the software will determine the appropriate input values. When production
goals are specified in light of operating constraints, linear programming can be used to identify the cost-minimizing operating plan. Alternatively, using linear programming techniques, a manager
might find the profit-maximizing activity level by specifying production relationships and the amount of available resources.
Linear programming has proven to be an adept tool for solving problems encountered in a number of business, engineering, financial, and scientific applications. In a practical sense, typically
encountered constrained optimization problems seldom have a simple rule-of-thumb solution. This chapter illustrates how linear programming can be used to quickly and easily solve real-world
decision problems.
Linear programming is a useful method for analyzing and solving certain types of management decision problems. To know when linear programming techniques can be applied, it is necessary to
understand basic underlying assumptions.
inequality constraints
Many production or resource constraints faced by managers are inequalities. Constraints often limit the resource employed to less than or equal to (≤) some fixed amount available. In other
instances, constraints specify that the quantity or quality of output must be greater than or equal to (≥) some minimum requirement. Linear programming handles such constraint inequalities easily,
making it a useful technique for finding the optimal solution to many management decision problems. Atypical linear programming problem might be to maximize output subject to the constraint that no
more than 40 hours of skilled labor per week be used. This labor constraint is expressed as an inequality where skilled labor ≤ 40 hours per week. Such an operating constraint means that no more
than 40 hours of skilled labor can be used, but some excess capacity is permissible, at least in the short run. If 36 hours of skilled labor were fruitfully employed during a given week, the 4
hours per week of unused labor is called excess capacity.
Linearity Assumption
As its name implies, linear programming can be applied only in situations in which the relevant objective function and constraint conditions are linear. Typical managerial decision problems that
can be solved using the linear programming method involve revenue and cost functions and their composite, the profit function. Each must be linear; as output increases, revenues, costs, and profits
must increase in a linear fashion. For revenues to be a linear function of output, product prices must be constant. For costs to be a linear function of output, both returns to scale and input
prices must be constant.
Constant input prices, when combined with constant returns to scale, result in a linear total cost function. If both output prices and unit costs are constant, then profit contribution and profits
also rise in a linear fashion with output. Product and input prices are relatively constant when a typical firm can buy unlimited quantities of input and sell an unlimited amount of output without
changing prices. This occurs under conditions of pure competition. Therefore, linear programming methods are clearly applicable for firms in perfectly competitive industries with constant returns
to scale. However, linear programming is also applicable in many other instances. Because linear programming is used for marginal analysis, it focuses on the effects of fairly modest output, price,
and input changes. For moderate changes in current operating conditions, a constant-returns-to-scale assumption is often valid. Similarly, input and output prices are typically unaffected by modest
changes from current levels. As a result, sales revenue, cost, and profit functions are often linear when only moderate changes in operations are contemplated and use of linear programming methods
is valid.
To illustrate, suppose that an oil company must choose the optimal output mix for a refinery with a capacity of 150,000 barrels of oil per day. The oil company is justified in basing its analysis
on the $25-per-barrel prevailing market price for crude oil, regardless of how much is purchased or sold. This assumption might not be valid if the companies were to quickly expand refinery output
by a factor of 10, but within the 150,000 barrels per day range of feasible output, prices will be approximately constant. Up to capacity limits, it is also reasonable to expect that a doubling of
crude oil input would lead to a doubling of refined output, and that returns to scale are constant. In many instances, the underlying assumption of linearity is entirely valid. In other instances
in which the objective function and constraint conditions can be usefully approximated by linear relations, the linear programming technique can also be fruitfully applied. Only when objective
functions and constraint conditions are inherently nonlinear must more complicated mathematical programming techniques be applied. In most managerial applications, even when the assumption of
linearity does not hold precisely, linear approximations seldom distort the analysis.
|
{"url":"http://e-university.wisdomjobs.com/managerial-economics-tutorial-307/linear-programming-1634.html","timestamp":"2014-04-21T09:35:43Z","content_type":null,"content_length":"31349","record_id":"<urn:uuid:17f231f7-11d0-46bc-950a-f972f5cb70b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Data Mining Classification and Prediction ppt
Data Mining Classification and Prediction Presentation Transcript
1.Data Mining: Concepts and Techniques
2.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
3.Classification vs. Prediction
predicts categorical class labels
classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute and uses it in classifying new data
models continuous-valued functions, i.e., predicts unknown or missing values
Typical Applications
credit approval
target marketing
medical diagnosis
treatment effectiveness analysis
4.Classification—A Two-Step Process
Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
The set of tuples used for model construction: training set
The model is represented as classification rules, decision trees, or mathematical formulae
Model usage: for classifying future or unknown objects
Estimate accuracy of the model
The known label of test sample is compared with the classified result from the model
Accuracy rate is the percentage of test set samples that are correctly classified by the model
Test set is independent of training set, otherwise over-fitting will occur
5.Classification Process (1): Model Construction
6.Classification Process (2): Use the Model in Prediction
7.Supervised vs. Unsupervised Learning
Supervised learning (classification)
Supervision: The training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
New data is classified based on the training set
Unsupervised learning (clustering)
The class labels of training data is unknown
Given a set of measurements, observations, etc. with the aim of establishing the existence of classes or clusters in the data
8.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
9.Issues regarding classification and prediction (1): Data Preparation
Data cleaning
Preprocess data in order to reduce noise and handle missing values
Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes
Data transformation
Generalize and/or normalize data
10.Issues regarding classification and prediction (2): Evaluating Classification Methods
Predictive accuracy
Speed and scalability
time to construct the model
time to use the model
handling noise and missing values
efficiency in disk-resident databases
understanding and insight provded by the model
Goodness of rules
decision tree size
compactness of classification rules
11.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
12.Classification by Decision Tree Induction
Decision tree
A flow-chart-like tree structure
Internal node denotes a test on an attribute
Branch represents an outcome of the test
Leaf nodes represent class labels or class distribution
Decision tree generation consists of two phases
Tree construction
At start, all the training examples are at the root
Partition examples recursively based on selected attributes
Tree pruning
Identify and remove branches that reflect noise or outliers
Use of decision tree: Classifying an unknown sample
Test the attribute values of the sample against the decision tree
13.Training Dataset
14.Output: A Decision Tree for “buys_computer”
15.Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
Tree is constructed in a top-down recursive divide-and-conquer manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
Conditions for stopping partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf
There are no samples left
16.Attribute Selection Measure
Information gain (ID3/C4.5)
All attributes are assumed to be categorical
Can be modified for continuous-valued attributes
Gini index (IBM IntelligentMiner)
All attributes are assumed continuous-valued
Assume there exist several possible split values for each attribute
May need other tools, such as clustering, to get the possible split values
Can be modified for categorical attributes
17.Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
Assume there are two classes, P and N
Let the set of examples S contain p elements of class P and n elements of class N
18.Information Gain in Decision Tree Induction
19.Attribute Selection by Information Gain Computation
20.Gini Index (IBM IntelligentMiner)
21.Extracting Classification Rules from Trees
22.Avoid Overfitting in Classification
The generated tree may overfit the training data
Too many branches, some may reflect anomalies due to noise or outliers
Result is in poor accuracy for unseen samples
Two approaches to avoid overfitting
Prepruning: Halt tree construction early—do not split a node if this would result in the goodness measure falling below a threshold
Difficult to choose an appropriate threshold
Postpruning: Remove branches from a “fully grown” tree—get a sequence of progressively pruned trees
Use a set of data different from the training data to decide which is the “best pruned tree”
23.Approaches to Determine the Final Tree Size
Separate training (2/3) and testing (1/3) sets
Use cross validation, e.g., 10-fold cross validation
Use all the data for training
but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node may improve the entire distribution
Use minimum description length (MDL) principle:
halting growth of the tree when the encoding is minimized
24.Enhancements to basic decision tree induction
Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals
Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
Attribute construction
Create new attributes based on existing ones that are sparsely represented
This reduces fragmentation, repetition, and replication
25.Classification in Large Databases
Classification—a classical problem extensively studied by statisticians and machine learning researchers
Scalability: Classifying data sets with millions of examples and hundreds of attributes with reasonable speed
Why decision tree induction in data mining?
relatively faster learning speed (than other classification methods)
convertible to simple and easy to understand classification rules
can use SQL queries for accessing databases
comparable classification accuracy with other methods
26.Scalable Decision Tree Induction Methods in Data Mining Studies
SLIQ (EDBT’96 — Mehta et al.)
builds an index for each attribute and only class list and the current attribute list reside in memory
SPRINT (VLDB’96 — J. Shafer et al.)
constructs an attribute list data structure
PUBLIC (VLDB’98 — Rastogi & Shim)
integrates tree splitting and tree pruning: stop growing the tree earlier
RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
separates the scalability aspects from the criteria that determine the quality of the tree
builds an AVC-list (attribute, value, class label)
27.Data Cube-Based Decision-Tree Induction
Integration of generalization with decision-tree induction (Kamber et al’97).
Classification at primitive concept levels
E.g., precise temperature, humidity, outlook, etc.
Low-level concepts, scattered classes, bushy classification-trees
Semantic interpretation problems.
Cube-based multi-level classification
Relevance analysis at multi-levels.
Information-gain analysis with dimension + level.
28.Presentation of Classification Results
29. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
30.Bayesian Classification: Why?
Probabilistic learning: Calculate explicit probabilities for hypothesis, among the most practical approaches to certain types of learning problems
Incremental: Each training example can incrementally increase/decrease the probability that a hypothesis is correct. Prior knowledge can be combined with observed data.
Probabilistic prediction: Predict multiple hypotheses, weighted by their probabilities
Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured
31.Bayesian Theorem
32.Naïve Bayes Classifier (I)
33.Naive Bayesian Classifier (II)
34.Bayesian classification
35.Estimating a-posteriori probabilities
36.Naïve Bayesian Classification
37.Play-tennis example: estimating P(xi|C)
38.Play-tennis example: classifying X
39.The independence hypothesis…
40.Bayesian Belief Networks (I)
41.Bayesian Belief Networks (II)
Bayesian belief network allows a subset of the variables conditionally independent
A graphical model of causal relationships
Several cases of learning Bayesian belief networks
Given both network structure and all the variables: easy
Given network structure but only some variables
When the network structure is not known in advance
42.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
43.Neural Networks
prediction accuracy is generally high
robust, works when training examples contain errors
output may be discrete, real-valued, or a vector of several discrete or real-valued attributes
fast evaluation of the learned target function
long training time
difficult to understand the learned function (weights)
not easy to incorporate domain knowledge
44.A Neuron
45.Network Training
The ultimate objective of training
obtain a set of weights that makes almost all the tuples in the training data classified correctly
Initialize weights with random values
Feed the input tuples into the network one by one
For each unit
Compute the net input to the unit as a linear combination of all the inputs to the unit
Compute the output value using the activation function
Compute the error
Update the weights and the bias
46.Multi-Layer Perceptron
47.Network Pruning and Rule Extraction
48.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
49.Association-Based Classification
Several methods for association-based classification
ARCS: Quantitative association mining and clustering of association rules (Lent et al’97)
It beats C4.5 in (mainly) scalability and also accuracy
Associative classification: (Liu et al’98)
It mines high support and high confidence rules in the form of “cond_set => y”, where y is a class label
CAEP (Classification by aggregating emerging patterns) (Dong et al’99)
Emerging patterns (EPs): the itemsets whose support increases significantly from one class to another
Mine Eps based on minimum support and growth rate
50.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
51.Other Classification Methodsk-nearest neighbor classifier
case-based reasoning
Genetic algorithm
Rough set approach
Fuzzy set approaches
52.Instance-Based Methods
53.The k-Nearest Neighbor Algorithm
54.Discussion on the k-NN Algorithm
55.Case-Based Reasoning
56.Also uses: lazy evaluation + analyze similar instances
Difference: Instances are not “points in a Euclidean space”
Example: Water faucet problem in CADET (Sycara et al’92)
Instances represented by rich symbolic descriptions (e.g., function graphs)
Multiple retrieved cases may be combined
Tight coupling between case retrieval, knowledge-based reasoning, and problem solving
Research issues
Indexing based on syntactic similarity measure, and when failure, backtracking, and adapting to additional cases
57.Genetic Algorithms
GA: based on an analogy to biological evolution
Each rule is represented by a string of bits
An initial population is created consisting of randomly generated rules
e.g., IF A1 and Not A2 then C2 can be encoded as 100
Based on the notion of survival of the fittest, a new population is formed to consists of the fittest rules and their offsprings
The fitness of a rule is represented by its classification accuracy on a set of training examples
Offsprings are generated by crossover and mutation
58.Rough Set Approach
Rough sets are used to approximately or “roughly” define equivalent classes
A rough set for a given class C is approximated by two sets: a lower approximation (certain to be in C) and an upper approximation (cannot be described as not belonging to C)
Finding the minimal subsets (reducts) of attributes (for feature reduction) is NP-hard but a discernibility matrix is used to reduce the computation intensity
59.Fuzzy Set Approaches
Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as using fuzzy membership graph)
Attribute values are converted to fuzzy values
e.g., income is mapped into the discrete categories {low, medium, high} with fuzzy values calculated
For a given new sample, more than one fuzzy value may apply
Each applicable rule contributes a vote for membership in the categories
Typically, the truth values for each predicted category are summed
60.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
61.What Is Prediction?
Prediction is similar to classification
First, construct a model
Second, use model to predict unknown value
Major method for prediction is regression
Linear and multiple regression
Non-linear regression
Prediction is different from classification
Classification refers to predict categorical class label
Prediction models continuous-valued functions
62.Predictive Modeling in Databases
Predictive modeling: Predict data values or construct generalized linear models based on the database data.
One can only predict value ranges or category distributions
Method outline:
Minimal generalization
Attribute relevance analysis
Generalized linear model construction
Determine the major factors which influence the prediction
Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.
Multi-level prediction: drill-down and roll-up analysis
63.Regress Analysis and Log-Linear Models in Prediction
64. Locally Weighted Regression
65.Prediction: Numerical Data
66.Prediction: Categorical Data
67.Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by backpropagation
Classification based on concepts from association rule mining
Other Classification Methods
Classification accuracy
68.Classification Accuracy: Estimating Error Rates
69.Boosting and Bagging
Boosting increases classification accuracy
Applicable to decision trees or Bayesian classifier
Learn a series of classifiers, where each classifier in the series pays more attention to the examples misclassified by its predecessor
Boosting requires only linear time and constant space
70.Boosting Technique (II) — Algorithm
Classification is an extensively studied problem (mainly in statistics, machine learning & neural networks)
Classification is probably one of the most widely used data mining techniques with a lot of extensions
Scalability is still an important issue for database applications: thus combining classification with database techniques should be a promising topic
Research directions: classification of non-relational data, e.g., text, spatial, multimedia, etc..
|
{"url":"http://www.studygalaxy.com/ordinaryview.php?id=539","timestamp":"2014-04-16T16:11:58Z","content_type":null,"content_length":"33703","record_id":"<urn:uuid:4168c375-2182-49c0-9a05-85d075081bed>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pre-Calculus: Trig Equations and Quadratic Formula Video | MindBites
Pre-Calculus: Trig Equations and Quadratic Formula
About this Lesson
• Type: Video Tutorial
• Length: 14:55
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 161 MB
• Posted: 01/22/2009
This lesson is part of the following series:
Trigonometry: Full Course (152 lessons, $148.50)
Pre-Calculus Review (31 lessons, $61.38)
Trigonometry: Trigonometric Identities (23 lessons, $26.73)
Trigonometry: Solving Trigonometric Equations (5 lessons, $7.92)
Pre Cal: Solving Trigonometric Equations (5 lessons, $7.92)
Sometimes, trigonometric equations cannot be factored. To solve these equations, Professor Burger shows you how to apply the quadratic formula to find solutions to these equations. This is a
multi-step process that starts with simplifying the equation. After the equation is simplified, you will be able to solve the quadratic formula and then enter that answer in to solve for X. In the
lesson example, Professor Burger uses both a calculator and graphing to ensure he has the correct points. This lesson explains the covered material by walking through sample equations 3sin^2(2x) +
sin (2x)-1 = 0. This equation has 2 solutions over the interval from zero to 2*pi. This lesson is loaded with warnings about easy mistakes to make and pitfalls to be wary of when evaluating problems
like this.
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Precalculus. This course and others are available from Thinkwell, Inc. The full course can be found
at http://www.thinkwell.com/student/product/precalculus. The full course covers angles in degrees and radians, trigonometric functions, trigonometric expressions, trigonometric equations, vectors,
complex numbers, and more.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
This lesson has not been reviewed.
Please purchase the lesson to review.
Solving Trigonometric Equations Using the Quadratic Formula
Sometimes when you have an equation that you want to solve that has trigonometric function, and you try to factor it because it looks like it should be factorable somehow, sometimes you just may not
be able to factor it. It’s sort of a sad truth of life sometimes. But in that case we can use the quadratic formula or some other trick. Let me actually show you an example where the thing looks so
factorable and yet it turns out we’ve got to do something else.
So, how about this exotic equation. Suppose we have 23sin2sin210xx+?= and now I want to solve that for x. Okay, now what’s the first thing I notice? Well, I see 2sin2x and a sin2x, so it sounds like,
gee, maybe I could sort of factor this thing. It looks like a quadratic, and maybe I could sort of write it out somehow. Now, you may be more comfortable thinking about it this way—and in fact, this
might not be a bad idea. In fact, let me just write it out. I could write this. I could replace the 2sin2x by just something—I’ll call it s for something. So I could write this as 3 times something
squared, plus just the something, minus 1 equals zero, where we should remember what the something is. The something is sin2x. This sometimes makes it easier to sort of see what’s going on in terms
of what you have to factor without worrying about all those pesky trig functions until the very end when you really need to factor completely.
Well, now you might want to try to factor this, and try to put two binomials together and make combos and stuff like that. It turns out you won’t be able to succeed because this actually can’t be
factored in some nice way. So what do you do? Do you throw your hands up? No! You use the quadratic formula to let you solve for x. So if we do that, let’s use the quadratic formula right now. So
what does s equal? Well, s equals negative b—so again, the roles here, this would be playing the role of a; b would be the invisible +1; and c is the not-so-invisible -1. So I see s equals negative
b—so that’s –1, 2b±, which is just 1 squared, which is 1, -4ac. Well, that’s 4 times –3, which would be -12, but that negative sign with a -2 makes this a +12, and that’s all under the square root,
all divided by 2a, which would be 6. So I see that s actually equals two numbers. It equals 1316?+ and 1316??.
Now, let me actually evaluate those numbers numerically—at least approximate them numerically because we’re going to be crunching them with the two signs. So I see that s either equals—when I put the
plus sign in there, 1316?+, if you computer that on the calculator you see 0.43425 and it keeps going. And if you put in the minus there, 1316??, that will be a negative number, and it turns out that
the negative number is -0.7675918 and it keeps going forever, too. So now I’ve solved the s part of the problem. Unfortunately, that wasn’t the question. The question wasn’t “find s.” The question
was, ”find x.”
So I have to remember now what x equals. So now I’m going to see two here two separate, little teeny simple trig equations that I have to consider. So let’s now consider those trig equations. So what
I see now is sin2xequals that; sin2xequals that. So let’s write those out here. In fact, let’s just solve them separately. But let me just write them both out before we forget what they are. So we
have two things to do. We have sin2xequals, and the first one is 0.43425, so we have to solve that. But then we also have to solve sin2xequals the other solution, which is -0.76759 and such. So let’s
solve this one first. Let’s take a look at this one here first.
Well, how do I solve that? By the way, just like in these other problems we’ve been looking at, we want to just look for the solutions that are between zero and 2?. So in fact what I want here are
the solutions to this original equation, where the x is going to live inside of zero to 2?, so anywhere from zero to 360°, if you wish. So I only want to find those x’s that are in there.
Okay, let’s see. I want sine of something to be 0.43-something. How many solutions should there be? Well, here’s the sine curve. Let’s just take a look at that and visualize it. I’m only looking from
zero to 2? so I don’t look at those things. I just look at that complete cycle. And I want to know, where does that have a height of 0.4-something? Well, 0.4-something is around here, and I see there
are two places. Someplace right over here, and then someplace that’s
? minus that. So I’ve got to first of all find that reference angle, and I can just do that on a calculator by using the arc sine or the inverse sine function. So if you do that, you take your
calculator and use inverse sine, and what I see is that 2x will equal—well, how many radians? Well, it turns out it’s 0.44921 and so on radians. That’s how many radians this whole angle is. But
remember I want actually half of that, so we just have to divide through by 2. So if I divide through by 2, what do I see? Well, I’m going to see that divided by two, which is 0.224607 radians.
There’s one answer. All that work and we just have one answer. Boy, you’d think after all that work you’d have a ton of answers.
Okay, there’s another solution to this thing, right? Remember what we just saw was where we equal 0.43-something, which is over here. But there’s another solution. That other solution is way over
here. So how do I find that solution? Well, if this angle right here gives me a value for sine of that, then the angle that’s sort of symmetric but a little bit in toward the ? radian by the same
exact amount will give me another value by symmetry. So all I have to do now is take ? and subtract off the angle that I found here. So I have to take ? and subtract that number, so the other
solution is 2x?=minus that number that we just found. Well, what is that? Well, I take ? and then you subtract off 0.44921, and that equals 2.6927. So to find the other solution I have to divide this
by 2, so I see x equals 1.3461 radians. So there’s a second answer. And these are the two answers to this particular equation.
But now I’ve got to do the exact same thing with this one! Oh, boy, it never ends. It never, ever ends. So how do I do that? These two solutions that we just found were for this red possibility, but
now we also have to consider this possibility, so that’s another probably two solutions; that would be my guess. Let’s take a look at the sine curve again and see where those things are.
So I’m only looking between zero and 2? and I want to know, well, for which angles do I get -0.76? So -0.76 is around here somewhere, and you can see there are actually going to be two answers again.
One is going to be over here and one is going to be over here. So let’s see if we can find those. How the heck are we going to find those values? Well, we’ve got to be a little bit careful here. So I
could use the arc sine function again. What I do is I put in the inverse sine of that, and what does that give me? That has a negative sign in front of it. Oh, that negative sign is so critical, so
critical. The whole would change. You know the whole world would be a completely different place if I wouldn’t put a negative sign right there? It’s true.
Okay, I get the answer of -0.87507-something. Now, what does that mean exactly? Let’s think about that, because that seems to be a negative answer, so that’s a little disturbing. Why is that a
negative answer? Because the answer should be sort of living around here. Well, we have to remember what the inverse sine does. The inverse sine operates on a slightly different region than we’re
looking at. We have to now consider the following. The inverse sine is operating right along here. This is where the inverse sine is operating, and this is 2?, and this is -2?. So the answer that we
found for what angle gives me -0.76-something is actually this negative angle right here. Well, that’s not exactly the one we want. We know which one we want. We want it between zero and 2?. So what
should I do?
Well, let’s think about it for a second. We have the sine function here, and I’ve got this piece sort of way down here, not exactly where I wanted it, but what can I do? Well, notice if I were to add
2? to that answer, where would that put me? Well, that would put me right over here. So if I add 2?, that shifts me right over to here, and now that’s one of the answers I want, so that looks good.
So what I have to do is take that answer I just found, which I know is wrong, but I should add 2? to it. That will shift me over to the right region I’m looking at. That will give me this wing here.
Why that wing? Because notice that that’s sort of living on this part of the wing, so when I shift it I’m going to live on the corresponding part of the wing. I won’t live over here; I’ll live over
here. So let’s add 2? to that, and what I get is that 2x equal 5.40811-something. And notice that’s in the right range because 5.4-something is what? It’s going to be between ? and 2?. ? is
3.14-something, 2? is 6-point-something, so this is right somewhere in-between it, so that looks good, at least visually. And so therefore if I want to find out what x equals, I just divide that by 2
and I see that one solution for x here is going to be 2.70405 and it keeps going for a while. Okay, so there’s one solution.
But don’t forget, in this one just like the other, there must be two solutions. What we just found was this solution right here. How would I find this solution? Well, it’s going to be a little tricky
because I have to figure out what this angle should be, and all I know is that these things are going to be symmetric, so they’re right around here. So I know this value; how do I find this value?
Well, let’s think about it. If I know this value, I know its distance away from 2?. So what is its distance away from 2?? Well, that distance actually is exactly the distance that we already found.
It turns
out it’s going to be this value right here. So if you take 2? and subtract this angle, you take 2?, and now we’re going to undo a little what we just did. Take 2? and subtract off the angle we found
here, 5.40811 and so forth. What we get is this thing without the negative sign, so we get 0.87-something. That tells me this little difference here, and that’s how much I have to add to this side to
bring it in.
Let me say that again. I’ve got this angle right here. I just found that. But I want to find this little teeny piece there because that’s how much I have to add to ?. So I took 2? minus that angle,
found that value to be 0.87507 and so forth, and now I just take ? and add that amount and that should be my other answer. So I just add this to ?, plus ?. Isn’t this amazingly involved? And so what
that tells me is that the other answer is 2x equals 4.0166-something. And notice that also makes sense because that’s just a little bit bigger than ? but not as big as the 5 answer we got before, so
that looks good. I just divide by 2 now, finally, to get x alone. Phew! This is exhausting, isn’t it? But yet if you’re careful, you can nail everything. You see 2.0083-something radians. These are
all in radians, of course.
So these are the two solutions from the interval zero to 2? for this equation. We already found the two solutions from zero to 2? for this equation. And we saw that those two equations together,
those solutions, actually give us the solutions with the original quadratic. So I used the quadratic formula. I found the two values for s, but was actually sine. I had to solve both those sine
equations separately, got two answers for this and got two answers for that. There are four solutions to this baby. But really take it careful and easy when you have quadratics and trig things
because, as you can see, there are lot of different cases that you have to look at very, very carefully, including the graphs, to make sure you get those right points. And also caution when you’re
taking the inverse sine function. Remember the region where the inverse sine function is defined and make sure that you’re finding the right spot.
Good luck. You can do it.
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet:
|
{"url":"http://www.mindbites.com/lesson/1233-pre-calculus-trig-equations-and-quadratic-formula","timestamp":"2014-04-20T06:13:01Z","content_type":null,"content_length":"67201","record_id":"<urn:uuid:63f3cc2e-7530-4fce-a483-1bdcee0e4b49>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
specialization order
The specialisation order
The specialisation order is a way of turning any topological space $X$ into a preordered set (with the same underlying set).
Given a topological space $X$ with topology $\mathcal{O}(X)$, the specialization order $\leq$ is defined by either of the following two equivalent conditions:
1. $x \leq y$ if and only if $x$ belongs to the closure of $\{y\}$; we say that $x$ is a specialisation of $y$.
2. $x \leq y$ if and only if $\forall_{U \colon \mathcal{O}(X)} (x \in U) \Rightarrow (y \in U).$
(Note: some authors use the opposite ordering convention.)
$X$ is $T_0$ if and only if its specialisation order is a partial order. $X$ is $T_1$ iff its specialisation order is equality. $X$ is $R_1$ (like $T_1$ but without $T_0$) iff its specialisation
order is an equivalence relation. (See separation axioms.)
Given a continuous map $f: X \to Y$ between topological spaces, it is order-preserving relative to the specialisation order. Thus, we have a faithful functor $Spec$ from the category of $\Top$ of
topological spaces to the category $\Pros$ of preordered sets.
In the other direction, to each proset $X$ we may associate a topological space whose elements are those of $X$, and whose open sets are precisely the upward-closed sets with respect to the preorder.
This topology is called the specialization topology. This defines a functor
$i \colon ProSet \to Top$
which is a full embedding; the essential image of this functor is the category of Alexandroff spaces (spaces in which an arbitrary intersection of open sets is open). Hence the category of prosets is
equivalent to the category of Alexandroff spaces.
In fact, we have an adjunction $i \dashv Spec$, making $ProSet$ a coreflective subcategory of $Top$. In particular, the counit evaluated at a space $X$,
$i(Spec(X)) \to X,$
is the identity function at the level of sets, and is continuous because any open $U$ of $X$ is upward-closed with respect to $\leq$, according to the second equivalent condition of the definition of
the specialization order.
This adjunction restricts to an adjoint equivalence between the categories $\Fin\Pros$ and $\Fin\Top$ of finite prosets and finite topological spaces. The unit and counit are both identity functions
at the level of sets, so we in fact have an equivalence between these categories as concrete categories.
|
{"url":"http://ncatlab.org/nlab/show/specialization+order","timestamp":"2014-04-21T00:16:04Z","content_type":null,"content_length":"23271","record_id":"<urn:uuid:161cbff4-5cec-4b9f-9dcf-d0473c968fae>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Empirical nonparametric control charts for
high-quality processes
Willem Albers
Department of Applied Mathematics
University of Twente
P.O. Box 217, 7500 AE Enschede
The Netherlands
Abstract. For attribute data with (very) small failure rates often control charts are used which decide whether
to stop or to continue each time r failures have occurred, for some r 1. Because of the small probabilities
involved, such charts are very sensitive to estimation effects. This is true in particular if the underlying failure
rate varies and hence the distributions involved are not geometric. Such a situation calls for a nonparametric
approach, but this may require far more Phase I observations than are typically available in practice. In the
present paper it is shown how this obstacle can be effectively overcome by looking not at the sum but rather at
the maximum of each group of size r.
Keywords and phrases: Statistical Process Control, health care monitoring, geometric charts, average run length,
estimated parameters, order statistics
2000 Mathematics Subject Classification: 62P10, 62C05, 62G15
1 Introduction and motivation
High-quality processes are by now a regular phenomenon in industrial settings, due tot
the fact that production standards have been increasing over the last few decades. Moreover,
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/183/0124285.html","timestamp":"2014-04-16T18:41:37Z","content_type":null,"content_length":"8414","record_id":"<urn:uuid:5da3767c-c2cb-4cc4-a70f-e0a5e880a442>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
OpenGL gluProject() - strange results
up vote 4 down vote favorite
I'm tying to use gluProject function, to get point coordinates in 2d window after "rendering". The problem is, that I get strange results. For example: I've got a point with x=16.5. When I use
gluProject on it I get x= -6200.0.
If I understand gluProject OK, I should get a pixel position of that point on my screen after "rendering" - am I right? How can I convert that strange result into on-screen pixel coordinates?
Thank you for any help!
Code I use (by "sum1stolemyname"):
GLdouble modelview[16], projection[16]
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, *modelView);
glGetDoublev(GL_PROJECTION_MATRIX, *projection);
glGetIntegerv(GL_VIEWPORT, *viewport);
double tx, ty, tz;
for(i = 0; i < VertexCount; i++)
gluProject(vertices[i].x, vertices[i].y, vertices[i].z,
modelview, projection, viewport,
&tx, &ty, &tz)
opengl projection
1 This won't compile. You are mixing the * and & operators. – Karel Petranek Nov 27 '10 at 22:14
Corrected. Thanks! – MattheW Nov 27 '10 at 22:20
add comment
1 Answer
active oldest votes
Yeah it does unfortunately it does it as far as the far plane so you can construct a 'ray' into the world. It does not give you the actual position of the pixel you are drawing in 3D
space. What you can do is make a line from the screen to your point you get from the gluProject then use that to find the intersection point with your geometry to get the point in 3D
up vote 2 space. Or another option is to modify your input matrices and viewport so the far plane is a more reasonable distance.
down vote
1 Ha my bad I am thinking of GLUnProject. – Justin Meiners Nov 27 '10 at 22:36
add comment
Not the answer you're looking for? Browse other questions tagged opengl projection or ask your own question.
|
{"url":"http://stackoverflow.com/questions/4294158/opengl-gluproject-strange-results","timestamp":"2014-04-23T19:07:26Z","content_type":null,"content_length":"65404","record_id":"<urn:uuid:6d98992b-5a10-46a6-8c34-2cccba73d605>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simple Equivariant homology [no borel-Moore]
up vote 0 down vote favorite
Hey. I'm working with Bredon's equivariant cohomology. At some point I need to compute the $4$th equivariant cohomology group of $S^1 \x D^3$ relatively to its boundary for the antipodal action of $\
I found a paper that just use "equiviriant poincaré duality" to bring the problem to compute the $0$th equivariant homology which is said to be $\mathbb{Z}_2$. This sounds trivial, but Bredon does
not define what equivariant homology is, just cohomology, hence he does not talk of Poincare duality either (At least not in the 30 first pages of his paper "Equivariant Cohomology Theories" I use.).
I'm trying to search for a nice definition of equivariant homology, but every paper I find use concept I don't master well or at all (Vector bundles, groupoids, Borel-Moore Homology). I also tried to
design my own definition of equivariant homology, simply letting $H_0$ be the quotient of equivariant chains $H_0 = C_0^{\mathbb Z_2}/\partial(C_1^{\mathbb Z_2})$ with $C_i^{\mathbb Z_2} = \lbrace c
\in C_i| (1+\mathbb Z) \curvearrowright c = c \rbrace$. But using the relative exact sequence of chain, one finds $C_0^{\mathbb Z_2}(S^1\times D^3, S^1 \times S^2)$ is a subgroup of $C_0(S^1\times D^
3, S^1 \times S^2)$, which is $0$, so $H_0^{\mathbb Z_2}(S^1\times D^3, S^1 \times S^2) \approx 0$, not $\mathbb Z_2$ !
Many thanks for any help !
equivariant-cohomology homology ag.algebraic-geometry definitions
1 mathoverflow.net/questions/42548/… – stankewicz Apr 13 '12 at 15:15
Yeah, I've already found it. But I'm rather asking "what is equivariant homology ?". – laerne Apr 13 '12 at 16:33
add comment
1 Answer
active oldest votes
I'm afraid this is not an easy subject to get into. There is no problem defining Bredon homology. Maybe first in print in a 1975 memoir of Soren Illman. A more recent summary is in my
``Equivariant homotopy and cohomology theory''. However, just that won't help you. Tautologically, Poincar\'e duality is about duality. A manifold M embedded in Euclidean space is (after
adding a disjoint basepoint) Spanier-Whitehead dual to the Thom complex of its normal bundle. Equivariantly, you must start with an embedding of M into a representation, and then to
up vote understand Poincar\'e duality you must use $RO(G)$-graded Bredon homology and cohomology, which is only available for those coefficient systems that extend to Mackey functors. Even then,
3 down equivariant orientation theory is very subtle. There is a long paper by Costenoble, Waner, and myself that explains what is going on conceptually and geometrically. Unfortunately, unlike
vote nonequivariantly, unless one soups up cohomology to deal with equivariant fundamental groupoids, orientations are not (as far as is known) definable in purely homological terms. This is a
fascinating area, still in its infancy, but not an easy one.
Huuu... This is not encouraging, especially since I don't know what a Mackey functor and most of those concepts are... – laerne Apr 13 '12 at 16:31
add comment
Not the answer you're looking for? Browse other questions tagged equivariant-cohomology homology ag.algebraic-geometry definitions or ask your own question.
|
{"url":"http://mathoverflow.net/questions/93958/simple-equivariant-homology-no-borel-moore?sort=votes","timestamp":"2014-04-18T06:18:36Z","content_type":null,"content_length":"55482","record_id":"<urn:uuid:ba080c0f-3a3e-4e41-960e-58f367f3ae79>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arithmetic and Logical Instructions
Next: Constant-Manipulating Instructions Up: Description of the MIPS Previous: Addressing Modes
In all instructions below, Src2 can either be a register or an immediate value (a 16 bit integer). The immediate forms of the instructions are only included for reference. The assembler will
translate the more general form of an instruction (e.g., add) into the immediate form (e.g., addi) if the second argument is constant.
abs Rdest, Rsrc Absolute Value
Put the absolute value of the integer from register Rsrc in register Rdest.
add Rdest, Rsrc1, Src2 Addition (with overflow)
addi Rdest, Rsrc1, Imm Addition Immediate (with overflow)
addu Rdest, Rsrc1, Src2 Addition (without overflow)
addiu Rdest, Rsrc1, Imm Addition Immediate (without overflow)
Put the sum of the integers from register Rsrc1 and Src2 (or Imm) into register Rdest.
and Rdest, Rsrc1, Src2 AND
andi Rdest, Rsrc1, Imm AND Immediate
Put the logical AND of the integers from register Rsrc1 and Src2 (or Imm) into register Rdest.
div Rsrc1, Rsrc2 Divide (with overflow)
divu Rsrc1, Rsrc2 Divide (without overflow)
Divide the contents of the two registers. Leave the quotient in register lo and the remainder in register hi. Note that if an operand is negative, the remainder is unspecified by the MIPS
architecture and depends on the conventions of the machine on which SPIM is run.
div Rdest, Rsrc1, Src2 Divide (with overflow)
divu Rdest, Rsrc1, Src2 Divide (without overflow)
Put the quotient of the integers from register Rsrc1 and Src2 into register Rdest.
mul Rdest, Rsrc1, Src2 Multiply (without overflow)
mulo Rdest, Rsrc1, Src2 Multiply (with overflow)
mulou Rdest, Rsrc1, Src2 Unsigned Multiply (with overflow)
Put the product of the integers from register Rsrc1 and Src2 into register Rdest.
mult Rsrc1, Rsrc2 Multiply
multu Rsrc1, Rsrc2 Unsigned Multiply
Multiply the contents of the two registers. Leave the low-order word of the product in register lo and the high-word in register hi.
neg Rdest, Rsrc Negate Value (with overflow)
negu Rdest, Rsrc Negate Value (without overflow)
Put the negative of the integer from register Rsrc into register Rdest.
nor Rdest, Rsrc1, Src2 NOR
Put the logical NOR of the integers from register Rsrc1 and Src2 into register Rdest.
Put the bitwise logical negation of the integer from register Rsrc into register Rdest.
ori Rdest, Rsrc1, Imm OR Immediate
Put the logical OR of the integers from register Rsrc1 and Src2 (or Imm) into register Rdest.
rem Rdest, Rsrc1, Src2 Remainder
remu Rdest, Rsrc1, Src2 Unsigned Remainder
Put the remainder from dividing the integer in register Rsrc1 by the integer in Src2 into register Rdest. Note that if an operand is negative, the remainder is unspecified by the MIPS architecture
and depends on the conventions of the machine on which SPIM is run.
rol Rdest, Rsrc1, Src2 Rotate Left
ror Rdest, Rsrc1, Src2 Rotate Right
Rotate the contents of register Rsrc1 left (right) by the distance indicated by Src2 and put the result in register Rdest.
sll Rdest, Rsrc1, Src2 Shift Left Logical
sllv Rdest, Rsrc1, Rsrc2 Shift Left Logical Variable
sra Rdest, Rsrc1, Src2 Shift Right Arithmetic
srav Rdest, Rsrc1, Rsrc2 Shift Right Arithmetic Variable
srl Rdest, Rsrc1, Src2 Shift Right Logical
srlv Rdest, Rsrc1, Rsrc2 Shift Right Logical Variable
Shift the contents of register Rsrc1 left (right) by the distance indicated by Src2 (Rsrc2) and put the result in register Rdest.
sub Rdest, Rsrc1, Src2 Subtract (with overflow)
subu Rdest, Rsrc1, Src2 Subtract (without overflow)
Put the difference of the integers from register Rsrc1 and Src2 into register Rdest.
xor Rdest, Rsrc1, Src2 XOR
xori Rdest, Rsrc1, Imm XOR Immediate
Put the logical XOR of the integers from register Rsrc1 and Src2 (or Imm) into register Rdest.
Next: Constant-Manipulating Instructions Up: Description of the MIPS Previous: Addressing Modes Antony Hosking
Fri Apr 12 10:48:03 EST 1996
|
{"url":"https://www.cs.purdue.edu/homes/hosking/502/spim/node13.html","timestamp":"2014-04-17T01:33:30Z","content_type":null,"content_length":"12265","record_id":"<urn:uuid:e445ac50-8575-4ae9-9285-5c614737aef6>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integration via substitution
October 18th 2009, 09:53 PM #1
Sep 2007
Integration via substitution
Hi all,
I've got a bit of a problem with this question, if anyone could show the working it would be appreciated. I've got the suspicion it's meant to be done via substitution, but looking to learn.
$\int^{\pi/8}_0 (\theta + \cos4\theta) d\theta$
I'm pretty sure that the answer is (32 + pi^2) / 128 but of course, the answer isn't everything.
Anyone able to help me out? Cheers.
Hi all,
I've got a bit of a problem with this question, if anyone could show the working it would be appreciated. I've got the suspicion it's meant to be done via substitution, but looking to learn.
$\int^{\pi/8}_0 (\theta + \cos4\theta) d\theta$
I'm pretty sure that the answer is (32 + pi^2) / 128 but of course, the answer isn't everything.
Anyone able to help me out? Cheers.
Break it up into the sum of two integrals. One can be done directly, the other via a simple substitution.
Here's my working so far, however I think I've done something wrong.
$<br /> \int^{\pi/8}_0 (\theta + \cos4\theta) d\theta<br />$
$<br /> \int^{\pi/8}_0 \theta d\theta + \int^{\pi/8}_0 \cos4\theta d\theta<br />$
$<br /> \int^{\pi/8}_0 \theta d\theta + \cos4\int^{\pi/8}_0 \theta d\theta<br />$
$<br /> [\frac{\theta^2}{2}]^{\pi/8}_0 + [\cos2\theta^2]^{\pi/8}_0<br />$
Last edited by mr fantastic; October 19th 2009 at 12:36 AM. Reason: Restored original post
It's cos(4theta), presumably, not cos(4)*theta. You can't pull it out of the integral.
Yeah looking back it's quite silly of me. Brain snap.
This could be just as silly, but I think it's better.
Let $4\theta = x$ and $dx = 4d\theta$
$<br /> \int^{\pi/8}_0 \cos x dx<br />$
$<br /> \int^{\pi/8}_0 \frac{1}{4} \sin x <br />$
$<br /> [\frac{1}{4} \sin(4\theta)]^{\pi/8}_0<br />$
$<br /> = \frac{1}{4} <br />$
Yeah looking back it's quite silly of me. Brain snap.
This could be just as silly, but I think it's better.
Let $4\theta = x$ and $dx = 4d\theta$
$<br /> \int^{\pi/8}_0 \cos x dx<br />$
$<br /> \int^{\pi/8}_0 \frac{1}{4} \sin x <br />$
$<br /> [\frac{1}{4} \sin(4\theta)]^{\pi/8}_0<br />$
$<br /> = \frac{1}{4} <br />$
No need to go back to the original variable with definite integrals. Just substitute in the integral terminals as well. $\theta = 0 \Rightarrow x = ....$ and $\theta = \frac{\pi}{8} \Rightarrow x
= ....$ and evaluate the new definite integral. Saves time and money.
October 18th 2009, 09:55 PM #2
Aug 2009
October 18th 2009, 10:22 PM #3
Sep 2007
October 18th 2009, 10:53 PM #4
Aug 2009
October 19th 2009, 12:15 AM #5
October 19th 2009, 12:32 AM #6
Sep 2007
October 19th 2009, 12:38 AM #7
|
{"url":"http://mathhelpforum.com/calculus/108921-integration-via-substitution.html","timestamp":"2014-04-16T16:51:06Z","content_type":null,"content_length":"53133","record_id":"<urn:uuid:841fc057-e68f-4afd-8f1a-340397155d6a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics instruction is a lengthy, incremental process that spans all grade levels.
Jim's Hints
A-Plus Flashcard Maker. You can create math fact flashcards online. Customize your flashcards by type of number operation or even enter your own values to create individual flashcards.
Ask Dr. Math.. Dr. Math is an online math tutorial service, maintained by Drexel University, Philadelphia, PA. Students can browse a large archive of math questions and answers and post their own
questions as well. This tutorial site never closes!
Cognitive Strategies in Math. This site presents several thinking strategies that students can learn to master math computation and applied math problems. It is sponsored by the Special Education
Department, University of Nebraska-Lincoln.
Math Central. Billing itself as ‘an Internet service for mathematics students and teachers’, this site contains math teaching resources, a forum to post math questions, and a challenging ‘math
problem of the month.’ Math Central is sponsored by the University of Regina, Saskatchewan, Canada. ||Re
Math Worksheet Generator. Sponsored by Intervention Central, this free site allows users to create math computation worksheets and answer keys for addition, subtraction, multiplication, and division.
Use the Worksheet Generator to make math worksheets to use with students who need to build fluency with math facts.
Numberfly: Early Math Fluency Probes. Numberfly is a free application from Intervention Central that allows educators to create CBM progress-monitoring probes of 3 types that assess students'
developing numeracy skills: Quantity Discrimination, Missing Number, and Number Identification. This application also includes instructions for administering and scoring these early math assessments,
as well as suggestions for using Early Math Fluency Probes in a school-wide RTI Universal Screening.
Teacher2Teacher. Sponsored by Drexel University, Philadelphia, PA, Teacher2Teacher describes itself as “a resource for teachers and parents who have questions about teaching mathematics.”
Participants can browse archived math teaching questions by level (elementary, secondary), pose their own teaching questions, and take part in on-line discussions on math instruction topics of
|
{"url":"http://www.interventioncentral.org/academic-interventions/math","timestamp":"2014-04-19T17:33:46Z","content_type":null,"content_length":"32398","record_id":"<urn:uuid:db93b4bf-c1c5-4426-8c88-8cebe22d002b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Commutative Ring
UCLA, MATH 207c
Excerpt: ... p-ADIC ANALYTIC FAMILIES OF MODULAR FORMS 28 References Books [AFC] [BCM] [CGP] [CPI] [CRT] [GME] [IAT] [ICF] [LEC] [LFE] [MFM] [MFG] K. Iwasawa, Algebraic functions. Translations of
Mathematical Monographs, 118. American Mathematical Society, Providence, RI, 1993. xxii+287 N. Bourbaki, Alg`bre Commutative, Hermann, Paris, 196183 e K. S. Brown, Cohomology of Groups, Graduate
texts in Math. 87, Springer, 1982 K. Iwasawa, Collected Papers, Vol. 1-2, Springer, 2001 H. Matsumura, Commutative Ring Theory, Cambridge studies in advanced mathematics 8, Cambridge Univ. Press,
1986 H. Hida, Geometric Modular Forms and Elliptic Curves, 2000, World Scientic Publishing Co., Singapore (a list of errata downloadable at www.math.ucla.edu/ hida) G. Shimura, Introduction to the
Arithmetic Theory of Automorphic Functions, Princeton University Press and Iwanami Shoten, 1971, Princeton-Tokyo L. C. Washington, Introduction to Cyclotomic Fields, Graduate Text in Mathematics, 83,
Springer, 1980 H. Hida, Cohomological modular ...
|
{"url":"http://www.coursehero.com/keyword/commutative-ring/","timestamp":"2014-04-16T10:13:08Z","content_type":null,"content_length":"56296","record_id":"<urn:uuid:a90d2776-cbf7-4788-8e1e-7d94ab2dacb7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Structural kinetic modeling of metabolic networks
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Proc Natl Acad Sci U S A. Aug 8, 2006; 103(32): 11868–11873.
Applied Mathematics, Biochemistry
Structural kinetic modeling of metabolic networks
To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advance in the topological analysis
of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme–kinetic rate laws and their associated parameter values. Here we propose a method that
aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is
based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward
biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space.
The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.
Keywords: systems biology, computational biochemistry, metabolomics, metabolic regulation, biological robustness
Cellular metabolism constitutes a complex dynamical system and gives rise to a wide variety of dynamical phenomena, including multiple steady states and temporal oscillations. The elucidation,
understanding, and eventually prediction of the behavior of metabolic systems represent one of the primary challenges in the postgenomic era (1 –5). To this end, substantial effort has been
dedicated in recent years to develop and investigate detailed models of cellular metabolic processes (6, 7).
Once a mathematical model is established, it can serve a multitude of purposes: It can be regarded as a “virtual laboratory” that allows the building up of a characteristic description of the system
and gives insights into fundamental design principles of cellular functions, such as adaptability, robustness, and optimality (8 –10). Likewise, mathematical models of cellular metabolism serve as a
basis to investigate questions of major biotechnological importance, such as the effects of directed modifications of enzymatic activities to improve a desired property of the system (11).
However, although there has been a formidable progress in the structural (or topological) analysis of metabolic systems (12, 13), and despite the long history of metabolic modeling, dynamic models of
cellular metabolism incorporating a realistic complexity are still scarce.
This scarcity is owed to the fact that the construction of such models encompasses a number of profound difficulties. Most importantly, the construction of kinetic models relies on the precise
knowledge of the functional form of all involved enzymatic rate equations and their associated parameter values. Furthermore, even if both are available from the literature, parameter values may (and
usually do) depend on many factors such as tissue type or experimental and physiological conditions. Likewise, most enzyme–kinetic rate laws have been determined in vitro, and often there is only
little guidance available whether a particular rate function is still appropriate in vivo.
In this work, we aim to overcome some of these difficulties and propose a bridge between structural modeling, which is based on the stoichiometry alone (12 –14), and explicit kinetic models of
cellular metabolism. In particular, we demonstrate that it is possible to acquire an exact, detailed, and quantitative picture of the bifurcation structure of a given metabolic system, without
explicitly referring to any particular set of differential equations.
Our approach starts with the observation that in most circumstances an explicit kinetic model is not necessary. For example, to determine under which conditions a steady state loses its stability,
only a local linear model of the system at this state is needed, i.e., we only need to know the eigenvalues of the associated Jacobian matrix. Note that by stating this assertion, and unlike related
approaches to qualitative modeling (14, 15), we do not aim at an approximation of the system. The boundaries of an oscillatory region in parameter space that arise out of a Hopf (HO) bifurcation are
actually and exactly determined by the eigenvalues of the Jacobian. Likewise, other bifurcations, including bifurcations of higher codimension, can be deduced from the spectrum of eigenvalues and
give rise to specific dynamical behavior.
The basis of our approach thus consists of giving a parametric representation of the Jacobian matrix of an arbitrary metabolic system at each possible point in parameter space, such that each element
is accessible even without explicit knowledge of the functional form of the rate equations. Once this representation of the Jacobian is obtained, it allows to give a detailed statistical account of
the dynamical capabilities of a metabolic system, including the stability of steady states, the possibility of sustained oscillations, as well as the existence of quasiperiodic and chaotic regimes.
Moreover, the analysis is quantitative, i.e., it allows the deduction of specific biochemical conditions under which a certain dynamical behavior occurs and allows the assessment of the plausibility
or robustness of experimentally observed behavior by relating it to a quantifiable region in parameter space.
Structural Kinetic Modeling
The temporal behavior of a metabolic network, consisting of m metabolites and r reactions, can be described by a set of differential equations (6)
where S denotes the m-dimensional vector of biochemical reactants and N the m × r stoichiometric matrix. The r-dimensional vector of reaction rates ν(S, k) consists of nonlinear (and often unknown)
functions, which depend on the substrate concentrations S, as well as on a set of parameters k.
In the following, we will not assume explicit knowledge of the functional form of the rate equations but instead aim at a parametric representation of the Jacobian of the system. As the only
mathematical assumption about the system, we require the existence of a positive state S^0 that fulfills the steady-state condition Nν(S^0, k) = 0. Note that the state S^0 is not required to be
unique or stable.
Using the definitions (16, 17)
with i = 1, …, m and j = 1, …, r and applying the variable substitution S[i] = x[i]S[i]^0, the system can be rewritten in terms of new variables x(t)
The corresponding Jacobian of the normalized system at the steady state x^0 = 1 is
Because the new variables x are related to S by a simple multiplicative constant, J[x] can be straightforwardly transformed back into the original Jacobian.
Any further evaluation of the Jacobian now rests on the interpretation of the terms in Eq. 4. We begin with an analysis of the matrix Λ: Its elements Λ[ij] have the units of an inverse time and
consist of the elements of the stoichiometric matrix N, the vector of steady-state concentrations S^0, and the steady-state fluxes ν(S^0). Provided a metabolic system is designated for mathematical
modeling, we can assume that there exists some knowledge about the relevant concentrations, i.e., for each metabolite, we can specify an interval S[i]^− ≤ S[i]^0 ≤ S[i]^+, which defines a
physiologically feasible range of the respective concentration. Furthermore, the steady-state fluxes ν(S^0) are subject to the mass-balance constraint Nν(S^0) = 0, leaving only r − rank(N)
independent reaction rates (6). Again, an interval ν[i]^− ≤ ν[i]^0 ≤ ν[i]^+ can be specified for all independent reaction rates, defining a physiologically admissible flux space.
In the following, we denote S^0 and ν(S^0), usually corresponding to an experimentally observed state of the system, as the “operating point” at which the Jacobian is to be evaluated. This
information, together with the stoichiometric matrix N, fully specifies the matrix Λ.
The interpretation of the matrix θ[x]^μ in Eq. 4 is slightly more subtle because it involves the derivatives of the unknown functions μ(x) with respect to the new normalized variables at the point x^
0 = 1. Nevertheless, an interpretation of these parameters is possible and does not rely on the explicit knowledge of the detailed functional form of the rate equations: Each element θ[x[i]]^μ[j] of
the matrix θ[x]^μ measures the normalized degree of saturation of the reaction ν[j] with respect to a substrate S[i] at the operating point S^0. In particular, the dependence of almost all
biochemical rate laws ν[j](S) on a biochemical reactant S[i] can be written in the form ν[j](S, k) = k[v]S[i]^n/f[m](S, k), where n denotes an integer exponent and f[m](S, k) a polynomial of order m
in S[i] with positive coefficients k. All other reactants have been absorbed into k (6). After applying the transformation of Eq. 2, we obtain
with α [S[i]^0→0^α] = 1 and lim[S[i]^0→∞^α] = 0. To evaluate the matrix θ[x]^μ, we thus restrict each saturation parameter to a well defined interval, specified in the following way: As for most
biochemical rate laws n = m = 1, the partial derivative usually takes a value between zero and unity, determining the degree of saturation of the respective reaction. In the case of cooperative
behavior with exponents n = m ≥ 1, the normalized partial derivative lies in the interval [0, n] and, analogously, in the interval [0, −m] for inhibitory interaction with n = 0 and m ≥ 1. For
examples and proof of Eq. 5, see, respectively, Materials and Methods and the Supporting Appendix and Figs. 8–11, which are published as supporting information on the PNAS web site.
The matrices θ[x]^μ and Λ, as defined above, fully specify the Jacobian of the system. In the following, both quantities are treated as free parameters, defining the physiologically admissible
“parameter space” of the system. Importantly, our representation of the Jacobian fulfills the following three essential conditions. (i) The reconstructed Jacobian represents the exact Jacobian at
this point in parameter space. There is no approximation involved. (ii) Each term in the Jacobian is either directly experimentally accessible, such as flux or concentration values, or has a well
defined biochemical interpretation, such as a normalized degree of saturation of a given reaction. (iii) The Jacobian does not depend on a particular choice of specific rate functions. Rather, it
encompasses all possible kinetic models of the system that are consistent with the considerations above. In this sense, the reconstructed Jacobian is exhaustive.
An Illustrative Example
Before an application to more detailed biochemical models, we exemplify our approach using a simple hypothetical pathway. Suppose the reaction scheme depicted in Fig. 1, consisting of two metabolites
and three reactions, is designated for mathematical modeling. The starting point of our analysis is then usually an experimentally observed operating point, characterized by metabolite concentrations
S^0 = (G^0, T^0) and flux values ν^0 = (ν[1]^0, ν[2]^0, ν[3]^0). Furthermore, an analysis of the stoichiometric matrix N reveals that there is only one independent steady-state reaction rate c, with
ν[1]^0 = ν[2]^0 = c, and ν[3]^0 = 2c. Thus, we only require knowledge of the average overall flux through the pathway, specifying the value c. This information already enables the construction of the
matrix Λ, which defines the operating point at which the system is to be evaluated.
A simple pathway, reminiscent of a minimal model of yeast glycolysis (18). One unit of glucose (G) is converted into two units of ATP (T), with ATP exerting a positive feedback on its own production.
(Upper) A schematic representation. (Lower) The corresponding ...
The only remaining parameters are now the elements of the matrix θ[x]^μ. Starting with the dependence of each reaction on its substrate and assuming conventional biochemical rate laws, we obtain θ[G]
^μ2[2] with respect to its substrate glucose (G). Furthermore, θ[T]^μ3[3] with respect to ATP (T). Additionally, the known regulatory feedback of the metabolite T upon the reaction ν[2] is
incorporated by θ[T]^μ2n ≥ 1 denotes a positive integer. The matrix θ[x]^μ thus contains three nonzero values, each restricted to a well defined interval
We emphasize that the three elements of θ[x]^μ represent bona fide parameters of the system, specifying the Jacobian matrix no less unique and quantitative than a corresponding set of Michaelis
constants. Given the elements of θ[x]^μ as free parameters, we thus have obtained a parametric representation of the Jacobian matrix, which encompasses all possible kinetic models consistent with the
experimentally observed operating point. In the remainder of this work, we use our approach to evaluate the dynamical capabilities of two more complex examples of metabolic system.
Glycolytic Pathway
Among the most classical and probably best studied examples of a biochemical oscillator is the breakdown of sugar by means of glycolysis in yeast. Damped and sustained glycolytic oscillations have
been observed for several decades and have triggered the development of a large variety of kinetic models (18 –20). In the following, we will address some of the characteristic questions that led to
the development of those earlier models and show that these questions can be readily answered by using the concept of structural kinetic modeling. Given a schematic representation of the pathway, as
depicted in Fig. 2, the first and foremost question is to establish whether the proposed reaction mechanism indeed facilitates sustained oscillations at the observed operating point. And, if yes,
what are the specific kinetic conditions under which sustained oscillations can be expected?
A medium-complexity representation of the yeast glycolytic pathway (19). The system consists of eight metabolites and eight reactions. The main regulatory step is the phosphofructokinase (PFK), here
combined with the hexokinase (HK) reaction into the ...
We start out by constructing the matrix Λ using the experimentally observed state S^0 and ν^0, identified here with the average concentration and flux values reported in refs. 19 and 20.
Additionally, the matrix of saturation coefficients θ[x]^μ has to be specified. For simplicity, we assume that all reactions are irreversible and depend on their respective substrates only, resulting
in 13 free parameters. Based on our discussion of conventional biochemical rate laws above, the saturation coefficients are restricted to the unit interval θ[S]^μ
For the dependence of the PFK–HK reaction on ATP, we follow a previously proposed kinetic model (19) and assume linear activation because of its effect as a substrate and a saturable inhibition
involving a positive exponent n ≥ 1. The corresponding parameter is thus θ[ATP]^μ1 = 1 − ξ, with ξ n]. No further assumptions about the detailed functional form of any of the rate equations are
necessary. For an explicit representation of both matrices Λ and θ[x]^μ, see the Supporting Appendix. To investigate the possibility of sustained oscillation, we begin with the most simple scenario
and set θ[S]^μ = 1 for all reactions, corresponding to bilinear mass-action kinetics. Note, however, that the inhibition term is still assumed to be an unspecified nonlinear function. Fig. 3 shows
the largest eigenvalue of the resulting Jacobian at the experimentally observed operating point as a function of the feedback strength ξ. For sufficient inhibition, the spectrum of eigenvalues passes
through a HO bifurcation, and the system facilitates sustained oscillations. Importantly, for a HO bifurcation to occur at the observed operating point, an exponent n ≥ 2 is needed, irrespective of
the detailed functional form of the rate equation.
Dynamics of the glycolytic pathway. (Upper) The eigenvalue with the largest real part as a function of the inhibitory feedback strength ξ of ATP on the combined PFK–HK reaction ν[1]. All other
saturation parameters are θ ...
We have to highlight one fundamental aspect of our analysis: Given our parametric representation of the Jacobian, the impact of the inhibition is decoupled from the steady-state concentrations and
flux values the system adopts (the latter being solely determined by the matrix Λ). Thus, we specifically ask whether the assumed inhibition is indeed a necessary condition for the observation of
oscillations at the experimentally observed operating point. In contrast to this fact, using a conventional kinetic model and reducing the influence of the regulation, i.e., by increasing the
corresponding Michaelis constant, would concomitantly result in altered steady-state concentrations, thus not straightforwardly contributing to the answer to this question.
Furthermore, because glycolytic oscillations have no obvious physiological role and are only observed under rather specific experimental conditions, some questions concerning their possible
functional significance have been raised. One assertion is that the observed oscillations might only be an unavoidable side effect of the regulatory interactions, optimized for other purposes (6).
Indeed, as shown in Fig. 3, a varying feedback strength ξ allows for different dynamical regimes. In particular, an intermediate value speeds up the response time with respect to perturbations, as
also frequently observed in explicit models of cellular regulation (21).
Statistical Analysis of the Parameter Space
Going beyond the case of bilinear kinetics, we now evaluate the properties of a Jacobian at the most general level. All saturation coefficients θ[S]^μ Fig. 2. The steady-state concentrations and flux
values are again restricted to the experimentally observed operating point. To assess the dynamical properties of the system, the saturation coefficients θ[S]^μ [R]^max of its eigenvalues is
recorded. Fig. 4 shows the histogram of the largest real part within the spectrum of eigenvalues, with λ[R]^max > 0 implying instability of the operating point. In the absence of inhibitory feedback
ξ = 0, the operating point is likely to be unstable, i.e., most realizations result in a spectrum of eigenvalues with at least one positive real part.
The distribution of the largest real part λ[R]^max within the spectrum of eigenvalues for 10^5 realizations of the Jacobian matrix. For each realization, the 12 saturation parameters θ[S]^μ were
sampled randomly from a uniform distribution ...
Two ways to circumvent this inherent instability are conceivable. First, we can ask about the dependence on particular reactions, that is, whether the saturation (or nonsaturation) of a specific
reaction contributes to an increased stability of the system. To this end, the correlation coefficient between λ[R]^max, reflecting the stability of the system, and the saturation parameters θ[S]^μ
was estimated. Indeed, several parameters θ[S]^μ show a strong correlation with λ[R]^max, indicating that their value essentially determines the stability of the system (for data see Supporting
Appendix and Figs. 12–14, which are published as supporting information on the PNAS web site). Fig. 4 Left depicts the distribution of λ[R]^max under the assumption that these reactions are
restricted to weak saturation. In this case, the resulting distribution is shifted toward negative values, corresponding to an increased probability of the system to operate at a stable steady state.
The second option to ensure stability of the system arises from the negative feedback of ATP upon the combined PFK–HK reaction. Fig. 4 Right shows the distribution of the largest real part λ[R]^max
of the eigenvalues for a nonzero feedback strength ξ > 0. Again, the distribution is markedly shifted toward negative values, increasing the probability of a stable steady state.
To investigate the role of the feedback in more detail, Fig. 5 depicts the distribution of λ[R]^max as a function of the feedback strength ξ. As can be observed, in the absence of the regulatory
feedback the system is prone to instability, i.e., it is not possible (or rather unlikely) for the observed operating point to exist as a stable steady state. Subsequently, as the feedback strength
is increased, the probability of obtaining a stable steady state increases. For an intermediate value ξ = 1, the system is fully stable: Any realization of the Jacobian will result in a stable steady
state, independent of the detailed functional form of the rate equations or their associated parameters. However, as the feedback is increased further, the operating point again loses its stability.
This time the instability arises out of a HO bifurcation, indicating the presence of sustained oscillations.
The distribution of the largest real part λ[R]^max of the eigenvalues as a function of the feedback strength ξ. All other saturation parameters are sampled from a uniform distribution. (Left)
Color-coded visualization of the resulting distribution ...
Based on these findings, we can summarize some essential properties of the pathway depicted in Fig. 2: Given the experimentally observed metabolite concentrations and flux values, our results show
that in the absence of the regulatory interaction it would not be possible (or highly unlikely) to observe either sustained oscillations or a stable steady state. However, for sufficiently large
inhibitory feedback, the system will inevitably exhibit sustained oscillations. Furthermore, as the feedback strength ξ n] is bounded by an exponent n of the (unspecified) rate equation, n ≥ 2 is
required for the existence of sustained oscillations. As demonstrated, our method thus allows one to derive the likeliness or plausibility of the experimentally observed oscillations, as well as the
specific kinetic requirements for oscillations to occur, without referring to the detailed functional form of the rate equations.
Photosynthetic Calvin Cycle
The CO[2]-assimilating Calvin cycle, taking place in the chloroplast stroma of plants, is a primary source of carbon for all organisms and is of central importance for many biotechnological
applications. However, even when restricting an analysis to the core pathway, the construction of a detailed kinetic model already entails considerable challenges with respect to the required rate
equations and kinetic parameters (22, 23).
In the following, we thus use a representation of the photosynthetic Calvin cycle, as adapted from earlier kinetic models (22, 23), to demonstrate the applicability of our approach to a system of a
reasonable complexity. The structural kinetic model consists of 18 metabolites, subject to 2 conservation relations, and 20 reactions, including 3 export reactions, starch synthesis, and regeneration
of ATP. For a schematic representation of the pathway, see Supporting Appendix and Figs. 15–17, which are published as supporting information on the PNAS web site.
We seek to describe a general strategy to extract information about the dynamical capabilities of the system, without referring to an explicit set of differential equations. Our agenda focuses on (i)
the stability and robustness of the experimentally observed concentration and flux values, (ii) the relative impact or importance of each reaction on the dynamical properties of the system, (iii) the
existence and quantification of different dynamical regimes such as oscillations and multistability, and (iv) the possibility of complex or chaotic temporal behavior.
The starting point is again an experimentally observed state, characterized by the vector of metabolite concentrations S^0 and flux values ν^0, as reported by Petterson and Ryde-Petterson (22).
Although additional knowledge on the reactions is often available, for the moment we assume that all reactions depend only on their substrates and products, with parameters θ[S]^μ [P]^μ Λ and θ[S]^μ,
constitutes the structural kinetic model of the Calvin cycle at the observed operating point.
As a first approximation, we commence with global saturation parameters, θ[S]^μ and θ[P]^μ, set equal for all reactions. Although clearly oversimplified, the resulting bifurcation diagram, depicted
in Fig. 6, already reveals some fundamental dynamical properties of the system. First, the observed operating point is indeed a stable steady state for most parameters θ[S]^μ and θ[P]^μ.
Interestingly, however, in the absence of product inhibition θ[P]^μ = 0, a steady state is no longer feasible. In particular, for pure irreversible mass-action kinetics (θ[S]^μ = 1, θ[P]^μ = 0),
corresponding to a nonenzymatic chemical system, the pathway could not operate at the observed steady state. Second, for low product saturation (θ[P]^μ close to zero), a HO bifurcation occurs.
Although this result does not necessarily imply that this region within parameter space is actually accessible under normal conditions, it shows the dynamical capability of the system to generate
sustained oscillations; i.e., there exists a region in parameter space that allows for oscillatory behavior. Additionally, for low values of the substrate saturation θ[S]^μ, a saddle-node (SN)
bifurcation occurs. This result shows that the observed steady state will eventually lose its stability, i.e., there are conditions under which the observed steady state is no longer stable. And,
indeed, both dynamical features have been observed for the Calvin cycle: Photosynthetic oscillations are known for many decades and have been subject to extensive experimental and numerical studies (
24). Likewise, multistability was recently found in a detailed kinetic model of the Calvin cycle and verified in vivo (23).
The bifurcation diagram of the Calvin cycle at the observed operating point with respect to the two global saturation parameters θ[S]^μ [P]^μ
To proceed with a systematic analysis, the next step is to drop the assumption of global saturation parameters. All individual parameters θ[S]^μ [P]^μ = −1/3. Of foremost interest is again the
stability of the experimentally observed operating point: Evaluating an ensemble of 5 × 10^5 random realizations of the Jacobian at this operating point, the system gives rise to a stable steady
state in ≈94.3% of all cases (see Supporting Appendix and Figs. 15–17 for convergence and dependence on ensemble size). Thus, the stability of the observed operating point is indeed generic and does
not rely on a specific choice of the kinetic parameters.
As for the remaining ≈5.7% of models, corresponding to the case where the observed operating point is unstable, ≈5.1% give rise to a single positive eigenvalue. Only ≈0.6% correspond to a more
complex situation, with two or more real parts >0. The latter case, although only restricted to a small region within parameter space, holds profound implications for the possible dynamics of the
system. As a further step within our approach, the existence of certain bifurcations of higher codimension allows the prediction of specific dynamics (see Materials and Methods). Fig. 7 shows a
bifurcation diagram of the Calvin cycle within a particular region of parameter space where such bifurcations occur. Here, the system gives rise to a Gavrilov–Guckenheimer (GG) bifurcation, implying
the existence of quasiperiodic dynamics and making the existence of chaotic dynamics likely. In close vicinity of the GG bifurcation, we also find a double Hopf (DH) bifurcation, formed by the
interaction of two codimension-1 HO bifurcations. The generic existence of a chaotic parameter region close to the DH bifurcation can be proven (25, 26).
Bifurcation diagrams of the Calvin cycle as a function of the saturation of the Rubisco reaction with respect to ribulose-1,5-bisphosphate (RuBP) and the saturation of the Aldolase reaction with
respect to glyceraldehyde-3-phosphate (GAP), while all other ...
Thus, our results demonstrate the possibility of quasiperiodic and chaotic dynamics for the model of the photosynthetic Calvin cycle, without relying on any particular assumptions about the
functional form of the kinetic rate equations. Furthermore, because it is a quantitative method, we can assert that complex dynamics at the operating point are confined to a rather small region in
parameter space and that the experimentally observed steady state is generically stable.
Discussion and Conclusions
We have presented a systematic approach to explore and quantify the dynamic capabilities of a metabolic system. Based on a parametric representation of the Jacobian matrix, constructed in such a way
that each element is either directly experimentally accessible or amenable to a clear biochemical interpretation, we look for characteristic bifurcations that give insight into the possible dynamics
of the system. Our method then builds on the construction of a large ensemble of models, encompassing all possible explicit kinetic models, to statistically explore and quantify the parameter regions
associated with a specific dynamical behavior.
One of the primary advantages of our approach is that available information, such as experimentally accessible concentration values, can be readily incorporated into the description of the system.
Focusing on a particular observed operating point, our approach then allows for the identification of crucial reaction steps that predominantly contribute to the stability, and thus robustness, of
the observed state and results in specific biochemical conditions for which certain dynamical behavior can be expected. Furthermore, by taking bifurcations of higher codimension into account, we go
beyond the usually considered case and are able to predict the possibility of complex or chaotic dynamics, often a nontrivial task even if an explicit kinetic model is available.
Exemplified here with two paradigmatic examples of metabolic pathways, our approach holds a vast number of possible further applications. In particular, with respect to biotechnological applications,
a desired flux distribution must not necessarily be stable. By using our approach, it is thus possible to incorporate dynamic aspects into the description of the system and explore the conditions
that support the stability of directed modifications of the system. Likewise, structural kinetic modeling can serve as a prequel to explicit mathematical modeling, aiming to identify crucial reaction
steps and parameters in best time.
Materials and Methods
Interpretation of the Saturation Matrix.
Our approach relies crucially on the interpretation of the elements of the matrix θ[x]^μ. As a simple example, consider a single bilinear reaction rate of the form ν(S[1], S[2]) = v[max]S[1]S[2].
Then, according to Eq. 2, the normalized rate is μ(x[1], x[2]) = x[1]x[2]; thus,
In the case of Michaelis–Menten kinetics ν(S) = v[max]S/(K[M] + S), depending on a single substrate S, we obtain
Clearly, the partial derivative θ[x]^μ S^0. The limiting cases are lim[s^0→0]θ[x]^μ = 1 (linear regime) and lim[s^0→∞]θ[x]^μ = 0 (full saturation). This result implies that the saturation parameter
indeed covers the full interval, which holds likewise for the general case of Eq. 5. For additional instances of specific rate functions, see the Supporting Appendix and Figs. 8–10.
Note that, except for the change in variables, the saturation parameters θ[x]^μ are reminiscent of the scaled elasticity coefficients, as defined in the realm of metabolic control analysis (6).
However, for our reasoning to hold, the analysis is restricted to unidirectional reactions, i.e., in the case of reversible reactions, forward and backward terms have to be treated separately.
Because the denominator is usually preserved for both terms, no additional free saturation parameters arise.
Another close analogy to the saturation parameters is found within the power-law approximation, where each enzyme kinetic rate law is replaced by a function of the form ν[j](S)=α[j][i]S[i]^g[ij] with
g[ij] denoting the “effective kinetic order” of the reaction (6). In fact, the power-law formalism can be regarded as the simplest possible way to specify explicit nonlinear functions that are
consistent with a given Jacobian. Applying the transformation of Eq. 2, we obtain μ[j](x) = [i]x[i]^g[ij], thus θ[x[i]]^μ[j] = g[ij]. However, beyond the properties of the Jacobian itself, only
little confidence can be placed in an actual numerical integration of these functions (6). Generally, it is possible to specify several classes of explicit functions that, by construction, result in
a given Jacobian but have no, or only little, biochemical justification otherwise. Consequently, we opt for using the properties of the parametric representation of the Jacobian directly, instead of
going the loop way by means of auxiliary ad hoc functions.
Dynamics and Bifurcations.
One of the foundations of our approach is the fact that knowledge of the Jacobian matrix alone is sufficient to deduce certain characteristic bifurcations of a metabolic system. In general, the
stability of a steady state is lost either in a HO bifurcation or in a bifurcation of SN type, both of codimension-1. Of particular interest for revealing insights about the dynamical behavior of
systems are bifurcations of higher codimension, such as the Takens–Bogdanov (TB), the GG, and the DH bifurcation (16, 25). Each of these local bifurcations of codimension-2 arises out of an
interaction of two codimension-1 bifurcations and has important implications for the possible dynamical behavior. For instance, the TB bifurcation indicates the presence of a homoclinic bifurcation
and therefore the possibility of spiking or bursting behavior. The presence of a GG bifurcation shows that complex (quasiperiodic or chaotic) dynamics exist generically in a certain parameter space.
In the same way, the DH bifurcation indicates the generic existence of a chaotic parameter region. For details, see refs. 16 and 25, the Supporting Appendix, and Fig. 10.
Supplementary Material
Supporting Information:
R.S. was supported by the State Brandenburg Hochschul-und Wissenschaftsprogramm 2004–2006. T.G. and B.B. were supported by German VW-Stiftung and Deutsche Forschungsgemeinschaft Grant SFB 555. T.G.
was supported by the Alexander von Humboldt Foundation.
double Hopf
Conflict of interest statement: No conflicts declared.
This paper was submitted directly (Track II) to the PNAS office.
Palsson B. O. Nat. Biotechnol. 2000;18:1147–1150. [PubMed]
Kell D. B. Curr. Opin. Microbiol. 2004;7:296–307. [PubMed]
3. Fernie A. R., Trethewey R. N., Krotzky A. J., Willmitzer L. Nat. Rev. Mol. Cell Biol. 2004;5:1–7.
Westerhoff H. V., Palsson B. O. Nat. Biotechnol. 2002;22:1249–1252. [PubMed]
Palsson B. O., Joshi A., Ozturk S. S. Fed. Proc.; 1987. pp. 2485–2489. [PubMed]
6. Heinrich R., Schuster S. The Regulation of Cellular Systems. New York: Chapman & Hall; 1996.
Hashimoto K., Tomita M., Takahashi K., Shimizu T. S., Matsuzaki Y., Miyoshi F., Saito K., Tanida S., Yugi K., Venter J. C., Hutchison C. A., III Bioinformatics. 1999;15:72–84. [PubMed]
Morohashi M., Winn A. E., Borisuk M. T., Bolouri H., Doyle J., Kitano H. J. Theor. Biol. 2002;216:19–30. [PubMed]
Stelling J., Sauer U., Szallasi Z., Doyle F. J., III, Doyle J. Cell. 2004;118:675–685. [PubMed]
Angeli D., Ferrell J. E., Jr, Sontag E. D. Proc. Natl. Acad. Sci. USA. 2004;101:1822–1827. [PMC free article] [PubMed]
Stephanopoulos G. N., Alper H., Moxley J. Nat. Biotechnol. 2004;22:1261–1267. [PubMed]
Famili I., Förster J., Nielsen J., Palsson B. O. Proc. Natl. Acad. Sci. USA. 2003;100:13134–13139. [PMC free article] [PubMed]
Schuster S., Fell D. A., Dandekar T. Nat. Biotechnol. 2000;18:326–332. [PubMed]
Bailey J. E. Nat. Biotechnol. 2001;19:503–504. [PubMed]
Gagneur J., Cesari G. FEBS Lett. 2005;579:1867–1871. [PubMed]
16. Gross T. Population Dynamics: General Results from Local Analysis. Tönning, Germany: Der Andere; 2004.
Gross T., Feudel U. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 2006;73:016205. [PubMed]
Wolf J., Passarge J., Somsen O. J. G., Snoep J., Heinrich R., Westerhoff H. V. Biophys. J. 2000;78:1145–1153. [PMC free article] [PubMed]
Hynne F., Danø S., Sørensen P. G. Biophys. Chem. 2001;94:121–163. [PubMed]
Rosenfeld N., Elowitz M., Alon U. J. Mol. Biol. 2002;323:785–793. [PubMed]
22. Petterson G., Ryde-Petterson U. Eur. J. Biochem. 1998;175:661–672.
Poolman M. G., Ölcer H., Lloyd J. C., Raines C. A., Fell D. A. Eur. J. Biochem. 2001;268:2810–2816. [PubMed]
Ryde-Petterson U. Eur. J. Biochem. 1991;198:613–619. [PubMed]
25. Kuznetsov Y. A. Elements of Applied Bifurcation Theory. Berlin: Springer; 1995.
26. Gross T., Ebenhöh W., Feudel U. Oikos. 2005;109:135–144.
Articles from Proceedings of the National Academy of Sciences of the United States of America are provided here courtesy of National Academy of Sciences
• MedGen
Related information in MedGen
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1524928/?tool=pubmed","timestamp":"2014-04-19T07:11:35Z","content_type":null,"content_length":"106478","record_id":"<urn:uuid:98a4fdca-728e-4d08-b1cc-1337b4863abb>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum: Teacher2Teacher - Q&A #4990
View entire discussion
[<<prev] [next>>]
From: Kristina (for Teacher2Teacher Service)
Date: Nov 02, 2000 at 15:21:31
Subject: Re: Teaching algebra in middle school
Hi Alice,
Here's an opposing view for you from another Teacher2Teacher Associate.
You'll see that Chris' definition is not the same as Martha's. He's talking
about a course, Algebra I, not just algebraic reasoning.
Let us know if this information is useful to you.
-Kristina, for the T2T service
Chris wrote:
As a math teacher, I am convinced that it is often not appropriate for some
middle school students to take Algebra I. Since Algebra is the study of
patterns, one could argue that algebraic reasoning occurs in grades much lower
than fifth grade. Therefore, the argument that it should be taught to all in
middle school because it is taught in fifth grade, is not logical. Formal
algebraic operations is an abstract thinking process that can be daunting for
some middle school students. Unfortunately, it is in states like Florida that
mandate algebra to all that creates this problem. You push a kid too early
into Algebra, and you could limit the potential of that student because their
foundation will be weak. When states like Florida mandate Algebra for all,
what ends up happening is that the Algebra that is taught, because it must
reach all students, is watered down, lacking in rigor and depth, and
demonstrably inferior to what the program should be.
Algebra for all middle school students (11-14 year olds) is not something we
should pursue as a nation unless we are willing to sacrifice the math
potential of many of our students.
Chris Mahoney
Brookwood School
Post a public discussion message
Ask Teacher2Teacher a new question
|
{"url":"http://mathforum.org/t2t/message.taco?thread=4990&message=6","timestamp":"2014-04-20T03:43:32Z","content_type":null,"content_length":"5870","record_id":"<urn:uuid:69d2e990-65bc-498e-a49c-6143413b0325>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
y" + 3y' + 2y = 6 y(0)=0 y'(0) = 2 with laplace
Best Response
You've already chosen the best response.
y'' = sy^2 - sy(0) -y'(0)
Best Response
You've already chosen the best response.
y' = sy - y(0)
Best Response
You've already chosen the best response.
(sy^2 -2) +3(sy) +2y = 6
Best Response
You've already chosen the best response.
solve the quadratic
Best Response
You've already chosen the best response.
yes, im stuck in integral (6+2s) / s(s+2)(s+1)
Best Response
You've already chosen the best response.
First write everything in terms of the laplace transform. THen solve the equation by converting back.
Best Response
You've already chosen the best response.
no, so you got y to be that
Best Response
You've already chosen the best response.
Use particla fractions.
Best Response
You've already chosen the best response.
then you must use partial fractions
Best Response
You've already chosen the best response.
6+2s = As + B(s+2)+C(s+1) then?
Best Response
You've already chosen the best response.
= 3/s -4/(s+1) +1/(s+2)
Best Response
You've already chosen the best response.
I created and went straight to wolframa for the partial fractions
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
partial fractions are very standard
Best Response
You've already chosen the best response.
then you convert them back
Best Response
You've already chosen the best response.
the 3/s goes to 3 from memory
Best Response
You've already chosen the best response.
and the other two are time shifted exponentials
Best Response
You've already chosen the best response.
Laplace is all about matching and partial fractions, at least in solving simple ODE systems.
Best Response
You've already chosen the best response.
6+2s = As + B(s+2)+C(s+1) s=-1 --> 4 = -A+B s=-2 --> 2 = -2A-C for the last s what number should i choose?
Best Response
You've already chosen the best response.
1/(s-a) = e^(at) ( I googled this lol )
Best Response
You've already chosen the best response.
s=0 lol
Best Response
You've already chosen the best response.
remember you can pick any value for s, just that some values will make the simultaneous eqns alot easierto solve
Best Response
You've already chosen the best response.
6+2s = As + B(s+2)+C(s+1) s=-1 --> 4 = -A+B s=-2 --> 2 = -2A-C s=0 --> 6 = 2B+C ---------------------- 4+A = B 6 = 2(4+A)+C 6 = 8 +2A+C elimination -2 = 2A+C 2 = -2A -C infinity?
Best Response
You've already chosen the best response.
y= 3-4e^(-t) + e^(-2t)
Best Response
You've already chosen the best response.
you make no sense at all lol
Best Response
You've already chosen the best response.
this is why people need to pay attention in high school and first year uni maths course , so they absolutely hammer in the basics
Best Response
You've already chosen the best response.
could u help me?
Best Response
You've already chosen the best response.
its just simultaneous eqns , takes for ever, you need to set up a matrix etc.
Best Response
You've already chosen the best response.
it will take me like 10mins to type it up , I aint doing it lol
Best Response
You've already chosen the best response.
matrix? really?
Best Response
You've already chosen the best response.
I can see that you reach the point \(Y(s)=\frac{2s+6}{s(s+1)(s+2)}\). Now we should use partial fractions to write the expression in a form that can be easily to find its inverse laplace
transform. That's \[{2s+6 \over s(s+1)(s+2)}={a \over s}+{b \over s+1}+{c \over s+2}\]. Multiplying both sides by \(s(s+1)(s+2)\) gives: \[2s+6=a(s+1)(s+2)+bs(s+2)+cs(s+1)\] Plugging \(s=0\)
gives \(2a=6 \implies a=3\); \(s=-1\) gives \(-b=4 \implies b=-4\), and \(s=-2\) gives \(2c=2 \implies c=1\). So, \(Y(s)=3\frac{1}{s}-\frac{4}{s+1}+\frac{1}{s+2}\). Hence \(y(t)=3-4e^{-t}+e^{-2t}
Best Response
You've already chosen the best response.
Hello suzi!! Does the answer make sense to you?
Best Response
You've already chosen the best response.
thank you anwara
Best Response
You've already chosen the best response.
You're welcome!
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4de455d64c0e8b0b2703aed8","timestamp":"2014-04-19T04:42:16Z","content_type":null,"content_length":"106706","record_id":"<urn:uuid:d32bc086-1d4f-4d0f-9e43-af27c116e741>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Variance of a higher-order factor is negative
Kaigang Li posted on Tuesday, July 28, 2009 - 12:26 am
I am conducting a higher order CFA with 8 1st-order factors and 3 2rd-order factors.
I ran the following input and get “WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. …. ”
fSE by Q6SEE2 ...;
fSSF1 by Q17SSF1 ...;
fSSFa2 by Q22SSFA1 ...;
fGP by ...;
Q33P2 with Q34P3;
fENA1 by ...;
fENN2 by...;
fEN by fENA1 fENN2;
fEXP1 by ...;
fEXN2 by ...;
fEX by fEXP1 fEXN2;
The variance of 2rd-order factor(FEN)is negative.
FEN -0.037 0.075 -0.489 0.625
When fix the variance of FEN to 0 I got the same warning.
fEN by fENA1 fENN2;
When I fixed FEN to 1.0 and freed fENA1 as
fEN by fENA1* fENN2;
I got the same warning and another error
TRUSTWORTHY ...."
But when I did not free the first factor with only fixing the FEN to 1.0, the model fit well.
fEN by fENA1 fENN2;
I am not sure if it makes sense. Would you give any suggestion? Thank you very much!
Linda K. Muthen posted on Tuesday, July 28, 2009 - 7:58 am
If the second-order factor has no variance, the model is not appropriate for the data.
The following statements should not be used:
fEN by fENA1 fENN2;
In them, you fix the metric of the factor twice by fixing a factor loading to one and the factor variance to one. This is not correct.
Kaigang Li posted on Tuesday, July 28, 2009 - 9:11 am
Hi Professor Muthen,
Thanks for your quick response. Would you tell me what would be the reason that results in the zero variance since the EFA produced two factors for the FEN scale?
I have other second-order factors which have variances as follows:
FSE 1.180 0.148 7.984 0.000
FSS 0.542 0.154 3.528 0.000
FGP 2.136 0.166 12.889 0.000
FEN -0.037 0.075 -0.489 0.625
FEX 5.840 2.548 2.292 0.022
May I break down just FEN into two factors without using fEN by fENA1 fENN2; any more, but keep other seond-order factors?
Thanks a lot.
Linda K. Muthen posted on Tuesday, July 28, 2009 - 12:54 pm
How did you do a second-order EFA?
Kaigang Li posted on Tuesday, July 28, 2009 - 7:06 pm
I don't think I have done second-order EFA, instead I just did first-order EFA. Should I refer to the example 4.5 and 4.6 in the Mplus guide book for second-order EFA? Any suggestions for other
reference would greatly appreciated. Thank you very much!
Linda K. Muthen posted on Wednesday, July 29, 2009 - 1:42 pm
I asked because you implied that you had done a second-order EFA and therefore the second-order CFA should fit. Given the one second-order factor has no variance. It should not be included in the
Kaigang Li posted on Wednesday, July 29, 2009 - 2:02 pm
Sorry about the confusion. I am considering to remove that second-order factor. Thanks!
Kaigang Li posted on Wednesday, July 29, 2009 - 10:04 pm
Professor Muthen,
I got another question. I read a very good discussion regarding fit indices for categorical outcomes at http://www.statmodel.com/discussion/messages/23/26.html?1244640406 posted 10 ages ago. I am
carrying out SEM with a binary outcome variable using MLR. It seems that so far Mplus still does not provide fit indices for categorical outcomes. I am wondering there is any conclusion on that or
there is any alternative way to diagnose model fit with categorical outcomes. Any suggestion and useful references are appreciated.
Linda K. Muthen posted on Thursday, July 30, 2009 - 5:47 am
Chi-square and other related fit statistics are not available when means, variances, and covariances are not sufficient statistics for model estimation. These fit statistics are available for
categorical outcomes with weighted least squares estimation.
Back to top
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=9&page=4508","timestamp":"2014-04-17T18:39:59Z","content_type":null,"content_length":"28531","record_id":"<urn:uuid:2fc7d1b3-2e67-4e98-8a04-685db83a0876>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. Introduction2. Results and Discussion2.1. Optimized Structures of o-LiBH4-Cl and h-LiBH4-Cl Solid Solutions2.2. Thermodynamics of the Solid Solution Formation2.3. Structural Properties of o-Li(BH4)1−xClx and h-Li(BH4)1−yCly2.3.1. Computational Study of the o-Li(BH4)1−xClx and h-Li(BH4)1−yCly Vibrational Properties2.3.2. Experimental Study of LiBH4-LiCl Mixture and Solid Solution3. Calculations3.1. Ab Initio3.2. CALPHAD4. Experimental Section5. ConclusionsAcknowledgmentsReferences
Pure orthorhombic (o-LiBH[4], Pnma space group) and hexagonal (h-LiBH[4], P6[3]mc space group) structures of LiBH[4] were optimized at the DFT-Perdew-Burke-Ernzerhof (DFT-PBE) level of theory,
starting from the experimental coordinates [21].The optimized unit cell structures are shown in Figure 1, whereas the most important structural parameters are summarized in Table 1 in comparison with
experimental data.
Optimized crystal structures of pure and Cl-substituted unit cells of orthorhombic and hexagonal LiBH[4]. Positions of H1, H2, and H3 are shown for the x = 0.
crystals-02-00144-t001_Table 1
Lattice parameters and most relevant bond lengths for optimized DFT models of pure and Cl-substituted orthorhombic and hexagonal phases of LiBH[4], compared to the corresponding experimental values,
if available. B-H distances and <B-H> average distances are expressed in Å, cell volume in Å^3, the average bond angles <HBH> in degrees. Structures are displayed in Figure 1.
o-LiBH[4] Pnma a b c Volume B-H1 B-H2 B-H3 <B-H> <HBH>
Exp.^a 7.121 4.406 6.674 209.4 1.213 1.224 1.208 1.215 109.3
DFT 7.328 4.379 6.494 208.4 1.230 1.233 1.229 1.231 109.3
DFT -LiBH[4] + LiCl
o-Li(BH[4])[0.75]Cl[0.25] 7.255 4.314 6.472 202.6 1.232 1.235 1.228 1.232 108.8
o-Li(BH[4])[0.5]Cl[0.5]/C[1] 7.111 4.245 6.538 197.3 1.228 1.234 1.225 1.229 109.0
o-Li(BH[4])[0.5]Cl[0.5]/C[2] 7.207 4.250 6.425 196.8 1.228 1.235 1.228 1.230 109.0
o-Li(BH[4])[0.5]Cl[0.5]/C[3] 7.181 4.241 6.477 197.2 1.231 1.234 1.227 1.232 108.0
o-Li(BH[4])[0.25]Cl[0.75] 7.079 4.161 6.538 192.7 1.226 1.234 1.226 1.232 108.9
h-LiBH[4] P6[3]mc
Exp.^a 4.267 4.267 6.922 109.1 0.962 1.024 1.024 1.003 108.4
DFT 4.216 4.216 6.291 96.8 1.220 1.223 1.223 1.222 109.4
DFT supercell 8.297 8.289 13.333 793.2 - - - 1.229 109.5
DFT h-LiBH[4] + LiCl
h-Li(BH[4])[0.94]Cl[0.06] 8.256 8.274 13.238 782.8 - - - 1.229 109.5
h-Li(BH[4])[0.5]Cl[0.5] 8.016 8.067 12.740 718.1 - - - 1.228 109.5
h-Li(BH[4])[0.06]Cl[0.94] 7.886 7.888 12.493 672.9 1.220 1.223 - 1.222 109.5
^a Ref. [21].
For the orthorhombic crystal, a systematic overestimation of the B-H bond length is observed, as already pointed out in literature [2,19], which however does not significantly affect the unit cell
The hexagonal phase is stable at high temperature (T ≥ 381 K) [12]. Therefore, without inclusion of temperature effects, full geometry optimization starting from the experimental structure
dramatically distorts the internal geometry and the cell volume, as reported in Table 1. Indeed, for the unit cell of h-LiBH[4], frequency calculation at Gamma point on the optimized structure
reveals six imaginary frequencies, showing the instability of this phase when temperature is not taken into account. Maintaining the cell parameters fixed at the experimental values does not remove
the structural instability, as the internal atomic displacements are still very large, as shown also by Miwa et al. [22],due to the dynamic disorder of BH[4]^− units [23]. In order to overcome these
difficulties and, moreover, to simulate solid solutions, a supercell approach was adopted in the case of h-LiBH[4], considering a larger and more representative scenario (for the original cell Z = 2
→ 12 atoms, while in the supercell Z = 16 → 96 atoms). In this operation of doubling each cell vector, symmetry is completely lost and borohydride tetrahedra are free to rotate, so removing the
phonon instability found in the single unit cell. Indeed, when computing frequency values for the hexagonal supercell, only one imaginary mode is found at −20 cm^−1. In the following, to avoid
numerical problems due to the variable number of imaginary frequencies for the structures, we have used the whole set of frequencies by considering also the imaginary values (by using their absolute
value in the statistical thermodynamic formulae) for the calculation of the zero point energy (ZPE) and enthalpy corrections, as described below.
The relative energy stabilities calculated for the two phases favor the orthorhombic structure with respect to the hexagonal, as expected, but their enthalpy difference (^ORTHO − HEXΔH = 10 kJ∙mol^−1
per formula unit) overestimates experimental measurements [24,25], due to the instability of the hexagonal model.
Even if the stable LiCl polymorph at room temperature is cubic, corresponding structures were obtained for both the orthorhombic and hexagonal phases as the full substitution of BH[4]^- with
chlorine. The difference in enthalpy for the three phases of LiCl was calculated for the phase diagram calculation, giving ^CUBIC − ORTHOΔH = 21.8 kJ∙mol^−1 and ^CUBIC − HEXΔH = 15.1 kJ∙mol^−1.
The simulation of orthorhombic solid solution o-Li(BH[4])[1−x]Cl[x], where four borohydride units are present in the o-LiBH[4] unary cell, was performed, considering three compositions with molar
fraction of chlorine, x, equal to 0.25, 0.50 and 0.75. The corresponding unary cells are reported in Figure 1. For x = 0.5, three non-equivalent configurations were computed, namely C1, C2 and C3, as
shown in Figure 1. Among them, C3 structure turned out to be the most stable, because of the largest Cl-Cl distance. Conversely, for the hexagonal solid solution h-Li(BH[4])[1−x]Cl[x], due to the
large size of the supercell, only three compositions have been considered, that are Li(BH[4])[0.94]Cl[0.06], Li(BH[4])[0.5]Cl[0.5], and Li(BH[4])[0.06]Cl[0.94].
For the orthorhombic solid solutions, the unit cell volume decreases with the increase of Cl substitution, as expected, with a maximum variation of about 10% for the highest chlorine content (x =
0.75). For the hexagonal structures, the volume decreases as a function of Cl content, with a maximum variation of about 13% for x = 0.94. It can be mentioned here that the calculated cell volume
reduction for the Cl-substituted o-LiBH[4] and h-LiBH[4] correlates well with the experimentally observed values [12].
Thermal entropy was calculated for each composition using classical statistical thermodynamic formulae. As it was computed using the harmonic set of frequencies, the hindered BH[4]^− rotation is
included, as well as the frustrated translation of the substituting Cl. It turned out that the hindered BH[4]^− soft rotational motion is compensated by the frustrated translation of the substituting
Cl ion, so that the thermal entropy contribution is very small, well below the contribution due to the ideal configurational entropy.
The enthalpy of mixing was computed at room temperature for the models shown in Figure 1. The obtained results are shown in Figure 2 as a function of the increasing LiCl mole fraction. According to
simulations, o-LiBH[4] is slightly destabilized by the substitution of chloride inside the lattice, since computed enthalpy of mixing (ΔH[mix]) values are positive and small (less than 2 kJ mol^−1
per formula unit). On the contrary, in the hexagonal phase, the enthalpy of mixing shows slightly negative values, less than −1 kJ mol^−1 per formula unit, so that the h-LiBH[4] is, to some extent,
stabilized by LiCl. As stated above, the thermal entropy contribution is very small, so ideal entropy of mixing (ΔS[mix]) has been considered for free energy calculations. When ΔH[mix] and −TΔS[mix]
terms are summed up at room temperature, the free energy of mixing (ΔG[mix]) becomes close to zero for the orthorhombic phase and slightly negative for the hexagonal phase for all compositions.
Enthalpy of mixing for the orthorhombic and hexagonal LiBH[4]-LiCl solid solutions as a function of LiCl mole fraction. Ab initio results are reported as squares (ortho) and circles (hexa). Results
of CALPHAD modeling are represented by lines (continuous = ortho, dashed = hexa).
In order to describe thermodynamic behavior of the LiBH[4]-LiCl system, the CALPHAD approach [26] was used. Due to the lack of thermodynamic data for the LiBH[4]-LiCl liquid mixture, the phase
diagram was evaluated, neglecting the liquid phase, and it is considered only below the melting temperature of LiBH[4].The presence of an attractive interaction in the liquid state could induce a
stabilization of the liquid phase below the melting temperatures of pure components, as found experimentally by in situ X-ray diffraction [12].
On the basis of the evaluated thermodynamic parameters, the pseudo binary phase diagram for the LiBH[4]-LiCl system has been calculated and the results shown in Figure 3. The calculated solubility of
Cl into o-LiBH[4] is rather low and it reaches a value x = 0.1 at about 500 K for the h-LiBH[4] phase. A eutectoid phase transition is calculated at 372 K and x = 0.04. The phase diagram behavior is
very sensitive to both lattice stabilities (i.e., free energy difference between different phases of the pure components) as well as to the free energy of mixing. The very strong instability,
predicted for the metastable o-LiCl (^CUBIC − ORTHOΔH = 21.8 kJ∙mol^−1), and the calculated nearly zero ΔG[mix] , prevent any relevant solubility of Cl inside the orthorhombic phase. The negative ΔG
[mix] for the hexagonal solid solution and a more stable h-LiCl, if compared to the orthorhombic one (^HEX − ORTHOΔH = 6.7 kJ∙mol^−1), allow for the stabilization of the hexagonal phase with respect
to the orthorhombic one when Cl is added. Of course, a lower value of ^HEX − ORTHOΔH and a more negative enthalpy of mixing for the hexagonal phase would promote a higher solubility of Cl in h-LiBH
[4] phase.
Calculated LiBH[4]-LiCl pseudo binary phase diagram.
In the mid-infrared region the vibrational spectra of LiBH[4], orthorhombic and hexagonal phases exhibit stretching (ν[1], ν[3]), bending (ν[2], ν[4]) and combinational (not accounted for in the
computed spectra) modes of the BH[4]^− anions (Figure 4) [19,27,28,29,30]. The modes ν[3] and ν[4] are triply degenerate in free tetrahedral BH[4]^− ions, and the ν[2] is doubly degenerate. Due to
the different BH[4]^− site symmetry in the Pnma and P6[3]mc space groups, C[s] and C[3v] respectively, vibrational spectra of h-LiBH[4] are expected to be simpler, since the ν[2] mode remains
degenerate and both ν[3], ν[4] split into only two components each. The computed IR spectrum of the h-LiBH[4] supercell, however, is much more complex, due to the absence of any symmetry at all and
32 BH[4]^− ions in the cell: see Figure 4b. Nevertheless, several observations can be made.
Substitution of BH[4]^− with Cl^− in the unit cell of o-LiBH[4] does not modify significantly its IR vibrations. Stretching ν[3] and bending ν[4] modes of BH[4]^− appear to be the most sensitive to
the Cl^− substitution, moving to higher frequencies by Δν = +10...+35 cm^−1 (Figure 4a). The ν[1] and ν[2] modes are the least affected by the presence of Cl^−, being shifted only with the highest
amount of Cl^− in the unit cell (x = 0.75). These negligible modifications in the BH[4]^− vibrational profile can be explained by the BH[4]^− low site symmetry in pure and all the Cl-substituted
unary cells of o-Li(BH[4])[1−x]Cl[x], and that the substitution does not cause a significant change in the number of IR-active peaks. Since the symmetry in the supercell of the h-LiBH[4] and h-Li(BH
[4])[1−y]Cl[y] is completely removed, the spectra of all compounds on the Figure 4b appear to be much more complex than are expected for the h-LiBH[4] unary cell. Stretching modes in the 2500–2300 cm
^−1 region evidently shift to higher wavenumbers with increasing Cl^− concentration. Apparently, it is difficult to distinguish between the spectra of pure and Cl-substituted o-LiBH[4] or h-LiBH[4]
when the molar concentration of Cl^− is small (compare black and blue curves in Figure 4).
Computed infrared spectra of Cl^− substitution into the LiBH[4] (a) Orthorhombic, positions of the fundamental modes in pure o-LiBH[4] are shown by dotted lines; (b) Hexagonal, borders of the regions
of the fundamental modes in pure h-LiBH[4] are shown by dotted lines (for clearness, the positions, since there are too many modes). Intensity in the bending regions 1500–900 cm^−1 borders of the
regions are shown instead of the is expanded for clarity.
Orimo et al. [29] suggested a correlation between the position of the ν[1] and ν[2] modes and the stability in the Li, Na, K, Rb, Cs borohydrides: in the most stable, CsBH[4] ν[2] has the lowest
energy of vibrations. In this way, the almost unaffected position of the ν[1] and ν[2] modes in the Cl-substituted LiBH[4] can be interpreted as evidence of a minor effect of the chlorine
substitution on the stability of LiBH[4].
The structure and vibrations of LiBH[4]-LiCl mixture (1:1) were studied by powder X-ray diffraction and infrared spectroscopy. No chlorine substitution is found after hand mixing of the sample, as
expected [12], (Figure 5a), whereas after annealing, a small amount of chlorine is found both in the hexagonal and orthorhombic phases of LiBH[4] (Figure 5b, Table 2).
Rietveld refinement profiles of (a) LiBH[4]-LiCl (1:1) hand-mixed mixture, measured by powder X-ray diffraction (Cu Kα1 and Kα2) at T = 25 °C, top bars: LiCl, bottom bars: o-LiBH[4]; (b) LiBH[4]-LiCl
(1:1) hand-mixed mixture after annealing, within 30 min after the infrared measurements, top bars: LiCl, middle bars: o-Li(BH[4])[0.90]Cl[0.10]; bottom bars: h-Li(BH[4])[0.96]Cl[0.04].
crystals-02-00144-t002_Table 2
Phase composition of the LiBH[4]-LiCl hand-milled mixture before and after annealing, as found by Rietveld refinement.
Molar fraction, mol%
LiBH[4]-LiCl (1:1) S1 hm
LiClo-LiBH[4] 4456
LiBH[4]-LiCl (1:1) S1 hm, annealed
LiClo-Li(BH[4])[0.90]Cl[0.10]h-Li(BH[4])[0.96]Cl[0.04] 344323
The ATR spectrum of the hand-mixed mixture is shown in Figure 6 (violet curve). It is very similar to that of pure LiBH[4] (grey curve). This is in agreement with the presence of pure o-LiBH[4] and
LiCl in the sample.
Experimental ATR spectra of pure LiBH[4] and LiBH[4]-LiCl mixtures, hand-mixed (S1 hm, violet curve) and annealed (S1, black curves 1–5). The spectra 1–5 were obtained at room temperature, within 40
min after annealing and at ca. 1 min time step. The curves S1 hm and LiBH[4] are translated vertically for clearness. Peaks marked with * are attributed to impurities.
After annealing, however, the spectrum is strongly modified (Figure 6, black curve 1), but these modifications are not preserved with time. In particular, the spectrum 1 in Figure 6 has one broad
peak in the B-H, stretching at ca. 2400–2100 cm^−1, which within a short time (ca. 20 min), splits into two (spectra 2–5), narrows, and gains intensity. HBH bending modes in the 1300–1000 cm^−1
region are also modified: the peak at 1298 cm^−1 splits into two components at 1310–1286 cm^−1, and the peaks at 1229 and 1081 cm^−1 grow in intensity and shift upwards. The combinational modes at
2181 cm^−1 and in the 2300–2550 cm^−1 regions also gain intensity. These changes in the infrared spectra are similar to those observed [23] in the in situ Raman measurements of the LiBH[4] upon
heating in the 22–139 °C temperature range, and should be associated with C[3ν] → C[s] site symmetry lowering of the BH[4]^− tetrahedra upon P6[3]mc → Pnma phase transformation in LiBH[4]. It is
important to note that for the in situ Raman experiment, the phase transformation was observed during heating and cooling. For this study, all spectra were obtained at room temperature after the
annealing of LiBH[4] with LiCl. This fact evidences the role of Cl in the short-term stabilization of the high temperature hexagonal phase of LiBH[4]. In fact, Arnbjerg et al. [12] have demonstrated
that the phase transition temperature in LiBH[4] (during cooling) can approach 20 °C, depending on the degree of Cl-substitution. They note that this change in the phase transition temperature
indicates a stabilization of the hexagonal phase caused by the incorporation of Cl^− in the LiBH[4] structure.
According to the X-ray diffraction data, which were obtained shortly after the infrared experiment (and therefore better correspond to the curve 5, Figure 6), o-Li(BH[4])[0.90]Cl[0.10] , phase
prevails in this sample, which explains its similarity to the spectrum of pure o-LiBH[4]. Note that the computed spectra of o-LiBH[4] and the o-Li(BH[4])[0.75]Cl[0.25] are very similar (Figure 4a).
Present results are in good agreement with those reported by Arnbjerg et al. [12]. In fact, after annealing during cooling, the hexagonal phase found at high temperature is quenched in the mixture.
The ortho-to-hexa phase transition is promoted at RT, as evidenced by ATR measurements (Figure 6). During the phase transformation, the Cl^− content in LiBH[4] is strongly reduced and LiCl is formed
The Cl^− content in the orthorhombic phase (x = 0.10), observed in PXRD measurements, is in agreement with previous experiments [12], but it turns out higher than that predicted by combined ab initio
and CALPHAD calculations (Figure 3). As described before, the estimated solubility range is very sensitive to the results of ab initio calculation, which would need a more accurate determination of
lattice stability in order to fully describe the experimental findings.
|
{"url":"http://www.mdpi.com/2073-4352/2/1/144/xml","timestamp":"2014-04-20T23:29:04Z","content_type":null,"content_length":"102930","record_id":"<urn:uuid:9a29e0fc-1260-4474-8307-4439b7d7db2d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physicists break record for extreme quantum state - physicsworld.com
Physicists in China have broken their own record for the number of photons entangled in a "Schrödinger's cat state". They have managed to entangle eight photons in the state, beating the previous
record of six, which they set in 2007. The Schrödinger's cat state plays an important role in several quantum-computing and metrology protocols. However, it is very easily destroyed when photons
interact with their surroundings, prompting the researchers to describe its creation in eight photons as "state of the art" in quantum control.
In Erwin Schrödinger's famous thought experiment of 1935, all of the molecules in a cat are in a superposition of two extreme states – living and dead – and an observer cannot tell which until a
measurement puts the cat into one of the two states. Today physicists use the term "Schrödinger's cat state" (or Greenberger–Horne–Zeilinger state) to describe any multi-particle quantum system that
is in a superposition of extreme states.
For example, a pair of entangled photons can be created in the lab such that they are in a superposition of both photons having horizontal polarization and both having vertical polarization.
Entanglement is a quantum effect, which means that particles such as photons can have a much closer relationship than is allowed by classical physics. By measuring the polarization of one of the
pair, we immediately know the state of the other, no matter how far apart they are.
The Schrödinger's cat state of eight entangled photons was created by Jian-Wei Pan and colleagues at the University of Science and Technology of China in Hefei. The team began by firing laser light
at a nonlinear crystal, which converts single high-energy photons into pairs of entangled lower-energy photons with perpendicular polarizations. The polarization of one of the photons was then
rotated by 90°, which puts each pair into a two-photon Schrödinger's cat state.
Pairing up photons
Pan and colleagues then took one photon from each pair and combined the quartet in an optical network consisting of three polarizing beam splitters. One photon leaves each of the network's four
outputs only if all four photons have the same polarization. As there is no way of knowing what this common polarization is, the photons are therefore entangled in a Schrödinger's cat state. But as
each of the four photons is already entangled with one other photon, all eight photons are therefore entangled in a Schrödinger's cat state.
This entanglement was established by measuring the polarizations of the eight photons as they emerged from the experiment. This reveals the "fidelity" of the eight-photon Schrödinger's cat state,
which effectively says how close the different states are to the ideal Schrödinger's cat. The team measured a fidelity value of 0.708 – much larger than the threshold value of 0.5, above which a
state is considered to be entangled.
According to Xiao-Qi Zhou of the University of Bristol, UK, Pan and team were able to entangle eight qubits because they managed to separate the photons into "ordinary light" and "extraordinary
light". Both types are produced by parametric down conversion and ensuring that four extraordinary photons are sent for further entanglement boosts the efficiency of the process.
Hyper-entanglement could be next
Pan told physicsworld.com that there are several ways that the team can take this work forward. One is to use "hyper-entanglement" to create a 16-qubit Schrödinger's cat state for their eight
photons. Hyper-entanglement makes use of more than one degree of freedom of the photon – momentum and polarization, for example – which multiplies the number of states that can be entangled. In 2008
the team used hyper-entanglement to create a 10-state Schrödinger's cat state using five photons.
Zhou points out that the technique of separating ordinary and extraordinary light could also be used to entangle six photons at a higher efficiency than previously possible. This, he thinks, could be
used to create a wide range of different entangled states that could be used in quantum computing.
The Schrödinger's cat state could be particularly useful for quantum error correction, which protects a quantum computation from the destructive effects of noise. For example, one bit of quantum
information (a qubit) could be encoded into all eight photons of a Schrödinger's cat state. If the polarization of one of the eight photons is inadvertently flipped, for example, this can be
corrected by determining the value of the other seven photons.
9 comments
Comments on this article are now closed.
• Sasanka Datta Jun 23, 2011 3:26 AM
The comments related to the articles in this site should be filtered. The site is excellent for professional and amateur physicists but comments are sometimes irrelevant.
• gone Jun 23, 2011 8:17 AM
Photon becomes quartet?
"Pan and colleagues then took one photon from each pair and combined the quartet" - OK, I think what this means is that four post non-linear crystal pairs each provide one photon to create a
quartet - not that somehow a single photon becomes a quartet. It's not completely clear - can a PW person confirm?
• Hamish Johnston Jun 23, 2011 8:38 AM Bristol, United Kingdom
The former
Thanks for the comment. The former is correct, the four photons together made the quartet.
Originally posted by gone View comment
"Pan and colleagues then took one photon from each pair and combined the quartet" - OK, I think what this means is that four post non-linear crystal pairs each provide one photon to
create a quartet - not that somehow a single photon becomes a quartet. It's not completely clear - can a PW person confirm?
• exponent137 Jun 23, 2011 8:57 AM
Time of duration
How it is with duration of this coherence state? This is also important.
• zrzzz Jun 23, 2011 11:49 AM
What a charmed life you must lead.
Originally posted by Sasanka Datta View comment
The comments related to the articles in this site should be filtered. The site is excellent for professional and amateur physicists but comments are sometimes irrelevant.
So you want PhysicsWorld to hire a full-time moderator just so you never have to come across a comment that offends your delicate sensibilities? What would be "on topic", comments that agree
with your singular worldview?
• chaoyanglu Jun 24, 2011 4:06 PM
please correct the typo: Jian-Wei Pan not Jian-Weo Pan, thanks
• magesh Jun 24, 2011 4:57 PM pune, India
So we have a evidence of string theory
This is a good one..Its a sweet information for string theory Physists...
• reader01 Jun 28, 2011 11:03 AM
differences between particles
In this case ( entanglement ) we can see big differences between particles in process of particles´ entanglement. Photons as they moove c speed are not so easy entangled as protons or
electrons. But as they are not influence each other, their density in small volume can be much higher. But problem is how to make these photons entangled in very short time ( as they are
still in this volume )? Entanglement show us quantum differences between different particles. These differences should be described by quantum eaquations. All principles of entanglement of
different particles should be compared and this comparism will show us quantum character of these particles and also character of entaglement itself.
• reader01 Jul 20, 2011 1:02 PM
I know it sound unrealistic
but can we make entangled photons by anihilation of entangled protons and entangled antiprotons?? I know the problem is haw to make antiprotons entangled...We can preserve antiprotons for
needed long time but can we make them entangled??? And if we have both entangled than are entangled arising photons???
|
{"url":"http://physicsworld.com/cws/article/news/2011/jun/22/physicists-break-record-for-extreme-quantum-state","timestamp":"2014-04-19T12:00:06Z","content_type":null,"content_length":"50118","record_id":"<urn:uuid:578f1f4d-2150-475e-a2f6-57bd225ccb29>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are there two non-diffeomorphic smooth manifolds with the same homology groups?
up vote 4 down vote favorite
I know that there definitely are two topological spaces with the same homology groups, but which are not homeomorphic. For example, one could take T^2 and S^{1}\vee S^{1}\vee S^{2} (or maybe S^{1}\
wedge S^{1}\wedge S^{2}), which have the same homology groups but different fundamental groups. But are there any examples in the smooth category?
homology smooth-manifolds
The Lens spaces $L(p,q_1)$ and $L(p,q_2)$ for suitable choices of $q_i$'s are homotopy equivalent but not homeomorphic. For example, one can take $L(7,1)$ and $L(7,2)$. – Somnath Basu Apr 27 '10 at
add comment
4 Answers
active oldest votes
Sure -- there are an abundance of homology spheres in dimension 3 (the wikipedia article is pretty nice).
For other examples, in dimension 4 you can find smooth simply-connected closed manifolds whose second homology groups (the only interesting ones) are the same but which have
up vote 10 down vote different intersection pairings.
This last subject is very rich. For bathroom reading on it, I cannot recommend Scorpan's book "The Wild World of 4-Manifolds" highly enough.
add comment
More surprisingly, you can find smooth manifolds which are homeomorphic (and in particular, have the same homology) but are not diffeomorphic! The best-known examples are exotic
up vote 8 down spheres.
I actually knew that. For some reason, I didn't make the connection between homeomorphism and same homology groups. Thanks! – Kirill Levin Oct 30 '09 at 4:11
add comment
A more trivial example is R^n and R^m for m and n different. (More generally two contractible spaces of different dimensions.)
up vote 7 down vote
I was thinking of a circle and an annulus, you beat me :-) – Patrick I-Z Jan 6 '11 at 0:29
add comment
Serre has shown with the help of two embeddings phi and psi of a quadratic number field into C that there exist two projective surfaces V(phi)and V(psi) over C which have non
isomorphic fundamental groups (and so are non homeomorphic) but have isomorphic Betti numbers.
The comparison theorem between étale cohomology and singular cohomology ( which didn't exist when Serre wrote his article ) even proves thar these surfaces have the same singular
cohomology with value in any finite abelian group or over Q_l(l-adics) for any prime l.
up vote 5 down I don't know if these surfaces have the same homology and so I don't answer your question in the strict sense (anyway, now you have Andy's and Eric's most satisfying solutions); but
vote these remarks in an algebraic geometry context might interest you. Serre's article is
Exemples de variétés projectives conjuguées non homéomorphes, C.R. Acad.Sci.Paris 258 (1964), 4194-4196
It is of course reproduced in his Collected Papers.
add comment
Not the answer you're looking for? Browse other questions tagged homology smooth-manifolds or ask your own question.
|
{"url":"http://mathoverflow.net/questions/3390/are-there-two-non-diffeomorphic-smooth-manifolds-with-the-same-homology-groups?sort=votes","timestamp":"2014-04-17T01:37:44Z","content_type":null,"content_length":"64166","record_id":"<urn:uuid:e99059db-9610-47ef-8563-bbef9c4c4e51>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Optimal discretization and expansion order of arbitrary data
Hi all,
I am trying to figure out 1) What to call my problem so I can better research the literature, and 2) see if anyone here knows of a solution.
Essentially, I have a large set of f(x) vs x points (~20,000) which I need to split in to subdomains in x, and within each subdomain calculate a functional expansion of f(x). I want to do this in an
optimal manner such that 1) the number of subdomains is minimized - or at least manageable, and 2) the number of expansion orders (probably Legendre) within each subdomain is also minimized.
Does anyone have any idea what 'field' of math this could be considered, and where to begin searching around? Unfortunately, this is just a minor step in what I have to do so I don't want to expend
much effort here.
Thanks for your help!
|
{"url":"http://www.physicsforums.com/showthread.php?p=4262237","timestamp":"2014-04-19T22:58:11Z","content_type":null,"content_length":"25912","record_id":"<urn:uuid:ff8dc29a-7b2a-40bc-9fd4-92f962ed1a6c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Heating and cooling rates
Next: Thermal Balance Calculation Up: Sample Results: Low Density Previous: Ionization Balance Contents
A by-product of the ionization and excitation balance is the emissivity and opacity of the gas, which correspond to the net heating and cooling rates. Figure 6 shows the heating and cooling rates as
a function of temperature and ionization parameter for the various elements. Heating rates are shown as solid curves, cooling rates as dashed curves. Rates assume solar abundances ([Grevesse, Noels
and Sauval 1996]), and are given in units of erg s
A coronal plasma cools more efficiently, in general, than a photoionized plamsa since the ionization state is lower at a given temperature. Figure 7 shows the cooling rate as a function of
temperature for such a plasma. Comparison of these rates with the results of Figure 6 shows similarity with the cooling rate at the lowest ionization parameter plotted there (log
Next: Thermal Balance Calculation Up: Sample Results: Low Density Previous: Ionization Balance Contents Tim Kallman 2014-04-04
|
{"url":"http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/xstar/docs/html/node180.html","timestamp":"2014-04-24T05:49:18Z","content_type":null,"content_length":"5586","record_id":"<urn:uuid:e8a1c854-aaae-4e7f-94eb-7b4b6386896e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post synaptic behavior
Next: Glutamate Up: Synapses Previous: Synapses
Given that a certain amount of transmitter is released from the synaptic terminal, we want to now model the current that appears in the postsynaptic site. To do this we need to model s(t), the
fraction of open channels open due to the transmitter T. The typical way to model these is via a series of reactions or so called Markov models in which the probability of channels opening and
closing is described by a series of rate constants. This kind of detail is well beyond what one wants for simple modeling, thus, instead, we will describe a couple of simple models for the 4 primary
types of postsynaptic response induced by the two principle neurotransmitters.
G. Bard Ermentrout
|
{"url":"http://www.cnbc.cmu.edu/~bard/synapse/node4.html","timestamp":"2014-04-19T05:50:09Z","content_type":null,"content_length":"2692","record_id":"<urn:uuid:c864571b-fcc2-425b-81c1-a7a8b87e0ce0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Palmer Township, PA ACT Tutor
Find a Palmer Township, PA ACT Tutor
...I used to work for Kaplan, one of the big tutoring companies, but I knew that I'd be better able to help students on my own. I've taught everything from Debate and Creative Writing to
Calculus, Physics, and the MCATs. I even tutor the ASVAB, LSAT (a particularly fun test), GRE, GMAT and most of...
34 Subjects: including ACT Math, English, calculus, writing
...You need to expose yourself to a large amount of SAT math problems. Make certain you do the practice test. When we meet I will give you 4 worksheets of 25 problems.
22 Subjects: including ACT Math, geometry, ASVAB, algebra 1
...In addition to being a certified English teacher with experience teaching at both the middle school and high school level, I am a member of a long-term writers' group editing novels. I have
worked in classrooms with students as young as kindergarten on all aspects of reading, writing, language l...
25 Subjects: including ACT Math, reading, chemistry, English
...Science and math, unlike some other subjects, don't depend on natural aptitude to signal success. Instead, success depends only on how hard students are willing to work on it. To this end, I
expect all of my students to work, but I don't expect them to understand the material the first (or fifth) time around.
17 Subjects: including ACT Math, chemistry, calculus, geometry
...I have been a practicing engineer for many years and thus I am familiar with many practical applications of math concepts to real world examples. My teaching philosophy is to maintain a
student-focused and student-engaged learning environment to ensure student comprehension and student success. ...
12 Subjects: including ACT Math, calculus, geometry, statistics
Related Palmer Township, PA Tutors
Palmer Township, PA Accounting Tutors
Palmer Township, PA ACT Tutors
Palmer Township, PA Algebra Tutors
Palmer Township, PA Algebra 2 Tutors
Palmer Township, PA Calculus Tutors
Palmer Township, PA Geometry Tutors
Palmer Township, PA Math Tutors
Palmer Township, PA Prealgebra Tutors
Palmer Township, PA Precalculus Tutors
Palmer Township, PA SAT Tutors
Palmer Township, PA SAT Math Tutors
Palmer Township, PA Science Tutors
Palmer Township, PA Statistics Tutors
Palmer Township, PA Trigonometry Tutors
Nearby Cities With ACT Tutor
Alpha, NJ ACT Tutors
Bethlehem, PA ACT Tutors
Catasauqua ACT Tutors
Easton, PA ACT Tutors
Forks Township, PA ACT Tutors
Freemansburg, PA ACT Tutors
Glendon, PA ACT Tutors
Harmony Township, NJ ACT Tutors
Nazareth, PA ACT Tutors
New Hanover Twp, PA ACT Tutors
Phillipsburg, NJ ACT Tutors
Riegelsville ACT Tutors
Stockertown ACT Tutors
Tatamy ACT Tutors
West Easton, PA ACT Tutors
|
{"url":"http://www.purplemath.com/Palmer_Township_PA_ACT_tutors.php","timestamp":"2014-04-18T09:02:30Z","content_type":null,"content_length":"24071","record_id":"<urn:uuid:7ca5d543-6498-4083-9351-2966be02fc34>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|