Kolman Busby Ross
Title: Discrete Mathematical Structures (6th Edition) Author: Bernard Kolman,Robert Busby,Sharon C. Ross, Publisher: Prentice Hall Pages: 552 Published: 2008-07-24
Math 272-01, Spring 2011 Course Syllabus, Call # 42643 Title: Discrete Mathematics Text: Discrete Mathematical Structures, 6th edition, by Kolman, Busby, and Ross.
Professional Bachelor in Software Development: Literature, spring 2014 Contract-based Software Development Textbooks: Kolman, Busby, Ross: Discrete Mathematical Structures, Sixth Edition.
By Kolman, Busby, Ross - Prentice-Hall Strengths: Starts in the right place, with a little background, then logic and proofs. Goes basically in the right order. How to write algorithms in pseudocode
is in Appendix A.
Text: Discrete Mathematical Structures 6th ed., by Kolman, Busby, and Ross. Course Topics: Sets, sequences, matrices, logic, counting (permutations and combina-tions), relations and digraphs,
functions. Additional topics according to student interest
Title: Discrete Mathematical Structures (6th Edition) Author: Bernard Kolman,Robert Busby,Sharon C. Ross, Publisher: Prentice Hall Pages: 552 Published: 2008-07-24
Discrete Mathematics, Johnsonbaugh. Discrete Mathematical Structure, Kolman, Busby, and Ross.
Kolman, Busby & Ross Discrete mathematical structures Pearson International, 2009 Lipschutz, Lipson Discrete mathematics Schaum's Outline Series, 2007 Matthews Elementary linear algebra Nolt, Rohatyn
& Varzi Logic Schaum's Outline Series, 2011
Discrete Mathematical Structures, Latest Edition, Kolman, Busby and Ross, Pearson Prentice Hall Scientific or Graphing Calculator with functions nPr, nCr, and x! Last revised: 12/2012 Course
coordinator: Don Reichman Information resources:
Kolman/Busby/Ross: Discrete Math Structures, 6/e . ISBN: 0132297515 (ISBN-13: 9780132297516) Title: Discrete Mathematics Author: Pearson Education Created Date:
Discrete Mathematical Structures, 5th Edition; Kolman, Busby, and Ross; Prentice Hall; 2004; ISBN# 0-13-045797-3 VI. Course Objectives This course is designed to provide the student with knowledge of
mathematical topics related to
Kolman/Busby/Ross, Discrete Math Structures, 6/e . TECHNICAL AND BUSINESS MATH . Carmen/Saunders, Mathematics for the Trades: A Guided Approach, 9/e . Cleaves/Hobbs, Business Math, 8/e . Cleaves/
Hobbs, Study Guide for Business Math, 8/e .
Authors: Kolman, Busby, and Ross Publisher: Prentice Hall The covered subjects of this course are given from Chapter 1 to Chapter 8 of this textbook. The table of contents entries for these chapters
are: 1. Fundamentals. Sets and Subsets.
Discrete Mathematical structure, Kolman, Busby, Ross 9. Graph Theory, Narsingh Deo 10. Linear Algebra Hadley 11. Numerical Analysis & Computational procedure S.A.Mollah 12. Mathematical Analysis,
Malik & Arora 13. Differential Calculus, Ghosh, Maity
Kolman,Busby and Ross, “Discrete mathematical Structures and graph theory” 1003 Introduction to logic circuits and Digital Design Digital Logical Circuits Boolean algebra Truth tables Combinational
circuits Flip-flops, Registers Counters ...
Kolman, Busby and Ross, “Discrete Mathematical Structure”, PHI, 1996. Reference Books: 1. H.K. Dass, “Advanced Engineering Mathematics”; S.Chand & Co., 9th Revised Ed.,2001. 2. S.K. Sarkar, “Discrete
Maths”; S. Chand & Co., 2000 Kolman, Busby & Ross "Discrete Mathematical
Kolman, Busby, Ross Discrete Mathematical Structures Skvarcius and Robinson Discrete Mathematics with Computer Science Applications . Title MATH210_May2006 Author: mathews Created Date: 7/1 ...
Kolman Busby, Ross, Discrete Mathematical Structures, Pearson Education, 6th Ed., 2009. Reference Book(s) R1. C.L. Liu, Elements of Discrete Mathematics, 2 nd Edition, McGraw Hill, 1986. R2. Goodaire
& Parmenter : Discrete mathematics & graph theory, Pearson Education, 2000.
Reading: Kolman, Section 7.1 CSCI 1900 – Discrete Structures Trees – Page 2 Brain Teaser Assume you are driving with your child, and ... -Kolman, Busby, and Ross, p. 254 CSCI 1900 – Discrete
Structures Trees – Page 4 Trees – Our Definition
1-Kolman, Busby, and Ross, "Discrete Mathematical Structure", 6th edition, Prentice Hall, 2008. Course Title: Discrete Mathematics 630260 Prerequisite: Logic Circuits 630211 Text Book: Discrete
Mathematics and its Applications, by Kenneth H. Rosen, McGraw Hill, 2013, 7th edition ...
[T2]Kolman, Busby and Ross, “Discrete Mathematical Structure”, PHI, 1996. REFERENCE BOOKS: [T1]S.K. Sarkar, “Discrete Maths”; S. Chand & Co., 2000 . BCA 2nd Semester Principles Of Management Exam
Code: 104 OBJECTIVE The basic objective of this ...
By Kolman, Busby, and Ross Course Information, Assignments, and Lecture Schedule All course information, assignments, lecture notes, and practice exams can be accessed from the web through the
instructor’s homepage: http://cs.unm.edu/~joel/
Discrete Mathematical Structures(3rd Edition) by Kolman, Busby & Ross PHI. 2. Discrete Mathematical Structures with Applications to Computer Science by Tremblay &Manohar, Tata McGraw- Hill. 3.
Combinatorial Mathematics, C.L. Liu (McGraw Hill)
Scope of Syllabus: Kolman, Busby &Ross and Tremblay&Manohar Unit I: section 2.1, to 2.4, 1.1, 1.2. Unit II: section 4.1 to 4.5, (excluding 4.6), 4.7, 4.8, 7.1 (excluding product ...
Text: Discrete Mathematical Structures (Sixth Edition), by Kolman, Busby and Ross; Publisher: Pearson/Prentice-Hall, 2009. Course Goals: General Education Skill Objectives: (1) Thinking Skills:
Students engage in the process of inquiry and problem solving.
MATH 2420 Kolman Busby and Ross Discrete Mathematical Structures 6th 9780132297516 Pearson N/S/C/E/A/F A A Math/Computer Science/ENGR MATH 2431 Stewart Calculus: Concepts and Contexts (Text &
WebAssign & E-Book) 4th 0538796855 Cengage Text & WebAssign & E-Book
Kolman, Busby and Ross, “Discrete Mathematical Structures”, Prentice Hall of India, New Delhi (2009). Unit I: Chapters 3 and 4 Unit II: Chapters 6 and 7 Unit III: Chapter 9 (9.1, 9.2, 9.4) and
chapter 10 (10.1-10.5) 2.
Kolman, Busby & Ross, Discrete Mathematical Structures, Prentice-Hall Publishers. 4. Ralph P. Grimaldi, Discrete and Combinatorial Mathematics: An Applied Introduction, Addison-Wesley Pub. Co., 1985.
===== COURSE: PAKISTAN STUDIES CLASS: BS (IT) SEMESTER: II Week ...
1. Elements of Discrete Mathematics, C L Liu, D P Mohanpatra,TMH 2. Discrete Mathematics, Schaum’s Outlines,Lipschutz,Lipson TMH. 3. Discrete Mathematical Structures, Kolman, Busby, Ross, 6th ed.,
PHI, 2009
Discrete Mathematical Structures by Kolman, Busby & Ross. 4. Graph Theory with Applications to Engineering and Computer Science by Narsingh Deo. PAPER CODE BCA - 201 Mathematics II (Calculus ) Jiwaji
University , Gwalior – BCA – Session 2011-14 (Affiliated Colleges)
Discrete Mathematical Structures, Kolman, Busby, Ross, Pearson Prentice Hall, 6th Edition, 2009, 0-13-207845-7; 978-0-13-207845-0 (Required) References: Discrete Mathematics and its applications,
Rosen, Kenneth H. McGraw-Hill Higher Education, 2007, 0071244743;
Discrete mathematical structures by B Kolman RC Busby, S Ross PHI Pvt. Ltd. Discrete mathematical structures by RM Somasundaram (PHI) EEE edition References:
By Kolman, Busby, and Ross. 2 Grading • Homework: – Problem sets assigned Mondays, Tuesdays, and Wednesdays. – Neither turned in nor graded. – Students are highly encouraged to work together. •
Quizzes: – Each Monday covering previous week’s homework.
Bernard Kolman, Robert C. Busby ,Sharon Cutler Ross, “Discrete Mathematical Structures”, 5th Edition, Pearson Education, 2004 . Created Date: 12/24/2010 10:46:34 AM ...
Discrete Mathematical Structures, 6/e, Kolman, Busby, Ross PHI 2009. 4. Discrete Mathematics, 6/e, Johnsonbaugh, Pearson, 2005. 5. Discrete Mathematics, 6/ e, Malik, Sen, Cengage Learning 2004. 6.
Discrete Mathematics for computer science, Bogart Stein , Drysdale, Springer 2005.
Course Content & Delivery Text: Discrete Mathematical Structures (6th edition) by Kolman, Busby, and Ross; ISBN 0132297515 Course Topics: The following is a detailed list of the sections that will be
covered from your
• Kolman, Busby & Ross Discrete Mathematical Structures, 5th Edition, Pearson education, 2003 • Trembly. J.P & Manohar. P,Discrete Mathematical Structures with Applications to Computer Science, 1975.
M.Sc. (C. S.) I Semester CS-103 Operating System
1.Kolman, Busby, Ross, ”Discrete Mathematical Structures”, 5th edition, PHI. Reference Books: 1. Gary Haggard, John Schlipf, Sue White sides, ”Discrete Mathematics for Computer Science”, Thomson
Publications. 2.
Discrete Mathematical Structures, Kolman, Busby, Ross, Pearson Prentice Hall, 6th Edition, 2009, 0-13-207845-7; 978-0-13-207845-0 (Required) References: Discrete Mathematics and its applications,
Rosen, Kenneth H. McGraw-Hill Higher Education, 2007, 0071244743;
Discrete mathematical structures, Bernard Kolman, Robert C. Busby, Sharon Ross, 3rd edition, 1998, Prentice Hall of India, New Delhi. A.C.S.’12 . Title: Microsoft Word - App-35 DISCRETE
MATHEMATICS.docx Author: unomnoc Created Date:
Discrete Mathematics Textbooks 1. Discrete Mathematics with Proof by Gossett. 2. Discrete Mathematics with Combinatorics by Anderson 3. Discrete Mathematical Structures by Kolman, Busby, and Ross.
Kolman, Busby & Ross, Discrete Mathematical Structures, 4 th edition, 2000, Prentice- Hall Publishers. Ralph P. Grimaldi, Discrete and Combinatorial Mathematics: An Applied Introduction,
Addison-Wesley Pub. Co., 1985.
1.Kolman, Busby, Ross, ”Discrete Mathematical Structures”, 5 ...
“Discrete Mathematical Structures, 4th edition”, Kolman, Busby & Ross, Prentice-Hall Publishers 2000. 4. “Discrete and Combinatorial Mathematics: An Applied Introduction”, Ralph P. Grimaldi,
Addison-Wesley Pub. Co., 1985. Course Name: Operating Systems
Discrete Mathematical Structures, Kolman, Busby, Ross, 6 th ed., PHI, 2009 4. Discrete Mathematics, Johnsonbaugh, 6 th ed., Pearson, 2005 5. Discrete Mathematics, Malik, Sen, 6 th ed., Cengage
Learning, 2004 6.
Discrete Mathematical Structures, Kolman, Busby, Ross, 6th ed., PHI, 2009 4. Discrete Mathematics, Johnsonbaugh, 6th ed., Pearson, 2005 5. Discrete Mathematics, Malik, Sen, 6th ed., Cengage Learning,
2004 www.jntu9.com || www.jntu9.ina JNTU9 ...
... Kolman, Busby & Ross, Discrete Mathematical Structures, 5th ed., Pearson Education, 2003. References: 1) Richard Johnsonbaugh, Discrete Mathematics, 5th Edn, Pearson Education (LPE), 2009. BLDE Association’s A. S. Patil
College of Commerce(Autonomous), Bijapur, BCA Programme 6.
Discrete Structure TUTORIAL NO 1 TITLE: SET THEORY OBJECTIVE: To study set theory. REFERENCE: 1. Discrete mathematical structures by Kolman, Busby and Ross, Fourth Edition.
Kolman, B., Busby, R.C. and Ross, S.C., Discrete Mathematical Structures, Fourth Edition, Prentice Hall, 2000. 5. Ross, S.M., Introduction to Probability Models, Eighth Edition, Academic Press, 2003.
Title - C5. DEFINITIVE COURSE DOCUMENT AND COURSE FILE
Discrete Mathematical Structures, Kolman, Busby, Ross, 6 th ed., PHI, 2009 4. Discrete Mathematics, Johnsonbaugh, 6 th ed., Pearson, 2005 5. Discrete Mathematics, Malik, Sen, 6 th ed., Cengage
Learning, 2004 6.
|
{"url":"http://ebookily.org/pdf/kolman-busby-ross","timestamp":"2014-04-23T07:45:16Z","content_type":null,"content_length":"40759","record_id":"<urn:uuid:9bf137aa-ef82-41ac-a4a9-7b682857a244>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elisabeth decided to define her own project. She became interested in the Golden Ratio. She came to me with a boatload of questions she wanted to tackle ranging from: 1) How can I create the Golden
Spiral? to 2) Why is the Golden Rectangle aesthetically pleasing?
I said, "yeah, that's some good stuff there. I trust you. Do it."
Two days later, I sit down with her as she's working and watch her think. She has this perplexed look on her face as she is shuffling through her notes. She has about a half-dozen different
rectangles sketched with different dimensions along with more questions than what she started with.
She looks up and says, "The more I look through all this, I'm wondering, 'what's my question?'"
That's right, Elisabeth. Don't ever forget that.
This year, I decided to take a much more hands-off approach when it came to student projects. There were some homeruns, but there were too many swings-and-misses. Some students opted not to even step to the plate. I suppose that's what happens when students are offered more autonomy. But I didn't do enough to prepare them to make decisions in such an open-ended environment. I think I was too hands-off.
For the final project, I gave my 8th graders seven choices; one of which was to determine the angle that would maximize the distance traveled by a projectile.
What they knew:
• Linear motion model.
• Vertical motion model.
What they didn't know:
• Vertical and horizontal motion do not affect one another.
• How vertical and horizontal motion work together to determine the path of a projectile.
• Trig ratios
Last year I had students do an investigation on trig ratios prior to working with projectile motion. But due to a shortened school year and the fact that all of my students will be taking geometry
next year, I had to cut something.
It took a few short conversations for the group to get the fact that horizontal and vertical work together to determine the path and that they needed to use the vertical motion model to determine how
long the ball would be in the air. From there, they could figure out how far it would go.
But there was one problem: they didn't know how fast the ball was travelling which made it impossible to determine the vertical and horizontal components.
The Process
Q: How fast is the ball travelling when it is hit?
A: I didn't specify, did I?
This led to a nice conversation on how we need to eliminate as many variables as we can.
Solution: Pick a velocity and work with it. They chose 100 ft/sec.
Q: So how fast is the ball travelling vertically and how fast is it travelling horizontally?
A: That depends.
Q: On what?
So we took turns pushing Joey around the room from behind and the side simultaneously. Each time one person pushed harder than the other.
Conclusion: If the person from the back pushes harder, Joey goes forward more. If the person from the side pushes harder, Joey moves to his left more.
Then we talked about how the velocities can be modeled using vectors and we can use what we know about triangles. Since the forces are perpendicular, we have a right triangle.
Q: If all we know is the hypotenuse of the right triangle, how do we find the other lengths?
A: Is that really all you know?
Solution: They settled on using a 45-45-90 since that is the only way they could figure out the other two sides.
Q: But what do we do for other angles?
A: Yeah, that's kinda tough, huh? Why don't you use a protractor to draw the angle you want, build the triangle you want and measure.
Q: Can we use GeoGebra?
A: Or that.
They used an applet with a fixed hypotenuse of 100 and gathered data on the other two sides.
Q: Is there an easier way?
A: Yeah. It's called sine and cosine. See how these ratios don't change as long as the angle remains constant? (it took a little longer than that, but you get the point)
They were off and running.
• 45 degrees maximizes distance.
• Complementary angles yield the same distance.
• Oh, and this:
I think you physics folks would say something like this:
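With g = 32 ft/s^2, the distance works out to d = v^2 * sin(2*angle) / 32, which peaks when the angle is 45 degrees and gives equal values for complementary angles. Here's a sketch of the group's computation in code (assuming their 100 ft/sec pick and the -16t^2 vertical motion model):

Module ProjectileRange
    Sub Main()
        Const speed As Double = 100.0 ' ft/sec, the launch speed the group chose

        For angle As Integer = 15 To 75 Step 15
            Dim radians As Double = angle * Math.PI / 180.0
            Dim vx As Double = speed * Math.Cos(radians) ' horizontal component
            Dim vy As Double = speed * Math.Sin(radians) ' vertical component

            ' The vertical motion model h = -16t^2 + vy*t returns to zero at t = vy/16.
            Dim hangTime As Double = vy / 16.0

            Console.WriteLine("{0,2} degrees: {1,6:F1} ft", angle, vx * hangTime)
        Next
    End Sub
End Module

Run it and 45 degrees tops out (312.5 ft at 100 ft/sec), with 15/75 and 30/60 pairing up exactly.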
Preconceptions? Misconceptions? Heck, I don't care what we call them. All I know is that I have kids coming to class and making decisions with their heart and not their head. Intuition is great.
Inductive logic is great. But it just isn't enough. Back it up. Verify it. Embrace the conflict that arises when what you thought was true turns out to be, well, not so much.
I've taken to putting these ( )conceptions front and center. Put them out there for kids to wrestle with. Plug in some numbers. Argue. Get frustrated. And then walk away with a little more
understanding than they did before.
Today's episode centered on the equation:
(2/3) (3x + 14) = 7x + 6 and students were asked to multiply both sides by 3.
And, of course, they came up with 2(9x + 42) = 21x + 18.
Well because, naturally, a(bc) = (ab)(ac).
So what do you do?
The younger me would have said something profound like, "You don't distribute multiplication over multiplication. I'll say it again slower for those of you taking notes. You. Don't. Distribute.
Multiplication. Over. Multiplication."
There ya go. My finest pedagogical moments summed up by slowly repeating a negative definition of a property they obviously don't fully understand.
The older, wiser, 5-kid-having self is a bit more patient.
Up on the board goes:
True or False
2(3 · 4) = 2(3) · 2(4)
2( 3 + 4 ) = 2(3) + 2(4)
Most kids said that their gut told them that both equations were true. In fact, many said, "true" before the virtual ink had dried.
"But what does your head tell you? Verify that both equations are true."
Oh, no they're not both true.
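(Indeed: 2(3 · 4) = 24, but 2(3) · 2(4) = 48; meanwhile 2(3 + 4) = 14 = 2(3) + 2(4).)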
"Ok, good. So now you have a conflict. What you think should be true is different than what you know is true. Why?"
This is why I have been calling these things preconceptions. Students bring something to the task. Always. They never come empty handed. These responses that #needaredstamp are usually a right idea
used at the wrong time. It's like a kid who has never played sports before goes from learning basketball to soccer. Coach says dribble and the kid picks up the ball and bounces it as he runs down the
pitch. Right rule; wrong application.
I've had kids tell me that they do certain operations on a problem because "it just felt right." I'm not sure how to address that other than to put them in a position for their feelings to betray
them and help them deal with the disappointment in a constructive way.
Next week's episode: Why Love Isn't an Emotion
My day begins at the door where I greet my students with handshakes and fist bumps.
Her day begins with five little boys climbing into bed and dog-piling her.
My lesson planning is done sitting at my desk.
Her lesson planning is often done at the bottom of that pile.
My lessons are informed by pacing guides, practice tests and proficiency levels.
Her lessons are informed by bugs in the backyard, bicycle tires and brotherly conflict.
My students use manipulatives to learn counting techniques.
Her students count out baby carrots as they make Dad's lunch.
I use rabbits to teach about exponential growth.
She uses a persuasive essay so her students can decide if they really want a rabbit.
My students solve fraction problems about baking and measuring.
Her students cut recipes in half, measure and bake.
My students eat lunch and go out to a treeless field for recess.
Her students eat lunch in the tree.
My classes end when the bell rings.
Her classes end at bedtime.
I use formative assessment to shape lessons.
She uses formative assessment to shape lives.
My students call me "Mister."
Her students call her "Mommy."
One of us teaches. The other pseudo-teaches.
I had a copy of John Van de Walle's book on my desk the other day when a student asked, "Mr. Cox, what are you reading?"
"Oh, this book on how to teach math. It's pretty interesting."
"Oh, yeah? What's so interesting about it?"
"Well, I like this chapter on Teaching Through Problem Solving. It mentions three different ways problem solving can be taught: Teaching for problem solving, Teaching about problem solving and
Teaching through problem solving."
"What's the difference?"
"Well, teaching for problem solving is when students will learn a certain set of skills and then later be asked to solve a problem using those skills. Teaching about problem solving is when students
learn about particular strategies for solving problems. And teaching through problem solving is when students are given the problem first and then they figure out how they want to solve it. It's kind
of the opposite of teaching for problem solving."
"Hmm. So teaching for problem solving means the teacher shows us how to do stuff first?"
"Yeah, pretty much."
"They don't think we can think for ourselves? That's kind of offensive."
|
{"url":"http://coxmath.blogspot.com/2011_05_01_archive.html","timestamp":"2014-04-20T18:23:01Z","content_type":null,"content_length":"106218","record_id":"<urn:uuid:a5f50780-85a4-4f71-bbe2-0f699cfaef8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Financial Calculators
Determine how much money you will have if you make an annual contribution to an annuity or savings account.
Create a printable amortization schedule for a car loan.
Compare the monthly payments, total interest and fees of multiple loans or mortgages.
Calculate the interest you will earn on a deposit based on the interest rate and length of time.
Compare the costs over time associated with two different options based on how much and how often you have to pay.
Determine the most expensive house that you can afford based on your income and loan information.
Track your personal net worth into the future based on your savings goals.
Use this calculator to see how much you can save by reducing your utility bill by just a couple of percent.
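The amortization and loan-payment calculators above boil down to the standard annuity formula. A rough sketch of the arithmetic, with made-up loan figures:

' Monthly payment and the first few amortization rows for a car loan.
' The loan figures are made up for illustration; the payment is the
' standard annuity formula M = P*r / (1 - (1 + r)^-n).
Module AmortizationSketch
    Sub Main()
        Dim principal As Double = 20000.0 ' loan amount
        Dim months As Integer = 60        ' 5-year term
        Dim r As Double = 0.06 / 12.0     ' 6% APR as a monthly rate

        Dim payment As Double = principal * r / (1 - (1 + r) ^ -months)
        Console.WriteLine("Monthly payment: {0:C}", payment)

        ' Each month's interest accrues on the remaining balance; the rest
        ' of the payment reduces the principal.
        Dim balance As Double = principal
        For month As Integer = 1 To 3
            Dim interest As Double = balance * r
            Dim toPrincipal As Double = payment - interest
            balance -= toPrincipal
            Console.WriteLine("Month {0}: interest {1:C}, principal {2:C}, balance {3:C}", month, interest, toPrincipal, balance)
        Next
    End Sub
End Module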
|
{"url":"http://www.myamortizationchart.com/financial-calculators/","timestamp":"2014-04-19T17:32:57Z","content_type":null,"content_length":"5510","record_id":"<urn:uuid:878d40db-db80-4b8d-bd64-ed5e64073dda>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resonance frequency of a matrix system
Hi all,
I have a 2d system of the form:
dm1/dt + A11 m1 + A12 m2 = 0
dm2/dt + A21 m1 + A22 m2 = 0
I saw that the resonance frequency of such a system is defined as:
fr = 1/(2*pi) * [ (A11 + A22)/2 + sqrt( ((A11 - A22)/2)^2 + A12*A21 ) ]
if I'm not mistaken - I can't even remember where I saw that formula.
Could someone confirm that this is the exact formula and/or explain why it holds, how it was obtained, and/or how it extends to the 3d case?
Thanks a lot!
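One way to see where the formula comes from: the bracketed quantity is exactly the larger eigenvalue of the coefficient matrix A = [[A11, A12], [A21, A22]]. Writing the system as dm/dt = -A m, the modes behave like exp(-lambda*t) with lambda an eigenvalue of A, and those eigenvalues are

lambda = (A11 + A22)/2 ± sqrt( ((A11 - A22)/2)^2 + A12*A21 ),

since ((A11 + A22)/2)^2 - det(A) simplifies to ((A11 - A22)/2)^2 + A12*A21; dividing by 2*pi presumably converts the rate to a frequency. For the 3d case there is no equally tidy closed form in general, so you would compute the eigenvalues of the 3x3 matrix numerically. A quick numeric sanity check (a sketch, with made-up coefficients):

Module ResonanceCheck
    Sub Main()
        ' Made-up coefficients for illustration only.
        Dim a11 As Double = 2.0, a12 As Double = 0.5
        Dim a21 As Double = 0.3, a22 As Double = 1.0

        ' Larger eigenvalue of [[a11, a12], [a21, a22]] via the posted formula.
        Dim mean As Double = (a11 + a22) / 2.0
        Dim disc As Double = ((a11 - a22) / 2.0) ^ 2 + a12 * a21
        Dim lambda As Double = mean + Math.Sqrt(disc)

        Console.WriteLine("larger eigenvalue = {0:F6}", lambda)
        Console.WriteLine("fr = {0:F6}", lambda / (2.0 * Math.PI))
    End Sub
End Module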
|
{"url":"http://www.physicsforums.com/showthread.php?p=4268748","timestamp":"2014-04-16T13:43:23Z","content_type":null,"content_length":"19800","record_id":"<urn:uuid:7b9282c6-1f29-43da-828e-ea1e15fc6b57>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
hi all
hi, i am yashwanth, in India, and i don't have a big background. i am a student of 16 yrs. i love programming (but not a pro).
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
Hi yazz;
Welcome to the forum. You are correct, programming requires a lot of math to do it well. Programmers who do not realize this will have a difficult time no matter how clever they are with their
favorite language.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: hi all
yeah and you guys are interesting...
Last edited by yazz (2014-01-15 03:58:30)
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
and i am happy on receiving quick reply..bobbym
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
Math is an interesting subject when you are not being burdened by having to do it in a classroom. So I guess, if you are interested in an interesting subject then you become interesting.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: hi all
wow...i know you are a math genie...are you in programming as well...
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
I am not any type of genius unless a non math genius is a type of math genius. I started as a math type then became a programmer because I wanted to solve tough math questions using a computer. I
thought that if I used a computer I would not have to use any more math. I was not very bright back then either. Then I found that I needed more math than ever before so I came back to math.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: hi all
if you don't mind, can you share what your mindset was when you were my age.. just to confirm that i am on the right path..
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
You are perhaps on a better path than I was. There was no internet and no computers when I was your age. Naturally, there was no programming either. Math was taught in a manner that when we would
enter a physics or chemistry class we were completely lost. We knew nothing of how to calculate a number. This effectively ended my dreams of being a biochemist or chemist.
Because I could not relate mathematics to anything in the real world I was always asking the question, what is this good for. So I say learn the computer first. Learn how to calculate real numbers on
it. Then take up math. You will never ask the question that confused me so much.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: hi all
yeah i will..and thank you...
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
But when a math question comes up come in here and post it.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: hi all
yeah.. and can i post my homework questions? i mean, it seems you and your team can teach me better than my teacher, because my math teacher is not as great and lively and friendly as you are.. she only wants us to score up and up, and quality education is not her priority.
Last edited by yazz (2014-01-15 05:24:40)
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
Remember to show whatever work you have done. It is easier to assist when you know exactly where the questioner went wrong.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: hi all
ok thank you..
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Re: hi all
Hi yazz,
Welcome to the forum!
Character is who you are when no one is looking.
Re: hi all
thank you..
Knowledge is knowing that a tomato is a
fruit, but Wisdom is knowing not to put it in a fruit salad
Star Member
Re: hi all
Welcome to the forum!!!
igloo myrtilles fourmis
Super Member
Re: hi all
Hello yazz, welcome to the forum!
Though the curriculum is good, in a country as populated as India, teaching quality is something which often gets little importance. Nonetheless, the fact that you are trying to supplement that
lacking with online resources is commendable!
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
|
{"url":"http://mathisfunforum.com/viewtopic.php?id=20391","timestamp":"2014-04-20T21:13:05Z","content_type":null,"content_length":"28998","record_id":"<urn:uuid:6e1be44c-7f7e-4b45-b7e3-8c049b5b682d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Point, GA Geometry Tutor
Find an East Point, GA Geometry Tutor
...I teach systematic methods to make sure that every student does the best they can on the SAT math. We begin by identifying areas that students struggle with and identify methods that make
sense and that the students can use on a regular basis. Once students begin to build up confidence we progress to more difficult questions.
17 Subjects: including geometry, chemistry, writing, physics
...Tutored on Precalculus topics during high school and college. Completed coursework through Multivariable Calculus. I love helping students understand Precalculus!
28 Subjects: including geometry, calculus, statistics, GRE
...During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry, and Physics courses and have tutored in all of these subjects. Currently, I co-teach Math 1 and GPS
Algebra 1. I also run an After-School Math Tutorial program where I tutor students who are in Math 1-3.
13 Subjects: including geometry, chemistry, physics, calculus
...In this position, I work as a part of the Student Support Team, teaching phonics, math, and reading to struggling students. As a 1st grade interventionist, the majority of my day is used to
help Kindergarten and 1st grade students who are struggling with phonics. It is such a difficult subject to grasp for beginning readers and they need all the support they can get!
14 Subjects: including geometry, reading, writing, algebra 1
...I am certified and have successful classroom teaching experience. I am certified and experienced with gifted students, regular students, as well as students with learning difficulties. I also have several years of experience helping students prepare for standardized tests.
23 Subjects: including geometry, reading, accounting, algebra 1
Related East Point, GA Tutors
East Point, GA Accounting Tutors
East Point, GA ACT Tutors
East Point, GA Algebra Tutors
East Point, GA Algebra 2 Tutors
East Point, GA Calculus Tutors
East Point, GA Geometry Tutors
East Point, GA Math Tutors
East Point, GA Prealgebra Tutors
East Point, GA Precalculus Tutors
East Point, GA SAT Tutors
East Point, GA SAT Math Tutors
East Point, GA Science Tutors
East Point, GA Statistics Tutors
East Point, GA Trigonometry Tutors
Nearby Cities With geometry Tutor
Atlanta geometry Tutors
Austell geometry Tutors
College Park, GA geometry Tutors
Decatur, GA geometry Tutors
Doraville, GA geometry Tutors
Forest Park, GA geometry Tutors
Hapeville, GA geometry Tutors
Lake City, GA geometry Tutors
Mableton geometry Tutors
Morrow, GA geometry Tutors
Norcross, GA geometry Tutors
Peachtree City geometry Tutors
Riverdale, GA geometry Tutors
Tucker, GA geometry Tutors
Union City, GA geometry Tutors
|
{"url":"http://www.purplemath.com/east_point_ga_geometry_tutors.php","timestamp":"2014-04-19T17:21:58Z","content_type":null,"content_length":"24113","record_id":"<urn:uuid:626937a7-98c7-4076-a65e-9bd043fa5aa3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Change of Representation for Statistical Relational Learning
Jesse Davis, Irene Ong, Jan Struyf, Elizabeth Burnside, David Page, Vítor Santos Costa
Statistical relational learning (SRL) algorithms learn statistical models from relational data, such as that stored in a relational database. We previously introduced view learning for SRL, in which
the view of a relational database can be automatically modified, yielding more accurate statistical models. The present paper presents SAYU-VISTA, an algorithm which advances beyond the initial view
learning approach in three ways. First, it learns views that introduce new relational tables, rather than merely new fields for an existing table of the database. Second, new tables or new fields are
not limited to being approximations to some target concept; instead, the new approach performs a type of predicate invention. The new approach avoids the classical problem with predicate invention,
of learning many useless predicates, by keeping only new fields or tables (i.e., new predicates) that immediately improve the performance of the statistical model. Third, retained fields or tables
can then be used in the definitions of further new fields or tables. We evaluate the new view learning approach on three relational classification tasks.
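A minimal sketch of the retention rule the abstract describes: score the statistical model with each candidate view and keep the view only if performance immediately improves. The Database and View types and the ScoreModel helper below are hypothetical placeholders, not the authors' API:

' Greedy view retention: keep a candidate predicate only if adding it
' immediately improves the statistical model's score (e.g. AUC).
' Database, View and ScoreModel are hypothetical placeholders.
Function SelectViews(ByVal db As Database, ByVal candidates As IEnumerable(Of View)) As List(Of View)
    Dim kept As New List(Of View)
    Dim bestScore As Double = ScoreModel(db, kept)

    For Each candidate As View In candidates
        kept.Add(candidate)
        Dim score As Double = ScoreModel(db, kept)

        If score > bestScore Then
            bestScore = score      ' retained views can feed later definitions
        Else
            kept.Remove(candidate) ' discard predicates that do not help
        End If
    Next

    Return kept
End Function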
URL: http://www.cs.wisc.edu/~jdavis/davis.pdf
|
{"url":"http://www.ijcai.org/papers07/Abstracts/IJCAI07-437.html","timestamp":"2014-04-20T05:42:37Z","content_type":null,"content_length":"2140","record_id":"<urn:uuid:5995596b-2fad-46e9-9e27-01bc2f880917>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topic: Logic Help
Replies: 1 Last Post: Jul 30, 1998 11:58 AM
Re: Logic Help
Posted: Jul 30, 1998 11:58 AM
Russell -
I would prove this by trying all the possible values of P and Q. There are
only four combinations (T,T; T,F; F,T; and F,F) for P,Q. Evaluate the left
and right sides of each equation for each of the four cases and see if they
are equal.
>Can anyone help me prove the following three equivalences:
>(P or Q) and not(P and Q) = (P and notQ) or (Q and notP)
>P <---> Q = (P and Q) or (notP and notQ)
>(P ---> R) and (Q ---> R) = (P or Q) ---> R
>If you could email me with the proofs, I would very much appreciate it.
>Thanx in advance.
> -Russell (Russ256@aol.com)
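As a concrete version of that truth-table check, here is a brute-force sketch; an implication P ---> Q is rewritten as (Not P) Or Q, and the biconditional P <---> Q as simple equality:

Imports System.Diagnostics

' Brute-force every combination of P, Q and R and assert each equivalence.
Module LogicCheck
    Function Implies(ByVal p As Boolean, ByVal q As Boolean) As Boolean
        Return (Not p) Or q
    End Function

    Sub Main()
        For Each p As Boolean In New Boolean() {True, False}
            For Each q As Boolean In New Boolean() {True, False}
                For Each r As Boolean In New Boolean() {True, False}
                    ' (P or Q) and not(P and Q) = (P and notQ) or (Q and notP)
                    Debug.Assert(((p Or q) And Not (p And q)) = ((p And Not q) Or (q And Not p)))
                    ' P <---> Q = (P and Q) or (notP and notQ)
                    Debug.Assert((p = q) = ((p And q) Or (Not p And Not q)))
                    ' (P ---> R) and (Q ---> R) = (P or Q) ---> R
                    Debug.Assert((Implies(p, r) And Implies(q, r)) = Implies(p Or q, r))
                Next
            Next
        Next
        Console.WriteLine("All three equivalences hold for every combination.")
    End Sub
End Module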
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=351760&messageID=1078082","timestamp":"2014-04-16T20:07:16Z","content_type":null,"content_length":"17590","record_id":"<urn:uuid:53528a41-6592-4f90-9c1c-9c9cde699e44>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Semi-parametric dynamic time series modelling with applications to detecting neural dynamics
Rigat, Fabio, 1975- and Smith, J. Q., 1953- (2007) Semi-parametric dynamic time series modelling with applications to detecting neural dynamics. Working Paper. Coventry: University of Warwick. Centre
for Research in Statistical Methodology. (Working papers).
This paper illustrates the theory and applications of a methodology for non-stationary time series modeling which combines sequential parametric Bayesian estimation with non-parametric change-point
testing. A novel Kullback-Leibler divergence between posterior distributions arising from different sets of data is proposed as a nonparametric test statistic. A closed form expression of this test
statistic is derived for exponential family models whereas Markov chain Monte Carlo simulation is used in general to approximate its value and that of its critical region. The effects of detecting a
change-point using our method are assessed analytically for the one-step ahead predictive distribution of a linear dynamic Gaussian time series model. Conditions under which our approach reduces to
fully parametric state-space modeling are illustrated. The method is applied to estimating the functional dynamics of a wide range of neural data, including multi-channel electroencephalogram
recordings, the learning performance in longitudinal behavioural experiments and in-vivo multiple spike trains. The estimated dynamics are related to the presentation of visual stimuli, to the
generation of motor responses and to variations of the functional connections between neurons across different experiments.
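For reference, the divergence underlying the test statistic is of the standard Kullback-Leibler form; between posterior densities $p$ and $q$ over a parameter $\theta$,

$$D_{\mathrm{KL}}(p \,\|\, q) = \int p(\theta)\,\log \frac{p(\theta)}{q(\theta)}\, d\theta,$$

with $p$ and $q$ here taken to be posteriors arising from different sets of data (the paper's exact variant may differ in details such as symmetrization).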
Item Type: Working or Discussion Paper (Working Paper)
Subjects: Q Science > QA Mathematics
Divisions: Faculty of Science > Statistics
Library of Congress Subject Headings (LCSH): Time-series analysis, Change-point problems, Neurons -- Mathematical models
Series Name: Working papers
Publisher: University of Warwick. Centre for Research in Statistical Methodology
Place of Publication: Coventry
Date: 2007
Volume: Vol.2007
Number: No.7
Number of Pages: 32
Status: Not Peer Reviewed
Access rights to this item: Open Access
Funder: University of Warwick. Centre for Research in Statistical Methodology
URI: http://wrap.warwick.ac.uk/id/eprint/35539
|
{"url":"http://wrap.warwick.ac.uk/35539/","timestamp":"2014-04-18T15:45:30Z","content_type":null,"content_length":"68882","record_id":"<urn:uuid:2994236e-60e5-4d09-b0ad-d60d49a10b6c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Panopticon Central
Lots of code has been flowing out of Redmond recently:
• Two Visual Basic 2005 Power Packs have been released. The Microsoft Interop Forms Toolkit 1.0 and Microsoft PrintForm Component 1.0 are both targeted at people who use COM and VB6 and should make
migration of code easier in many cases. You can get them here, and more are likely to come.
• Beta 1 of VS 2005 Service Pack 1 has been released. This service pack includes a significant amount of work on the part of the Visual Basic compiler team and addresses a lot of the major issues
(performance or otherwise) that we’ve found after release. I encourage people to try it out and report any problems you run into with it (I’ve got it installed!). You can get it on Microsoft Connect.
• The September Community Technology Preview (CTP) of Orcas has been released. Unlike the previous LINQ CTPs, this CTP is actual Orcas production code instead of a prototype implementation. As a
result, many of the features present in the last LINQ CTP aren’t in this CTP, so it’s going to be a step back in that regard. Also, because of the way the CTP schedule and the VB compiler
schedule happened to (not) sync up, there are not a lot of new VB features in this CTP. Expect much more in the coming CTPs! You can get the CTP here (and hurrah! We’re finally moving to Virtual
PC images!).
Enjoy all the tasty code!
Pseudo-Random Numbers, VB and Doing the (Mersenne) Twist…
Interesting how random things sometimes come together. I was checking out Chris William’s pre-beta rogue-like game Heroic Adventure! on CodePlex and noticed that although most of the game is written
in VB, he had included some C# code in there. The motivation was the fact that the pseudo-random number generators that VB and the .NET platform provide (i.e. VB’s Rnd function and .NET’s
System.Random) are not strong enough to satisfy the random number needs of his game. So he borrowed some C# code that implements the Mersenne Twister algorithm, which is a much better pseudo-random
number generator.
I thought it was a shame for Chris to have to sully all his beautiful VB code with some C# (obligatory <g> for the humor impaired), so I took a look at the reference implementation of the algorithm
and coded it up in VB (making sure, of course, the test output matched that of the reference implementation). For those of you with random number needs, I’ve attached it at the end. Please let me
know if you find bugs.
Interestingly, at almost exactly the same time, the question of the strength (or lack thereof) of our pseudo-random number generator came up internally. Keep in mind that VB .NET’s Rnd function was
written to remain completely compatible with VB 6.0′s Rnd function, which was compatible with VB 5.0′s Rnd function, etc., etc., all the way back to VB 1.0′s Rnd function. So that pseudo-random
number generator is pretty freaking old — 15+ years and counting! We’re discussing what we might do about the situation, but since some people may depend on setting a particular seed and then getting
a predictable sequence…
One caveat to keep in mind is that none of these pseudo-random number generators are strong enough to be used for cryptographic uses. For that, try the classes in the System.Security.Cryptography namespace.
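For example, a minimal sketch:

' Four cryptographically strong random bytes, turned into a UInteger.
Dim crypto As New System.Security.Cryptography.RNGCryptoServiceProvider()
Dim bytes(3) As Byte
crypto.GetBytes(bytes)
Dim strongValue As UInteger = BitConverter.ToUInt32(bytes, 0)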
' An implementation of the Mersenne Twister algorithm (MT19937), developed
' with reference to the C code written by Takuji Nishimura and Makoto Matsumoto
' (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html).
' This code is free to use for any purpose.
Option Strict On

''' A random number generator with a uniform distribution using the Mersenne
''' Twister algorithm.
Public Class MersenneTwister
    Private Const N As Integer = 624
    Private Const M As Integer = 397
    Private Const MATRIX_A As UInteger = &H9908B0DFUI
    Private Const UPPER_MASK As UInteger = &H80000000UI
    Private Const LOWER_MASK As UInteger = &H7FFFFFFFUI

    Private mt(N - 1) As UInteger
    Private mti As Integer = N + 1

    ''' Create a new Mersenne Twister random number generator.
    Public Sub New()
    End Sub

    ''' Create a new Mersenne Twister random number generator with a
    ''' particular seed.
    ''' The seed for the generator.
    Public Sub New(ByVal seed As UInteger)
        mt(0) = seed
        For mti = 1 To N - 1
            mt(mti) = CUInt((1812433253UL * (mt(mti - 1) Xor (mt(mti - 1) >> 30)) + CUInt(mti)) And &HFFFFFFFFUL)
        Next
    End Sub

    ''' Create a new Mersenne Twister random number generator with a
    ''' particular initial key.
    ''' The initial key.
    Public Sub New(ByVal initialKey() As UInteger)
        Me.New(19650218UI) ' Seed the state before mixing in the key.
        Dim i, j, k As Integer
        i = 1 : j = 0
        k = CInt(IIf(N > initialKey.Length, N, initialKey.Length))
        For k = k To 1 Step -1
            mt(i) = CUInt(((mt(i) Xor ((mt(i - 1) Xor (mt(i - 1) >> 30)) * 1664525UL)) + initialKey(j) + CUInt(j)) And &HFFFFFFFFUL)
            i += 1 : j += 1
            If i >= N Then mt(0) = mt(N - 1) : i = 1
            If j >= initialKey.Length Then j = 0
        Next
        For k = N - 1 To 1 Step -1
            mt(i) = CUInt(((mt(i) Xor ((mt(i - 1) Xor (mt(i - 1) >> 30)) * 1566083941UL)) - CUInt(i)) And &HFFFFFFFFUL)
            i += 1
            If i >= N Then mt(0) = mt(N - 1) : i = 1
        Next
        mt(0) = &H80000000UI
    End Sub

    ''' Generates a random number between 0 and System.UInt32.MaxValue.
    Public Function GenerateUInt32() As UInteger
        Dim y As UInteger
        Static mag01() As UInteger = {&H0UI, MATRIX_A}

        ' Regenerate the whole state block of N words at a time.
        If mti >= N Then
            Dim kk As Integer
            Debug.Assert(mti <> N + 1, "Failed initialization")
            For kk = 0 To N - M - 1
                y = (mt(kk) And UPPER_MASK) Or (mt(kk + 1) And LOWER_MASK)
                mt(kk) = mt(kk + M) Xor (y >> 1) Xor mag01(CInt(y And &H1))
            Next
            For kk = kk To N - 2
                y = (mt(kk) And UPPER_MASK) Or (mt(kk + 1) And LOWER_MASK)
                mt(kk) = mt(kk + (M - N)) Xor (y >> 1) Xor mag01(CInt(y And &H1))
            Next
            y = (mt(N - 1) And UPPER_MASK) Or (mt(0) And LOWER_MASK)
            mt(N - 1) = mt(M - 1) Xor (y >> 1) Xor mag01(CInt(y And &H1))
            mti = 0
        End If

        y = mt(mti)
        mti += 1

        ' Tempering
        y = y Xor (y >> 11)
        y = y Xor ((y << 7) And &H9D2C5680UI)
        y = y Xor ((y << 15) And &HEFC60000UI)
        y = y Xor (y >> 18)
        Return y
    End Function

    ''' Generates a random integer between 0 and System.Int32.MaxValue.
    Public Function GenerateInt32() As Integer
        Return CInt(GenerateUInt32() >> 1)
    End Function

    ''' Generates a random integer between 0 and maxValue.
    ''' The maximum value. Must be greater than zero.
    Public Function GenerateInt32(ByVal maxValue As Integer) As Integer
        Return GenerateInt32(0, maxValue)
    End Function

    ''' Generates a random integer between minValue and maxValue.
    ''' The lower bound.
    ''' The upper bound.
    Public Function GenerateInt32(ByVal minValue As Integer, ByVal maxValue As Integer) As Integer
        Return CInt(Math.Floor((maxValue - minValue + 1) * GenerateDouble() + minValue))
    End Function

    ''' Generates a random floating point number between 0 and 1.
    Public Function GenerateDouble() As Double
        ' Divides by 2^32 - 1, so the result is in the inclusive range [0, 1].
        Return GenerateUInt32() * (1.0 / 4294967295.0)
    End Function
End Class
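Using the class is straightforward; for example, a small console sketch (everything here just calls the members defined above):

Module Demo
    Sub Main()
        ' Seed the generator so the sequence is reproducible.
        Dim rng As New MersenneTwister(5489UI)
        Dim bits As UInteger = rng.GenerateUInt32()  ' full 32-bit value
        Dim die As Integer = rng.GenerateInt32(1, 6) ' a die roll
        Dim x As Double = rng.GenerateDouble()       ' in [0, 1]
        Console.WriteLine("{0} {1} {2}", bits, die, x)
    End Sub
End Module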
Updated 09/22/2006: The dingo ate my XML comments! Fixed.
Updated 09/22/2006 (Later): The dingo ate my <g> as well! Fixed.
Lang .NET 2006 talk posted
If you are curious about my talk at the Lang .NET symposium that I talked about a while ago, you can now download the video of the talk here. As is usual, I can’t bear to watch the damn thing since I
think my voice sounds just awful–it’s so much nicer sounding in my head. Oh well.
Overall, I think the talk was only OK. I switched around what I was going to talk about late in the game, so I don't think it was as interesting as I was hoping it would be. It covers some of the
issues we've run into with implementing LINQ and some thoughts about future direction, but we had a snafu with the time (I thought I had much less time than I really did), so it kind of got
truncated at the end. I'm hoping to make up for some of it on my blog this fall…
Localized programming languages
Omer van Kloten's entry on Internationalization of Programming reminded me of a (possibly apocryphal) story that I was told when I started working on OLE Automation. I asked why
IDispatch::GetIDsOfNames takes an LCID and was told that once-upon-a-time, the VBA team conducted an experiment in localization with VBA in Excel (which was the first application to host VBA).
Apparently, they attempted to localize the entire language–keywords, function names, etc.–into French, and possibly other languages. This meant you could write code along the lines of what Omer
outlines in his entry, except in French instead of Dutch.
The problem was that because VBA wasn’t exactly compiled in those days, the Excel spreadsheet that you wrote your code in now depended on having localized Excel on the machine. If you sent your
spreadsheet to your colleague in New York, they couldn’t run the macros because their English Excel didn’t understand the language…
Catch me at the Capital Area .NET Users group!
As I intimated a few posts ago, I’m going to be back on the East Coast at the end of the month. Part of the reason is to visit family (my family this time), but part of it is to give a talk at the
Capital Area .NET Users group. You can find their website at http://www.caparea.net/, and here’s the blurb:
Visual Basic 9.0: Language Integrated Query (LINQ), XML integration and beyond…
Tuesday, September 26, 2006 at 7:00 PM
With its second version on the .NET Framework, Visual Basic has largely completed the process of moving from its previous home in COM to its new home in the CLR. As a full-fledged language on a
premier runtime platform, the inevitable next question is: Now what? Come hear a discussion and see demos of where Visual Basic is headed in the future. Visual Basic 9.0 will offer radical
improvements in its ability to query data in all its forms, whether as objects, XML, or relational data. Also, working with data interchange becomes significantly easier as XML becomes integrated
directly into the language. And, finally, we’ll take a look back at Visual Basic’s dynamic language and scripting roots to see what lessons from the past might be brought into future versions and
look ahead at where the language might be headed beyond 9.0.
The talk will mostly cover LINQ and will be a more developed version of the talk I gave at the PDC (i.e. it will include all the work and thinking that we’ve done since last November). It’ll also
have some teaser stuff at the end that covers a bit of what we talked about at Lang .NET (which I’ll also discuss here once I’ve dug out a bit more).
If you're in the VA/DC/MD area, I hope to see you there!
Two wild and crazy guys…
Normally, I don’t promote videos and stuff that I haven’t actually watched, but since I’m still buried under vacation emails, and I know that it’s just got to be good stuff, I recommend checking out:
Erik Meijer: Democratizing the Cloud
Brian Beckman: Monads, Monoids, and Mort
Erik and Brian are just two crazy guys with lots of crazy ideas who’ve been a lot of fun to interact with over the past couple of years… Without Erik, there’d be no XML support coming in VB and he’s
added a lot to our LINQ discussions. And, well, I haven’t quite figured Brian out yet, but I’m working on it! <g>
How I Spent My Summer Vacation
Well, I'm at the tail end of my East Coast summer vacation, which has included a trip to the beach (Oak Island, NC) and a trip to see the in-laws (Richmond, VA). Highlights have included:
• Getting my 9 year old niece and 11 year old nephew hooked (nay, obsessed) with the board games Settlers of Catan and Ticket to Ride. I mean, I love those games, but man… I think they would play
Settlers 24/7 if they could. Frightening.
• Taking one last ride on the Hurricane at the Myrtle Beach Pavilion. This was the fourth summer we’d taken the kids to the Pavilion, and we’re sad that it’ll be the last.
• Having lunch on our way back to VA at Parker’s Barbeque and Chicken in Wilson, NC. It’s always a good sign when the parking lot is full on a weekday for lunch. I don’t think the barbeque or
Brunswick stew was as good as the ones at Bullocks in Durham, but since I’m going to be visiting there in another month, I wasn’t too broken up. It was, however, nice to be in a real down-home NC
eatery. Felt like home.
• Catching the tail-end of tropical-whatever Ernesto. Hopefully, my nephew’s laser tag birthday won’t be flooded out down in Shockoe Bottom.
• We ended up at Parker’s because the place we really wanted to try, the Beefmaster Inn, wasn’t open for lunch. I guess we’ll have to make the trek out one day when we’re visiting Durham…
Back at work soon…
Physics Forums - View Single Post - Poisson Bracket for 1 space dimension field
Suppose you have a collection of fields $\phi^i(t,x)$ depending on time and on 1 space variable, for $i=1,\dots,N$. Its dynamics is defined by the Lagrangian

$$L=\tfrac{1}{2}\, g_{ij}(\phi)\, (\dot{\phi}^i \dot{\phi}^j - \phi'^i \phi'^j) + b_{ij}(\phi)\, \dot{\phi}^i \phi'^j$$

where $\dot{\phi}^i$ denotes the time derivative of the field $\phi^i$ and $\phi'^i$ denotes its space derivative, and where $g_{ij}(\phi)$ is a symmetric tensor and $b_{ij}(\phi)$ an antisymmetric tensor.

One easily computes that the momenta conjugate to the fields $\phi^i(t,x)$ are $\pi_i = A_i + b_{ij}\,\phi'^j$, where $A_i = g_{ij}\,\dot{\phi}^j$.

Now I would like to show that the (equal time) Poisson bracket $\{A_i,A_j\}$ is

$$\{A_i(t,x),A_j(t,y)\}=(\partial_i b_{jk} + \partial_j b_{ki} + \partial_k b_{ij})\, \phi'^k\, \delta(x-y)$$

using the canonical relation $\{\phi^i(t,x), \pi_j(t,y)\}=\delta^i_j\, \delta(x-y)$.

I tried to write $A_i = \pi_i - b_{ij}\,\phi'^j$, and then use $\{\phi'^i(t,x), \pi_j(t,y)\}=\delta^i_j\, \delta'(x-y)$. But then I can't get rid of the $\delta'$, and I don't get the $\partial_k b_{ij}$ term.
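(Presumably the relevant step is the distributional identity, writing $\delta'(x-y)=\partial_x\delta(x-y)$ and taking $f = b_{ij}(\phi)$ so that $f'(x) = \partial_k b_{ij}\,\phi'^k(x)$:

$$f(y)\,\delta'(x-y) = f(x)\,\delta'(x-y) + f'(x)\,\delta(x-y),$$

whose second term would give exactly a $\partial_k b_{ij}\,\phi'^k\,\delta(x-y)$ contribution, but I don't see where it applies.)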
Am I mistaken somewhere? Thank you in advance!
Approximation algorithms for scheduling unrelated parallel machines
January 1990, Volume 46, Issue 1-3, pp 259-271
We consider the following scheduling problem. There are m parallel machines and n independent jobs. Each job is to be assigned to one of the machines. The processing of job j on machine i requires time p_ij. The objective is to find a schedule that minimizes the makespan.

Our main result is a polynomial algorithm which constructs a schedule that is guaranteed to be no longer than twice the optimum. We also present a polynomial approximation scheme for the case that
the number of machines is fixed. Both approximation results are corollaries of a theorem about the relationship of a class of integer programming problems and their linear programming relaxations. In
particular, we give a polynomial method to round the fractional extreme points of the linear program to integral points that nearly satisfy the constraints.
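For concreteness, the linear programming relaxation underlying this approach is usually written with assignment variables $x_{ij}$ (a sketch of the standard formulation; the paper's own notation may differ):

$$\min\; t \quad \text{s.t.} \quad \sum_{i=1}^{m} x_{ij} = 1 \;\; (j=1,\dots,n), \qquad \sum_{j=1}^{n} p_{ij}\, x_{ij} \le t \;\; (i=1,\dots,m), \qquad x_{ij} \ge 0,$$

with $x_{ij}$ additionally fixed to zero whenever $p_{ij} > t$. An extreme point of this LP assigns all but at most $m$ jobs integrally, and a matching argument sends each fractional job to a machine that already processes part of it, adding at most one job of length at most $t$ per machine, hence the factor of two.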
In contrast to our main result, we prove that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP. We finally obtain a complexity classification for all special cases
with a fixed number of processing times.
A preliminary version of this paper appeared in the Proceedings of the 28th Annual IEEE Symposium on the Foundations of Computer Science (Computer Society Press of the IEEE, Washington, D.C., 1987)
pp. 217–224.
Keywords:
□ Scheduling
□ parallel machines
□ approximation algorithm
□ worst case analysis
□ linear programming
□ integer programming
□ rounding
Author Affiliations
□ 1. Eindhoven University of Technology, Eindhoven, The Netherlands
□ 2. Centre for Mathematics and Computer Science, Amsterdam, The Netherlands
□ 3. Cornell University, Ithaca, NY, USA
Intrinsic functions in Fortran 90
There are a large number of intrinsic functions and five intrinsic subroutines in Fortran 90. I treat the numeric and mathematical routines only briefly, since they are unchanged from Fortran 77
and therefore should be well-known.
This section is based on section 13 of the ISO standard (1991), which contains a more formal treatment. We follow the arrangement of the different functions and subroutines in the standard, but
explain directly in the list. For a more detailed treatment we refer to Metcalf and Reid (1990, 1993).
When a parameter below is optional it is given in lower case characters. When an argument list contains several arguments the function can be called either with position-related arguments or with
keywords. A keyword must be used if some previous argument is not included. Keywords are normally the names that are given below.
We have not always given all the natural limitations to the variables, for example that the rank is not permitted to be negative.
The function PRESENT(A) returns .TRUE. if the argument A is in the calling list, .FALSE. in the other case. The use is illustrated in the example program in chapter 8 of the main text.
The following are available from Fortran 77: ABS, AIMAG, AINT, ANINT, CMPLX, CONJG, DBLE, DIM, DPROD, INT, MAX, MIN, MOD, NINT, REAL and SIGN.
In addition, CEILING, FLOOR and MODULO have been added to Fortran 90. Only the last one is difficult to explain, which is most easily done with the examples from ISO (1991)
MOD (8,5) gives 3 MODULO (8,5) gives 3
MOD (-8,5) gives -3 MODULO (-8,5) gives 2
MOD (8,-5) gives 3 MODULO (8,-5) gives -2
MOD (-8,-5) gives -3 MODULO (-8,-5) gives -3
The following functions from Fortran 77 can use a kind-parameter like in AINT(A, kind), namely AINT, ANINT, CMPLX, INT, NINT and REAL.
A historic fact is that the numerical functions in Fortran 66 had to have specific (different) names in different precisions, and these explicit names are still the only ones which can be used when a
function name is passed as an argument.
A complete table of all the numerical functions follow. Those names that are indicated with a star * are not permitted to be used as arguments. Some functions, like INT and IFIX have two specific
names, either can be used. On the other hand, some functions do not have any specific name. Below I use C for complex floating point values, D for floating point values in double precision, I for
integers, and R for floating point values in single precision.
Function Generic Specific Data type
name name Arg Res
Conversion INT - I I
to integer * INT R I
* IFIX R I
* IDINT D I
(of the real part) - C I
Conversion REAL * REAL I R
to real * FLOAT I R
- R R
* SNGL D R
(real part) - C R
Conversion DBLE - I D
to double - R D
precision - D D
(real part) - C D
Conversion CMPLX - I (2I) C
to complex - R (2R) C
- D (2D) C
- C C
Truncation AINT AINT R R
DINT D D
Rounding ANINT ANINT R R
DNINT D D
NINT NINT R I
IDNINT D I
Absolute ABS IABS I I
value ABS R R
DABS D D
CABS C R
Remainder MOD MOD 2I I
AMOD 2R R
DMOD 2D D
MODULO - 2I I
- 2R R
- 2D D
Floor FLOOR - I I
- R R
- D D
Ceiling CEILING - I I
- R R
- D D
Transfer SIGN ISIGN 2I I
of sign SIGN 2R R
DSIGN 2D D
Positive DIM IDIM 2I I
difference DIM 2R R
DDIM 2D D
Inner product - DPROD R D
Maximum MAX * MAX0 I I
* AMAX1 R R
* DMAX1 D D
- * AMAX0 I R
- * MAX1 R I
Minimum MIN * MIN0 I I
* AMIN1 R R
* DMIN1 D D
- * AMIN0 I R
- * MIN1 R I
Imaginary part - AIMAG C R
Conjugate - CONJG C C
Truncation is towards zero, INT(-3.7) becomes -3, but rounding is correct, NINT(-3.7) becomes -4. The new functions FLOOR and CEILING truncate towards minus and plus infinity, respectively.
The function CMPLX can have one or two arguments, if two arguments are present these must be of the same type but not COMPLEX.
The function MOD(X,Y) calculates X - INT(X/Y)*Y.
The sign transfer function SIGN(X,Y) takes the sign of the second argument and puts it on the first argument, ABS(X) if Y >= 0 and -ABS(X) if Y < 0.
Positive difference DIM is a function I have never used, but DIM(X,Y) gives X-Y if this is positive and zero in the other case.
Inner product DPROD on the other hand is a very useful function which gives the product of two numbers in single precision as a double precision number. It is both fast and accurate.
The two functions MAX and MIN are unique in that they may have an arbitrary number of arguments, but at least two. The arguments have to be of the same type, but are not permitted to be of type
COMPLEX.

The elementary mathematical functions are the same as in Fortran 77. All trigonometric functions work in radians. The following are available: ACOS, ASIN, ATAN, ATAN2, COS, COSH, EXP, LOG, LOG10, SIN, SINH, SQRT, TAN and TANH.
A historic fact is that the mathematical functions in Fortran 66 had to have specific (different) names in different precisions, and these explicit names are still the only ones which can be used
when a function name is passed as an argument.
A complete table of all the mathematical functions follow. Below I use C for complex floating point values, D for floating point values in double precision, I for integers, and R for floating point
values in single precision.
Function Generic Specific Data type
name name Arg Res
Square root SQRT SQRT R R
DSQRT D D
CSQRT C C
Exponential EXP EXP R R
DEXP D D
CEXP C C
Natural LOG ALOG R R
logarithm DLOG D D
CLOG C C
Common LOG10 ALOG10 R R
logarithm DLOG10 D D
Sine SIN SIN R R
DSIN D D
CSIN C C
Cosine COS COS R R
DCOS D D
CCOS C C
Tangent TAN TAN R R
DTAN D D
Arcsine ASIN ASIN R R
DASIN D D
Arccosine ACOS ACOS R R
DACOS D D
Arctangent ATAN ATAN R R
DATAN D D
ATAN2 ATAN2 2R R
DATAN2 2D D
Hyperbolic SINH SINH R R
sine DSINH D D
Hyperbolic COSH COSH R R
cosine DCOSH D D
Hyperbolic TANH TANH R R
tangent DTANH D D
The purpose of most of these functions is obvious. Note that they are all only defined for floating point numbers, and not for integers. You can therefore not calculate the square root of 4 as SQRT
(4), but instead you can use NINT(SQRT(REAL(4))). Please also note that all complex functions return the principal value.
The square root gives a real result for a real argument in single or double precision, and a complex result for a complex argument. So SQRT(-1.0) gives an error message (usually already at compile
time), while you can get the complex square root using the following statements.
COMPLEX, PARAMETER :: MINUS_ONE = (-1.0, 0.0)
COMPLEX :: Z
Z = SQRT(MINUS_ONE)
The argument for the usual logarithms has to be positive, while the argument for CLOG must be different from zero.
The modulus for the argument to ASIN and ACOS has to be at most 1. The result will be within [-pi/2, pi/2] and [0, pi], respectively.
The function ATAN will return a value in [-pi/2, pi/2].
The function ATAN2(Y,X) = arctan(y/x) will return a value in (-pi, pi]. If Y is positive the result will be positive. If Y is zero the result will be zero if X is positive, and pi if X is negative.
If Y is negative the result will be negative. If X is zero the result will be plus or minus pi/2. Both X and Y are not permitted to be zero simultaneously. The purpose of the function is to avoid
division by zero.
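A small check of the quadrant handling follows (the values in the comments are the mathematical results, rounded to four decimals):

PROGRAM ATAN2_DEMO
   WRITE(*,*) ATAN2( 1.0, 1.0)   ! first quadrant:   0.7854 (pi/4)
   WRITE(*,*) ATAN2( 1.0,-1.0)   ! second quadrant:  2.3562 (3*pi/4)
   WRITE(*,*) ATAN2(-1.0,-1.0)   ! third quadrant:  -2.3562
   WRITE(*,*) ATAN2(-1.0, 1.0)   ! fourth quadrant: -0.7854
   WRITE(*,*) ATAN2( 0.0,-1.0)   ! pi, since Y = 0 and X < 0
END PROGRAM ATAN2_DEMO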
A natural limitation for the mathematical functions is the limited accuracy and range, which means that for example EXP can cause underflow or overflow at rather common values of the argument. The
trigonometric functions will get very low accuracy for large arguments. These limitations are implementation dependent, and should be given in the vendor's manual.
The functions below perform operations from and to character strings. Please note that ACHAR works with the standard ASCII character set while CHAR works with the representation in the computer you
are using.
ACHAR(I) Returns the ASCII character which has number I
ADJUSTL(STRING) Adjusts to the left
ADJUSTR(STRING) Adjusts to the right
CHAR(I, kind) Returns the character that has the number I
IACHAR(C) Returns the ASCII number of the character C
ICHAR(C) Returns the number of character C
INDEX(STRING, SUBSTRING, back) Returns the starting position for a
substring within a string. If BACK is true then you get the
last starting position, in the other case, the first one.
LEN_TRIM(STRING) Returns the length of the string without the possibly
trailing blanks.
LGE(STRING_A, STRING_B)
LGT(STRING-A, STRING_B)
LLE(STRING_A, STRING_B)
LLT(STRING_A, STRING_B)
The above routines compare two strings using sorting according to ASCII. If a string is shorter than the other, blanks are added at the end of the short string. If a string contains a character
outside the ASCII character set, the result is implementation-dependent.
REPEAT(STRING, NCOPIES) Concatenates a character string NCOPIES
times with itself.
SCAN(STRING, SET, back) Returns the position of the first occurrence
of any character in the string SET in the string
STRING. If BACK is true, you will get
the rightmost such character.
TRIM(STRING) Returns the character string STRING without
trailing blanks.
VERIFY(STRING, SET, back) Returns the position of the first character
 in STRING which is not in SET. If BACK
 is TRUE, you get the last one!
 The result is zero if all the characters
 in STRING are in SET.
LEN(STRING) returns the length of a character string. There does not have to be assigned a value to the variable STRING.
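A small example exercising a few of these functions follows; the results in the comments follow directly from the definitions above.

PROGRAM CHAR_DEMO
   CHARACTER(LEN=12) :: S = 'Fortran 90  '
   WRITE(*,*) LEN(S)          ! 12
   WRITE(*,*) LEN_TRIM(S)     ! 10 (the trailing blanks are ignored)
   WRITE(*,*) INDEX(S, 'ran') ! 5
   WRITE(*,*) TRIM(S) // '!'  ! Fortran 90!
   WRITE(*,*) REPEAT('ab', 3) ! ababab
END PROGRAM CHAR_DEMO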
KIND(X)
SELECTED_INT_KIND(R)
SELECTED_REAL_KIND(P, R)
The first returns the kind of the actual argument, which can be of the type INTEGER, REAL, COMPLEX, LOGICAL or CHARACTER. The argument X does not have to be assigned any value. The second returns an
integer kind with the requested number of digits, and the third returns the kind for floating-point numbers with numerical precision of at least P digits and a decimal exponent range between -R and
+R. The parameters P and R must be scalar integers. At least one of P and R must be given.
The result of SELECTED_INT_KIND is an integer from zero and upward, if the desired kind is not available you will get -1. If several implemented types satisfy the condition, the one with the least
decimal range is used. If there still are several types or kinds that satisfy the condition, the one with the smallest kind number will be used.
The result of SELECTED_REAL_KIND is also an integer from zero and upward; if the desired kind is not available, then -1 is returned if the precision is not available, -2 if the exponent range is not
available and -3 if no one of the requirements are available. If several implemented types satisfy the condition, the one with the least decimal precision is returned, and if there are several of
them, the one with the least kind number is returned.
Examples are given in chapter 2 of the main text. Examples of kinds in a few different implementations (NAG and Cray) are given in Appendix 6.
LOGICAL(L, kind) converts between different kinds of logical variables. Logical variables can be implemented in various ways, for example with a physical representation occupying one bit (not
recommended), one byte, one word or perhaps even one double word. This difference is important if COMMON and EQUIVALENCE with logical variables have been misused in a program in the traditional way
of Fortran 66 programming.
8. Numerical inquiry functions:
These functions work with a certain model of integer and floating-point arithmetics, see ISO (1991), section 13.7.1. The functions return properties of numbers of the same kind as the variable X,
which can be real and in some cases integer. Functions that return properties of the actual argument X are available in section 12 below, floating-point manipulation functions.
DIGITS(X) The number of significant digits
EPSILON(X) The least positive number that added
to 1 returns a number that is greater than 1
HUGE(X) The largest positive number
MAXEXPONENT(X) The largest exponent
MINEXPONENT(X) The smallest exponent
PRECISION(X) The decimal precision
RADIX(X) The base in the model
RANGE(X) The decimal exponent range
TINY(X) The smallest positive number
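A simple program that prints these properties for the default real kind follows (the values printed are implementation dependent):

PROGRAM MODEL_DEMO
   WRITE(*,*) 'digits    ', DIGITS(1.0)
   WRITE(*,*) 'epsilon   ', EPSILON(1.0)
   WRITE(*,*) 'huge      ', HUGE(1.0)
   WRITE(*,*) 'tiny      ', TINY(1.0)
   WRITE(*,*) 'precision ', PRECISION(1.0)
   WRITE(*,*) 'range     ', RANGE(1.0)
END PROGRAM MODEL_DEMO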
BIT_SIZE(I) returns the number of bits according to the model of bit representation in the standard ISO (1991), section 13.5.7. Normally we get the number of bits in a (whole) word.
The model for bit representation in the standard ISO (1991), section 13.5.7 is used.
BTEST(I, POS) .TRUE. if the position number POS of I is 1
IAND(I, J) performs bitwise AND of the bits in
 variables I and J
IBCLR(I, POS) puts a zero in the bit in position POS
IBITS(I, POS, LEN) uses LEN bits of the word I with
beginning in position POS, the additional bits
are set to zero. It requires that
POS + LEN <= BIT_SIZE(I)
IBSET(I, POS) puts the bit in position POS to 1
IEOR(I, J) performs logical exclusive OR
IOR(I, J) performs logical OR
ISHFT(I, SHIFT) performs logical shift (to the right if the number
of steps SHIFT < 0 and to the left if SHIFT > 0).
Positions that are vacated are set to zero.
ISHFTC(I, SHIFT, size) performs logical shift a number of steps
circularly to the right if SHIFT < 0,
circularly to the left if SHIFT > 0. If SIZE
is given, it is required that 0 < SIZE <=
BIT_SIZE(I). Shift is only done for the bits
that are in the SIZE rightmost positions, but
for all positions if SIZE is not given.
NOT(I) returns a logical complement
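A small example with default integers follows; the results in the comments are for the given inputs.

PROGRAM BIT_DEMO
   INTEGER :: I = 12         ! binary 1100
   WRITE(*,*) BTEST(I, 2)    ! T (bit number 2 is one)
   WRITE(*,*) IAND(I, 10)    ! 8  (1100 AND 1010)
   WRITE(*,*) IOR(I, 10)     ! 14 (1100 OR 1010)
   WRITE(*,*) IEOR(I, 10)    ! 6  (1100 exclusive OR 1010)
   WRITE(*,*) IBSET(I, 0)    ! 13
   WRITE(*,*) ISHFT(I, -2)   ! 3  (shifted two positions to the right)
   WRITE(*,*) IBITS(I, 2, 2) ! 3  (two bits starting at position 2)
END PROGRAM BIT_DEMO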
TRANSFER(SOURCE, MOULD, size) specifies that the physical representation of the first argument SOURCE shall be treated as if it had type and parameters as the second argument MOULD, but without
converting it. The purpose is to give a possibility to move a quantity of a certain type via a routine that does not have exactly that data type.
12. Floating-point manipulation functions:
These functions work in a certain model of integer and floating-point arithmetic, see the standard ISO(1991), section 13.7.1. The functions return numbers related to the actual variable X of the type
REAL. Functions that return properties for the numbers of the same kind as the variable X are under section 8 (Numerical inquiry functions).
EXPONENT(X) exponent of the number
FRACTION(X) the fractional part of the number
NEAREST(X, S) returns the next representable number in
the direction of the sign of S
RRSPACING(X) returns the inverted value of the distance
between the two nearest possible numbers
SCALE(X, I) multiplies X by the base to the power I
SET_EXPONENT(X, I) returns the number that has the fractional
part of X and the exponent I
SPACING(X) the distance between the two nearest
possible numbers
13. Vector and matrix multiplication functions:
DOT_PRODUCT(VECTOR_A, VECTOR_B) makes a scalar product of two vectors, which must have the same length (same number of elements).
Please note that if VECTOR_A is of type COMPLEX the result is SUM(CONJG(VECTOR_A)*VECTOR_B).
MATMUL(MATRIX_A, MATRIX_B) makes the matrix product of two matrices, which must be consistent, i.e. have the dimensions like (M, K) and (K, N). Used in chapter 11 of the main text.
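A minimal example follows; the shapes obey the consistency rule above.

PROGRAM MATMUL_DEMO
   REAL :: A(2,3), B(3,2), C(2,2)
   REAL :: U(3) = (/ 1.0, 2.0, 3.0 /)
   A = 1.0
   B = 2.0
   C = MATMUL(A, B)             ! every element of C becomes 6.0
   WRITE(*,*) C
   WRITE(*,*) DOT_PRODUCT(U, U) ! 14.0
END PROGRAM MATMUL_DEMO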
14. Array functions:
ALL(MASK, dim) returns a logical value that indicates whether all relations in MASK are .TRUE., along only the desired dimension if the second argument is given.
ANY(MASK, dim) returns a logical value that indicates whether any relation in MASK is .TRUE., along only the desired dimension if the second argument is given.
COUNT(MASK, dim) returns a numerical value that is the number of relations in MASK who are .TRUE., along only the desired dimension if the second argument is given.
MAXVAL(ARRAY, dim, mask) returns the largest value in the array ARRAY, of those that obey the relation in the third argument MASK if that one is given, along only the desired dimension if the second
argument DIM is given.
MINVAL(ARRAY, dim, mask) returns the smallest value in the array ARRAY, of those that obey the relation in the third argument MASK if that one is given, along only the desired dimension if the second
argument DIM is given.
PRODUCT(ARRAY, dim, mask) returns the product of all the elements in the array ARRAY, of those that obey the relation in the third argument MASK if that one is given, along only the desired dimension
if the second argument DIM is given.
SUM (ARRAY, dim, mask) returns the sum of all the elements in the array ARRAY, of those that obey the relation in the third argument MASK if that one is given, along only the desired dimension if the
second argument DIM is given. An example is given in Appendix 3, section 10.
See also Appendix 3, section 10.
ALLOCATED(ARRAY) is a logical function which indicates if the array is allocated.
LBOUND(ARRAY, dim) is a function which returns the lower dimension limit for the ARRAY. If DIM (the dimension) is not given as an argument, you get an integer vector, if DIM is included, you get the
integer value with exactly that lower dimension limit, for which you asked.
SHAPE(SOURCE) is a function which returns the shape of an array SOURCE as an integer vector.
SIZE(ARRAY, dim) is a function which returns the number of elements in an array ARRAY, if DIM is not given, and the number of elements in the relevant dimension if DIM is included.
UBOUND(ARRAY, dim) is a function similar to LBOUND which returns the upper dimensional limits.
MERGE(TSOURCE, FSOURCE, MASK) is a function which joins two arrays. It gives the elements in TSOURCE if the condition in MASK is .TRUE. and FSOURCE if the condition in MASK is .FALSE. The two fields
TSOURCE and FSOURCE have to be of the same type and the same shape. The result is also of this type and this shape. Also MASK must have the same shape.
I here give a rather complete example of the use of MERGE which also uses RESHAPE from the next section in order to build suitable test matrices.
Note that the two subroutines WRITE_ARRAY and WRITE_L_ARRAY are test routines to write matrices which in the first case are of a REAL type, in the second case of a LOGICAL type.
PROGRAM MERGE_EXAMPLE
   IMPLICIT NONE
   INTERFACE
      SUBROUTINE WRITE_ARRAY (A)
         REAL :: A(:,:)
      END SUBROUTINE WRITE_ARRAY
      SUBROUTINE WRITE_L_ARRAY (A)
         LOGICAL :: A(:,:)
      END SUBROUTINE WRITE_L_ARRAY
   END INTERFACE
   REAL, DIMENSION(2,3)    :: TSOURCE, FSOURCE, RESULT
   LOGICAL, DIMENSION(2,3) :: MASK
   TSOURCE = RESHAPE( (/ 11, 21, 12, 22, 13, 23 /), &
                      (/ 2, 3 /) )
   FSOURCE = RESHAPE( (/ -11, -21, -12, -22, -13, -23 /), &
                      (/ 2, 3 /) )
   MASK = RESHAPE( (/ .TRUE., .FALSE., .FALSE., .TRUE., &
                      .FALSE., .FALSE. /), (/ 2, 3 /) )
   RESULT = MERGE(TSOURCE, FSOURCE, MASK)
   CALL WRITE_ARRAY(TSOURCE)
   CALL WRITE_ARRAY(FSOURCE)
   CALL WRITE_L_ARRAY(MASK)
   CALL WRITE_ARRAY(RESULT)
END PROGRAM MERGE_EXAMPLE

SUBROUTINE WRITE_ARRAY (A)
   REAL :: A(:,:)
   INTEGER :: I, J
   DO I = LBOUND(A,1), UBOUND(A,1)
      WRITE(*,*) (A(I, J), J = LBOUND(A,2), UBOUND(A,2))
   END DO
END SUBROUTINE WRITE_ARRAY

SUBROUTINE WRITE_L_ARRAY (A)
   LOGICAL :: A(:,:)
   INTEGER :: I, J
   DO I = LBOUND(A,1), UBOUND(A,1)
      WRITE(*,"(8L12)") (A(I, J), J = LBOUND(A,2), UBOUND(A,2))
   END DO
END SUBROUTINE WRITE_L_ARRAY
The following output is obtained
11.0000000 12.0000000 13.0000000
21.0000000 22.0000000 23.0000000
-11.0000000 -12.0000000 -13.0000000
-21.0000000 -22.0000000 -23.0000000
T F F
F T F
11.0000000 -12.0000000 -13.0000000
-21.0000000 22.0000000 -23.0000000
PACK(ARRAY, MASK, vector) packs an array to a vector with the control of MASK. The shape of the logical array MASK has to agree with the one for ARRAY or MASK must be a scalar. If VECTOR is included,
it has to be an array of rank 1 (i.e. a vector) with at least as many elements as those that are true in MASK and have the same type as ARRAY. If MASK is a scalar with the value .TRUE. then VECTOR
instead must have the same number of elements as ARRAY.
The result is a vector with as many elements as those in ARRAY that obey the conditions if VECTOR is not included (i.e. all elements if MASK is a scalar with value .TRUE.). In the other case the
number of elements of the result will be as many as in VECTOR. The values will be the approved ones, i.e. the values which fulfill the condition, and will be in the ordinary Fortran order. If VECTOR
is included and the number of its elements exceeds the number of approved values, the lacking values required for the result are taken from the corresponding locations in VECTOR.
The following example is based on a modification of the one for MERGE, but I now give only the results.
11.0000000 12.0000000 13.0000000
21.0000000 22.0000000 23.0000000
T F F
F T F
PACK(ARRAY, MASK) then gives the vector
 11.0000000 22.0000000
while PACK(ARRAY, MASK, VECTOR) gives a vector with the same number of elements as VECTOR, beginning with 11.0 and 22.0 and continuing with the trailing elements of VECTOR itself.
SPREAD(SOURCE, DIM, NCOPIES) returns an array of the same type as the argument SOURCE with the rank increased by one. The parameters DIM and NCOPIES are integer. If NCOPIES is negative the value zero
is used instead. If SOURCE is a scalar, then SPREAD becomes a vector with NCOPIES elements that all have the same value as SOURCE. The parameter DIM indicates which index is to be extended. It has to
be within the range 1 and 1+(rank of SOURCE), if SOURCE is a scalar then DIM has to be one. The parameter NCOPIES is the number of elements in the new dimensions. Additional discussion is given in
the solution to exercise (11.1).
UNPACK(VECTOR, MASK, ARRAY) scatters a vector to an array under control of MASK. The shape of the logical array MASK has to agree with the one for ARRAY. The array VECTOR has to have the rank 1 (i.e.
it is a vector) with at least as many elements as those that are true in MASK, and also has to have the same type as ARRAY. If ARRAY is given as a scalar then it is considered to be an array with the
same shape as MASK and the same scalar elements everywhere.
The result will be an array with the same shape as MASK and the same type as VECTOR. The values will be those from VECTOR that are accepted (i.e. those fulfilling the condition in MASK), taken in the
ordinary Fortran order, while in the remaining positions in ARRAY the old values are kept.
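A small example follows; the values are chosen only for illustration.

PROGRAM UNPACK_DEMO
   REAL    :: V(3) = (/ 1.0, 2.0, 3.0 /)
   LOGICAL :: M(2,2)
   REAL    :: R(2,2)
   M = RESHAPE( (/ .TRUE., .FALSE., .FALSE., .TRUE. /), (/ 2, 2 /) )
   R = UNPACK(V, M, 0.0)  ! R becomes  1.0  0.0
   WRITE(*,*) R           !            0.0  2.0
END PROGRAM UNPACK_DEMO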
RESHAPE(SOURCE, SHAPE, pad, order) constructs an array with a specified shape SHAPE starting from the elements in a given array SOURCE. If PAD is not included then the size of SOURCE has to be at
least PRODUCT(SHAPE). If PAD is included it has to have the same type as SOURCE. If ORDER is included, it has to be an INTEGER array with the same shape as SHAPE and the values must be a permutation
of (1,2,3,...,N), where N is the number of elements in SHAPE; N has to be less than or equal to 7.
The result has of course a shape SHAPE and the elements are those in SOURCE, possibly complemented with PAD. The different dimensions have been permuted at the assignment of the elements if ORDER was
included, but without influencing the shape of the result.
A few simple examples are given in the previous and the next section and also in Appendix 3, section 9. A more complicated example, illustrating also the optional arguments, follows.
PROGRAM RESHAPE_EXAMPLE
   IMPLICIT NONE
   INTERFACE
      SUBROUTINE WRITE_MATRIX(A)
         REAL, DIMENSION(:,:) :: A
      END SUBROUTINE WRITE_MATRIX
   END INTERFACE
   REAL, DIMENSION (1:9) :: B = (/ 11, 12, 13, 14, 15, 16, 17, 18, 19 /)
   REAL, DIMENSION (1:3, 1:3) :: C, D, E
   REAL, DIMENSION (1:4, 1:4) :: F, G, H
   INTEGER, DIMENSION (1:2) :: ORDER1 = (/ 1, 2 /)
   INTEGER, DIMENSION (1:2) :: ORDER2 = (/ 2, 1 /)
   REAL, DIMENSION (1:16) :: PAD1 = (/ -1, -2, -3, -4, -5, -6, -7, -8, &
                                       -9, -10, -11, -12, -13, -14, -15, -16 /)
   C = RESHAPE( B, (/ 3, 3 /) )
   CALL WRITE_MATRIX(C)
   D = RESHAPE( B, (/ 3, 3 /), ORDER = ORDER1)
   CALL WRITE_MATRIX(D)
   E = RESHAPE( B, (/ 3, 3 /), ORDER = ORDER2)
   CALL WRITE_MATRIX(E)
   F = RESHAPE( B, (/ 4, 4 /), PAD = PAD1)
   CALL WRITE_MATRIX(F)
   G = RESHAPE( B, (/ 4, 4 /), PAD = PAD1, ORDER = ORDER1)
   CALL WRITE_MATRIX(G)
   H = RESHAPE( B, (/ 4, 4 /), PAD = PAD1, ORDER = ORDER2)
   CALL WRITE_MATRIX(H)
END PROGRAM RESHAPE_EXAMPLE

SUBROUTINE WRITE_MATRIX(A)
   REAL, DIMENSION(:,:) :: A
   INTEGER :: I, J
   DO I = LBOUND(A,1), UBOUND(A,1)
      WRITE(*,*) (A(I,J), J = LBOUND(A,2), UBOUND(A,2))
   END DO
END SUBROUTINE WRITE_MATRIX
The output from the above program is as follows.
11.0000000 14.0000000 17.0000000
12.0000000 15.0000000 18.0000000
13.0000000 16.0000000 19.0000000
11.0000000 14.0000000 17.0000000
12.0000000 15.0000000 18.0000000
13.0000000 16.0000000 19.0000000
11.0000000 12.0000000 13.0000000
14.0000000 15.0000000 16.0000000
17.0000000 18.0000000 19.0000000
11.0000000 15.0000000 19.0000000 -4.0000000
12.0000000 16.0000000 -1.0000000 -5.0000000
13.0000000 17.0000000 -2.0000000 -6.0000000
14.0000000 18.0000000 -3.0000000 -7.0000000
11.0000000 15.0000000 19.0000000 -4.0000000
12.0000000 16.0000000 -1.0000000 -5.0000000
13.0000000 17.0000000 -2.0000000 -6.0000000
14.0000000 18.0000000 -3.0000000 -7.0000000
11.0000000 12.0000000 13.0000000 14.0000000
15.0000000 16.0000000 17.0000000 18.0000000
19.0000000 -1.0000000 -2.0000000 -3.0000000
-4.0000000 -5.0000000 -6.0000000 -7.0000000
The shift functions return the shape of an array unchanged, but move the elements. They are rather difficult to explain, so I recommend also studying the standard ISO (1991).
CSHIFT(ARRAY, SHIFT, dim) performs circular shift by SHIFT positions to the left if SHIFT is positive and to the right if it is negative. If ARRAY is a vector the shift is being done in a natural
way, if it is an array of a higher rank then the shift is in all sections along the dimension DIM. If DIM is missing it is considered to be 1, in other cases it has to be a scalar integer number
between 1 and n (where n equals the rank of ARRAY ). The argument SHIFT is a scalar integer or an integer array of rank n-1 and the same shape as the ARRAY, except along the dimension DIM (which is
removed because of the lower rank). Different sections can therefore be shifted in various directions and with various numbers of positions.
EOSHIFT(ARRAY, SHIFT, boundary, dim) performs shift to the left if SHIFT is positive and to the right if it is negative. Instead of the elements shifted out new elements are taken from BOUNDARY. If
ARRAY is a vector the shift is being done in a natural way, if it is an array of a higher rank, the shift on all sections is along the dimension DIM. If DIM is missing, it is considered to be 1, in
other cases it has to have a scalar integer value between 1 and n (where n equals the rank of ARRAY). The argument SHIFT is a scalar integer if ARRAY has rank 1, in the other case it can be a scalar
integer or an integer array of rank n-1 and with the same shape as the array ARRAY except along the dimension DIM (which is removed because of the lower rank).
The corresponding applies to BOUNDARY which has to have the same type as the ARRAY. If the parameter BOUNDARY is missing you have the choice of values zero, .FALSE. or blank being used, depending on
the data type. Different sections can thus be shifted in various directions and with various numbers of positions. A simple example of the above two functions for the vector case follows, both the
program and the output.
REAL, DIMENSION(1:6) :: A = (/ 11.0, 12.0, 13.0, 14.0, &
15.0, 16.0 /)
REAL, DIMENSION(1:6) :: X, Y
WRITE(*,10) A
X = CSHIFT ( A, SHIFT = 2)
WRITE(*,10) X
Y = CSHIFT (A, SHIFT = -2)
WRITE(*,10) Y
X = EOSHIFT ( A, SHIFT = 2)
WRITE(*,10) X
Y = EOSHIFT ( A, SHIFT = -2)
WRITE(*,10) Y
10 FORMAT(1X,6F6.1)
END
11.0 12.0 13.0 14.0 15.0 16.0
13.0 14.0 15.0 16.0 11.0 12.0
15.0 16.0 11.0 12.0 13.0 14.0
13.0 14.0 15.0 16.0 0.0 0.0
0.0 0.0 11.0 12.0 13.0 14.0
A simple example of the above two functions in the matrix case follows. I have here used RESHAPE in order to create a suitable matrix to start work with. The program is not reproduced here, only the
main statements.
B = (/ 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0 /)
11.0 12.0 13.0 Z = RESHAPE( B, (/3,3/) )
14.0 15.0 16.0
17.0 18.0 19.0
17.0 18.0 19.0 X = CSHIFT (Z, SHIFT = 2)
11.0 12.0 13.0
14.0 15.0 16.0
13.0 11.0 12.0 X = CSHIFT ( Z, SHIFT = 2, DIM = 2)
16.0 14.0 15.0
19.0 17.0 18.0
14.0 15.0 16.0 X = CSHIFT (Z, SHIFT = -2)
17.0 18.0 19.0
11.0 12.0 13.0
17.0 18.0 19.0 X = EOSHIFT ( Z, SHIFT = 2)
0.0 0.0 0.0
0.0 0.0 0.0
13.0 0.0 0.0 X = EOSHIFT ( Z, SHIFT = 2, DIM = 2)
16.0 0.0 0.0
19.0 0.0 0.0
0.0 0.0 0.0 X = EOSHIFT ( Z, SHIFT = -2)
0.0 0.0 0.0
11.0 12.0 13.0
TRANSPOSE (MATRIX) transposes a matrix, which is an array of rank 2. It replaces the rows and columns in the matrix.
MAXLOC(ARRAY, mask) returns the position of the greatest element in the array ARRAY; if MASK is included, only elements which fulfill the condition in MASK are considered. The result is an integer
vector! It is used in the solution of exercise (11.1).
MINLOC(ARRAY, mask) returns the position of the smallest element in the array ARRAY; if MASK is included, only elements which fulfill the condition in MASK are considered. The result is an integer vector!
ASSOCIATED(POINTER, target) is a logical function that indicates if the pointer POINTER is associated with some target, and if a specific TARGET is included it indicates if it is associated with
exactly that target. If both POINTER and TARGET are pointers, the result is .TRUE. only if both are associated with the same target. I refer the reader to chapter 12 of the main text, Pointers.
• DATE_AND_TIME(date, time, zone, values)
A subroutine which returns the date, the time and the time zone. At least one argument has to be given.
DATE must be a scalar character string variable with at least 8 characters and it is assigned the value CCYYMMDD for century, year, month and day. All are given numerically, with blanks if the
system does not include the date.
TIME must also be a scalar character string variable with at least 10 characters and it is assigned a value hhmmss.sss for time in hours, minutes, seconds and milliseconds. All are given
numerically with blanks if the system does not include a clock.
ZONE must be a scalar character string variable with at least 5 characters and it is assigned the value +hhmm for sign, time in hours and minutes for the local time difference with UTC (which was
previously called Greenwich Mean Time). All are given numerically, with blanks if the system does not include a clock. In Sweden we therefore get +0100 in winter and +0200 in summer; in
Novosibirsk we get +0700.
The variable VALUES is instead an integer vector with at least 8 elements, it gives the easiest way of using the results from DATE_AND_TIME at the calculations in a program. If the system does
not include the date or the time you get the value -HUGE(0), that is the smallest integer number in the model, as output. The vector will include the following elements: year, month, day, time
difference in minutes, hours, minutes, seconds and milliseconds.
• SYSTEM_CLOCK(COUNT, COUNT_RATE, COUNT_MAX)
Subroutine which returns the system time. At least one argument has to be given. COUNT is a scalar integer which is increased by one for each cycle up to COUNT_MAX , where it starts once again.
If there is no system clock then -HUGE(0) is returned.
COUNT_RATE is a scalar integer that gives the number of cycles per second. If there is no system clock the value zero is returned.
COUNT_MAX is a scalar integer which gives the maximum value that COUNT can reach. If there is no system clock, zero is returned instead.
• MVBITS(FROM, FROMPOS, LEN, TO, TOPOS)
A subroutine which copies the sequence of bits of length LEN that starts at position FROMPOS in FROM to the target TO, starting at position TOPOS. The remaining bits are not changed. All
quantities have to be integers and all except TO have to have INTENT(IN), while TO is supposed to have INTENT(INOUT) and be of the same kind type as FROM. The same variable can be both FROM and
TO. Some natural restrictions apply to the values of LEN, FROMPOS and TOPOS, and you also have to consider the value of BIT_SIZE.
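A tiny example with default integers follows.

PROGRAM MVBITS_DEMO
   INTEGER :: FROM = 13   ! binary 1101
   INTEGER :: TO   = 6    ! binary 0110
   CALL MVBITS(FROM, 2, 2, TO, 0)
   WRITE(*,*) TO          ! 7: bits 2-3 of FROM (11) replace bits 0-1 of TO
END PROGRAM MVBITS_DEMO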
• A sequence of pseudo random numbers can be generated from a starting value which is stored as an integer vector. The subroutines offer a portable interface towards an implementation dependent
random number sequence.
RANDOM_NUMBER(HARVEST)
This subroutine returns in the floating-point variable HARVEST one (or several, if HARVEST is an array) random numbers between zero and 1.
RANDOM_SEED(size, put, get)
This subroutine resets, or gives information about, the random number generator. No arguments have to be provided. The output variable SIZE must be a scalar integer and gives the number of
integers (N) the processor uses for the starting value. The input variable PUT is an integer vector which puts the starting numbers provided by the user into the random number generator. The
output variable GET (also an integer vector) reads the present starting value. Example:
CALL RANDOM_SEED ! Initializing
CALL RANDOM_SEED (SIZE=K) ! Sets K = N
CALL RANDOM_SEED (PUT = SEED (1:K)) ! Uses the starting value
! given by the user
CALL RANDOM_SEED (GET = OLD(1:K)) ! Returns the present
! starting value
A simple example on the use of these functions is now available.
Last modified: August 10, 2009
Trilinos/AztecOO: Object-Oriented Aztec Linear Solver Package.
AztecOO provides an object-oriented interface to the well-known Aztec solver library. Furthermore, it allows flexible construction of matrix and vector arguments via Epetra matrix and vector
classes. Finally, AztecOO provides additional functionality not found in Aztec, and any future enhancements to the Aztec package will be available only through the AztecOO interfaces. AztecOO contains
a number of classes. They are:
• AztecOO - Primary solver class. An AztecOO object is instantiated using and Epetra_LinearProblem object. The solver options and parameters can be set using SetAztecOption() and SetAztecParam()
methods on an AztecOO object.
• Aztec2Petra() - Utility function to convert from Aztec data structures to Epetra objects. This function can be useful when migrating from Aztec to AztecOO. It is used internally by the
AZOO_iterate function.
• AZOO_iterate() - Utility function that mimics the behavior and calling sequence of AZ_iterate, the primary solver call in Aztec. AZOO_iterate converts Aztec matrix, vectors, options and params
into Epetra and AztecOO objects. It then calls AztecOO to solve the problem. For current Aztec users, this function should provide identical functionality to AZ_iterate, except for the extra
memory used by having Epetra versions of the matrix and vectors.
Please note that AZOO_iterate is meant to provide a smooth transition for current Aztec users to AztecOO. We do not advocate this function as a permanent solution for switching from Aztec to
AztecOO.
AztecOO supports a "matrix-free" mechanism via the pure virtual class Epetra_RowMatrix. This class is part of Epetra and is implemented by the Epetra_CrsMatrix and Epetra_VbrMatrix classes. It is
possible to implement Epetra_RowMatrix using other matrix classes. AztecOO can then use this alternate implementation to provide the matrix multiply capabilities, and to obtain row value information
from the matrix for constructing preconditioners. For details of Epetra, see the Epetra home page.
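For orientation, a minimal usage sketch in C++ follows (it assumes the matrix and vectors have already been constructed as Epetra objects; the option constants are the standard Aztec ones and should be checked against the installed headers):

#include "AztecOO.h"
#include "Epetra_LinearProblem.h"
#include "Epetra_CrsMatrix.h"
#include "Epetra_Vector.h"

// Solve A x = b with GMRES and a simple Jacobi preconditioner.
void SolveWithAztecOO(Epetra_CrsMatrix &A, Epetra_Vector &x, Epetra_Vector &b)
{
  Epetra_LinearProblem problem(&A, &x, &b);
  AztecOO solver(problem);

  solver.SetAztecOption(AZ_solver, AZ_gmres);   // Krylov method
  solver.SetAztecOption(AZ_precond, AZ_Jacobi); // preconditioner
  solver.SetAztecParam(AZ_tol, 1.0e-8);         // convergence tolerance

  solver.Iterate(500, 1.0e-8); // at most 500 iterations
}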
Stat of the Week Competition: October 8-14
October 10, 2011
Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.
Here’s how it works:
• Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday October 14 2011.
• Statistics can be bad, exemplary or fascinating.
• The statistic must be in the NZ media during the period of October 8-14 2011 inclusive.
• Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.
Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.
The fine print:
• Judging will be conducted by the blog moderator in liaison with staff at the Department of Statistics, The University of Auckland.
• The judges’ decision will be final.
• The judges can decide not to award a prize if they do not believe a suitable statistic has been posted in the preceding week.
• Only the first nomination of any individual example of a statistic used in the NZ media will qualify for the competition.
• Employees (other than student employees) of the Statistics department at the University of Auckland are not eligible to win.
• The person posting the winning entry will receive a $20 iTunes voucher.
• The blog moderator will contact the winner via their notified email address and advise the details of the $20 iTunes voucher to that same email address.
• The competition will commence Monday 8 August 2011 and continue until cancellation is notified on the blog.
• Statistic: Man drought confirmed in New Zealand
Source: Sunday Star Times
Date: 9 October 2011
The use of statistics in this article that “confirms” that there is a man drought is bizarre.
It says that there are roughly 50 000 each of men and women in the age range 25-39 who are single. But if you restrict men to just those earning more than $60 000 then there are only 24 000 of
them for the 60 000 women. Therefore there is a man drought!
Using the same logic I could say that of the 60 000 women there are only 10 000 who are blonde and so there really is a woman drought.
• Statistic: It’s not often that the quiet world of mathematics is rocked by a murder case. But last summer saw a trial that sent academics into a tailspin, and has since swollen into a fevered
clash between science and the law.
At its heart, this is a story about chance. And it begins with a convicted killer, “T”, who took his case to the court of appeal in 2010. Among the evidence against him was a shoeprint from a
pair of Nike trainers, which seemed to match a pair found at his home. While appeals often unmask shaky evidence, this was different. This time, a mathematical formula was thrown out of court.
The footwear expert made what the judge believed were poor calculations about the likelihood of the match, compounded by a bad explanation of how he reached his opinion. The conviction was
quashed.
shown hostility to using formulae. But the real worry, say forensic experts, is that the ruling could lead to miscarriages of justice.
“The impact will be quite shattering,” says Professor Norman Fenton, a mathematician at Queen Mary, University of London. In the last four years he has been an expert witness in six cases,
including the 2007 trial of Levi Bellfield for the murders of Marsha McDonnell and Amelie Delagrange. He claims that the decision in the shoeprint case threatens to damage trials now coming to
court because experts like him can no longer use the maths they need.
Specifically, he means a statistical tool called Bayes’ theorem. Invented by an 18th-century English mathematician, Thomas Bayes, this calculates the odds of one event happening given the odds of
other related events. Some mathematicians refer to it simply as logical thinking, because Bayesian reasoning is something we do naturally. If a husband tells his wife he didn’t eat the leftover
cake in the fridge, but she spots chocolate on his face, her estimate of his guilt goes up. But when lots of factors are involved, a Bayesian calculation is a more precise way for forensic
scientists to measure the shift in guilt or innocence.
In the shoeprint murder case, for example, it meant figuring out the chance that the print at the crime scene came from the same pair of Nike trainers as those found at the suspect’s house, given
how common those kinds of shoes are, the size of the shoe, how the sole had been worn down and any damage to it. Between 1996 and 2006, for example, Nike distributed 786,000 pairs of trainers.
This might suggest a match doesn’t mean very much. But if you take into account that there are 1,200 different sole patterns of Nike trainers and around 42 million pairs of sports shoes sold
every year, a matching pair becomes more significant.
The data needed to run these kinds of calculations, though, isn’t always available. And this is where the expert in this case came under fire. The judge complained that he couldn’t say exactly
how many of one particular type of Nike trainer there are in the country. National sales figures for sports shoes are just rough estimates.
And so he decided that Bayes’ theorem shouldn’t again be used unless the underlying statistics are “firm”. The decision could affect drug traces and fibre-matching from clothes, as well as
footwear evidence, although not DNA.
“We hope the court of appeal will reconsider this ruling,” says Colin Aitken, professor of forensic statistics at the University of Edinburgh, and the chairman of the Royal Statistical Society’s
working group on statistics and the law. It’s usual, he explains, for forensic experts to use Bayes’ theorem even when data is limited, by making assumptions and then drawing up reasonable
estimates of what the numbers might be. Being unable to do this, he says, could risk miscarriages of justice.
“From being quite precise and being able to quantify your uncertainty, you’ve got to give a completely bland statement as an expert, which says ‘maybe’ or ‘maybe not’. No numbers,” explains Fenton.
“It’s potentially very damaging,” agrees University College London psychologist, Dr David Lagnado. Research has shown that people frequently make mistakes when crunching probabilities in their
heads. “We like a good story to explain the evidence and this makes us use statistics inappropriately,” he says. When Sally Clark was convicted in 1999 of smothering her two children, jurors and
judges bought into the claim that the odds of siblings dying by cot death was too unlikely for her to be innocent. In fact, it was statistically more rare for a mother to kill both her children.
Clark was finally freed in 2003.
Lawyers call this type of mistake the prosecutor’s fallacy, when people confuse the odds associated with a piece of evidence with the odds of guilt. Recognising this is also what eventually
quashed the 1991 conviction for rape of Andrew Deen in Manchester. The courts realised at appeal that a one-in-three-million chance of a random DNA match for a semen stain from the crime scene
did not mean there was only a one-in-three-million chance that anyone other than Deen could have been a match – those odds actually depend on the pool of potential suspects. In a population of 20
million adult men, for example, there could be as many as six other matches.
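A few lines of Python make the fallacy concrete (a sketch using the figures quoted above; the uniform-pool assumption, that each matching man is equally likely to be the source, is mine):

random_match_prob = 1 / 3_000_000   # P(DNA match | innocent person)
population = 20_000_000             # pool of adult men, as above

expected_innocent_matches = population * random_match_prob
print(expected_innocent_matches)    # ~6.7 other matching men

# With no other evidence, the suspect is just one of roughly
# (expected_innocent_matches + 1) matching individuals:
print(1 / (expected_innocent_matches + 1))   # ~0.13 -- far from 0.9999997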
Now, Fenton and his colleague Amber Marks, a barrister and lecturer in evidence at Queen Mary, University of London, have begun assembling a group of statisticians, forensic scientists and
lawyers to research a solution to bad statistics. “We want to do what people failed to do in the past, which is really get the legal profession and statisticians and probability guys
understanding each other’s language,” says Fenton.
Their first job is to find out how often trials depend on Bayesian calculations, and the impact that the shoeprint-murder ruling might have on future trials. “This could affect thousands of
cases,” says Marks.
They have 37 members on their list so far, including John Wagstaff, legal adviser to the Criminal Cases Review Commission, and David Spiegelhalter, the Winton professor of the public
understanding of risk at the University of Cambridge. Added to these are senior statisticians and legal scholars from the Netherlands, US and New Zealand.
Fenton believes that the potential for mathematics to improve the justice system is huge. “You could argue that virtually every case with circumstantial evidence is ripe for being improved by
Bayesian arguments,” he says.
But the real dilemma is finding a way to help people make sense of the calculations. The Royal Statistical Society already offers guidance for forensic scientists, to stop them making mistakes.
Lagnado says that flowcharts in the style of family trees also help jurors visualise changing odds more clearly. But neither approach has been entirely successful. And until this complex bit of
maths can be simply explained, chances are judges will keep rejecting it.
Source: http://www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage
Date: Sunday 2 October 2011
Bayes’ theorem is a mathematical equation used in court cases to analyse statistical evidence. But a judge has ruled it can no longer be used. Will it result in more miscarriages of justice?
• Statistic: Smoking costs the Australian health system $32 billion
Source: TVNZ
Date: 12 October
This is a horrible mutant stat.
Here’s TVNZ’s quote: “Australia’s Cancer Council said the Senate should end the political delays and get on with passing the legislation, with authorities estimating smoking now kills 15,000
Australians each year and costs the health system $32 billion.”
The $32 billion figure comes from Collins & Lapsley’s report on the social costs of alcohol, tobacco, and other drugs.
Most importantly, the $32 billion figure counts a host of tangible and intangible costs that fall on the smoker, those around the smoker, and the public health system. Only $312 million of the $32 billion, according to the report, counts as a net health cost. Just look at the first table, at page xii of the Executive Summary.
I get really really annoyed at how these big numbers, which mostly consist of costs borne by the smoker or drinker himself, get twisted by activists like the Cancer Council to build support for
policies that further beat on smokers and drinkers. There can be a case for anti-smoking policy. But it oughtn’t be based on lies. Smokers pay more in tax than they cost the health system in any
country that has a reasonably large tobacco tax and a reasonably large public pension system.
• Statistic: A few months ago I came upon an old episode of Radiolab, one of my favorite podcasts, whose host Jad Abumrad just won a MacArthur Fellowship. The episode was about numbers. It made me
nostalgic for my youthful enthrallment with the pristine world of mathematics, before I succumbed to the gritty reality of the financial world. Among the episode’s astounding revelations was that
babies count on a logarithmic scale.
A second earth-shattering fact is that there are more numbers in the universe that begin with the digit 1 than 2, or 3, or 4, or 5, or 6, or 7, or 8, or 9. And more numbers that begin with 2 than
3, or 4, and so on. This relationship holds for the lengths of rivers, the populations of cities, molecular weights of chemicals, and any number of other categories. What a blow to any of us who
purport to have mastered the basic facts of the world around us!
This numerical regularity is known as Benford’s Law; specifically, it says that the probability that the first digit of a number drawn from such a set is d is given by P(d) = log10(1 + 1/d).
In fact, Benford’s law has been used in legal cases to detect corporate fraud, because deviations from the law can indicate that a company’s books have been manipulated. Naturally, I was keen to
see whether it applies to the large public firms that we commonly study in finance.
I downloaded quarterly accounting data for all firms in Compustat, the most widely-used dataset in corporate finance that contains data on over 20,000 firms from SEC filings. I used a standard
set of 43 variables that comprise the basic components of corporate balance sheets and income statements (revenues, expenses, assets, liabilities, etc.).
And lo, it works! Here is the distribution of first digits vs. Benford’s law’s prediction for total assets and total revenues.
Next, I looked at how adherence to Benford’s law changed over time, using the sum of squared deviations of the empirical first-digit distribution from Benford’s prediction, D = Σ_{d=1}^{9} (P̂(d) − P(d))², where P̂(d) is the empirical probability of the first digit d.
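The measure is straightforward to compute. Here is a generic Python sketch (not the author's Compustat pipeline; "values" stands for any sequence of reported accounting numbers):

import math
from collections import Counter

def benford_p(d):
    """Benford's predicted probability that the leading digit is d."""
    return math.log10(1 + 1 / d)

def first_digit(x):
    """Leading significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])   # scientific notation puts it first

def benford_deviation(values):
    """Sum of squared deviations of empirical first-digit frequencies
    from Benford's prediction, as defined above."""
    digits = [first_digit(v) for v in values if v]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) / n - benford_p(d)) ** 2
               for d in range(1, 10))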
Deviations from Benford’s law have increased substantially over time, such that today the empirical distribution of each digit is about 3 percentage points off from what Benford’s law would predict. The deviation increased sharply between 1982 and 1986 before leveling off, then zoomed up again from 1998 to 2002. Notably, the deviation from Benford dropped off very slightly in 2003-2004, after the enactment of the Sarbanes-Oxley accounting reform act in 2002, but the effect was tiny and the deviation resumed its increase up to an all-time peak in 2009.
So according to Benford’s law, accounting statements are getting less and less representative of what’s really going on inside of companies. The major reform passed after Enron and other major accounting scandals barely made a dent.
Next, I looked at Benford’s law for three industries: finance, information technology, and manufacturing. The finance industry showed a huge surge in the deviation from Benford’s from 1981-82,
coincident with two major deregulatory acts that sparked the beginnings of that other big mortgage debacle, the Savings and Loan Crisis. The deviation from Benford’s in the finance industry
reached a peak in 1988 and then decreased starting in 1993 at the tail end of the S&L fraud wave, not matching its 1988 level until … 2008.
The time series for information technology is similarly tied to that industry’s big debacle, the dotcom bubble. Neither manufacturing nor IT showed the huge increase and decline of the deviation
from Benford’s that finance experienced in the 1980s and early 1990s, further validating the measure since neither industry experienced major fraud scandals during that period. The deviation for
IT streaked up between 1998-2002 exactly during the dotcom bubble, and manufacturing experienced a more muted increase during the same period.
While these time series don’t prove anything decisively, deviations from Benford’s law are compellingly correlated with known financial crises, bubbles, and fraud waves. And overall, the picture
looks grim. Accounting data seem to be less and less related to the natural data-generating process that governs everything from rivers to molecules to cities. Since these data form the basis of
most of our research in finance, Benford’s law casts serious doubt on the reliability of our results. And it’s just one more reason for investors to beware.
As noted by William Black in his great book on the S&L crisis The Best Way to Rob a Bank Is to Own One, the most fraudulent S&Ls were the ones that looked most profitable on paper. That was in
fact an inherent part of the scam. So perhaps, instead of looking solely at profitability, we should also consider this more fundamental measure of a firm’s “performance.” And many questions
remain. What types of firms, and what kind of executives drive the greatest deviations from Benford’s law? Does this measure do well in predicting known instances of fraud? How much of these
deviations are driven by government deregulation, changes in accounting standards, and traditional measures of corporate governance?
Source: http://econerdfood.blogspot.com/
Date: Sunday, October 9, 2011
Benford’s Law and the Decreasing Reliability of Accounting Data for US Firms
• Jennifer Bramwell
Source: http://econerdfood.blogspot.com/2011/10/benfords-law-and-decreasing-reliability.html
Date: 09 Oct, 2011
Benford’s Law and the Decreasing Reliability of Accounting Data for US Firms
• Sammie Jia
Statistic: All Blacks v Australia – What does history say?
The NZ Herald uses a dynamic pie chart to display the record of the ABs vs Australia.
Source: NZ Herald
Date: Friday Oct 14, 2011
Personally, I think game results are memoryless: the next result does not necessarily depend on previous historical results.
|
{"url":"http://www.statschat.org.nz/2011/10/10/stat-of-the-week-competition-october-8-14/","timestamp":"2014-04-17T21:50:19Z","content_type":null,"content_length":"49429","record_id":"<urn:uuid:5e41929c-1e94-47bc-b048-c2d87ce85447>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IACR News
In the last few years, the need to design new cryptographic hash functions has led to intense study of when desired hash multi-properties are preserved or assured under compositions and domain extensions. In this area, it is important to identify the exact notions and to provide the often complex proofs of the resulting properties. Getting this analysis right (as part of provable-security studies) is, in fact, analogous to cryptanalysis. We note that it is important and quite subtle to get the "right" notions and properties, and the "right" proofs, in this relatively young area. Specifically, the security notion we deal with is "adaptive preimage resistance" (apr), which was introduced by Lee and Park as an extension of "preimage resistance" (pr). At Eurocrypt 2010, in turn, Lee and Steinberger used the apr security notion to prove "preimage awareness" and "indifferentiable security" of their new double-piped mode of operation. They claimed that if $H^P$ is collision-resistant (cr) and apr, then $F(M)=\mathcal{R}(H^P(M))$ is indifferentiable from a variable output length (VIL) random oracle $\mathcal{F}$, where $H^P$ is a function based on an ideal primitive $P$ and $\mathcal{R}$ is a fixed input length (FIL) random oracle. However, there are some limitations in their claim, because they considered the indifferentiability security notion only in the information-theoretic adversarial model, not in the computation-theoretic adversarial model. As we show in the current work, the above statement is not correct in the computation-theoretic adversarial model. First, we give a counterexample to the above. Secondly, we describe a new requirement on $H^P$ (called "admissibility") so that the above statement is correct even in the computation-theoretic adversarial model. Thirdly, we show that apr is, in fact, not a strengthened notion of preimage resistance. Fourthly, we explain the relation between preimage awareness and cr+apr+(our new requirement) in the computation-theoretic adversarial model. Finally, we show that the polynomial-based mode of operation of [LeSt10] satisfies our new requirement; namely, the polynomial-based mode of operation with fixed-input-length random oracles is indifferentiable from a variable-input-length random oracle in the computation-theoretic adversarial model.
|
{"url":"https://www.iacr.org/news/index.php?p=detail&id=1259","timestamp":"2014-04-19T22:08:21Z","content_type":null,"content_length":"16834","record_id":"<urn:uuid:dd0d9f2b-d3bb-4ac9-b644-30c5536e8eb1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
minimize the integration value
January 6th 2010, 10:00 AM #1
minimize the integration value
(1) Find real numbers $a,b,c$ that minimize the following integral:
$\int_0^\infty \left|x^3-a-bx-cx^2\right|^2e^{-x}dx$
(2) Find real numbers $a,b,c$ that minimize the following integral:
$\int_0^\infty \left|x^3-a-bx-cx^2\right|e^{-x}dx$ .
The first problem is easier than the second one. If somebody posts an elaborate correct solution for the second one, I will really appreciate it!
For the first problem, we can treat it as a projection:
find the distance from a point to a given subspace in a Hilbert space.
We can do it by using the Projection Theorem in Hilbert space.
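For what it's worth, a quick sketch of (1) along these lines (the moments do all the work; this is just the projection written as normal equations): with $\langle f,g\rangle = \int_0^\infty f(x)g(x)e^{-x}\,dx$ and $\int_0^\infty x^n e^{-x}\,dx = n!$, requiring $x^3-(a+bx+cx^2)\perp\operatorname{span}\{1,x,x^2\}$ gives
$a+b+2c=6,\qquad a+2b+6c=24,\qquad 2a+6b+24c=120,$
with solution $a=6,\ b=-18,\ c=9$. (Equivalently, expand $x^3$ in the Laguerre polynomials, which are orthonormal for this weight, and drop the degree-3 component.)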
For the second problem, since $L^1\left[0,\infty\right]$ is not a Hilbert space, we can't apply the theorem.
Is there any other way to solve it?
Any argument on $a,b,c$?
January 16th 2010, 03:00 AM #2
|
{"url":"http://mathhelpforum.com/differential-geometry/122644-minimize-integration-value.html","timestamp":"2014-04-17T10:31:00Z","content_type":null,"content_length":"32723","record_id":"<urn:uuid:88ceee12-888f-44f5-8abd-607a78f0e62f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there a set of criteria to determine whether a number is transcendental for a subset of the reals with positive Lebesgue measure?
I am reading A. van der Poorten's 1978 paper on Apery's constant, and it cited the Thue-Siegel-Roth Theorem (that if $\beta$ is algebraic, then for all $\epsilon > 0$ the inequality $|\beta - p/q| \leq 1/q^{2+\epsilon}$ has only finitely many solutions) as a way to test whether a given number is transcendental, but that this method is not very satisfactory since only a set of measure zero of transcendental numbers can be detected to be transcendental in this way.
So my question is, since 1978 has anyone devised a method that can (at least in theory) test whether a set of numbers with positive measure is transcendental or not?
diophantine-approximation nt.number-theory
What does it mean to "test" an arbitrary set of real numbers of positive measure? Perhaps you would be satisfied with an "explicit" description of a single set of transcendentals of positive measure? If so, what counts as "explicit"? – SJR Feb 21 '11 at 5:09
As a simple example, we can start with Liouville's constant, $\displaystyle \sum_{n=1}^\infty \frac{1}{10^{n!}}$, which can easily be proved to be transcendental, and at the same time conclude that $\displaystyle \sum_{n=1}^\infty \frac{a_n}{10^{n!}}$ for any sequence $(a_n)$ such that $a_n = 0,1$ for all $n$ is also transcendental. Thus, we can conclude that an uncountable set of real numbers is transcendental, but this set will have measure 0. – Stanley Yao Xiao Feb 21 '11 at 5:09
3 Answers
How about checking if the number is computable?
It doesn't seem to me that the question is well posed, because you haven't given a precise notion of "test" and how a real number is given to you. My belief is that most explicit reals
appearing in number theory are computable and thus inside a set of measure zero. See this previous answer by Joel David Hamkins for some of the hierarchies one can consider this way.
Notice the similarity that computability is also a property defined in terms of an approximation-by-rationals property, but even though it gives a way to recognize way more transcendental numbers, it is unlikely to have direct applications in number theory for the reason mentioned above. The results from Diophantine approximations you mention are, on the other hand, more valuable since they have applications ranging from solutions to Diophantine equations to transcendence of constants arising in various places (still within the computable world). See also
this answer by Greg Kuperberg to a previous question, which is close to this point of view.
I'm sorry if this long comment isn't very enlightening, but maybe you can edit your question a bit to express what you would look for in a helpful answer.
The question is indeed not well posed. Here is a nice example from [J.M. Borwein and P.B. Borwein, On the generating function of the integer part: $[n\alpha +\gamma]$, J. Number Theory 43
(1993), no. 3, 293--318], namely, part (b) of Theorem 0.4 there.
Consider the function $F(\alpha):=\sum_{n=1}^\infty\lfloor n\alpha\rfloor/2^n$ and the set of irrational numbers $A\subset\mathbb R$ that have unbounded partial quotients in their continued fraction expansions. (For example, $$ e=[2;1,2,1,1,4,1,1,6,1,1,8,1,\dots]=2+\frac1{1+\dfrac1{2+\dfrac1{\ddots}}} $$ belongs to $A$.) The set $A$ has full measure in $\mathbb R$ as its
complement is of measure 0. The values $F(\alpha)$ at $\alpha\in A$ are all transcendental Liouville numbers. (Note the image set has... measure zero.)
A related variant of test for a given $\beta\in\mathbb R$ would be: if there exists $\alpha\in A$ such $\beta=F(\alpha)$, then $\beta$ is transcendental.
Is this a transcendence method for the full-measure set or a zero-measure set? $\ddot\smile$
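For concreteness, a small Python sketch of the map $F$ via partial sums (this only illustrates the function, not the transcendence argument; 60 terms is plenty for double precision):

from math import floor, e

def F(alpha, terms=60):
    """Partial sum of F(alpha) = sum_{n>=1} floor(n*alpha) / 2^n."""
    return sum(floor(n * alpha) / 2 ** n for n in range(1, terms + 1))

print(F(e))   # F(e) is a transcendental Liouville number, per Theorem 0.4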
Controlling the measure doesn't help much, since the algebraic numbers have measure zero. You'd need some kind of closure property to make the question nontrivial.
Are you referring to the availability of silly examples, like taking the positive-measure set to be the set of all transcendental numbers and taking the "test" to be "say yes" (which
works fine on that set)? I agree that any meaningful version of the question would have to exclude this example, as well as variations of it where the silliness is more hidden. –
Andreas Blass Apr 29 '12 at 23:43
|
{"url":"http://mathoverflow.net/questions/56137/is-there-a-set-of-criteria-to-determine-whether-a-number-is-transcendental-for-a","timestamp":"2014-04-18T14:08:09Z","content_type":null,"content_length":"62675","record_id":"<urn:uuid:33688c7f-9265-42c9-b8b9-8751e16db28b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Test match physics
A cricket ball at rest
By Margaret Harris
Late yesterday afternoon, I was pottering around with the BBC’s Test Match Special on in the background when something in the cricket commentary caught my attention. In-between the usual chatter
about English bowling (good), Indian batting (bad) and the latest cakes delivered to the TMS commentary box (excellent), the conversation suddenly turned to physics – specifically, to the question of
whether a ball could gain speed after nicking the edge of a bat.
The matter was raised after an Indian batsman, V V S Laxman, edged a delivery from Jimmy Anderson, an English bowler. The ball spurted off towards England’s captain, Andrew Strauss, who couldn’t
quite catch it. After lamenting the missed opportunity, one of the TMS commentators suggested that Strauss might have mistimed his catch because the ball gained speed after glancing off Laxman’s bat.
The commentators then spent the next several minutes talking a load of old rubbish about whether this was physically possible.
Then, shortly after 6 p.m., a secondary-school physics student, Laurence Copson, sent a message to the BBC’s online commentary team claiming that no, it wasn’t possible. “Removing all external forces
on the ball, under no circumstance would the ball gain speed after a nick…as [the] bat would be slightly hitting the ball in the opposite direction,” he wrote. However, he did add a caveat: “What may
be deceiving is if the batsmen swipes, catches an edge and then the ball gains top-spin and seems to reach the ground quicker than usual.”
This analysis was quickly contradicted by Rob, a university astrophysics student, who pointed out that Copson was neglecting both the elastic coefficients of ball and bat, and (more importantly) “the
spin on the ball before it hits the bat which, if very fine, may accelerate the ball…in the direction of spin (like a car with its wheels spinning hitting the ground goes forward)”.
This seemed fair enough, but Rob’s parting shot – ”this is the real world, external forces on the ball can’t be discounted!” – struck me as rather snide, so I decided to do some analysis of my own.
I began by making three assumptions. First, I assumed that Laxman’s bat was completely “dead” when he blocked Anderson’s delivery – in other words, the ball just bounced off the bat, without Laxman
introducing additional kinetic energy. I also assumed, initially, that the collision was completely elastic, and that it transferred all of the ball’s “spin” kinetic energy into “linear” kinetic
energy. These last two assumptions represent the best-case scenario for converting the ball’s angular momentum into linear momentum – and thereby absolving Strauss of blame.
With the moment of inertia I of a solid ball being (2/5)mr², and the rotational kinetic energy (1/2)Iω² (where ω is the angular velocity in radians per second), I get an “inbound” total kinetic energy of (1/2)mv₁² + (1/5)mr²ω². This must equal the “outbound” kinetic energy (1/2)mv₂², since we’re deliberately ignoring the inelastic nature of the collision and assuming the ball isn’t spinning at all afterwards.
Not being a cricketer myself – I’m American, so I have an excuse – I then hit the Internet to find out the average radius of a cricket ball (3.6 cm), the usual (linear) velocity of balls bowled with
spin (around 80 km/hr, or 22 m/s), and their typical rotation rate. There was not much consensus on the latter value, but I took 50 rotations per second (~314 rad/s) as a generous estimate, this
being somewhere between the values for golf balls (up to 100 rotations per second) and rugby balls (10ish).
Thus equipped, I solved for the outbound velocity and got 23 m/s. That’s faster than the incoming ball, but not by much – the difference amounts to less than 5%. Throw in the loss of kinetic energy
due to heat, sound and deforming the ball and bat, plus the fact that the ball was probably still spinning a bit after it edged Laxman’s bat and headed towards Strauss, and we’re looking at a pretty
negligible change. To make matters worse, Anderson isn’t a spin bowler; he’s a paceman, meaning he relies on speed rather than spin-induced unpredictability to trip batsmen up. So that figure of 50
rotations/second is way, way too generous.
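For anyone who wants to check the arithmetic, here is the same best-case calculation as a short Python sketch, using the estimates already quoted (dead bat, perfectly elastic collision, all spin energy converted; the mass cancels on both sides):

import math

r = 0.036                  # ball radius, m
v_in = 22.0                # inbound speed, m/s (~80 km/h)
omega = 50 * 2 * math.pi   # a generous 50 rotations/s, in rad/s (~314)

# (1/2) v_in^2 + (1/5) r^2 omega^2 = (1/2) v_out^2
v_out = math.sqrt(v_in ** 2 + 0.4 * (r * omega) ** 2)
gain = 100 * (v_out - v_in) / v_in
print(f"outbound: {v_out:.1f} m/s ({gain:.1f}% faster)")   # ~23.1 m/s, ~5%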
In summary, my view is that:
1. Strauss can’t blame his missed catch on physics
2. Copson the school student was wrong on the details but right about the ball not gaining much speed
3. Rob the astrophysics student shouldn’t be so quick to jeer about the “real world”
But if you have other ideas on this particular cricket problem, send us an e-mail to pwld@iop.org. I’d be especially interested in any analysis that does not assume that Laxman was passively blocking
the ball, since this is purely a simplifying assumption, not one made to illustrate a best-case scenario.
|
{"url":"http://blog.physicsworld.com/2011/08/22/test-match-physics/","timestamp":"2014-04-19T12:10:27Z","content_type":null,"content_length":"37379","record_id":"<urn:uuid:180247bf-72d5-4b6e-abaa-48c318599f7f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The BBQ BRETHREN FORUMS. - View Single Post - Custom Smoker Problem
Originally Posted by
Here is how my math works, I hope area still = pie-r-squared
diameter 3/4 = 0.75 in
radius = 0.375 in
0.375 x pie = 1.178 sq in
4 of them = 4.71 sq in
diameter = 2 in
radius = 1 x pie = 3.14 blah blah blah
1 exhaust = 3.14 sq in
diameter = 3 in
radius = 1.5 in
1.5 x pie = 4.71 sq in
Have I screwed up somewhere?
I would get my exhaust AT LEAST in balance with intake. Actually, I'm a pretty strong beleiver in a wide open exhaust and control at the intake so I would be tempted to oversize the exhaust and go to
4 in. I dont know squat about UDS cookers, but I see a bunch on here with only 2 or 3 3/4 inch intakes so I'm assuming 4 might be enough.
Oh yeah, in my house, pie-r-round, cobblers-r-square (maybe rectangular).
PS: stopped by my local muffler shop and picked up a 3 ft chunk of 4 inch for $20 for another build we are starting.
Dave, I could be wrong but I thought the formula for area of a circle was radius squared x pie. So in my case I'd have:
.75/2= .375 radius
.375 x .375 = .1406 radius squared
.1406 x 3.14 = .4415 square inches of one 3/4" circle
.4415 x 4 = 1.766 is my total intake.
A 3" exhaust using the above formula would give me a 7.06 sq in exhaust.
I'm going to look for an airflow calculator now to see where I should be at.
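For what it's worth, the corrected arithmetic drops into a few lines of Python (a sketch using the dimensions above):

import math

def circle_area(diameter_in):
    """Circle area in square inches from a diameter in inches."""
    r = diameter_in / 2
    return math.pi * r ** 2

intake = 4 * circle_area(0.75)   # four 3/4-inch intakes
exhaust = circle_area(3.0)       # one 3-inch exhaust stack
print(f"intake: {intake:.2f} sq in, exhaust: {exhaust:.2f} sq in")
# intake: 1.77 sq in, exhaust: 7.07 sq in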
|
{"url":"http://www.bbq-brethren.com/forum/showpost.php?p=2428326&postcount=37","timestamp":"2014-04-21T04:36:34Z","content_type":null,"content_length":"16867","record_id":"<urn:uuid:8d8a5b45-e62e-46b1-b3ec-95bfefcd9029>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Haledon Statistics Tutor
Find a Haledon Statistics Tutor
I am currently an adult Ramapo College student getting a second degree in mathematics and will be certified to teach K-12 mathematics in NJ. My professional background has been centered around
finance and information technology. Recently, I have been at home raising two children.
12 Subjects: including statistics, calculus, geometry, finance
...Microsoft Excel is an extremely powerful spreadsheet tool and may initially seem confusing, but with a little practice, can perform many diverse functions. I am confident that I will be able to
help you master this versatile application and ensure it meets all your business and personal needs. Geometry can be very esoteric, but it can also be very interesting if taught in the right way.
26 Subjects: including statistics, calculus, writing, GRE
...My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus
mostly on tips and material relevant to the test. For students whose goal is to learn particular subjects,...
15 Subjects: including statistics, chemistry, calculus, algebra 2
...I'm trained in brain-based learning and enjoy differentiating based on the learner's strengths so that the learner has a positive mindset.I have an active/up-to-date K-5 teaching certification.
I've been a public school elementary teacher (grades 2-4) for 6 years. I've done private tutoring, wa...
22 Subjects: including statistics, English, reading, algebra 1
...My high school bound students have gotten into Stuyvesant, Bronx Science, and Brooklyn Tech. And my college students have gotten into U Michigan, Case Western, UCSD, NYU, and many others. In
short, I am just who you are looking for!
42 Subjects: including statistics, chemistry, reading, biology
|
{"url":"http://www.purplemath.com/haledon_nj_statistics_tutors.php","timestamp":"2014-04-19T09:30:12Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:7add4843-235d-473e-85d3-5e1f6d6eb33f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Efficiency Improvements for Signature Schemes with Tight Security Reductions
Results 1 - 10 of 20
, 2004
"... We describe a short signature scheme which is existentially unforgeable under a chosen message attack without using random oracles. The security of our scheme depends on a new complexity
assumption we call the Strong Diffie-Hellman assumption. This assumption has similar properties to the Strong RS ..."
Cited by 265 (14 self)
We describe a short signature scheme which is existentially unforgeable under a chosen message attack without using random oracles. The security of our scheme depends on a new complexity assumption
we call the Strong Diffie-Hellman assumption. This assumption has similar properties to the Strong RSA assumption, hence the name. Strong RSA was previously used to construct signature schemes without
random oracles. However, signatures generated by our scheme are much shorter and simpler than signatures from schemes based on Strong RSA.
, 2004
"... We give an informal analysis and critique of several typical “provable security” results. In some cases there are intuitive but convincing arguments for rejecting the conclusions suggested by
the formal terminology and “proofs,” whereas in other cases the formalism seems to be consistent with common ..."
Cited by 59 (12 self)
We give an informal analysis and critique of several typical “provable security” results. In some cases there are intuitive but convincing arguments for rejecting the conclusions suggested by the
formal terminology and “proofs,” whereas in other cases the formalism seems to be consistent with common sense. We discuss the reasons why the search for mathematically convincing theoretical
evidence to support the security of public-key systems has been an important theme of researchers. But we argue that the theorem-proof paradigm of theoretical mathematics is often of limited
relevance here and frequently leads to papers that are confusing and misleading. Because our paper is aimed at the general mathematical public, it is self-contained and as jargon-free as possible.
"... We present new techniques for achieving adaptive security in broadcast encryption systems. Previous work on fully collusion resistant broadcast encryption with short ciphertexts was limited to
considering only static security. First, we present a new definition of security that we call semi-static s ..."
Cited by 20 (2 self)
We present new techniques for achieving adaptive security in broadcast encryption systems. Previous work on fully collusion resistant broadcast encryption with short ciphertexts was limited to
considering only static security. First, we present a new definition of security that we call semi-static security and show a generic “two-key ” transformation from semi-statically secure systems to
adaptively secure systems that have comparable-size ciphertexts. Using bilinear maps, we then construct broadcast encryption systems that are semi-statically secure in the standard model and have
constant-size ciphertexts. Our semi-static constructions work when the number of indices or identifiers in the system is polynomial in the security parameter. For identity-based broadcast encryption,
where the number of potential indices or identifiers may be exponential, we present the first adaptively secure system with sublinear ciphertexts. We prove security in the standard model. 1
"... Abstract. RSA-FDH and many other schemes secure in the Random-Oracle Model (ROM) require a hash function with output size larger than standard sizes. We show that the random-oracle
instantiations proposed in the literature for such cases are weaker than a random oracle, including the proposals by Be ..."
Cited by 11 (0 self)
Abstract. RSA-FDH and many other schemes secure in the Random-Oracle Model (ROM) require a hash function with output size larger than standard sizes. We show that the random-oracle instantiations
proposed in the literature for such cases are weaker than a random oracle, including the proposals by Bellare and Rogaway from 1993 and 1996, and the ones implicit in IEEE P1363 and PKCS standards:
for instance, we obtain a practical preimage attack on BR93 for 1024-bit digests (with complexity less than 2^30). Next, we study the security impact of hash function defects for ROM signatures. As
an extreme case, we note that any hash collision would suffice to disclose the master key in the ID-based cryptosystem by Boneh et al. from FOCS ’07, and the secret key in the Rabin-Williams
signature for which Bernstein proved tight security at EUROCRYPT ’08. We also remark that collisions can be found as a precomputation for any instantiation of the ROM, and this violates the security
definition of the scheme in the standard model. Hence, this gives an example of a natural scheme that is proven secure in the ROM but that is insecure for any instantiation by a single function.
Interestingly, for both of these schemes, a slight modification can prevent these attacks, while preserving the ROM security result. We give evidence that in the case of RSA and Rabin/Rabin-Williams,
an appropriate PSS padding is more robust than all other paddings known. 1
"... Abstract. This paper shows that a $390 mass-market quad-core 2.4GHz Intel Westmere (Xeon E5620) CPU can create 109000 signatures per second and verify 71000 signatures per second on an elliptic
curve at a 2^128 security level. Public keys are 32 bytes, and signatures are 64 bytes. These performance ..."
Cited by 10 (4 self)
Abstract. This paper shows that a $390 mass-market quad-core 2.4GHz Intel Westmere (Xeon E5620) CPU can create 109000 signatures per second and verify 71000 signatures per second on an elliptic curve
at a 2^128 security level. Public keys are 32 bytes, and signatures are 64 bytes. These performance figures include strong defenses against software side-channel attacks: there is no data flow from
secret keys to array indices, and there is no data flow from secret keys to branch conditions.
"... Abstract. In 2003, Boneh, Gentry, Lynn and Shacham (BGLS) devised the first provably-secure aggregate signature scheme. Their scheme uses bilinear pairings and their security proof is in the
random oracle model. The first pairing-based aggregate signature scheme which has a security proof that does ..."
Cited by 7 (3 self)
Abstract. In 2003, Boneh, Gentry, Lynn and Shacham (BGLS) devised the first provably-secure aggregate signature scheme. Their scheme uses bilinear pairings and their security proof is in the random
oracle model. The first pairing-based aggregate signature scheme which has a security proof that does not make the random oracle assumption was proposed in 2006 by Lu, Ostrovsky, Sahai, Shacham and
Waters (LOSSW). In this paper, we compare the security and efficiency of the BGLS and LOSSW schemes when asymmetric pairings derived from Barreto-Naehrig (BN) elliptic curves are employed. 1.
, 2007
"... Forward-secure signatures (FSS) prevent forgeries for past time periods when an attacker obtains full access to the signer’s storage. To simplify the integration of these primitives into
standard security architectures, Boyen, Shacham, Shen and Waters recently introduced the concept of forward-secure ..."
Cited by 5 (0 self)
Forward-secure signatures (FSS) prevent forgeries for past time periods when an attacker obtains full access to the signer’s storage. To simplify the integration of these primitives into standard
security architectures, Boyen, Shacham, Shen and Waters recently introduced the concept of forwardsecure signatures with untrusted updates where private keys are additionally protected by a second
factor (derived from a password). Key updates can be made on encrypted version of signing keys so that passwords only come into play for signing messages. The scheme put forth by Boyen et al. relies
on bilinear maps and does not require the random oracle. The latter work also suggested the integration of untrusted updates in the Bellare-Miner forward-secure signature and left open the problem of
endowing other existing FSS systems with the same second factor protection. This paper solves this problem by showing how to adapt the very efficient generic construction of Malkin, Micciancio and
Miner (MMM) to untrusted update environments. More precisely, our modified construction- which does not use random oracles either- obtains a forward-secure signature with untrusted updates from any
2-party multi-signature in the plain public key model. In combination with Bellare and Neven’s multi-signatures, our generic method yields implementations based on standard assumptions such as RSA,
factoring or the hardness of computing discrete logarithms. Like the original MMM scheme, it does not require to set a bound on the number of time periods at key generation.
, 2003
"... This paper discusses the security of the Rabin-Williams publickey signature system with a deterministic signing algorithm that computes “standard signatures.” The paper proves that any generic
attack on standard Rabin-Williams signatures can be mechanically converted into a factorization algorithm ..."
Cited by 4 (1 self)
This paper discusses the security of the Rabin-Williams publickey signature system with a deterministic signing algorithm that computes “standard signatures.” The paper proves that any generic attack
on standard Rabin-Williams signatures can be mechanically converted into a factorization algorithm with comparable speed and approximately the same effectiveness. “Comparable” and “approximately” are
explicitly quantified.
- Proceedings of Selected Areas in Cryptography (SAC’11), LNCS. 7118 , 2012
"... Abstract. We examine a natural, but non-tight, reductionist security proof for deterministic message authentication code (MAC) schemes in the multi-user setting. If security parameters for the
MAC scheme are selected without accounting for the non-tightness in the reduction, then the MAC scheme is s ..."
Cited by 4 (3 self)
Abstract. We examine a natural, but non-tight, reductionist security proof for deterministic message authentication code (MAC) schemes in the multi-user setting. If security parameters for the MAC
scheme are selected without accounting for the non-tightness in the reduction, then the MAC scheme is shown to provide a level of security that is less than desirable in the multi-user setting. We
find similar deficiencies in the security assurances provided by non-tight proofs when we analyze some protocols intheliteratureincludingonesfor networkauthentication and aggregate MACs. Our
observations call into question the practical value of non-tight reductionist security proofs. We also exhibit attacks on authenticated encryption schemes, disk encryption schemes, and stream ciphers
in the multi-user setting. 1
, 2009
"... Designated verifier signature (DVS) is a cryptographic primitive that allows a signer to convince a verifier the validity of a statement in a way that the verifier is unable to transfer the
conviction to a third party. In DVS, signatures are publicly verifiable. The validity of a signature ensures t ..."
Cited by 3 (0 self)
Designated verifier signature (DVS) is a cryptographic primitive that allows a signer to convince a verifier the validity of a statement in a way that the verifier is unable to transfer the
conviction to a third party. In DVS, signatures are publicly verifiable. The validity of a signature ensures that it is from either the signer or the verifier. Strong DVS (SDVS) enhances the privacy
of the signer so that anyone except the designated verifier cannot verify the signer’s signatures. In this paper we propose a highly efficient SDVS scheme based on pseudorandom functions, which is
proved to be secure in the standard model. Compared with the most efficient SDVS scheme secure in the random oracle model, our scheme has almost the same complexity in terms of both the computational
cost of generating a signature and signature size. A signature of our scheme is simply the output of a pseudorandom function. The security of the scheme is tightly reduced to the hardness of DDH
problem and the security of the pseudorandom function. Since our scheme is vulnerable to delegatability attacks, the study of which was initiated by Lipmaa, Wang and Bao in ICALP 2005, we then
propose another construction of SDVS, which is the first one immune to delegatability attacks. The scheme is also very efficient, and has the same
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8901847","timestamp":"2014-04-16T17:04:03Z","content_type":null,"content_length":"38784","record_id":"<urn:uuid:330db455-f8f4-4ca6-8fdc-df94f33514c1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grade 11 math - transformation of exponential functions
March 13th 2012, 11:30 AM
Grade 11 math - transformation of exponential functions
1.Transform the graph of f(x) = 3^x to sketch g(x) = 3^-(x+1) -2. Show table of values and each transformation clearly to get full marks.
2.Write two equations to represent the same exponential function with a y-intercept of 5 and an asymptote at y = 3. Investigate whether other exponential functions have the same properties. Use
the transformations to explain your observations.
I really appreciate your help! Thank you
March 13th 2012, 12:11 PM
Re: Grade 11 math - transformation of exponential functions
$3^{-(x+1)} - 2 = \frac{3^{-x}}{3} - 2$
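For 1., here is a sketch of one transformation sequence (the order matters for the horizontal steps): $3^x \to 3^{-x}$ (reflect in the $y$-axis), then $3^{-x} \to 3^{-(x+1)}$ (shift left 1 unit), then subtract 2 (shift down 2, moving the asymptote from $y=0$ to $y=-2$). Sample points for the table of values: $g(-2)=3^{1}-2=1$, $g(-1)=3^{0}-2=-1$, $g(0)=3^{-1}-2=-\frac{5}{3}$.
For 2., one pair that works is $y=2(2^x)+3$ and $y=2\left(\frac{1}{2}\right)^{-x}+3$: these are the same function, with $y$-intercept $2+3=5$ and horizontal asymptote $y=3$. Any $y=a\cdot b^x+3$ with $a=2$ (so the intercept is $2+3=5$) shares both properties, which is exactly what the vertical translation by 3 produces.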
|
{"url":"http://mathhelpforum.com/algebra/195920-grade-11-math-transformation-exponential-functions-print.html","timestamp":"2014-04-24T10:44:54Z","content_type":null,"content_length":"4308","record_id":"<urn:uuid:3c9820ca-2738-4989-8ea8-6b01984a1802>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [TowerTalk] Guy Wire Formula?
Bill MacLane wrote:
> Alain,
> I forgot to mention that you could guy 30 ft. out from the base if you want.
> Also, if you buy dacron rope in 500 ft. spools one spool will make a complete
> set of guys for one mast. You still need a halyard.
> Finally, to figure the guy length it isthe square root of the height of the
> anchor squared plus the distance from the base to the anchor squared. I
> think that is what you intended to write. However I don't know how to make
> the formula in email.
> 73,
> Bill
> AI4WM
> Alain Michel <opalockamishabob@yahoo.com> wrote: Hello All!
> I am in the process of erecting a 40'+ mast to support the feedpoint of my
> G5RV. I was initially planning 3 wires [Dacron rope] at each level. Should I
> use 4?
> Using A2+B2=C2, how far out from the bottom of the mast should the guys be
> fastened to the ground? I plan on guying the mast at the 20', 30' and 39'
> levels.
Writing that formula out a^2 + b^2 = c^2
where a = altitude or height of anchor on pole
b=base or distance from pole to anchor point
and c =guy length
Square and square root are usually written out "^2" and "sqr"
so if we take the square root (sqr) of both sides we have
sqr(a^2 + b^2) = c for the guy length.
If your algebra is a bit rusty do the stuff inside the parenthesis first.
If you want the base length "b", or distance from the pole to the anchor
Then subtract a^2 from both sides to get:
b^2 = c^2 - a^2 or b = sqr(c^2 - a^2)
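If you'd rather let the computer do it, here's a quick Python sketch (using Alan's 20/30/39 ft guy levels and an assumed 30 ft anchor distance):

import math

def guy_length(height_ft, base_ft):
    """c = sqrt(a^2 + b^2): guy length from attachment height and
    horizontal distance from mast base to anchor."""
    return math.hypot(height_ft, base_ft)

for h in (20, 30, 39):
    print(f"{h} ft level: {guy_length(h, 30):.1f} ft per guy")
# 36.1, 42.4 and 49.2 ft -- plus slack for knots and tie-off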
Links to lengths have already been listed, but for the do it yourselfers
you can also figure the lengths for given angles using trig although for
guy angles of 45, 30, and 60 degrees there are short cuts. The shortest
is throw out the 60 degrees although that might be kind of handy for a
bottom set of guys.
For 45 degrees the height and base are the same. The 30, 60, 90
triangle is a special case that looks a little more confusing so I'll
skip it unless some one wants it.
The trig is easy (if you know, or knew trig), otherwise we don't have
the time or room on the reflector as trig is a whole semester.<:-)) OTOH
you can get by with only sin, cos, and tan for working with towers and
guys IF you really feel the need to do the calculations<:-)).
Roger (K8RI)
> All suggestions will be most gratefully received!
> 73,
> Alan...KI6HPO
> 73,
> Bill
> AI4WM
|
{"url":"http://lists.contesting.com/_towertalk/2008-04/msg00006.html?contestingsid=ekd84ud9fd4i81d8hjklegjo57","timestamp":"2014-04-25T09:50:50Z","content_type":null,"content_length":"11610","record_id":"<urn:uuid:c7d00a1a-ef54-4652-be5f-f357ce5bb6b0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Year: c. 1948
Item no. 05105
Constantin Carathéodory, b/w photograph, c. 1930
B/w print, approx. 7 x 10 cm; nothing written on the reverse. Somewhat rubbed, otherwise good. Shown is the mathematics professor Konstantin Karathéodory (also Carathéodory) during a lecture, probably at the LMU in Munich.
Enclosed: the clipped newspaper death notice from 1950, as well as a clipped short newspaper biography.
On the mathematician, Wikipedia writes (as of 7/2010):
Constantin Carathéodory (or Constantine Karatheodori) (Greek: Κωνσταντίνος Καραθεοδωρή) (September 13, 1873 – February 2, 1950) was a Greek mathematician. He made significant contributions to the
theory of functions of a real variable, the calculus of variations, and measure theory. His work also includes important results in conformal representations and in the theory of boundary
correspondence. In 1909, Carathéodory pioneered the Axiomatic Formulation of Thermodynamics along a purely geometrical approach.
Constantin Carathéodory was born in Berlin to Greek parents and grew up in Brussels, where his father served as the Ottoman ambassador to Belgium. The Carathéodory family was well-established and
respected in Constantinople, and its members held many important governmental positions.
The Carathéodory family spent 1874-75 in Constantinople, where Constantin's paternal grandfather lived, while Stephanos was on leave. Then in 1875 they went to Brussels when Stephanos was appointed
there as Ottoman Ambassador. In Brussels, Constantin's younger sister Loulia was born. The year 1879 was a tragic one for the family since Constantin's paternal grandfather died in that year, but
much more tragically, Constantin's mother Despina died of pneumonia in Cannes. Constantin's maternal grandmother took on the task of bringing up Constantin and Loulia in his father's home in Belgium.
They employed a German maid who taught the children to speak German. Constantin was already bilingual in French and Greek by this time.
Constantin began his formal schooling at a private school in Vanderstock in 1881. He left after two years and then spent time with his father on a visit to Berlin, and also spent the winters of
1883-84 and 1884-85 on the Italian Riviera. Back in Brussels in 1885 he attended a grammar school for a year where he first began to become interested in mathematics. In 1886 he entered the high
school Athénée Royal d'Ixelles and studied there until his graduation in 1891. Twice during his time at this school Constantin won a prize as the best mathematics student in Belgium.
At this stage Carathéodory began training as a military engineer. He attended the École Militaire de Belgique from October 1891 to May 1895 and he also studied at the École d'Application from 1893 to
1896. In 1897 a war broke out between Turkey and Greece. This put Carathéodory in a difficult position since he sided with the Greeks, yet his father served the government of the Ottoman Empire.
Since he was a trained engineer he was offered a job in the British colonial service. This job took him to Egypt where he worked on the construction of the Assiut dam until April 1900. During periods
when construction work had to stop due to floods, he studied mathematics from some textbooks he had with him, such as Jordan's Cours d'Analyse and Salmon's text on the analytic geometry of conic
sections. He also visited the Cheops pyramid and made measurements which he wrote up and published in 1901. He also published a book on Egypt in the same year which contained a wealth of information
on the history and geography of the country.
Carathéodory studied engineering in Belgium at the Royal Military Academy, where he was considered a charismatic and brilliant student. In 1900 he entered the University of Berlin. In the years
1902-1904 he completed his graduate studies in the University of Göttingen under the supervision of Hermann Minkowski. During the years 1908-1920 he held various lecturing positions in Bonn,
Hannover, Breslau, Göttingen and Berlin.
He is credited with the theories of outer measure, and prime ends, amongst other mathematical results. He is credited with the authorship of the Carathéodory conjecture claiming that a closed convex
surface admits at least two umbilic points. As of 2007, this conjecture remained unproven despite having attracted a large amount of research.
In 1909, Carathéodory published a pioneering work "Investigations on the Foundations of Thermodynamics" (Untersuchungen ueber die Grundlagen der Thermodynamik, Math. Ann., 67 (1909) p. 355-386) in
which he formulated the Laws of Thermodynamics axiomatically, using only mechanical concepts and the theory of Pfaff's differential forms. In this way he simplified the concepts needed to sustain the whole theory; for instance, with his approach, the concept of heat is not an essential one but a derived one [1]. He expressed the Second Law of Thermodynamics via the following Axiom: "In the
neighbourhood of any initial state, there are states which cannot be approached arbitrarily close through adiabatic changes of state." Carathéodory coined the term adiabatic accessibility.[2] This
"first axiomatically rigid foundation of thermodynamics" was acclaimed by Max Planck and Max Born.
On 20 October 1919 he submitted a plan for the creation of a new University in Greece, to be named Ionian University. This university never actually admitted students due to the War in Asia Minor in
1922, but the present day University of the Aegean claims to be a continuation of Carathéodory's original plan.[3]
In 1920 Carathéodory accepted a post in the University of Smyrna, invited by Prime Minister Eleftherios Venizelos. He took a major part in establishing the institution, but his efforts ended in 1922
when the Greek population was expelled from the city during the War in Asia Minor.
Having been forced to move to Athens, Carathéodory brought along with him some of the university library, thus saving it from destruction. He stayed at Athens and taught at the university and
technical school until 1924.
In 1924 Carathéodory was appointed professor of mathematics at the University of Munich, and held this position until his death in 1950.
Carathéodory formulated the axiomatic principle of irreversibility in thermodynamics in 1909, stating that inaccessibility of states is related to the existence of entropy, where temperature is the
integration function.
In 1926 he gave a strict and general proof, that no system of lenses and mirrors can avoid aberration, except for the trivial case of plane mirrors.
Carathéodory excelled at languages, as did many members of his family. Greek and French were his first languages, and he mastered German with such perfection that his writings composed in the German language are stylistic masterworks. Carathéodory also spoke and wrote English, Italian, Turkish, and the ancient languages without any effort. Such an impressive linguistic arsenal enabled him
to communicate and exchange ideas directly with other mathematicians during his numerous travels, and greatly extend his fields of knowledge.
Much more than that, Carathéodory was a treasured conversation partner for his fellow professors in the Munich Department of Philosophy. The well-respected, German philologist, professor of ancient
languages Kurt von Fritz praised Carathéodory, saying that from him one could learn an endless amount about the old and new Greece, the old Greek language, and Hellenic mathematics. Fritz had an
uncountable number of philosophical discussions with Carathéodory. Deep in his heart, Carathéodory felt himself Greek above all. The Greek language was spoken exclusively in Carathéodory's house –
his son Stephanos and daughter Despina went to a German high school, but they obtained daily additional instruction in Greek language and culture from a Greek priest. At home, they were not allowed
to speak any other language.
On December 19, 2005, Israeli officials along with Israel's ambassador to Athens, Ram Aviram, presented the Greek foreign ministry with copies of 10 letters between Albert Einstein and Constantin
Carathéodory [Karatheodoris] that suggest that the work of Carathéodory helped shape some of Albert Einstein's theories. The letters were part of a long correspondence which lasted from 1916 to 1930.
Aviram said that according to experts at the National Archives of Israel — custodians of the original letters — the mathematical side of Einstein's physics theory was partly substantiated through the
work of Carathéodory.[4][5]
The Greek authorities had long intended to create a museum honoring Karatheodoris in Komotini, a major town of the northeastern Greek region where his family came from. On March 21, 2009 the museum "Karatheodoris" (Καραθεοδωρής) opened its gates to the public in Komotini. The coordinator of the museum, Athanasios Lipordezis (Αθανάσιος Λιπορδέζης), noted that the museum houses original manuscripts of C. Carathéodory's correspondence with the German mathematician Arthur Rosenthal on the algebraization of measure, along with other manuscripts of the mathematician amounting to about 10,000 pages. Visitors can also view in the showcases the books "Gesammelte Mathematische Schriften Band 1,3,4,5", "Gesammelte Mathematische Schriften Band 1,2,3,4", "Mass und ihre Algebraisierung", "Reelle Funktionen Band 1", "Zahlen/Punkte/Funktionen" and many more. Handwritten letters of C. Carathéodory to Albert Einstein and Hellmuth Kneser, and photographs of the Carathéodory family, can also be found there. The effort to furnish the museum with more exhibits is ongoing.
Why Would a Mathematica User Care about R?
May 22, 2013 — Jon McLoone, International Business & Strategic Development
The benefits of linking from Mathematica to other languages and tools differ from case to case. But unusually, in the case of the new RLink in Mathematica 9, I think the benefits have very little to
do with R, the language. The real benefit, I believe, is in the connection it makes to the R community.
When we first added the MathLink libraries for C, there were real benefits in farming out intensive numerical work (though Mathematica performance improvements over the years and development of the
compiler have greatly reduced the occasions where that would be worth the effort). Creating an Excel link added an alternative interface paradigm to Mathematica that wasn’t available in the
Mathematica front end. But in the case of R, it isn’t immediately obvious that it does many things that you can’t already do in Mathematica or many that it does significantly better.
However, with RLink I now have immediate access to the work of the R community through the add-on libraries that they have created to extend R into their field. A great zoo of these free libraries fills out thousands of niches–sometimes popular, sometimes obscure–but lots of them. There are over 4,000 right here and more elsewhere. At a stroke, all of them are made immediately available to the
Mathematica environment, interpreted through the R language runtime.
Let’s look at a simple example. While Mathematica supports FisherRatioTest, it doesn’t know the exact Fisher test. (This is a hypothesis test where the null hypothesis is that rows and columns in a
contingency table with fixed marginals are independent.)
Well, now it does.
Finding the right library is more work than phoning Tank, and I skipped over any error checking. But the only complicated bit was extracting the p-value from the result (the “[[1,1,1]]” part) because
RFunction returns an RObject that contains additional metadata, which, this time, I didn’t care about.
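The code in the original post was a screenshot and is not reproduced here. As a rough illustration of what that RLink call computes (my sketch, with made-up table entries, not the blog's code), the same test is available in Python via SciPy:

    from scipy.stats import fisher_exact

    # Fisher's exact test on a 2x2 contingency table with fixed marginals;
    # the null hypothesis is that rows and columns are independent.
    table = [[8, 2],
             [1, 5]]
    odds_ratio, p_value = fisher_exact(table)  # two-sided by default
    print(p_value)

In R itself the corresponding call is typically fisher.test(); the RObject metadata mentioned above is why the "[[1,1,1]]" extraction was needed on the Mathematica side.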
I can now use this just like any built-in function.
I can plot it, manipulate it, and use it with libraries from other languages in a similar way.
The future is always hard to predict. When I started here (many) years ago, general linking to FORTRAN seemed like the most important thing, but no one ever asks me about that any more–C and Java
linking are the most popular. Links to some specific libraries (BLAS/LAPACK, GMP, and others) have ended up being core infrastructure components in Mathematica. Whether RLink finds extensive use in
Mathematica's future features, or remains a more or less stand-alone added chunk of functionality within Mathematica's infrastructure, is not yet clear.
There are also issues that need to be considered. R code isn’t going to handle symbolic arguments or high-precision numbers, so, for robustness, you will want to type-check more carefully than you
might with Mathematica code. You don't always have the elegant design and quality of Mathematica. Some of it is quite raw, while some of it is excellent. But design and quality take time and resources, and so it will be quite a while before Mathematica fills every one of these niches, whereas through RLink they are available right now.
A big chunk of extra functionality just became a part of the Mathematica ecosystem.
Download this post as a Computable Document Format (CDF) file.
Difference Equation - Non Homogeneous need help
Can someone help me with this and provide a step by step response?
Suppose I have the following difference equation:
$u_{n}= -u_{n-1}+6u_{n-2} +7$ with
$u_{0} =1, u_{1} =2$
I have solved the characteristic eqn to be $\lambda = 2,-3$
But how do I go about solving the particular solution?
Many thanks in advance!
Last edited by zzizi; December 22nd 2012 at 01:54 PM.
Re: Difference Equation - Non Homogeneous need help
Hi zzizi!
Wiki explains it better than I can:
Recurrence relation - Wikipedia, the free encyclopedia
The equation in the above example was homogeneous, in that there was no constant term. If one starts with the non-homogeneous recurrence
$b_n = Ab_{n-1} + Bb_{n-2} + K$
with constant term $K$, this can be converted into homogeneous form as follows: The steady state is found by setting $b_n = b_{n-1} = b_{n-2} = b^*$ to obtain
$b^{*} = \frac{K}{1-A-B}$
Then the non-homogeneous recurrence can be rewritten in homogeneous form as
$[b_n - b^{*}]=A[b_{n-1}-b^{*}]+B[b_{n-2}-b^{*}]$
which can be solved as above.
Re: Difference Equation - Non Homogeneous need help
I would use the technique of symbolic differencing to obtain a homogeneous linear recurrence. Write the recursion at two successive indices:
$u_{n+1}= -u_{n}+6u_{n-1} +7$
$u_{n}= -u_{n-1}+6u_{n-2} +7$
Subtracting the latter from the former, we obtain:
$u_{n+1}=7u_{n-1}-6u_{n-2}$
The characteristic roots are $r=-3,\,1,\,2$ hence the closed form is:
$u_n=k_1(-3)^n+k_2+k_3\cdot 2^n$
We may use initial conditions (with $u_2=-u_1+6u_0+7=11$ from the original recursion) to determine the parameters $k_i$:
$k_1+k_2+k_3=1,\quad -3k_1+k_2+2k_3=2,\quad 9k_1+k_2+4k_3=11$
Solving this system, we find:
$k_1=\frac{7}{20},\,k_2=-\frac{7}{4},\,k_3=\frac{12}{5}$ and so we have:
$u_n=\frac{7}{20}(-3)^n-\frac{7}{4}+\frac{12}{5}\cdot 2^n$
Re: Difference Equation - Non Homogeneous need help
Thank you very much, for your reply.
So I was incorrect to think it was non-homogeneous?
I wasn't aware that it could be solved this way, so I will have to perhaps look into it further.
Re: Difference Equation - Non Homogeneous need help
Thank you very much for this solution.
How would I go about checking it? I tried to find u(3) from the original recursion and the closed form but they didn't correlate.
Re: Difference Equation - Non Homogeneous need help
From the inhomogeneous recurrence you gave:
$u_2=-u_1+6u_0+7=11,\quad u_3=-u_2+6u_1+7=-11+12+7=8$
Using the closed form I gave:
$u_3=\frac{7}{20}(-27)-\frac{7}{4}+\frac{12}{5}\cdot 8=\frac{-189-35+384}{20}=\frac{160}{20}=8$
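For anyone who wants to reproduce the check by machine, a minimal Python sketch (not from the thread; it just encodes the formulas above, using exact rational arithmetic):

    from fractions import Fraction as F

    # Closed form u_n = (7/20)(-3)^n - 7/4 + (12/5) 2^n
    def closed(n):
        return F(7, 20) * (-3)**n - F(7, 4) + F(12, 5) * 2**n

    # Recurrence u_n = -u_{n-1} + 6 u_{n-2} + 7 with u_0 = 1, u_1 = 2
    u = [F(1), F(2)]
    for n in range(2, 10):
        u.append(-u[n - 1] + 6 * u[n - 2] + 7)

    assert all(closed(n) == u[n] for n in range(10))
    print([int(x) for x in u])  # 1, 2, 11, 8, 65, -10, ...

Both computations agree, so the earlier mismatch at u(3) was presumably an arithmetic slip.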
Re: Difference Equation - Non Homogeneous need help
You were right. It is non-homogeneous.
In your case you have
If you compare that to
you'll see that you have:
$b_n = u_n$
$A = -1$
$B = 6$
$K = 7$
$b^{*} = \frac{K}{1-A-B} = \frac{7}{1-(-1)-6} = - \frac 7 4$
As a result, your solution takes the form:
$u_n = C (-3)^n + D 2^n + b^{*}$
If you fill in your boundary conditions, you get 2 equations with 2 unknowns (C and D), which can be solved with substitution.
Last edited by ILikeSerena; December 23rd 2012 at 04:47 AM.
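(Filling in the step left to the reader: with $u_0=1$ and $u_1=2$ the two equations are $C+D-\frac 7 4=1$ and $-3C+2D-\frac 7 4=2$, giving $C=\frac{7}{20}$ and $D=\frac{12}{5}$, the same $k_1$ and $k_3$ found by symbolic differencing above.)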
Re: Difference Equation - Non Homogeneous need help
Thank you very much for your explanation! Much appreciated.
Re: Difference Equation - Non Homogeneous need help
I have a query;
Is there another way, by substituting values to find the particular solution?
Re: Difference Equation - Non Homogeneous need help
What if I end up with zero for the b* value? I used this method to solve my difference equation but I got zero for the constant. If this happens would the whole solution be complete or do I try
another method?
Re: Difference Equation - Non Homogeneous need help
The only way that b* can be zero is if the constant K is zero.
In that case the difference equation is a homogeneous difference equation instead of an in-homogeneous one.
But this is not applicable to your current problem statement.....
Am I misunderstanding you?
Re: Difference Equation - Non Homogeneous need help
You are quite right. I wanted to understand the concept because I have another problem for my homework that I am working which is like this:
$u_{n}= -3u_{n-1}+4u_{n-2}+9$
When I applied the formula I got this:
$b^{*} = \frac{K}{1-A-B} = \frac{9}{1-(-3)-4} = 0$
So would this be considered a homogeneous difference equation?
Re: Difference Equation - Non Homogeneous need help
I'm afraid you miscalculated b*.
$b^* = \frac 9 0 = \infty$
Either way, it means this method does not work.
Next method in line is the symbolic differencing method MarkFL2 described.
That one works for this problem.
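(Carrying that method through, as a worked sketch not in the original thread: differencing $u_{n+1}=-3u_n+4u_{n-1}+9$ and $u_n=-3u_{n-1}+4u_{n-2}+9$ gives $u_{n+1}=-2u_n+7u_{n-1}-4u_{n-2}$, whose characteristic polynomial factors as $r^3+2r^2-7r+4=(r-1)^2(r+4)$. The repeated root $r=1$ is exactly why $b^*$ blew up: the particular solution is not a constant but grows linearly, and the closed form is $u_n=A+Bn+C(-4)^n$ with $B=\frac 9 5$ forced by the constant term 9.)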
Re: Difference Equation - Non Homogeneous need help
Thank you.
Integrate - Area Under the Curve
This is the question in my book and I can't seem to get an answer:
14 (a) For the function:
It then says
(b) With the same function (
And finally.. (however, if we can get the right answer to (b) i'm sure (c) will be done too)
(c) For the same function (
This is the question in my book and I can't seem to get an answer:
14 (a) For the function: Mr F says: Yes. And how you got this answer is probably the way you need to get the answers to the next two questions.
It then says
(b) With the same function (
Mr F says: The area of the rectangle is ${\color{red}2 \cdot 2^n = 2^{n+1}}$. Then the required area is ${\color{red}2^{n+1} - \frac{2^{n+1}}{n+1} = \, ....}$
And finally.. (however, if we can get the right answer to (b) i'm sure (c) will be done too)
(c) For the same function (
(c) is left for you to then.
Thanks, I appreciate your help
The thing is, however, that for (a) I made that equation based on numerical trials, i.e., substituting numbers and subtracting from the area. I didn't get it in an algebraic way, which is why I'm lost for (b).
For (b) however, $2^{n+1} -\frac {2^{n+1}}{n+1}$ can be simplified to $\frac {2n\times 2^n}{n+1}$ ??
(c) I thought solving (b) would solve (c), but now when I have this extra info, I see that in (c) from 2 to 3 it makes a rectangle, but the y axis is different from the x axis, no?
Thanks, I appreciate your help
The thing is however, that for (a) I made that equation based on numerical trials - i.e substituting numbers and subtracting from the area. I didn't get it in an algebraic way which is why I'm
lost for (b).
For (b) however, $2^{n+1} -\frac {2^{n+1}}{n+1}$ can be simplified to $\frac {2n\times 2^n}{n+1}$ ??
(c) I thought solving (b) would solve (c), but now when i have this extra info, i see that in (c) from 2 to 3 it makes a rectangle, but the y axis is different from the x axis, no?
$2^{n+1} -\frac {2^{n+1}}{n+1} = \frac{(n+1)2^{n+1}}{n+1} -\frac {2^{n+1}}{n+1} = \frac{n \cdot 2^{n+1}}{n+1}$.
(c) Get the length and width of the rectangle. Hence get the area of the rectangle. Subtract the area between the curve and the x-axis from the area of the rectangle.
Thanks for your help; I have now done (a), (b) and (c).
2 more Qs though. The first is:
Isn't $\frac{n \cdot 2^{n+1}}{n+1}$ the same as $\frac {2n\times 2^n}{n+1}$ (doesn't it simplify to that)?
The 2nd Q is somewhat related to (a/b/c), but it is for integration 3, which I learnt last year, and my paper 2 exam in 2 weeks is on this.
Area A is the area under the curve for x = a to b and the x axis
Area B is the area under the curve for y = a to b and the y axis
It says: " For a function $y=x^n$, the ratio of Area A: Area B is $n:1$. Given the function $y=x^n$ from x = a to x = b such that a < b, does this ratio hold true for the regions defined below?
Prove and explain why or why not.
Area A: $y=x^n$, $y=a^n$, $y=b^n$ and the y axis
Area B: $y=x^n$, x = a, x = b, and the x axis
I'm not asking you to do it for me; you've already helped me a lot, but I don't know what to do or where to start, so just guide me a little?
A ratio of n : 1 means the areas are a fraction $\frac{1}{n+1}$ and $\frac{n}{n+1}$ of the rectangular area.
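(Two notes added here. First, on the earlier question: yes, $n\cdot 2^{n+1} = 2n\cdot 2^n$, so the two fractions are the same. Second, an illustrative sketch of the ratio claim: for $y=x^n$ on $[0,b]$ the area under the curve is $\int_0^b x^n\,dx = \frac{b^{n+1}}{n+1}$ and the bounding rectangle has area $b\cdot b^n = b^{n+1}$, so the curve splits the rectangle into pieces of $\frac{1}{n+1}$ and $\frac{n}{n+1}$ of its area, i.e. in the ratio $n:1$. With a lower limit $a>0$ the regions defined in the exercise no longer tile a single rectangle, which is exactly what the problem asks you to examine.)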
Last edited by jhomie; November 13th 2008 at 02:08 PM.
quadratic equations
A shopkeeper buys a number of books for Rs 80. If he had bought 4 more for the same amount, each book would have cost Rs 1 less. How many books did he buy?
Suppose he bought b books, then he paid 80/b Rupees per book.
If he had bought b+4 books for 80 Rupees he would have paid 80/(b+4)
Rupees for each book, but we are told this is 1 Rupee less than he did
pay per book so:
80/(b+4)=80/b - 1
This is the equation you need to solve to find b, the number of books he bought.
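(Completing the computation, which the reply leaves as an exercise: multiplying through by $b(b+4)$ gives $80b = 80(b+4) - b(b+4)$, i.e. $b^2+4b-320=0$, which factors as $(b-16)(b+20)=0$. Discarding the negative root, $b=16$: he bought 16 books at Rs 5 each, whereas 20 books would have cost Rs 4 each.)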
linear equation, span, vectors, linear systems of equations
Show that S and T have the same span in R^3 by showing that the vectors in S are in the span of T and vice versa.
S= {(1,0,0), (0,1,0)}
T= {(1,2,0), (2,1,0)}
I'm a little confused about how to start off on this problem. Help?!
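(A hint, added since the thread has no replies here: each vector of T already lies in span(S), since its third coordinate is 0, e.g. $(1,2,0)=1\cdot(1,0,0)+2\cdot(0,1,0)$. Conversely, solving the small linear systems gives $(1,0,0)=-\frac 1 3(1,2,0)+\frac 2 3(2,1,0)$ and $(0,1,0)=\frac 2 3(1,2,0)-\frac 1 3(2,1,0)$, so both sets span the xy-plane in $\mathbb{R}^3$.)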
Figure 6: Lowry plot of the eFAST quantitative measure. The total effect of a parameter comprised the main effect (black bar) and any interactions with other parameters (grey bar) given as a
proportion of variance. The ribbon, representing variance due to parameter interactions, is bounded by the cumulative sum of main effects (lower bold line) and the minimum of the cumulative sum of
the total effects (upper bold line), (a) CV at 3 hours, (b) CXPPM at 3 hours, (c) C[urine] at 5 hours.
Would Copernicus have been more convincing if he’d been more accurate?
As a follow-up to yesterday’s post, I was wondering if Copernicus would have been more convincing if he’d used ellipses in his model instead of circles. By using circles Copernicus had to use
epicycles like Ptolemy, though not so many. Still, it gave the impression that epicycles were necessary. If that’s the case then why not have a stationary Earth as well? The discovery that
planetary motion would be better described by ellipses didn’t come about till Kepler’s work almost a century later. As far as the post title goes, I think Dr* T’s Theory #1 applies here: Any
tabloid heading that starts ‘Is this…’, ‘Could this be…’ etc. can be safely answered ‘No’.
So my post title is a bit of a cliché, but the reason I’ve used it is that if the answer is no, then something strange is happening. More accurate is less convincing?
The reason I think that is that Copernicus’ model wasn’t isolated from the rest of thought for that period. It used and built on a number of assumptions of the time. One of those ideas was the
creation of the universe by a perfect being. Another was the idea that a circle was a perfect shape, derived from classical geometry. By telling people the Sun was at the centre of the
universe and not the Earth, Copernicus was asking people to make a big shift in their thinking. A lot of people thought it nonsense. If he’d made the orbits elliptical as well then many people
who would have been willing to listen to Copernicus’ ideas would have balked at that, reducing his potential audience further. In terms of numbers, the population of mathematically
minded people who could examine his work was small enough already.
If he’d reduced the number of initial readers further, would his ideas have spread enough for others to pick them 50 years later? It’s impossible to say, but if Copernicus hadn’t given Kepler
the idea of a putting the Sun at the centre of universe, could Kepler have discovered it independently? It’s hard to say but, given how Kepler struggled with letting go of circles and using
ellipses, I think it’s unlikely.
This is why I’m wary of histories of science that are purely about who got it right and who got it wrong. Copernicus’ use of circles isn’t ‘right’, but it was necessary at the time.
I’ve «cough» borrowed the portrait of Copernicus from Prof Reike’s page on Copernicus. It’s well worth visiting if you want to find out more about the astronomer.
You can read more about Kepler’s discovery of the elliptical path of planets at:
Boccaletti 2001. From the epicycles of the Greeks to Kepler's ellipse — The breakdown of the circle paradigm
10 thoughts on “Would Copernicus have been more convincing if he’d been more accurate?”
2. Thanks for these last couple of posts. I wanted to add though that, rather than, as you seem to suggest, Copernicus keeping circles to make his heliocentric ideas more palatable to his
audience, it was his desire to bolster the classical ideal of uniform circular motion that led him to heliocentrism. His use of circles was not just “necessary at the time” but a
fundamental driver to the development of his theory — as, in fact, was his reverence for Ptolemy, on whose Almagest Copernicus modelled the structure of De Revolutionibus.
□ Thanks for adding that. I think that shows even more that setting up Ptolemy and Copernicus in opposition to each other doesn’t work historically. I know that’s well-known to
Renaissance historians, but it’s surprising how much of the popular belief about this period is wrong. It reminds me I really need to get a copy of Gingerich’s “The Book Nobody Read”.
5. Just another point to make here. We know that ellipses are correct, because Kepler eventually convinced everyone about it. But Kepler had available a much larger, and much more accurate
set of data on planetary motions, compiled by Tycho. It was only when circular orbits no longer matched the data, that Kepler abandoned circles. Copernicus had only the data currently
available at the time (and he made almost no observations himself). He simply reworked the cosmological system to heliocentrism with the same data. The result was predictive models
just as accurate as geocentrism–very important to make heliocentrism even plausible from the standpoint of technical astronomy.
But the major objections to heliocentrism since antiquity always had come from physics–how can we justify a moving earth, when it so visibly appears stationary to us? Accuracy or
inaccuracy of the mathematical system played little role.
6. The main motive for Kepler’s discoveries was to adjust the recorded observations to take account of Copernicus’s discovery that the Earth as the observation point was not stationary
but orbited round the Sun.
7. Further to my comment of 15 April 2011, how does Galileo fit into this? Galileo and Kepler were contemporaries and were both in agreement with Copernicus, but Galileo did not agree with
Kepler’s elliptical orbits. However Galileo did discover the law of falling bodies v^2=d which can be incorporated into Kepler’s system. The works of both Galileo and Kepler suffered from
religious prohibition, which explains even to-day why these works are not as well known as they should be.
8. Further to my comment of 15 April 2011, the connection between Galileo’s v^2=d at the empty focus end of the elliptical orbit and Kepler’s v^2=(1/r) at the Sun focus end is mathematically
incredibly interesting and not at all straight forward. Kepler’s version can be adapted for further research purposes by including a constant V being the maximum velocity, then the
variable velocities can be expressed as V/#r where # is my notation for square root. In this way the same velocity arises on both the accelerating side as well as the decelerating
side, but in opposite directions. As one of the properties of all perfect ellipses d is the distance from the curve to the empty focus, and r is the distance from the curve to the Sun
focus, d+r equals the major axis of the elliptical orbit.
9. Further to my previous comments, in about 1600 to 1603, Kepler wrote a paper which does not seem to be mentioned among his listed works. This paper contains the title Conic Section and is
written in both Latin and German. What this paper contains is mention of pins and thread. This would indicate that Kepler knew about the way a perfect ellipse should be drawn, even though
the paper’s description is not complete. Kepler does mention elsewhere that one of his biggest problems was trying to reconcile Apollonius’s conic section with the perfect ellipse which
is a cylindric section containing foci.
Patent application title: DETERMINATION OF PAIRINGS ON A CURVE USING AGGREGATED INVERSIONS
Inventors: Kristin Lauter (Redmond, WA, US) Peter Montgomery (Bellevue, WA, US) Michael Naehrig (Stolberg, DE)
Assignees: Microsoft Corporation
IPC8 Class: AH04L928FI
USPC Class: 380 28
Class name: Cryptography particular algorithmic function encoding
Publication date: 2011-07-14
Patent application number: 20110170684
One or more techniques and/or systems are disclosed that provide for determining mathematical pairings for a curve for use in cryptography. A plurality of inversions used for determining the
mathematical pairings for the curve are aggregated (e.g., into a single inversion in respective levels of a binary tree representation of elements of the computation). The mathematical pairings for
the curve are determined in affine coordinates from a binary representation of a scalar read from right to left using the aggregated plurality of inversions.
CLAIMS

1. A computer-based method for determining mathematical pairings for a curve for use in cryptography, comprising: aggregating a plurality of inversions used in determining the mathematical pairings for the curve using one or more micro-processors; and determining the mathematical pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left using the aggregated plurality of inversions.

2. The method of claim 1, aggregating a plurality of inversions comprising aggregating the plurality of inversions into a single inversion for mathematical pairing determination.

3. The method of claim 1, comprising reusing the aggregated plurality of inversions for one or more pairing calculations when determining the mathematical pairing for the curve.

4. The method of claim 3, reusing the aggregated plurality of inversions where a first curve point that was used in determining the aggregated inversions is a same element as a second curve point for which the aggregated inversions are reused.

5. The method of claim 1, comprising parallelizing the determination of two or more mathematical pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left on a plurality of processors.

6. The method of claim 5, comprising parallelizing two or more instances of addition acts in the determination of two or more mathematical pairings for the curve on a plurality of processors.

7. The method of claim 1, the output of the aggregated inversions comprising a slope value for the updating of a line function in the pairing determination.

8. The method of claim 1, comprising using the aggregated plurality of inversions as one or more slope values for updating a line function in the pairing determination.

9. The method of claim 1, comprising using an output of the aggregated inversions to update a coefficient of a line used to determine the mathematical pairing.

10. The method of claim 1, comprising reading a binary representation of the scalar for the curve from right to left.

11. The method of claim 1, the curve point comprising an element of a group used for a cryptographic application.

12. The method of claim 1, determining the mathematical pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left using the aggregated plurality of inversions comprising: determining a multiple of a curve point by a scalar using a binary representation of the scalar read from right to left; determining an inversion of an aggregation of additions; and using an output of the aggregated inversions to update a function of a line used to determine the mathematical pairing.

13. A system for determining mathematical pairings for a curve for use in cryptography, comprising: an inversion aggregation component, operably coupled with one or more programmed processors disposed in one or more computing devices, and configured to aggregate a plurality of inversions used in determining the mathematical pairings for the curve, and operably coupled with a data storage component configured to store one or more of the aggregated inversions; and a mathematical pairings determination component, operably coupled with the data storage component, and configured to determine the mathematical pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left using the aggregated plurality of inversions.

14. The system of claim 13, comprised in a cryptographic system comprising: an input receiving component configured to receive at least two elements; a cryptographic key comprising the scalar; and configured to compare results of a first mathematical pairing on an elliptic curve for the received elements with a second mathematical pairing for at least two points on the elliptic curve, using the scalar.

15. The system of claim 13, the inversion aggregation component configured to aggregate the plurality of inversions for the mathematical pairing determination into a single inversion for use in the mathematical pairing determination for two points on the curve.

16. The system of claim 13, the mathematical pairings determination component operably coupled with a plurality of processors configured to parallelize a determination of addition relationships for multiples of the curve point in affine coordinates along a binary representation of a scalar read from right to left.

17. The system of claim 13, the mathematical pairings determination component configured to reuse the aggregated plurality of inversions if a first curve point that was used in determining the aggregated plurality of inversions is a same element as a second curve point for which the aggregated inversions are reused.

18. The system of claim 17, comprising an inversion reuse determination component configured to: determine whether the second curve point is the same element as the first curve point; and retrieve the aggregated plurality of inversions from the data storage component that correspond to the first curve point.

19. A computer-based method for determining mathematical pairings for a curve for use in cryptography, comprising: aggregating a plurality of inversions used in determining the mathematical pairings for the curve into a single inversion for mathematical pairing determination using one or more programmed processors; and determining the mathematical pairings for two elements used for a cryptographic application on a curve in affine coordinates along a binary representation of a scalar read from right to left, using the aggregated plurality of inversions as one or more slope values for updating a coefficient of a line in the pairing determination, the determining of the mathematical pairings comprising reusing the aggregated plurality of inversions if a first curve point that was used in determining the aggregated inversions is a same element as a second curve point for which the aggregated inversions are reused.

20. The method of claim 19, comprising parallelizing the determination of the mathematical pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left on a plurality of processors.
BACKGROUND
Computers have become increasingly interconnected via networks (such as the Internet), and security and authentication concerns have become increasingly important. Cryptographic techniques that
involve a key-based cipher, for example, can take sequences of intelligible data (e.g., typically referred to as plaintext) that form a message and mathematically transform them into seemingly
unintelligible data (e.g., typically referred to as ciphertext), through an enciphering process. In this example, the enciphering can be reversed, thereby allowing recipients of the ciphertext with
an appropriate key to transform the ciphertext back to plaintext, while making it very difficult, if not nearly impossible, for those without the appropriate key from recovering the plaintext.
Public-key cryptographic techniques are an embodiment of key-based cipher. In public-key cryptography, for example, respective communicating parties have a public/private key pair. The public key of
each respective pair is made publicly available (e.g., or at least available to others who are intended to send encrypted communications), and the private key is kept secret. In order to communicate
a plaintext message using encryption to a receiving party, for example, an originating party can encrypt the plaintext message into a ciphertext message using the public key of the receiving party
and communicate the ciphertext message to the receiving party. In this example, upon receipt of the ciphertext message, the receiving party can decrypt the message using its secret private key,
thereby recovering the original plaintext message.
An example of public/private key cryptology comprises generating two large prime numbers and multiplying them together to get a large composite number, which is made public. In this example, if the
primes are properly chosen and large enough, it may be extremely difficult (e.g., practically impossible due to computational infeasibility) for someone who does not know the primes to determine them
from just knowing the composite number. However, in order for this method to be secure, the size of the composite number should be more than 1,000 bits. In some situations, such a large size makes
the method impractical to be used.
An example of authentication is where a party or a machine attempts to prove that it is authorized to access or use a product or service. Often, a product ID system is utilized for a software program
(s), where a user enters a product ID sequence stamped on the outside of the properly licensed software package as proof that the software has been properly paid for. If the product ID sequence is
too long, then it will be cumbersome and user unfriendly. Other common examples include user authentication, when a user identifies themselves to a computer system using an authentication code.
As another example, in cryptography, elliptic curves are often used to generate cryptographic keys. An elliptic curve is a mathematical object that has a structure and properties well suited for
cryptography. Many protocols for elliptic curves have already been standardized for use in cryptography. A recent development in cryptography involves using a pairing, where pairs of elements from
one or more groups, such as points on an elliptic curve, can be combined to generate new elements from another group to create a cryptographic system.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key factors
or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Encryption and decryption are usually performed based on a secret. This secret can utilize an order of a group of points, or some other characteristic of the group, such as the generator, or a
multiple of the generator. A variety of different groups can be used in cryptography, such as implementing the points on an elliptic curve for the group's elements. A group of elements (e.g., points)
derived from an elliptic curve can be used in the encryption/decryption, for example, as the discrete logarithm problem (DLP) for such a group is considered to be hard. A hard DLP is preferred in
cryptography in order to create a secure encryption/decryption process, for example.
Currently, when computing pairings on an elliptic curve a lot of operations such as multiplications or inversions in the finite field that the elliptic curve is defined over can be required. One can
attempt to reduce the number of multiplications to reduce computation expense, and/or speed the computations up in other ways. One technique for speeding up the computations is to reduce a number of
inversions undertaken when computing the pairing.
For example, when working in affine space, both multiplications and inversions are performed, where inversions are more computationally expensive than multiplications. In order to reduce the number
of inversions current practitioners change the coordinate system of the curve points from affine space to projective space. This has an effect of reducing the inversions, while increasing the number
of multiplications, which are computationally cheaper.
One or more of the techniques and/or systems described herein provide an alternate to converting the coordinates to projective space, while still reducing a number of inversions needed to compute the
pairing on the elliptic curve. Using these techniques and systems one may aggregate inversions for coordinates in affine space, for example, and reuse the aggregated inversions for an additive act
used for the pairing computation. Further, portions of the computations can be parallelized on multi-core systems, for example, to speed up the overall computation time. In this way, for example,
pairings used in a cryptographic system can be computed using less computational resources and in a shorter time (e.g., faster) than present implementations.
In one embodiment, when determining mathematical pairings for a curve for use in cryptography, a plurality of inversions that are used when determining the mathematical pairings for the curve are
aggregated (e.g., into a single inversion, such as an intermediate calculation in the pairings computation). The mathematical pairings for the curve are determined in affine coordinates along a
binary representation of a scalar read from right to left using the aggregated plurality of inversions.
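As a concrete illustration of the aggregation idea, one standard way to combine many field inversions into one is Montgomery's simultaneous-inversion trick, which trades n inversions for a single inversion plus about 3(n-1) multiplications. A minimal Python sketch (an illustration of the general technique, not the patent's specific algorithm):

    def batch_invert(elems, p):
        """Invert every element of elems modulo the prime p using one
        modular inversion (Montgomery's simultaneous-inversion trick)."""
        n = len(elems)
        prefix = [1] * (n + 1)
        for i, a in enumerate(elems):      # prefix[i+1] = a_0 * ... * a_i
            prefix[i + 1] = prefix[i] * a % p
        inv = pow(prefix[n], -1, p)        # the single inversion
        out = [0] * n
        for i in range(n - 1, -1, -1):
            out[i] = inv * prefix[i] % p   # equals a_i^{-1}
            inv = inv * elems[i] % p       # now inverts a_0 * ... * a_{i-1}
        return out

    # e.g., the denominators of several chord/tangent slopes at once
    p = 2**127 - 1
    denoms = [123456789, 987654321, 192837465]
    assert all(d * i % p == 1 for d, i in zip(denoms, batch_invert(denoms, p)))

In a pairing computation, the inverted values would then feed the slope (line-coefficient) updates described above.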
To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few
of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when
considered in conjunction with the annexed drawings.
DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating an exemplary cryptosystem in accordance with one or more of the methods and/or systems disclosed herein.

FIG. 2 is an illustration of an exemplary system using a product identifier to validate software.

FIG. 3 is a flow-chart diagram of an exemplary method for determining mathematical pairings for a curve for use in cryptography.

FIG. 4 is a flow diagram illustrating one embodiment of an implementation of portions of one or more of the methods described herein.

FIG. 5 is a flow diagram illustrating an exemplary implementation of one or more of the techniques and/or systems described herein.

FIG. 6 is a component block diagram illustrating an exemplary system for determining mathematical pairings for a curve for use in cryptography.

FIG. 7 is a component block diagram illustrating an exemplary implementation of one or more of the systems described herein.

FIG. 8 is a component block diagram illustrating an exemplary implementation of one or more of the systems described herein.

FIG. 9 is an illustration of an exemplary computer-readable medium that may be devised to implement one or more of the methods and/or systems described herein.

FIG. 10 is a component block diagram of an exemplary environment that may be devised to implement one or more of the methods and/or systems described herein.
DETAILED DESCRIPTION
The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes
of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be
practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to facilitate describing the claimed subject matter.
The one or more cryptographic pairings techniques and/or systems described herein can determine mathematical pairings for an elliptic curve that can be used for cryptographic applications. For
example, they can be used to determine pairings for a proposed authorization (e.g., an electronic signature) for cryptographic applications.
Typically, a pairing-based cryptosystem utilizes a group (e.g., of elements and a binary multiplier derived from an elliptic curve) whose elements are publicly known (e.g., by knowing the curve). The
scalar that is used to compute the pairing is publicly known. Unknown secrets are either the input points or implicitly contained in the input points to the pairing. The basis for the security is the
hardness of an associated discrete logarithm problem. A pairing-based encryption and decryption as illustrated in FIG. 1, as an example, typically refers to encryption and decryption that uses keys that are generated based on aspects or characteristics of an algebraic curve. The exemplary cryptosystem of FIGS. 1 and 2
can be based on the curve being publically known but the points generated being secret, as the points generated from the curve by the scalar are secret (e.g., and difficult to determine). In one
embodiment of the pairing-based cryptography, the curve may be an elliptic curve, and the elements that comprise the group can be generated from points on the elliptic curve. As one of ordinary skill
in the art may appreciate, in a typical situation a point P is publicly known and a scalar m is secret. Then a point Q=mP is made public as well. Because an associated discrete log problem is hard,
it is infeasible to determine m from P and Q. Accordingly, the secrets are usually just the scalars and the points are public.
Pairing-based cryptosystems can be used to encrypt a wide variety of information. For example, a cryptosystem may be used to generate a "short" signature or product identifier, which is a code that
allows validation and/or authentication of a machine, program or user, for example. The signature can be a "short" signature in that it uses a relatively small number of characters.
FIG. 1 is a block diagram illustrating an exemplary cryptosystem 100 in accordance with certain embodiments of the methods and systems disclosed herein. The exemplary cryptosystem 100 comprises an encryptor
102 and a decryptor 104. A plaintext message 106 can be received at an input module 108 of the encryptor 102, which is a pairing-based encryptor that encrypts message 106 based on a public key--an
element of a publicly known group--generated based on a secret scalar (known merely by decryptor 104). In one embodiment, the group can be a group of points generated from the elliptic curve used by
the encryptor 102, and discussed in more detail below. A plaintext message 106 is typically an unencrypted message, although encryptor 102 can encrypt other types of messages. Thus, the message 106
may alternatively be encrypted or encoded by some other component (not shown) or a user.
An output module 110 of the encryptor 102 outputs an encrypted version of the plaintext message 106, which can be ciphertext 112. Ciphertext 112, which may comprise a string of unintelligible text or
some other data, can then be communicated to the decryptor 104, which can be implemented, for example, on a computer system remote from a computer system on which encryptor 102 is implemented. Given
the encrypted nature of ciphertext 112, the communication link between the encryptor 102 and the decryptor 104 need not be secure (e.g., it is often presumed that the communication link is not
secure). As an example, the communication link can be one of a wide variety of public and/or private networks implemented using one or more of a wide variety of conventional public and/or proprietary
protocols, and including both wired and wireless implementations. Additionally, the communication link may include other non-computer network components, such as hand-delivery of media including
ciphertext or other components of a product distribution chain.
The decryptor 104 receives the ciphertext 112 at an input module 114 and, because the decryptor 104 is aware of the secret key corresponding to the public key used to encrypt the message 106 (e.g.,
as well as the necessary generator), can decrypt the ciphertext 112 to recover the original plaintext message 106, which is output by an output module 116 as a plaintext message 118. In one
embodiment, the decryptor 104 is a pairing-based decryptor that decrypts the message based on the group of points generated from the elliptic curve (e.g., a group as was used by encryptor 102), and
is discussed in more detail below.
In one embodiment, the encryption and decryption are performed in the exemplary cryptosystem 100 based on a secret, which may be the scalar used to generate the public key, an element of a group of
points from the elliptic curve, thereby allowing the solution to the problem to be difficult to determine. The secret is known to the decryptor 104, and a public key can be generated based on the
secret known to encryptor 102. In this embodiment, this knowledge may allow the encryptor 102 to encrypt a plaintext message that can be subsequently decrypted merely by the decryptor 104. Other
components, including the encryptor 102, which do not have knowledge of the secret, cannot decrypt the ciphertext (e.g., although decryption may be technically possible, it is not computationally
feasible). Similarly, in one embodiment, the decryptor 104 may also generate a message using the secret key based on a plaintext message; a process referred to as digitally signing the plaintext
message. In this embodiment, the signed message can be communicated to other components, such as the encryptor 102, which can verify the digital signature based on the public key.
FIG. 2 is an illustration of an exemplary system 200 using a product identifier to validate software in accordance with certain embodiments of the methods and systems described herein. The exemplary system
comprises a software copy generator 202 including a product identifier (ID) generator 204. Software copy generator 202 may produce software media 210 (e.g., a CD-ROM, DVD (Digital Versatile Disk),
etc.) that can contain files needed to collectively implement a complete copy of one or more application programs, (e.g., a word processing program, a spreadsheet program, an operating system, a
suite of programs, and so forth). These files can be received from source files 206, which may be a local source (e.g., a hard drive internal to generator 202), a remote source (e.g., coupled to
generator 202 via a network), or a combination thereof. Although a single generator 202 is illustrated in FIG. 2, often multiple generators operate individually and/or cooperatively to increase a rate at which software media 210 can be generated.
A product ID generator 204 can generate a product ID 212 that may include numbers, letters, and/or other symbols. The generator 204 generates a product ID 212 using the pairing-based encryption
techniques and/or systems described herein. The product ID 212 may be printed on a label and affixed to either a carrier containing software media 210 or a box into which software media 210 is
placed. Alternatively, the product ID 212 may be made available electronically, such as a certificate provided to a user when receiving a softcopy of the application program via an on-line source
(e.g., downloading of the software via the Internet). The product ID 212 can serve multiple functions, such as being cryptographically validated to verify that the product ID is a valid product ID
(e.g., thus allowing the application program to be installed). As a further example, the product ID 212 may serve to authenticate the particular software media 210 to which it is associated.
The generated software media 210 and associated product ID 212 can be provided to a distribution chain 214. The distribution chain 214 can represent one or more of a variety of conventional
distribution systems and methods, including possibly one or more "middlemen" (e.g., wholesalers, suppliers, distributors, retail stores (either on-line or brick and mortar), etc.), and/or electronic
distribution, such as over the Internet. Regardless of the manner in which media 210 and the associated product ID 212 are distributed, the media 210 and product ID 212 are typically purchased by (e.g., licensed to) or distributed to the user of a client computer 218, for example.
The client computer 218 can include a media reader 220 that is capable of reading the software media 210 and installing an application program onto client computer 218 (e.g., installing the
application program on to a hard disk drive or memory (not shown) of client computer 218). In one embodiment, part of the installation process can involve entering the product ID 212 (e.g., to
validate a licensed copy). This entry may be a manual entry (e.g., the user typing in the product ID via a keyboard), or alternatively an automatic entry (e.g., computer 218 automatically accessing a
particular field of a license associated with the application program and extracting the product ID therefrom). The client computer 218 can also include a product ID validator 222 which validates,
during installation of the application program, the product ID 212. In one embodiment, the validation can be performed using the pairing-based decryption techniques and/or systems described herein.
If the validator 222 determines that the product ID is valid, an appropriate course of action can be taken (e.g., an installation program on software media 210 allows the application to be installed
on computer 218). However, if the validator 222 determines that the product ID is invalid, a different course of action can be taken (e.g., the installation program terminates the installation
process preventing the application program from being installed).
In one embodiment, the product ID validator 222 can also optionally authenticate the software media (e.g., application program) based on the product ID 212. This authentication verifies that the
product ID 212 entered at computer 218 corresponds to the particular copy of the application being accessed. As an example, the authentication may be performed at different times, such as
during installation, or when requesting product support or an upgrade. Alternatively, in this embodiment, the authentication may be performed at a remote location (e.g., at a call center when the
user of client computer 218 calls for technical support, the user may be required to provide the product ID 212 before receiving assistance).
In one embodiment, if an application program manufacturer desires to utilize the authentication capabilities of the product ID, the product ID generated by generator 204 for each copy of an
application program can be unique. As an example, unique product IDs can be created by assigning a different initial number or value to each copy of the application program (e.g., this initial value
is then used as a basis for generating the product ID). The unique value associated with the copy of the application program can be optionally maintained by the manufacturer as an authentication
record 208 (e.g., a database or list) along with an indication of the particular copy of the application program. The indication of the copy can be, for example, a serial number embedded in the
application program or on software media 210, and may be hidden in any of a wide variety of conventional manners. Alternatively, for example, the initial number itself may be a serial number that
is associated with the particular copy, thereby allowing the manufacturer to verify the authenticity of an application program by extracting the initial value from the product ID and verifying that
it is the same as the serial number embedded in the application program or software media 210.
A method can be devised that allows a mathematical pairing to be determined for a curve, where a first set of elements submitted as a cryptographic key (e.g., points on an elliptic curve) can be
compared with known points on the curve, and used for cryptographic purposes. Effective cryptosystems are typically based on groups where the Discrete Logarithm Problem (DLP) for the group is hard
(e.g., difficult to calculate), such as a group of points from the elliptic curve. The DLP can be formulated in a group, which is a collection of elements together with a binary operation, such as a
group multiplication. As an illustrative example, the DLP may be: given an element g in a finite group G and another element h that is an element of G, find an integer x such that g^x=h. Generating pairings for use in cryptography typically requires a lot of underlying multiplications in a finite field over which the elliptic curve is defined.
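To make the hardness concrete, the following minimal Python sketch (an illustration, not part of the source) finds a discrete logarithm by brute force; for cryptographically sized groups this loop is computationally infeasible, which is exactly the hardness the cryptosystem relies on.

def discrete_log(g, h, p):
    # Return x with g^x = h (mod p) by exhaustive search over the group.
    value = 1
    for x in range(p - 1):
        if value == h:
            return x
        value = (value * g) % p
    return None

print(discrete_log(5, 8, 23))   # prints 6, since 5^6 = 8 (mod 23)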
[0038] FIG. 3
is a flow-chart diagram of an exemplary method 300 for determining mathematical pairings for a curve for use in cryptography. The exemplary method 300 begins at 302 and involves aggregating a
plurality of inversions used in determining the mathematical pairings for the curve, at 304. Because pairing operations on an elliptic curve utilize a lot of underlying multiplications in the finite field, the number of multiplications can be mitigated, and/or efficiencies can be created in other pairing operations, in order to make the pairing operations more efficient.
In one embodiment, in the finite field for the curve, both multiplications and inversions (e.g., identifying multiplicative inverses or reciprocals) are performed for the pairing operation.
Inversions are typically more computationally expensive to perform than the multiplications. For example, an inversion-to-multiplication cost ratio can often be eighty to one, which is why the coordinate system used for the curve points is commonly changed from affine to projective in order to reduce the number of inversions. As an illustrative example of an inversion determination, to approximate the multiplicative inverse of a nonzero real number x, a number y can be repeatedly replaced with 2y-xy^2. In this example, when changes to y remain within a threshold, y is an approximation of the multiplicative inverse of x. It will be appreciated that this example is merely for illustration purposes, and that there are other techniques for determining inversions, particularly for other types of numbers, such as complex numbers.
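As a small aside (an assumed numeric illustration, not the source's implementation), the iteration above can be written directly in Python; the starting value y must satisfy 0 < y < 2/x for the iteration to converge.

def approx_reciprocal(x, y=0.1, threshold=1e-12):
    # Repeatedly replace y with 2y - x*y^2 until the change is within the threshold.
    while True:
        y_next = 2 * y - x * y * y
        if abs(y_next - y) < threshold:
            return y_next
        y = y_next

print(approx_reciprocal(4.0))   # approximately 0.25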
In the exemplary method 300, for example, while working in the affine coordinate system, a number of inversions can be greatly mitigated by combining the inversions, and determining them at a same
time. In one embodiment, when using affine coordinates for the pairing computation, respective doubling (e.g., multiplication) and addition acts use a finite field inversion to compute a slope value
for a line that is evaluated in a subsequent act. In this embodiment, the inversions can be aggregated, for example, using "Montgomery's trick" to replace s finite field inversions by a single inversion and 3(s-1) multiplications.
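To show where these slope inversions arise, here is a hedged Python sketch of affine doubling and addition on y^2 = x^3 + ax + b over F_p; the curve (p = 97, a = 2, b = 3) and the point are chosen purely for illustration, and edge cases (the point at infinity, y = 0, P = -Q) are omitted.

p, a = 97, 2

def inv(z):
    # One finite field inversion via Fermat's little theorem.
    return pow(z, p - 2, p)

def ec_double(P):
    x, y = P
    lam = (3 * x * x + a) * inv(2 * y) % p   # tangent slope: one inversion
    x3 = (lam * lam - 2 * x) % p
    return x3, (lam * (x - x3) - y) % p

def ec_add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * inv(x2 - x1) % p       # chord slope: one inversion
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

print(ec_double((0, 10)))   # (65, 32); both points satisfy y^2 = x^3 + 2x + 3 (mod 97)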
As an illustrative example of Montgomery's trick, in order to determine inversions for elements x and y, instead of determining two inversions the product xy can be determined and its inverse (xy)^(-1) computed. In this example, the inverses of x and y can then be determined by the multiplications: x^(-1)=(xy)^(-1)·y, and y^(-1)=(xy)^(-1)·x. In this way, in this example, the inversions of the two elements x and y can be determined by one inversion and three multiplications. Where inversions are to be determined for n elements, merely one inversion and 3(n-1) multiplications can be performed. Therefore, in one embodiment, where the pairing computation comprises a plurality of inversions (n), the n inversions can be aggregated into one inversion.
In one embodiment, let [a_1, . . . , a_s] be a sequence of elements of which the reciprocals [a_1^(-1), . . . , a_s^(-1)] are to be computed. The reciprocals can be computed by first computing the product a_1 . . . a_s, its reciprocal (a_1 . . . a_s)^(-1), the partial products a_1 . . . a_i for 1≦i≦s, and then the reciprocals of the single elements by a_i^(-1)=(a_1 . . . a_i)^(-1)·(a_1 . . . a_(i-1)). The acts can be performed in 1 inversion and 3(s-1) multiplications. That is, s inversions can be replaced by 1 inversion and 3(s-1) multiplications.
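A sketch of this batch inversion in Python (assuming a prime field F_p; the prime and the elements are illustrative only):

def batch_invert(elems, p):
    s = len(elems)
    prefix = [elems[0]]                 # a_1, a_1*a_2, ..., a_1*...*a_s: s-1 multiplications
    for e in elems[1:]:
        prefix.append(prefix[-1] * e % p)
    acc = pow(prefix[-1], p - 2, p)     # the single inversion: (a_1*...*a_s)^(-1)
    inverses = [0] * s
    for i in range(s - 1, 0, -1):       # 2(s-1) multiplications walking back down
        inverses[i] = acc * prefix[i - 1] % p
        acc = acc * elems[i] % p        # acc becomes (a_1*...*a_(i-1))^(-1)
    inverses[0] = acc
    return inverses

print(batch_invert([3, 5, 7], 97))   # [65, 39, 14]; each product with [3, 5, 7] is 1 mod 97

In total this is 1 inversion and 3(s-1) multiplications, matching the count above.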
In one embodiment, the product a_1 . . . a_s can be computed in a binary tree with s-1 multiplications, for example, where the s-1 intermediate products can be stored for use in the inversion aggregation. Further, in this embodiment, the reciprocal (a_1 . . . a_s)^(-1) is computed, and the subsequent reciprocals are computed along the same tree with 2(s-1) multiplications.
Returning to the exemplary method 300 of
FIG. 3
, at 306, the mathematical pairings are determined for the curve in affine coordinates along a binary representation of a scalar read from right to left using the aggregated inversions. When
computing a Tate pairing for a curve, a typical Miller loop algorithm goes through the scalar from left to right (or from the top down). As an illustrative example, assume that k>1 so that the denominators in a Miller algorithm are eliminated.
For the following exemplary embodiment and illustrative examples, the following notations are utilized: Let p>3 be a prime and F_q be a finite field of characteristic p. Let E be an elliptic curve defined over F_q having a Weierstrass equation E: y^2=x^3+ax+b. For a prime r with r|#E(F_q), let k be the embedding degree of E with respect to r, i.e. k is the smallest positive integer with r|q^k-1. The set of r-torsion points on E can be denoted by E[r] and the set of F_(q^i)-rational r-torsion points by E(F_(q^i))[r] for i>0. Let φ_q be the q-power Frobenius endomorphism on E.
Further, define G_1=E(F_q)[r] and G_2=E[r]∩Ker(φ_q-[q])⊆E(F_(q^k))[r].
Let k>1. A reduced Tate pairing is the map
(P,Q)→f_(r,P)(Q)^((q^k-1)/r),
where f_(r,P)∈F_q(E) is a function with divisor r(P)-r(O). Denote the function in F_q(E) given by the line through two points R and S on E by l_(R,S). If R=S, then the line is given by the tangent to the curve passing through R.
The following illustrates one embodiment of a typical Miller loop for computing a Tate pairing (using the above notations):
Input: P ε G_1, Q ε G_2, r = (r_(m-1), . . . , r_0)_2
Output: e_r(P,Q) = f_(r,P)(Q)^((q^k-1)/r)
1: R ← P, f ← 1
2: for (i ← m-2; i ≧ 0; i--) do
3:   f ← f^2 · l_(R,R)(Q)
4:   R ← [2]R
5:   if (r_i = 1) then
6:     f ← f · l_(R,P)(Q)
7:     R ← R + P
8:   end if
9: end for
10: f ← f^((q^k-1)/r)
11: return f
In this illustrative example, Lines 3 and 4 in the above algorithm together are commonly called a doubling act, and Lines 6 and 7 are commonly called an addition act.
In one embodiment of the act 306 of the exemplary method 300, the Miller loop algorithm can be modified, where the binary representations are read from right to left (or bottom up). The following is
an illustrative example of the right to left (or bottom up) approach:
Input: P ε G_1, Q ε G_2, r = (r_(m-1), . . . , r_0)_2
Output: e_r(P,Q) = f_(r,P)(Q)^((q^k-1)/r)
1: R ← P, f_R ← 1
2: V ← O, f_V ← 1
3: for (i ← 0; i ≦ m-1; i++) do
4:   if (r_i = 1) then
5:     f_V ← f_V · f_R · l_(V,R)(Q)
6:     V ← V + R
7:   end if
8:   f_R ← f_R^2 · l_(R,R)(Q)
9:   R ← [2]R
10: end for
11: f ← f_V^((q^k-1)/r)
12: return f
In this illustrative example, the doubling act is in Lines 8 and 9, and the addition act in Lines 5 and 6. The above algorithm does m doubling acts and h addition acts, where h is the Hamming weight (the number of nonzero bits) of r. In this example, although the loop is done m times, merely m-1 doubling acts are needed, as the last one does not influence the computation.
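For orientation (a simplified skeleton, with the line-function bookkeeping of the Miller loop deliberately omitted), the loop above mirrors right-to-left double-and-add scalar multiplication: R runs through P, [2]P, [4]P, . . . while V accumulates the result, and an addition act fires once per set bit of r.

def scalar_mult_rtl(r, P, double, add, identity):
    # Generic right-to-left double-and-add; 'double', 'add' and 'identity'
    # stand in for the group operations (e.g., curve point arithmetic).
    R, V = P, identity
    while r:
        if r & 1:          # addition act: once per nonzero bit of r
            V = add(V, R)
        R = double(R)      # doubling act
        r >>= 1
    return V

print(scalar_mult_rtl(13, 1, lambda x: 2 * x, lambda u, v: u + v, 0))   # 13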
Further, in this embodiment, when using the "bottom-up" approach, the addition acts in Lines 5 and 6 (in the bottom-up algorithm, above) can be postponed. Here, for example, pairs of the relevant function values and corresponding points (f_R, R) can be stored in a list L (e.g., in a database), and the computation of a final function value can be carried out later. As an illustrative example, the following algorithm provides a bottom-up approach with postponement of the addition acts, by storing the values and points (see Line 5 of the following algorithm), and carries out the computation of the final function value later (see Line 10):
Input: P ε G_1, Q ε G_2, r = (r_(m-1), . . . , r_0)_2
Output: e_r(P,Q) = f_(r,P)(Q)^((q^k-1)/r)
1: R ← P, f_R ← 1
2: L ← [ ]
3: for (i ← 0; i ≦ m-1; i++) do
4:   if (r_i = 1) then
5:     Append (f_R, R) to L
6:   end if
7:   f_R ← f_R^2 · l_(R,R)(Q)
8:   R ← [2]R
9: end for
10: Compute f_(r,P)(Q) from the pairs in L
11: f ← f^((q^k-1)/r)
12: return f
In this example, the approach can be more efficient than the top-down algorithm presented above, as postponing the computation saves costs equivalent to (h-1) multiplications in F_(q^k).
In one embodiment, using the aggregated inversions, Line 10 of the above algorithm, "Compute f_(r,P)(Q) from the pairs in L", can also be carried out along a binary tree. In this embodiment, in each layer of the binary tree, the aggregated inversion technique can be applied. In this way, for example, as described above, (h-1) inversions can be replaced by ⌈log(h)⌉ inversions and 3(h-1-⌈log(h)⌉) multiplications when computing the mathematical pairings for the curve. Therefore, the number of inversions is dramatically reduced, while only a small number of multiplications, which are computationally cheaper, are added.
Having computed the mathematical pairings for the curve, the exemplary method 300 of
FIG. 3
ends at 308.
[0054] FIG. 4
is a flow diagram illustrating one embodiment 400 of an implementation of portions of one or more of the methods described herein. Determining the mathematical pairings for the curve in affine
coordinates along a binary representation of a scalar read from right to left using the aggregated plurality of inversions can comprise reading a binary representation of the scalar from right to
left, at 402. As described above, the binary representation of the scalar can be read from right to left, for example, when the curve point coordinates are in the affine space.
At 404, a multiple of a curve point is determined by computing a scalar multiple of the curve point, where the scalar multiple, for example, is an m-fold sum of the curve point in affine space. That is, for example, as the multiplication acts of the bottom-up approach algorithm are performed, a plurality of curve point multiples can be determined. This act is often called the doubling act, for example, where for (i ← 0; i ≦ m-1; i++) do . . . f_R ← f_R^2 · l_(R,R)(Q), and R ← [2]R (using the above described notations). Notably, in this embodiment, the multiplication is performed on the coordinates in affine space, unlike current commonly used techniques that switch the coordinates to projective space in order to mitigate a number of inversions, for example.
At 406, the inversions of the additions of the curve point can be determined for the finite field. That is, for example, while reading the scalar from right to left, curve points are added depending
on the binary representation of the scalar. In this example, for the curve point additions, inversions are determined in the finite field. As an example, an inversion is often referred to as a
multiplicative inverse, or a reciprocal. As described above, the inversions are aggregated, for example, into a single inversion for respective acts in the additive process. In this way, a plurality
of inversions are combined at respective levels in the binary tree representation of the pairing portion of the algorithm, for example, merely using one inversion for respective levels.
At 408, an output of the aggregated inversion is determined, for example, as a slope value for a line function, and the line function is updated with the outputted slope value. For example, when
computing pairings for elements (e.g., curve points) for a curve over a finite field, line functions are evaluated to compute the pairings, such as to a new element in a different group. As such, in
this example, the aggregated inversion output is used as the slope value for the line function, for example, at respective levels of the binary tree representation, and the line is evaluated with the
slope value to determine the pairings.
[0058] FIG. 5
is a flow diagram illustrating one embodiment 500 of an implementation of one or more of the techniques and/or systems described herein. In this embodiment 500, two elements are received from an
electronic signature 550, such as submitted by a user to authenticate their identity, as input for a pairing computation, at 502. Further, two elements from a known group 552 (e.g., a shared secret
cryptographic key for security) are submitted as input for pairing computation. In this embodiment, pairings will be computed for the elements from the signature 550 and pairings will be computed for
the elements from the group 552.
At 504, multiples of the curve points submitted as elements of the group are determined, as described above. In this embodiment 500, binary representations of the scalar read from right to left 554
are used to determine the multiples of the curve point. The inversions are aggregated from the multiples at 506, into a single inversion, and are stored 556, such as in a remote or local database. In
one embodiment, the scalar, which is read from right to left in a binary representation 554, may be comprised in a cryptographic key, such as a public key.
In one embodiment, the determining of pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left can be parallelized on a plurality of processors,
at 508. For example, computers commonly have multi-core processors, which may allow the computations to be parallelized on more than one core in order to speed up the computation and free resources.
In one embodiment, the parallelization may comprise two or more instances for the determination of the multiples of the curve point on two or more processors, for example, at a same time.
At 510, an output for the aggregated inversions is retrieved, for example, as a slope value. As described above, an aggregated inversion may be used at respective levels in a binary tree
representation of the additive act in the pairings computations. Further, in one embodiment, the stored aggregated inversions 556 may be reused in a subsequent pairing computation of a set of
coordinates in affine space. As an example, the aggregated inversions can be retrieved from the remote or local database and reused. In this way, a number of computations can be mitigated by reducing
the inversion aggregation act.
In one embodiment, the aggregated inversion may be reused when a first curve point that was used in determining the aggregated inversions is a same element as a second curve point for which the
aggregated inversions are reused. That is, the aggregated inversions can be determined using a curve point submitted as an element in the pairings computation. In this embodiment, if a second set of
elements submitted after the first set is computed, and the second set comprises an element that is the same as the element from the first set, the aggregated inversions may be reused in computing
the pairings for the second set of elements, for example.
At 512, the line function is updated using the aggregated inversions output as a slope value. In one embodiment, the output of the aggregated inversions may be used to update a coefficient in the
function of the line used for the pairing computation. At 514, the pairings can be determined for the elements, for example, by evaluating the updated line function, resulting in a mathematical
pairing for the encrypted signature authorization element 558, and a mathematical pairing for the secret elements 560.
At 516, the respective pairings 558, 560 can be compared to determine whether they are equal, for example, to determine the authenticity of the submitted electronic signature. In this embodiment 500,
if the elements are found not to be equal, at 518, (or they are not from the same group) the submitted signature is not authenticated, at 520. If the elements are found to be equal, at 518, (and from
the same group) the submitted signature is authenticated, at 522. In this way, for example, the computation of the pairings for elements can be used for cryptographic purposes, and the one or more
techniques described herein may be used to facilitate a more efficient and faster pairing computation.
One or more systems may be devised for determining mathematical pairings for a curve, in order to compare submitted elements for cryptographic purposes, for example. Because the computation of
pairings for use in cryptography can require a lot of underlying multiplications and inversions in a finite field over which the elliptic curve is defined, the one or more systems described herein
can be devised to mitigate the time and resources used to compute these pairings.
FIG. 6
is a component block diagram illustrating an exemplary system 600 for determining mathematical pairings for a curve for use in cryptography.
The exemplary system 600 comprises an inversion aggregation component 602, which aggregates inversions that are used to determine the mathematical pairings for the curve. The inversion aggregation
component 602 is operably coupled with one or more programmed processors 650, which reside in one or more computing devices, and with a data storage component 654 that can store one or more of the
aggregated inversions 656. Further, in the exemplary system 600, a mathematical pairings determination component 604 is operably coupled with the data storage component 654, and can determine the
mathematical pairings for the curve in affine coordinates along a binary representation of a scalar read from right to left using the aggregated inversions 656 stored thereon.
In one embodiment, the inversion aggregation component 602, pairings determination component 604, and data storage component 654 may be comprised in a same computing device, such as the computing
device 652 that comprises the one or more processors 650. Alternatively, the components of the system may be disposed on different devices, or in some combination thereof.
In one embodiment, the inversion aggregation component 602 can be configured to aggregate the plurality of inversions into a single inversion for use in the mathematical pairing determination. For
example, respective inversions for a level of a binary representation of multiples of the curve point can be combined by the inversion aggregation component 602 into a single inversion for that
level. In this example, the combined (aggregated) inversion (e.g., 656) can be stored in the data storage component 654, and used by the pairings determination component 604 for computing pairings.
[0069] FIG. 7
is a component block diagram illustrating one embodiment 700 of an exemplary implementation of the one or more systems described herein. A cryptographic system 702, such as illustrated in FIGS. 1 and
2 (e.g., 104, 222), may comprise a pairing on an elliptic curve-based determiner 750, for example, which utilizes one or more implementations of the systems described herein. Further, the
cryptographic system 702 can comprise a group that is publicly known (e.g., by knowing the curve) and that is utilized by the cryptographic system (e.g., for authentication, security, encryption, etc.).
In this exemplary embodiment 700, an input component 704, such as a component that can read an incoming document that uses cryptographic authentication, receives a document 754 that comprises
cryptographic elements 708 and a public key 706. As an example, the document 754 may be an encrypted document that is being submitted to a decryptor (e.g., in order to be read), the cryptographic elements 708 are points on the curve (e.g., group elements from the group if the document is authentic), and the public key 706 provides a scalar used in the computation of the pairings. In general, the public key is usually a point on the curve, while the secret key is a scalar.
In this embodiment 700, the pairing on an elliptic curve-based determiner 750 can determine a pairing for the submitted cryptographic elements 708, and for elements from the private key 710, to
determine whether the submitted document is authentic, for example. That is, if the document is authentic, for example, the cryptographic elements 708 will match to the same element as those from the
private key 710, when pairings are computed for each using the scalar from the public key 706. In this way, the cryptographic system can output an authentication 752 for the document 754, for
example, in order for the document 754 to be decrypted for viewing.
[0072] FIG. 8
is a component block diagram illustrating another exemplary embodiment 800 of an implementation of the one or more systems described herein. In this embodiment 800, the mathematical pairings
determination component 604 is operably coupled with a plurality of processors 802a-802n that can parallelize a determination of addition relationships for multiples of the curve point 850 in affine
coordinates along a binary representation of a scalar read from right to left. That is, for example, various parts of the pairings determination (e.g., the addition acts) can be run on several
processors at a same time in order to reduce an overall time for computing the pairings 852.
Further, in the exemplary embodiment 800, the mathematical pairings determination component 604 can be configured to reuse the aggregated plurality of inversions 656, such as where a first curve
point that was used in determining the aggregated plurality of inversions is a same element as a second curve point for which the aggregated inversions are reused. In this embodiment, an inversion
reuse determination component 804 can determine whether the second curve point is the same element as the first curve point, such as by comparing a stored version of the first curve point to a second
one received 850. Additionally, the inversion reuse determination component 804 can retrieve the aggregated inversions 656 from the data storage component 654 that correspond to the first curve
point. In this way, the retrieved inversions 656 can be reused by the pairings determination component 604, for example, instead of computing new aggregated inversions.
Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An exemplary
computer-readable medium that may be devised in these ways is illustrated in
FIG. 9
, wherein the implementation 900 comprises a computer-readable medium 908 (e.g., a CD-R, DVD-R, or a platter of a hard disk drive), on which is encoded computer-readable data 906. This
computer-readable data 906 in turn comprises a set of computer instructions 904 configured to operate according to one or more of the principles set forth herein. In one such embodiment 902, the
processor-executable instructions 904 may be configured to perform a method, such as the exemplary method 300 of FIG. 3, for example. In another such embodiment, the processor-executable instructions
904 may be configured to implement a system, such as the exemplary system 600 of
FIG. 6
, for example. Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is
not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
As used in this application, the terms "component," "module," "system", "interface", and the like are generally intended to refer to a computer-related entity, either hardware, a combination of
hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a
thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within
a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware,
hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program
accessible from any computer-readable device, carrier, or media. Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the
scope or spirit of the claimed subject matter.
[0078] FIG. 10
and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein. The operating
environment of
FIG. 10
is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example computing devices
include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the
like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more computing devices. Computer readable instructions may be
distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs),
data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or
distributed as desired in various environments.
[0080] FIG. 10
illustrates an example of a system 1000 comprising a computing device 1012 configured to implement one or more embodiments provided herein. In one configuration, computing device 1012 includes at
least one processing unit 1016 and memory 1018. Depending on the exact configuration and type of computing device, memory 1018 may be volatile (such as RAM, for example), non-volatile (such as ROM,
flash memory, etc., for example) or some combination of the two. This configuration is illustrated in
FIG. 10
by dashed line 1014.
In other embodiments, device 1012 may include additional features and/or functionality. For example, device 1012 may also include additional storage (e.g., removable and/or non-removable) including,
but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in
FIG. 10
by storage 1020. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 1020. Storage 1020 may also store other computer readable
instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 1018 for execution by processing unit 1016, for example.
The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any
method or technology for storage of information such as computer readable instructions or other data. Memory 1018 and storage 1020 are examples of computer storage media. Computer storage media
includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape,
magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 1012. Any such computer storage
media may be part of device 1012.
Device 1012 may also include communication connection(s) 1026 that allows device 1012 to communicate with other devices. Communication connection(s) 1026 may include, but is not limited to, a modem,
a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 1012
to other computing devices. Communication connection(s) 1026 may include a wired connection or a wireless connection. Communication connection(s) 1026 may transmit and/or receive communication media.
The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier
wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" may include a signal that has one or more of its characteristics set or changed in such
a manner as to encode information in the signal.
Device 1012 may include input device(s) 1024 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device. Output device
(s) 1022 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 1012. Input device(s) 1024 and output device(s) 1022 may be connected to
device 1012 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device
(s) 1024 or output device(s) 1022 for computing device 1012.
Components of computing device 1012 may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a
Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of computing device 1012 may be interconnected by a network. For example,
memory 1018 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 1030 accessible via
network 1028 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 1012 may access computing device 1030 and download a part or all of the
computer readable instructions for execution. Alternatively, computing device 1012 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at
computing device 1012 and some at computing device 1030.
Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable
media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be
construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it
will be understood that not all operations are necessarily present in each embodiment provided herein.
Moreover, the word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as
advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean
an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is,
if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application
and the appended claims may generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a
reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In
particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless
otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the
disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been
disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any
given or particular application. Furthermore, to the extent that the terms "includes", "having", "has", "with", or variants thereof are used in either the detailed description or the claims, such
terms are intended to be inclusive in a manner similar to the term "comprising."
Algebra Of Sets, Identity Law, De Morgan’s Law - Transtutors
Algebra of Sets
Definition of Algebra of Sets
Algebra of sets explains the basic properties and laws of sets, i.e., the set-theoretic operations of union, intersection, and complementation. It also explains the relations of set equality and set
inclusion. Systematic procedure for evaluating expressions, along with performing calculations which involve these operations and relations are included as well.
Some of the useful properties/operations on sets are as follows:
* A ∪ U = U
* A ∩ Φ = Φ
* Φ^C = U
* U^C = Φ
Definition of a Set
According to Georg Cantor's definition, a set is a gathering together into a whole of definite, distinct objects of our perception or of our thought – which are called the elements of the set.
Laws of set algebra
Fundamental Laws of Set algebra
Commutative Law: states that the two operands can be swapped and still give the same result.
For any two sets A and B,
* A ∪ B = B ∪ A
* A ∩ B = B ∩ A
Associative Law: states that regrouping the operands, while keeping their order, does not change the result. For sets, this law holds for both union and intersection.
For any three sets A, B and C,
* (A ∪ B) ∪ C = A ∪ (B ∪ C)
* A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributive Law: states that one gets the same answer when a number is multiplied by a group of numbers added together as when each multiplication is done separately.
For any three sets A, B and C,
* A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
* A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Additional Laws of Set Algebra
Idempotent Law: Idempotent is the property of certain operations in mathematics and computer science, which can be applied multiple times without changing the result beyond the initial application.
For any set A,
* A ∪ A = A
* A ∩ A = A
Identity Law: For any set A,
* A ∪ Φ = A
* A ∩ U = A
De Morgan’s Law of Set Algebra
De Morgan’s Law: In formal logic, De Morgan's laws are rules relating the logical operators in terms of each other through negation. That is,
“The negation of a conjunction is the disjunction of the negations.
The negation of a disjunction is the conjunction of the negations.”
For any two sets A and B
* (A ∪ B)' = A' ∩ B'
* (A ∩ B)' = A' ∪ B'
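These laws are easy to check mechanically; here is a quick Python verification with an arbitrary small universe (the sets are illustrative):

U = set(range(1, 11))
A, B = {1, 2, 3, 4}, {3, 4, 5, 6}

def complement(S):
    return U - S

assert complement(A | B) == complement(A) & complement(B)   # (A ∪ B)' = A' ∩ B'
assert complement(A & B) == complement(A) | complement(B)   # (A ∩ B)' = A' ∪ B'
print("De Morgan's laws hold for this example")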
Union and Intersection
* In set theory, the set of all distinct elements in the collection is called a union (denoted as ∪). The union of a collection of sets S[1], S[2], S[3], ........., S[n] gives the set
S[1] ∪ S[2] ∪ S[3] ∪ .... ∪ S[n].
* The union of two sets A and B is the collection of points which are in A or in B (or in both):
A ∪ B = {x : x ∈ A or x ∈ B}
* Together with intersection and complement, union makes any power set into a Boolean algebra.
Definition of an Intersection
In mathematics, the intersection (denoted as ∩) of two sets A and B is the set that contains exactly those elements that belong to both A and B, but no other elements.
The intersection of A and B is written "A ∩ B". Formally:
x ∈ A ∩ B if and only if
* x ∈ A and
* x ∈ B.
For example:
* The intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}.
* The number 9 is not in the intersection of the set of prime numbers {2, 3, 5, 7, 11, …} and the set of odd numbers {1, 3, 5, 7, 9, 11, …}.
If the intersection of two sets A and B is empty, i.e., they have no elements in common, then A and B are said to be disjoint, denoted: A ∩ B = ∅.
For example the sets {1, 2} and {3, 4} are disjoint, written as
{1, 2} ∩ {3, 4} = ∅.
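The examples above can be reproduced directly with Python's built-in set type:

print({1, 2, 3} & {2, 3, 4})   # {2, 3}
print({1, 2} & {3, 4})         # set(), i.e. the empty set: the sets are disjoint
primes = {2, 3, 5, 7, 11}
odds = {1, 3, 5, 7, 9, 11}
print(9 in (primes & odds))    # False: 9 is not in the intersection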
Types of Union and Intersection
Types of Union
* Finite Union: In mathematics, any union carried out on a finite number of sets is a finite union. This does not imply, though, that the union set is a finite set.
* Arbitrary Union: The most general notion is the union of an arbitrary collection of sets. If M is a set whose elements are themselves sets, then x is an element of the union of M, if and only if, for at least one element A of M, x is an element of A. In symbols:
(x ∈ ∪M) ↔ (∃A ∈ M, x ∈ A)
Types of Intersection
* Arbitrary Intersection: The most general notion is the intersection of an arbitrary nonempty collection of sets. If M is a nonempty set whose elements are themselves sets, then x is an element of the intersection of M if and only if, for every element A of M, x is an element of A. In symbols:
(x ∈ ∩M) ↔ (∀A ∈ M, x ∈ A)
* Nullary Intersection: The intersection of the collection M is defined as the set
∩M = {x : ∀A ∈ M, x ∈ A}
If M is empty there are no sets A in M, so the question is: which x's satisfy the stated condition? The answer is every possible x. When M is empty, the condition given above is an example of a vacuous truth. So the intersection of the empty family should be the universal set, which according to standard (ZFC) set theory does not exist.
A partial fix for this problem is to restrict our attention to subsets of a fixed set U called the universe. In this case the intersection of a family of subsets of U can be defined as
∩M = {x ∈ U : ∀A ∈ M, x ∈ A}
Now if M is empty there is no problem. The intersection is just the entire universe U, which is a well-defined set by assumption.
Solved Examples of Set Algebra
Solved Example1: If A and B are two sets, then A ∩ (A ∪ B) equals
(A) A (B) B
(C) Φ (D) none of these
Solution: A ∩ (A ∪ B) = A
Hence (A) is the correct answer.
Solved Example2: The set (A ∪ B ∪ C) ∩ (A ∩ B' ∩ C')' ∩ C' is equal to
(A) B ∩ C' (B) A ∩ C
(C) B' ∩ C' (D) None of these
Solution: (A ∪ B ∪ C) ∩ (A ∩ B' ∩ C')' ∩ C'
= (A ∪ B ∪ C) ∩ (A' ∪ B ∪ C) ∩ C'
= [(A ∩ A') ∪ (B ∪ C)] ∩ C'
= (Φ∪ B ∪ C) ∩ C' = (B ∪ C) ∩ C'
= (B ∩ C') ∪ (C ∩ C') = (B ∩ C') ∪ Φ = B ∩ C'.
Hence (A) is the correct answer.
Solved Example3: If A = { 1, 3, 5, 7, 9, 11, 13, 15, 17}, B = { 2, 4, …, 18} and N is the universal set, then A'∪ ((A∪ B ) ∩ B') is
(A) A (B) N
(C) B (D) None of these
Solution: We have (A ∪ B) ∩ B' = A, since A and B are disjoint. Therefore
A' ∪ ((A ∪ B) ∩ B') = A' ∪ A = N.
Hence (B) is the correct answer.
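Solved Example 3 can also be checked numerically. Since the universal set N is infinite, the sketch below truncates it to {1, ..., 18} (an assumption for the demonstration; it contains all elements of A and B):

N = set(range(1, 19))
A = {1, 3, 5, 7, 9, 11, 13, 15, 17}
B = set(range(2, 19, 2))   # {2, 4, ..., 18}

result = (N - A) | ((A | B) & (N - B))   # A' ∪ ((A ∪ B) ∩ B')
print(result == N)   # True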
Help in Algebra of Sets Assignments
Transtutors is the perfect platform to get answers to all your doubts regarding algebra of sets, De Morgan’s law, idempotent law, identity law, commutative, associative & distributive law, union and
intersection with solved examples. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers you need which will be timely
and cost effective. Our tutors are available round the clock to help you with your queries on math.
Algebra of Sets – Online Tutors
Transtutors has a vast panel of experienced math tutors who specialize in the concerned topic namely; algebra of sets and can explain the different concepts effectively. You can also interact
directly with our math tutors on a one to one session and get answers to all your problems at school, college or university level. Our tutors will make sure that you achieve the highest grades for
your math assignments. We will make sure that you get the best help possible for exam related questions.
Mathematics and Web Design: A Close Relationship - Tuts+ Web Design Article
Math is everywhere, even where you wouldn’t expect it. You can find mathematical ratios and constants in architecture, but also in the instruments we use to make music. You can find math in certain
games we play, and therefore it should not surprise you that mathematics plays an important role in web design too. But what is this role? And how can we use these ratios, constants, and theories to
make our web designs more beautiful?
Math is Everywhere
Walt Disney once made a film about Donald Duck in Mathmagicland. In this video – available on YouTube – they introduce children to mathematics and what it’s used for. It shows that a mathematical
ratio is used to define the notes on our instruments, and that a mathematical rectangle can be found in both ancient and modern architecture. Also, we can find this exact same rectangle in some
Renaissance art by, for example, the famous Leonardo Da Vinci.
The general lesson is simple: you can use some basic mathematical principles to design order and beauty in your own creations.
A Little History
In ancient Greece there was an elite group of mathematicians who called themselves the Pythagoreans. The Pythagoreans had the pentagram as their emblem. They chose this shape because of its
mathematical perfection: the linear shape of the pentagram contains the golden ratio three times already! There are also tons of golden rectangles hidden inside the shape – the same golden rectangles that are present in the Mona Lisa.
Rabbit Breeding
A while after that, in the 12th and 13th century, lived a talented Italian mathematician. His name was Leonardo Pisano Bigollo, although you might know him better as Fibonacci. For his book Liber Abaci, he observed the natural breeding of rabbits. In this ideal world of his, where no rabbit would ever die and every individual rabbit would start reproducing as soon as possible, he found that this
cycle contained a special sequence of numbers. This sequence later became known as the Fibonacci Numbers.
The thing that’s so special about this sequence is that if you divide a chosen number with the number prior in the sequence, you will (approximately) get the same number, every time. This number is
approximately 1.618, better known as Phi. The further you go in the sequence, the closer the result of the division comes to Phi. Fibonacci also found out that this sequence is not only found in the
breeding of rabbits, but also in other things in nature, such as the arrangement of seeds in a sunflower.
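You can watch this convergence yourself with a few lines of Python (the ten steps shown are arbitrary):

a, b = 1, 1
for _ in range(10):
    a, b = b, a + b
    print(b / a)   # 2.0, 1.5, 1.666..., 1.6, ..., approaching 1.618...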
The Golden Ratio
As you might already know, Phi is also a very prominent constant in design; This is because a ratio of 1 to 1.618 is better known as the Golden Ratio – often referred to as the Golden Section, Golden
Mean or the Divine Ratio. If you create a rectangle according to this ratio, you get a shape known as the Golden Rectangle.
The Golden Rectangle, shown here, shows how you can divide it upon itself infinitely (and perfectly).
The Golden Ratio and the Golden Rectangle is used in many forms of art and design. In the Renaissance period, many artists proportioned their artworks according to this ratio and rectangle. In
ancient Greece, architects used this rectangle in the design of the buildings; the Parthenon is a good example of this. Even in modern architecture, the golden rectangle has a strong presence.
But what is it that makes this ratio so special? Because this number, Phi, finds its origins in nature, we humans automatically find ourselves comfortable with this ratio. Because we are so
acquainted with this ratio, it naturally triggers a feeling of balance and harmony. For that reason, using this ratio can help you achieve a balanced composition of your elements.
Examples of The Golden Ratio in Web Design
Before we even start thinking about applying the ratio to our designs, we must first look at a few examples that use the ratio already.
One good example is this website, as its design houses multiple cases of the ratio. In the image below, you can see a screenshot of this website. As you can see, I’ve used two colours to mark the
different columns. The width of the main column with the blog posts in it is more or less 1.618 times as big as the sidebar with the ads. A quick calculation on the bottom proves this.
But not only does this website use the golden ratio on its total width, it’s also applied to some of the smaller parts of the website.
Let’s take a quick look the main column, and then the content inside. As you can see below, the containing element is about 1.618 times as big as the content that’s to be read inside this element.
Another good example is the famous Smashing Magazine blog. Its main column has a total width of just over 700 pixels. When you divide this number by 1.618, about 435 is the result: The exact width of
the sidebar.
How to Apply this Ratio to Your Next Design
The canvas of a painting and the width of a building all have a fixed width, the monitors that display our work vary in size. Therefore – and especially in fluid designs – there’s an extra variable
that should be taken in consideration when calculating the golden ratio.
However, there is an easy way to overcome this problem. When you want to calculate the width of an element according to the ratio, you just need to take the width of its parent-element, so the
containing element. In our first and last example, this was the complete width of a website. In the second example, this was just the width of a smaller part: their main column.
Anyhow, when you’ve determined the width of the containing element, you should now divide this value by Phi. The result will give you the width of the main element. Now, all that’s left to do is to
subtract the result from the main element from your original width, this will give you the width of the secondary column.
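The calculation reads naturally as a small function; the 960-pixel container below is just an assumed example width:

PHI = (1 + 5 ** 0.5) / 2   # ≈ 1.618

def golden_columns(container_width):
    main = round(container_width / PHI)        # width of the main element
    return main, container_width - main        # remainder is the secondary column

print(golden_columns(960))   # (593, 367)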
If you have any trouble remembering Phi, or when you’re just lazy to fill in some numbers on a calculator, I suggest using Phiculator. This little application requires you to fill in a value (the
width of the containing element, that is) and it automatically calculates the corresponding width. You can even ask it to calculate with integers, so you don't have to worry about decimal numbers.
The Rule of Thirds
Another famous mathematical division is the rule of thirds. This rule can help you create a balanced composition by dividing your canvas in nine equal parts. The rule is a little similar to the
Golden Ratio, as a division at 0.62 is closely similar to one at 0.67 – which equals two-thirds.
A form of art where the rule of thirds is used very often is in Photography as it’s an easy and quick guide to get you a good composition. This is why you’re likely to find a function on your digital
camera that divides its LCD screen in nine parts, using the rule of thirds. Even some dSLR’s have this function, as they plant a few light dots in the viewfinder when focusing.
How Does it Work?
Using the rule of thirds, you divide your canvas horizontally and vertically by three. This division gives you nine equal rectangles, four lines and four intersection points. You can create
an interesting and balanced composition by using these lines and points of intersection.
The key in a good composition, obviously, lies in positioning your elements correctly. When using the rule of thirds, there are two things you can position with.
The first are the lines used to divide the canvas. In photography, things with a long and straight shape are often aligned to these lines. In design, things with this same shape – such as a sidebar –
can be aligned to these lines as well.
The second things to align to are the points where your dividing lines intersect. You will need to put one or two objects on these points, because too many will still kill your composition.
A good example of this I found on Photography-website Flickr. As you can see below, the photographer aligned the row of buildings with the top line, and on the upper-right intersection point, you
will find a house that stands out the most because of its colour. Because it’s a focal point by itself already, aligning it with the intersection point adds to a good composition and a balanced feel.
We’ve seen the rule of thirds applied to photography, but how about applying it to website design, can we find examples of that?
The Rule of Thirds in Web Design
A good example of the rule being applied to web design is, again, this website. I’ve prepared an image that you can see below. It shows that, on the right, the sidebar is aligned very closely to the
vertical line on the right. On the left, you can see that the articles are positioned on the intersecting points.
The two alignments you see above create a feeling of harmony in the layout of this website.
Applying the Rule of Thirds to Your Next Design
So how exactly can the rule of thirds be applied to your site’s design? Again, the varying width of our ‘canvas’ can bring some trouble. When we use the same technique as we did with the golden
ratio, though, we’ll be fine.
To apply the division, you must take the full width of your containing element and divide it by three. You then have to draw a line – or a guide, whatever suits you best – two times on the value you
get as a result (multiply them by two to get the position of the second line).
The second part of the division can give you some problems, though. The height of our 'canvas' is also variable, therefore dividing this variable by three will give us some trouble. The way I work around this is to calculate the 'height' of the division with a 16:9 (widescreen) ratio, or just use the height of the containing element. Divide the width of the containing element by 16 and
multiply that number with 9 and you’ve got yourself a height. You can now divide this number by 3 again, and draw the lines/guides.
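Put together, the guide positions can be computed like this (the 960-pixel width and the 16:9 fallback height are assumptions, as described above):

def thirds_guides(width):
    height = width // 16 * 9                     # 16:9 stand-in for a variable height
    vertical = (width // 3, 2 * width // 3)      # x-positions of the two vertical lines
    horizontal = (height // 3, 2 * height // 3)  # y-positions of the two horizontal lines
    return vertical, horizontal

print(thirds_guides(960))   # ((320, 640), (180, 360))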
When you’ve got the guides set up, you can now position your elements according to these guides. Align your elements with the lines, and you must put some elements of interest and contrast on the
points of intersection.
Grid Systems
You might not think of grids as being mathematical, but they are. You are dividing your canvas into different columns and gutters, and this division by two, three – and I've seen up to sixteen – is really math at work.
A lot of people argue that grid systems limit your creativity, because you're limiting your freedom with a grid system. I don't think this is true, as a book called Vormator taught me that limitations actually boost your creativity. This is because you will think of solutions with these limits in mind, whereas these ideas would never have been thought of if you didn't have these restrictions.
The reason grid systems ‘work’ is that they can guide you in sizing, positioning and aligning your website design. They can help you in organizing and removing clutter from content. But most
importantly, they’re easy to use.
Another good reason to use grids is that rules are meant to be broken, aren’t they? If you ‘break’ your grid once in a while, it’s not bad. On the contrary! ‘Breaking’ your grid can create special
interest for a specific element on the page, because it’s in contrast with the rest. This can help you achieve certain goals, like a call to action that stands out more because of this.
How to Create a Good Grid
There is no real set way to construct a good grid system, as grids revolve around content and no two sets of content are the same. But for the sake of illustration, I'll demonstrate a simple process for constructing a 6-column grid in a 960-pixel wide environment.
First, we will divide our total canvas width by 6 so we have the total width of each column. The result of this division is 160 pixels, as you can see below in the image.
Secondly, we’ll create an image of one column, we’ll duplicate this later. This way it’s easier to create our complete grid afterwards, as we don’t have to repeat this step for each column.
We’ll decide on the size of our gutter, I think 20 pixels will suffice. The gutter should be added to both sides of the column, so we must divide it by two. If we don’t do this, our gutter will be 40
pixels wide. As you can see in the image below, we’ve added a 10-pixel gutter on each side.
Now we can duplicate this image until we reach the total of 960 pixels again, and we’ve created ourselves a (basic) grid.
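The same numbers, checked in code (a minimal sketch of the worked example above):

```python
# Dimensions for the 6-column grid in a 960-pixel canvas.
canvas, columns, gutter = 960, 6, 20

column_width = canvas // columns         # 160 px per column
half_gutter = gutter // 2                # 10 px on each side of a column
content_width = column_width - gutter    # 140 px of usable content per column

assert column_width * columns == canvas  # six columns fill the canvas exactly
print(column_width, half_gutter, content_width)  # 160 10 140
```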
I'm Lazy!
Don’t worry; even if you’re lazy you won’t have to live without grids. There are lots of nice – and free – grid systems up for grabs on the internet. My favourite, and I’m sure you’ve heard about it
before, is the famous 960.gs grid system, which has a CSS-framework and a PSD-file with all the guides installed.
I hope I’ve shown you that mathematics can be beautiful when applied to design, and that I’ve given you enough techniques to use in your next design. Be warned though, lots of other things are
required to make a design a success, and therefore using these tricks is no guarantee for a good design, but they can sure help you and guide you in the process of making one.
Thanks for reading!
|
{"url":"http://webdesign.tutsplus.com/articles/mathematics-and-web-design-a-close-relationship--webdesign-1053","timestamp":"2014-04-20T23:27:44Z","content_type":null,"content_length":"87639","record_id":"<urn:uuid:096a598b-f1cd-44fa-8962-0ce36fc608f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Do You Solve a Word Problem Using a Proportion?
The idea of proportions is that a ratio can be written in many ways and still be equal to the same value. That's why proportions are actually equations with equal ratios. This is a bit of a tricky
definition, so make sure to watch the tutorial!
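As a generic worked example of the idea (not the specific problem from the tutorial): write the two equal ratios, cross-multiply, and solve:
$$\frac{x}{12} = \frac{3}{4} \;\Rightarrow\; 4x = 3 \cdot 12 = 36 \;\Rightarrow\; x = 9.$$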
|
{"url":"http://www.virtualnerd.com/pre-algebra/ratios-proportions/scale-models/scale-model-examples/proportion-word-problem-example","timestamp":"2014-04-21T02:10:37Z","content_type":null,"content_length":"27031","record_id":"<urn:uuid:89736d5b-e91b-4d35-9125-2bf6295178a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Below is a sample text output page for high-pass filters. At the top of the presentation is a list showing the input specifications.
Listed first is the corner frequency. Next would be the stop frequency and required attenuation for the filter unless a manual entry for the order was done as in this case. The input and output
termination resistances, the filter order and the geometric configuration finish the list.
Double check that all specifications are correct. The software used the data displayed in the first half of the output.
Below that is a sample output chart showing inductances in Henries and capacitances in Farads. Since high-pass filters may be either a Tee - Series or Pi - Shunt configuration, be alert to whether
the element value is assigned to an inductor or a capacitor and where it is positioned in the circuit.
In the case of a high-pass Tee - Series filter, the element at X[1] is a capacitor, and 7.15236E-09 would be a 7.152 nanofarad capacitor. See Help - Geometry in the main menu for sample diagrams.
Read the 'Geometry' section in the helps to see the exact location of each component.
Finally, when entering a stop frequency and an attenuation and letting the computer calculate the order, the attenuation will be equal to or greater than the attenuation requested at the stop frequency specified. That's because the order calculation imposes a stepwise function on a continuous curve.
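To illustrate that stepwise effect, here is a small sketch of the standard Butterworth order calculation (my own illustration with made-up numbers, not output from the software described above):

```python
import math

def butterworth_hp_order(fc, fs, atten_db):
    # fc: -3 dB corner frequency; fs: stop frequency (fs < fc for a high-pass);
    # atten_db: required attenuation at fs. Returns the smallest integer order.
    n = math.log10(10 ** (atten_db / 10) - 1) / (2 * math.log10(fc / fs))
    return math.ceil(n)

def attenuation_at(fc, f, order):
    # Butterworth high-pass magnitude response, in dB.
    return 10 * math.log10(1 + (fc / f) ** (2 * order))

fc, fs, req = 1000.0, 500.0, 40.0
n = butterworth_hp_order(fc, fs, req)  # 6.64 rounds up to 7
print(n, attenuation_at(fc, fs, n))    # 7, about 42.1 dB -- more than the 40 dB requested
```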
|
{"url":"http://www.qsl.net/n9zia/butterworth/hpout.htm","timestamp":"2014-04-19T07:27:43Z","content_type":null,"content_length":"3165","record_id":"<urn:uuid:2cd46925-2e0e-4811-91bc-df82977cba13>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
convert 0.22 acres to square meter
You asked:
convert 0.22 acres to square meter
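For reference, the conversion itself is simple arithmetic: one international acre is defined as exactly 4046.8564224 square meters, so 0.22 acres × 4046.8564224 m²/acre ≈ 890.31 m².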
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
|
{"url":"http://www.evi.com/q/convert_0.22_acres_to_square_meter","timestamp":"2014-04-19T22:41:29Z","content_type":null,"content_length":"53009","record_id":"<urn:uuid:126d0070-3fb8-4c21-aebb-5135a27c225a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Permutations help me please
July 12th 2008, 12:40 AM #1
Permutations help me please
Hi i need urgent help please , i don't know where this topic belongs on this forum since it seems kinda miscellaneous. Sorry to ask so much for a 1st post.
Q1) A car registration consists of 3 letters followed by a number between 000 and 999
a) how many car numbers are possible?
b) How many possible car number if the 3 letters are the same?
Q2) If repetitions aren't allowed, how many even numbers less than 400 can be made using the digits 1,2,3,4,5,6,7?
Q3) 5 travelers arrive in a town where there are 5 hotels
a) how many different arrangements if there are no restrictions on where the travelers wish to stay?
b) Suppose 2 travelers are husband and wife and must go to the same hotel. how many different accommodation arrangements are there if the other 3 can go to any of the other hotels?
Please give me some tips for these questions, i dont have a clue on how to do these, thanks alot.
this is actually in probability/statistics forum.. anyway, you don't have to post it there since any topics can be posted here.
_ _ _ - _ _ _
how many letters can be placed in each of the first three blanks? there are 26 letters right? therefore, you can choose any of the 26 for the first blank, 26 for the second, and 26 for the third.
for the next 3 blanks, how many choices do you have? 10 each (0,1,...,9)
therefore, the number of possible car numbers is the product..
26 x 26 x 26 x 10 x 10 x 10
b) if a letter has been chosen for one of the first three blanks (whether the first, second or third), you would have no choice for the other 2 blanks of the first three.. so you would have
26 x 1 x 1 x 10 x 10 x 10
case1: all five stay together in one hotel. how many ways?
case2: 4 stay together on 1 hotel, the other 1 is isolated. how many ways?
case3: 3 stay together on 1 hotel, 1 stays alone and also the other 1 stays alone.
case4: 3 stay together on 1 hotel, 2 stays together on other hotel.
case5: 2 stay together on 1 hotel, 3 distributes themselves to other hotels.
case6: 2 stay together on 1 hotel, another 2 stay together on another hotel and 1 isolates.
case4*: 2 stay together on 1 hotel, the other 3 stay on another hotel. (but this is the same as the case4. so this is not included anymore.)
case7: all 5 distributes themselves without accompanying others. (each 1 stays alone on one hotel)
case2*: 1 stays alone, 4 stay together. (same as case2.)
case3*: 1 stays alone, another 1 stays alone, 3 stay together. (same as case3)
case6*: 1 stay alone, 2 stay together, another 2 stay together. (same as case6)
in fact, other cases are just repetitions of case 1 - 7.
solve, how many ways in each cases, then add them.
among the cases i have listed, which will you add up?
Thanks alot!
Wow, thank you so much for your time and support, very simplified and fast, i understand from this even better than my teachers explanation
I understand the question on the car number, but im still a little confused on Q2) and Q3)
Ok for this one im srry i mistyped 400 but i was meant to write 4000
so heres the question again
How many numbers less than 4000 can be made using the digits 1,2,3,4,5,6,7 which are even numbers?
3 choices for the thousands (need number less than 4 since it must be <4000)
6 choices for the hundreds
5 choices for the tens
and finally 3 choices for the ones (since it must have an even digit to be an even number???)
oh and also the number 4000
so i would have
3x6x5x3+1 permutations in total which is 271 arrangements.
Again thanks for your support on this question, however im still not sure if its right since these questions are tricky, can you check the answer for me if you can thanks alot.
This question i am still very, very confused, heres what i got:
case1: all five stay together in one hotel. how many ways? 5!??
case2: 4 stay together on 1 hotel, the other 1 is isolated. how many ways? 2! x 4!
case3: 3 stay together on 1 hotel, 1 stays alone and also the other 1 stays alone. 3! x 3! x 1 x 1
case4: 3 stay together on 1 hotel, 2 stays together on other hotel. 2! x 3! x 2!
case5: 2 stay together on 1 hotel, 3 distributes themselves to other hotels.
4! x 2! x 1 x 1 x 1
case6: 2 stay together on 1 hotel, another 2 stay together on another hotel and 1 isolates.
3! x 2! x 2! x 1
case7: all 5 distributes themselves without accompanying others. (each 1 stays alone on one hotel) 5!
when i add all this up i get 408 but the answer says 3125, where did i go wrong?
Wow, thank you so much for your time and support, very simplified and fast, i understand from this even better than my teachers explanation
I understand the question on the car number, but im still a little confused on Q2) and Q3)
Ok for this one im srry i mistyped 400 but i was meant to write 4000
so heres the question again
How many numbers less than 4000 can be made using the digits 1,2,3,4,5,6,7 which are even numbers?
LOOK AT THE ORDER OF CHOOSING I WILL DO..
a.. 3 choices for the thousands (need number less than 4 since it must be <4000)
b.. and finally 3 choices for the ones (since it must have an even digit to be an even number???)
(however, a and b are dependent to each other.. since in a, if 2 were chosen, then there are only 2 choices in b)
5 choices for the hundreds
4 choices for the tens
oh and also the number 4000
so i would have
3x6x5x3+1 permutations in total which is 271 arrangements.
Again thanks for your support on this question, however im still not sure if its right since these questions are tricky, can you check the answer for me if you can thanks alot.
that is, you have to choose for the ones digit first before the tens and the hundreds..
Last edited by kalagota; July 12th 2008 at 08:08 PM.
for case1: consider the group as 1 entity? as one, how many choices does it have? there are only 5 hotels, and so, it has only 5 choices..
case2: there are 2 entities right? the first entity can choose among the 5 hotels, and the other entity can have only 4 choices since the first has already chosen 1. so there are 5 x 4..
do the rest with similar arguments..
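if you want to sanity-check the counting, here is a quick brute force (my reading of the questions; the thread's answer of 3125 for Q3a is just 5^5):

```python
from itertools import permutations

digits = [1, 2, 3, 4, 5, 6, 7]

# Q2: four-digit even numbers below 4000, no repeated digits.
q2 = sum(1 for p in permutations(digits, 4) if p[0] < 4 and p[3] % 2 == 0)
print(q2)            # 160

# Q3a: 5 travelers each pick one of 5 hotels independently.
print(5 ** 5)        # 3125

# Q3b: the couple picks one hotel together, the other 3 pick freely.
# (If "the other hotels" excludes the couple's, it would be 5 * 4**3 = 320.)
print(5 * 5 ** 3)    # 625
```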
|
{"url":"http://mathhelpforum.com/statistics/43514-permutations-help-me-please.html","timestamp":"2014-04-16T04:32:29Z","content_type":null,"content_length":"60308","record_id":"<urn:uuid:caba69e9-efdf-4588-b84b-7d09a3edb331>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by gina on Friday, October 17, 2008 at 2:24pm.
There were 1320 pumpkins in a pumpkin patch,
but it was difficult for farmer Joe to find the
perfect pumpkin.
• Every third pumpkin was too small.
• Every fourth pumpkin was too green.
• Every fifth pumpkin had a broken stem.
• Every sixth pumpkin had the wrong shape.
How many perfect pumpkins did farmer Joe find
in the pumpkin patch?
we thought the answer was 440. 1320/6=220. Then 1320-220=1100. then 1100/5=220. then subtract 1100-220=880 and so on... does that make sense?
• math logic - Ms. Sue, Friday, October 17, 2008 at 2:47pm
Please study the answers you were given earlier.
□ math logic - gina, Friday, October 17, 2008 at 3:17pm
440 is not the right answer... it's 528 with the venn diagram,
but i don't know how the diagram is supposed to look
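A quick enumeration settles it (reading "every third pumpkin" as the pumpkins in positions divisible by 3, and so on; every sixth is already every third, so that condition adds nothing):

```python
# Count positions 1..1320 not divisible by 3, 4, 5, or 6.
perfect = sum(1 for k in range(1, 1321) if all(k % d for d in (3, 4, 5, 6)))
print(perfect)  # 528
```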
• math logic - Jean, Wednesday, October 19, 2011 at 12:07pm
Of 500 employees, 200 participate in a company's profit sharing plan(P), 400 have major insurance coverage(M), & 150 employees participate in both programs. Construct a Venn diagram and find out
the probability that a randomly selected employees a) will be a participant in at least one of two programs b)Will not be a participant in either program
|
{"url":"http://www.jiskha.com/display.cgi?id=1224267855","timestamp":"2014-04-20T09:24:05Z","content_type":null,"content_length":"9834","record_id":"<urn:uuid:3c62f942-64a4-4285-835f-be503b7c2e2b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Induction with inequality
April 30th 2008, 04:39 PM
Induction with inequality
How do i use induction (effectively) to prove that $n! < n^n$ for $n \ge 2$?
let p(n): $n! < n^n$ for $n \ge 2$.
we see that p(2) holds true.
now i prove it holds for p(n+1) right?
$(n+1)! = (n+1)\,n! < (n+1)\,n^n$ by the induction hypothesis.
i think this is right.. can someone show me how to finish this correctly?
April 30th 2008, 05:03 PM
How do i use induction (effectively) to prove that $n! < n^n$ for $n \ge 2$?
let p(n): $n! < n^n$ for $n \ge 2$.
we see that p(2) holds true.
now i prove it holds for p(n+1) right?
$(n+1)! = (n+1)\,n! < (n+1)\,n^n$ by the induction hypothesis.
i think this is right.. can someone show me how to finish this correctly?
assume $k! < k^k$
$(k+1)^{k+1}=(k+1)^k(k+1) > k^k(k+1)>(k!)(k+1)=(k+1)!$
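Reading the same chain forwards, the step the original post was missing is just $n^n < (n+1)^n$:

$(n+1)! = (n+1)\,n! < (n+1)\,n^n < (n+1)\,(n+1)^n = (n+1)^{n+1}$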
April 30th 2008, 08:42 PM
thanks empty set.
|
{"url":"http://mathhelpforum.com/discrete-math/36727-induction-inequality-print.html","timestamp":"2014-04-16T19:20:27Z","content_type":null,"content_length":"5906","record_id":"<urn:uuid:1d53783e-c547-443e-afcc-ad5e34de916c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Forest Hills, NY Precalculus Tutor
Find a Forest Hills, NY Precalculus Tutor
...I have a bachelor's degree in physics and experience tutoring pre-algebra. I have tutored pre-calculus both privately and for the Princeton Review.
20 Subjects: including precalculus, English, algebra 2, grammar
...I am also an iOS developer; the first application I brought to market launched in spring 2013 (title: Side-by-Side: Dual Column Task Manager). All of my work is done on Macintosh computers. I am intimately familiar with the Macintosh product line and am aware of the pros and cons of their dif...
32 Subjects: including precalculus, reading, calculus, physics
...I'm currently working on my master's in epidemiology, working with biostatistics and health policy. I also have 3 years of clinical research experience, having been published in journals such as The Oncologist and Cancer. I'm a current fellow with the New York Academy of Sciences teaching at-risk youth.
12 Subjects: including precalculus, chemistry, calculus, physics
...As a NYS Certified Math teacher I am confident that I can help students succeed in the subject of mathematics. I believe that all students have the ability to learn math! Please contact me if
you are interested!
10 Subjects: including precalculus, calculus, algebra 1, GED
...Stellar students benefit when tutors challenge them to move towards higher level applications of the material. Struggling students need someone who can see beyond their mistake and into their
thought process. I strive to convince students they are developing valuable skills for their lives by regularly connecting the material to their everyday lives.
18 Subjects: including precalculus, calculus, statistics, geometry
Related Forest Hills, NY Tutors
Forest Hills, NY Accounting Tutors
Forest Hills, NY ACT Tutors
Forest Hills, NY Algebra Tutors
Forest Hills, NY Algebra 2 Tutors
Forest Hills, NY Calculus Tutors
Forest Hills, NY Geometry Tutors
Forest Hills, NY Math Tutors
Forest Hills, NY Prealgebra Tutors
Forest Hills, NY Precalculus Tutors
Forest Hills, NY SAT Tutors
Forest Hills, NY SAT Math Tutors
Forest Hills, NY Science Tutors
Forest Hills, NY Statistics Tutors
Forest Hills, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/forest_hills_ny_precalculus_tutors.php","timestamp":"2014-04-20T09:05:06Z","content_type":null,"content_length":"24295","record_id":"<urn:uuid:c89efda4-ca10-4aee-ac4c-797f419b7b44>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
April 4th 2013, 08:25 PM
Hi guys, could you help me solve this simple problem? I have a little doubt about the intervals that define the function. Thanks.
Find the Laplace transform of the given function
$f(t) = \begin{cases} 0, & t < 3 \\ t, & t \ge 3 \end{cases}$
April 4th 2013, 09:21 PM
Re: laplace
Hey frankitm.
Hint: Look at the Heaviside function:
Heaviside step function - Wikipedia, the free encyclopedia
April 5th 2013, 03:51 AM
Re: laplace
Indeed, your function can be written as $f(t) = t\,\theta(t-3)$,
where $\theta(x)$ is the Heaviside function. The Laplace transform of your function is
$F(s) = \int_0^\infty f(t)\,e^{-st}\,dt = \int_3^\infty t\,e^{-st}\,dt.$
You can do that integral by parts.
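Carrying out that integration by parts (with $u = t$, $dv = e^{-st}\,dt$, and $s > 0$):

$\int_3^\infty t\,e^{-st}\,dt = \left[-\frac{t\,e^{-st}}{s}\right]_3^\infty + \frac{1}{s}\int_3^\infty e^{-st}\,dt = \frac{3e^{-3s}}{s} + \frac{e^{-3s}}{s^2} = e^{-3s}\left(\frac{3}{s} + \frac{1}{s^2}\right)$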
|
{"url":"http://mathhelpforum.com/differential-equations/216688-laplace-print.html","timestamp":"2014-04-18T11:01:06Z","content_type":null,"content_length":"5515","record_id":"<urn:uuid:78152aec-101c-4c57-ad6a-35f6926da35a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Injective, Surjective and Bijective
"Injective, Surjective and Bijective" tell you about how a function behaves.
A function is a way of matching the members of a set "A" to a set "B":
A General Function points from each member of "A" to a member of "B".
To be a function you never have one "A" pointing to more than one "B", so one-to-many is not OK in a function (as you would have something like "f(x) = 7 or 9")
But more than one "A" can point to the same "B" (many-to-one is OK)
Injective means that every member of "A" has its own unique matching member in "B".
As it is also a function one-to-many is not OK
And you won't get two "A"s pointing to the same "B", so many-to-one is NOT OK.
But you can have a "B" without a matching "A"
Injective functions can be reversed!
If "A" goes to a unique "B" then given that "B" value you can go back again to "A" (this would not work if two or more "A"s pointed to one "B" like in the "General Function")
Read Inverse Functions for more.
Injective is also called "One-to-One"
Surjective means that every "B" has at least one matching "A" (maybe more than one).
There won't be a "B" left out.
Bijective means both Injective and Surjective together.
So there is a perfect "one-to-one correspondence" between the members of the sets.
(But don't get that confused with the term "One-to-One" used to mean injective).
On The Graph
Let me show you on a graph what a "General Function" and an "Injective Function" look like:
General Function "Injective" (one-to-one)
In fact you can do a "Horizontal Line Test":
To be Injective, a Horizontal Line should never intersect the curve at 2 or more points.
(Note: Strictly Increasing (and Strictly Decreasing) functions are Injective, you might like to read about them for more details)
Formal Definitions
OK, stand by for some details about all this:
A function f is injective if and only if whenever f(x) = f(y), x = y.
Example: f(x) = x+5 from the set of real numbers to the set of real numbers is an injective function.
This function can be easily reversed. For example:
Given 8 we can go back to 3
Example: f(x) = x^2 from the set of real numbers to the set of real numbers is not an injective function, because of this kind of thing:
This is against the definition f(x) = f(y), x = y, because f(2) = f(-2) but 2 ≠ -2
In other words there are two values of "A" that point to one "B", and this function could not be reversed (given the value "4" ... what produced it?)
BUT if we made it from the set of natural numbers to the set of natural numbers, then it is injective, because:
• f(2) = 4
• there is no f(-2), because -2 is not a natural number
Surjective (Also Called "Onto")
A function f (from set A to B) is surjective if and only if, for every y in B, there is at least one x in A such that f(x) = y. In other words, f is surjective if and only if f(A) = B.
So, every element of the range corresponds to at least one member of the domain.
Example: The function f(x) = 2x from the set of natural numbers to the set of even numbers is a surjective function.
However, f(x) = 2x from the set of natural numbers to the set of natural numbers is not surjective, because, for example, no natural number is mapped to 3 by this function.
A function f (from set A to B) is bijective if, for every y in B, there is exactly one x in A such that f(x) = y
Alternatively, f is bijective if it is a one-to-one correspondence between those sets, in other words both injective and surjective.
Example: The function f(x) = x^2 from the set of positive real numbers to the set of positive real numbers is injective and surjective. Thus it is also bijective.
But it is not bijective from the set of all real numbers, since then f(2) = f(-2) and injectivity fails.
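On finite sets, the definitions are easy to check mechanically. A minimal sketch (the helper names here are my own, not standard library functions):

```python
def is_injective(f, A):
    images = [f(x) for x in A]
    return len(images) == len(set(images))     # no two inputs share an output

def is_surjective(f, A, B):
    return {f(x) for x in A} == set(B)         # every element of B is hit

A = range(-3, 4)                               # small stand-in for the reals
print(is_injective(lambda x: x + 5, A))        # True
print(is_injective(lambda x: x * x, A))        # False: f(2) == f(-2)
print(is_surjective(lambda x: 2 * x, range(5), range(0, 10, 2)))  # True
print(is_surjective(lambda x: 2 * x, range(5), range(5)))  # False: nothing maps to 3
```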
|
{"url":"http://www.mathsisfun.com/sets/injective-surjective-bijective.html","timestamp":"2014-04-19T01:48:37Z","content_type":null,"content_length":"13289","record_id":"<urn:uuid:6a98808a-0d46-4dc2-b37e-d69887c14178>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lower bound on $L^2$ norm of mean curvature in general dimensions
Suppose $\Sigma\subset \mathbb{R}^{n+1}$ is a closed embedded hypersurface. We know that when $n=1$
$$ \int_{\Sigma} |H|^2 \geq \frac{4 \pi^2}{|\Sigma|} $$ by Gauss-Bonnet and that this is saturated on the round circle -- here $|\Sigma|$ is the length of $\Sigma$ and $H$ the mean curvature.
Likewise, if $n=2$ we have $$ \int_{\Sigma} |H|^2\geq 16\pi $$ which is also saturated on the round sphere (of course as we now know this can be improved for positive genus surfaces).
I'm wondering to what extent one can get sharp lower bounds in dimensions $n+1>3$. Specifically, something like $$ \int_{\Sigma} |H|^2 \geq C_n |\Sigma|^{(n-2)/n}. $$
Ideally, this bound would be sharp on the round sphere at least amongst convex competitors (I suspect otherwise it wouldn't be true).
Any references would be appreciated.
dg.differential-geometry geometric-analysis reference-request
Isn't it $\ge 4\pi$ for dimension $n=2$? which is realized by round sphere? – J. GE May 15 '13 at 11:27
I think that is just using a different convention for the mean curvature (I use the trace of the second fundamental form, not the average). However, I think I miscomputed... – Rbega May 15 '13 at 12:31
2 The exponent at $|\Sigma|$ should be $(n-2)/n$ for scale invariance. – Sergei Ivanov May 15 '13 at 20:56
I have no idea about the general case, but in the convex case the sphere is indeed optimal. Moreover, the $L^1$ norm of $H$ attains its minimum at the sphere (among the convex surfaces with the same area). To deduce the result for the $L^2$ norm, just apply Cauchy-Schwarz.
Let $A$ be the convex body bounded by $\Sigma$ and $B$ the unit ball in $\mathbb R^{n+1}$. Then the area $|\Sigma|$ is proportional to $V_1(B,A)$ and the integral of the mean curvature is proportional to $V_2(B,A)$, where $V_k(B,A)$ is the mixed volume of $k$ copies of $B$ and $n+1-k$ copies of $A$.
By the Alexandrov-Fenchel inequality, $\log V_k(B,A)$ is a concave function of $k$ ($k\in\{0,1,\dots,n+1\}$). This fact yields a lower bound for $V_2(B,A)$ in terms of $V_1(B,A)=C(n)|\Sigma|$ and $V_{n+1}(B,A)=V(B)$, a constant. Namely
$$ C(n)\int_\Sigma H = V_2(B,A) \ge V(B)^{1/n} V_1(B,A)^{(n-1)/n} = C_1(n)|\Sigma|^{(n-1)/n}. $$
If $A$ is a ball, the inequality turns to equality because so does the Alexandrov-Fenchel inequality.
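For completeness, the Cauchy-Schwarz step mentioned at the start then gives, with $c_n = (C_1(n)/C(n))^2$,
$$\int_\Sigma H^2 \;\ge\; \frac{\left(\int_\Sigma H\right)^2}{|\Sigma|} \;\ge\; c_n\,\frac{|\Sigma|^{2(n-1)/n}}{|\Sigma|} \;=\; c_n\,|\Sigma|^{(n-2)/n}.$$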
|
{"url":"http://mathoverflow.net/questions/130635/lower-bound-on-l2-norm-of-mean-curvature-in-general-dimensions","timestamp":"2014-04-21T08:07:34Z","content_type":null,"content_length":"55084","record_id":"<urn:uuid:498cebf7-4ea8-47ab-bdde-1e60f3ce3be4>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Identifying the Firm-Specific Cost Pass-Through Rate
Orley Ashenfelter, David Ashmore, Jonathan B. Baker & Signe-Mary McKernan1
January 1998
I. Introduction
A merger that permits the combined company to reduce the marginal cost of producing a
product creates an incentive for it to lower price. Accordingly, the rate at which cost changes are
passed through to prices (along with an estimate of the magnitude of cost reductions that would
result from merger) matters to the evaluation of the likely competitive effects of an acquisition.
In this paper, we describe our empirical methodology for estimating the cost pass-through
rate facing an individual firm, and for distinguishing that rate from the rate at which a firm passes
through cost changes common to all firms in an industry. In essence, we regress the price a firm
charges on both its costs and the costs of another firm in the industry. Including the second cost
variable allows us to estimate the impact of costs on prices while holding constant that part of
cost variation due to industry-wide cost shocks.
We apply this methodology to determine the firm-specific pass-through rate for Staples,
an office superstore chain, and find that this firm historically passed-through firm-specific cost
changes at a rate of 15% (i.e. it lowered price on average by 0.15% in response to a 1% decrease
The authors are, respectively, Professor of Economics, Princeton University; Partner,
Ashenfelter & Ashmore; Director of the Bureau of Economics, Federal Trade Commission; and
Economist, Federal Trade Commission. Ashenfelter testified about the results presented in this
paper on behalf of the FTC in the Staples/Office Depot merger litigation. The views expressed
are not necessarily those of the Federal Trade Commission or any individual Commissioner. The
authors are indebted to Charles Thomas.
in marginal cost).2 This result was relied upon by the court in deciding to enjoin preliminarily
the proposed merger of Staples and Office Depot.3
Our primary empirical concern is distinguishing the firm-specific pass-through rate from
the industry-wide pass-through rate. The firm-specific rate relates a change in the price Staples
charges for a product to a change in the marginal cost of that product, holding constant the
marginal cost of rival sellers of office supplies. The industry-wide rate relates a change in
Staples’ price to a change in its marginal cost, given that the identical marginal cost change is
experienced by firms competing with Staples. This distinction is important in merger analysis,
because merger-specific efficiencies4 typically lead to firm-specific cost savings.5
The existing empirical literature on pass-through rates does not make the distinction
between the effects of firm-level and industry-wide shocks on price. The exchange rate pass-
through literature examines the response of local currency import prices to variation in the
We follow the convention in the literature of expressing pass-through rates in
percentage terms, regardless of the functional form in which they are estimated, such as levels
(dP/dC) or logs (d lnP/d lnC).
Federal Trade Commission v. Staples, Inc., 970 F. Supp. 1066, 1090 (D.D.C.
1997)(Hogan, J.). Judge Hogan did not accept the claim of the merging firms that two-thirds of
cost reductions were historically passed-through to consumers.
In evaluating the competitive effect of mergers, the federal antitrust enforcement
agencies consider only those efficiencies likely to be accomplished with the proposed merger and
unlikely to be accomplished otherwise. These are termed merger-specific efficiencies. See
generally Department of Justice and Federal Trade Commission, Horizontal Merger Guidelines
§4 (1997).
We do not consider here whether the merger, by lessening competition, would alter the
firm-specific pass-through rate. However, the FTC staff, in an analysis not presented in court,
found that the estimated firm-specific pass-through rate did not vary much with the number and
identity of the superstore competition facing Staples’ stores.
exchange rate between exporting and importing countries.6 A shock to an exchange rate could be
industry-wide if all sellers are located in the exporting country, firm-specific if there is only one
seller in the exporting country but other sellers located elsewhere, or somewhere in between
industry-wide and firm-specific if there are multiple suppliers both in the exporting country and
outside.7 In general, however, this literature appears to interpret estimated pass-through rates
under the assumption that exchange rate variation is generally close to industry-wide.8 The tax
pass-through literature, which examines the impact of excise tax changes on prices, is also
concerned with an industry-wide pass-through question.9
II. Economics of Cost Pass-Through
Our formal analysis of the economics of cost pass-through begins with a partial
equilibrium model of the determination of the price and quantity for an individual firm, which, in
anticipation of the empirical work, we term Staples. We adopt the following notation, and
represent vectors in bold.
Surveys of this literature appear in Goldberg & Knetter (1997) and Menon (1996).
Another possibility is that an exchange rate shock differentially affects export suppliers
within an industry because suppliers use imported inputs in varying degrees.
Estimated United States pass-through rates of 60% to 70% (typically based on
estimating log-linear pricing equations) appear to be the most common.
Contributions include Barzel (1976), Johnson (1978), Sumner (1981), Sumner & Ward
(1981), Sullivan (1985), Harris (1987) and Sung, Hu & Keeler (1994). The majority of these
studies report pass-through rates slightly in excess of 100% (usually based on estimating linear
pricing equations). Much of the tax pass-through literature is concerned with the significance for
estimated pass-through rates of unobservable variation in product quality, an issue not important
in our application.
PS = Staples’ price
Qi = quantity for firm i
X = exogenous variables affecting demand
C = industry-wide components of marginal cost
Ci = firm-specific components of marginal cost, for all firms i
Ki / C + Ci = marginal cost for firm i
The inverse demand function facing Staples is specified as equation (1).
(1) PS = P(QS , Qi, X), for i … S
Equation (2) sets forth the best-response function for each rival firm i. We allow for the
possibility of different reactions to variation in the firm-specific and industry-wide components
of marginal cost, and treat the cost components as independent of output.
(2) Qi = Qi (Qj, X, C, Ci), for i … S, j … i
The system of equations (2) is solved for the set of reduced form best-response functions (3).
(3) Qi = Qi (QS, X, C, Ci, Cj) , for i … S; j … i, S
The inverse residual demand function (4) facing Staples is derived by substituting the functions
(3) into the inverse demand function (1).10
(4) PS = R(QS , X, C, Ci), for i … S
Staples’ choice of its decision variable (output) is defined by the joint solution of
equation (4), the residual demand function, and equation (5), the first order condition equating
the firm’s marginal revenue and marginal cost. In the notation, Ri represents the derivative of R
The residual demand function faced by Staples, equation (4), is not necessarily the
residual demand function that would be estimated by an outside observer. See Baker &
Bresnahan (1988).
with respect to its $i$th argument and $R_{ij}$ represents the derivative of $R_i$ with respect to its $j$th argument. We assume that $R_1 < 0$ and $R_{11} \ge 0$.
(5) $Q_S R_1 + R = K_S$
Our later empirical work focuses on the way changes in the components of marginal cost affect
equilibrium price and quantity. Accordingly, we rewrite the first order condition (5) as follows:
(6) $Q_S R_1 + R = C + C_S$
We derive the rate at which Staples passes through firm-specific and industry-wide cost shocks by differentiating equations (4) and (6) with respect to $P_S$, $Q_S$, $C$, and $C_S$.12
(7) $dP_S = R_1\,dQ_S + R_3\,dC$
(8) $[2R_1 + Q_S R_{11}]\,dQ_S + [Q_S R_{13} + R_3]\,dC = dC + dC_S$
We solve the system (7) and (8) for the firm-specific and industry-wide pass-through rates, which are set forth in equations (9) and (10), respectively.
(9) $dP_S/dC_S = 1/(2+\phi)$, where $\phi = Q_S R_{11}/R_1 \le 0$
(10) $dP_S/dC = [dP_S/dC_S][1 + R_3(1+\phi) - Q_S R_{13}]$
The expression $\phi$ is interpreted as the elasticity of the slope of residual demand.13
We first interpret equation (9), the expression for the firm-specific pass-through rate
($dP_S/dC_S$). This rate reflects how Staples changes price in response to a cost change not
The transformation of equation (5) into the following equivalent form demonstrates
that the first order condition can be interpreted as equating the Lerner Index of markup over
marginal cost with the absolute value of the elasticity of inverse residual demand:
$(P_S - K_S)/P_S = -Q_S R_1/R$.
We do not differentiate with respect to $C_i$ (for $i \ne S$) because we are not concerned with the firm's price response to cost shocks specific to rival firms.
More precisely, $1/\phi$ is the elasticity of the slope of inverse residual demand.
experienced by any rival. The second order condition guarantees that the firm-specific pass-
through rate is non-negative.14 Thus, Staples will raise price when its firm-individuated costs rise
and lower price when its firm-individuated costs decline.
The shape of the demand curve affects the pass-through rate. If the firm’s residual
demand is linear ($R_{11} = \phi = 0$), then the firm-specific pass-through rate equals ½. Such a firm is
a monopolist of its residual demand function, and a monopolist facing linear demand and
constant marginal cost passes through half of any cost increase to consumers. The firm-specific
pass- through rate varies from the benchmark of ½ with the curvature of the residual demand
function, as is evident from the presence in equation (9) of a parameter (f ) related to the second
derivative of demand. This occurs because the curvature is related to the way the demand
elasticity changes with price.15 Firms exercising market power have an incentive to take
advantage of more inelastic industry demand by raising price. If residual demand grows elastic, as price rises, less rapidly than it would were residual demand linear, then the firm may respond to a small cost increase by raising price by more than half the cost increase.
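The linear benchmark is easy to verify directly. With residual demand $P = a - bQ$ (hypothetical intercept $a$ and slope $b > 0$) and constant marginal cost $c$, the first order condition $a - 2bQ = c$ gives
$$Q^* = \frac{a - c}{2b}, \qquad P^* = \frac{a + c}{2}, \qquad \frac{dP^*}{dc} = \frac{1}{2}.$$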
The pass-through rate also may vary with the extent of competition. In the limiting case
The second order condition guaranteeing that the solution to equation (6) maximizes
profits requires that $2R_1 + Q_S R_{11} < 0$. This implies that $(2+\phi) > 0$, essentially restricting the slope of the residual demand function not to grow more horizontal too rapidly as output expands.
Bulow & Pfleiderer (1983) and Stiglitz (1988) show that a monopolist facing a linear
demand curve will pass through 50% of cost changes while pass through rates can be higher or
lower depending on the shape of the demand curve. The relationship between the pass-through
rate and the elasticity of the slope of demand has been highlighted by Bishop (1968), Seade
(1985) and Goldberg & Knetter (1997). The theoretical literature on pass-through rates also
considers an issue we do not treat: the relationship between the pass-through rate and the slope
of marginal cost. E.g. Bishop (1968); Stiglitz (1988); and Goldberg & Knetter (1997).
of perfect competition, under which the residual demand function Staples faces becomes
horizontal ($R_1 \to 0$), the firm-specific pass-through rate goes to zero.16 This result is derived by
totally differentiating equation (4) under the assumption that residual demand does not vary with
firm output, yielding:
(7') $dP_S = R_3\,dC$
In this limiting case, price varies only with industry-wide shocks to marginal cost, not with
variation in firm-specific costs.17
The industry-wide pass-through rate ($dP_S/dC$) defines the way Staples alters its price in
response to a cost increase common to it and its rivals (such as the incidence of an industry-wide
tax). In general, we expect this rate to be positive, and indeed to exceed the firm-specific pass
through rate, on the view that the industry’s response to a common cost shock is likely to be
more like that of a monopolist than a competitor.18 Under the technical assumptions of the
model, the industry-wide pass-through rate will be positive if $1 + R_3(1+\phi) - Q_S R_{13} > 0$, and the industry rate will exceed the firm-specific rate if $1 + R_3(1+\phi) - Q_S R_{13} > 1$. These conditions will be satisfied in one benchmark case: when the Staples residual demand function is approximately linear ($R_{11} = R_{13} = 0$) and an industry cost increase raises the residual demand
This result is illustrated in Yde & Vita (1996).
We cannot infer the slope of residual demand or the extent of competition from an
estimate of the firm-specific pass-through rate, however. As equation (9) makes clear, the firm-
specific pass-through rate in general also depends upon the curvature of residual demand. See
also Bulow & Pfleiderer (1983).
The FTC’s principal economic expert in the Staples litigation, Dr. Frederick Warren-
Boulton, took this view. He based his conclusion in part on a theoretical model he developed in
which Cournot oligopolists passed through a greater fraction of industry-wide cost shocks than
firm-specific cost shocks.
function facing Staples ($R_3 > 0$).19 More generally, without restricting the curvature of the
residual demand function, the conditions for the industry pass-through rate to be positive and in
excess of the corresponding firm-specific pass-through rate are most likely to hold when the main
effect of an industry-wide cost rise is non-strategic (reducing industry supply without markedly
altering the way firms interact). If so, then it is plausible that an industry cost shock would lead the residual demand function facing Staples to rise ($R_3 > 0$) without altering its slope ($R_{13} \approx 0$).
III. Estimating Cost Pass-Through
As is evident from their derivation, the pass-through rates $dP_S/dC_S$ and $dP_S/dC$ are derivatives of the reduced form price equation (11), which we seek to estimate.
(11) $P_S = f(C_S, X, C, C_i)$, for $i \ne S$
We specify a functional form (12) linear in logarithms (using lower case values of the variables
to reflect logs).21 In order to highlight the econometric issues, we suppress the vector of
exogenous demand shift variables X. The error term e is assumed to be independently and
This assumption is consistent with the experience of the brewing industry: Baker &
Bresnahan (1988) found empirically that common cost increases raised the residual demand
facing three brewers. Note that $R_3$ mixes structural parameters of demand with conduct terms, as is evident from the following relationship derived from equations (1) and (2):
$R_3 = \sum_{i \ne S} (dP_S/dQ_i)(dQ_i/dK_i)$.
This interpretation also presumes restrictions on the elasticity of the slope of residual
demand such that $(1+\phi) > 0$.
We might prefer a functional form that is second-order flexible, such as a translog
model, given the importance of the curvature of the demand function to the pass-through rate.
But in our application, we have insufficient data to estimate with precision many more
parameters of the demand function, so do not use such a functional form here.
identically distributed and uncorrelated with the regressors.
(12) $p_S = \beta_0 + \beta_1 c_S + \beta_2 c + \sum_{i \ne S} \gamma_i c_i + e$
We also assume that the industry-wide and firm-specific marginal cost components are
independent, and that the firm-specific components are uncorrelated across firms:22
(13) $\mathrm{cov}(c_i, c) = \mathrm{cov}(c_i, c_j) = 0$, for $i \ne j$
We do not observe the cost components; instead we observe measures of marginal cost by
firm, $k_S$ and $k_D$. We treat the components as additive in logs:
(14) $k_i = c + c_i$, where $i = S, D$
Although the assumption that the cost components are additive in levels is equally plausible,
equation (14) may nevertheless be a reasonable local approximation.
Our primary goal is to estimate $\beta_1$, the pass-through rate for Staples-specific cost shocks.23 With the model expressed in logs, this parameter has an elasticity interpretation: a 1 percent increase (reduction) in Staples-specific costs will be associated with a $\beta_1$ percent increase (reduction) in Staples' price. We will sometimes refer to $\beta_1$ alternatively as the price elasticity with respect to Staples' costs.
Our strategy for estimating $\beta_1$ is to extract the Staples-specific cost component from $k_S$ by including in the equation the costs of a rival firm ($k_D$), Office Depot, as a measure of the
industry-wide cost component. This strategy exploits the assumed independence of firm-specific
These assumptions are plausible for office supply retailing, our application. For
example, if the cost of plastic for pens increased, the wholesale costs of pens might rise for all
firms independent of firm-specific components such as the negotiating skills of individual
managers in bargaining with suppliers.
One advantage of our procedure is that we can recover the parameter $\beta_1$ without independently estimating the multiple demand and conduct parameters of which it is composed.
and industry-wide cost shocks. Accordingly, we rewrite equation (12) using equation (14):
(15) $p_S = \beta_0 + \beta_1 k_S + (\beta_2 - \beta_1) k_D + (\beta_1 - \beta_2 + \gamma_D) c_D + \sum_{i \ne S,D} \gamma_i c_i + e$
Equation (15) explains Staples’ price in terms of two observable variables, Staples’ and
Office Depot’s marginal costs, and several unobservable variables, the Office Depot-specific cost
component and other firm-specific cost components. The main econometric issue we treat is
whether and to what extent the omission of the unobservable variable for Office Depot-specific
costs ($c_D$) would bias coefficient estimates.24 As will be seen, it is straightforward to estimate $\beta_1$
consistently in a regression model involving only observable right hand variables.
The models we estimate are specified in equations (16) and (17).25
(16) $p_S = a_0 + a_1 k_S + \epsilon$
(17) $p_S = b_0 + b_1 k_S + b_2 k_D + \epsilon$
Equation (16) relates Staples’ price to Staples’ costs but not to two variables present in (15):
Office Depot’s costs and the unobservable component of Office Depot’s costs. This is not our
preferred model for identifying the pass-through rate on firm-specific cost shocks because the
coefficient on $k_S$ in (16) will be a biased estimator of the true coefficient in (15). The bias arises because $k_S$ is correlated with the industry-wide component of the omitted variable $k_D$ (though not with the firm-specific component), as indicated in equation (18). The notation $E$ represents
Because the other unobservable firm-specific cost-components are uncorrelated with
the observable variables, their omission does not bias regression coefficient estimates.
Equations (16) and (17) implicitly recognize that office superstores adjust prices in
response to cost shocks rapidly. In other industries, firms may have reasons to smooth their
responses to cost shocks. For example, price adjustments may be costly and the firms may
believe that most cost shocks are temporary. Under such circumstances, we might have
considered estimating the model on lower frequency data (e.g. quarterly rather than monthly) or
incorporating lagged costs in the estimating equations.
the expectations operator.
(18) $E\,a_1 = \beta_1 + (\beta_2 - \beta_1)[\mathrm{cov}(k_S, k_D)/\mathrm{var}(k_S)] + (\beta_1 - \beta_2 + \gamma_D)[\mathrm{cov}(k_S, c_D)/\mathrm{var}(k_S)] + \sum_{i \ne S,D} \gamma_i [\mathrm{cov}(k_S, c_i)/\mathrm{var}(k_S)]$
$= \beta_1 + (\beta_2 - \beta_1)\lambda_S$, where $\lambda_S = \mathrm{var}(c)/[\mathrm{var}(c) + \mathrm{var}(c_S)] \in [0,1]$
$= (1 - \lambda_S)\beta_1 + \lambda_S \beta_2$
The expected value of the parameter $a_1$ is a weighted average of $\beta_1$ and $\beta_2$, with more weight placed on $\beta_2$ as more of the variation in Staples' costs comes from the industry-wide component (i.e. as $\lambda_S$ rises). Accordingly, if the industry-wide cost pass-through rate exceeds the firm-specific rate (i.e. if $\beta_2 > \beta_1$), as is plausible, then $a_1$ will be biased upward as an estimator of the firm-specific rate (i.e. then $E\,a_1 \ge \beta_1$). In discussing our results below, we refer to $a_1$ as an
overall average estimate of the effect of changes in Staples’ costs on Staples’ prices (that is,
averaging the effects of firm-specific and industry-wide cost shocks on price).
We instead use equation (17) to estimate the Staples-specific cost pass-through rate
because the coefficient on kS in equation (17) is an unbiased estimator of the true coefficient in
equation (15):
(19) $E\,b_1 = \beta_1$.
The omitted variables $c_D$ and $c_i$ do not introduce bias here because they are uncorrelated with $k_S$;
this is implied by equations (13) and (14).26
The coefficients in equation (17) also generate a biased estimate of the industry pass-
through rate, $\beta_2$. In particular:
Although measurement error in one independent variable can bias the regression
coefficients on other independent variables, that does not occur here because we have assumed
that the error in measuring industry-wide costs is uncorrelated with Staples’ costs.
(20) $E\,b_2 = (\beta_2 - \beta_1)\lambda_D + \gamma_D(1 - \lambda_D)$, where $\lambda_D = \mathrm{var}(c)/[\mathrm{var}(c) + \mathrm{var}(c_D)] \in [0,1]$.
The parameter $\gamma_D$ reflects the effect on Staples' price of a change in the firm-specific component of Office Depot's costs. To the extent this is small, as may be plausible, equation (20) implies that $b_2$ is a downward-biased estimator of the difference between the rate at which Staples passes through industry-wide and firm-specific cost shocks, and thus that the sum of $b_1$ and $b_2$ is a downward-biased estimator of the industry pass-through rate, $\beta_2$.27 However, it is evident from equation (20) that regardless of the magnitude of $\gamma_D$, the expected value of $b_2$ approaches the expression $(\beta_2 - \beta_1)$ in the limit as most of the variation in Office Depot's costs comes from the industry-wide component. Under such circumstances, the sum of $b_1$ and $b_2$ converges to an unbiased estimator of the pass-through rate for industry-wide cost shocks, $\beta_2$.
The latter case — in which most of the variation in firm costs comes from the industry-
wide component, so our estimator of the pass-through rate for industry-wide cost shocks is
unbiased — will be important in our empirical work. We can identify this situation using the
simple correlation between $k_S$ and $k_D$, derived from equations (13) and (14), which we denote $\rho$:
(24) $\rho = [\mathrm{cov}(k_S, k_D)/\mathrm{var}(k_S)]^{1/2}\,[\mathrm{cov}(k_S, k_D)/\mathrm{var}(k_D)]^{1/2} = [\lambda_S \lambda_D]^{1/2}$
Because the variance ratios are bounded ($\lambda_i \in [0,1]$ for $i = S, D$), the square of the correlation $\rho$ provides a lower bound estimator for the variance ratio $\lambda_D$. Thus, if $\rho$ is near one, it is reasonable to report the sum of $b_1$ and $b_2$ as an estimator of the pass-through rate for industry-wide cost shocks.
If $\gamma_D \approx 0$, the downward bias has an errors-in-variables interpretation: it arises because equation (17) omits the unobservable variable $c_D$ which appears in equation (15), and thus because Office Depot costs are a noisy proxy for industry-wide costs.
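Before turning to the data, a small simulation makes the identification argument concrete (a sketch with made-up parameter values, not the Staples data): Model 1's coefficient blends the two pass-through rates, while adding the rival's cost recovers the firm-specific rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta1, beta2 = 0.15, 0.85              # firm-specific and industry-wide rates

c   = rng.normal(0, 1.00, n)           # industry-wide (log) cost component
c_S = rng.normal(0, 0.50, n)           # Staples-specific component
c_D = rng.normal(0, 0.02, n)           # Office Depot-specific component (small)
k_S, k_D = c + c_S, c + c_D            # observed costs, as in equation (14)

p_S = beta1 * c_S + beta2 * c + rng.normal(0, 0.1, n)  # equation (12), gammas = 0

# Model 1: p_S on k_S alone -- a weighted average of beta1 and beta2.
a1 = np.polyfit(k_S, p_S, 1)[0]
# Model 2: add k_D to absorb the industry-wide component.
X = np.column_stack([np.ones(n), k_S, k_D])
b0, b1, b2 = np.linalg.lstsq(X, p_S, rcond=None)[0]

print(round(a1, 2))                    # ~0.71 = (1 - 0.8)*0.15 + 0.8*0.85
print(round(b1, 2), round(b1 + b2, 2)) # ~0.15 and ~0.85
```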
IV. Data
The data used to estimate price equations (16) and (17) comes from two samples, one
provided by Staples and one by Office Depot, of average monthly price and variable cost data on
products sold during the years 1995 and 1996. From these two samples we matched 30 identical
products that were sold in both Staples and Office Depot stores during 1995 and 1996 and for
which we had cost data from both companies. These monthly data cover almost all
(approximately 500) Staples stores and are at the stock-keeping unit (SKU)28 level. The 30
SKUs comprised: 17 pens, 7 paper items, 5 toner cartridges, and 1 computer diskette.29 These
items are largely what Staples terms "price-sensitive" SKUs.30
We include store, SKU, and time fixed effect dummy variables in our regressions in order
to control for price variation due to differences across stores, products, and months. Equations
(16) and (17) are rewritten below to reflect these additional variables and the level of the data
used in the analysis. For store j, SKU l, at time t, the reduced form price equations estimated are
(16') $p^j_{Slt} = a_0 + a_1 k^j_{Slt} + X_{jt} a_2 + \mu_{1j} + \mu_{2l} + \mu_{3t} + \epsilon_{jlt}$
(17') $p^j_{Slt} = b_0 + b_1 k^j_{Slt} + b_2 k^l_{Dt} + X_{jt} b_3 + \mu_{1j} + \mu_{2l} + \mu_{3t} + \epsilon_{jlt}$
Stock-keeping units are the finely specified product definitions chosen by a firm for
internal inventory management uses. For example, a firm might use different stock keeping units
for red ink and blue ink models of a particular brand and style of pens, and different SKUs for the
medium and fine-point models.
We also estimated our model on a second sample of SKUs matched by the defendants’
expert. (The defendants’ expert had gone through a similar exercise, for another purpose, of
matching those Staples and Office Depot SKUs for which cost data were available.) The pass-
through rate estimates based on our sample and the defendants’ sample were nearly identical.
In general, price sensitive items are highly visible items that are comparison-shopped
and frequently purchased.
The variables included are log Staples price ($p^j_{Slt}$), log Staples cost ($k^j_{Slt}$) and log average Office Depot cost ($k^l_{Dt}$) (for the corresponding SKU in the same month, averaged over all Office Depot stores), fixed effect dummies for store ($\mu_{1j}$), SKU ($\mu_{2l}$), and time ($\mu_{3t}$), and in some models, competitor variables ($X_{jt}$). The competitor variables control for the number of Staples, Office
Depot, OfficeMax, Wal-Mart, Sam’s Club, Computer City, Best Buy, Office 1 Superstore,
Costco, BJ’s, CompUSA, Kmart, and Target stores in the metropolitan statistical area (MSA).
The cost variables were accounting estimates of average variable cost (essentially, cost of goods
sold) supplied by the merging firms; we treat these as estimates of marginal cost. We cannot
present descriptive statistics, such as the mean and standard deviation of the variables in our
sample, as they are not in the public domain. The regression results are discussed below.
V. Empirical Results
Table 1 presents estimates of the impact of changes in costs on Staples’ prices.31 Models
1 and 2 correspond to estimates of equations (16') and (17'), respectively, but without the
competitor variables. Model 1 does not separate firm-specific from industry-wide cost changes.
The coefficient of 0.571 on log Staples Cost is an estimate of $a_1$, the price elasticity with respect
to weighted average marginal cost, in equation (16'). Thus, for a 10% decrease in Staples’ costs,
We are unable to report additional coefficients or regression diagnostics, as this
information was not made public during litigation. We did not formally examine the statistical
properties of the error terms, though nothing in our results suggested that they had troublesome properties.
Model 1 estimates a 5.7% decrease in Staples’ prices; the combined firm-specific and industry
wide pass-through rate is 57%.32
Model 2 separates Staples’ firm-specific cost changes from industry-wide cost changes by
including log Office Depot cost as an explanatory variable for Staples’ prices. The coefficient of
0.149 on log Staples cost in Model 2 is an estimate of $b_1$, the price elasticity with respect to firm-
specific costs, and measures the impact of Staples’ firm-specific cost changes on Staples’ prices.
It implies that if Staples-specific costs fall 10%, Staples lowers prices on average by roughly
1.5%; the firm-specific pass-through rate is about 15%.
The coefficient of 0.149 on log Staples cost in Model 2 is much lower than the coefficient
of 0.571 on log Staples cost in model 1, thus demonstrating that the bias in estimating firm-
specific pass-through without controlling for industry-wide cost changes can be large.
Models 3 and 4, also presented in Table 1, are identical to Models 1 and 2 except for the
addition of variables to control for the number of competitors in the MSA. Including competitor
variables made only a trivial difference to the estimated coefficients on the cost variables. The
overall pass-through and firm-specific pass-through remain 0.571 and 0.149, respectively, and
stay highly significant statistically.
Models 2 and 4 also permit us to estimate the pass-through rate on industry-wide cost shocks. Because the Staples and Office Depot cost variables were highly correlated in our data (correlation close to one), we treat the sum of the coefficients on the log Staples cost and log Office Depot cost variables as a reasonable estimator of the industry-wide pass-through rate. In both models, the point estimate is close to 0.85, implying an 85% pass-through rate for industry-wide cost shocks. (This estimate is close to the two-thirds suggested by the merging firms' expert in the Staples litigation.)
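To make the estimating equation concrete, here is a minimal sketch (my own illustration, not the authors' code; the synthetic data and every name below are hypothetical) of a log-log pass-through regression with store, SKU, and month fixed effects:

```python
# Sketch of an equation (17')-style regression:
#   log p_jlt = b1*log k^S_jlt + b2*log k^D_lt + store/SKU/month effects + e
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "store": rng.integers(0, 10, n),     # fixed effect mu1_j
    "sku":   rng.integers(0, 30, n),     # fixed effect mu2_l
    "month": rng.integers(0, 24, n),     # fixed effect mu3_t
    "log_staples_cost": rng.normal(0.0, 1.0, n),
})
# Industry-wide shocks make the two cost series nearly collinear.
df["log_depot_cost"] = df["log_staples_cost"] + rng.normal(0.0, 0.1, n)
df["log_staples_price"] = (0.15 * df["log_staples_cost"]
                           + 0.70 * df["log_depot_cost"]
                           + rng.normal(0.0, 0.05, n))

fit = smf.ols("log_staples_price ~ log_staples_cost + log_depot_cost"
              " + C(store) + C(sku) + C(month)", data=df).fit()
b1 = fit.params["log_staples_cost"]      # firm-specific pass-through
b2 = fit.params["log_depot_cost"]
print(f"firm-specific: {b1:.3f}, industry-wide: {b1 + b2:.3f}")
```

On data simulated this way the fit recovers values near 0.15 and 0.85, mirroring the firm-specific and industry-wide rates reported above.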
At the preliminary injunction hearing, the merging firms argued that our empirical
estimates were not a good guide for policy-making because our data were limited. They
emphasized that the 30 SKUs used in the analysis were not a random sample and that we did not
test whether they were representative of all of the products sold at Staples. For example, they
noted that 17 of the 30 SKUs were pens, while pens make up only 2.3% of Staples’ sales; that 27
of the 30 SKUs were price-sensitive items; and that excluding variants in style and color, which
are likely to have a similar shelf price, there were only 20 SKUs in the sample. The defendants
also pointed out that the time period covered by our study was limited to the years 1995 and
1996. In response, we pointed out three empirical reasons to trust our results. First, when we
estimated our models on a second sample of matched SKUs put together by the merging firms’
expert for a different purpose, we found the pass-through rates to be nearly identical to those
estimated from our sample. Second, when we simulated the impact of the merger based on the
models from equations (16') and (17') on this sample of 30 largely price-sensitive items we
found a predicted price increase of 16-18% from the merger, as presented in Table 1. This
predicted price increase is close to the 19-20% price increase derived independently of the cost
pass-through study with a model estimated on a far broader, and more representative, sample of
price-sensitive items.33 Finally, we found no significant difference in the pass-through rate when
we estimated the model separately on cost increases and cost decreases.
The results of the pricing study are summarized in Appendix Table A1.
Table 1
Estimates of the Impact of Log Costs on Log Staples Prices
                                                 Model 1      Model 2      Model 3      Model 4
Log Staples Cost                                  0.571        0.149        0.571        0.149
                                                (194.20)      (37.62)     (195.15)      (37.65)
Log Office Depot Cost                               -          0.696          -          0.697
                                                             (150.25)                  (151.22)
Competitor Variables                               No           No           Yes          Yes
Simulated impact on Staples prices of            Not          Not
merging Staples and Office Depot              Applicable   Applicable       16.4%        16.6%
Simulated impact on Staples prices of            Not          Not
merging Staples, Office Depot, and OfficeMax  Applicable   Applicable       17.0%        17.6%
Notes: Based on models in which the log of Staples’ price for each of 30 SKUs is regressed on
fixed effects for store, month, and SKU, and on the variables indicated in the Table. Cost
variables are entered as natural logarithms. Numbers in parentheses are t-statistics.
Table A1
Simulated Impact of Two Hypothetical Mergers on Staples’ Price
for Price Sensitive Office Products
Simulation                                      Percent Impact   t-Statistic   Number of Observations
                                                  on Prices                       in Simulation
Merge Staples and Office Depot in markets
with Office Depot                                   18.7%           16.81             3,038
Merge Staples, Office Depot, and OfficeMax in
markets with Office Depot and OfficeMax             19.7%           13.69             1,960
Notes: Simulations based on a model in which Staples’ prices for price sensitive items are
regressed on fixed effects for the store, fixed effects for the month, and variables which control
for the number of Staples, Office Depot, OfficeMax, Wal-Mart, Sam’s Club, Computer City,
Best Buy, Office 1 Superstore, Costco, BJ’s, CompUSA, Kmart, and Target stores in the MSA.
Baker, Jonathan B. and Timothy Bresnahan, "Estimating the Residual Demand Curve Facing a
Single Firm," International Journal of Industrial Organization, September 1988, 6(3),
pp. 283-300.
Barzel, Yoram, "An Alternative Approach to the Analysis of Taxation," Journal of Political
Economy, 1976, 84(6), pp. 1177-1197.
Bishop, Robert L., "The Effects of Specific and Ad Valorem Taxes," Quarterly Journal of
Economics, 1968, 82, pp.198-218.
Bulow, Jeremy I. and Paul Pfleiderer, "A Note on the Effect of Cost Changes on Prices," Journal
of Political Economy, 1983, 91(1), pp. 182-185.
Federal Trade Commission v. Staples, Inc., 970 F. Supp. 1066 (D.D.C. June 30, 1997) (Hogan, J.).
Goldberg, Pinelopi Koujianou and Michael M. Knetter, "Goods Prices and Exchange Rates: What Have We Learned?" Journal of Economic Literature, September 1997, XXXV, pp. 1243-1272.
Harris, Jeffrey E., "The 1983 Increase in the Federal Cigarette Excise Tax," Reprinted from Tax Policy & the Economy, 1987, 1, edited by L.H. Summers, Cambridge, MA: M.I.T. Press.
Johnson, Terry R., "Additional Evidence on the Effect of Alternative Taxes on Cigarette Prices,"
Journal of Political Economy, December 1978, 86(2), pp. 325-328.
Menon, Jayant, "Exchange Rate Pass-Through," Journal of Economic Surveys, June 1995, 9(2),
pp. 197-231.
Seade, J., "Profitable Cost Increases and the Shifting of Taxation: Equilibrium Responses to Markets in Oligopoly," Warwick Economic Research Papers Number 260, April 1985, University of Warwick, Coventry.
Stiglitz, Joseph E., "Who Really Pays the Tax: Tax Incidence," in Economics of the Public
Sector, Chapter 17, 1988. New York, NY: W.W. Norton & Company, pp.411-436.
Sullivan, Daniel, "Testing Hypotheses about Firm Behavior in the Cigarette Industry," Journal of Political Economy, 1985, 93(3), pp. 586-598.
Sumner, Daniel A., "Measurement of Monopoly Behavior: An Application to the Cigarette Industry," Journal of Political Economy, October 1981, 89(5), pp. 1010-1019.
Sumner, Michael T. and Robert Ward, "Tax Changes and Cigarette Prices," Journal of Political Economy, 1981, 89(6), pp. 1261-1265.
Sung, Hai-Yen, Teh-Wei Hu, and Theodore E. Keeler, "Cigarette Taxation and Demand: An
Empirical Model," Contemporary Economic Policy, July 1994, Vol. 12, pp. 91-100.
United States Department of Justice and Federal Trade Commission, Horizontal Merger
Guidelines, Section 4, Revised 1997.
Yde, Paul L. and Michael G. Vita, "Merger Efficiencies: Reconsidering the ‘Passing-On’
Requirement," Antitrust Law Journal, 1996, 64, pp. 735-747.
James Glimm Named Fellow of the Society for Industrial and Applied Mathematics
James Glimm, a staff member of Brookhaven National Laboratory’s (BNL) Computational Science Center (CSC) and chair of the Department of Applied Mathematics & Statistics at Stony Brook University
(SBU), has been named a Fellow of the Society for Industrial and Applied Mathematics. The professional organization is beginning its Fellowship program this year, and Glimm is one of 183 members
who are being honored as Fellows for their distinguished contributions to their field.
Glimm was cited for “contributions to operator algebras, partial differential equations, mathematical physics, and especially shock wave theory.”
“Science for me is the ultimate adventure of the human mind,” Glimm said. “I am honored by this recognition for my research spanning multiple areas of pure and applied mathematics, theoretical
physics, and computation.”
Glimm has made outstanding contributions to shock wave theory, in which mathematical models are developed to explain natural phenomena that involve intense compression, such as air pressure in sonic
booms, crust displacement in earthquakes, and density of material in volcanic eruptions and other explosions. He also has been a leading theorist in operator algebras, partial differential equations,
mathematical physics, applied mathematics, and quantum statistical mechanics.
After earning a B.A. in engineering from Columbia University in 1956, Glimm went on to receive a Ph.D. in mathematics from Columbia in 1959. He joined BNL in 1999 as director of the Center for Data
Intensive Computing (CDIC), a position he held until 2004, when the CSC replaced the CDIC, thereby expanding computing capabilities at BNL in numerous areas of science. From 2004 to the present,
Glimm has overseen the applied mathematics program at the CSC. Glimm has been Distinguished Professor and Chair of Applied Mathematics & Statistics at SBU since 1989.
Glimm has received numerous awards for his work, including the 2002 National Medal of Science, the 1993 American Mathematical Society’s Steele Prize for a paper of fundamental importance, the Dannie
Heineman Prize for Mathematical Physics in 1980 and the New York Academy of Sciences’ Award in the Physical and Math Sciences in 1979.
Hans Petter Langtangen
Email: hpl@simula.no
Mobile phone: +47 99 53 20 21
Reception phone: +47 67 82 82 00
Fax: +47 67 82 82 01
Visiting address: Martin Linges vei 17, 4th floor
Snail mail: Simula Research Laboratory, P.O. Box 134 NO-1325 Lysaker, Norway
My main position is now as Director of Center for Biomedical Computing (CBC), a Norwegian Center of Excellence hosted at the Simula Research Laboratory. I am on 80% leave from my position as
Professor of Mathematical Modeling at the Department of Informatics, University of Oslo (I spend the 20% part of this position on teaching and supervision). Much of my time is currently devoted to
the work as Editor-in-Chief of SIAM Journal on Scientific Computing.
From 2007 my main research tasks are related to CBC. This center covers computational middleware for numerical solution of partial differential equations, robust numerical methods for laminar and
turbulent flow, including fluid-structure interactions, mathematical modeling of air and blood flow in the human body, and modeling of the electrical activity in the heart. We apply our generic
methods both in biomedicine and in geoscience.
In the past I have worked part-time at the Physics of Geological Processes Center of Excellence at the University of Oslo; Numerical Objects, a company that commercialized the Diffpack software; the
Department of Scientific Computing, Uppsala University; and SINTEF Applied Mathematics. Before taking up the position at the Department of Informatics in 1999, I worked as an Associate Professor and
later as a full Professor of Mechanics at the Department of Mathematics, University of Oslo. My education is from the University of Oslo, with MSc and PhD degrees in Mechanics. My CV has more details.
Research topics
• methods for creating flexible scientific software (high-level languages, scripting, Python, object-orientation, Diffpack),
• numerical methods for flow problems (e.g. water waves, porous media flow, biomedical flows),
• finite element methods,
• stochastic models and methods in mechanics.
Further information
• Curriculum Vitae with updated publication list
• FEniCS tutorial: v1.0, v1.0 (grad instead of nabla_grad)
• Recent books:
• Current software packages: ptex2tex, latexslides, doconce, and scitools.
• Present teaching:
□ Now I develop and teach one course: Introduction to programming in the natural sciences (INF1100). This course applies Python to teach both Matlab-style scripting and Java/C++-style
(object-oriented) programming to beginning students. All examples and exercises illustrate the use of mathematical modeling and programming to solve problems from classical calculus,
numerical calculus, physics, biology, and finance. The course constitutes a foundation for the University's strong focus on Computational Science in the bachelor programs. See above for links
to the book.
• Past teaching:
can u identify in which quadrant (3,-85) will lie ?
it is false..right?
yup, because its in 1st quadrant and (3,-85) should lie in 4th quadrant.
that point is very far from origin, not definitely 3, so false.
But it's polar coordinates we are talking about; I think it's right.
yeah this is polar coordinates
yeah its true then. Cause the 3 stands for the length of the hypotenuse if u draw a triangle with perpendicular side on the y axis and the base on x axis. I hope u get what I mean.
hahahahha hell no!!!
excuse me?
300 degrees doesn't lie on the second quad. Its given there that the upper quadrant lies between 0 to 180. if it had been 300 it would lie somewhere down below the 0. Get it??
(-9,300) = (9,120) so true. sorry for late reply.
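A quick numeric check of both points (my own addition, not part of the thread), converting polar (r, θ in degrees) to Cartesian and reading off the quadrant:

```python
import math

def quadrant(r: float, theta_deg: float) -> int:
    # A negative r reflects the point through the origin.
    x = r * math.cos(math.radians(theta_deg))
    y = r * math.sin(math.radians(theta_deg))
    if x > 0:
        return 1 if y > 0 else 4
    return 2 if y > 0 else 3

print(quadrant(3, -85))    # 4: (3, -85 deg) is in the fourth quadrant
print(quadrant(-9, 300))   # 2: (-9, 300 deg) equals (9, 120 deg)
```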
Is the set of cube-free binary sequences perfect?
This question is inspired by this one. In that thread, it's established that there are uncountably many cube-free infinite binary strings (where $x \in 2^{\omega}$ is cube-free iff $\forall \sigma \subset x,\ \sigma\sigma\sigma \not\subset x$). Here $\subset$ denotes "substring", which should be distinguished from "initial segment", which I'll denote by $\sqsubset$ if the need arises.
So let $C \subset 2^{\omega}$ be the subset of Cantor space consisting of all cube-free sequences. It's uncountable, and easily seen to be closed, hence it contains a perfect set.
Question 1. Is $C$ perfect?
Let's define:
$T_C = \{ \sigma \in 2^{<\omega} : (\exists x \in C)(\sigma \sqsubset x)\}$
$\ \ \ \ \ = \{ \sigma \in 2^{<\omega} : (\forall n > |\sigma|)( \exists \tau \in 2^n )(\sigma \sqsubset \tau,\ \tau$ cube-free$)\}$
$T_P = \{\sigma \in 2^{<\omega} : (\forall \tau \sqsupset \sigma, \tau \in T_C)(\exists \rho_1, \rho_2 \sqsupset \tau$ both in $T_C)(\rho_1 \perp \rho_2)\}$
Question 2. $T_C$ is evidently $\Pi ^0 _1$, but is it in fact recursive?
If we let $P$ denote the maximal perfect subset of $C$ obtained by iteratively removing isolated points of $C$ until this procedure stabilizes (taking intersections at limit stages), then $P$
determines a tree which I believe would be the tree $T_P$ defined above.
Question 3. What is the complexity of the tree determined by $P$?
If $T_P$ is indeed the tree determined by $P$, then the tree determined by $P$ is at worst $\Pi ^0 _3$, but can we do better?
UPDATE: The answers are:
1. Yes
2. Yes
3. Since $C$ is perfect, $P = C$ and so by the answers to 1 and 2, the tree determined by $P$ is just the tree determined by $C$, which is recursive.
I've learnt this after discussing it with Robert Shelton, one of the authors of the paper Gjergji linked to in his response below. In fact, what I've gathered is that there's a function $f$, better than being on the order of $n^2$, such that to determine whether an arbitrary finite string $\sigma$ has a cube-free infinite extension, it suffices to check whether it has one of length $f(|\sigma|)$ (where $|\sigma|$ is the length of $\sigma$). I suppose this would mean furthermore that the tree $T_C$ is not merely recursive, i.e. $\Delta_1$, but in fact $\Delta_0$.
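For concreteness, here is a small brute-force sketch (my own, not from the cited paper; the search depth `extra` is only a hypothetical stand-in for the function $f$ above):

```python
# Test cube-freeness of a finite binary string, and search bounded extensions.
def is_cube_free(s: str) -> bool:
    n = len(s)
    for i in range(n):
        for m in range(1, (n - i) // 3 + 1):
            if s[i:i+m] == s[i+m:i+2*m] == s[i+2*m:i+3*m]:
                return False
    return True

def has_cube_free_extension(s: str, extra: int) -> bool:
    """Does some extension of s by `extra` further bits stay cube-free?"""
    if not is_cube_free(s):
        return False
    if extra == 0:
        return True
    return any(has_cube_free_extension(s + c, extra - 1) for c in "01")

print(is_cube_free("0010011"))           # True
print(has_cube_free_extension("00", 8))  # True within this bounded search
```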
computability-theory co.combinatorics symbolic-dynamics
This will also be a question in dynamics. One can also ask what is the entropy of C? – Nishant Chandgotia Apr 15 '11 at 7:09
I added the symbolic dynamics tag. – Gjergji Zaimi Apr 15 '11 at 7:26
1 Answer
I found an article of J.D. Currie and R.O. Shelton that answers the first question. "The set of k-power free words over Σ is empty or perfect".
Added: Note that Currie has a number of papers on power free words, and many of them are on the arxiv.
Thanks. The article actually seems to answer all the questions I asked (possibly). Since $C$ is perfect, $T_C = T_P$ so questions 2 and 3 are equivalent. And Theorem 3.3 claims that we can effectively determine whether a finite binary string has an infinite cube-free extension. It's not clear whether this effective procedure is uniform however, and the proof of this theorem along with the definitions of some key terms appear to be in some other papers, so rather than chasing papers I emailed the authors and asked. I'll post an update if I get a response. – Amit Kumar Gupta Apr 15 '11 at 15:12
After a fruitful back-and-forth with Robert Shelton, I've understood that yes, the tree $T_C$ is recursive. – Amit Kumar Gupta Apr 19 '11 at 14:53
Determine which of the following point(s) lie on the graph y = √(x + 2):
a. (-6, 4)   b. (7, -3)   c. a & b   d. neither
\[y = \sqrt{x + 2}\] Just plug in the coordinate choices with their respective variables.
Like he said, the numbers are coordinates (x, y), so \[4=\sqrt{-6+2}\] is the square root of -4, which is imaginary, and \[-3=\sqrt{7+2}\] gives 3, not negative 3, so this one is a no as well. So I would go with D, neither.
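A quick check of both points in Python (my own illustration), using complex square roots so the imaginary case shows up explicitly:

```python
import cmath

for x, y in [(-6, 4), (7, -3)]:
    val = cmath.sqrt(x + 2)
    print((x, y), val, val == y)
# (-6, 4): 2j, imaginary, so the point is not on the real graph
# (7, -3): (3+0j), which is 3, not -3
```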
Bulletin of the American Mathematical Society
ISSN 1088-9485(online) ISSN 0273-0979(print)
It is easy to determine whether a given integer is prime
Author: Andrew Granville
Journal: Bull. Amer. Math. Soc. 42 (2005), 3-38
MSC (2000): Primary 11A51, 11Y11; Secondary 11A07, 11A41, 11B50, 11N25, 11T06
Published electronically: September 30, 2004
MathSciNet review: 2115065
Abstract: ``The problem of distinguishing prime numbers from composite numbers, and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic.
It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods
that have been proposed thus far are either restricted to very special cases or are so laborious and difficult that even for numbers that do not exceed the limits of tables constructed by estimable
men, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers ... It frequently happens that the trained calculator will be sufficiently
rewarded by reducing large numbers to their factors so that it will compensate for the time spent. Further, the dignity of the science itself seems to require that every possible means be explored
for the solution of a problem so elegant and so celebrated ... It is in the nature of the problem that any method will become more complicated as the numbers get larger. Nevertheless, in the
following methods the difficulties increase rather slowly ... The techniques that were previously known would require intolerable labor even for the most indefatigable calculator.''
--from article 329 of Disquisitiones Arithmeticae (1801) by C. F. Gauss
• 1. Leonard M. Adleman and Ming-Deh A. Huang, Primality testing and abelian varieties over finite fields, Lecture Notes in Mathematics, vol. 1512, Springer-Verlag, Berlin, 1992. MR 1176511
• 2. Leonard M. Adleman, Carl Pomerance, and Robert S. Rumely, On distinguishing prime numbers from composite numbers, Ann. of Math. (2) 117 (1983), no. 1, 173–206. MR 683806 (84e:10008).
• 3. Manindra Agrawal, Neeraj Kayal and Nitin Saxena, PRIMES is in P (to appear).
• 4. W. R. Alford, Andrew Granville, and Carl Pomerance, There are infinitely many Carmichael numbers, Ann. of Math. (2) 139 (1994), no. 3, 703–722. MR 1283874 (95k:11114).
• 5. R. C. Baker and G. Harman, The Brun-Titchmarsh theorem on average, Analytic number theory, Vol. 1 (Allerton Park, IL, 1995) Progr. Math., vol. 138, Birkhäuser Boston, Boston, MA, 1996, pp.
39–103. MR 1399332 (97h:11096)
• 6. D. J. Bernstein, Proving primality in essentially quartic random time (to appear).
• 7. Pedro Berrizbeitia, Sharpening ``PRIMES is in P'' for a large family of numbers (to appear).
• 8. Dan Boneh, Twenty years of attacks on the RSA cryptosystem, Notices Amer. Math. Soc. 46 (1999), no. 2, 203–213. MR 1673760
• 9. Richard Crandall and Carl Pomerance, Prime numbers, Springer-Verlag, New York, 2001. A computational perspective. MR 1821158 (2002a:11007)
• 10. Whitfield Diffie and Martin E. Hellman, New directions in cryptography, IEEE Trans. Information Theory IT-22 (1976), no. 6, 644–654. MR 0437208 (55 #10141)
• 11. Étienne Fouvry, Théorème de Brun-Titchmarsh: application au théorème de Fermat, Invent. Math. 79 (1985), no. 2, 383–407 (French). MR 778134 (86g:11052), http://dx.doi.org/10.1007/BF01388980
• 12. Morris Goldfeld, On the number of primes 𝑝 for which 𝑝+𝑎 has a large prime factor, Mathematika 16 (1969), 23–27. MR 0244176 (39 #5493)
• 13. Andrew Granville and Thomas J. Tucker, It’s as easy as 𝑎𝑏𝑐, Notices Amer. Math. Soc. 49 (2002), no. 10, 1224–1231. MR 1930670 (2003f:11044)
• 14. Shafi Goldwasser and Joe Kilian, Almost all primes can be quickly certified, Proceedings of the 18th annual ACM symposium on theory of computing, Association for Computing Machinery, New
York, 1986.
• 15. Donald E. Knuth, The art of computer programming. Vol. 2: Seminumerical algorithms, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont, 1969. MR 0286318 (44 #3531)
• 16. H. W. Lenstra Jr., Galois theory and primality testing, Orders and their applications (Oberwolfach, 1984) Lecture Notes in Math., vol. 1142, Springer, Berlin, 1985, pp. 169–189. MR 812498
(87g:11171), http://dx.doi.org/10.1007/BFb0074800
• 17. H.W. Lenstra, Jr., and Carl Pomerance, Primality testing with Gaussian periods (to appear).
• 18. Yuri V. Matiyasevich, Hilbert’s tenth problem, Foundations of Computing Series, MIT Press, Cambridge, MA, 1993. Translated from the 1993 Russian original by the author; With a foreword by
Martin Davis. MR 1244324 (94m:03002b)
• 19. Preda Mihailescu and Roberto Avanzi, Efficient 'quasi-deterministic' primality test improving AKS (to appear).
• 20. Gérald Tenenbaum and Michel Mendès France, The prime numbers and their distribution, Student Mathematical Library, vol. 6, American Mathematical Society, Providence, RI, 2000. Translated from
the 1997 French original by Philip G. Spain. MR 1756233 (2001j:11086)
• 21. Paulo Ribenboim, The new book of prime number records, Springer-Verlag, New York, 1996. MR 1377060 (96k:11112)
• 22. José Felipe Voloch, On some subgroups of the multiplicative group of finite rings (to appear).
Similar Articles
Retrieve articles in Bulletin of the American Mathematical Society with MSC (2000): 11A51, 11Y11, 11A07, 11A41, 11B50, 11N25, 11T06
Retrieve articles in all journals with MSC (2000): 11A51, 11Y11, 11A07, 11A41, 11B50, 11N25, 11T06
Additional Information
Andrew Granville
Affiliation: Département de Mathématiques et Statistique, Université de Montréal, CP 6128 succ. Centre-Ville, Montréal, QC H3C 3J7, Canada
Email: andrew@dms.umontreal.ca
DOI: http://dx.doi.org/10.1090/S0273-0979-04-01037-7
PII: S 0273-0979(04)01037-7
Received by editor(s): January 27, 2004
Received by editor(s) in revised form: August 19, 2004
Published electronically: September 30, 2004
Additional Notes: L’auteur est partiellement soutenu par une bourse du Conseil de recherches en sciences naturelles et en génie du Canada.
Dedicated: Dedicated to the memory of W. ‘Red’ Alford, friend and colleague
Article copyright: © Copyright 2004 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
Physics Homework Help
Pulley Problems
(See below for answers)
An Atwood machine uses a cable drawn over a pulley to connect two or more masses. One of the masses acts as a counterbalance or counterweight to reduce acceleration because of gravity. Elevators in
multi-level buildings are examples of Atwood machines. The counterweight in an elevator is typically the mass of the elevator plus about half of the mass of the allowable load.
1. In an Atwood's machine, the larger mass is 1.8 kg and the smaller mass is 1.2 kg.
a. Ignoring friction, what is the acceleration of the masses?
b. What is the tension in the string?
2. A 10.0 kg mass, m[1], on a frictionless table is accelerated by a 5.0 kg mass, m[2], hanging over the edge of the table. What is the acceleration of the mass along the table?
3. Two identical blocks are tied together with a string which passes over a pulley at the crest of the inclined planes, one of which makes an angle ø[1] = 28° to the horizontal, the other makes the
complementary angle ø[2] = 62°. If there is no friction anywhere, with what acceleration do the blocks move?
4. A 2.00-kg block and a 6.00-kg block are connected by a light string over a frictionless pulley. The two blocks are allowed to move on a fixed wedge inclined at 30º, with µ = 0.18. Determine the acceleration of the two blocks and the tension in the string.
5. Two blocks are moving down a ramp that is inclined at 30º. Box 1 (mass 1.55 kg) is above box 2 (mass 3.1 kg), and they are connected by a massless rod. The coefficient of kinetic friction is 0.226 for box 1 and 0.113 for box 2.
a) Determine the acceleration of the blocks.
b) Calculate the tension in the rod.
6. A 5.0 kg mass on a ramp is connected by a rope, drawn over a pulley, to a hanging mass. The ramp is inclined at 30º from the horizontal, and the coefficient of kinetic friction is 0.26.
a. Determine the acceleration of the 5.0 kg mass along the ramp.
b. Determine the tension in the rope during the acceleration of the 5.0 kg mass along the ramp.
7. A box of mass m1 sits on a table. It is connected, by a rope drawn through a pulley, to a box of mass m2 = 2.10 kg that is hanging off the side of the table. The coefficient of static friction between mass m1 and the table is 0.400, whereas the coefficient of kinetic friction is 0.295.
a. What minimum value of m1 will keep the boxes from starting to move?
b. What value of m1 will keep the boxes moving at constant speed?
8. A block, m1, is sliding on a 10 kg block, m2. The blocks are on a 20º slope and are connected by a light string looped over a pulley. All surfaces are frictionless. Find the acceleration of each block and the tension in the string that connects the blocks.
9. If the coefficient of friction is 0.2, what is the acceleration and what is the tension in the string?
Answers to Pulley Problems
Answers to selected problems are listed below.
a. The net force on each mass is given in the two equations (g = 9.8 m/s/s):
1.8a = 1.8g -T
1.2a = T - 1.2g
Since a and T are the same for both masses, add the two equations:
3.0a = .6g
a = .2g = 1.96 m/s/s
b. Substituting for a in the first equation:
1.8(1.96) = 1.8(9.8) - T
T = 14.1 N
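The same numbers can be verified with a few lines of Python (my own check, not part of the original answer key):

```python
# Atwood machine: a = (m1 - m2) g / (m1 + m2), T = m2 (g + a).
g = 9.8
m1, m2 = 1.8, 1.2                 # kg, larger and smaller mass
a = (m1 - m2) * g / (m1 + m2)
T = m2 * (g + a)                  # equals m1 * (g - a)
print(f"a = {a:.2f} m/s/s, T = {T:.1f} N")   # a = 1.96, T = 14.1
```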
Parker, TX Science Tutor
Find a Parker, TX Science Tutor
I am an experienced tutor and instructor in undergraduate physics. I tutored at the University of Texas at Dallas, where I was also a Teaching Assistant. I taught courses at Richland College and
Collin County Community College.
8 Subjects: including physical science, physics, calculus, geometry
...He is, however, equally as helpful in language studies. He is fluent in Spanish, and conversational in French. He, of course, has a great grasp of English, having spent many years as an ESL
37 Subjects: including chemistry, biochemistry, anatomy, ACT Science
...In 1990 I started work as a scientist at the Superconducting Super Collider (SSC). All of our research computers were UNIX and my desktop was a SUN SPARC workstation. My laboratory computer
was an HP running HP-UX. After the SSC was terminated, I bought my first home computer.
25 Subjects: including chemistry, networking (computer), MCAT, computer programming
...We all learn concepts and grasp material in a different way, so my tutoring methods are unique to the individual. I am the oldest in my family, so I have a great deal of patience with my
students. I believe that a balance of strict guidelines in terms of what is expected from a student on their end in addition to praise is important.
34 Subjects: including sociology, ecology, reading, biology
...Word problems are a particular challenge for a number of students for whom the following steps must first be modeled: 1) drawing an appropriate diagram, if required, 2) establishing the
correct algebraic representation(s) for the unknown(s) and then using this information to label the correct pa...
17 Subjects: including nutrition, organic chemistry, chemistry, biochemistry
Related Parker, TX Tutors
Parker, TX Accounting Tutors
Parker, TX ACT Tutors
Parker, TX Algebra Tutors
Parker, TX Algebra 2 Tutors
Parker, TX Calculus Tutors
Parker, TX Geometry Tutors
Parker, TX Math Tutors
Parker, TX Prealgebra Tutors
Parker, TX Precalculus Tutors
Parker, TX SAT Tutors
Parker, TX SAT Math Tutors
Parker, TX Science Tutors
Parker, TX Statistics Tutors
Parker, TX Trigonometry Tutors
Riemann integrable functions
December 20th 2008, 11:50 PM
Riemann integrable functions
How do you do the capital script R for Riemann integrable functions? It looks like the same "font" as $\ell$ but it is a capital R.
December 20th 2008, 11:59 PM
Chop Suey
This perhaps?
December 21st 2008, 01:32 AM
$\mathcal{R}$ ?
Can you draw it or describe it more precisely ? I've never heard of such an R (Doh)
And maybe it would be quicker to write "Riemann integrable" :D
December 21st 2008, 01:54 AM
December 21st 2008, 10:17 AM
No =(
I got it from Rudin...and I wanted it because it was pretty (Happy)...and as CB pointed out it is the \mathscr font...you can see some examples of it here http://www.ctan.org/tex-archive/info...symbols-a4.pdf on page 65
Thanks CB...its a shame we dont have it =(
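For anyone finding this thread later, a minimal compilable example (my own) showing both symbols; \mathscr comes from the mathrsfs package:

```latex
\documentclass{article}
\usepackage{mathrsfs}   % provides \mathscr (script capitals)
\begin{document}
Calligraphic: $\mathcal{R}$ \quad Script: $\mathscr{R}$
\end{document}
```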
Resonance in Westwoods, England
Mandala Formation in Westwoods, England
Westwoods Crop Formation Reiterates the Prime Cross Formula for Magnetic Resonance
by Alex Putney for Human-Resonance.org
June 21, 2011
Another spectacular crop formation - this time reported in Westwoods, England on June 21, 2011 - has rendered the sacred cross symbol known among many ancient cultures, variously called the Kalachakra, the Circle of the Four-Quarters, Hunab K'u, the Celtic Cross, the Rose Cross, etc. This beautiful formation is a simplified 2-dimensional image in living grains that references a higher-dimensional structure of cosmic resonance, referred to among the ancient Sanskrit traditions as the seed syllable 'OM'.
In studying this sacred symbol within the ancient cultural context, I had understood it as reflecting the global geometry of infrasound standing waves that are transduced by the axis-symmetric
geometry of the Orion pyramids of Giza, Egypt. I had been rendering spherical mandalas using the 2-dimensional algorithms of mathematicians and physicists like P. Bourke and A. Jadczyk and produced a
spherical Prime Cross quantum function rendering (above, at right) in 2004.
The spherical Prime Cross rendering bears a striking resemblance to the Westwoods crop circle (compared above), and provides a profound confirmation of my applications of this mathematics to the study
of spherical resonance patterns. The nonlinear physics of standing waves reveals this mandala formula as encoding the global infrasound pattern of Magnetic Resonance that defines the precise
geopositioning of the world's pyramids, megaliths and sacred sites.
Westwoods, England (51.39°N 1.76°W) is 2,240 miles from Giza, Egypt, a distance that is 9.0% of the Earth's mean circumference (of 24,892 miles). This sacred distance is shared by the nearby
megalithic site of Stonehenge. Another mandala crop formation that appeared in England in 2010, at Whitefield, presented the similar nonlinear octagonal formula z[n+1] = z[n]^2.
This geometric formula previously appeared in the fields on Lurkley Hill, in Lockeridge, England, and then again in nearby Wayland-Smithy. It is a fractal equation closely related to the Mandelbrot
Set, z[n+1] = z[n]^2 + c. First rendered by the Franco-American mathematician Benoit Mandelbrot in 1980, this formula appeared in 1991 in the wheat fields of Ickleton, England.
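As a minimal sketch of the iteration itself (my own code, independent of any claims here), the escape-time test used to render the Mandelbrot set from this formula:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    # c belongs to the set if z stays bounded (|z| <= 2) under z -> z^2 + c.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True: the origin never escapes
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, ... escapes quickly
```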
The multitude of standing wave resonance maps presented throughout this site were rendered using these sacred formulas, which are keys to the nature of human consciousness and the changes culminating
in the coming events of December 22, 2012. The revelation of the structure of the electron is but one in a series of profound discoveries that will altogether transform the human experience on this
I had originally published the Prime Cross rendering in my first book Phi and on this homepage in 2006, and it was republished by the Natural World Museum in Art in Action: Nature, Creativity and Our Collective Future (Earth Aware Editions, 2007, ISBN: 978-1-932771-77-0). Here is an excerpt of my 2006 writings on the subject of the Prime Cross (Phi, pp. 128-130):
This octagonal cross was once called the Celtic Cross, the Rose Cross by the Copts (as engraved at the Temple of Philae and painted on ceramics, above), and is referred to by Native American tribes
as the four-quartered hoop of the nation -a complex structure referenced by prime number geometries encoded in myriad artifacts from La Maná, Ecuador. In their collective presentation of the sacred
order of primes, the deeper holographic application of the information comes into focus.
Prime numbers are defined as numbers that are only divisible by 1 (and by themselves). The organization of prime numbers within the series of whole numbers has been a mystery to modern mathematics
until the work of Peter Plichta, a Düsseldorf chemist. In 1997 he put forth his theory of the structure of prime numbers being based on a cycle of 6, a product of the indivisible numbers 1, 2, and 3.
While not the first mathematician to recognize the six-cycle of the sequence of primes, his work has extended this understanding to the role of prime numbers in all of the structures underlying the
physical universe:
Apart from the numbers 2 and 3, all prime numbers occur in a cycle of 6: 6n ± 1 for n = 1, 2, 3, 4, .... For combination reasons this cycle produces a series of prime-number twins, [5, 7], [11, 13], [17, 19], [23, 25], ..., although with the number 25 we inevitably obtain the first square of a prime number from the function 6n ± 1 (the next composite number is the product of 5 x 7 = 35), which is not prime. The reason why the number six plays such an elementary role in the complex of whole numbers is that the numbers 1, 2 and 3 are indivisible. As a result, the complete number 6 must be surrounded by the expression: 6 - 1 = 5 and 6 + 1 = 7...
Plichta's search for the significance of prime numbers extends to the essential structures of nuclear chemistry and biochemistry, specifically that of atoms and the periodic table of elements, as
well as the amino acids and the DNA helix. The six-cycle structure of prime numbers can be visualized as a symmetric octagonal cross, with seven concentric circles divided into 24 radial points
(above). Being a universal constant, Plichta observes that "the 'Prime Number Cross' is not a human invention. It is in fact a model of the construction plan with which infinity was made finite in
the structure of the atoms."
This same model has also been derived by the quantum mechanical algorithms of theoretical physicist Arkadiusz Jadczyk in a more complex rendering known as the Octagonal Quantum Iterated Function
(QIF) (above, inset). The Prime Number Cross and the Octagonal QIF are synchronous patterns reflecting the structure of resonance inherent to atomic, molecular, planetary, solar and galactic structures.
Please help me with multiple comparisons - urgent (stats)
This is an "analysis of one way classified data".
In design of experiments you see it as CRD (completely randomized design).
The analysis is very straightforward.
Xi = the i-th observation.
Ti = the i-th row total, G = sum of the Ti. Then SS(T) = Sum(Ti*Ti/Ni) - cf, with df = 4 - 1 = 3,
where cf = G*G/N, Ni = the number of values in the i-th row, and N = sum of the Ni.
SST = Sum(Xi*Xi) - cf, with df = 20 - 1 = 19.
SSE = SST - SS(T), with df = 19 - 3 = 16.
F = MS(T)/MSE ~ F(3, 16) (the F distribution with 3 and 16 df),
where MS(T) = SS(T)/3 and MSE = SSE/16. Critical region: F > F(a, 3, 16), with a = 0.05 or 0.01 as you choose. Find F(a, 3, 16) in Biometrika tables.
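A quick illustration (mine, with made-up numbers) of the same one-way ANOVA in Python, using 4 groups of 5 observations so the degrees of freedom are 3 and 16 as above:

```python
from scipy import stats

g1 = [23, 25, 21, 24, 22]
g2 = [30, 28, 31, 29, 27]
g3 = [20, 19, 22, 21, 18]
g4 = [26, 27, 25, 28, 24]
F, p = stats.f_oneway(g1, g2, g3, g4)   # F ~ F(3, 16) under H0
print(f"F = {F:.2f}, p = {p:.4f}")
```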
Introduction: Type I and type II alignments
Online Lectures on Bioinformatics
Algorithms for the comparison of two sequences
This section provides the reader with a formal notation of the algorithms to compare two sequences, which are based on dynamic programming. Furthermore gap-functions, score- and distance-functions
are discussed in detail.
Introduction: Type I and type II alignments
Consider the following example of an alignment between two sequences "RDISLVKNAGI" and "RNILVSDAKNVGI"
R D I S L V - - - K N A G I
R N I - L V S D A K N V G I
As before dashes represent insertions or deletions (shortly called indels or gaps). The order in which the residues occur in the alignment is identical to the one in their respective sequences. Two
residues that are printed in the same column are called matched.
An alternative alignment of the first three residue pairs is possible, which motivates distinguishing two kinds of alignments. [Kru83] calls them trace and alignment; we want to use "type I" and "type II" alignment because we view both versions as having equal rights to be called alignments. In simple terms, a type I alignment is one that does not allow for adjacent gaps in opposite sequences, whereas a type II alignment does. A type I alignment can therefore be
represented as a sequence of successive residue pairs that do not violate the order of the residues in the sequences. In at least one of the sequences no residue may be skipped when going to the next
pair. The above example seen as a type I alignment can be expressed by listing the index-pairs for the matched residues:
( (1,1), (2,2), (3,3), (5,4), (6,5), (7,9), (8,10), (9,11), (10,12), (11,13))
E.g. (5,4) means that the 5th residue of the first sequence is matched with the 4th residue of the second sequence. This representation will be chosen to define the type I alignment. To describe a
type II alignment such a representation does not suffice.
In the case of the 20 amino acids, identical matched residues are not as frequent as in DNA. Consequently weighting schemes have been developed which attribute a value to a pair of matched amino
acids. One such scheme was devised by M. Dayhoff [DBH83] and is based on exchange frequencies between amino acids. This 20 x 20 matrix attributes different positive values (ranging from +2 to +17) to
exact matches and values between -8 and +7 for mismatches. The score of an alignment is then made up of the weights for the matching pairs in the alignment minus a penalty for every gap introduced.
The gap penalty will in general be a function g of the length of the gap. The example above scored by the Dayhoff matrix would give:
The intention behind such a scoring scheme is that the alignment which optimizes this score should best represent the biological similarity between the two sequences. Finding such an optimal
alignment is the central task in sequence comparison.
Most of the subsequent algorithms are based on the representation of an alignment as a path in a comparison matrix as shown in the below. The two kinds of alignment lead to two slightly different
versions of this representation. For a type I alignment the grid points of the comparison matrix are thought of as labeled with the corresponding residue pair. A path may move from one grid point to
any one to its bottom right without skipping more than one row or column simultaneously. The possible arcs from each point are depicted in the top right of the matrix. For a type II alignment there
are only three moves allowed which are shown in the top left of the right matrix. Interpreting these as matching the residues in a row/column (diagonal arc), deleting the residue corresponding to the
column of an arc (horizontal move) or deleting the residue corresponding to the row of an arc (vertical move) yields exactly the type II alignments corresponding to a path in this matrix.
The following sections will define the two types of alignments and their scores. Then for each of them a directed graph will be introduced according to the above description, together with a one-to-one mapping between alignments and paths in the graph. The arcs will be given weights such that under the mapping the length of a path is just the score of the corresponding alignment. The optimal path in such a graph therefore also defines the optimal alignment of the sequences. Using this framework we can give a common description and proof of the algorithms introduced in the literature ([NeW70], [San72], [Sel74], [WSB76], [Got82], [Wat84a]).
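As one concrete instance of this dynamic-programming framework, here is a compact sketch (my own, with arbitrary match/mismatch/gap scores rather than the Dayhoff weights) of global alignment scoring for the two example sequences:

```python
def nw_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    # Needleman-Wunsch score matrix with a linear gap penalty g(k) = k*gap.
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap                 # vertical moves: deletions
    for j in range(1, m + 1):
        S[0][j] = j * gap                 # horizontal moves: insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            S[i][j] = max(diag, S[i-1][j] + gap, S[i][j-1] + gap)
    return S[n][m]

print(nw_score("RDISLVKNAGI", "RNILVSDAKNVGI"))
```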
Comments are very welcome.
|
{"url":"http://lectures.molgen.mpg.de/Alg/Intro/index.html","timestamp":"2014-04-20T23:44:28Z","content_type":null,"content_length":"9741","record_id":"<urn:uuid:53c31ebb-c8b8-401d-a95e-6ed174fb344c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spatial Concepts
Oracle Spatial is an integrated set of functions and procedures that enables spatial data to be stored, accessed, and analyzed quickly and efficiently in an Oracle9i database.
Spatial data represents the essential location characteristics of real or conceptual objects as those objects relate to the real or conceptual space in which they exist.
1.1 What Is Oracle Spatial?
Oracle Spatial, often referred to as Spatial, provides a SQL schema and functions that facilitate the storage, retrieval, update, and query of collections of spatial features in an Oracle9i database.
Spatial consists of the following components:
● A schema (MDSYS) that prescribes the storage, syntax, and semantics of supported geometric data types
● A spatial indexing mechanism
● A set of operators and functions for performing area-of-interest queries, spatial join queries, and other spatial analysis operations
● Administrative utilities
The spatial component of a spatial feature is the geometric representation of its shape in some coordinate space. This is referred to as its geometry.
1.2 Object-Relational Model
Spatial supports the object-relational model for representing geometries. The object-relational model uses a table with a single column of MDSYS.SDO_GEOMETRY and a single row per geometry instance.
The object-relational model corresponds to a "SQL with Geometry Types" implementation of spatial feature tables in the OpenGIS ODBC/SQL specification for geospatial features.
Note: The relational geometry model of Oracle Spatial is no longer supported, effective with this release. Only the object-relational model is supported.
● Support for many geometry types, including arcs, circles, compound polygons, compound line strings, and optimized rectangles
● Ease of use in creating and maintaining indexes and in performing spatial queries
● Index maintenance by the Oracle9i database server
● Geometries modeled in a single row and single column
● Optimal performance
1.3 Introduction to Spatial Data
Oracle Spatial is designed to make spatial data management easier and more natural to users of location-enabled applications and Geographic Information System (GIS) applications. Once this data is
stored in an Oracle database, it can be easily manipulated, retrieved, and related to all the other data stored in the database.
A common example of spatial data can be seen in a road map. A road map is a two-dimensional object that contains points, lines, and polygons that can represent cities, roads, and political boundaries
such as states or provinces. A road map is a visualization of geographic information. The location of cities, roads, and political boundaries that exist on the surface of the Earth are projected onto
a two-dimensional display or piece of paper, preserving the relative positions and relative distances of the rendered objects.
The data that indicates the Earth location (latitude and longitude, or height and depth) of these rendered objects is the spatial data. When the map is rendered, this spatial data is used to project
the locations of the objects on a two-dimensional piece of paper. A GIS is often used to store, retrieve, and render this Earth-relative spatial data.
Types of spatial data that can be stored using Spatial other than GIS data include data from computer-aided design (CAD) and computer-aided manufacturing (CAM) systems. Instead of operating on
objects on a geographic scale, CAD/CAM systems work on a smaller scale, such as for an automobile engine or printed circuit boards.
The differences among these systems are only in the relative sizes of the data, not the data's complexity. The systems might all actually involve the same number of data points. On a geographic
scale, the location of a bridge can vary by a few tenths of an inch without causing any noticeable problems to the road builders, whereas if the diameter of an engine's pistons are off by a few
tenths of an inch, the engine will not run. A printed circuit board is likely to have many thousands of objects etched on its surface that are no bigger than the smallest detail shown on a road
builder's blueprints.
These applications all store, retrieve, update, or query some collection of features that have both nonspatial and spatial attributes. Examples of nonspatial attributes are name, soil_type,
landuse_classification, and part_number. The spatial attribute is a coordinate geometry, or vector-based representation of the shape of the feature.
1.4 Geometry Types
A geometry is an ordered sequence of vertices that are connected by straight line segments or circular arcs. The semantics of the geometry are determined by its type. Spatial supports several
primitive types and geometries composed of collections of these types, including two-dimensional:
● Points and point clusters
● Line strings
● n-point polygons
● Arc line strings (All arcs are generated as circular arcs.)
● Arc polygons
● Compound polygons
● Compound line strings
● Circles
● Optimized rectangles
Two-dimensional points are elements composed of two ordinates, X and Y, often corresponding to longitude and latitude. Line strings are composed of one or more pairs of points that define line
segments. Polygons are composed of connected line strings that form a closed ring and the area of the polygon is implied.
Self-crossing polygons are not supported, although self-crossing line strings are supported. If a line string crosses itself, it does not become a polygon. A self-crossing line string does not have
any implied area.
Figure 1-1 illustrates the geometric types.
Figure 1-1 Geometric Types
Spatial also supports the storage and indexing of three-dimensional and four-dimensional geometric types, where three or four coordinates are used to define each vertex of the object being defined.
However, spatial functions (except for LRS functions and MBR-related functions) can work with only the first two dimensions, and all spatial operators except SDO_FILTER are disabled if the spatial
index has been created on more than two dimensions.
1.5 Data Model
The Spatial data model is a hierarchical structure consisting of elements, geometries, and layers, which correspond to representations of spatial data. Layers are composed of geometries, which in
turn are made up of elements.
For example, a point might represent a building location, a line string might represent a road or flight path, and a polygon might represent a state, city, zoning district, or city block.
1.5.1 Element
An element is the basic building block of a geometry. The supported spatial element types are points, line strings, and polygons. For example, elements might model star constellations (point
clusters), roads (line strings), and county boundaries (polygons). Each coordinate in an element is stored as an X,Y pair. The exterior ring and the interior ring of a polygon with holes are
considered as two distinct elements that together make up a complex polygon.
Point data consists of one coordinate. Line data consists of two coordinates representing a line segment of the element. Polygon data consists of coordinate pair values, one vertex pair for each line
segment of the polygon. Coordinates are defined in order around the polygon (counterclockwise for an exterior polygon ring, clockwise for an interior polygon ring).
1.5.2 Geometry
A geometry (or geometry object) is the representation of a spatial feature, modeled as an ordered set of primitive elements. A geometry can consist of a single element, which is an instance of one of
the supported primitive types, or a homogeneous or heterogeneous collection of elements. A multipolygon, such as one used to represent a set of islands, is a homogeneous collection. A heterogeneous
collection is one in which the elements are of different types, for example, a point and a polygon.
An example of a geometry might describe the buildable land in a town. This could be represented as a polygon with holes where water or zoning prevents construction.
1.5.3 Layer
A layer is a collection of geometries having the same attribute set. For example, one layer in a GIS might include topographical features, while another describes population density, and a third
describes the network of roads and bridges in the area (lines and points). Each layer's geometries and associated spatial index are stored in the database in standard tables.
1.5.4 Coordinate System
A coordinate system (also called a spatial reference system) is a means of assigning coordinates to a location and establishing relationships between sets of such coordinates. It enables the
interpretation of a set of coordinates as a representation of a position in a real world space.
Any spatial data has a coordinate system associated with it. The coordinate system can be georeferenced (related to a specific representation of the Earth) or not georeferenced (that is, Cartesian,
and not related to a specific representation of the Earth). If the coordinate system is georeferenced, it has a default unit of measurement (such as meters) associated with it, but you can have
Spatial automatically return results in another specified unit (such as miles). (For more information about unit of measurement support, see Section 2.6.)
Before Oracle Spatial release 8.1.6, geometries (objects of type SDO_GEOMETRY) were stored as strings of coordinates without reference to any specific coordinate system. Spatial functions and
operators always assumed a coordinate system that had the properties of an orthogonal Cartesian system, and sometimes did not provide correct results if Earth-based geometries were stored in latitude
and longitude coordinates. With release 8.1.6, Spatial provided support for many different coordinate systems, and for converting data freely between different coordinate systems.
Spatial data can be associated with a Cartesian, geodetic (geographical), projected, or local coordinate system:
● Cartesian coordinates are coordinates that measure the position of a point from a defined origin along axes that are perpendicular in the represented two-dimensional or three-dimensional space.
If a coordinate system is not explicitly associated with a geometry, a Cartesian coordinate system is assumed.
● Geodetic coordinates (sometimes called geographic coordinates) are angular coordinates (longitude and latitude), closely related to spherical polar coordinates, and are defined relative to a
particular Earth geodetic datum. (A geodetic datum is a means of representing the figure of the Earth and is the reference for the system of geodetic coordinates.)
● Projected coordinates are planar Cartesian coordinates that result from performing a mathematical mapping from a point on the Earth's surface to a plane. There are many such mathematical
mappings, each used for a particular purpose.
● Local coordinates are Cartesian coordinates in a non-Earth (non-georeferenced) coordinate system, often used for CAD applications and local surveys.
When performing operations on geometries, Spatial uses either a Cartesian or curvilinear computational model, as appropriate for the coordinate system associated with the spatial data.
For more information about coordinate system support in Spatial, including geodetic, projected, and local coordinates and coordinate system transformation, see Chapter 5.
1.5.5 Tolerance
Tolerance is used to associate a level of precision with spatial data. Tolerance reflects the distance that two points can be apart and still be considered the same (for example, to accommodate
rounding errors). The tolerance value must be a number greater than zero. The significance of the value depends on whether or not the spatial data is associated with a geodetic
coordinate system. (Geodetic and other types of coordinate systems are described in Section 1.5.4.)
● For geodetic data (such as data identified by longitude and latitude coordinates), the tolerance value is a number of meters. For example, a tolerance value of 100 indicates a tolerance of 100 meters.
● For non-geodetic data, the tolerance value is a number of the units that are associated with the coordinate system associated with the data. For example, if the unit of measurement is miles, a
tolerance value of 0.005 indicates a tolerance of 0.005 (that is, 1/200) mile (approximately 105 feet), and a tolerance value of 2 indicates a tolerance of two miles.
In both cases, the smaller the tolerance value, the more precision is to be associated with the data.
A tolerance value is specified in two cases:
1.5.5.1 In the Geometry Metadata for a Layer
The dimensional information for a layer includes a tolerance value. Specifically, the DIMINFO column (described in Section 2.4.3) of the xxx_SDO_GEOM_METADATA views includes an SDO_TOLERANCE value.
If a function accepts an optional tolerance parameter and this parameter is null or not specified, the SDO_TOLERANCE value of the layer is used. Using the non-geodetic data from the example in
Section 2.1, the actual distance between geometries cola_b and cola_d is 0.846049894. If a query uses the SDO_GEOM.SDO_DISTANCE function to return the distance between cola_b and cola_d and does not
specify a tolerance parameter value, the result depends on the SDO_TOLERANCE value of the layer. For example:
● If the SDO_TOLERANCE value of the layer is 0.005, this query returns 0.846049894.
● If the SDO_TOLERANCE value of the layer is 0.5, this query returns 0.
The zero result occurs because Spatial first constructs an imaginary buffer of the tolerance value (0.5) around each geometry to be considered, and the buffers around cola_b and cola_d overlap in
this case.
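As a sketch, such a distance query (using the cola_markets table and shape column from the Section 2.1 example) with an explicit tolerance of 0.005 might look like this:
SELECT SDO_GEOM.SDO_DISTANCE(c_b.shape, c_d.shape, 0.005)
FROM cola_markets c_b, cola_markets c_d
WHERE c_b.name = 'cola_b' AND c_d.name = 'cola_d';
If the tolerance parameter is omitted or null, the SDO_TOLERANCE value of the layer applies, as described above.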
You can therefore take either of two approaches in selecting an SDO_TOLERANCE value for a layer:
● The value can reflect the desired level of precision in queries for distances between objects. For example, if two non-geodetic geometries 0.8 units apart should be considered as separated,
specify a small SDO_TOLERANCE value such as 0.05 or smaller.
● The value can reflect the precision of the values associated with geometries in the layer. For example, if all the geometries in a non-geodetic layer are defined using integers and if two objects
0.8 units apart should not be considered as separated, an SDO_TOLERANCE value of 0.5 is appropriate. To have greater precision in any query, you must override the default by specifying the
tolerance parameter.
With non-geodetic data, the guideline to follow for most instances of the second case (precision of the values of the geometries in the layer) is: take the highest level of precision in the geometry
definitions, and use .5 at the next level as the SDO_TOLERANCE value. For example, if geometries are defined using integers (as in the simplified example in Section 2.1), the appropriate value is
0.5. However, if geometries are defined using numbers up to 4 decimal positions (for example, 31.2587), such as with longitude and latitude values, the appropriate value is 0.00005.
Note: This guideline, however, should not be used if the geometries include any polygons that are so narrow at any point that the distance between facing sides is less than the proposed tolerance value. Be sure that the tolerance value is less than the shortest distance between any two sides in any polygon.
Moreover, if you encounter "invalid geometry" errors with inserted or updated geometries, and if the geometries are in fact valid, consider increasing the precision of the tolerance value (for example, changing 0.00005 to 0.000005).
1.5.5.2 As an Input Parameter
Many Spatial functions accept an optional tolerance parameter, which (if specified) overrides the default tolerance value for the layer (explained in Section 1.5.5.1). If the distance between two
points is less than or equal to the tolerance value, Spatial considers the two points to be a single point. Thus, tolerance is usually a reflection of how accurate or precise users perceive their
spatial data to be.
For example, assume that you want to know which restaurants are within 5 kilometers of your house. Assume also that Maria's Pizzeria is 5.1 kilometers from your house. If the spatial data has a
geodetic coordinate system and if you ask, Find all restaurants within 5 kilometers and use a tolerance of 100 (or greater, such as 500), Maria's Pizzeria will be included, because 5.1 kilometers
(5100 meters) is within 100 meters of 5 kilometers (5000 meters). However, if you specify a tolerance less than 100 (such as 50), Maria's Pizzeria will not be included.
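A hedged sketch of such a query follows; the restaurants table, its shape column, and the :my_house bind variable (an SDO_GEOMETRY point) are hypothetical, and the full operator syntax is given in Chapter 10:
SELECT r.name
FROM restaurants r
WHERE SDO_WITHIN_DISTANCE(r.shape, :my_house, 'distance=5 unit=km') = 'TRUE';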
Tolerance values for Spatial functions are typically very small, although the best value in each case depends on the kinds of applications that use or will use the data.
1.6 Query Model
Spatial uses a two-tier query model to resolve spatial queries and spatial joins. The term is used to indicate that two distinct operations are performed to resolve queries. The output of the two
combined operations yields the exact result set.
The two operations are referred to as primary and secondary filter operations.
● The primary filter permits fast selection of candidate records to pass along to the secondary filter. The primary filter compares geometry approximations to reduce computation complexity and is
considered a lower-cost filter. Because the primary filter compares geometric approximations, it returns a superset of the exact result set.
● The secondary filter applies exact computations to geometries that result from the primary filter. The secondary filter yields an accurate answer to a spatial query. The secondary filter
operation is computationally expensive, but it is only applied to the primary filter results, not the entire data set.
Figure 1-2 illustrates the relationship between the primary and secondary filters.
Description of the illustration query.gif
As shown in Figure 1-2, the primary filter operation on a large input data set produces a smaller candidate set, which contains at least the exact result set and may contain more records. The
secondary filter operation on the smaller candidate set produces the exact result set.
Spatial uses a spatial index to implement the primary filter. Spatial does not require the use of both the primary and secondary filters. In some cases, just using the primary filter is sufficient.
For example, a zoom feature in a mapping application queries for data that has any interaction with a rectangle representing visible boundaries. The primary filter very quickly returns a superset of
the query. The mapping application can then apply clipping routines to display the target area.
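As an illustration, a primary-filter-only window query might be sketched as follows; the features table is hypothetical, the rectangle corner values are arbitrary, and querytype=WINDOW indicates that the query window is a transient geometry rather than one from an indexed table:
SELECT f.feature_id
FROM features f
WHERE SDO_FILTER(f.shape,
  MDSYS.SDO_GEOMETRY(2003, NULL, NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 3),
    MDSYS.SDO_ORDINATE_ARRAY(0, 0, 100, 100)),
  'querytype=WINDOW') = 'TRUE';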
The purpose of the primary filter is to quickly create a subset of the data and reduce the processing burden on the secondary filter. The primary filter therefore should be as efficient (that is,
selective yet fast) as possible. This is determined by the characteristics of the spatial index on the data.
For more information about querying spatial data, see Section 4.2.
1.7 Indexing of Spatial Data
The introduction of spatial indexing capabilities into the Oracle database engine is a key feature of the Spatial product. A spatial index, like any other index, provides a mechanism to limit
searches, but in this case based on spatial criteria such as intersection and containment. A spatial index is needed to:
● Find objects within an indexed data space that interact with a given point or area of interest (window query)
● Find pairs of objects from within two indexed data spaces that interact spatially with each other (spatial join)
A spatial index is considered a logical index. The entries in the spatial index are dependent on the location of the geometries in a coordinate space, but the index values are in a different domain.
Index entries may be ordered using a linearly ordered domain, and the coordinates for a geometry may be pairs of integer, floating-point, or double-precision numbers.
Oracle Spatial lets you use R-tree indexing (the default) or quadtree indexing, or both. Each index type is appropriate in different situations. You can maintain both an R-tree and quadtree index on
the same geometry column, by using the add_index parameter with the ALTER INDEX statement (described in Chapter 8), and you can choose which index to use for a query by specifying the idxtab1 and/or
idxtab2 parameters with certain Spatial operators, such as SDO_RELATE, described in Chapter 10.
In choosing whether to use an R-tree or quadtree index for a spatial application, consider the items in Table 1-1.
Table 1-1 Choosing R-tree or Quadtree Indexing
R-tree: The approximation of geometries cannot be fine-tuned. (Spatial uses the minimum bounding rectangles, as described in Section 1.7.1.)
Quadtree: The approximation of geometries can be fine-tuned by setting the tiling level and number of tiles.

R-tree: Index creation and tuning are easier.
Quadtree: Tuning is more complex, and setting the appropriate tuning parameter values can affect performance significantly.

R-tree: Less storage is required.
Quadtree: More storage is required.

R-tree: If your application workload includes nearest-neighbor queries (SDO_NN operator), R-tree indexes are faster.
Quadtree: If your application workload includes nearest-neighbor queries (SDO_NN operator), quadtree indexes are slower.

R-tree: If there is heavy update activity to the spatial column, an R-tree index may not be a good choice.
Quadtree: Heavy update activity does not affect the performance of a quadtree index.

R-tree: You can index up to four dimensions.
Quadtree: You can index only two dimensions.

R-tree: An R-tree index is recommended for indexing geodetic data if SDO_WITHIN_DISTANCE queries will be used on it.

R-tree: An R-tree index is required for a whole-earth index.
Testing of R-tree and quadtree indexes with many workloads and operators is ongoing, and results and recommendations will be documented as they become available. However, before choosing an index
type for an application, you should understand the concepts and options associated with both R-tree indexing (described in Section 1.7.1) and quadtree indexing (described in Section 1.7.2).
1.7.1 R-tree Indexing
A spatial R-tree index can index spatial data of up to four dimensions. An R-tree index approximates each geometry by a single rectangle that minimally encloses the geometry (called the minimum
bounding rectangle, or MBR), as shown in Figure 1-3.
Figure 1-3 MBR Enclosing a Geometry
Description of the illustration rt_mbr.gif
For a layer of geometries, an R-tree index consists of a hierarchical index on the MBRs of the geometries in the layer, as shown in Figure 1-4.
Figure 1-4 R-tree Hierarchical Index on MBRs
Description of the illustration rt_tree.gif
In Figure 1-4:
● 1 through 9 are geometries in a layer.
● a, b, c, and d are the leaf nodes of the R-tree index, and contain minimum bounding rectangles of geometries, along with pointers to the geometries. For example, a contains the MBR of geometries
1 and 2, b contains the MBR of geometries 3 and 4, and so on.
● A contains the MBR of a and b, and B contains the MBR of c and d.
● The root contains the MBR of A and B (that is, the entire area shown).
An R-tree index is stored in the spatial index table (SDO_INDEX_TABLE in the USER_SDO_INDEX_METADATA view, described in Section 2.5). The R-tree index also maintains a sequence number generator
(SDO_RTREE_SEQ_NAME in the USER_SDO_INDEX_METADATA view) to ensure that simultaneous updates by concurrent users can be made to the index.
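As a sketch, creating an R-tree index (the default index type) on a hypothetical territories table might look like the following; this assumes the layer's dimensional metadata has already been inserted into the USER_SDO_GEOM_METADATA view (see Section 2.4):
CREATE INDEX territories_sidx ON territories(shape)
INDEXTYPE IS MDSYS.SPATIAL_INDEX;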
1.7.1.1 R-tree Quality
A substantial number of insert and delete operations affecting an R-tree index may degrade the quality of the R-tree structure, which may adversely affect query performance.
The R-tree is a hierarchical tree structure with nodes at different heights of the tree. The performance of an R-tree index structure for queries is roughly proportional to the area and perimeter of
the index nodes of the R-tree. The area covered at level 0 represents the area occupied by the minimum bounding rectangles of the data geometries, the area at level 1 indicates the area covered by
leaf-level R-tree nodes, and so on. The original ratio of the area at the root (topmost level) to the area at level 0 can change over time based on updates to the table; and if there is a degradation
in that ratio (that is, if it increases significantly), rebuilding the index may help the performance of queries.
Spatial provides several functions and procedures related to the quality of an R-tree index:
● SDO_TUNE.ANALYZE_RTREE provides advice about whether or not an index needs to be rebuilt. It computes the current index quality score and compares it to the quality score when the index was
created or most recently rebuilt, and it displays a recommendation.
● SDO_TUNE.RTREE_QUALITY returns the current index quality score.
● SDO_TUNE.QUALITY_DEGRADATION returns the current index quality degradation.
These functions and procedures are described in Chapter 16.
To rebuild an R-tree index, use the ALTER INDEX REBUILD statement, which is described in Chapter 8.
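A minimal sketch, reusing the hypothetical index name from the example above:
ALTER INDEX territories_sidx REBUILD;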
1.7.2 Quadtree Indexing
In the linear quadtree indexing scheme, the coordinate space (for the layer where all geometric objects are located) is subjected to a process called tessellation, which defines exclusive and
exhaustive cover tiles for every stored geometry. Tessellation is done by decomposing the coordinate space in a regular hierarchical manner. The range of coordinates, the coordinate space, is viewed
as a rectangle. At the first level of decomposition, the rectangle is divided into halves along each coordinate dimension generating four tiles. Each tile that interacts with the geometry being
tessellated is further decomposed into four tiles. This process continues until some termination criteria, such as size of the tiles or the maximum number of tiles to cover the geometry, is met.
Spatial can use either fixed-size or variable-sized tiles to cover a geometry:
● Fixed-size tiles are controlled by tile resolution. If the resolution is the sole controlling factor, then tessellation terminates when the coordinate space has been decomposed a specific number
of times. Therefore, each tile is of a fixed size and shape.
● Variable-sized tiling is controlled by the value supplied for the maximum number of tiles. If the number of tiles per geometry, n, is the sole controlling factor, the tessellation terminates when
n tiles have been used to cover the given geometry.
Fixed-size tile resolution and the number of variable-sized tiles used to cover a geometry are user-selectable parameters called SDO_LEVEL and SDO_NUMTILES, respectively. Smaller fixed-size tiles or
more variable-sized tiles provide better geometry approximations. The smaller the number of tiles, or the larger the tiles, the coarser the approximations.
Spatial supports two quadtree indexing types, reflecting two valid combinations of SDO_LEVEL and SDO_NUMTILES values:
● Fixed indexing: a non-null and non-zero SDO_LEVEL value and a null or zero (0) SDO_NUMTILES value, resulting in fixed-sized tiles. Fixed indexing is described in Section 1.7.2.2; a creation sketch follows this list.
● Hybrid indexing: non-null and non-zero values for SDO_LEVEL and SDO_NUMTILES, resulting in two sets of tiles per geometry. One set contains fixed-size tiles and the other set contains
variable-sized tiles. Hybrid indexing is not recommended for most spatial applications, and is described in Appendix B.
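As a sketch, a fixed quadtree index on the same hypothetical territories table might be created as follows; the sdo_level value of 8 is purely illustrative (see Section 1.7.2.2 on choosing a tiling level):
CREATE INDEX territories_qtidx ON territories(shape)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS ('sdo_level=8');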
1.7.2.1 Tessellation of a Layer During Indexing
The process of determining which tiles cover a given geometry is called tessellation. The tessellation process is a quadtree decomposition, where the two-dimensional coordinate space is broken down
into four covering tiles of equal size. Successive tessellations divide those tiles that interact with the geometry down into smaller tiles, and this process continues until the desired level or
number of tiles has been achieved. The results of the tessellation process on a geometry are stored in a table, referred to as the SDOINDEX table.
The tiles at a particular level can be linearly sorted by systematically visiting tiles in an order determined by a space-filling curve as shown in Figure 1-5. The tiles can also be assigned unique
numeric identifiers, known as Morton codes or z-values. The terms tile and tile code will be used interchangeably in this and other sections related to spatial indexing.
Figure 1-5 Quadtree Decomposition and Morton Codes
Description of the illustration quadtree.gif
1.7.2.2 Fixed Indexing
Fixed spatial indexing uses tiles of equal size to cover a geometry. Because all the tiles are the same size, they all have codes of the same length, and the standard SQL equality operator (=) can be
used to compare tiles during a join operation. This results in excellent performance characteristics.
Two geometries are likely to interact, and hence pass the primary filter stage, if they share one or more tiles. The SQL statement for the primary filter stage is:
SELECT DISTINCT <select_list for geometry identifiers>
FROM table1_sdoindex A, table2_sdoindex B
WHERE A.sdo_code = B.sdo_code
The effectiveness and efficiency of this indexing method depends on the tiling level and the variation in size of the geometries in the layer. If you select a small fixed-size tile to cover small
geometries and then try to use the same size tile to cover a very large geometry, a large number of tiles would be required. However, if the chosen tile size is large, so that fewer tiles are
generated in the case of a large geometry, then the index selectivity suffers because the large tiles do not approximate the small geometries very well. Figure 1-6 and Figure 1-7 illustrate the
relationships between tile size, selectivity, and the number of cover tiles.
With a small fixed-size tile as shown in Figure 1-6, selectivity is good, but a large number of tiles is needed to cover large geometries. A window query would easily identify geometries A and B, but
would reject C.
Figure 1-6 Fixed-Size Tiling with Many Small Tiles
Description of the illustration fixedtiling.gif
With a large fixed-size tile as shown in Figure 1-7, fewer tiles are needed to cover the geometries, but the selectivity is not as good. The same window query as in Figure 1-6 would probably pick up
all three geometries. Any object that shares tile T1 or T2 would identify object C as a candidate, even though the objects may be far apart, such as objects B and C are in Figure 1-7.
Figure 1-7 Fixed-Size Tiling with Fewer Large Tiles
Description of the illustration largetiles.gif
You can use the SDO_TUNE.ESTIMATE_TILING_LEVEL function or the tiling wizard of the Spatial Index Advisor tool in Oracle Enterprise Manager to help determine an appropriate tiling level for your data.
Figure 1-8 illustrates geometry 1013 tessellated to three fixed-sized tiles at level 1. The codes for these cover tiles are then stored in an SDOINDEX table.
Figure 1-8 Tessellated Geometry
Description of the illustration tessgeo.gif
Only three of the four tiles generated by the first tessellation interact with the geometry. Only those tiles that interact with the geometry are stored in the SDOINDEX table, as shown in Table 1-2.
In this example, three fixed-size tiles are used. The table structure is shown for illustrative purposes only, because you should not directly access the index tables.
Table 1-2 SDOINDEX Table Using Fixed-Size Tiles
SDO_GID <number>   SDO_CODE <raw>
1013               T0
1013               T2
1013               T3
All elements in a geometry are tessellated. In a multielement geometry such as 1013, Element 1 is already covered by tile T2 from the tessellation of Element 0. If, however, the specified tiling
resolution was such that tile T2 was further subdivided and one of these smaller tiles was completely contained in Element 1, then that tile would be excluded because it would not interact with the geometry.
1.8 Spatial Relations and Filtering
Spatial uses secondary filters to determine the spatial relationship between entities in the database. The spatial relation is based on geometry locations. The most common spatial relations are based
on topology and distance. For example, the boundary of an area consists of a set of curves that separates the area from the rest of the coordinate space. The interior of an area consists of all
points in the area that are not on its boundary. Given this, two areas are said to be adjacent if they share part of a boundary but do not share any points in their interior.
The distance between two spatial objects is the minimum distance between any points in them. Two objects are said to be within a given distance of one another if their distance is less than the given distance.
To determine spatial relations, Spatial has several secondary filter methods:
● The SDO_RELATE operator evaluates topological criteria.
● The SDO_WITHIN_DISTANCE operator determines if two spatial objects are within a specified distance of each other.
● The SDO_NN operator identifies the nearest neighbors for a spatial object.
The syntax of these operators is given in Chapter 10.
The SDO_RELATE operator implements a 9-intersection model for categorizing binary topological relations between points, lines, and polygons. Each spatial object has an interior, a boundary, and an
exterior. The boundary consists of points or lines that separate the interior from the exterior. The boundary of a line consists of its end points. The boundary of a polygon is the line that
describes its perimeter. The interior consists of points that are in the object but not on its boundary, and the exterior consists of those points that are not in the object.
Given that an object A has 3 components (a boundary Ab, an interior Ai, and an exterior Ae), any pair of objects has 9 possible interactions between their components. Pairs of components have an
empty (0) or a non-empty (1) set intersection. The set of interactions between 2 geometries is represented by a 9-intersection matrix that specifies which pairs of components intersect and which do
not. Figure 1-9 shows the 9-intersection matrix for 2 polygons that are adjacent to one another. This matrix yields the following bit mask, generated in row-major form: "101001111".
Figure 1-9 The 9-Intersection Model
Description of the illustration 9inter.gif
Some of the topological relationships identified in the seminal work by Professor Max Egenhofer (University of Maine, Orono) and colleagues have names associated with them. Spatial uses the following names:
● DISJOINT -- The boundaries and interiors do not intersect.
● TOUCH -- The boundaries intersect but the interiors do not intersect.
● OVERLAPBDYDISJOINT -- The interior of one object intersects the boundary and interior of the other object, but the two boundaries do not intersect. This relationship occurs, for example, when a
line originates outside a polygon and ends inside that polygon.
● OVERLAPBDYINTERSECT -- The boundaries and interiors of the two objects intersect.
● EQUAL -- The two objects have the same boundary and interior.
● CONTAINS -- The interior and boundary of one object are completely contained in the interior of the other object.
● COVERS -- The interior of one object is completely contained in the interior of the other object and their boundaries intersect.
● INSIDE -- The opposite of CONTAINS. A INSIDE B implies B CONTAINS A.
● COVEREDBY -- The opposite of COVERS. A COVEREDBY B implies B COVERS A.
● ON -- The interior and boundary of one object are on the boundary of the other object (and the second object covers the first object). This relationship occurs, for example, when a line is on the
boundary of a polygon.
● ANYINTERACT -- The objects are non-disjoint.
Figure 1-10 illustrates these topological relationships.
Figure 1-10 Topological Relationships
Description of the illustration top_rel.gif
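As a sketch, one of these named relationships is requested through the mask keyword of the SDO_RELATE operator; the parcels table and its columns are hypothetical, and the full syntax is given in Chapter 10:
SELECT a.parcel_id
FROM parcels a, parcels b
WHERE b.parcel_id = 42
AND SDO_RELATE(a.shape, b.shape, 'mask=TOUCH querytype=WINDOW') = 'TRUE';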
The SDO_WITHIN_DISTANCE operator determines if two spatial objects, A and B, are within a specified distance of one another. This operator first constructs a distance buffer, D[b], around the
reference object B. It then checks that A and D[b] are non-disjoint. The distance buffer of an object consists of all points within the given distance from that object. Figure 1-11 shows the distance
buffers for a point, a line, and a polygon.
Figure 1-11 Distance Buffers for Points, Lines, and Polygons
Description of the illustration buffers.gif
In the geometries shown in Figure 1-11:
● The dashed lines represent distance buffers. Notice how the buffer is rounded near the corners of the objects.
● The geometry on the right is a polygon with a hole: the large rectangle is the exterior polygon ring and the small rectangle is the interior polygon ring (the hole). The dashed line outside the
large rectangle is the buffer for the exterior ring, and the dashed line inside the small rectangle is the buffer for the interior ring.
The SDO_NN operator returns a specified number of objects from a geometry column that are closest to a specified geometry (for example, the five closest restaurants to a city park). In determining
how close two geometry objects are, the shortest possible distance between any two points on the surface of each object is used.
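A sketch of the five-closest-restaurants example; the restaurants table and the :city_park bind variable are hypothetical, and sdo_num_res limits how many neighbors the operator returns:
SELECT r.name
FROM restaurants r
WHERE SDO_NN(r.shape, :city_park, 'sdo_num_res=5') = 'TRUE';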
1.9 Spatial Aggregate Functions
SQL has long had aggregate functions, which are used to aggregate the results of a SQL query. The following example uses the SUM aggregate function to aggregate employee salaries by department:
SELECT SUM(salary), dept
FROM employees
GROUP BY dept;
Oracle Spatial aggregate functions aggregate the results of SQL queries involving geometry objects. Spatial aggregate functions return a geometry object of type SDO_GEOMETRY. For example, the
following statement returns the minimum bounding rectangle of all the geometries in a table (using the definitions and data from Section 2.1):
SELECT SDO_AGGR_MBR(shape) FROM cola_markets;
The following example returns the union of all geometries except cola_d:
SELECT SDO_AGGR_UNION(MDSYS.SDOAGGRTYPE(c.shape, 0.005))
FROM cola_markets c WHERE c.name < 'cola_d';
All geometries used with spatial aggregate functions must be defined using 4-digit SDO_GTYPE values (that is, must be in the format used by Oracle Spatial release 8.1.6 or higher). For information
about SDO_GTYPE values, see Section 2.2.1.
For reference information about the spatial aggregate functions and examples of their use, see Chapter 12.
1.9.1 SDOAGGRTYPE Object Type
Many spatial aggregate functions accept an input parameter of type MDSYS.SDOAGGRTYPE. Oracle Spatial defines the object type SDOAGGRTYPE as:
CREATE TYPE sdoaggrtype AS OBJECT (
geometry MDSYS.SDO_GEOMETRY,
tolerance NUMBER);
Note: Do not use SDOAGGRTYPE as the data type for a column in a table. Use this type only in calls to spatial aggregate functions.
The tolerance value in the SDOAGGRTYPE definition should be the same as the SDO_TOLERANCE value specified in the DIMINFO in the xxx_SDO_GEOM_METADATA views for the geometries, unless you have a
specific reason for wanting a different value. For more information about tolerance, see Section 1.5.5; for information about the xxx_SDO_GEOM_METADATA views, see Section 2.4.
The tolerance value in the SDOAGGRTYPE definition can affect the result of a spatial aggregate function. Figure 1-12 shows a spatial aggregate union (SDO_AGGR_UNION) operation of two geometries using
two different tolerance values: one smaller and one larger than the distance between the geometries.
Figure 1-12 Tolerance in an Aggregate Union Operation
Description of the illustration tol_union.gif
In the first aggregate union operation in Figure 1-12, where the tolerance is less than the distance between the rectangles, the result is a compound geometry consisting of two rectangles. In the
second aggregate union operation, where the tolerance is greater than the distance between the rectangles, the result is a single geometry.
1.10 Geocoding
Geocoding is the process of converting tables of address data into standardized address, location, and possibly other data. The result of a geocoding operation is the pair of longitude and latitude
coordinates that correspond with the input address or location. For example, if the input address is 22 Monument Square, Concord, MA 01742, the result of the geocoding operation is the longitude -71.34937 together with the corresponding latitude.
Given a geocoded address, you can then perform proximity or location queries using a spatial engine, such as Oracle Spatial, or demographic analysis using tools and data from Oracle's business
partners. In addition, geocoded data can be used with other spatial data such as block group, postal code, and county code for association with demographic information. Results of analyses or queries
can be presented as maps, in addition to tabular formats, using third-party software integrated with Oracle Spatial.
Oracle Spatial is integrated with all major geocoding service providers. The usual and recommended approach for application developers is to use the API for the geocoding provider to obtain a
geocoded result (longitude/latitude coordinate pair) for an address, and then use these coordinates to construct an MDSYS.SDO_GEOMETRY object for input to a spatial operator, function, or procedure.
1.11 Performance and Tuning Information
Many factors can affect the performance of Oracle Spatial applications, such as the indexing method (R-tree or quadtree), the SDO_LEVEL value for a quadtree index, and the use of optimizer hints to
influence the plan for query execution. This guide contains some information about performance and tuning where it is relevant to a particular topic. For example, Section 1.7 includes
performance-related items among the considerations for choosing an R-tree or quadtree index.
In addition, more Spatial performance and tuning information is available in one or more white papers through the Oracle Technology Network (OTN). That information is often more detailed than what is
in this guide, and it is periodically updated as a result of internal testing and consultations with Spatial users. To find that information, go to the OTN site, search for Spatial, and then search for white papers relevant to performance and tuning.
1.12 Spatial Release (Version) Number
To check which release of Spatial you are running, use the SDO_VERSION function. For example:
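A minimal invocation might look like the following (the value returned depends on the installed release):
SELECT SDO_VERSION FROM DUAL;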
The SDO_VERSION function replaces the SDO_ADMIN.SDO_VERSION function, which was available with the deprecated relational model of Oracle Spatial.
1.13 Spatial Application Hardware Requirement Considerations
This section discusses some general guidelines that affect the amount of disk storage space and CPU power needed for spatial applications. They are not, however, intended to replace any other
guidelines you use for general application sizing, but to supplement them.
The following characteristics of spatial applications can affect the need for storage space and CPU power:
● Data volumes: The amount of storage space needed for spatial objects depends on their complexity (precision of representation and number of points for each object). For example, storing one
million point objects takes less space than storing one million road segments or land parcels. Complex natural features such as coastlines, seismic fault lines, rivers, and land types can require
significant storage space if they are stored at a high precision.
● Query complexity: The CPU requirements for simple mapping queries, such as Select all features in this rectangle, are lower than for more complex queries, such as Find all seismic fault lines
that cross this coastline.
1.15 Spatial Examples
Oracle Spatial provides examples that you can use to reinforce your learning and to create models for coding certain operations. Several examples are provided in the following directory:
The following files in that directory are helpful for applications that use the Oracle Call Interface (OCI):
● readgeom.c and readgeom.h
● writegeom.c and writegeom.h
This guide also includes many examples in SQL and PL/SQL. One or more examples are usually provided with the reference information for each function or procedure, and several simplified examples are
provided that illustrate table and index creation, as well as several functions and procedures:
● Inserting, indexing, and querying spatial data (Section 2.1)
● Coordinate systems (spatial reference systems) (Section 5.8)
● Linear referencing system (LRS) (Section 6.6)
The Search for a Finite Projective Plane of Order 10***
Clement W. H. Lam
Computer Science Department
Concordia University
Montréal Québec
Projective planes are special cases of a class of combinatorial objects called symmetric block designs. We are not going to discuss block designs, except to mention that Chowla and Ryser have
generalized the Bruck-Ryser theorem to symmetric block designs [10]; it is now known as the Bruck-Ryser-Chowla theorem. Here again, a partial converse exists, lending more credence to the
hope that the conditions in the Bruck-Ryser-Chowla theorem are both necessary and sufficient. This hope is now shattered by the non-existence of the finite projective plane of order 10.
***Previously appeared in the American Mathematical Monthly 98, (no. 4) 1991, 305 - 318.
Author's Reflections:
When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he
believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other
mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer
with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it.
While we were tracing the origin of the existence problem of the plane of order 10, we asked Dan Hughes, who has worked in this area for a long time and is famous for the Hughes planes which are
named after him. He recounted the following story. In about 1957, at a Chinese restaurant in Chicago, Reinhold Baer, another mathematician well known for his work in group theory and projective
planes, was trying to impress the younger Hughes by remarking that if the plane of order 10 was settled by a computer, he hoped not to be alive to see it. Baer got his wish, but I do not think Herb Ryser
shared this opinion. Ryser was happy that the weight 12 case was settled by a computer. I can only extrapolate and hope that he would also be happy that the whole problem has been ``settled'', even
if by a computer.
Arizona Math Standards - 5th Grade
MathScore aligns to the Arizona Math Standards for 5th Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at
varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only
true differentiated instruction, but a challenging game-like experience.
Number Sense and Operations
C1 Number Sense
1. Make models that represent improper fractions.
2. Identify symbols, words, or models that represent improper fractions.
3. Use improper fractions in contextual situations.
4. Compare two proper fractions or improper fractions with like denominators. (Fraction Comparison )
5. Order three or more unit fractions, proper or improper fractions with like denominators, or mixed numbers with like denominators.
6. Compare two whole numbers, fractions, and decimals (e.g., 1/2 to 0.6). (Compare Mixed Values , Compare Mixed Values 2 , Fractions to Decimals , Decimals To Fractions , Positive Number Line )
7. Order whole numbers, fractions, and decimals.
8. Determine the equivalency between and among fractions, decimals, and percents in contextual situations. (Percentage Pictures )
9. Identify all whole number factors and pairs of factors for a number. (Factoring )
10. Recognize that 1 is neither a prime nor a composite number. (Prime Numbers )
11. Sort whole numbers (through 50) into sets containing only prime numbers or only composite numbers. (Prime Numbers )
C2 Numerical Operations
1. Select the grade-level appropriate operation to solve word problems.
2. Solve word problems using grade-level appropriate operations and numbers. (Arithmetic Word Problems , Unit Cost , Fraction Word Problems )
3. Multiply whole numbers. (Long Multiplication )
4. Divide with whole numbers. (Long Division , Long Division with Remainders )
5. Demonstrate the distributive property of multiplication over addition. (Distributive Property , Basic Distributive Property )
6. Demonstrate the addition and multiplication properties of equality.
7. Apply grade-level appropriate properties to assist in computation.
8. Apply the symbol "[ ]" to represent grouping.
9. Use grade-level appropriate mathematical terminology.
10. Simplify fractions to lowest terms. (Basic Fraction Simplification , Fraction Simplification )
11. Add or subtract proper fractions and mixed numbers with like denominators with regrouping. (Fraction Addition , Fraction Subtraction )
12. Add or subtract decimals. (Decimal Addition , Decimal Subtraction )
13. Multiply decimals. (Money Multiplication , Decimal Multiplication )
14. Divide decimals. (Small Decimal Division , Money Division , Decimal Division )
15. Simplify numerical expressions using the order of operations with grade- appropriate operations on number sets. (Using Parentheses , Order Of Operations )
C3 Estimation
1. Solve grade-level appropriate problems using estimation. (Estimated Multiplication , Estimated Division , Estimated Multiply Divide Word Problems )
2. Use estimation to verify the reasonableness of a calculation (e.g., Is 4.1 x 2.7 about 12?).
3. Round to estimate quantities. (Rounding Numbers , Rounding Large Numbers , Decimal Rounding to .01 , Decimal Rounding , Estimated Addition , Estimated Subtraction , Money Addition , Money
Subtraction )
4. Estimate and measure for area and perimeter.
5. Compare estimated measurements between U.S. customary and metric systems (e.g., A yard is about a meter.).
Data Analysis, Probability, and Discrete Mathematics
C1 Data Analysis (Statistics)
1. Formulate questions to collect data in contextual situations.
2. Construct a double-bar graph, line plot, frequency table, or three-set Venn diagram with appropriate labels and title from organized data.
3. Interpret graphical representations and data displays including bar graphs (including double-bar), circle graphs, frequency tables, three-set Venn diagrams, and line graphs that display continuous
data. (Bar Graphs , Line Graphs )
4. Answer questions based on graphical representations, and data displays including bar graphs (including double-bar), circle graphs, frequency tables, three-set Venn diagrams, and line graphs that
display continuous data. (Bar Graphs , Line Graphs )
5. Identify the mode(s) and mean (average) of given data. (Mean, Median, Mode )
6. Formulate reasonable predictions from a given set of data.
7. Compare two sets of data related to the same investigation.
8. Solve contextual problems using graphs, charts, and tables.
C2 Probability
1. Name the possible outcomes for a probability experiment.
2. Describe the probability of events as being:
• certain (represented by 1)
• impossible, (represented by 0), or
• neither certain nor impossible (represented by a fraction less than 1). (Probability )
3. Predict the outcome of a grade-level appropriate probability experiment.
4. Record the data from performing a grade-level appropriate probability experiment.
5. Compare the outcome of an experiment to predictions made prior to performing the experiment.
6. Make predictions from the results of student-generated experiments using objects (e.g., coins, spinners, number cubes).
7. Compare the results of two repetitions of the same grade-level appropriate probability experiment.
C3 Discrete Mathematics - Systematic Listing and Counting
1. Find all possible combinations when one item is selected from each of two sets of different items, using a systematic approach. (e.g., shirts: tee shirt, tank top, sweatshirt; pants: shorts,
C4 Vertex-Edge Graphs
1. Color maps with the least number of colors so that no common edges share the same color (increased complexity throughout grade levels).
Patterns, Algebra, and Functions
C1 Patterns
1. Communicate a grade-level appropriate iterative pattern, using symbols or numbers.
2. Extend a grade-level appropriate iterative pattern. (Patterns: Numbers )
3. Solve grade-level appropriate iterative pattern problems. (Patterns: Numbers )
C2 Functions and Relationships
1. Describe the rule used in a simple grade-level appropriate function (e.g., T-chart, input/output model). (Function Tables , Function Tables 2 )
C3 Algebraic Representations
1. Evaluate expressions involving the four basic operations by substituting given decimals for the variable.
2. Use variables in contextual situations.
3. Solve one-step equations with one variable represented by a letter or symbol (e.g., 15 = 45 ÷ n). (Missing Factor , Single Variable Equations )
C4 Analysis of Change
1. Describe patterns of change:
• constant rate (speed of movement of the hands on a clock), and
• increasing or decreasing rate (rate of plant growth).
Geometry and Measurement
C1 Geometric Properties
1. Recognize regular polygons. (Quadrilateral Types , Polygon Names )
2. Draw 2-dimensional figures by applying significant properties of each (e.g., Draw a quadrilateral with two sets of parallel sides and four right angles.).
3. Sketch prisms, pyramids, cones, and cylinders.
4. Identify the properties of 2- and 3-dimensional geometric figures using appropriate terminology and vocabulary.
5. Draw points, lines, line segments, rays, and angles with appropriate labels.
6. Recognize that all pairs of vertical angles are congruent.
7. Classify triangles as scalene, isosceles, or equilateral. (Triangle Types )
8. Recognize that a circle is a 360º rotation about a point.
9. Identify the diameter, radius, and circumference of a circle. (Circle Measurements )
10. Understand that the sum of the angles of a triangle is 180º (Triangle Angles )
11. Draw two congruent geometric figures.
12. Draw two similar geometric figures.
13. Identify the lines of symmetry in a 2-dimensional shape.
C2 Transformation of Shapes
1. Demonstrate reflections using geometric figures.
2. Describe the transformations that created a tessellation.
C3 Coordinate Geometry
1. Graph points in the first quadrant on a grid using ordered pairs.
C4 Measurement - Units of Measure - Geometric Objects
1. State an appropriate measure of accuracy for a contextual situation (e.g., What unit of measurement would you use to measure the top of your desk?).
2. Draw 2-dimensional figures to specifications using the appropriate tools (e.g., Draw a circle with a 2-inch radius.).
3. Determine relationships including volume (e.g., pints and quarts, milliliters and liters).
4. Convert measurement units to equivalent units within a given system (U.S. customary and metric) (e.g., 12 inches = 1 foot; 10 decimeters = 1 meter). (Distance Conversion , Time Conversion , Volume
Conversion , Weight Conversion , Temperature Conversion )
5. Solve problems involving the perimeter of convex polygons. (Perimeter )
6. Determine the area of figures composed of two or more rectangles on a grid. (Perimeter and Area of Composite Figures )
7. Solve problems involving the area of simple polygons. (Triangle Area , Parallelogram Area )
8. Describe the change in perimeter or area when one attribute (length, width) of a rectangle is altered. (Area And Volume Proportions )
Structure and Logic
C1 Algorithms and Algorithmic Thinking
1. Discriminate necessary information from unnecessary information in a given grade-level appropriate word problem.
2. Design simple algorithms using whole numbers. (Function Tables , Function Tables 2 )
3. Develop an algorithm or formula to calculate areas of simple polygons.
C2 Logic, Reasoning, Arguments, and Mathematical Proof
1. Construct if...then statements.
2. Identify simple valid arguments using if... then statements based on graphic organizers (e.g., 3-set Venn diagrams and pictures).
help please (: F = 27.2m/t, where F is the force, m is the mass, and t is the time in seconds. Solve F = 27.2m/t for m. Please show your work. (6 points) Use your new formula to find the mass of the automobile in kg for each value of force and time given in the table below. Round your answer to the nearest whole number. (4 points)

Force (Newtons)   Mass (kg)   Time (sec)
11000                         8
9520                          10
12000                         7
11500                         9
So you're saying \[ F = \frac{27.2 m}{t} \] What then are you asking in the first part of the question?
I think you are missing part of the question
@JamesJ yeah that's right and i have to find out the mass using that formula
Ah you need to solve for m. Fine. First, multiply both sides by t and you get \[ Ft = 27.2 m \] Can you solve for m now?
@JamesJ umm i think so?
Take Ft = 27.2m and divide both sides by 27.2 and you get ... what?
@JamesJ ummm ft=m?
That can't be right. You didn't divide the left hand side by 27.2 as well. So, we have
F = 27.2m/t
Ft = 27.2m (multiplying both sides by t)
Ft/27.2 = m (dividing both sides by 27.2)
Hence \[ m = \frac{Ft}{27.2} \]
Now apply that formula for the second part of the question. You are given values of F and t. Use this formula to find the corresponding values of m.
@JamesJ ohh okay i think i get it now
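For the record, applying m = Ft/27.2 to the table values and rounding to the nearest whole number gives:
m = (11000 × 8) / 27.2 ≈ 3235 kg
m = (9520 × 10) / 27.2 = 3500 kg
m = (12000 × 7) / 27.2 ≈ 3088 kg
m = (11500 × 9) / 27.2 ≈ 3805 kg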
understanding drawing scale and scale factor
I am a newbie AutoCADer and am having some problems getting the hang of applying the drawing scale to my assignment. I understand that the scale factor applies only to printing and that AutoCAD doesn't care about the units used... but something is not clicking here and I'm sure it is pretty simple. My assignment requires me to draw 3 simple structures at 1/8" scale on 11x17
paper. Each house is 20' long with 10' walls and each is done in absolute, relative, and polar. I actually completed the work with all 3 formats without specifying the scale or paper size, just
to get familiar with the commands. But now that I am trying to set the dimensions correctly, I cannot replicate my work because I'm getting lost in the way things are defined. Any help would be
appreciated. I am using AutoCAD 2005, also have a 2002 version to which I can defer.
Thanks everyone!
Scale factor (as used by AutoCAD) is always a reciprocal of the drawing scale.
AutoCad uses the unit one as the base unit.
For example, if you wish to plot a mechanical drawing at a scale of 1/2” = 1”, the reciprocal of 1/2 is 2/1, which is 2. Therefore the scale factor is 2. If this drawing were to be plotted on an 11 x 17 piece of paper, the limits would have to be 34,22. Text would be .25 high, and other settings would be made for this scale to be plotted properly.
Another example would be an architectural drawing plotted at a scale of 1/4” = 1’-0”. Changing feet to inches so the units are all in inches gives us 1/4” = 12”. Dividing both sides by 12 which
is the same thing as multiplying by 1/12, we obtain 1/48 = 1. The reciprocal of 1/48 is 48. The scale factor then is 48. If this drawing is plotted on an 11 x 17 piece of paper the limits would
have to be 816,528 (68’,44’). Text would be 6” high (.125 times 48 = 6). Other settings would have to be made for this scale to be plotted properly.
Another example would be a metric drawing made full size. 1 = 25.4. Dividing both sides of this equation by 25.4 we get 1/25.4 = 1. The reciprocal of 1/25.4 is 25.4/1. The scale factor therefore
is 25.4.
Text (originally .125) then would be made approximately 3 high (.125 X 25.4 = 3.175). Text (originally .25) then would be made approximately 6 high (.25 X 25.4 = 6.35). Other settings would have
to be made for this scale to be plotted properly
The scale factor of a drawing should be determined and utilized during the time of drawing set up.
The proper scale factor is extremely important because it makes sure that text (height, etc.), dimension values (dimscale), hatch patterns, limits, and linetype scale (ltscale) are plotted at the
proper size.
A drawing scale of 1” = 1” has a scale factor of 1
A drawing scale of 1/2” = 1” has a scale factor of 2
A drawing scale of 2 = 1 has a scale factor of .5
A drawing scale of 1 = 60.0’ has a scale factor of 720
A drawing scale of half size (metric) has a scale factor of 50.8
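Applying that recipe to the original question (1/8" = 1'-0" on 11 x 17 paper): 1/8" = 12", and dividing both sides by 12 gives 1/96 = 1, so the scale factor is 96. The limits would then be 1632,1056 (136' x 88'), and .125 text would be drawn 12" high (.125 x 96 = 12). Other settings would be scaled the same way.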
why on earth would someone want to "draw to scale" in Autocad ?
surely the whole point about computers/CAD is that we can get away from that old drawback of paper.
In CAD, the space is "infinite", everything should be drawn at 1:1 ~ viewports are where any "scaling" factors should come into play
and here starts another argument
Read this tripe....
This is from the owner of Engineered Software, the maker of PowerCADD ( a mac only CAD app.).
Read this tripe....
I don't think they have a smiley for what I'm feeling after reading that !
Everyone here needs to visit that site and just take a look at the way they act. It's no wonder there are anti-Apple sites all over the net, with attitudes like that. It just kills me. These users throw off on AutoCAD in about every other post. They try to make fun of us drawing 1 to 1, yet when a cross-platform program like VectorWorks or SketchUp draws 1 to 1 it is OK.
I got into an internet argument with one of their users on another forum a couple of years ago after he said that we could not do solid fills, had no accuracy, could not export to Illustrator (or anything else for that matter), had an unstable program, and other unfounded remarks. He even called me a liar when I posted some screen images of things that he said Acad could not do. I'll put Acad up against just about anything.
another smiley for them...
A joke surely? No one can still hold views like that - at least not if they live in the 21st Century. I have to say that this sort of view is propagated by people who find the transition from
drawing board to computer rather confusing. They feel much more comfortable working with an electronic version of their old tried and trusted manual tools. Their loss - new technology means new
tools and new tools means new method and new method leads to new opportunities.
It's a brave new world but some are just a little bit scared to jump.
David, I wish that was a joke but it is not. Browse through the their/his site and see. www.engsw.com
I am also sad to say that these guys are only about an hour away from me here in NC, USA.
Here is their forum...http://powercadd.designcommunity.com/index.php
Pretty pathetic to me :evil:
At the end of the day you need accuracy, and drawing to scale the old way is just not feasible.
Small-minded souls. I love Macs for their simplicity, but hate them for the lack of cross-platform programs.
Tell me, isn't ArchiCAD a 1:1 program?
Not sure about ArchiCAD but it probably is. Just about every CAD/3D application I can think of does 1:1. To me it is just another example of the "Mac - I'm better than you" attitude. I work with 2 Mac users and they both have this attitude that since they use Macs their designs are automatically better.
|
{"url":"http://www.cadtutor.net/forum/showthread.php?1568-understanding-drawing-scale-and-scale-factor","timestamp":"2014-04-18T08:03:48Z","content_type":null,"content_length":"100482","record_id":"<urn:uuid:938edb35-4452-451f-9a95-304d6911a9e8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Millis Math Tutor
Find a Millis Math Tutor
...My name is Dan, and I love helping students to improve in Math and Science. I attended U.C. Santa Barbara, ranked 33rd in the World's Top 200 Universities (2013), and graduated with a degree in
27 Subjects: including logic, grammar, ACT Math, GED
...I was also a Summit private tutor for SAT, both Math and English. I was an SAT instructor for Princeton Review and Kaplan.
67 Subjects: including precalculus, marketing, logic, geography
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math. I am a chemistry major at Boston College.
13 Subjects: including trigonometry, algebra 1, algebra 2, biology
...Whether you want to solidify your knowledge and get ahead or get a fresh perspective if you are struggling, I am confident I can help you. I have the philosophy that anything can be understood
if it is explained correctly. Teachers and professors can get caught up using too much jargon which can confuse students.
19 Subjects: including calculus, ACT Math, SAT math, trigonometry
...Courses I have tutored include elementary mathematics, statistics, and calculus. I am also certified to teach other areas of mathematics including pre-algebra, algebra 1 and 2. I have an
associate's degree in mathematics from Roxbury Community College, and a bachelor’s degree in mathematics and computer science from Boston University.
13 Subjects: including calculus, geometry, precalculus, trigonometry
|
{"url":"http://www.purplemath.com/millis_ma_math_tutors.php","timestamp":"2014-04-21T10:32:55Z","content_type":null,"content_length":"23497","record_id":"<urn:uuid:96b98536-8006-4156-af8f-f10f081f5874>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
a little bit confused about summation rules
|
{"url":"http://openstudy.com/updates/50772c55e4b02f109be3c8ea","timestamp":"2014-04-20T06:25:39Z","content_type":null,"content_length":"73855","record_id":"<urn:uuid:7b40e539-6563-41a3-94fb-f876ecc03cb8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gaussian integral
The area of the blue region is equal to the value of the Gaussian integral.
The Gaussian integral is the integral of the Gaussian function over the entire real number line.
The Gaussian integral is the integral defined as
$$I = \int_{-\infty}^{\infty} e^{-x^{2}}\,dx.$$
The function exp(-x²) is known as the Gaussian function. Note how the graph takes the traditional bell-shape, the shape of the Laplace curve.
You can use several methods to show that the integrand, the Gaussian function, has no indefinite integral that can be expressed in elementary terms. In other words, the integral resists the tools of
elementary calculus. Still, there are several methods to evaluate it (differentiation under the integral sign, polar integration, contour integration, square of an integral, etc.) We will demonstrate
the polar integration method.
We denote the Gaussian integral as $I$ for convenience. From the square of an integral, we have:
$$I^{2} = \left(\int_{-\infty}^{\infty} e^{-x^{2}}\,dx\right)\left(\int_{-\infty}^{\infty} e^{-y^{2}}\,dy\right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-x^{2}-y^{2}}\,dx\,dy.$$
We will now express the integral in polar coordinates. We can easily do this, since $-x^{2}-y^{2}$ is merely $-r^{2}$ in polar coordinates (the extra factor of $r$ below is the Jacobian of the substitution). Hence, our integral becomes
$$I^{2} = \int_{0}^{2\pi}\int_{0}^{\infty} e^{-r^{2}}\,r\,dr\,d\theta.$$
We evaluate the right hand side. The integrand now does have an elementary indefinite integral, which can be evaluated with the substitution $u=r^{2}$ as follows:
$$\int_{0}^{\infty} e^{-r^{2}}\,r\,dr = \frac{1}{2}\int_{0}^{\infty} e^{-u}\,du = \frac{1}{2}.$$
This was the integral with respect to $r$. We substitute it in our original integral:
$$I^{2} = \int_{0}^{2\pi} \frac{1}{2}\,d\theta = \pi.$$
Taking the square root of both sides finally yields the desired expression:
$$I = \int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}.$$
|
{"url":"http://math.wikia.com/wiki/Gaussian_integral","timestamp":"2014-04-18T13:09:01Z","content_type":null,"content_length":"56726","record_id":"<urn:uuid:3070b8a5-ee88-4bba-8526-202f9851f665>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Eola, IL Science Tutor
Find an Eola, IL Science Tutor
...I usually meet with students in the evening at our local library. I have also met with students in their homes and occasionally at a local coffee shop. Students will typically meet with me
twice a week for 30 minutes to an hour each session.
13 Subjects: including ACT Science, biology, geometry, algebra 1
...I am well versed in Botany, my understanding derived from my B.S. in Biology from Purdue University and then adding post-graduate studies from ISU, where I enrolled in more Botany-related
coursework. At Purdue, I majored in Ecology for two years, which included a significant amount of instructio...
11 Subjects: including biology, psychology, anatomy, physiology
...I invite you to message me if you have any additional questions. I look forward to helping you help yourself. Education is key.
35 Subjects: including biology, elementary (k-6th), SPSS, phonics
...I have been successful in the past tutoring high school students in math and science, but enjoy all ages and skill levels. As a current dental student, my passion for learning is proven every
day, and I hope to inspire your child to achieve their academic goals. I played soccer from the time I was four until I was eighteen.
17 Subjects: including psychology, physical science, biology, chemistry
...Initially I was into Mainframes. Later on I got opportunities to work in different areas in the computer field. I am exposed to various technologies and my last assignment was with Allstate as
an Analyst.
6 Subjects: including chemistry, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/eola_il_science_tutors.php","timestamp":"2014-04-18T16:10:38Z","content_type":null,"content_length":"23596","record_id":"<urn:uuid:cca0a5a8-e17a-47aa-b108-3ca8b5b673b6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
at most, at least, exactly...geometric probability & binomial probability
For example, I have a problem in my book: the probability of a school having a contract is 0.62, and you have a random sample of 20 schools.
What's the probability of at least 4 schools w/ contract?
What's the probability of between 4 and 12 schools w/ contract?
What's the probability of at most 4 schools w/ contract?
You don't have to answer the question...just maybe clarify:
if the problem says "at least", then p(none)-p(1)? Wait, I think this is wrong... oh, and calculator keystrokes would be great btw... like binomcdf(...)
Basically just walk me through, any help is greatly appreciated
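The thread as captured ends without an answer. For reference, a minimal sketch of the standard approach (Python with scipy assumed; binom.cdf plays the role of a calculator's binomcdf, and the key identity is P(X >= k) = 1 - P(X <= k-1)):

from scipy.stats import binom

n, p = 20, 0.62  # 20 schools, each with contract probability 0.62

# At least 4: P(X >= 4) = 1 - P(X <= 3)
at_least_4 = 1 - binom.cdf(3, n, p)

# Between 4 and 12 (inclusive): P(X <= 12) - P(X <= 3)
between_4_and_12 = binom.cdf(12, n, p) - binom.cdf(3, n, p)

# At most 4: P(X <= 4)
at_most_4 = binom.cdf(4, n, p)

print(at_least_4, between_4_and_12, at_most_4)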
|
{"url":"http://mathhelpforum.com/advanced-statistics/165523-most-least-exactly-geometric-probability-binomial-probability.html","timestamp":"2014-04-21T07:29:57Z","content_type":null,"content_length":"29865","record_id":"<urn:uuid:3190463d-e620-4f44-a599-ac65205b3352>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Haskell-cafe] Explaining monads
Dan Piponi dpiponi at gmail.com
Tue Aug 14 14:52:16 EDT 2007
On 8/14/07, Sebastian Sylvan <sebastian.sylvan at gmail.com> wrote:
> Well that's easy, don't use the recipe analogy to explain code, use it
> for monadic values exclusively, and you avoid the confusion entirely!
> I don't think it's that complicated.
It certainly is complicated. I think I have a good grasp of monads to
the point where I can tease novel monads (and comonads) out from
algorithms that people previously didn't see as monadic. And yet I
still don't understand what you are saying (except with respect to one
specific monad, IO, where I can interpret 'action' as meaning an I/O action).
> Monads have a monadic type. They
> represent an abstract form of an "action", which can be viewed as an
> analogy to real-world cooking recipes.
All functions can be viewed as recipes. (+) is a recipe. Give me some
ingredients (two numbers) and I'll use (+) to give you back their sum.
> As long as you don't
> deliberately confuse things by using the same analogy for two
> different things I don't see where confusion would set in.
If I was one of your students and you said that monads are recipes I
would immediately ask you where the monads are in my factorial program
regardless of whether you had introduced one or two different
analogies for recipes. There are two sides to every analogy. If you
have an analogy between A and B then you can use knowledge about A to
understand B. But conversely, if you can't set up the same analogy
between A and B then that tells you something useful about B also. As
far as I can see, your description of a monad fits every computer
program I have ever written, and as a result I don't see what it is
that makes monads special. And monads are special.
|
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-August/030526.html","timestamp":"2014-04-18T15:58:18Z","content_type":null,"content_length":"4396","record_id":"<urn:uuid:93cfd843-fcdf-4411-a740-1d48b7944ce4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Node.js / coffeescript performance on a math-intensive algorithm
I am experimenting with node.js to build some server-side logic, and have implemented a version of the diamond-square algorithm described here in coffeescript and Java. Given all the praise I have
heard for node.js and V8 performance, I was hoping that node.js would not lag too far behind the java version.
However on a 4096x4096 map, Java finishes in under 1s but node.js/coffeescript takes over 20s on my machine...
These are my full results. x-axis is grid size. Log and linear charts:
Is this because there is something wrong with my coffeescript implementation, or is this just the nature of node.js still?
genHeightField = (sz) ->
timeStart = new Date()
DATA_SIZE = sz
SEED = 1000.0
data = new Array()
iters = 0
# warm up the arrays to tell the js engine these are dense arrays
# seems to have negligible effect when running on node.js though
for rows in [0...DATA_SIZE]
data[rows] = new Array();
for cols in [0...DATA_SIZE]
data[rows][cols] = 0
data[0][0] = data[0][DATA_SIZE-1] = data[DATA_SIZE-1][0] =
data[DATA_SIZE-1][DATA_SIZE-1] = SEED;
h = 500.0
sideLength = DATA_SIZE-1
while sideLength >= 2
halfSide = sideLength / 2
for x in [0...DATA_SIZE-1] by sideLength
for y in [0...DATA_SIZE-1] by sideLength
avg = data[x][y] +
data[x + sideLength][y] +
data[x][y + sideLength] +
data[x + sideLength][y + sideLength]
avg /= 4.0;
data[x + halfSide][y + halfSide] =
avg + Math.random() * (2 * h) - h;
#console.log "A:" + x + "," + y
for x in [0...DATA_SIZE-1] by halfSide
y = (x + halfSide) % sideLength
while y < DATA_SIZE-1
avg = data[(x-halfSide+DATA_SIZE-1) % (DATA_SIZE-1)][y] +
data[(x+halfSide) % (DATA_SIZE-1)][y] +
data[x][(y+halfSide) % (DATA_SIZE-1)] +
data[x][(y-halfSide+DATA_SIZE-1) % (DATA_SIZE-1)]
avg /= 4.0;
avg = avg + Math.random() * (2 * h) - h;
data[x][y] = avg;
if x is 0
data[DATA_SIZE-1][y] = avg;
if y is 0
data[x][DATA_SIZE-1] = avg;
#console.log "B: " + x + "," + y
y += sideLength
sideLength /= 2
h /= 2.0
#console.log iters
console.log (new Date() - timeStart)
import java.util.Random;
class Gen {
public static void main(String args[]) {
    genHeight(4096); // grid size used in the question's timing comparison
}
public static void genHeight(int sz) {
long timeStart = System.currentTimeMillis();
int iters = 0;
final int DATA_SIZE = sz;
final double SEED = 1000.0;
double[][] data = new double[DATA_SIZE][DATA_SIZE];
data[0][0] = data[0][DATA_SIZE-1] = data[DATA_SIZE-1][0] =
data[DATA_SIZE-1][DATA_SIZE-1] = SEED;
double h = 500.0;
Random r = new Random();
for(int sideLength = DATA_SIZE-1;
sideLength >= 2;
sideLength /=2, h/= 2.0){
int halfSide = sideLength/2;
for(int x=0;x<DATA_SIZE-1;x+=sideLength){
for(int y=0;y<DATA_SIZE-1;y+=sideLength){
double avg = data[x][y] +
data[x+sideLength][y] +
data[x][y+sideLength] +
data[x+sideLength][y+sideLength];
avg /= 4.0;
data[x+halfSide][y+halfSide] =
avg + (r.nextDouble()*2*h) - h;
//System.out.println("A:" + x + "," + y);
for(int x=0;x<DATA_SIZE-1;x+=halfSide){
for(int y=(x+halfSide)%sideLength;y<DATA_SIZE-1;y+=sideLength){
double avg =
data[(x-halfSide+DATA_SIZE-1)%(DATA_SIZE-1)][y] +
data[(x+halfSide)%(DATA_SIZE-1)][y] +
data[x][(y+halfSide)%(DATA_SIZE-1)] +
data[x][(y-halfSide+DATA_SIZE-1)%(DATA_SIZE-1)];
avg /= 4.0;
avg = avg + (r.nextDouble()*2*h) - h;
data[x][y] = avg;
if(x == 0) data[DATA_SIZE-1][y] = avg;
if(y == 0) data[x][DATA_SIZE-1] = avg;
//System.out.println("B:" + x + "," + y);
//System.out.print(iters +" ");
System.out.println(System.currentTimeMillis() - timeStart);
performance node.js coffeescript
data = new Array() and double[][] data = new double[DATA_SIZE][DATA_SIZE]; are very different statements. The Java version is a real array. The JavaScript version is a hash table pretending to be
an array. – generalhenry Aug 13 '11 at 7:47
also note Node.js is only half JavaScript. For heavy computation you can write c/c++ addons nodejs.org/docs/v0.4.10/api/addons.html – generalhenry Aug 13 '11 at 8:30
On coffeescript ranges: don't use [0...DATA_SIZE-1], that's what [0..DATA_SIZE] is for. – Aaron Dufour Aug 13 '11 at 17:04
4 Answers
As other answerers have pointed out, JavaScript's arrays are a major performance bottleneck for the type of operations you're doing. Because they're dynamic, it's naturally much
slower to access elements than it is with Java's static arrays.
The good news is that there is an emerging standard for statically typed arrays in JavaScript, already supported in some browsers. Though not yet supported in Node proper, you can
easily add them with a library: https://github.com/tlrobinson/v8-typed-array
After installing typed-array via npm, here's my modified version of your code:
{Float32Array} = require 'typed-array'
genHeightField = (sz) ->
timeStart = new Date()
DATA_SIZE = sz
SEED = 1000.0
iters = 0
# Initialize 2D array of floats
data = new Array(DATA_SIZE)
for rows in [0...DATA_SIZE]
data[rows] = new Float32Array(DATA_SIZE)
for cols in [0...DATA_SIZE]
data[rows][cols] = 0
# The rest is the same...
The key line in there is the declaration of data[rows].
With the line data[rows] = new Array(DATA_SIZE) (essentially equivalent to the original), I get the benchmark numbers:
And with the line data[rows] = new Float32Array(DATA_SIZE), I get
So that one small change cuts the running time down by about 1/3, i.e. a 50% speed increase!
It's still not Java, but it's a pretty substantial improvement. Expect future versions of Node/V8 to narrow the performance gap further.
(Caveat: It's got to be mentioned that normal JS numbers are double-precision, i.e. 64-bit floats. Using Float32Array will thus reduce precision, making this a bit of an
apples-and-oranges comparison—I don't know how much of the performance improvement is from using 32-bit math, and how much is from faster array access. A Float64Array is part of the
V8 spec, but isn't yet implemented in the v8-typed-array library.)
A long-needed addition to javascript! I'm impressed at the JS renaissance we're going through. I see a similar speedup with your tweak. One thing I don't understand though, is if I
also change the initializer for data from data = new Array(DATA_SIZE) to use Float32Array, the runtime increases by 3x? – matt b Aug 14 '11 at 2:00
@matt That surprised me, but on further investigation, it turned out that the performance increase was because... it breaks the code. Try putting console.log data[0][0] at the end
of the function; you'll find that the value is always undefined when data is a Float32Array. (And yeah, it's weird that there's no runtime error—I'd chalk that up to the immaturity
of the typed-array library.) – Trevor Burnham Aug 14 '11 at 2:14
Update! As of 0.5.5, Node.js now supports typed arrays (including Float64Array)! Note that the 0.5.x branch is unstable, but this means that it's almost certain that Node 0.6+ will
support them. – Trevor Burnham Aug 30 '11 at 21:17
If you're looking for performance in algorithms like this, both coffee/js and Java are the wrong languages to be using. Javascript is especially poor for problems like this because it does not have an array type - arrays are just hash maps where keys must be integers, which obviously will not be as quick as a real array. What you want is to write this algorithm in C and call that from node (see http://nodejs.org/docs/v0.4.10/api/addons.html). Unless you're really good at hand-optimizing machine code, good C will easily outstrip any other language.
I was afraid that would be the suggestion. I really like the idea of just using one language for client and server, eg both just coffeescript, or both just java. Using Coffeescript
alongside some C on the server removes that benefit. Seems java is the answer for me, for now. – matt b Aug 14 '11 at 1:55
If that's important to you, then its definitely worth noting that Java is not a good choice for client-side code. Java doesn't work in most mobile browsers, and a significant portion of
the population doesn't have it installed. It ultimately depends on exactly what the project is, but keep all sides of it in mind. – Aaron Dufour Aug 14 '11 at 2:55
Forget about Coffeescript for a minute, because that's not the root of the problem. That code just gets written to regular old javascript anyway when node runs it.
Just like any other javascript environment, node is single-threaded. The V8 engine is bloody fast, but for certain types of applications you might not be able to exceed the speed of the jvm.
I would first suggest trying to write out your diamond algorithm directly in js before moving to CS. See what kinds of speed optimizations you can make.
Actually, I'm kind of interested in this problem now too and am going to take a look at doing this.
Edit #2 This is my 2nd re-write with some optimizations such as pre-populating the data array. It's not significantly faster, but the code is a bit cleaner.
var makegrid = function(size){
    // corner_vals and height_range were presumably defined earlier in the
    // original post; they are declared here so the function is self-contained.
    var corner_vals = 1000,  // seed value for the four corners
        height_range = 500;  // initial random displacement range
    size++; //increment by 1
    var grid = [];
    grid.length = size;
    var gsize = size-1; //frequently used value in later calculations.
    //setup grid array
    var len = size;
    while(len--)
        grid[len] = new Array(size+1).join(0).split('').map(Number); //array of length "size" filled with numeric zeros
    //populate four corners of the grid
    grid[0][0] = grid[gsize][0] = grid[0][gsize] = grid[gsize][gsize] = corner_vals;
    var side_length = gsize;
    while(side_length >= 2){
        var half_side = Math.floor(side_length / 2);
        //generate new square values
        for(var x=0; x<gsize; x += side_length){
            for(var y=0; y<gsize; y += side_length){
                //average of existing corners plus a random offset in [-height_range, height_range]
                var avg = ((grid[x][y] + grid[x+side_length][y] + grid[x][y+side_length] + grid[x+side_length][y+side_length]) / 4) + (Math.random() * 2 * height_range - height_range);
                //store the rounded value at the center point
                grid[x+half_side][y+half_side] = Math.floor(avg);
            }
        }
        //generate diamond values
        for(var x=0; x<gsize; x += half_side){
            for(var y=(x+half_side)%side_length; y<gsize; y += side_length){
                var avg = Math.floor( ((grid[(x-half_side+gsize)%gsize][y] + grid[(x+half_side)%gsize][y] + grid[x][(y+half_side)%gsize] + grid[x][(y-half_side+gsize)%gsize]) / 4) + (Math.random() * 2 * height_range - height_range) );
                grid[x][y] = avg;
                if( x === 0) grid[gsize][y] = avg;
                if( y === 0) grid[x][gsize] = avg;
            }
        }
        side_length /= 2;
        height_range /= 2;
    }
    return grid;
};
Yes, I don't think it's a coffeescript vs javascript thing, as both my CS code and your JS code take about the same runtime, and both are about 10x slower than the java version, so it must be a JVM vs node.js thing? But I'm not sure how to reconcile that with "the V8 engine is bloody fast" – matt b Aug 13 '11 at 3:05
Translating back to javascript is completely pointless. Coffeescript is nothing but nicer syntax that compiles to javascript. If you're worried about the time it takes to compile the coffeescript (hint: you shouldn't be), run coffee -c <file> and run the compiled code with node. – Aaron Dufour Aug 13 '11 at 17:00
Translating it back to JavaScript makes it easier for a JavaScript expert to spot issues. – generalhenry Aug 13 '11 at 20:52
I have always assumed that when people describe javascript runtimes as 'fast' they mean relative to other interpreted, dynamic languages. A comparison to ruby, python or smalltalk would be interesting. Comparing JavaScript to Java is not a fair comparison.
To answer your question, I believe that the results you are seeing are indicative of what you can expect comparing these two vastly different languages.
"V8 increases performance by compiling JavaScript to native machine code before executing it, rather than to execute bytecode or interpreting it. Further performance increases were
achieved by employing optimization techniques such as inline caching. With these features, JavaScript applications running within V8 have an effective speed comparable to a compiled
binary" (en.wikipedia.org/wiki/V8_%28JavaScript_engine%29) – matt b Aug 17 '11 at 13:35
being on wikipedia doesn't make it true. – liammclennan Aug 19 '11 at 5:06
|
{"url":"http://stackoverflow.com/questions/7046509/node-js-coffeescript-performance-on-a-math-intensive-algorithm/7047321","timestamp":"2014-04-21T06:25:53Z","content_type":null,"content_length":"102062","record_id":"<urn:uuid:1d4755c8-07ec-409b-aa43-0d5aee845be9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Searching for Spacetime Defects
Whether or not space and time are fundamentally discrete is one of the central questions of quantum gravity. Discretization is a powerful method to tame divergences that plague the quantization of
gravity, and it is thus not surprising that many approaches to quantum gravity rely on some discrete structure, be it condensed matter analogies, triangulations, or approaches based on
networks. One expects that discretization explains the occurrence of singularities in general relativity as unphysical, much like
singularities in hydrodynamics
are merely mathematical artifacts that appear because on short distances the fluid approximation for collections of atoms is no longer applicable.
But finding experimental evidence for space-time discreteness is difficult because this structure is at
the Planck scale
and thus way beyond what we can directly probe. The best tests for such discrete approaches thus do not rely on the discreteness itself but on the baggage it brings, such as violations or
deformations of Lorentz-symmetry that can be very precisely tested. Alas, what if the discrete structure does not violate Lorentz-symmetry? That is the question I have addressed in my two
recent papers
In discrete approaches to quantum gravity, space-time is not, fundamentally, a smooth background. Instead, the smooth background that we use in general relativity – the rubber sheet on which the
marbles roll – is only an approximation that becomes useful at long distances. The discrete structure itself may be hard to test, but in any such discrete approach one expects the approximation of
the smooth background to be imperfect. The discrete structure will have defects, much like crystals have defects, just because perfection would require additional explanation.
The presence of space-time defects affects how particles travel through the background, and the defects thus become potentially observable, constituting indirect evidence for space-time
discreteness. To be able to quantify the effects, one needs a phenomenological model that connects the number and type of defects to observables, and can in return serve to derive constraints on the
prevalence and properties of the defects.
In my papers, I distinguished two different types of defects: local defects and non-local defects. The requirement that Lorentz-invariance is maintained (on the average) turned out to be very
restrictive on what these defects can possibly do.
The local defects are similar to defects in crystals, except that they are localized both in space and in time. These local defects essentially induce a violation of momentum conservation. This leads
to a fairly straight-forward modification of particle interactions whenever a defect is encountered that makes the defects potentially observable even if they are very sparse.
The non-local defects are less intuitive from the particle-physics point of view. They were motivated by what Markopoulou and Smolin called ‘
disordered locality
’ in spin-networks, just that I did not, try as I might, succeed in constructing a version of disordered locality compatible with Lorentz-invariance. The non-local defects in my paper are thus
essentially the dual of the local defects, which renders them Lorentz-invariant (on the average). Non-local defects induce a shift in position space in the same way that the local defects induce a
shift in momentum space.
I looked at a bunch of observable effects that the presence of defects of either type would lead to, such as CMB heating (from photon decay induced by scattering on the local defects) or the blurring
of distant astrophysical sources (from deviations of photons from the lightcone caused by non-local defects). It turns out that generally the constraints are stronger for low-energetic particles, in
constrast to what one finds in deformations of Lorentz-invariance.
Existing data give some pretty good constraints on the density of defects and the parameters that quantify the scattering process. In the case of local defects, the density is roughly speaking less
than one per fm^4. That's an exponent, not a footnote: It has to be a four-volume, otherwise it wouldn't be Lorentz-invariant. For the non-local defects the constraints cannot as easily be summarized in a single
number because they depend on several parameters, but there are contour plots in my papers.
The constraints so far are interesting, but not overwhelmingly exciting. The reason is that the models are only for flat space and thus not suitable to study cosmological data. To make progress, I'll
have to generalize them to curved backgrounds. I also would like to combine both types of defects in a single model. I am presently quite excited about this because there is basically nobody who has
previously looked at space-time defects, and there’s thus a real possibility that analyzing the data the right way might reveal something unexpected. And into the other direction, I am looking into a
way to connect this phenomenological model to approaches to quantum gravity by extracting the parameters that I have used. So, you see, more work to do...
35 comments:
Bee: much like singularities in hydrodynamics are merely mathematical artifacts that appear because on short distances the fluid approximation for collections of atoms is no longer applicable.
Any papers showing this?
Not that, such a singularity has been proven not to exist in terms of the fluid approximation, yes?
In AWT the question, whether the space-time is continuous is analogous to question, whether the water surface continuous, if we would observe it with its own surface ripples at small scales?
black hole
If we define the space-time like the environment for spreading of transverse waves, then at both small or large scales such an environment will become discontinuous and fragmented into many
density fluctuations. But this is just a local perspective of us, human observers.
If we would shrink/expand into scale of space-time fluctuations, then the scale of space-time fluctuations would shrink accordingly. This is just another example of relative reality concept.
I really do not have a pet theory but do appreciate that if any approximation could lead to something correlated in the very nature of QGP, as to it's fluid design, it is held in context as
continuity? This could be a mistake on my part.
Then, this would further correspond to development and continuity of lets say, when jets are produced?
Gee, I had a great reply for you. Looks like it didn't go thru
Great essay even without equations in it. :-)
Edgar, There's nothing in the spam filter from you, seems it got lost indeed. Sorry about that, very annoying :(
Not sure what you are asking for. If you're on distances comparable to the size of an atom a continuum approximation clearly doesn't make sense anymore. You can look at it this way: In the
hydrodynamical description you find singular solutions, but if you were to zoom in around the singularity, you'd eventually start seeing the 'grainy' (atomic) structure of the fluid. Best,
Hi Bee,
In the QGP is there a point to point measure?
I guess at Planck scale should you choose to use discreteness as a model.....
AdS/CFT predicts the quark gluon plasma is unstable
BEE:For example, in General Relativity, Einstein's field equations when applied to the gravitational collapse of a very massive star develop infinities in density and curvature at the centre of
the system. Singularities in your Kitchen
12 pentagons close an unlimited hexagon tessellation into an elliptic surface. 12 heptagons do what hyperbolically? Equal numbers cancel even if non-adjacent. Discrete spacetime is incompatible
with vacuum isotropy and conservation of angular momentum, arXiv:1109.1963 Suppose the tessellation is nothing but defects! A quasitiling is aperiodic (periodic in 5D) but has a sharp spot
diffraction. They have assigned space groups,
"But finding experimental evidence for space-time discreteness is difficulty" Measurable Lorentz-symmetry violation need only occur where physics refuses to look. Green's function removes
physical chirality from Newton. Green's function is not within GR. The universe is not mirror-symmetric toward fermionic matter (quarks): parity violations, chiral anomalies, symmetry breakings;
Chern-Simons repair of Einstein-Hilbert action. Physics has been burned for denying chirality,
Phys. Rev. 104(1) 254 (1956) Heretical, stupid idea...
Phys. Rev. 105(4) 1413 (1957) ...that is observed...
Phys. Rev. 105(4) 1415 (1957) ...and independently confirmed...
PNAS 14(7) 544 (1928) ...28 years after it was observed and rejected for being impossible.
Do discretized spaces have torsion? If so, they are chiral or racemic, crystallography's screw axes (A + B = N-fold axis). An -even numbered N-fold screw axis has a racemic version N/2, e.g 6
(1),6(5); 6(2)6(4); 6(3). A pair's lowest number is right-handed. Paired enantiomorphic space groups are mathematically chiral independent of contents,
http://elib.mi.sanu.ac.rs/files/journals/publ/69/7.pdf Section 2
http://www.math.ru.nl/~souvi/papers/acta03.pdf Section 3ff.
"Wick-rotation" Projecting graphs down by one dimension (cf: Schlegel diagrams) bumps into Kuratowski's theorem (e.g,, knots). There are more than 66 million organic compounds in the CAS
Registry. About a dozen are K(5) molecules. Organic chemistry is flat, even the fullerenes and other bubbles.
Empirical not quite exact angular momentum conservation is MoND's Milgrom acceleration uncreating dark matter. Any Lorentz-symmetry violation is detected by a geometric Eötvös experiment.
The apparatus. Load the image in a separate frame for a much larger view,
insert visually and chemically identical, single crystal test masses in enantiomorphic space groups, such as P3(1)21 versus P3(2)21 alpha-quartz (about 20,000 tonnes/year commercially grown for
crystal oscillator chips). Given 0.113 nm^3 volume/alpha-quartz unit cell, the 40 gram net load as 8 single crystal test masses compares 6.68×10^22 pairs of opposite shoes (pairs of
enantiomorphic unit cells). One vertical side of the test mass cubic array is left-handed, the other right-handed. Differential vacuum free fall (bets are on divergent speeds not directions) is
torque bringing that interferometer leg (plane mirrors) out of null into a single photon counter.
Symmetry "problems" in discretized spaces are natural evolution. Their presence is measurable vs. isotropic vacuum to one part in 20 trillion difference/average on a bench top within 90 days.
Derive your theory, model its chiral divergence, tell the U/Washington Physics Department/CENPA to run a geometric Eötvös experiment
All very interesting.
PhysRevD - Wow - congratulations!
forgot to mention 't Hooft's recent preprint, promoting his discrete determistic Hamiltonian formulation of QM
The search for space-time defects may not be a difficult task, until we consider all forces violating the inverse square law (with exception of gravity) into account. The various dipole and
dispersion forces, Casimir force and magnetic forces may affect the geodesic motion of massive bodies across space-time and as such they can be considered a space-time defects too.
Alternatively you may want to define the space-time just with light spreading and after then we can consider all gravitational lensing and dark matter effects as a manifestation of space-time
defects. If you decide not to consider them as a space-time defect violating the dimensionality of space-time, then indeed some more detailed criterion of "space-time defect" would be welcomed
In another words, most of problems of contemporary physics are just a problem of vague terminology.
Thanks for mentioning 't Hooft's paper, I had missed it! Should probably have a look. (Unfortunately, this just reminded me there's a proceedings deadline at the end of the month.) Best,
"A particle that encounters a nonlocal defect continues its path in space-time elsewhere, but with the same momentum"
Seems you 've just invented the absolute space-time travel mechanism!
Well, the interaction probability for localized states is basically zero. So there's not much travel that you can do with these defects, unless you can cook up a mechanism to collect them.
IMO magnetic motors work on this principle. They're collecting magnetic monopoles and harnessing negentropic work from them. The ensembles of Dirac fermions inside of superconductors and
topological insulators can violate the gravitational law instead.
Bee - " I am presently quite excited about this because there is basically nobody who has previously looked at space-time defects, and there’s thus a real possibility that analyzing the data the
right way might reveal something unexpected. "
Well now you've done gone and done it.. there will soon be a stampede of researchers chasing your lead!
Congrats though on PRD .. haven't picked up a copy in quite some time, even before I switched from APS to IOP
"a mechanism to collect them", that's exactly what crossed my mind! I just couldn't figure out what such mechanism would based upon as there was no evidence on your papers for possible
interaction mechanisms between such defects so that I could set up a bait to lure some of them nasty beasts to make them gather! But you are so much more clever and I trust that you could figure
out a plan for this...
As a chemist,
one thinks of C60, when looking at that graphite monolayer with pentagon defects.
@Georg An unlimited size graphene sheet closes to an elliptic surface given 12 internal regular pentagons. The smallest example is dodecahedrane, C_20H_20. A C_260 fullerene,
(View with CHIME plug-in)
is point group I (not I_h) and is calculated to be perfectly mathematically chiral. Petitjean's CHI = 0, COR = 1, DSI = 0, J. Math. Phys. 40, 4587 (1999). The fun question is, what hyperbolic
shape is chicken wire incorporating 12 heptagons?
Bee is moving on up and Congratulations! Today a D (Physical Review D), not the best grade, but tomorrow an A (Physical Review A)!
Bee can you kindly update on what will be needed to experimentally test Doubly Special Relativity and Discrete Spacetime theories? Can they be tested today, or do we need tomorrow's technology?
You are obviously very familiar with Giovanni Amelino-Camelia's work (and have cited him in your PRD paper too). In 2001 he wrote the paper "Testable scenario for Relativity with minimum-length."
That paper says: "[O]ne might be tempted to assume that none of these effects could be tested in the near future. However, this is not true, at least not for" [part of what he theorizes in his paper].
Thanks in advance for any information you can provide regarding experiments that can be done or are in planning.
Basically, I said your phenomenological approach is a more advanced way to explore the cosmological question as physics. Just as Crick and Watson asked facing models to assemble of DNA with due
regard for Rosylin's data from the crystal structure.
A minimum distance concept needs to consider the relation to min duration (divided by c) beyond linear. This a long standing foundational early proposal.
How they test the Casmir thus a relation to the cosmological constant I do not know the tech. But instead of two planes of metal use graphine more than five atoms close.
As we can think of such forces as particals, in the apparent nonlinear of it all shoot particles (leptons? ) through it with sensors that encode the landscape.
It may take biochemistry to simplify the design as it encounters issues of theory in its complexity. Uncle Al has some very relavant ideas for this.
Then again there is QM logic to consider in polarization. Maybe sets of double planes in sets.
But of the three great arguments for the existince (of God) or some yet unknown as dispassionate Ontology, I feel Teleology should fit in somewhere
if a poet may speak. Best..
I want to add some consequences should such an experiment work.
Crick et al in perfecting their model took the cosmological answer to be its role of replication. So they thought of hydrogen bonding.
If the experiments work it would make more natural ways to intervine to aid globally from a higher stance of Phenomenology methods and self repair of errors by code buildt already into the
generation problem answered as general to DNA topology.
This could lead to whole new areas of physics as that essential distinction between codons (36) on 3D and the rest reaching into 4D of the 64.
New classes of "particles " will be found. (Cosmons, Melissons, Over such abstract unified sheets ...quasons but -on or -inorganic are not good suffixes)
Nor Vanderaals as force, Uncle Bob.
Such fields or "particles" like a living cell may balance over min space time higherGR analog for micro errors too at a local or reached spacious singularity. Laws again as life (mind) everywhere
Interesting, the idea of collecting such singularities. Where it may be the linear position (or non-linear, non-local superposition) may be a fixed number coordinate code such as holes involving
Goldbach's conjecture.
Time travel may be limited over a quantum span both ways into the past of future as QM theory seems to suggest
At spacious field of zero D as rest we see duration as nature solving and adapting a unique existential code which can be similar to Newtonian aether ideas in the sense as Einstein said we may as
well use the term as a hyponym.
A possible consequence of this enquiry is a need for abstract Labeling (theory) as a standard choice independent but by the same methods a neutral reference frame.
(You go girl! What the heck is gravity? Melission means honey Bee and it is more than issues of subjective statistics or terminology. How fitting you would think about the honeycomb or seek the
spiral symmetry of sunflowers following the sun then try to see more than a wax cell as accidental measure in our models of prophilis and pressure. Thanks for your gift of new inspiration.)
Think bigger! Defects (voids) in a granular spacetime equals wormholes?
Think about this Frank: no matter how many millions some agency throws or wastes to strengthen encryption it will not work unless they understood on a higher level the very question you asked us
to think about.
I thought about this, but couldn't make sense of it. Best,
Hi - I'm not a physicist, but very interested in these questions and love perusing this blog. My question is, space-time defects seem IMHO much more question-begging than perfection. Because if
space-time geometry is the essential substrate of reality, what could intervene into it to create a defect? That would seem to presuppose something external to that geometry that could cause a defect.
Unknown artist from another artist interested in these questions :
Perhaps the essence as an underlying perfection is the question. How art relates reflecting and influencing the other would views or the world.
An artist friend (we were so young then) saw me making a Pauling dodecahedron of straws and jacks from an inorganic chemistry set then asked me if I could make a ball of hexagons. I said no but
let him try. After he convinced himself there was some sort of restrictions he became free to make some intricate structures quite independent of the underlying organic chemistry (perhaps? ) and
they were beautiful.
In the Gothic West grotesque gargoyles were added to cathedrals as a token of humility that there may be more than we know pointing or reaching for the stars.
Or in the East a work of carving with long intricate perfected steps adds a defect (wabi) . Or in the beads in beads of finer colors and ink the paintings come alive beyond projection to an ideal
point that these can fill the plane everywhere.
We can of course choose to seek higher art so prove its limitations or copy the iron work on a Singer sewing machine down to errors in the mould
In the calligraphy of our scribbles we can take a lifetime to draw the same circle or line segment in black on white. But even Zen masters do not try to do both in pursuit of perfection.
This may change with our generation.
You're confused because you got the premise up-side down. In this scenario geometry is *not* the 'essential substrate of reality'. Instead, the fundamental structure is something entirely
different, some kind of network, something that's discrete and just approximates a smooth background. Best,
|
{"url":"http://backreaction.blogspot.ca/2013/12/searching-for-spacetime-defects.html","timestamp":"2014-04-21T14:40:10Z","content_type":null,"content_length":"178056","record_id":"<urn:uuid:cde79af0-c85a-4f96-8578-8f79598e82cf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Dangers of Horizons BetaPro ETFs
Million Dollar Journey had an article yesterday about Horizons BetaPro ETFs. These Exchange-Traded Funds (ETFs) are designed to give investors double exposure to certain indexes.
This means that if you are invested in their S&P/TSX 60 ETF and the index goes up by 1% one day, the ETF should go up by 2%. They also offer a bear version of each ETF where a 1% rise in the index
gives a 2% drop in the bear ETF. Obviously, the owners of the bear ETFs are hoping for the index to drop.
Let’s focus on the ETF based on the S&P/TSX 60 index, which is based on the biggest companies in Canada. If the TSX 60 goes up by 10% one year, you’d expect the corresponding Horizons BetaPro ETF
(ticker symbol HXU) to go up by 20% that year. But, that’s not how it works. For example,
this fact sheet shows that in its first year, HXU returned 13.28% and the TSX 60 rose 9.19%. If we double the TSX 60 return, we find that there is a 5.1% gap.
What causes this 5.1% gap? I tried reading the prospectus, but like most such documents, clarity doesn't seem to have been a priority. Page 36 outlines a number of fees including 1.15% management fees, operating expenses, and various expenses attached to
forward contracts.
In addition to fees, the method used to achieve double exposure contributes to the 5.1% gap. This can be interest on borrowed money or built-in bias of stock options (called forward contracts in this
case). When you trade in stock options, the expected rise of the underlying stock (the TSX 60 in this case) is built in to the option prices.
If this explanation of the 5.1% gap makes no sense to you, don’t worry. Just accept that it exists for what follows.
It might seem like we just need the TSX 60 to return at least 5.1%, and then HXU will perform better than the TSX 60. This works for one year, but doesn’t take into account the effect of volatility
on long-term compounded returns.
Based on historical data, the long-term compounded return will be lower than the expected one-year return by about 2%. However, HXU’s doubled volatility increases this penalty by a factor of 4 to
about 8% per year.
Combining all this together with the 5.1% gap we observed earlier, the TSX 60 would have to have a long-term compounded return of 9.1% for HXU to break even with the TSX 60. Suddenly, HXU doesn’t
look as appealing as it did before.
(Math interlude that can be ignored: If the expected one-year return of the TSX 60 is x, then its long-term compounded return will be x-2%. HXU's one-year return will be 2x-5.1%, and its long-term compounded return will be 2x-5.1%-8%. Equating the two compounded returns gives x=11.1%, or a long-term compounded return for the TSX 60 of 9.1%.)
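For reference, the volatility-drag arithmetic behind the 2% and 8% figures, in the usual lognormal approximation (a sketch; the roughly 20% annualized volatility is an assumption consistent with the historical 2% gap cited above):

% Compounded return trails expected return by half the variance of annual returns.
\[ r_{\text{compound}} \approx \mu - \frac{\sigma^{2}}{2}, \qquad \sigma \approx 0.20 \;\Rightarrow\; \frac{\sigma^{2}}{2} \approx 2\%. \]

A 2x fund doubles \(\sigma\), so the variance, and with it the drag, quadruples:

\[ r^{2\times}_{\text{compound}} \approx 2\mu - \frac{(2\sigma)^{2}}{2} = 2\mu - 2\sigma^{2} \approx 2\mu - 8\%. \]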
The bear versions of the ETFs are much easier to argue against. Investors and pundits cannot reliably predict when the stock market will drop. The bear ETF corresponding to HXU is HXD and it had a
one-year return of -19.85%. An investor who chose the bear for this year would be in a very deep hole trying to catch up to investors who just bought the TSX 60.
8 comments:
1. Hi
Is the gap you identified not due (at least in part) by the daily re-baselining?
2. Anonymous:
Daily re-baselining causes some volatility losses within a single year, but not that much. Leverage always has costs, but they are spread out over the year and are too small to be visible in the
returns of a single day.
3. Well, compared to the inception date of HXU (about 17 months ago), HXU has gained ~42% whereas XIC has gained only ~19%. Your hypothesis simply doesn't bear out! HXU has provided about a 20% (=
1.42/1.19) better return over this period than XIC with all fees considered. I think any buy & hold investor would take a 20% improvement, yet that is not the best use of any index fund. Indices
are best exploited through wave trading. Buying HXD today, & then about four months from now short selling it or buying HXU, should yield a superior return. Rinse & repeat.
4. Anonymous: According to the Horizons BetaPro web site, the return from 2007 January 8 to 2008 May 31 was 27.27%. The closing price May 30th was $34.50 making the price at inception $27.11. This
differs from the TSX chart. Perhaps this ETF sold at a discount initially? Who knows. Using the $27.11 starting figure, the return to date is now 31.9%, far short of the apparent 42% that you
report. I'm confident that HXU will continue to underperform double the TSX 60 by significant amounts.
As for wave trading and other market timing methods, there is no evidence that anyone has succeeded at this over the long term. Be careful not to lose all your money.
5. Michael, can you explain where you get this square relationship between volatility and duration? Also, you seem to have used a square relation here too (double exposure turns 2% under-performance
into 8%). I'd like to understand the math behind these. Thanks!
6. Patrick: Few things make me happier than being asked a math question. Here is a good explanation on Wikipedia: volatility link.
Basically, one measure of volatility, the variance, grows in proportion to time. Standard deviation is the square root of variance, and so the standard deviation of returns grows as the square
root of time.
The gap between expected return and expected compound return is equal to half the variance (this falls out of computing the expected value of the lognormal probability distribution). If an
investment has double exposure to the index, then its standard deviation is double, which makes the variance 4 times bigger and the gap between expected return and expected compound return 4
times bigger.
You can see why I tried to avoid too many of these details in the post entry. If any of this isn't clear, let me know and I'll try again.
7. I was thinking of putting some money in the bear hxd and waiting for the market to crash again, resulting in the doubling, or more, of my money. From there I was going to reinvest in the bull
hxu, wait for the market to correct itself & voila, another big profit. Think this is a good or bad idea?
8. @Anonymous: Your question doesn't seem serious, but in case it is, you should know that even if there is a crash followed by a boom, you aren't guaranteed to make money. I recommend understanding
why before making a decision on what to do.
|
{"url":"http://www.michaeljamesonmoney.com/2008/06/dangers-of-horizons-betapro-etfs.html?showComment=1297980300115","timestamp":"2014-04-20T20:56:33Z","content_type":null,"content_length":"130106","record_id":"<urn:uuid:34daba32-113a-4678-b6ce-10851d1814b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Marketing + Mathematics = Money! (Are YOU being coined by Devlin?)
Replies: 1 Last Post: Nov 13, 2012 11:53 PM
Marketing + Mathematics = Money! (Are YOU being coined by Devlin?)
Posted: Nov 13, 2012 8:54 PM
It does crazy things to arithmetic.
Take the Aussie 1 cent coin and compare it to the 2 cent coin. Same metal (97% copper 2.5% zinc 0.5% tin) and the 2 cent coin weighed twice as much as a 1 cent coin (5.2 g vs 2.6 g).
Yet copper prices went up so the 1c and 2c coins were withdrawn from sale.
The ten cent coin has twice as much metal (75% copper 25% nickel) as a five cent coin (still being sold) and weighs twice as much.
Yep you guessed it! A 20c coin weighs the same as four 5c coins and two 10c coins!
Yet Devlin* and his colleagues then pulled a swifty on the Australian public.
First came the 50c coin. While it's the same metal as the other silver coins, rather than have it weigh five times a 10c coin and ten times a 5c coin, they undertook a marketing makeover trick.
They then sold less for more!
Yes, rather than having an infinite number of sides, they reduced the number of sides of the new 50c coin to just twelve!
And so instead of weighing 28.25 grams, the 50c coin sold for a premium, by weighing just 15.55 grams.
The unit price of the coin measured by METAL increased by 81.6 percent. The price per kg for the same metal we were buying in Australia went from $17.70 to $32.15.
Now that's marketing!
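Those unit prices follow directly from the coin masses quoted above; a quick check (Python):

# $/kg of metal implied by each 50c coin design.
old_price = round(0.50 / 0.02825, 2)   # 28.25 g round coin    -> 17.70
new_price = round(0.50 / 0.01555, 2)   # 15.55 g 12-sided coin -> 32.15

print(old_price, new_price)                         # 17.7 32.15
print(round((new_price / old_price - 1) * 100, 1))  # 81.6 (percent increase)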
But it gets better...
The $1 coin then released looks like GOLD! From the silver coins having 25% nickel, the new improved GOLD coin had just 2 percent nickel. Maybe all the 1c and 2c pieces were melted by the mint and reissued as $1 coins, as the copper was now 92 percent to create that fool's gold look.
So time to get out the marketing 101 manual once again.
The $2 coin was subsequently made SMALLER than the $1 coin! It was more convenient (more could fit in a purse) and you expended LESS energy carrying around a $2 coin than two $1 coins.
It's just that two $1 coins are worth about 70 percent more in metal than one $2 coin.
Brilliant. Oh, our Reserve Bank wants to get rid of 5c coins now. So expect more marketing brilliance in the future. That's what monopolies can get away with.
It takes a mint to make a mint!
BTW our Royal Australian Mint had said the metal value of our coins is commercial-in-confidence. Not any more!
The value of metals in the 50c coin was 15.53c, as of mid-November 2011 metal prices.
So the intrinsic value of the 5c piece was 2.83c, while the 10c piece was worth 5.65c. The 20c coin had 11.29c worth of metal in it.
As highlighted above, the $2 coin's constituent metals, valued at 4.82c, are worth less than that of the $1 coin (6.58c).
So has YOUR government got into the marketing as well as manufacturing money business? It's very lucrative!
Pictures of shiny coins at
Jonathan Crabtree
P.S. Make sure your government's bank has a good ethics committee. Subsidiaries of the Reserve Bank of Australia are in the process of being 'busted' for using bribery as a way to win money making
contracts for overseas customers.
* No not THAT one!
Message was edited by: Jonathan Crabtree yet that won't stop you finding typos and grammatical errors!
{"url":"http://mathforum.org/kb/message.jspa?messageID=7922888","timestamp":"2014-04-19T02:08:05Z","content_type":null,"content_length":"20728","record_id":"<urn:uuid:89be6aaf-8030-44b2-8e62-7eee3c53b3e5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US7269299 - Image warp
Images, such as satellite images, can be combined to produce one large panorama or mosaic of images. One problem encountered in combining images is that there may be misregistration between the
images. The misregistration may be caused by a variety of factors. With respect to satellite images, for example, misregistration may be caused by errors in the digital elevation model used in the
orthorectification process for elevation correction of points in the image. The misregistration may cause shear or feature misalignment along a seam between a pair of images.
One approach that has been used to correct for the misregistration is to correlate a pair of images along a line halfway between an overlap region of the images. The shift determined by the
correlation is then applied half to one image and half to the second image. This approach assumes that the image overlaps are horizontal or vertical rectangles and that the seam is placed in the
middle of the overlap region. However, the overlap regions are often not horizontal or vertical rectangles and the seams are often not positioned down the middle of the overlap region. Additionally,
if the misregistration between the images is not consistent, this approach only corrects for a small amount of misregistration and will not work properly for large misregistration between the images.
Methods are disclosed for warping overlapping images. In one embodiment, the method comprises selecting a first set of points in a first image. The first set of points are located in an overlap
region of the first image and a second image. A set of tie points in the second image is determined. Each tie point correlates to a point in the first set. A second set of points located at a
position in the overlap region between points in the first set and correlating tie points are also determined. The images are warped by applying an algorithm using the second set of points. The
algorithm repositions at least a portion of the points in the first image and at least a portion of the points in the second image.
Illustrative embodiments of the invention are illustrated in the drawings in which:
FIG. 1A illustrates a pair of overlapping images that may be combined into an image mosaic;
FIG. 1B illustrates an image mosaic constructed from the pair of images in FIG. 1A;
FIG. 2 is a flow diagram illustrating an exemplary method that may be used to align the images of FIG. 1A;
FIG. 3 is a flow diagram illustrating an exemplary warp algorithm that may be used in the method of FIG. 2;
FIG. 4 illustrates a triangulation of the overlap region of FIG. 1A that may be produced by the method of FIG. 3;
FIGS. 5 and 6 illustrate a flow diagram of an exemplary method for determining tie points that may be used by the method of FIG. 2;
FIG. 7 is a flow diagram illustrating an exemplary method for determining if a good correlation exists that may be used by the method of FIGS. 5 and 6;
FIG. 8 is a flow diagram illustrating an exemplary method that may be used to remove tie points from the set of tie points;
FIG. 9 is a flow diagram illustrating an exemplary method for multi-resolution correlation of images; and
FIG. 10 is a flow diagram illustrating an exemplary method for coarse mapping of images that may be used prior to the method of FIG. 2.
As shown in FIGS. 1A and 1B, an image mosaic may be produced from a pair of overlapping images 100, 120. By way of example, images 100, 120 may be images taken from a satellite. The image mosaic
includes an overlap region 110 where image 100 overlaps with image 120. The images may contain one or more features, such as feature 125, which may not align correctly when the images are combined
because of misregistration of the images. As shown in FIG. 1A, region 112 of image 100 that corresponds to overlap region 110 is not properly registered to region 114 of image 120 that corresponds to
overlap region 110. If the images were combined without correcting for the misregistration, feature 125 would not be aligned properly along the seam between the images. A method, such as that
described in FIG. 2, may be used to correct for the misregistration so that the images are properly aligned as shown in FIG. 1B. It should be appreciated that other image mosaics may contain
additional features and may be a combination of a different number of images than that shown in FIGS. 1A and 1B. It should also be appreciated that overlap region 110 may have a different shape than
that shown.
Alignment of Images
FIG. 2 illustrates a method 200 that may be used to align images 100, 120. First, a first set of points located in the overlap region 112 of the first image 100 are selected 205. By way of example,
the first set of points may be selected using a grid of blocks of a predetermined size (e.g., 32 points by 32 points), so that a point is selected from each block of the grid. Other methods may also
be used to select the first set of points.
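As a rough illustration only (the patent gives no code), step 205 might look like the following Python sketch; the block size and the choice of the center-most overlap pixel per block are assumptions:

```python
import numpy as np

# Hypothetical sketch of step 205: pick one point per grid block of the
# overlap region. The tie-break rule (center-most pixel) is illustrative.
def select_grid_points(overlap_mask, block=32):
    """Return (row, col) points, one per block that touches the overlap."""
    points = []
    rows, cols = overlap_mask.shape
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            tile = overlap_mask[r0:r0 + block, c0:c0 + block]
            if not tile.any():
                continue  # block lies entirely outside the overlap region
            rr, cc = np.nonzero(tile)
            # choose the overlap pixel closest to the block's center
            i = np.argmin((rr - tile.shape[0] / 2) ** 2 +
                          (cc - tile.shape[1] / 2) ** 2)
            points.append((r0 + rr[i], c0 + cc[i]))
    return np.array(points)
```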
One or more tie points located in the overlap region 114 of the second image 120 are then determined 210. Each tie point correlates to a point in the first set. Because of misregistration between the
images, the tie point may not be in the same position in the overlap region 110 as the point from the first set to which it is correlated. Additionally, the displacement between a tie point and its
correlated point from the first set may vary from other tie points and their correlated points. It should be appreciated that by correlating points from the first image 100 located throughout overlap
region 110 to tie points in the second image 120, the correction of misregistration of the images is not limited to a seam in the middle of the overlap region 110 and the overlap region may be
irregularly shaped.
Next, a second set of points are determined 215 so that each point in the second set is located at a position in the overlap region 110 between a point in the first set and its correlated tie point.
By way of example, the second set of points may each be located halfway between a point in the first set and its correlated tie point. As will be described below, the second set of points will be
used to warp the first image 100 and the second image 120. By using points located between the correlated points of the images 100, 120, a fixed percentage of the warp will be applied to each image.
For instance, if the second set of points are located halfway between the correlated points, the warps will be applied 50% to each image.
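A minimal sketch of step 215, assuming points are stored as (row, col) arrays; frac=0.5 reproduces the halfway split described above:

```python
import numpy as np

# Place each second-set point a fixed fraction of the way from a first-set
# point toward its correlated tie point (0.5 = 50% of the warp per image).
def blend_points(first_pts, tie_pts, frac=0.5):
    first_pts = np.asarray(first_pts, dtype=float)
    tie_pts = np.asarray(tie_pts, dtype=float)
    return first_pts + frac * (tie_pts - first_pts)
```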
An algorithm that uses the second set of points is then applied 220 to warp the first image and the second image. The algorithm repositions at least a portion of the points in the first image 100 and
at least a portion of the points in the second image 120 so that the images are aligned. By way of example, the portion of points from the first image 100 may include points in the overlap region 110
and points located a predetermined distance from the overlap region 110. The algorithm may use the known displacements between the points in the second set and their correlating points from the first
set to warp the portion of points in the first image 100. Similarly, the known displacements between the points in the second set and their correlating tie points may be used to warp the portion of
points in the second image 120. The displacements at the correlated points may be used to calculate an interpolation for other points in the images to be warped. In other embodiments, the warp may be
applied entirely to one image and points may only be repositioned in that image.
FIG. 3 illustrates an exemplary warp algorithm that may be used by the method of FIG. 2. Optionally, outlying points from the first image 100 located in the non-overlap region of image 100 and/or
outlying points from the second image 120 located in the non-overlap region of image 120 may be added 305 to the second set of points used to warp the images. The outlying points may each be located
within a predetermined distance from the overlap region 110. As the outlying points do not have physical points in the alternate image to which they correlate, the displacement used in the warp
algorithm may be set to a predetermined displacement (e.g., 0). It should be appreciated that by adding outlying points, the warps may be feathered out from the overlap region so that repositioning
of points in the overlap region 110 does not cause each image to be misaligned at its overlap seam.
Next (or first, if optional points have not been added 305), a Delaunay triangulation is calculated 310 for points in the second set. An exemplary Delaunay triangulation is shown in FIG. 4. The
triangulation may be used to interpolate the displacement of points located between the points in the second set with known displacements to points in the first set and tie points.
Linear polynomial warps may then be calculated 315 for the images 100, 120 using the Delaunay triangulation. The warp may be created using a linear least squares approach which results in a linear
polynomial. A system that may be used to calculate the polynomial for the first image 100 is shown in Equation (1).
$$
\begin{bmatrix}
\sum_i s_i^2 & \sum_i s_i l_i & \sum_i s_i & 0 & 0 & 0 \\
\sum_i s_i l_i & \sum_i l_i^2 & \sum_i l_i & 0 & 0 & 0 \\
\sum_i s_i & \sum_i l_i & n & 0 & 0 & 0 \\
0 & 0 & 0 & \sum_i s_i^2 & \sum_i s_i l_i & \sum_i s_i \\
0 & 0 & 0 & \sum_i s_i l_i & \sum_i l_i^2 & \sum_i l_i \\
0 & 0 & 0 & \sum_i s_i & \sum_i l_i & n
\end{bmatrix}
\begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \end{bmatrix}
=
\begin{bmatrix}
\sum_i s_i x_i \\ \sum_i l_i x_i \\ \sum_i x_i \\ \sum_i s_i y_i \\ \sum_i l_i y_i \\ \sum_i y_i
\end{bmatrix}
\qquad (1)
$$
where $x_i$ is the x coordinate of the point in the first set of points from the first image correlated to vertex $i$ of a triangle, $y_i$ is the y coordinate of that point, $s_i$ is the x coordinate of vertex $i$, $l_i$ is its y coordinate, $n$ is the number of vertices summed over, and $u_0, \ldots, u_5$ are the coefficients used to compute the locations in the first image that correlate to locations in the overlap region 110.
The system can then be used to reposition the points in the first image located at the overlap region and within the predetermined distance from the overlap region. Thus, for any output location $(s, l)$ in the warped first image 100, the original location $(x, y)$ of the point is:

$$x = s \cdot u_0 + l \cdot u_1 + u_2 \qquad (2)$$

$$y = s \cdot u_3 + l \cdot u_4 + u_5 \qquad (3)$$
The point $(x, y)$ is moved to output location $(s, l)$. The same system can be used to warp the second image, with $x_i$ being the x coordinate of the tie point from the second image correlated to vertex $i$ of a triangle and $y_i$ being the y coordinate of the tie point. Thus, the known displacements between the points in the first set and the second set of points are used to warp the first image 100
and the known displacements between the tie points and the second set of points are used to warp the second image 120. In some embodiments, if the output location has an originating point outside the
boundaries of the image, the intensity of the output location may be set to a default value (such as a background intensity). It should be appreciated that some points may be moved to more than one
output location (e.g., because the warp stretches the image) and that some points may not be used in the warped image (e.g., because the warp compresses the image).
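Putting blocks 310 and 315 together, here is a hypothetical Python sketch using scipy's Delaunay triangulation and NumPy least squares; the toy points, the uniform displacement, and the use of scipy are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def fit_affine(sl, xy):
    """Least-squares u solving x = u0*s + u1*l + u2, y = u3*s + u4*l + u5."""
    A = np.column_stack([sl[:, 0], sl[:, 1], np.ones(len(sl))])
    ux, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
    uy, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)
    return np.concatenate([ux, uy])  # coefficients u0..u5

# Toy data: four second-set points with a uniform known displacement back to
# their first-set points (real displacements would vary per point).
second_set = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
first_set = second_set + np.array([1.5, -0.5])
tri = Delaunay(second_set)                                   # block 310
warps = [fit_affine(second_set[s], first_set[s])             # block 315
         for s in tri.simplices]
```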
Determine Tie Points
FIG. 5 illustrates an exemplary method that may be used by the method of FIG. 2 to determine the tie points 210. For a point in the first set, a patch comprising the point and a set of neighboring
points located within a first predetermined distance from the point is selected 505. By way of example, the first predetermined distance may be 2 points and all points in the first image 100 located
within a radius of 2 points from the point in the first set may be selected for the patch. It should be appreciated that by selecting a patch of points 210, the points surrounding the point from the
first set may be used to aid in the correlation of the point to a point in the second image 120.
Optionally, before proceeding further, the point may be removed from the first set and may not be correlated to a tie point if a predetermined percentage (e.g., 50%) of the patch appears to be a
shadow. This may provide for better correlation between the images 100, 120 by avoiding correlating shadows in one image 100 to a shadow in the second image 120. As shown in FIG. 6, shadow points may
be detected by creating 605 a histogram of frequencies of point intensities in the patch. Next, a bimodal coefficient for the histogram is calculated 610. By way of example, the bimodal
coefficient b may be calculated as shown in Equation (4):
$$b = \frac{m_3^2 + 1}{m_4 + \dfrac{3(n-1)^2}{(n-2)(n-3)}} \qquad (4)$$

where $m_3$ is the skew, $m_4$ is the kurtosis, and $n$ is the number of pixels in the patch.
If the bimodal coefficient does not indicate 615 a bimodal distribution, the method continues with block 510. If the bimodal coefficient does indicate a bimodal distribution 615 (e.g., the coefficient
is greater than 0.555), a mean point intensity value is calculated 620 for the patch. The mean intensity value is then examined to determine if it falls within a predefined shadow range 625 (e.g., a
range slightly above and below a shadow intensity value). This may prevent patches that are made up of normal light and dark elements from being discarded.
If the mean does not fall within the predetermined shadow range, the method continues with block 510. Otherwise, the point in the first set is probably a shadow and is removed from the first set 630
and a tie point from the second image is not correlated to this point. The method then continues at block 525 where a determination is made as to whether there are more points in the first set to
process. In alternate embodiments, other methods may be used to detect shadows in image 100 and shadow detection may be performed at a different point of the method used to determine tie points 210
or may not be performed at all.
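The shadow test of FIG. 6 might be sketched as follows, assuming Equation (4) is the standard bimodality coefficient and that scipy's excess kurtosis matches the patent's m4; the shadow range is an invented placeholder:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def looks_like_shadow(patch, shadow_range=(0.0, 40.0), b_threshold=0.555):
    """Blocks 605-625: flag a patch as shadow if its intensity histogram is
    bimodal (Equation 4) and its mean intensity sits in the shadow range."""
    v = np.asarray(patch, dtype=float).ravel()
    n = v.size
    m3 = skew(v)
    m4 = kurtosis(v)  # excess kurtosis, assumed to match the patent's m4
    b = (m3 ** 2 + 1) / (m4 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
    if b <= b_threshold:
        return False        # not bimodal: keep the point (block 615 -> 510)
    mean = v.mean()         # block 620
    return shadow_range[0] <= mean <= shadow_range[1]  # block 625
```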
Returning to FIG. 5, after the patch is selected 505 and optionally after shadow detection has been performed, a first potential tie point in the second image 120 is selected 510. By way of example,
the first potential tie point may correspond to the same location in the overlap region 110 as the point in the first set. Next, a set of potential tie points including the first potential tie point
and one or more additional potential tie points located within a second predetermined distance surrounding the first potential tie point is determined 515. The second predetermined distance may be
set to a maximum number of points the images are expected to be misregistered by. All points within the range of the second predetermined distance may be points that could potentially correlate to the
point from the first set.
For a potential tie point, a correlation is calculated 520 between points in the first patch and the potential tie point and its neighboring points in the second image located within the first predetermined distance from the potential tie point. If there are more potential tie points 525, the correlation is repeated using the new potential tie point and its neighboring points. Thus, a correlation value
between the first patch and patches in the second image surrounding potential tie points is obtained for each potential tie point.
If a good correlation for the first patch exists 530, a tie point corresponding to the potential tie point with the best correlation is selected 535 to correlate to the point in the first set. If a
good correlation does not exist 530, the point may be removed from the first set 540 and a correlating tie point is not found.
If there are more points 545 in the first set that need to be correlated to a tie point, processing continues by selecting a patch for the next point 505. Otherwise, the method ends 550.
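An illustrative brute-force version of this search, assuming normalized cross-correlation as the correlation measure (the patent does not commit to one):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_tie_point(img2, patch, center, search=8):
    """Blocks 510-535: score every candidate within `search` of `center`."""
    r = patch.shape[0] // 2
    best, best_score = None, -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = center[0] + dr, center[1] + dc
            if rr - r < 0 or cc - r < 0:
                continue  # candidate window runs off the image
            cand = img2[rr - r:rr + r + 1, cc - r:cc + r + 1]
            if cand.shape != patch.shape:
                continue
            score = ncc(patch, cand)
            if score > best_score:
                best, best_score = (rr, cc), score
    return best, best_score
```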
FIG. 7 illustrates an exemplary method that may be used by the method of FIG. 5 to determine if a good correlation between a point in the first set and a tie point exists 530. The correlations
obtained for the potential tie points are analyzed 705. If there is not at least one correlation value for a potential tie point that exceeds a threshold 710 (e.g., 60%), a determination is made that
a good correlation does not exist 720.
Optionally, if at least one correlation value exceeds the threshold, a check may be performed to determine if there are a plurality of similar correlations for a plurality of potential tie points
that exceed the threshold 715. This may provide a better warping of the images as it may help prevent the use of bad correlations in the warp algorithm that result from constantly changing features
(e.g., waves) or features that remain similar throughout the image (e.g., roads). If a plurality of similar correlations exceed the threshold, a determination is made that a good correlation does not
exist 720. Otherwise, a determination is made that a good correlation exists 725. By way of example, a plurality of randomly distributed potential tie points with similar correlations exceeding the
threshold may indicate the point is located within a wave or other constantly changing feature. Tie points with similar correlations that are positioned linearly to each other may indicate a feature,
such as a road, that also may result in a bad correlation. It should be appreciated that other embodiments may include additional checks to determine if a good correlation between a point from the
first set and a potential tie point exists.
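One possible reading of the FIG. 7 check in Python; the margin and minimum-distance parameters are invented for illustration:

```python
import numpy as np

def good_correlation(scores, positions, threshold=0.6, margin=0.02, min_dist=3):
    """Reject if no score clears the threshold, or if a far-away rival scores
    almost as well (waves/roads give many near-identical matches)."""
    scores = np.asarray(scores, dtype=float)
    if scores.max() < threshold:
        return False  # block 710 -> 720
    best = int(np.argmax(scores))
    for i, s in enumerate(scores):
        if i == best or s < scores[best] - margin:
            continue
        dx, dy = np.asarray(positions[i]) - np.asarray(positions[best])
        if np.hypot(dx, dy) >= min_dist:
            return False  # block 715 -> 720: ambiguous correlation
    return True  # block 725
```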
Removing Tie Points
In one embodiment, before the second set of points is determined 215, one or more tie points may be removed from the set of tie points. This may help prevent the use of bad correlations due to noise,
shadow, satellite angle, or other reason. FIG. 8 illustrates an exemplary method that may be used to remove tie points 800. It should be appreciated that in alternate embodiments, additional or
different methods may be used to remove tie points that may have bad correlations.
The method begins with selecting a tie point for analysis 805. A plurality of tie points neighboring the selected tie point are then analyzed 810. If the results of the analysis indicate the tie
point is not similar to the neighboring tie points 815, it is removed from the set of tie points 825.
In one embodiment, the neighboring tie points may be analyzed 810 by calculating an average (e.g., median) vertical displacement and an average horizontal displacement between the neighboring tie
points and points in the first set correlated to the neighboring tie points. A determination may be made that the tie point is not similar to its neighbors 815, if the difference between the average
horizontal displacement and the horizontal displacement between the tie point and the point in the first set to which it was correlated exceeds a predetermined threshold. Similarly, a determination
may be made that the tie point is not similar to its neighbors 815, if the difference between the average vertical displacement and the vertical displacement between the tie point and its correlating
point in the first set exceeds the predetermined threshold or a second predetermined threshold.
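A sketch of this median-displacement test; the tolerance value is an assumption:

```python
import numpy as np

def keep_tie_point(disp, neighbor_disps, tol=2.0):
    """Keep a tie point only if its (dx, dy) displacement stays within `tol`
    of the median displacement of its neighbors, per component."""
    med = np.median(np.asarray(neighbor_disps, dtype=float), axis=0)
    return abs(disp[0] - med[0]) <= tol and abs(disp[1] - med[1]) <= tol
```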
In another embodiment, the neighboring tie points may be analyzed 810 by calculating an average angular variance between the neighboring tie points and points in the first set correlated to the
neighboring tie points. The magnitude of the displacement between the selected tie point and its correlating point from the first set may also be calculated since a large difference in angle may be
less important if the magnitude of the displacement is small. Finally, the magnitude and average angular variance may be used to compare the selected tie point to its neighbors. Equation 5
illustrates an exemplary equation that may be used to compare the selected tie point to its neighboring tie points using the magnitude and average angular variance:
where dx=horizontal displacement between the selected tie point and its correlating point from the first set, dy=vertical displacement between the selected tie point and its correlating point, mag=
magnitude of the displacement, cosmean=average cosine variance of neighboring tie points, and sinmean=average sine variance of neighboring tie points. If dA is greater than a specified tolerance, a
determination 815 is made that the tie point is not similar to its neighbors. It should be appreciated that in other embodiments, statistical techniques or equations different from the two
embodiments described above may be used to compare the selected tie point to its neighboring tie points and determine if the points are similar.
If the tie point is similar to its neighbors 815, the tie point is kept 820. The method then continues by determining if there are more tie points that need to be analyzed 830. If there are
additional tie points, processing returns to selecting the next tie point for analysis 805. Otherwise, the method ends 835.
Multi-Resolution Correlation
In some embodiments, a multi-resolution correlation process may be used to determine tie points 210. This may provide for a more efficient and faster correlation between points in the first set and
tie points. FIG. 9 illustrates an exemplary method that uses multi-resolution correlation 900.
Before selecting the first potential tie point 510 for a point in the first set, the resolution of the patch is reduced by a resolution parameter 905. By way of example, the resolution may be reduced
by averaging blocks of points or eliminating points. The resolution of at least a portion of the second image that the patch will be correlated over is also reduced by the resolution parameter. The
reduced resolution patch is used to determine 910 a correlating tie point (if any). The correlating tie point may be determined by using a method such as that described in FIG. 5.
If a good correlation exists 530 and a tie point is selected 535, the resolution of the patch and the second image are restored to the original resolution 915. The tie point found in the previous
iteration is selected as the first potential tie point 510. The maximum possible misregistration for this tie point is equal to two times the size of the resolution parameter. Thus, the second
predetermined distance is set to this maximum possible misregistration 920.
A correlating tie point may then be determined 925 using the above-described parameters by repeating the determining the set of potential tie points 515, the calculating the correlation 520, and
selecting the tie point with the best correlation 535. The reduced correlation method may be repeated for additional points in the first set. It should be appreciated that tie points may also be
removed from the set of tie points as described in FIG. 7 before determining tie points with the full resolution. Thus, the number of correlations that are performed may be reduced because tie points
with bad correlations are removed before the full resolution correlation is performed.
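A compressed sketch of the FIG. 9 flow; the block-averaging downsample is one plausible reduction (the text also allows eliminating points), and the function names are hypothetical:

```python
import numpy as np

def downsample(img, k):
    """Reduce resolution by averaging k-by-k blocks (block 905)."""
    r, c = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:r, :c].reshape(r // k, k, c // k, k).mean(axis=(1, 3))

def refine_window(coarse_hit, k):
    """Blocks 915-920: map a coarse tie point back to full resolution and
    bound the residual search radius by twice the resolution parameter."""
    first_guess = (coarse_hit[0] * k, coarse_hit[1] * k)
    return first_guess, 2 * k
```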
Coarse Mapping
FIG. 10 illustrates an exemplary method that may be used to perform a coarse mapping 1000 of the images 100, 120 before determining tie points 210. A set of user correlations is received 1005 from
the user. The set of user correlations correlate a set of first points in the overlap region 112 of the first image 100 to initial tie points in the second image 120.
The set of user correlations is then used to perform 1010 a coarse mapping between points in the overlap region of the images. The coarse mapping may be computed by calculating a Delaunay
triangulation of the initial tie points (or points located at a position between the first points provided by the user and the initial tie points) and using the triangulation to compute a linear
coarse warp. One or both of the images are then warped in accordance with the linear coarse warp. In alternate embodiments, other techniques may also be used to perform 1010 a coarse mapping of the
images. After the image or images are warped, a method similar to that described with reference to FIG. 5 may be used to determine tie points 1015. Thus, the user correlations may be used to improve
the accuracy and/or speed of the final warping of the images.
In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate embodiments, the methods may be performed in a
different order than that described. Additionally, the methods described above may be embodied in machine-executable instructions stored on one or more machine-readable mediums, such as disk drives or
CD-ROMs. The instructions may be used to cause the machine (e.g., computer processor) programmed with the instructions to perform the method. Alternatively, the methods may be performed by a
combination of hardware and software.
While illustrative and presently preferred embodiments of the invention have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied
and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.
User Justin Palumbo
bio website math.ucla.edu/~justinpa
location UCLA
UCLA grad student studying set theory.
Dec Why does the Solovay-Tennenbaum theorem work?
3 comment With that in mind I'd suggest that a good 'next' iterated forcing argument to look at is Baumgartner's construction of a model where all $\aleph_1$-dense sets of reals are isomorphic, since the iteration only uses ccc forcings, but the result doesn't itself follow from MA (the original paper is freely available at the FM archive matwbn.icm.edu.pl/ksiazki/fm/fm79/fm79111.pdf, and here's an expository note presenting the same result: scholarworks.sjsu.edu/etd_theses/3834)
Dec Why does the Solovay-Tennenbaum theorem work?
3 comment When I'm in situations where I feel like I understand the mechanism of a proof but don't 'grok' it, I am reminded of the quote of von Neumann: "In mathematics you don't understand things. You just get used to them."
Nov Which of these relations on partial orders allows us to identify forcing equivalence?
26 comment I see, so it looks like the map $q$ to $[q\in\tau]$ isn't even well-defined in that case, since forcing-wise RO($\mathbb{Q}$) consists only of nonzero elements. If it is well-defined it's a complete embedding, and that will happen if (and only if) for any $q$ we can find a $p$ forcing $q$ into $\tau$.
Nov Which of these relations on partial orders allows us to identify forcing equivalence?
26 comment The paper "On the Alaoglu-Birkhoff equivalence of posets" by Todorcevic and Zapletal seems relevant, since it discusses the relationship among several natural preorderings on posets,
including $\lhd_1$, Tukey reducibility and others. The paper is available open access at projecteuclid.org/…
Nov Which of these relations on partial orders allows us to identify forcing equivalence?
26 comment If I'm interpreting the definitions correctly, I think $\lhd_1$ and $\lhd_4$ are equivalent as long as the forcing notions in question are separative. Certainly $\lhd_4$ implies $\lhd_1$
since a separative forcing $\mathbb{P}$ is forcing equivalent to its Boolean completion RO($\mathbb{P}$). And if $\mathbb{Q}\lhd_1\mathbb{P}$ then in particular there is $\tau$ a RO($\
mathbb{P}$)-name for a RO($\mathbb{Q}$)-generic. The map sending q in RO($\mathbb{Q}$) to the Boolean value $[q\in\tau]$ (calculated in RO($\mathbb{P}$)) is a complete embedding.
Sep Theory of (definable) ideals on a multi-dimensional countable set
17 comment Any/all of the above. All the structural results in the literature on 'definable' ideals I know of would follow from determinacy. I'm willing to make large cardinal assumptions here, so I
have a fairly large umbrella in mind. But if something about multidimensional ideals can be extracted from stronger definability assumptions I'd be happy to hear about them..
17 asked Theory of (definable) ideals on a multi-dimensional countable set
Jun Forcing over set theory versus forcing over arithmetic
27 comment Can you recommend a good general reference for forcing over models of arithmetic? (for someone who knows the set theory side, but not the arithmetic side very well)
May Mathias forcing with Ramsey ultrafilters, and Cohen reals
28 comment Ah right, yes I was mentally conflating bounded and finite. (The Cohen real I described only needs the $A$ in $\mathcal{U}$ to have unbounded intersection with the $A_k$). Thanks
May Mathias forcing with Ramsey ultrafilters, and Cohen reals
14 comment in the meantime I've 'strengthened' the question by also asking about p-points, where it isn't clear that the forcing has the Laver property (and I would guess, perhaps, it does not)
May Mathias forcing with Ramsey ultrafilters, and Cohen reals
14 comment Ramiro, your suggestion seems to be right; I think the same diagonal arguments that show vanilla Mathias forcing has the Laver property shows that Mathias forcing relative to a Ramsey
ultrafilter does.. if you wanted to add your suggestion as an answer I would certainly upvote it...
MathGroup Archive: March 2007 [00030]
Re: Re: Re: Hold and Equal
• To: mathgroup at smc.vnet.net
• Subject: [mg73806] Re: [mg73770] Re: [mg73739] Re: [mg73715] Hold and Equal
• From: Carl Woll <carlw at wolfram.com>
• Date: Thu, 1 Mar 2007 06:08:12 -0500 (EST)
• References: <200702261112.GAA27677@smc.vnet.net> <200702271044.FAA23846@smc.vnet.net> <200702280928.EAA24217@smc.vnet.net>
Murray Eisenberg wrote:
>OK, that does what I explicitly asked for, but what I asked for was an
>oversimplified case of what I actually wanted...
>The trouble with my example is that the left-hand side of the
>mathematical equality is an expression that Mathematica does not
>automatically "evaluate". But suppose the left-hand side were, say,
>Integrate[x^2,x]? Then when function step is applied to that, the
>integral is actually evaluated on both sides of the equality produced.
>Moreover, if I try, say,
> step[Hold[Integrate[x^2,x]]]
>then Hold appears on both sides of the resulting equality.
>What I'm after is something that will allow me to show an equation of
>the form, say,
> integral = evaluatedIntegral
>where the left-hand side uses the integral sign and a "dx" (as an
>unevaluated expression), the right-hand side evaluates that integral,
>and the entire expression appears in traditional mathematical form.
Make step HoldFirst (or HoldAll)
SetAttributes[step, HoldFirst]
does what you want, although the explicit inclusion of Expand in the
definition of step isn't necessary.
Carl Woll
Wolfram Research
>Carl Woll wrote:
>>Murray Eisenberg wrote:
>>>How can I produce in an Output cell (under program control) an
>>>expression like the following,
>>> (a+b)^2 = a^2+ 2 a b + b^2
>>>where instead of the usual Equal (==) I get a Set (=), as in traditional
>>>math notation? I want to input the unexpanded (a+b)^2 and have the
>>>expansion done automatically.
>>>Of course, I can try something like the following:
>>> (a+b)^2 == Expand[(a+b)^2]
>>>So how do I convert the == to =? Of course
>>> ((a + b)^2 == Expand[(a + b)^2]) /. Equal -> Set
>>>gives a Set::write error. And
>>> (Hold[(a + b)^2 == Expand[(a + b)^2]]) /. Equal -> Set
>>>doesn't actually evaluate the Expand part and leaves the "Hold" wrapper.
>>How about using HoldForm?
>>step[x_] := HoldForm[x = #] &[Expand[x]]
>>(a+b)^2=a^2+2 b a+b^2
>>Carl Woll
>>Wolfram Research
A new thought on healing
So, apparently a large portion of the base doesn't like healing surges. Yet proportionate healing makes a lot of sense and takes care of a lot of problems. What if the character sheet did the math for you?
Your hit points are divided into 5 stages. Divide your hp into five, rounding down (min. 1). So your hp (here, 22 max) may appear as follows:
HP Stage
22 5 (max)
17 4
12 3
8 2
4 1
Then the cure spells would say:
Cure Light Wounds: Heal 1d8+4 hp or restore your hp to the first stage value, whichever is greater.
Cure Moderate Wounds: Heal 2d8+4 hp or restore your hp to the second stage value, whichever is greater.
Cure Serious Wounds: Heal 3d8+4 hp or restore your hp to the third stage value, whichever is greater.
Cure Critical Wounds: Heal 4d8+4 hp or restore your hp to the fourth stage value, whichever is greater.
Inflict spells would inflict that much damage or reduce your hp by one or more stages.
Calculating hp stages isn't very complicated, since you only do it when you level up. Electronic sheets would calculate it for you. It provides a framework for wound charts (which I hate but others
like). It allows for effects based on your stage as well. (I.e., this effect kills creatures who have fewer hp than their first stage value.)
#2 Wed, 02/06/2013 - 10:54
Calculating hp stages isn't very complicated, since you only do it when you level up. Electronic sheets would calculate it for you. It provides a framework for wound charts (which I hate but
others like). It allows for effects based on your stage as well. (I.e., this effect kills creatures who have fewer hp than their first stage value.)
An interesting alternate mechanic, but I think it would tie in better with a wound chart than as a replacement for healing surges.
As is, hit dice scale well (which is what this looks like it is trying to do, mainly) but it doesn't have the major benefit of healing surges in that it doesn't limit daily magical healing.
And if all the people that hate healing surges start liking this, I don't even...
#3 Wed, 02/06/2013 - 14:13
Calculating hp stages isn't very complicated, since you only do it when you level up. Electronic sheets would calculate it for you. It provides a framework for wound charts (which I hate but
others like). It allows for effects based on your stage as well. (I.e., this effect kills creatures who have fewer hp than their first stage value.)
I like it, mostly because it makes healing scale while at the same time insuring that minor healing isn't enough for high level tough characters. It gives space for clerics to scale healing a bit,
without healing becoming over powered.
it doesn't have the major benefit of healing surges in that it doesn't limit daily magical healing.
Healing surges in 4e are not much of a limit anyway; they are easy to move between characters via ritual and many characters will have more healing surges than they can practically use in a day. The
more practical limit is simply the limited number of daily healing powers you can get.
I always had the impression that the number of healing surges was set fairly high with the intent of using them to power magic items and other non-healing effects. In practice this didn't turn out to
work well, so they stopped using them for anything but healing, without ever reducing the number of healing surges.
#4 Wed, 02/06/2013 - 14:36
I like it, mostly because it makes healing scale while at the same time insuring that minor healing isn't enough for high level tough characters. It gives space for clerics to scale healing a
bit, without healing becoming over powered.
Cure light heals one surge worth; cure moderate heals two surges; cure serious heals 3 surges worth; cure critical heals to full (oh look, very close to exactly how that works in 4e).
Healing surges in 4e are not much of a limit anyway
From what I understand, if you run the 4e modules WotC has published, this is very true. In my personal experience, it's an extremely limiting factor (and then some), perhaps because I try to really
challenge my players. I've gone over earlier in another thread why that works; even on top of that, you can say "healing surges in 4th edition didn't work because you had too many of them" and then
simply reduce the number of healing surges in a game. This is a great, obvious, transparent way to easily scale the difficulty of a game.
they are easy to move between characters via ritual and many characters will have more healing surges then they can practically use in a day.
If a group has 40 healing surges altogether, it doesn't matter if one person has 12 and another 8, etc etc. If you actually wear your group down properly - in the manner I think the game intended,
when it wasn't on easy mode - then your group will walk into encounters with the defender at 3 surges, a striker at 1, another striker lucky at 2, and the healer only has 2.
The more practical limit is simply the limited number of daily healing powers you can get.
Practical? No. Traditional? Yes. Better? No. What happens if you have two healers in the group? Suddenly your group can go, more or less literally, twice as long before a long rest. How do you
balance out consumable healing items like healing potions?
they stopped using them for anything but healing, without ever reducing the number of healing surges
I'm not sure who "they" are. The writers of the 4e adventure books? There are a number of class features, abilities, magic items, and rituals that involve surges. Moreover, there are monsters that
deal damage in the form of removing surges. Related to why healing surges are better than just limited healing powers, you can use surges in a half-dozen utilitarian ways. It speeds book-keeping,
enhances game play, and adds in tactical and character choices into the game.
Am I the only 4e DM that had traps that dealt damage in surges? Included monsters (especially life-sucking undead) that took away surges? Diseases/curses that reduce surge value? Enforced using surges for elixirs? Am I the only 4e DM that properly challenged his players, so that they actually needed as many surges as they had?
I'm not being satirical here: Am I the only 4e DM that challenged his players?
#5 Wed, 02/06/2013 - 14:38
And if all the people that hate healing surges start liking this, I don't even...
There were many, many reasons to dislike healing surges which have nothing to do with healing being scaled to max HP. Most notably, healing surges were tied into the heavy abstraction of HP, to the
point where you could just shrug and recover HP. (To contrast, Next currently requires the use of a healer's kit in order to recover HP - implying that all lost HP are actual wounds.)
Merely using one-quarter or one-fifth max HP as a convenient base number for magical healing is a completely separate point that has very little to do with any prior implementation of healing surges.
So yes, I'm a big fan of this sort of healing module, even though I hate healing surges.
#6 Wed, 02/06/2013 - 14:39
I like it. Six is a good number, as well. Four would be too few, eight too many.
Straightforward, flexible, can support modification.
I like it.
#7 Wed, 02/06/2013 - 14:52
Most notably, healing surges were tied into the heavy abstraction of HP, to the point where you could just shrug and recover HP. (To contrast, Next currently requires the use of a healer's kit in
order to recover HP - implying that all lost HP are actual wounds.)
If you're using hit points as a direct wound system, and this makes sense to you, I don't know what to tell you. D&D has used an abstract hit point system since 2e, and this has been written and
explained explicitly in the core books in every edition since then. I'm pretty sure it was that way in 1e too, but I never played it so couldn't expressly say so.
As far as shrugging and healing, are you perhaps confusing the "Second Wind" mechanic with healing surges? I don't see what you don't like about that concept; the hero coming thru after being beaten
down is a very common trope, in both film and literature.
using one-quarter or one-fifth max HP as a convenient base number for magical healing is a completely separate point that has very little to do with any prior implementation of healing surges.
Actually using healing surges to scale healing was a major point of why to even use them.
So yes, I'm a big fan of this sort of healing module, even though I hate healing surges.
You don't like a limit on daily magical healing.
I still have yet to find a good mechanical reason why surges are bad, other than "I really don't like them because of the way I choose to view them".
#8 Wed, 02/06/2013 - 15:01
I like it. Six is a good number, as well. Four would be too few, eight too many.
Six was my other option, mostly because stage three becomes the bloodied value which seems too useful to abandon. The only problem with six is that it becomes a bit too much for low-level
characters, many of whom might only have six or eight hp. But I'm not totally opposed to having six stages.
Four could be fine, but there's no way to distinguish all the healing spells. The healing spells have a pretty nice progression:
1. Cure Light
2. Cure Moderate
3. Cure Serious
4. Cure Critical
5. Mass Cure
6. Heal
7. Improved Mass Cure
8. ??
9. Mass Heal
With only four stages, there's nothing more for Cure Critical to do. (Or, more precisely, Cure Critical and Heal would do the same thing.)
#9 Wed, 02/06/2013 - 15:03
I'm not seeing the formula from your example wrecan. Shouldn't each stage have roughly the same HP amount? It seems like you are having each stage ~20% of your max HP (round down), with the left over
being evenly distributed among the top stages.
#10 Wed, 02/06/2013 - 15:03
Perhaps stages could be limited by Hit Dice. As in, at 1 HD you only have one stage, and CLW heals you to full (which it probably does already, just by the numerical heal). Two HD gets you two
stages, and so on, until you reach some maximum number of stages.
#11 Wed, 02/06/2013 - 15:05
I'm not seeing the formula from your example wrecan. Shouldn't each stage have roughly the same HP amount? It seems like you are having each stage ~20% of your max HP (round down), with the left
over being evenly distributed among the top stages.
This is just due to rounding. All the stages are either 4 or 5, and whether it's 4 or 5 is a matter of rounding.
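For what it's worth, here is one reading of wrecan's rounding in Python (assuming the remainder goes to the top stages, which matches the 4/4/4/5/5 pattern above):

```python
def stage_thresholds(max_hp, stages=5):
    # max_hp // 5 per stage, remainder spread over the top stages,
    # so every stage is within 1 hp of the others.
    base, extra = divmod(max_hp, stages)
    sizes = [base + (1 if i >= stages - extra else 0) for i in range(stages)]
    out, total = [], 0
    for s in sizes:
        total += s
        out.append(total)
    return out  # cumulative hp values of stages 1..5

print(stage_thresholds(22))  # [4, 8, 12, 17, 22]
```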
#12 Wed, 02/06/2013 - 15:08
I still have yet to find a good mechanical reason why surges are bad, other than "I really don't like them because of the way I choose to view them".
As stated previously, every edition prior to 4E has allowed the option of viewing HP as wounds. In spite of the handwave that HP are supposed to represent multiple factors, most evidence within each
edition had shown clearly that every hit created some physical damage.
Except for 4E, where you could just "spend a Healing Surge" with no justification - even in the middle of combat - and suddenly you're up to full.
If the whole point of choosing a system is to play one which allows you to tell the kinds of stories you want, while deviating as little as possible from what you would expect to happen within the
confines of that genre, then yes, I don't like Healing Surges because they conflict with the way I choose to view the game.
And to try and keep this on topic, I would point out that the traditional pitfall of scaling CLW = 1/5 max HP, and CSW = 3/5 max HP is that low-level characters have very few HP and 1/5 of 8 is only
1; offering the minimum of 2d8+4 or 40% max HP is a great workaround that keeps lower-level spell slots valuable against high-HP targets.
#13 Wed, 02/06/2013 - 15:09
If you're using hit points as a direct wound system, and this makes sense to you, I don't know what to tell you.
It's a matter of degree. 4e was my favorite edition to date, but the level of abstraction was definitely higher than in prior editions and healing surges were a part of that.
Like others, I also found that surges/day offered little to no true limit on the amount of healing and I sent some pretty severe encounters at my players. But between Comrade's Succor and defenders
and controllers doing their jobs well, people never ran low on surges.
And the game worked fine. So, as far as I'm concerned, it's not necessary for 5e's surge analog to provide a limit on healing. The healing kits and spell slots do that fine. I don't see a lot of
people complaining that there's too much healing. As long as they avoid wands of curing, it should all be good.
As far as shrugging and healing, are you perhaps confusing the "Second Wind" mechanic with healing surges? I don't see what you don't like about that concept
Again, at some point, hp loss does represent actual wounds -- particularly for attacks with carriers, like poison -- and it can be difficult to reconcile second wind with the healing of actual wounds.
I personally liked it, but I know a lot of people who did not. That said, changing HD to surges is very easy. Just eliminate the need for healing kits to spend a HD to heal, and limit your HD to
the size of your HD plus three. I.e., if your HD is d8, you get 11 uses of HD.
I still have yet to find a good mechanical reason why surges are bad
There's nothing mechanically wrong with surges. It's an aesthetic preference. And that's just as valid as a mechanical complaint.
#14 Wed, 02/06/2013 - 15:14
Perhaps stages could be limited by Hit Dice. As in, at 1 HD you only have one stage, and CLW heals you to full (which it probably does already, just by the numerical heal). Two HD gets you two
stages, and so on, until you reach some maximum number of stages.
That really would make it complicated and I'm not sure what it accomplishes.
#15 Wed, 02/06/2013 - 15:15
It accomplishes not having the system break completely at very low numbers of hitpoints.
#16 Wed, 02/06/2013 - 15:16
How does the system break with low numbers of hp? The minimum hp that most PCs should have is 5 (wizard with 8 Con). Which means 1 hp/stage at level 1. (Another reason I went with 5 stages instead
of 6.)
#17 Wed, 02/06/2013 - 15:21
Well, it breaks in that having stages isn't useful at low levels. Nothing built around stages would interact with them at all. You'd still have hard-coded numeric thresholds, from the 1d8+X for CLW
to the 3d8 of Sleep.
This system works great as soon as you have enough hitpoints for a stage to be a meaningful contributor, but as the current hitpoints work that would only happen at high level.
#18 Wed, 02/06/2013 - 15:28
I think I'd rather just accomplish the proportionality part of your proposed change by having Cure spells use the same sized dice as the character.
I.e. - if you use CLW on a fighter, you heal for 1d10+4 while if you use CLW on a wizard you heal for 1d6+4.
#19 Wed, 02/06/2013 - 15:29
Well, it breaks in that having stages isn't useful at low levels. Nothing built around stages would interact with them at all. You'd still have hard-coded numeric thresholds, from the 1d8+X for
CLW to the 3d8 of Sleep.
Which is where a "greater of" comes in.
But really, the 'stages' should be based around intervals of 10%. Dividing HP by ten and then multiplying that is much faster than dividing by five.
Thus we get a CL* that's XdY or (Z*10)% of max hp, whichever is greater.
Cure Light Wounds: greater of 1d8 or 20% max.
#20 Wed, 02/06/2013 - 15:29
I like the HP/damage model used in the board games a lot. It keeps the numbers small (good for the younger/math-challenged players), and would make combat that much faster, since most hits will do 1
damage, with 2h weapons dealing 2 damage, and a crit could simply be +1 damage. You won't gain a HP every level, but you won't need to.
#21 Wed, 02/06/2013 - 15:32
If you want proportional healing, Next already has a mechanic in place: hit dice. Just change the healing spells to the following:
Cure Light Wounds: The target regains hit points as if they had spent a hit die.
Cure Moderate Wounds: The target regains hit points as if they had spent two hit dice.
If you want, give each spell a +4 or +8 to the amount healed (just like the current versions).
But honestly, I'm not sure why proportional healing is so important. The high HP characters already have an advantage: they have more hit points. I don't see why they also need to recover more HP
from magical healing. The way I look at it is that damage isn't proportional, and so healing shouldn't be either. When a goblin hits you, you take 1d6-1 damage regardless of how many hit points you have.
The trick is to remember that hp are abstract, and that real wounds don't show up until you are below 0. So the wizard who is healed from 1 hit point back to full isn't having his bones mended and
organs put back in; his scrapes and bruises are healed, but more importantly he recovers his stamina and energy. He can once more dodge attacks to turn a killing blow into a grazing hit.
#22 Wed, 02/06/2013 - 15:34
If you want proportional healing, Next already has a mechanic in place: hit dice. Just change the healing spells to the following:
Cure Light Wounds: The target regains hit points as if they had spent a hit die.
Cure Moderate Wounds: The target regains hit points as if they had spent two hit dice.
Argh! You saw what I was doing there.
#23 Wed, 02/06/2013 - 17:40
I was thinking the same thing as Carl, since the cleric's capacity to heal could be based on the recipient's ability to heal. You could take this further in a low magic or no magic campaign, where healing just restores the hit dice used to heal, versus healing coming from an external source, i.e. a deity.
As to the way 4E represented hit points versus previous editions, I see no difference, except 4E spread the healing resources around, and allowed the characters to recover themselves. It had more
flexibility to deal with low magic or no magic at all.
#24 Wed, 02/06/2013 - 17:45
Here's what I don't get about cure light/moderate/etc. Higher HP totals are supposed to represent better ability to avoid taking damage: if one guy has twice as many HP as the other, it's not
because he can survive getting his liver skewered twice as many times; it's because he's a more skilled combatant who can dodge and parry and what have you to the point where the skewer is only
deep and in half as vital a location. This is the only way to avoid rapid HP growth turning you into the black knight, and I'm fine with it. This is why it makes sense for healing to be
proportional, so that CLW heals an equal percentage of a fighter's and a wizard's health, because it's the percent that actually measures physical damage (to the extent there is any physical damage
in D&D).
But if healing is going to be proportional between the fighter and the wizard, why isn't it proportional between the level 1 fighter and the level 20 fighter? Either the level 20 fighter is indeed
the black knight, absorbing a quiver full of arrows and a dozen skewered organs before falling, or he's taking minimal to nil physical damage from the 5-10 HP that would have killed his lesser
brethren. If 20 out of 50 HP is the same amount of physical damage as 4 out of 10, why isn't CLW healing 20 for the high level fighter? If a healing spell will heal 20/50 for a level 5 fighter and
10/25 for a level 5 wizard, why doesn't it heal 20/50 for a level 10 wizard?
The only answer I can come up with is, "because we need 1st level spells to stop being useful to high level clerics." I'm not happy with that answer. Does someone have a better one?
#25 Wed, 02/06/2013 - 17:55
Here's what I don't get about cure light/moderate/etc. Higher HP totals are supposed to represent better ability to avoid taking damage: if one guy has twice as many HP as the other, it's not
because he can survive getting his liver skewered twice as many times; it's because he's a more skilled combatant who can dodge and parry and what have you to the point where the skewer is
half as deep and in half as vital a location. This is the only way to avoid rapid HP growth turning you into the black knight, and I'm fine with it. This is why it makes sense for healing to be
proportional, so that CLW heals an equal percentage of a fighter's and a wizard's health, because it's the percent that actually measures physical damage (to the extent there is any physical
damage in D&D).
But if healing is going to be proportional between the fighter and the wizard, why isn't it proportional between the level 1 fighter and the level 20 fighter? Either the level 20 fighter is
indeed the black knight, absorbing a quiver full of arrows and a dozen skewered organs before falling, or he's taking minimal to nil physical damage from the 5-10 HP that would have killed his
lesser brethren. If 20 out of 50 HP is the same amount of physical damage as 4 out of 10, why isn't CLW healing 20 for the high level fighter? If a healing spell will heal 20/50 for a level 5
fighter and 10/25 for a level 5 wizard, why doesn't it heal 20/50 for a level 10 wizard?
The only answer I can come up with is, "because we need 1st level spells to stop being useful to high level clerics." I'm not happy with that answer. Does someone have a better one?
Yes, I do. Think of it this way: a strong orc attacking with a sword does 1d8+4 damage. When you get hit, you take between 5 and 12 damage. It could be that the damage is enough to reduce you to 0
or lower: you have taken a big hit, enough to drop you. You might also have enough HP that the hit simply takes you a step closer to being knocked down.
Now the cleric heals you with CLW. You regain 1d8+4 hp. No matter how many HP you have, you can now take another hit from that orc.
Note that this works even if the characters were knocked out, and even if they have very different HP totals. The 1st level fighter and 20th level fighter will both be able to take one extra hit
from that orc. The difference between these two fighters is that the level 20 guy can "avoid" a lot more attacks before he goes down. He might be covered in scratches, bruises, small cuts, etc.,
but he still reacts as well as an unharmed level 1 fighter.
The trick is to avoid getting caught up in the name of the spell, and to remember that HP are very abstract and incorporate the concept of being hit.
#26 Wed, 02/06/2013 - 17:59
I would prefer a return to something similar to surges (but call them stamina because anything associated with 4e is badwrongfun) where each one spent heals for 25% of max HP.
Make in combat healing exceptionally rare, so that it is not required for a party to succeed. Then have cure spells be one of the only ways to spend stamina mid-combat. Cure light is a level 1 spell
and allows you to spend 1 stamina. Cure moderate is a level 3 spell and allows your target to spend 2 stamina. Cure serious is a level 6 spell and allows your target to spend 3 stamina. Heal is a
level 9 spell and allows your target to spend 4 stamina (bringing someone from 0 to full HP).
This makes magical healing based on the % of the target's HP; it makes magical healing powerful, but it also makes magical healing not necessarily mandatory for a group's success, as the game should be
designed around the assumption of no in combat healing.
#27 Wed, 02/06/2013 - 18:25
But if healing is going to be proportional between the fighter and the wizard, why isn't it proportional between the level 1 fighter and the level 20 fighter?
That's one of the major arguments in favor of the "Black Knight" model. If an arrow to the torso is 8 damage, then healing 8 damage is un-doing a wound equivalent to an arrow to the torso. The high
level fighter can take 20 arrows to the torso, for 160 damage, and each cure light wounds un-does one of those injuries, and cure critical wounds un-does four of them. It's all very internally consistent.
The trade-off is that you have to imagine a high-level fighter powering on through twenty arrow wounds for ~30 seconds without succumbing to those injuries (adrenaline?) and then you have to ignore
the possibility of bleeding out (but then, it's a pretty safe assumption that tending to your wounds is an immediate priority after combat).
Considering that this is the point where a wizard can cast sky castle, I'm perfectly happy with the fighter's extraordinary capability to not die.
#28 Wed, 02/06/2013 - 18:31
But if healing is going to be proportional between the fighter and the wizard, why isn't it proportional between the level 1 fighter and the level 20 fighter?
That's one of the major arguments in favor of the "Black Knight" model. If an arrow to the torso is 8 damage, then healing 8 damage is un-doing a wound equivalent to an arrow to the torso. The
high level fighter can take 20 arrows to the torso, for 160 damage, and each cure light wounds un-does one of those injuries, and cure critical wounds un-does four of them. It's all very
internally consistent.
The trade-off is that you have to imagine a high-level fighter powering on through twenty arrow wounds for ~30 seconds without succumbing to those injuries (adrenaline?) and then you have to
ignore the possibility of bleeding out (but then, it's a pretty safe assumption that tending to your wounds is an immediate priority after combat).
Considering that this is the point where a wizard can cast sky castle, I'm perfectly happy with the fighter's extraordinary capability to not die.
While certainly an entertaining image, it also applies to the Rogue or Wizard who decided to invest in a 20 Con and has 200+ HP by level 20, and that starts making very little sense.
I think it makes far more sense to imagine that the 200 HP fighter getting hit for 8 damage from an arrow manages to avoid most of the impact so that the arrow barely grazes him. The 20 HP fighter
who is hit by the 8 damage arrow would instead have the arrow bite into him more harshly.
#29 Wed, 02/06/2013 - 18:50
Well yeah, that's the trade-off. You can either have internal consistency, or you can have characters grounded more closely to reality. This module is trying to walk a middle path, and solves the
difference with additional complexity. Other solutions involve less scaling of HP, such as removing the Con modifier at every level, or not gaining hit dice past level 10.
#30 Wed, 02/06/2013 - 19:00
Here's what I don't get about cure light/moderate/etc. Higher HP totals are supposed to represent better ability to avoid taking damage: if one guy has twice as many HP as the other, it's
not because he can survive getting his liver skewered twice as many times; it's because he's a more skilled combatant who can dodge and parry and what have you, to the point where the skewer is
only half as deep and in half as vital a location. This is the only way to avoid rapid HP growth turning you into the black knight, and I'm fine with it. This is why it makes sense for
healing to be proportional, so that CLW heals an equal percentage of a fighter's and a wizard's health, because it's the percent that actually measures physical damage (to the extent there is
any physical damage in D&D).
But if healing is going to be proportional between the fighter and the wizard, why isn't it proportional between the level 1 fighter and the level 20 fighter? Either the level 20 fighter is
indeed the black knight, absorbing a quiver full of arrows and a dozen skewered organs before falling, or he's taking minimal to nil physical damage from the 5-10 HP that would have killed
his lesser brethren. If 20 out of 50 HP is the same amount of physical damage as 4 out of 10, why isn't CLW healing 20 for the high level fighter? If a healing spell will heal 20/50 for a
level 5 fighter and 10/25 for a level 5 wizard, why doesn't it heal 20/50 for a level 10 wizard?
The only answer I can come up with is, "because we need 1st level spells to stop being useful to high level clerics." I'm not happy with that answer. Does someone have a better one?
Yes, I do. Think of it this way: a strong orc attacking with a sword does 1d8+4 damage. When you get hit, you take between 5 and 12 damage. It could be that the damage is enough to reduce you
to 0 or lower: you have taken a big hit, enough to drop you. You might also have enough HP that the hit simply takes you a step closer to being knocked down.
Now the cleric heals you with CLW. You regain 1d8+4 hp. No matter how many HP you have, you can now take another hit from that orc.
Note that this works even if the characters were knocked out, and even if they have very different HP totals. The 1st level fighter and 20th level fighter will both be able to take one extra hit
from that orc. The difference between these two fighters is that the level 20 guy can "avoid" a lot more attacks before he goes down. He might be covered in scratches, bruises, small cuts,
etc., but he still reacts as well as an unharmed level 1 fighter.
The trick is to avoid getting caught up in the name of the spell, and to remember that HP are very abstract and incorporate the concept of being hit.
I don't know if you do, but you certainly haven't given it to me yet. Your argument rejects proportionate healing altogether, which is at least consistent even if it doesn't make sense (to me). If
that's how you're thinking of it, then CLW has no business healing more HP on a 1st level fighter than a first level wizard. You're still running into the problem that proportionate healing is
curing 1 orc hit on a level 1 fighter with 12 HP but only half a hit on a level 2 wizard with 12 HP.
As to why it doesn't make sense: an orc hit is not an orc hit is not an orc hit, so it doesn't make sense that CLW cures an orc hit or an orc hit or an orc hit. Put more clearly but less
alliteratively, someone with large amounts of HP takes only a scratch from an orc hit, while someone with only a small amount of HP suffers massive tissue damage and blood loss. Therefore it
doesn't make sense that CLW would heal only the scratch on the high-HP guy but the whole shebang on the low-HP guy. Unless you're arguing that the spell is limited to curing a single injury but
nearly unlimited in the severity of the single injury it can heal (seals up one cut, whether that cut is a paper cut or a gaping, profusely bleeding, seconds-to-live gash)? And then the only
explanation I can come up with for why that is is "because we needed an in-world explanation, however nonsensical, that justifies making 1st level spells stop being useful to high level
clerics." In other words, more or less back where we started.
#31 Wed, 02/06/2013 - 21:13
I know the arguments about the system used before 3rd edition when it came to healing, but why not use a modified version of that and do away with surges?
#32 Wed, 02/06/2013 - 21:20
I would prefer a return to something similar to surges (but call them stamina, because anything associated with 4e is badwrongfun) where each one spent heals for 25% of max HP.
Make in-combat healing exceptionally rare, so that it is not required for a party to succeed. Then have cure spells be one of the only ways to spend stamina mid-combat. Cure light is a level 1
spell and allows you to spend 1 stamina. Cure moderate is a level 3 spell and allows your target to spend 2 stamina. Cure serious is a level 6 spell and allows your target to spend 3 stamina.
Heal is a level 9 spell and allows your target to spend 4 stamina (bringing someone from 0 to full HP).
That would just encourage parties to stockpile those spells and effects until they are needed. Everybody would want a cleric or other healer because it would be the only effective way to get healing
in combat.
I think a better solution is to put Second Wind back in the game as an action, and make combat healing spells, potions and anything else use the character's second wind. Healing spells would then heal
the same amount as second wind (surges, HD or stamina) plus some bonus depending on the spell. That way a healer isn't critical and in-combat healing is limited, but players are not encouraged to stockpile
healing either, and healers are still useful because healing spells are more efficient than second wind.
A few high level spells and effects could be exempted when they are in the range where nobody can cast too many per day. It would also be possible to have non-combat-only healing spells that take 1
minute to cast, and some other variations to give the pure healers more to do.
#33 Wed, 02/06/2013 - 21:23
So it's more believable in game that a rogue, after getting his ass kicked in 3 encounters, can self-heal with no skill in healing at all? I played many old school healers and I never stockpiled
anything. The bonus spells you got for high wisdom helped you balance things at low levels. Plus you are also a partial combat class, so you have many things to do.
#34 Wed, 02/06/2013 - 21:31
Either the level 20 fighter is indeed the black knight, absorbing a quiver full of arrows and a dozen skewered organs before falling, or he's taking minimal to nil physical damage from the 5-10
HP that would have killed his lesser brethren.
Back in 1st edition, when breaking 100 HP was unheard of, yes. Gaining a level often meant gaining a single HP, and a +3 sword was godly. A high level character was heroic, but not to the point
where a few arrows couldn't still kill him.
Newer editions give more HP for a variety of reasons, but the result, which people seem to forget, is that a level 20 black knight can't get hit by an arrow anymore. A level 20 character is not a
character that faces level 1 challenges, because the system is not made to support inconsequential fights. The arrows he gets hit by are heart-seeking-vorpal-of-greater-human-bane +10, fired 3 at a
time by a marilith. He doesn't get hit by 1d4 magic missiles, he is getting hit by 20d6 polar rays.
#35 Thu, 02/07/2013 - 01:28
As stated previously, every edition prior to 4E has allowed the option of viewing HP as wounds.
Not really. I mean, if you see an orc make an attack roll, hit, roll damage, and that means that the arrow he fired struck your character, then how you manage to think your way through that is
absolutely beyond me. On top of that, you're pretty much saying that anytime hit points are healed, it's the medical/magical treatment of wounds, and that doesn't fit with a lot of things in a lot of
editions of D&D - I feel comfortable saying "this simply does not fit with D&D".
In spite of the handwave that HP are supposed to represent multiple factors, most evidence within each edition had shown clearly that every hit created some physical damage.
Until your character gets that one shot that puts him down, it's supposed to be generally superficial damage.
Except for 4E, where you could just "spend a Healing Surge" with no justification - even in the middle of combat - and suddenly you're up to full.
I have never understood how "grit your teeth and bear thru the pain because you need to do this to be the big damn hero" is such a strange, mystical, no-reason thing, when you see it all the time in
movies and books and comic books and anime and videogames, and seriously this trope confuses you!? More importantly, this is not what healing surges are. You can have a Second Wind mechanic with or
without healing surges. You can have healing surges with or without a Second Wind mechanic.
If the whole point of choosing a system is to play one which allows you to tell the kinds of stories you want, while deviating as little as possible from what you would expect to happen within
the confines of that genre, then yes, I don't like Healing Surges because they conflict with the way I choose to view the game.
I expect to tell a story of heroic fantasy. The confines of that genre easily support people not being able to go any further in a day, being too exhausted, even with the presence of magic.
There's nothing mechanically wrong with surges. It's an aesthetic preference. And that's just as valid as a mechanical complaint.
And all game preferences should be supported. I'm all for a module that removes surges entirely, gritty or not. Sure. But the question is more "should the core rules have them" for the game style
that fits D&D most appropriately, is easiest to introduce new players to, and is easiest to handle mechanically.
I know the arguments about the system used before 3rd edition when it came to healing, but why not use a modified version of that and do away with surges?
I'm not understanding your point. Why do away with surges? Because you want a modified older version of the rules? Why are those rules better? Feel free to elaborate.
So it's more believable in game that a rogue, after getting his ass kicked in 3 encounters, can self-heal with no skill in healing at all?
Hit points are not about how many times an arrow struck you, how many times you got stabbed with a sword, or how badly you were burned by that fireball. Hit points are about how capable you are of
moving forward despite the minor injuries you've sustained. This is what hit points have always been about, and it's been explicitly explained that this is how the rules work in every edition since 2e.
#36 Thu, 02/07/2013 - 02:49
I don't like healing surges. But it's a great example of a modular system because it can be added on without disrupting anything in the core.
I like dice and randomness. I'm not gonna defend my position 'cause I think the argument is neverending. If you disagree and want to make it less random then I respect that and I want you to have
your module. I don't want to use it, personally.
I like HP as actual wounds too. Again, not gonna defend that. I find that narratively it just makes more sense to me. If you don't then that's great and I respect that. I don't need a module to
describe HP as actual wounds any more than someone who likes HP as a function of morale needs a module to describe it the way they want. HP works both ways which is why it's still being used.
I like my healing to be magic only or naturally aided bed rest. Martial healers are totally fine for some but not for me. I don't see any reason why we need a module for either point of view; whether
something's justified by magic or not is a matter of narration and metagame. My warlord will use magic of some kind to heal people. Sorry if that offends you but that's what I like. If that offends
you, take comfort in the fact that your use of martial healing without magic doesn't particularly offend me.
This module - it's interesting. I've been staring at it trying to figure out why I'm not sold on it. I'm not and I don't see myself using it but I can't actually articulate why. I think probably I
just like random.
#37 Thu, 02/07/2013 - 02:53
I like this. I would also use it for self-healing, allowing a number of times per day for a wounded character to pull himself up to the next hp stage.
Way better than HD, which no kind of player seems to like anyway - but which the designers are determined to shoehorn into the game for no other reason than it having a 'classic' feel to it.
#38 Thu, 02/07/2013 - 03:21
If you want proportional healing, Next already has a mechanic in place: hit dice. Just change the healing spells to the following:
Cure Light Wounds: The target regains hit points as if they had spent a hit die.
Cure Moderate Wounds: The target regains hit points as if they had spent two hit dice.
If you want, give each spell a +4 or +8 to the amount healed (just like the current versions).
But honestly, I'm not sure why proportional healing is so important. The high HP characters already have an advantage: they have more hit points. I don't see why they also need to recover more
HP from magical healing. The way I look at it is that damage isn't proportional, and so healing shouldn't be either. When a goblin hits you, you take 1d6-1 damage regardless of how many hit
points you have.
I don't think you actually understand what 4e players mean when we say "healing surges were proportional healing"; heck, I don't even think WotC get it, for that matter.
We're not just talking about differences between classes, we're also talking about differences between levels. Cure light wounds in 4e healed you 1 surge worth of HP. That means a level 1 fighter
regains 25% of their HP, a level 15 cleric regains 25% of their HP, and guess what, a level 30 barbarian regains 25% of their HP.
Now, look at what you suggested (or even what the current spells do). Wow, Mr magic toes casts cure light wounds and heals me 1 hit die. Oops, I'm level 20 and have 20 hit dice + 20×Con mod worth of
HP. Mr magic toes just cured my paper cut.
If the healing spells don't scale with level then they get progressively more worthless as you gain levels. This also means that any feature which gives you the effect of a spell (such as the feat
which gives you cure light wounds, or the cleric domain which means you always have it prepared) is worthless.
It's a problem that goes beyond healing and is rooted in their decision to scale the number of spell slots you get rather than the damage/healing values of spells as you gain levels. +X to hit/AC
scales with level, +X HP or X damage doesn't. By level 20 none of the low level damage/healing spells will be worth it.
It's kind of weird that they seem to have realised this with cantrips as they now just scale with level but regular spells only scale with spell slot. I mean, who's going to fill their level 2 wizard
slots with melf's acid arrow when a cantrip does the same damage and you can prepare another knock/invisibility/whatever.
If you really hate healing surges and love hit dice then healing spells need to cure a number of hit dice proportional to your level.
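For anyone who wants to see the scaling argument in numbers, here is a rough sketch (the HP figures are assumed: a d10 class with no Con bonus, so roughly 10 max HP per level):

    # Fraction of max HP restored by one cast: flat-amount healing
    # (e.g. 1d8+4, average 8.5) vs. a 4e-style surge (always 25%).
    def flat_fraction(max_hp, avg_heal=8.5):
        return avg_heal / max_hp

    def surge_fraction(max_hp):
        return 0.25                          # 25% of max HP by definition

    for level in (1, 5, 10, 20):
        max_hp = 10 * level                  # assumed: d10 class, no Con bonus
        print(f"level {level:2}: flat {flat_fraction(max_hp):5.1%}, surge {surge_fraction(max_hp):.0%}")

At level 1 the flat cure restores 85% of max HP; by level 20 it is down to about 4%, which is the "paper cut" effect described above, while the surge stays at 25% throughout.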
#39 Thu, 02/07/2013 - 05:46
If you really hate healing surges and love hit dice then healing spells need to cure a number of hit dice proportional to your level.
Seconded. As they are right now HD are just a second health pool per day.
I'd rather they be stripped entirely out of Basic and Standard, and instead have a system along the lines of what Wrecan or Lawolf are suggesting here in Advanced.
Another idea I'd like to explore as an option would be to have healing magic consume 'surges' when cast normally, but be surge-free when cast as a ritual.
#40 Thu, 02/07/2013 - 06:06
I like hit dice as a gesture. The name sucks and it's not powerful enough to rely on but it's a buffer.
If the system feels insipid and unviable, that's because it is. And it's supposed to be substandard because it's meant to reflect how well we can recover on our own without any help (not very well by
their estimation). As such, modifying it to be more or less generous or even taking it away is totally fine.
I like everyone having a minor mechanism to self heal that reflects the fact that we can't really heal anything resembling a serious injury. We'll recover from minor cuts and bruises but the big
stuff takes magic or days of bed rest.
For my purposes, HD works better than surges ever did.
#41 Thu, 02/07/2013 - 06:22
You can have proportional healing and scaling spells if you use hit dice, versus having healing spells use a separate dice mechanic like a d8. As to a wound system that may include a bloodied
condition, I recommend it be addressed as a separate set of rules.
Homework Help
Posted by Anonymous on Wednesday, March 16, 2011 at 9:38pm.
Translate the following situation into an inequality. Do not solve.
“Charlie would like to pack one less outfit than Jane for their trip. Loretta would like to bring five outfits less than twice what Jane packs. Due to space in the suitcase, they are limited to a
total of 24 outfits. How many outfits can they each pack?”
• math - bobpursley, Wednesday, March 16, 2011 at 9:44pm
so C is 6
and L is 9
total 22 check that.
• math - Alyssa, Wednesday, March 16, 2011 at 9:47pm
use X for the amount of outfits Jane has.
you know:
-that Charlie packs 1 less than Jane (x-1)
-that Loretta packs 5 less than twice Jane's (2x-5)
-that the total can't be more than 24, so set the sum less than or equal to 24
and then solve :)
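• math - a worked follow-up (added sketch, not part of the original thread): using Alyssa's setup with x = Jane's outfits,
    x + (x - 1) + (2x - 5) <= 24
    4x - 6 <= 24
    x <= 7.5, so Jane can pack at most 7 outfits.
With x = 7, Charlie packs 6 and Loretta packs 2(7) - 5 = 9, for a total of 7 + 6 + 9 = 22 <= 24, which matches bobpursley's check above (C is 6, L is 9, total 22).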
Robert H.
Hi, my name is Rob and I'm a recent Magna Cum Laude graduate from George Mason University with a BS in Mathematics and a BA in English. I currently work as a financial analyst, where I keep my math
skills sharp.
I have years of experience tutoring students from all high school grade levels and have always made positive and lasting impacts on my students' grades.
My tutoring strategy is to help the student understand the WHY in math instead of just the WHAT, so that they understand the reasoning behind the procedures, and they'll be able to carry the
knowledge on with them to harder math classes.
While I have predominantly tutored in math, I'm also qualified to tutor all aspects of English.
Structure, structure and more structure
I was expecting to write about a paper I found recently by Oran Magal, a postdoc at McGill University, "On the mathematical nature of logic." I was attracted to the paper because the title was
followed by the phrase "Featuring P. Bernays and K. Gödel."
I’m often intrigued by disputes over whether mathematics can be reduced to logic or whether logic is, in fact, mathematics, because these disputes often remind me of questions addressed by cognitive
science, questions related to how the mind uses abstraction to build meaning. This particular paper acknowledges, in the end, that its purpose is two-fold. It makes the philosophical argument that an
examination of the interrelationship between mathematics and logic shows that “a central characteristic of each has an essential role within the other.” But the paper is also a historical
reconstruction and analysis of the ideas presented by Bernays, Hilbert and Gödel (the detail of which is not particularly relevant to my concerns). It was Bernays’ perspective that I was most
interested in pursuing.
Magal begins with the observation that
the relationship between logic and mathematics is especially close, closer than between logic and any other discipline, since the very language of logic is arguably designed to capture the
conceptual structure of what we express and prove in mathematics.
While some have seen logic as more general than mathematics, there has also been the view that mathematics is more general than logic. It is here that Magal introduces Bernays’ idea that logic and
mathematics are equally abstract but in different directions. And so they cannot be derived one from the other but must be developed side-by-side. When logic is stripped of content it becomes the
study of inference, of things like negation and implication. But while logical abstraction leaves the logical terms constant, according to Bernays, mathematical abstraction leaves structural
properties constant. These structural properties do seem to be the content of mathematics, and what makes mathematics so powerful.
Magal describes how Bernays understands Hilbert’s axiomatic treatment of geometry. Here, the purely mathematical part of knowledge is separated from geometry (where geometry is thought of as the
science of spatial figures) and is then investigated directly.
The spatial relationships are, as it were, mapped into the sphere of the abstract mathematical in which the structure of their interconnections appears as an object of pure mathematical thought.
This structure is subjected to a mode of investigation that concentrates only on the logical relations and is indifferent to the question of the factual truth, that is, the question whether the
geometrical connections determined by the axioms are found in reality (or even in our spatial intuition). (Bernays, 1922a, p. 192) (emphasis added)
Magal then uses abstract algebra to illustrate the point:
To understand Bernays’ point, that this is a structural direction of abstraction, and the sense in which this is a mathematical treatment of logic, it is useful to compare this to abstract
algebra. The algebra familiar to everyone from our school days abstracts away from particular calculations, and discusses the rules that hold generally (the invariants, in mathematical
terminology) while the variable letters are allowed to stand for any numbers whatsoever. Abstract algebra goes further, and ‘forgets’ not just which number the variables stand for, but also what
the basic operations standardly mean. The sign ‘+’ need not necessarily stand for addition. Rather, the sign ‘+’ stands for anything which obeys a few rules; for example, the rule that a+ b= b+
a, that a+ 0= a, and so on. Remember that the symbol ‘a’ need not stand for a number, and the numeral ‘0’ need not stand for the number zero, merely for something that plays the same role with
respect to the symbol ‘+ ’ that zero plays with respect to addition. By following this sort of reasoning, one arrives at an abstract algebra; a mathematical study of what happens when the formal
rules are held invariant, but the meaning of the signs is deliberately ‘forgotten’. This leads to the study of general structures such as groups, rings, and fields, with immensely broad
applicability in mathematics, not restricted to operations on numbers.
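The same act of 'forgetting' is easy to see in code. Here is a small illustrative sketch (my own, not Magal's or Bernays'): a fold that assumes only the structural rules, an operation and its identity, and never needs to know what '+' and '0' actually mean:

    # One function, many "additions": only the structure is held invariant.
    from functools import reduce

    def combine(items, op, identity):
        return reduce(op, items, identity)

    print(combine([1, 2, 3], lambda a, b: a + b, 0))          # 6: numbers under +
    print(combine(["ab", "cd"], lambda a, b: a + b, ""))      # 'abcd': strings under concatenation
    print(combine([{1}, {2, 3}], lambda a, b: a | b, set()))  # {1, 2, 3}: sets under union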
Again the key to the discussion is the question of content. When mathematics is viewed as a variant of logic it could easily be judged to have no specific content. The various arguments presented are
complex, and not everyone writes with respect to the same logic. But the consistency of Bernays’ argument is most interesting to me. He is very clear on the question of content in mathematics. And
reading this sent me back to another of his essays, where he is responding to Wittgenstein’s thoughts on the foundations of mathematics in 1959. Here he challenges Wittgenstein’s view with the
nothingness of color.
Where, however, does the initial conviction of Wittgenstein’s arise that in the region of mathematics there is no proper knowledge about objects, but that everything here can only be techniques,
standards and customary attitudes, He certainly reasons: `There is nothing here at all to which knowing could refer.’ That is bound up, as already mentioned, with the circumstance that he does
not recognize any kind of phenomenology. What probably induces his opposition here are such phrases as the one which refers to the `essence’ of a colour; here the word `essence’ evokes the idea
of hidden properties of the color, whereas colors as such are nothing other than what is evident in their manifest properties and relations. But this does not prevent such properties and
relations from being the content of objective statements; colors are not just a nothing….That in the region of colors and sounds the phenomenological investigation is still in its beginnings, is
certainly bound up with the fact that it has no great importance for theoretical physics, since in physics we are induced, at an early stage, to eliminate colors and sounds as qualities.
Mathematics, however, can be regarded as the theoretical phenomenology of structures. In fact, what contrasts phenomenologically with the qualitative is not the quantitative, as is taught by
traditional philosophy, but the structural, i.e. the forms of being aside and after, and of being composite, etc., with all the concepts and laws that relate to them. (emphasis added)
Near the end of the essay he makes a reference to the Leibnizian conception of the characteristica universalis which, Bernays says, was intended “to establish a concept-world which would make possible
an understanding of all connections existing in reality.” This dream of Leibniz's (which it seems Gödel thought feasible) is probably the subject of another blog post. But in closing I would make the
following remarks:
Cognitive scientists have found that abstraction is fundamental to how the body builds meaning or brings structure to its world. This is true in visual processes where we find cells in the visual
system that respond only to things like verticality, and it is seen in studies that show that a child’s maturing awareness seems to begin with simple abstractions. Mathematics is the powerful enigma
that it is because it cuts right into the heart of how we see and how we find meaning.
I hope you will be interested in our articles:
136. (with T. Porter), `Category theory and higher dimensional algebra: potential descriptive tools in neuroscience', Proceedings of the International Conference on Theoretical Neurobiology, Delhi, February 2003, edited by Nandini Singh, National Brain Research Centre, Conference Proceedings 1 (2003) 80-92. arXiv:math/0306223
146. (with T. Porter), `Category Theory: an abstract setting for analogy and comparison', in: What is Category Theory? Advanced Studies in Mathematics and Logic, Polimetrica Publisher, Italy, (2006) 257-274.
Both are available as pdfs from my Publications list, as they, and others there, perhaps, seem relevant to your arguments.
• Thank you! I am interested and will look at them.
Notions of flatness relative to a Grothendieck topology
Panagis Karazeris
Completions of (small) categories under certain kinds of colimits and exactness conditions have been studied extensively in the literature. When the category that we complete is not left exact but
has some weaker kind of limit for finite diagrams, the universal property of the completion is usually stated with respect to functors that enjoy a property reminiscent of flatness. In this fashion
notions like that of a left covering or a multilimit merging functor have appeared in the literature. We show here that such notions coincide with flatness when the latter is interpreted relative to
(the internal logic of) a site structure associated to the target category. We exploit this in order to show that the left Kan extensions of such functors, along the inclusion of their domain into
its completion, are left exact. This gives in a very economical and uniform manner the universal property of such completions. Our result relies heavily on some unpublished work of A. Kock from 1989.
We further apply this to give a pretopos completion process for small categories having a weak finite limit property.
Keywords: flat functor, postulated colimit, geometric logic, exact completion, pretopos completion, left exact Kan extension
2000 MSC: 18A35, 03G30, 18F10
Theory and Applications of Categories, Vol. 12, 2004, No. 5, pp 225-236.
TAC Home
minimum value of N
February 11th 2013, 10:55 PM #1
Junior Member
Sep 2012
minimum value of N
If we pick N composite numbers between 1 and 1000, then we can find 2 numbers whose hcf is not 1. Find the minimum value of N.
February 12th 2013, 07:57 AM #2
MHF Contributor
Oct 2009
Re: minimum value of N
It seems that in the worst case (which maximizes N) each of the N numbers has exactly two prime factors, and these pairs are disjoint for different numbers.
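A follow-up sketch (mine, not from the thread): any family of pairwise-coprime composites up to 1000 must give each member its own prime factor below √1000 < 32, and there are only 11 primes up to 31; squares of those primes already achieve the bound, so at most 11 composites in 1..1000 can be pairwise coprime. A brute-force check in Python:

    # Greedily build a largest pairwise-coprime set of composites in 4..1000.
    from math import gcd, isqrt

    def is_composite(n):
        return any(n % p == 0 for p in range(2, isqrt(n) + 1))

    chosen = []
    for n in range(4, 1001):
        if is_composite(n) and all(gcd(n, m) == 1 for m in chosen):
            chosen.append(n)

    print(len(chosen), chosen)
    # 11 [4, 9, 25, 49, 121, 169, 289, 361, 529, 841, 961]: the prime squares

So 11 composites can be pairwise coprime, and any 12 must contain two with hcf greater than 1: the minimum value of N is 12.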
Fibonacci numbers
From Encyclopedia of Mathematics
The elements of the sequence $u_1, u_2, \ldots$ defined by the initial values $u_1 = u_2 = 1$ and the recurrence relation $u_{n+1} = u_n + u_{n-1}$: that is, $1, 1, 2, 3, 5, 8, 13, 21, \ldots$
Operations that can be performed on the indices of the Fibonacci numbers can be reduced to operations on the numbers themselves. The basis for this lies in the "addition formula":
$$u_{n+m} = u_{n-1} u_m + u_n u_{m+1}.$$
Immediate corollaries of it are:
$$u_{2n} = u_n (u_{n-1} + u_{n+1}), \qquad u_{3n} = u_{n+1}^3 + u_n^3 - u_{n-1}^3,$$
etc. The general "multiplication formula", expressing $u_{nm}$ through $u_m$ and terms near $u_n$, is more complicated.
The elementary divisibility properties of the Fibonacci numbers are mainly determined by the following facts: $u_n$ divides $u_m$ whenever $n$ divides $m$, and $\gcd(u_m, u_n) = u_{\gcd(m,n)}$.
An important role in the theory of Fibonacci numbers is played by the number $\alpha = (1 + \sqrt{5})/2$, the golden ratio, for which Binet's formula
$$u_n = \frac{\alpha^n - \beta^n}{\sqrt{5}}, \qquad \beta = \frac{1 - \sqrt{5}}{2},$$
holds; it implies that $u_n$ is the integer nearest to $\alpha^n / \sqrt{5}$.
The Fibonacci numbers occupy a special position in the theory of continued fractions. In the continued-fraction expansion of $\alpha = [1; 1, 1, 1, \ldots]$, the convergents are the ratios $u_{n+1}/u_n$ of successive Fibonacci numbers.
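The addition formula is also computationally useful. Setting $m = n$ and $m = n + 1$ gives the doubling identities $u_{2n} = u_n(2u_{n+1} - u_n)$ and $u_{2n+1} = u_n^2 + u_{n+1}^2$, which compute $u_n$ in $O(\log n)$ steps. A small sketch (my addition, not part of the original article; it uses the common convention $F_0 = 0$, $F_1 = 1$, shifted by one from the indexing above):

    # Fast doubling via the addition-formula corollaries (F(0) = 0, F(1) = 1).
    def fib_pair(n):
        """Return (F(n), F(n+1))."""
        if n == 0:
            return (0, 1)
        a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1) with k = n // 2
        c = a * (2 * b - a)          # F(2k)
        d = a * a + b * b            # F(2k+1)
        return (d, c + d) if n % 2 else (c, d)

    print(fib_pair(10)[0])   # 55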
[1] B. Boncompagni, "Il Liber Abbaci di Leonardo Pisano", Rome (1857)
[2] N.N. Vorob'ev, "Fibonacci numbers", Moscow (1984) (In Russian)
[3] V.E. Hoggatt, "Fibonacci and Lucas numbers", Univ. Santa Clara (1969)
[4] U. Alfred (or A. Brousseau), "An introduction to Fibonacci discovery", San José, CA (1965)
[5] Fibonacci Quart. (1963-)
Still more generally, a sequence (that is, an arithmetic function) is said to be recurrent of order $k$ if each of its terms is determined by the $k$ terms immediately preceding it; the Fibonacci numbers, with $k = 2$, are the classical example.
For some more results on Fibonacci numbers, Lucas numbers and recurrent sequences, as well as for their manifold applications, cf. also [a1].
[a1] A.N. Phillipou (ed.) G.E. Bergum (ed.) A.F. Horodam (ed.) , Fibonacci numbers and their applications , Reidel (1986)
How to Cite This Entry:
Fibonacci numbers. N.N. Vorob'ev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Fibonacci_numbers&oldid=12076
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
BisMag Calculator 3D
"BisMag Calculator 3D" is a powerful math tool for Android consisting of 5 calculators. "Matrix Calculator", a tool to calculate decompositions and various operations on matrices; "Equation Solver",
an instrument capable of solving equations of degree n; "Graphing Calculator", a real scientific graphing calculator that can draw graphs in 2D and 3D; "Currency Converter", a currency converter
always updated with the new exchange rates; and "Unit Converter", a small units converter. In addition there is a handy Periodic Table of the Elements.
Features in detail:
- Matrix Calculator:
--Derived Quantities
Cholesky decomposition
LU decomposition whit pivoting
QR decomposition
SVD - Singular Values decomposition
Eigenvalues - Eigenvectors
--Linear Systems
Linear Systems M×N
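For reference, the decompositions this mode lists are the standard ones; a quick sketch of what each computes, using NumPy/SciPy (my illustration, unrelated to the app's own code):

    import numpy as np
    from scipy.linalg import lu, cholesky

    A = np.array([[4.0, 2.0], [2.0, 3.0]])        # symmetric positive definite

    L = cholesky(A, lower=True)                   # Cholesky: A = L @ L.T
    P, Lw, U = lu(A)                              # LU with pivoting: A = P @ Lw @ U
    Q, R = np.linalg.qr(A)                        # QR: A = Q @ R
    U2, s, Vt = np.linalg.svd(A)                  # SVD: singular values in s
    w, v = np.linalg.eig(A)                       # eigenvalues w, eigenvectors v
    x = np.linalg.solve(A, np.array([1.0, 2.0]))  # linear system A @ x = b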
- Graphing Calculator:
--Sample expressions
variable evaluation: pi
function evaluation: sin(0)
variable definition: a=3.5
function definition: f(x)=x^2-1
parentheses: (1-x)^2
--Logarithms and power
sqrt(x): square root; x^0.5
cbrt(x): cube root; x^(1/3)
exp(x): exponential; e^x
log(x), ln(x): natural logarithm
log2(x), lb(x): binary logarithm
log10(x), lg(x): decimal logarithm
log(base,x): arbitrary base logarithm
--Trigonometric - radians
sin(x), cos(x), tan(x)
asin(x), acos(x), atan(x)
--Trigonometric - degrees
sind(x), cosd(x), tand(x)
asind(x), acosd(x), atand(x)
sinh(x), cosh(x), tanh(x)
asinh(x), acosh(x), atanh(x)
gcd(x,y): greatest common divisor
comb(n,k): combinations
perm(n,k): permutations
min(x,y), max(x,y)
floor(x), ceil(x)
abs(x): absolute value
sign(x): signum
rnd(): random value from [0,1). rnd(max): random value from [0, max).
gamma(x): (x-1)!
mod(x,y): modulo
--Complex numbers
i or j is the complex base.
+ - × ÷ basic arithmetic
^ power
% percent
! factorial
# modulo
√ square root
' first derivative
--Binary, octal, hexadecimal
Values can be entered in binary, octal or hexadecimal by prefixing them with 0b, 0o or 0x respectively; results are shown in decimal.
binary: 0b1010
octal: 0o17
hexa: 0x100
It is possible to compute the first derivative of a function with one argument using the prime notation: log'(5).
The prime mark (quote) must appear immediately after the name of the function, and must be followed by open-parentheses.
The derivative may be plotted e.g. sqrt'(x).
To compute the derivative of an expression you must define the expression as a named function.
E.g. f(x) = x^2 + x; after inserting it, type f'(x) to display the graph of the derivative.
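Presumably the prime notation evaluates a numerical derivative; the idea is something like the central difference below (a sketch of the concept, not the app's actual implementation):

    # Central-difference approximation to f'(x).
    def numderiv(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2 * h)

    f = lambda x: x**2 + x      # the f(x) = x^2 + x example above
    print(numderiv(f, 3.0))     # ~7.0, since f'(x) = 2x + 1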
--Multi plot
To plot multiple functions on the same 2d graph, simply enter them on the same line separated by ";".
--Special Function
Indefinite Integrals
Limit of a function /*Still in beta*/
Definite Integrals
Taylor Series
Tangent Line
Arc Length & Surface Area
Formula Tables
--Graph (MultiPlot)
Graph of parametric functions
Graph in polar coordinates
MultiPlot 3D
-- Widget Calculator
Say about us:
scientific calculator, grapher, graphic calculator, integration, derivative, mathematica, matlab, complex numbers, plotting, graph plot, plotter, calculation, symbolic, graphing, study of function,
derive, arity, symja
Really useful. Could use left and right tab keys for easier editing. Otherwise a superior, comprehensive calculator.
A great graphing calculator. I got this calculator for the purpose of graphing in 3D. The graphing is quick and accurate.
Great app. Lots of fun, easy to use.
Bugs. Bugs in matrices, limits and polar plots. Additional problems still turning up.
So far so good, exactly what I wanted, all bundled into one.
Good app. I have been looking for an app like this for a long time, and I finally found it, just for $1.99.
What's New
- App completely revised and improved
- Added support for multiple screens
- Improved the visibility of the keyboard in "graphing calculator" for screens with dpi > 240
- View graph also in landscape
- Added multi plot 3d graph, polar coordinate graph, parametric coordinate graph
- Added derivative, definite integrals, arc length & surface area, taylor series.
- Added Indefinite Integrals and Limit calculator
- Periodic Table
- Widget
- Keyboard improved
- Improved Formula Tables
- Bugs fixed
A calculator with 10 computing modes in one application + a handy scientific reference facility - different modes allow: 1) basic arithmetic (both decimals and fractions), 2) scientific calculations,
3) hex, oct & bin format calculations, 4) graphing applications, 5) matrices, 6) complex numbers, 7) quick formulas (including the ability to create custom formulas), 8) quick conversions, 9) solving
algebraic equations & 10) time calculations.
Please note that internet permission is needed to allow access to currency exchange rates in the conversion function
Functions include:
* General Arithmetic Functions
* Trigonometric Functions - radians, degrees & gradients - including hyperbolic option
* Power & Root Functions
* Log Functions
* Modulus Function
* Random Number Functions
* Permutations (nPr) & Combinations (nCr)
* Highest Common Factor & Lowest Common Multiple
* Statistics Functions - Statistics Summary (returns the count (n), sum, product, sum of squares, minimum, maximum, median, mean, geometric mean, variance, coefficient of variation & standard
deviation of a series of numbers), Bessel Functions, Beta Function, Beta Probability Density, Binomial Distribution, Chi-Squared Distribution, Confidence Interval, Digamma Function, Error Function,
Exponential Density, Fisher F Density, Gamma Function, Gamma Probability Density, Hypergeometric Distribution, Normal Distribution, Poisson Distribution, Student T-Density & Weibull Distribution
* Conversion Functions - covers all common units for distance, area, volume, weight, density, speed, pressure, energy, power, frequency, magnetic flux density, dynamic viscosity, temperature, heat
transfer coefficient, time, angles, data size, fuel efficiency & exchange rates
* Constants - a wide range of inbuilt constants listed in 4 categories:
1) Physical & Astronomical Constants - press to include into a calculation or long press for more information on the constant and its relationship to other constants
2) Periodic Table - a full listing of the periodic table - press to input an element's atomic mass into a calculation or long press for more information on the chosen element - the app also includes
a clickable, pictorial representation of the periodic table
3) Solar System - press to input a planet's orbit distance into a calculation or long press for more information on the chosen planet
4) My Constants - a set of personal constants that can be added via the History
* Convert between hex, oct, bin & dec
* AND, OR, XOR, NOT, NAND, NOR & XNOR Functions
* Left Hand & Right Hand Shift
* Plotter with a table also available together with the graph
* Complex numbers in Cartesian, Polar or Euler Identity format
* The main screen of the calculator can also be set to Fractions Mode for general arithmetic functions including use of parentheses, squares, cubes and their roots
* 20 Memory Registers in each of the calculation modes
* A complete record of each calculation is stored in the calculation history, the result of which can be used in future calculations
An extensive help facility is available which also includes some useful scientific reference sections covering names in the metric system, useful mathematical formulas and a detailed listing of
physical laws containing a brief description of each law.
A default screen layout is available for each function showing all buttons on one screen or, alternatively, all the functions are also available on a range of scrollable layouts which are more
suitable for small screens - output can be set to scroll either vertically (the default) or horizontally as preferred – output font size can be increased or decreased by long pressing the + or - keys.
A full range of settings allow easy customisation - move to SD for 2.2+ users
Please email any questions that are not answered in the help section or any requests for bug fixes, changes or extensions regarding the functions of the calculator - glad to help wherever possible.
As a powerful emulator of HP 15C Scientific Calculator, Vicinno 15C Scientific Calculator provides all functions of the world-renowned HP 15C RPN high-end scientific programmable calculator for
Android. It uses the identical mathematics and calculations with the original to give you the same precision and accuracy.
It can perform numerical integration and solve the roots of equations in addition to supporting complex numbers and matrix calculations.
**** Cool Tip: click the upper right logo to see the settings page for more options. ****
**** Please note: the f (i) in our app works a little differently from the real calculator: instead of hold and release, just click f (i); the display will show the imaginary part for a second, then
switch to the real part automatically. *****
★ Features include:
• Root finder
• Numeric integration
• Complex numbers
• Matrix operations
• Hyperbolic and inverse hyperbolic trig functions
• Probability (combinations and permutations)
• Factorial, % change, and absolute value
• Random number generator
• RPN entry
• Programmable
• enable/disable key click sound
• Comma as decimal point option
• Automatically save/restore settings
• Touch logo to see all settings
• Direct access to support forum from app
• Support Android Tablet
• More
★ Support:
Feel free to contact us at support@vicinno.com.
★ Stay tuned:
Like us: www.facebook.com/vicinno
Follow us: www.twitter.com/vicinno
This is a scientific calculator for mathematics fans with widely used mathematical functions like square, square root, cube, cube root, sin, cos, tan,sinh, cosh, tanh, ncr, npr, permutation and more.
This is an advertisement free application with a low price. Please report any improvements required in future releases to
Kal Pro allows you to create and store all desired formulas, contains no advertising, plus enjoy a better experience because it contains more space for the keyboard.
A scientific calculator with all the usual functions, easy to use. It includes a screen for typing expressions of unlimited length, supports parentheses and operator precedence, and shows the result
on the second line of the display. Operations can be modified or corrected.
Kal Scientific calculator Features:
* Allows function graphs
* New functionality (FML), 100 built-in formulas.
* Typical operations (add, subtract, multiply, divide).
* Functions powers (nth power, nth root, squaring, square root, cube root)
* Functions logarithmic (log10, ln, powers of 10, exp)
* Trigonometric functions (sin, cos, tan, including inverse and hyperbolic)
* Three angle modes (DEG, RAD, GRA)
* Random number generator
* PI
* Permutations (nPr) and combinations (nCr)
* Absolute value, factorial.
* Allows numbers in scientific notation.
* Includes major scientific constants.
* Basic operations and converting number systems (decimal, hexadecimal, octal, binary)
* Set decimal number.
* History of the last 10 operations and results.
* Memory storage results
statistical mode
* Standard deviation
* arithmetic mean
* Sum of values
The Panecal is an editable-expression scientific calculator. The Panecal can show expressions on a multi-line display, allowing you to prevent input mistakes. In addition, expressions can easily be
modified by moving the cursor on the display.
* Re-editable and re-callable expressions
* Result and expressions history
* Decimal, Binary, octal, and hexadecimal
* Base conversions
* Main memory and 6 variable memories
* Percentages
* Arithmetic, trigonometric, inverse trigonometric, exponential, logarithmic function, power, power root function, factorial, and absolute value.
* DEG, RAD, GRAD modes.
* Floating-point, Fixed-point, Scientific and Engineering display modes.
* Configurable decimal separator and grouping separator
* Configurable number of bits for base conversions
* BS key, DEL key, INS key.
* Landscape mode
* Key input confirmation by vibration and orange colors
[System environment]
Android OS 2.1 to 2.3.x
Android OS 3.x, 4.x
APPSYS does not accept responsibility for any loss which may arise from reliance on the software or materials published on this site.
Calculator++ is an advanced, modern and easy to use scientific calculator #1.
Calculator++ helps you to do basic and advanced calculations on your mobile device.
Discuss Calculator++ on Facebook: http://facebook.com/calculatorpp
1. Always check angle units and numeral bases: trigonometric functions, integration and complex number computation work only for RAD!!!
2. Application contains ads! If you want to remove them purchase special option from application settings. Internet access permission is needed only for showing the ads. ADS ARE ONLY SHOWN ON THE
SECONDARY SCREENS! If internet is off - there are no ads!
++ easy to use
++ home screen widget
+ no need to press equals button any more - the result is calculated automatically
+ smart cursor positioning
+ copy/paste in one button
+ landscape/portrait orientations
++ drag buttons up or down to use special functions, operators etc
++ modern interface with possibility to choose themes
+ highlighting of expressions
+ history with all previous calculations and undo/redo buttons
++ variables and constants support (build-in and user defined)
++ complex number computations
+ support for a huge variety of functions
++ expression simplification: use 'identical to' sign (≡) to simplify current expression (2/3+5/9≡11/9, √(8)≡2√(2))
+ support for Android 1.6 and higher
+ open source
NOTE ABOUT INTERNET ACCESS: Calculator++ (version 1.2.24) contains advertisement which requires internet access. To get rid of it - purchase a version without ads (this can be done from the application's settings).
How can I get rid of the ads?
You can do it by purchasing the special option in the main application preferences.
Why Calculator++ needs INTERNET permission?
Currently application needs such permission only for one purpose - to show ads. If you buy the special option C++ will never use your internet connection.
How can I use functions written in the top right and bottom right corners of the button?
Push the button and slide lightly up or down. Depending on value showed on the button action will occur.
How can I toggle between radians and degrees?
To toggle between different angle units you can either change appropriate option in application settings or use the toggle switch located on the 6 button (current value is lighted with yellow color).
Also you can use deg() and rad() functions and ° operator to convert degrees to radians and vice versa.
268° = 4.67748
30.21° = 0.52726
rad(30, 21, 0) = 0.52726
deg(4.67748) = 268
Does C++ support %?
Yes, % function can be found in the top right corner of / button.
100 + 50% = 150
100 * 50% = 50
100 + 100 * 50% * 50% = 125
100 + (100 * 50% * (25 + 25)% + 100%) = 150
100 + (20 + 20)% = 140, but 100+ (20% + 20%) = 124.0
100 + 50% ^ 2 = 2600, but 100 + 50 ^ 2% = 101.08
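A rough model of the two basic cases, inferred from the examples above (only a guess at the semantics; the precedence cases are clearly subtler):

    # Assumed semantics: "a + b%" adds b percent OF a; "a * b%" multiplies by b/100.
    def add_percent(a, b):
        return a + a * b / 100

    def mul_percent(a, b):
        return a * b / 100

    print(add_percent(100, 50))   # 150.0
    print(mul_percent(100, 50))   # 50.0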
Does C++ support fractional calculations?
Yes, you can type your fractional expression in the editor and use ≡ (in the top right corner of = button). Also you can use ≡ to simplify expression.
2/3 + 5/9 ≡ 11/9
2/9 + 3/123 ≡ 91/369
(6-t) ^ 3 ≡ 216 - 108t + 18t ^ 2 - t ^ 3
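The numeric simplifications above can be reproduced with Python's fractions module (my check, not part of the app):

    from fractions import Fraction

    print(Fraction(2, 3) + Fraction(5, 9))     # 11/9
    print(Fraction(2, 9) + Fraction(3, 123))   # 91/369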
Does C++ support complex calculations?
Yes, just enter complex expression (using i or √(-1) as imaginary number). ONLY IN RAD MODE!
(2i + 1) ^ 2 = -3 + 4i
e ^ i = 0.5403 + 0.84147i
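Both examples check out in any complex arithmetic system; for instance, in Python (my check, not part of the app):

    import cmath

    print((1 + 2j) ** 2)     # (-3+4j)
    print(cmath.exp(1j))     # (0.5403023058681398+0.8414709848078965j)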
Can C++ plot graph of the function?
Yes, type expression which contains 1 undefined variable (e.g. cos(t)) and click on the result. In the context menu choose 'Plot graph'.
Does C++ support matrix calculations?
No, it doesn't
Keywords: calculator++ calculator ++ engineer calculator, scientific calculator, integration, differentiation, derivative, mathematica, math, maths, mathematics, matlab, mathcad, percent, percentage,
complex numbers, plotting graphs, graph plot, plotter, calculation, symbolic calculations, widget
*** FIRST 100 DOWNLOADS AT PROMOTIONAL PRICE ***
THIS IS → rvCALC !!
▪ An application that can be used as a Scientific Calculator and also as a Standard and Simple Calculator
▪ A fully featured Scientific Calculator with a lot of Functions, Math & Physics Constants, Complex Numbers, Auto-Correction Features and a complete set of Unit Conversions
▪ INTUITIVE and very EASY TO USE, even in Scientific Mode
▪ A must have for highschool, university and at work !!
❶ STANDARD and SCIENTIFIC modes
This calculator has two modes of operations with an ergonomic design:
▪ STANDARD: Vertical orientation
▪ SCIENTIFIC: Horizontal orientation
To change from Standard to Scientific and from Scientific to Standard you just need to rotate your screen.
This calculator has a broad and comprehensive set of scientific functions that differentiate it from any other in the market:
▪ Complex & Real Numbers
▪ Math Functions: +50
▪ Math & Physics Constants: 30
▪ Unit Conversion: 200 units in 13 groups
▪ Configurable Options
Includes new and very useful Auto-Correction Features that are automatically available in real time "As you Type":
▪ Automatic Error Detection and Visualization
▪ Automatic Error Correction
▪ Automatic Parenthesis Evaluation
▪ Automatic Exponential Functions Evaluation
▪ Automatic Operators
Formula History allow you to review, modify and/or re-use your work
❺ INFORMATION & MANUAL
▪ https://sites.google.com/site/rvcalc2013/
▪ Very complete and clear manual
▪ General Description
▪ How to Use
▪ Functions
▪ Constants
▪ Unit Conversion
▪ Options
❻ OPTIONS
▪ Real and Complex numbers
▪ Engeneering, Scientific and Fix numbers
▪ Complex Numbers in Rectangular and Polar notation
▪ Number of Digits: 4 to 16
▪ Angle Units: RAD, DEG, GRA
❼ MATH FUNCTIONS (50 functions)
▪ General Functions
▪ Exponential & Logaritmic
▪ Trigonometric
▪ Trigonometric - Additional
▪ Hyperbolic
▪ Hyperbolic - Additional
▪ Statistics
▪ Complex Numbers
❽ UNIT CONVERTION FUNCTIONS (200 units)
▪ Angle
▪ Temperature
▪ Distance / Length
▪ Astronomical Distance
▪ Area
▪ Volume
▪ Force
▪ Mass
▪ Speed
▪ Time
▪ Energy
▪ Power
▪ Pressure
❾ MOBILE PHONES and TABLETS
Available for Android Phones & Tablets
⑩ WISH LIST
Post feature request into the following site:
▪ https://sites.google.com/site/rvcalc2013/
⑪ ERROR REPORT
Send error reports to the developer via email:
▪ prbSwDev@gmail.com (subject: ERROR - xxx)
MathsApp Scientific Calculator is aiming to be the best scientific calculator app for Android.
Looking for a graphing calculator, matrix support, or simply want to support the cause of creating the greatest scientific calculator app on Android? Get MathsApp Graphing Calculator: https://
MathsApp Scientific Calculator includes:
-Landscape mode
-Easily adjust previous calculations or insert previous results
-Binary, octal and hexadecimal number support
-Scientific, engineering and regular number formatting
-Physical constants
-User-friendly interface
-No advertisements
-Advanced calculations
-Trigonometric functions
-Advanced statistical distribution functions
-List support
-Complex number support
-Percent support
Mathex Scientific Calculator
Fully featured, expression based, scientific calculator which looks and operates like a real one.
Mathex includes the following features:
* Direct algebraic logic: enter equations as they are written
* History Playback: Recall any of the 10 last steps you have made
* Expression (re)editing: change equations and recalculate
* Chain calculations: use answer in following equation
* 2-Line display: check your equation and answer at the same time
* Intuitive Plotting feature
* Symbolic derivative
* Metric conversions
* Physical constants table
* 10 numerical memories
* Percentages calculations
* Combinatorial operators
* Trigonometry functions in degrees, radians or grads
* Normal, Fixed, Scientific or Engineering display modes
* Thin space digit grouping (SI/ISO 31-0 standard)
* Random number generator/White Noise for functions
* Sturdy, stylish look
To request support visit:
MathsApp Graphing Calculator is the ultimate graphing calculator app on Android. Features include:
-Graphing of functions
-Show line intersection points
-Show extrema (minimums/maximums)
-Show x-axis intersections
-Show y-axis intersections
-View as table
-View dy/dx
-Matrix support
-Easiest matrix entry in the history of calculator apps!
-Programming mode
-8-, 16-, 32-, and 64-bit
-binary, octal, decimal and hexadecimal support
On top of that, all features included in the free MathsApp Scientific Calculated are included as well:
-Landscape mode
-Color themes
-Easily adjust previous calculations or insert previous results
-User-friendly interface
-No advertisements
-Advanced calculations
-Trigonometric functions
-Advanced statistical distribution functions
-List support
-Complex number support
Powerful simulator of the classic calculators. With advanced features and easy to use. The same one we all know, but now on your smartphone.
* Percentages
* Memories
* Trig functions in degrees, radians or grads
* Scientific, engineering and fixed-point display modes
* Configurable digit grouping and decimal point
NHN-1 is a free advanced engineering and scientific calculator with a transparent widget that supports fractions and percentages:
•A resizeable widget with arithmetic, percentage button, and fractions.
•Expression (formula) based input.
•Fraction input and output with conversion to proper fraction, mixed fraction and decimal. Even the quadratic or cubic root functions, for instance, will return fractions when the answer can be
written as a fraction.
•Full history with buttons that load old results, in case you need to start over from a previous point.
•Unlimited memory slots that can be named and are always displayed on screen.
•Multiple input fields except on small phones. This lets you do short side calculations without losing your place.
•Pre-packaged functions for quadratic equations as well as many geometric formulae.
•Buttons that store user defined functions that are created by binding an argument to a binary operator.
•Visualization of multivalued complex functions (roots, inverse trigonometric, logarithm)
•Visualization of quadratic and cubic functions
•Visualization of angles returned from inverse trigonometric functions.
•Complex numbers are very easy to work with, and all functions work with complex numbers where they are defined (inverse hyperbolic cosine, for instance).
•There is a fourth layout for small phones that is not shown in the screen shots.
•A unit conversion utility that lets you build arbitrary composite units. Conversion factors are written to the history, so you can bind the factor with multiplication to do the same conversion many times.
The layouts are designed to be ergonomic for each of four device sizes in portrait and landscape. Here are some examples of what we've done to make the calculator easy and speedy to work with:
•On small tablets in landscape, the keys are split so you can operate the calculator with two thumbs.
•On small phones, the numeric keypad is right under your thumb when holding the device with one hand.
•Square root, square, inverse and negation are all close to the numeric keypad.
•On small tablets, the long "NHN-1" button is a gesture recognizer that lets you scroll the six input fields with your left thumb. That way you don't have to let go of the device with your right hand
to click an input field.
MathAlly Graphing Calculator is quickly becoming the most comprehensive free Graphing, Symbolic, and Scientific Calculator for Android.
Here are some of our current features:
-Enter values and view results as you would write them
-Swipe up, down, left, or right to quickly switch between keyboard pages.
-Long click on keyboard key to bring up dialog about key.
-Undo and Redo keys to easily fix mistakes.
-Cut, Copy, and Paste.
-User defined functions with f, g, h
Symbolic Calculator:
-Simplify and Factor algebra expressions.
-Polynomial long division.
-Solve equations for a variable.
-Solve equations with inequalities such as > and <
-Solve systems of equations.
-Simplify trigonometric expressions using trigonometric identities.
-Graph three equations at once.
-View equations on graph or in table format.
-Normal functions such as y=x^2
-Inverse functions such as x=y^2
-Circles such as y^2+x^2=1
-Ellipses, Hyperbola, Conic Sections.
-Logarithmic scaling
-Add markers to graph to view value at given point.
-View delta and distance readings between markers on graph.
-View roots and intercepts of traces on graph.
-Definite integration.
Other Features:
-Complex numbers
-Hyperbolic functions
-nCr and nPr functions
-Change numeric base between binary, octal, decimal, and hexadecimal
-Bitwise operators AND, OR, XOR, and NOT
-Vector dot product and norm.
Q. Is there a tutorial anywhere explaining how to use the graphing calculator?
A. There are three intro tutorials in the app for the calculator, graph equations, and graph screens. Additional tutorials can be found on our website http://www.mathally.com/
Q. How do I get to the keys for pi, e, solve, etc?
A. There are four keyboard pages. Each swipe direction across the keyboard moves you to a different page. The default page is the swipe down page. To get to the page with trig functions, swipe left.
To get to the matrix keys, swipe up. To get to the last page, swipe right. No matter what page you are on, the swipe direction to move to a specific page is always the same.
Q. What do you have planned for future releases?
A. You can keep up to date on the latest news on our blog at http://mathally.blogspot.com/ . This news will include what is coming up in future releases. Also feel free to leave comments and let me
know what you think!
If you find a bug or have questions, please email me.
Math Ally
Calculator done the right way!
Calculator Pro is designed for everyone looking for simplicity and functionality. You can enjoy using a standard calculator for basic operations or extend it into a scientific one for more complex
calculations. Just tilt your Android device into the landscape mode!
Whether you’re a diligent student, an accountant, a banking manager, a housewife in charge of the family finances or even a maths genius, this calc will save both your effort and time and let you
calculate anything you need!
• Two modes are available: do basic calculations in the Portrait Mode or tilt your smartphone or tablet and go advanced in the Landscape Mode
• Degrees and Radians calculations
• Memory buttons to help you out with complex calculations
• Choose the skin that suits you (additional skins available through in-app purchase)
• Accidentally input the wrong number? Just swipe with your finger to edit it!
• Copy and paste results and expressions directly into the current calculation
• History Bar: see your full calculation history directly on the screen
• Feel free to use it as an office calculator for all kinds of financial and statistical computations
• Use Calculator Pro for college studies, accounting, algebra and geometry lessons, engineering calculations and much more
• Work with decimal fractions, algebraic formulas, solve equations or just master your math skills
Now you can do any calculations on the go seamlessly!
Modern Calculator has been developed for basic purposes.
Any suggestions about future changes are welcome and for sure will be considered.
Feel free to contact me by e-mail.
Modern Calculator supports:
- tangent (tan)
- sine (sin)
- cosine (cos)
- asin
- acos
- atan
- exp
- logarithm (log)
- 2nd degree root
- 3rd degree root
- power of 2nd degree
- power of 3rd degree
- percentages
- brackets
- endless results and expressions history
- endless math expressions
- swipe navigation
- copy result from history by long-press
- copy result or expression to Android clipboard
- backspace
- UI customization
Modern Calculator is certainly worth a try, so go ahead and let me know what you like most in the app and what needs to be changed or fixed.
This version is supporting calculator widget.
Tags: calculate, calculator, simple, science, advanced, best, design, wonderful, modern, looking, good, iphone, ios, real, mobi, scientific, plus, free
Free version with ads: The best graphing and scientific calculator here.
As a scientific calculator, cFunction supports functions like pow, square roots, trigonometric functions, and the logarithm. As mathematical constants, the Euler number e and Pi (π) are supported.
On top of that, with this graphing calculator you can plot (multiple) functions and calculate derivatives, roots, extrema (maxima or minima of a function), inflection points, value tables, specific values, definite integrals, and intersections of functions, and it can convert between degrees and radians.
The calculator supports the trigonometric functions sine, cosine, tangent,
their hyperbolic representations hyperbolic sine, hyperbolic cosine and hyperbolic tangent
and the inverse trigonometric functions sin^−1(x), cos^-1(x), tan^-1(x), coth, cot, acot.
Using the calculator you can analyze functions quickly and simply, so it's perfect for your math classes.
Supports English, German, French, and Spanish.
Relevant Tags:
Math, Calculator, Scientific Calculator, Graphing Calculator, calculate Derivative, calculate Integral, calculate roots, homework, math classes, math tool, plotting, math help, functions, plot
functions, definite integrals, analyze functions, function plotter, extrema (maxima or minima of a function), inflection points, value table, certain values, definite integrals, intersections of
functions, sine, cosine, tangent, pow, square roots, trigonometric functions and the logarithm
More from developer
"Scientific Calculator 3D Free" is a powerful math tool for Android consisting of 5 calculators: "Matrix Calculator", a tool to calculate decompositions and various operations on matrices; "Equation Solver", an instrument capable of solving equations of degree n; "Graphing Calculator", a real scientific graphing calculator that can draw graphs in 2D and 3D; "Currency Converter", a currency converter always updated with the new exchange rates; and "Unit Converter", a small units converter. In addition we find a comfortable Periodic Table of Elements.
Features in detail:
-Matrix Calculator:
--Derived Quantities
Cholesky decomposition
LU decomposition with pivoting
QR decomposition
SVD - Singular Values decomposition
Eigenvalues - Eigenvectors
--Linear Systems
Linear systems with M equations and N unknowns
-Graphing Calculator:
--Sample expressions
variable evaluation: pi
function evaluation: sin(0)
variable definition: a=3.5
function definition: f(x)=x^2-1
parentheses: (1-x)^2
--Logarithms and power
sqrt(x): square root; x^0.5
cbrt(x): cube root; x^(1/3)
exp(x): exponential; e^x
log(x), ln(x): natural logarithm
log2(x), lb(x): binary logarithm
log10(x), lg(x): decimal logarithm
log(base,x): arbitrary base logarithm
--Trigonometric - radians
sin(x), cos(x), tan(x)
asin(x), acos(x), atan(x)
--Trigonometric - degrees
sind(x), cosd(x), tand(x)
asind(x), acosd(x), atand(x)
sinh(x), cosh(x), tanh(x)
asinh(x), acosh(x), atanh(x)
gcd(x,y): greatest common divisor
comb(n,k): combinations
perm(n,k): permutations
min(x,y), max(x,y)
floor(x), ceil(x)
abs(x): absolute value
sign(x): signum
rnd(): random value from [0,1). rnd(max): random value from [0, max).
gamma(x): (x-1)!
mod(x,y): modulo
--Complex numbers
i or j is the imaginary unit.
+ - × ÷ basic arithmetic
^ power
% percent
! factorial
# modulo
√ square root
' first derivative
--Binary, octal, hexadecimal
Converts values to decimal from binary, octal, or hexadecimal input, prefixed respectively with 0b, 0o, and 0x.
binary: 0b1010
octal: 0o17
hexa: 0x100
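For illustration only (this snippet is not from the app, and the class and method names are invented), the same prefix convention can be implemented in a few lines of Java:

```java
public class PrefixConverter {
    // Parses a string with an optional 0b/0o/0x prefix and returns its decimal value.
    static long parseWithPrefix(String s) {
        if (s.startsWith("0b")) return Long.parseLong(s.substring(2), 2);  // binary
        if (s.startsWith("0o")) return Long.parseLong(s.substring(2), 8);  // octal
        if (s.startsWith("0x")) return Long.parseLong(s.substring(2), 16); // hexadecimal
        return Long.parseLong(s);                                          // plain decimal
    }

    public static void main(String[] args) {
        System.out.println(parseWithPrefix("0b1010")); // 10
        System.out.println(parseWithPrefix("0o17"));   // 15
        System.out.println(parseWithPrefix("0x100"));  // 256
    }
}
```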
It is possible to compute the first derivative of a function with one argument using the prime notation: log'(5).
The prime mark (quote) must appear immediately after the name of the function, and must be followed by open-parentheses.
The derivative may be plotted e.g. sqrt'(x).
To compute the derivative of an expression you must define the expression as a named function.
E.g. define f(x) = x^2 + x; after insertion, type f'(x) to display the graph of the derivative.
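The listing does not say how the app evaluates these derivatives internally; as a purely illustrative sketch (every name below is hypothetical, not the app's API), a plotted derivative such as sqrt'(x) could be approximated numerically with a central difference:

```java
public class Derivative {
    interface Fn { double apply(double x); }

    // Central-difference approximation of f'(x): (f(x+h) - f(x-h)) / (2h).
    static double prime(Fn f, double x) {
        double h = 1e-6;
        return (f.apply(x + h) - f.apply(x - h)) / (2 * h);
    }

    public static void main(String[] args) {
        Fn f = x -> x * x + x;             // f(x) = x^2 + x, as in the example above
        System.out.println(prime(f, 2.0)); // approximately f'(2) = 2*2 + 1 = 5
    }
}
```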
--Multi plot
To plot multiple functions on the same 2d graph, simply enter them on the same line separated by ";".
--Special Function
Limit Calculator
Indefinite Integrals
Definite Integrals
Taylor Series
Tangent Line
ArcLength & SurfaceArea
Formula Tables
--Graph (MultiPlot up to 6 functions)
Graph of parametric functions
Graph in polar coordinates
MultiPlot 3D
--Widget Calculator
"BisMag Scientific Graphing Calculator 3D" is a demo version of "BisMag Calculator 3D"; if the product is to your liking and you find it useful, try the PRO version.
scientific calculator, scientific calculator app, grapher, graphic calculator, integration, derivative, mathematica, matlab, mathcad, complex numbers, plotting, graph plot, plotter, calculation,
symbolic calculations, graphing calculator, study of function, asymptote, derive, arity, symja
"Smart Utilities Free" is a demo version of "Smart Utilities", so the collection of 11 tools in "Smart Utilities Free" is limited:
- Spirit Level (measuring instrument used to determine the slope of a surface with respect to a horizontal plane of reference)
- Battery Level (Easy way to view the exact battery level, battery temperature, battery voltage, battery status and battery health)
- Compass (This app is a tool to search bearings(azimuth) using the built-in magnetic sensors or gps)
- Torch (The app turns your camera flash into a flashlight. In addition, with a tap on the "Menu" provides additional functionality)
- Metronome (A versatile metronome with a very simple layout. Useful for the practice of music as well as physical exercise, putting practice, dance, and many other activities) /* Only in PRO version */
- Calculate angle for parabola orientation (A tool for finding TV satellites and aligning satellite dishes) /* Only in PRO version */
- Speedometer (A tool to control your speed along a path) /* Only in PRO version */
- Connection speed detector (A simple app that allows you to test the speed of your Internet connection in both download and upload) /* Only in PRO version */
- Sound Level Meter (Uses the phone's microphone to measure the volume of sound in decibels (dB)) /* Only in PRO version */
- Converter Binary-Decimal-Hexadecimal-Octal
- Task Killer /* Only in PRO version */
Keywords: Smart Compass, Sound Meter, Smart Ruler, Smart Measure, Speed Gun, Smart Dista, Vibration Meter, Smart Protract, Smart Distance, Smart Tools
This app solves an equation of degree n (with n <= 20).
It is simple and intuitive: you insert the degree and then the coefficients of x, and it will do the rest.
The 15-puzzle (also called Gem Puzzle, Boss Puzzle, Game of Fifteen, Mystic Square and many others) is a sliding puzzle that consists of a frame of numbered square tiles in random order with one tile
missing. The object of the puzzle is to place the tiles in order, by making sliding moves that use the empty space.
The puzzle also exists in other sizes, particularly the smaller 8-puzzle. If the size is 3×3 tiles, the puzzle is called the 8-puzzle or 9-puzzle, and if 4×4 tiles, the 15-puzzle or 16-puzzle, named, respectively, for the number of tiles and the number of spaces.
(in gameplay, press the "standard MENU button" to activate the option menu)
** New Game
** High Scores
** Settings:
*Enable/Disable Sound
*Enable/Disable Status bar
*Select Background Color
*Enable/Disable Timer
*Select Timer Color
*Enable/Disable Number on the tiles
*Select Numbers Colors
*Select Numbers Sizes
*Show image
*Image Source
*Puzzle Size:
*8-puzzle; (size 3x3)
*24-puzzle; (size 5 x 5)
*31-puzzle; (size 6 x 6)
*48-puzzle; (size 7 x 7)
*63-puzzle; (size 8 x 8)
*customizable images (with images from your gallery)
*timer to challenge your completion time
Important notes:
- To work properly, the application requires an active SD card.
- The default image in this game is a photo of one of the most beautiful places in Italy, the Salento.
Smart Utilities is a collection of 11 tools:
- Spirit Level (measuring instrument used to determine the slope of a surface with respect to a horizontal plane of reference)
- Battery Level (Easy way to view the exact battery level, battery temperature, battery voltage, battery status and battery health)
- Compass (This app is a tool to search bearings(azimuth) using the built-in magnetic sensors or gps)
- Torch (The app turns your camera flash into a flashlight. In addition, with a tap on the "Menu" provides additional functionality)
- Metronome (A versatile metronome with a very simple layout. Useful for the practice of music as well as physical exercise, putting practice, dance, and many other activities)
- Calculate angle for parabola orientation (A tool for finding TV satellites and aligning satellite dishes)
- Speedometer (A tool to control your speed along a path)
- Connection speed detector (A simple app that allows you to test the speed of your Internet connection in both download and upload)
- Sound Level Meter (Use the phone's microphone to measure the volume of sound in decibels (dB))
- Converter Binary-Decimal-Hexadecimal-Octal
- Task Killer
Keywords: Smart Compass, Sound Meter, Smart Ruler, Smart Measure, Speed Gun, Smart Dista, Vibration Meter, Smart Protract, Smart Distance, Smart Tools
Matrix Total Calculator Free is a simple and intuitive app:
it is a powerful tool to calculate decompositions and various other matrix operations; MTC is able to do:
In this FREE version
-Elementary Operation
Addition (2 by 2 Matrix)
Subtraction (2 by 2 Matrix)
Multiplication (2 by 2 Matrix)
Scalar Multiplication (2 by 2 Matrix)
Transpose (2 by 2 Matrix)
Norms (2 by 2 Matrix)
-Derived Quantities
Determinant (2 by 2 Matrix)
Rank (2 by 2 Matrix)
Inverse (2 by 2 Matrix)
PseudoInverse (2 by 2 Matrix)
Cholesky (2 by 2 Matrix)
LU with pivoting /* Only in Matrix Total Calculator Pro */
QR (2 by 2 Matrix)
SVD - Singular Values /* Only in Matrix Total Calculator Pro */
Eigenvalues – Eigenvectors /* Only in Matrix Total Calculator Pro */
-Equation solution
Linear Systems M equations N unknowns /* Only in Matrix Total Calculator Pro */
Can you guess all the notes?
Test yourself with Notes' Rainbow, the new music game from BisMag dev: a small piano made up of two octaves
where you have to recognize the sequence of notes played.
Some lives and some replays will help you climb the seven levels. Moreover, the Test Piano offers a piano (not a professional one) that will help you train your ear to better face the levels.
Finally, a CheckPoint, obtained after the third level and usable only once, will allow you to restart from the fourth once all lives are lost; in case you reach the seventh and last level, come back to
Have fun with Notes' Rainbow!
/* The free version is locked at the first level */
Use is intuitive: you size the matrices by entering the number of rows and columns, then enter the values of the matrices, and the app does the rest.
The application can calculate matrices up to 10×10, to avoid crashes caused by the hardware limitations of a smartphone.
This app allows the calculation of the determinant of any square matrix. The app is simple and intuitive, just size the matrix and insert the values, the app will do the rest.
The application can calculate matrices up to 15×15, to avoid crashes caused by the hardware limitations of a smartphone.
When you enter a value in a matrix, you can use the following
Operations: * / - + ^
sqrt(x) Calculates the square root of x
exp(x) Calculates e^x (e = Euler’s number)
ln(x) Calculates the natural logarithm (base e) of x
log(x) Calculates the logarithm (base 10) of x
sin(x) Calculates the sine of x
cos(x) Calculates the cosine of x
tan(x) Calculates the tangent of x
asin(x) Calculates the arcsine of x
acos(x) Calculates the arccosine of x
atan(x) Calculates the arctangent of x
The argument x can be a function. Example: log(sin(pi))
The argument x of the trigonometric functions is in radians
pi 3.14159… Example: sin(2*pi)
e Euler number 2.71828… Example: 6e-5
This app allows the calculation of the inverse of a square matrix. The app is simple and intuitive: just size the matrix and insert the values, and the app will do the rest.
The application can calculate matrices up to 15×15, to avoid crashes caused by the hardware limitations of a smartphone.
When you enter a value in a matrix, you can use the same operations, functions, and constants listed above for the determinant calculator.
One of the most popular logic games, in three levels:
- Easy
- Medium /* only Pro version */
- Hard /* only Pro version */
This is Sudoku Free, where you can venture to solve your sudoku with the option of stopping and resuming later from where you left off, thanks to the Continue button.
The formats of the Sudoku will always be random (in the free version there are only two diagrams). You can also enable or disable the music and suggestions.
Have fun
Newton's Method finds a single root of a function from a sufficiently accurate initial approximation. The default precision for |x(i+1) - x(i)| is 0.00001, but it is possible to insert your own.
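As an illustrative sketch of the iteration described (the sample function, its derivative, and all names below are assumptions made for this example, not taken from the app):

```java
import java.util.function.DoubleUnaryOperator;

public class Newton {
    // Newton's method: x(i+1) = x(i) - f(x(i)) / f'(x(i)),
    // stopping once |x(i+1) - x(i)| falls below the requested precision.
    static double findRoot(DoubleUnaryOperator f, DoubleUnaryOperator fPrime,
                           double x0, double precision) {
        double x = x0;
        for (int i = 0; i < 100; i++) { // iteration cap in case of divergence
            double next = x - f.applyAsDouble(x) / fPrime.applyAsDouble(x);
            if (Math.abs(next - x) < precision) return next;
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        // Root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2) ≈ 1.4142136.
        System.out.println(findRoot(x -> x * x - 2, x -> 2 * x, 1.0, 0.00001));
    }
}
```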
This app checks whether a number is prime or not. If the number is not prime, the app returns a divisor of the number entered.
This app also allows the calculation of the factorial of a number <= 170.
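A plausible sketch of both features (illustrative only, not the app's code; the cap at 170 is natural because 170! ≈ 7.26e306 still fits in a double while 171! overflows to infinity):

```java
public class PrimeAndFactorial {
    // Trial division: returns the smallest divisor > 1 of n, or 0 if n is prime.
    static long smallestDivisor(long n) {
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return d;
        }
        return 0; // no divisor found: n is prime (for n >= 2)
    }

    // Factorial computed as a double, valid up to n = 170.
    static double factorial(int n) {
        double result = 1.0;
        for (int k = 2; k <= n; k++) result *= k;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(smallestDivisor(91));  // 7, since 91 = 7 * 13
        System.out.println(smallestDivisor(97));  // 0, since 97 is prime
        System.out.println(factorial(170));       // about 7.257e306
    }
}
```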
September 5th, 2010
There are a lot of online tools out there for computing p-values and test statistics associated with common statistical distributions such as the normal or Student’s t-distributions. Unfortunately,
most of them are either ad-ridden or powered by Java (and hence slow to initially load and finicky when it comes to which browsers they work with). So one of my summertime projects this year was to
create a website that solves both of those problems:
The website computes p-values and test statistics in real-time via javascript (and thus does not need Java or any other plug-in). The computations themselves are fairly straightforward and are
performed via the trapezoid rule. The graphic on the right is composed of a static PNG that displays the appropriate distribution. The distribution’s image is transparent under the graph and opaque
above the graph, which makes it easy to display the p-value graphically – the light blue area is actually just a blue rectangle that is drawn beneath the distribution’s image.
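As a rough illustration of the kind of computation described (the site itself does this in javascript; the Java sketch below handles only the right tail of the standard normal, and all names are invented for the example):

```java
public class PValue {
    // Standard normal density.
    static double phi(double x) {
        return Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
    }

    // Right-tailed p-value P(Z > z) via the trapezoid rule; the upper limit
    // of 10 is effectively infinity for the standard normal.
    static double rightTailP(double z) {
        int n = 100000;
        double a = z, b = 10.0, h = (b - a) / n;
        double sum = (phi(a) + phi(b)) / 2;
        for (int i = 1; i < n; i++) sum += phi(a + i * h);
        return sum * h;
    }

    public static void main(String[] args) {
        System.out.println(rightTailP(1.96)); // approximately 0.025
    }
}
```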
Additionally, through the magic of PHP the tool automatically creates a URL that links to the current computation (and thus makes it much more citable). So, for example, if you want to know the
T-value that corresponds to a right-tailed test with 12 degrees of freedom and a p-value of 0.1, you could simply click here.
Anyway, if you’re a nerd like me then enjoy it and of course feel free to leave any feedback/suggestions that you might have.
My New Project: Conway’s Game of Life
January 16th, 2009
Sometime around the start of December I was reminded of Conway’s Game of Life – a mathematical “game” that I was first introduced to in my grade 12 programming class. Unfortunately for me and my
research, I found the game much more interesting this time around and have proceeded to spend almost the entire last month dedicated to it.
It started out innocently enough; I thought that a webpage that uses the canvas tag to run the Game of Life would be the perfect way to hammer home what I was talking about in this post. It turns out
that the game gets quite computationally intensive for even moderately-sized patterns however, so although the tool was functional, it chugged. What would be the solution to this problem? Write the
tool in a pre-compiled programming language like Java, of course.
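To see why a straightforward implementation chugs, here is a minimal, purely illustrative sketch of the naive update (not the algorithm the tool ended up using): every generation visits all n × n cells and their eight neighbours, so the cost is O(n²) per step no matter how sparse the pattern is.

```java
public class Life {
    // One generation of Conway's Game of Life on a fixed n x n grid
    // (cells outside the grid are treated as dead).
    static boolean[][] step(boolean[][] grid) {
        int n = grid.length;
        boolean[][] next = new boolean[n][n];
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < n; c++) {
                int live = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int rr = r + dr, cc = c + dc;
                        if (rr >= 0 && rr < n && cc >= 0 && cc < n && grid[rr][cc]) live++;
                    }
                }
                // Live cells survive with 2 or 3 neighbours; dead cells are born with exactly 3.
                next[r][c] = grid[r][c] ? (live == 2 || live == 3) : (live == 3);
            }
        }
        return next;
    }
}
```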
There are several Java implementations of the game freely available on the internet, but at this point I wanted to make an online tool that does a bit more than just evolve patterns; I wanted also to
be able to upload, save, and download pattern files from the tool, something that is quite impossible from a Java applet. Also, because building interfaces for Java applets is a bit of a chore, most
of the pre-existing Java applets implementing the game are a bit hard to look at. All of these problems can be solved via a bit of server-side ASP and some Java-to-javascript communication,
fortunately (something I will post about separately later).
The end result? Well, I don’t really know yet, but here’s the beta 0.2.0 result:
The Java applet that runs the game itself is based on Alan Hensel’s brilliantly fast Java staggerstep algorithm, with some added file manipulation functionality and a dynamic online database of
patterns. The applet is still very much a work in progress and there are quite a few known bugs, but it works well enough for now.
The real star of the website for now is the LifeWiki, which I’m hoping will fill a huge gap on the internet that I was rather shocked to find – even though there are many great homepages with bits
and pieces of information about the game scattered across the internet, there really is no central resource that catalogues everything about the game; LifeWiki will hopefully fill that gap. I have
started it off with some 150 articles and uploaded as many images, but I’m hoping that I can draw a few other Life enthusiasts to the wiki to help expand it so as to ease some of the burden.
Anyway, that’s about all I have to say for now about the website; I’ll likely make another post or two in the reasonably near future with coding tips/tricks based on my experience making the tool and
site in general. Until then, I present to you my favourite pattern for the Game of Life, the Canada goose:
PS. Happy birthday to me!
New Website Launched
October 2nd, 2008
Well, after squatting on this domain for about a year, I finally decided that I might as well put a website up here again. The website is going to be much more math-oriented this time around since
I’m a nerd like that. In that vein, my first entry here will simply be one of the only math-related things that I wrote on the old site (on November 22, 2005):
After hours of thought and consideration, I have come to the conclusion that the way in which math is taught sucks monkey fur. Let us take, for example, a Numerical Methods assignment that I currently have sitting in front of me. One particular question (which is worth a whopping 0.0769% of my final mark) on this assignment requires me to find eigenvalues of a 3 × 3 matrix for which the characteristic polynomial does not factor.
“But Nathan,” I can hear you say, “that’s simply a matter of plugging numbers into the cubic root formula! What’s your problem, ho?”
And though you are quite correct, allow me to print, in its entirety, said formula:
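(The formula itself appeared as an image in the original post and did not survive here. For reference, and not as a reproduction of the original graphic: Cardano's formula for a root of the depressed cubic $t^3 + pt + q = 0$, to which the general cubic $ax^3 + bx^2 + cx + d = 0$ reduces via the substitution $x = t - \frac{b}{3a}$, reads

$$t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}} + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}},$$

which is exactly the sort of expression nobody should have to evaluate by hand for 0.0769% of a final mark.)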
“But Nathan,” I can hear you chirp up again, “why don’t you just use the QR Algorithm or MATLAB or some other method to find the roots?”
Well, it seems that this route of escape was thought of by the prof, so she specifically states that we are to compare our answer with the one obtained from MATLAB – indicating that we are
indeed actually expected to find these roots by hand and get the exact answers – to obtain a whopping 0.0769% of our final mark (actually, considerably less than that – this is only one part of
a multi-part question).
“But Nathan,” I hear you say one final time, “why don’t you talk about something interesting? I don’t know what the hell the QR Algorithm is, nor could I care less.”
Shut up, I never talk about math on here, I’ve been generous until now, so let me rant. I’m getting sick and tired of so many courses managing to teach so little, while expecting so much. Why
do we have to prove over and over again that we are capable of plugging numbers into longer and longer formulae, while not actually being required to demonstrate any real insight along the way?
So, tell me professors, what is the point of this? And what is the point of us having to draw nine linearizations to complete question #1 on a differential equations assignment? You don’t
believe that we know how to do it after the first eight times? Why do you feel the need to ask the same questions over and over again, while giving us nothing really insightful or different on
the assignments?
Maybe someday down the line if/when I become a professor I’ll understand (and perhaps even prove) that making assignments ridiculously repetitive and far more tedious than necessary is a
fundamental law of the universe which keeps us all in harmony and prevents the Earth from being hurled into the sun. But, barring that realization, I make the following vow to my future
students, should I become a professor:
I will (try to) make assignments for my classes (as) interesting (as possible for a math class) and will not ask you to do questions that involve exceedingly gross algebra (unless you all get
on my bad side by skipping lectures) for no good reason.
Personal, Websites
|
{"url":"http://www.njohnston.ca/tag/websites/","timestamp":"2014-04-19T04:37:50Z","content_type":null,"content_length":"31302","record_id":"<urn:uuid:78969fd1-8f0b-49f0-b969-b68cdd4b3aa1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The geometry of syzygies. A second course in commutative algebra and algebraic geometry.
(English) Zbl 1066.14001
Graduate Texts in Mathematics 229. New York, NY: Springer (ISBN 0-387-22232-4/pbk; 0-387-22215-4/hbk). xvi, 243 p. $ 29.95, EUR 29.95/net, £ 23.00, sFr 54.40/pbk; $ 64.15/hbk (2005).
In his introduction to the French edition of F. Klein’s Erlanger Programm [Le programme d’Erlangen (1974; Zbl 0282.50012)], J. Dieudonné wrote:
“The developments which will most influence Klein come from the years 1850–1860... we have the construction, due to Cayley and Sylvester, of the general Theory of Invariants, which will soon offer a procedure [the symbolic method] to determine the algebraic invariants of a system of geometric objects and all the algebraic relations [or syzygies] among them.”
So when the new way to look at geometry, or better at geometries, enters the game at the end of the 19th century (a geometry is the study of certain objects and a group of transformations among them, as Klein defines it in his program), the study of syzygies is already part of this new approach, even if it is with Hilbert that its role becomes clearer and the approach to syzygy modules more systematic.
As the author points out at the beginning of the book, when you study geometry using the tools given by abstract algebra, one of the main problems is how to relate equations and geometric objects,
i.e. how to extract the geometric information from the equations defining such objects. The theory of syzygies is one powerful tool to do this (a microscope, Eisenbud says).
This book is devoted to offering, as the title already says, an approach to the study of this algebraic subject (a syzygy is a relation among generators of a module) with a very geometry-oriented point of view, which is actually consubstantial with the way the theory was born. The exposition is at a graduate level, and a student would learn a lot of (classical and less classical) algebraic geometry from it.
The double bet of the book is to be a complete textbook for a (second level) graduate course in algebraic geometry or commutative algebra and at the same time to become a useful reference text for research work on the subject. I would say that both aspects of the bet have been won, since the book manages to be introductory but also gives a fairly complete outlook on the most recent results about syzygies and their applications to algebraic geometry. It also gives a good idea of how the theory has been developed and what are the questions that have pushed towards the main results.
Every chapter begins with an informal sketch of what will follow, giving motivations for the aspects of the theory to be developed, and geometric examples and applications shed light on the algebraic constructions.
After a nice preface that gives an idea of the subject, the reader is introduced to Hilbert functions and free resolutions (chapter 1), immediately followed in chapters 2 and 3 by examples (monomial ideals, ideals of sets of points in $\mathbb{P}^2$ and $\mathbb{P}^3$) which show how syzygies of ideals can carry information about geometric configurations.
Castelnuovo-Mumford regularity is introduced in chapter 4, and the problem of interpolation along with the cohomological point of view are again intertwined throughout the chapter, so as to give motivations and examples along with the development of the theory, which is applied in chapter 5 to study the regularity of projective curves, giving the main results (such as the Gruson-Lazarsfeld-Peskine theorem) and conjectures on the subject at the present state of the art.
In chapter 6, embeddings via linear systems are studied, and especially matrices of linear forms (and determinantal ideals) coming from them. Here rational and elliptic normal curves are treated in detail.
A new tool is introduced in chapter 7, the Bernstein-Gelfand-Gelfand correspondence, and the exterior algebra is studied here to give Green’s linear syzygy theorem. All these and new tools are used in chapter 8, where high-degree embeddings of curves are studied and the classical results on the subject, together with the Green and Green-Lazarsfeld conjectures, are given. Recent results on these conjectures are described in chapter 9.
Finally, two appendices give a useful “reminder” of local cohomology and of commutative algebra (the author prides himself that this is the shortest compendium of commutative algebra, after having written the longest one [see “Commutative algebra. With a view toward algebraic geometry”, Graduate Texts in Mathematics 150 (1995; Zbl 0819.13001)]).
As a final note, I would like to add that the author manages in this book to give an example of how to overcome what Simone Weil (the great sister of André) had said in pointing out algebra as one of the “bad gifts” of the contemporary age, since algebra’s procedures can be so abstract as to hide from the people working with them what they are really doing. Sometimes, going through an algebra text (especially the very Bourbaki-oriented ones), one could agree with her, but geometry can furnish a way out (not the only one) by giving “life” to algebraic structures that could otherwise appear as diamonds: perfect and sterile.
14-02 Research monographs (algebraic geometry)
13D02 Syzygies, resolutions, complexes
13-02 Research monographs (commutative algebra)
Converting Units
How do I convert metric to U.S units and vise versa (kilograms to pounds, meters to feet, centimeters to inches, liters to gallons, etc.)? How do I convert Celsius temperatures to Fahrenheit? How
many pints are in a gallon? How do I convert square feet to square inches?
For help setting up unit conversion problems and doing the math, see these tips from Dr. Math on
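(For reference, a few of the standard conversions the questions above ask about: F = C × 9/5 + 32 for temperatures, 1 gallon = 8 pints, and 1 square foot = 12 × 12 = 144 square inches.)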
Comment 4 for bug 712913
Minh Van Nguyen (mvngu) wrote : #4
> igraph_ring() is a pretty simple function. OK, this is because it calls
> igraph_lattice(), which can do a bit more than just generating ring and line graphs.
> Still, igraph_lattice() is less than 100 lines of code. Your test suite is about 800
> lines. This seems to be out of proportion for me.
My understanding is that it's OK for a function's test suite to be out of proportion (in terms of lines of code) with respect to the function itself. When you take into account branch test coverage,
boundary value tests, conformance to known mathematical properties, etc., you're really looking at having a test suite that will be out of proportion with respect to the function it's testing. A
thorough test suite can serve as a suite of regression tests. I really like how the SQLite team tests SQLite; see
Beginners to igraph such as myself face the daunting task of learning how to use igraph's C library. The one place where beginners (especially myself) look for examples is in the tutorial (if there
is any) or the directory of examples as found under example/. The more varied the examples the easier it is for beginners to learn how to use igraph's C library.
In test-driven development, a function that is not tested (with respect to known results) can be considered broken. Take the example of the function igraph_watts_strogatz_game() in src/games.c. It's
simple and less than 50 lines of code, yet it's broken because it deviates from the original Watts-Strogatz model as published in
* D. J. Watts and S. H. Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440--442, 1998.
I consider the present implementation of igraph_watts_strogatz_game() to be an extension of the original Watts-Strogatz model. Here are my reasons why I consider this function broken. Until recently,
the function igraph_watts_strogatz_game() could generate a rewired graph that has loop edges, but the original Watts-Strogatz model doesn't allow loop edges in any rewired graph. Fortunately, the
issue of loop edges was fixed at
and I thank you for that fix. Another reason why igraph_watts_strogatz_game() is broken is that it can generate a rewired graph that is actually a multigraph, which deviates from the original
Watts-Strogatz model. Grep'ing through C files under examples/simple/ fails to show any tests that use the function igraph_watts_strogatz_game(). There is plenty of room for users to think that
igraph_watts_strogatz_game() will generate a rewired graph that conforms to the original Watts-Strogatz model, i.e. a rewired graph that is simple and doesn't change the number of edges and vertices.
But that is not the case and users are not likely to notice this until they actually carefully inspect the graph generated by igraph_watts_strogatz_game(). Having plenty of documented and commented
tests for igraph_watts_strogatz_game() should alleviate such potential confusion.
> We need to maintain this code in the future, and even if you don't change the
> testing code too often, sometimes you need to, e.g. API changes are quite frequent
> in igraph, because a lot of the API is messy, or new functionality is added. In
> these cases we need to change the tests as well. And this requires effort and time.
> (In general, I'm not talking about igraph_ring() now.)
A reason to have an extensive test suite for the igraph C library is to detect regressions. An API change can result in a failed test; one needs to see why the test fails and change accordingly. That's
just a normal part of maintaining code. Of course one could have minimal test code to lessen the time and effort required for maintenance. But that doesn't give a user much confidence that a function
works or that the result it generates conforms to known mathematical properties. An extensive test suite for a function helps to build confidence in the correctness of the function.
> As for igraph_ring(), I would prefer some more concise tests, e.g. I don't see any
> need checking the girth of the graph, or even the number of edges. Just test the
> boundary cases, i.e. 0, 1, 2, 3, vertices, query the full edge list to see that it
> is all right and that's it.
There's a very simple way to compute the girth of an undirected circular ring (i.e. a cycle graph): the required girth is just the number of vertices. One needs to ensure that the generated ring
does indeed satisfy this known property. Again this comes back to the issue of building confidence in the correctness of the result generated by igraph_ring(): whether the generated graph satisfies
some known mathematical properties. One could get carried away with testing a larger list of known properties for the cycle graph, but we don't need to go that far; just a tiny list of known simple
properties for now is good enough.
> Or maybe just store the edges of the correct graphs, and then see whether they are
> isomorphic to the ones create by igraph_ring(). Maybe this is the simplest, and it
> can be done with more readable code. I.e. something like:
> igraph_real_t edges_undirected_circular_3[] = {0,1, 1,2, 2,0 };
> igraph_real_t edges_undirected_line_3[] = {0,1, 1,2 };
> ...
> igraph_ring(&g, 3, /*directed=*/ 0, /*mutual=*/ 0, /*circular=*/ 1);
> if (! check_ring(&g, edges_undirected_circular)) return 1;
> ....
> where check_ring() creates a graph from the given edge list and checks that it is
> isomorphic to the supplied graph.
Now you're giving me an itch to scratch :-) Give me some time to write some examples of such tests.
10th Annual Conference for African-American Researchers in the Mathematical Sciences (CAARMS10)
The Tenth Conference for African American Researchers in the Mathematical Sciences (CAARMS10) will be held June 22-25, 2004, in Berkeley, CA. It is jointly hosted by the
Mathematical Sciences Research Institute and the Lawrence Berkeley National Laboratory. Events include invited technical speakers, tutorials, and a graduate poster session. The organizers are William
A. Massey of Princeton University, Robert Megginson of the Mathematical Sciences Research Institute and Juan Meza of the Lawrence Berkeley National Laboratory. Stay tuned to this site for more
details about the conference.
There is funding to support graduate students who want to make poster presentations at CAARMS10. All interested graduate students should submit their titles and abstracts by email to
wmassey@princeton.edu before the end of May.
Conference Sponsors:
National Security Agency
Mathematical Sciences Research Institute
Lawrence Berkeley National Laboratory
June 15, 2004
The Geomblog
Via Peter Woit and Luca Trevisan comes a pointer to an article by Anatoly Vershik in the new Notices of the AMS, lamenting the role of money prizes in mathematics. Three thoughts:
• "the newspapers, especially in Russia, are presently “discussing” a completely different question: Is mathematical education, and mathematics itself, really necessary in contemporary society?".
At the risk of sounding patronizing, I find it terribly worrisome that the place that spawns such amazing mathematicians, and has such a legendary training program for scientists, should even
indulge in such a discussion. Especially now, with all the handwringing in the US about the lack of mathematical training at school level, it seems a particularly bad time to abdicate what is clearly a competitive advantage.
• He talks about not understanding "the American way of life" as regards how money is viewed. There's a juxtaposition of images that I've always been struck by, and that tennis lovers will
recognize: At Wimbledon, the winner is crowned with a fanfare, royalty, and a trophy (or plate); the prize money is never really discussed. At the US Open on the other hand, along with the
fanfare comes the huge check handed out by some corporate sponsor while the PA blares out the amount. The trophy presentation, although making for good photo-ops, seems almost anticlimactic.
I am a little skeptical though whether offering prizes like the Clay prize convinces people that mathematics is a lucrative profession. After all, this hasn't happened for the Nobel prizes.
• On the false-duality: I've heard a variation of this argument many times. It goes basically like this: "Either you're interested in subject X and don't need motivation, or you aren't, in which
case no amount of motivation is going to help". This is possibly true for identifying students likely to make the transition to being professionals in subject X. In fact, I've heard an anecdote
from the world of music, about a maestro who would tell all his students that they would fail professionally at being musicians. His argument was that only the ones who cared enough to prove him
wrong had what it took to survive.
One has to realize though that the teaching of a subject is not about creating Mini-Mes: only a small fraction of the students we come in contact with will become professional computer scientists
/mathematicians/whatever. But a large fraction of these students will vote, many of them will go onto position of influence either in industry or government, and they will all contribute to a
general awareness of the discipline. So it's a mistake to give up on motivating students; even if they never end up proving theorems for a living, a better appreciation for those who do will help
all of us.
Pat Morin has a great collection of Java sorting applets. You can line up three at a time and have them sort a random permutation of one through N. It's fun to see insertion sort or bubble sort
earnestly plodding along long after merge sort or quick sort has blazed through.
There are typically two major ways of ordering authors on a paper. In theoryCS, we (mostly) use the lexicographic ordering (by last name); in many other areas, author ordering by relative
contribution is common. There are variants: advisors might list themselves last by default even within a lexicographic or contribution-based system, and middle authors may not always be ordered
carefully, etc etc.
Since paper authorship conveys many important pieces of information, author ordering is an important problem. It's an even bigger problem if you have hundreds of authors on a paper, some of which may
not even know each other (!). This is apparently becoming common in the HEP (high energy physics) literature, and an interesting article by Jeremy Birnholtz studies the problem of authorship and
author ordering in this setting. The study is sociological; the author interviews many people at CERN, and derives conclusions and observations from their responses.
As one might imagine, not too many of the problems of 1000-author papers are translatable to our domain. After all, these papers are bigger than our conferences, and I doubt anyone has ever needed a
"publication committee" when writing their paper. And yet, the interviews reveal the same kinds of concerns that we see all the time. Is a certain ordering scheme shortchanging certain authors ? Did
a certain author do enough to merit authorship ? Who gets to go around giving talks on the work ?
Towards the end of the paper, the author makes an interesting (but unexplored) connection to game theory. The players in this game are the authors, and what they are trying to optimize is perceived
individual contributions by the community (the "market"). Intuitively, lexicographic ordering conveys less information about author contributions and thus "spreads" contributions out: however, it's
not symmetric, in the sense that if we see a paper with alphabetically ordered authors, it could be a product of a truly relative contribution ordering that yields this ordering, or a lexicographic
ordering. In that sense, authors with names earlier in the alphabet are disadvantaged, something that seems counter-intuitive.
As it turns out, there's been some work on the equilibrium behaviour of this system. To cite one example, there's a paper by Engers, Gans, Grant and King (yes, it's alphabetically ordered) that
studies the equilibrium behaviour of author ordering systems with a two-author paper in a market. Their setup is this:
• The two players A and B decide to put in some individual effort.
• The relative contribution of each (parametrized by the fraction of contribution assigned to A) is determined as a (fixed but hidden) stochastic function of the efforts.
• The players "bargain" to determine ordering (lexicographic or contribution). The result is a probability of choosing one kind of ordering, after which a coin is tossed to determine the actual ordering.
• The work is "published", and the market assigns a value to the paper as a whole, and a fraction of this value to A, based on public information and other factors.
Now all of these parameters feed back into each other, and that's where the game comes from. What is a stable ordering strategy for this game? It turns out that lexicographic ordering does yield
equilibrium behaviour, and contribution-based ordering does not.
What's even more interesting is that if we look at merely maximizing research output (the external "quality" of the paper), then this is not maximized by lexicographic ordering, because of the overall
disincentive to put in more effort if it's not recognized. However, this does not suggest that always using contribution-based ordering is better; the authors have an example where this is not true,
and one intuition could be that if there's a discrepancy between the market perception of contribution and individual contributions, then there is a disincentive to deviate too much from the
"average" contribution level.
It's all quite interesting. Someone made a comment to me recently (you know who you are :)) about how assigning papers to reviewers made them value research into market-clearing algorithms. I like
the idea of applying game theory to the mechanisms of our own research.
(HT: Chris Leonard)
Previous posts on author ordering here, and here.
3067 -- Japan
Time Limit: 1000MS Memory Limit: 65536K
Total Submissions: 19533 Accepted: 5285
Japan plans to welcome the ACM ICPC World Finals, and a lot of roads must be built for the venue. Japan is a tall island with N cities on the East coast and M cities on the West coast (M <= 1000, N <= 1000). K superhighways will be built. Cities on each coast are numbered 1, 2, ... from North to South. Each superhighway is a straight line connecting a city on the East coast with a city on the West coast. The funding for the construction is guaranteed by ACM. A major portion of the sum is determined by the number of crossings between superhighways. At most two superhighways cross at any one location. Write a program that calculates the number of crossings between superhighways.
The input file starts with T - the number of test cases. Each test case starts with three numbers – N, M, K. Each of the next K lines contains two numbers – the numbers of cities connected by the
superhighway. The first one is the number of the city on the East coast and the second one is the number of the city on the West coast.
For each test case write one line on the standard output:
Test case (case number): (number of crossings)
Sample Input
Sample Output
Test case 1: 5
Southeastern Europe 2006
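(The following sketch is not part of the original problem page.) Two superhighways (i1, j1) and (i2, j2) cross exactly when (i1 - i2)(j1 - j2) < 0, so after sorting by East city the answer is the number of inversions among the West cities, which a Fenwick tree counts in O(K log M):

```java
import java.util.Arrays;

public class Japan {
    // Sort highways by (east, west); then each highway crosses exactly those
    // processed before it whose west endpoint is strictly larger (an inversion).
    static long countCrossings(int[][] highways, int m) {
        Arrays.sort(highways, (a, b) -> a[0] != b[0] ? a[0] - b[0] : a[1] - b[1]);
        long[] tree = new long[m + 1]; // Fenwick tree indexed by west city 1..m
        long crossings = 0;
        for (int k = 0; k < highways.length; k++) {
            int w = highways[k][1];
            crossings += k - prefixSum(tree, w); // earlier highways with west > w
            for (int i = w; i <= m; i += i & -i) tree[i]++; // record this highway
        }
        return crossings;
    }

    // Number of recorded highways with west endpoint <= w.
    static long prefixSum(long[] tree, int w) {
        long s = 0;
        for (int i = w; i > 0; i -= i & -i) s += tree[i];
        return s;
    }

    public static void main(String[] args) {
        // Hypothetical input (not the elided sample above): prints 8.
        int[][] hw = {{1, 4}, {2, 3}, {3, 2}, {3, 1}, {4, 1}};
        System.out.println(countCrossings(hw, 4));
    }
}
```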
geometry terms
Hi angie38;
Welcome to the forum.
First find the slope of the line you are given.
solve for y
The slope of this line is 2 / 3. For a line to be perpendicular to this one, its slope must be the negative reciprocal. Therefore the slope of the new line is -(3 / 2).
The general form of the line you want is y = mx + b, since we know the slope m.
You should be able to solve for b and get the equation of the line that is perpendicular to the given one and passes through the point (-2,-2).
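(For reference, plugging the point into y = mx + b: -2 = (-3/2)(-2) + b, so -2 = 3 + b and b = -5, giving y = -(3/2)x - 5.)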
On pointers versus addresses, 1996
"... : The aspect of purity versus impurity that we address involves the absence versus presence of mutation: the use of primitives (RPLACA and RPLACD in Lisp, set-car! and set-cdr! in Scheme) that
change the state of pairs without creating new pairs. It is well known that cyclic list structures can be c ..."
Cited by 17 (0 self)
The aspect of purity versus impurity that we address involves the absence versus presence of mutation: the use of primitives (RPLACA and RPLACD in Lisp, set-car! and set-cdr! in Scheme) that change
the state of pairs without creating new pairs. It is well known that cyclic list structures can be created by impure programs, but not by pure ones. In this sense, impure Lisp is "more powerful" than
pure Lisp. If the inputs and outputs of programs are restricted to be sequences of atomic symbols, however, this difference in computability disappears. We shall show that if the temporal sequence of
input and output operations must be maintained (that is, if computations must be "online"), then a difference in complexity remains: for a pure program to do what an impure program does in n steps,
O(n log n) steps are sufficient, and in some cases Ω(n log n) steps are necessary. * This research was partially supported by an NSERC Operating Grant. 1. Introduction The programming la...
- Science of Computer Programming, 1995
"... A "Pointer Machine" is many things. Authors who consider referring to this term are invited to read the following note first. 1 Introduction In a 1992 paper by Galil and the author we referred
to a "pointer machine " model of computation. A subsequent survey of related literature has produced over ..."
Cited by 16 (1 self)
A "Pointer Machine" is many things. Authors who consider referring to this term are invited to read the following note first. 1 Introduction In a 1992 paper by Galil and the author we referred to a
"pointer machine " model of computation. A subsequent survey of related literature has produced over twenty references to papers having to do with "pointer machines", naturally containing a large
number of cross-references. These papers address a range of subjects that range from the model considered in the above paper to some other ones which are barely comparable. The fact that such
different notions have been discussed under the heading of "pointer machines" has produced the regrettable effect that cross references are sometimes found to be misleading. Clearly, it is easy for a
reader who does not follow a paper carefully to misinterpret its claims when a term that is so ill-defined is used. This note is an attempt to rectify the situation. We start with a survey of the
different notions...
- In: Proc. Asia-Pacific Software Engineering Conf. (APSEC '95), IEEE, Los Alamitos, Cal., 1995
"... Embedded specifications in object-oriented (OO) languages such as Eiffel and Sather are based on a rigorous approach towards validation, compatibility and reusability of sequential programs. The
underlying method of "design-by-contract" is based on Hoare logic for which concurrency extensions exist. ..."
Cited by 16 (7 self)
Embedded specifications in object-oriented (OO) languages such as Eiffel and Sather are based on a rigorous approach towards validation, compatibility and reusability of sequential programs. The
underlying method of "design-by-contract" is based on Hoare logic for which concurrency extensions exist. However concurrent OO languages are still in their infancy. They have inherently imperative
facets, such as object identity, sharing, and synchronisation, which cannot be ignored in the semantics. Any marriage of objects and concurrency requires a trade-off in a space of intertwined
qualities. This paper summarises our work on a type system, calculus and an operational model for concurrent objects in a minimal extension of the Eiffel and Sather languages (cSather). We omit
concurrency control constructs and instead use assertions as synchronisation constraints for asynchronous functions. We show that this provides a framework in which subtyping and concurrency can
coexist. 1 Introduction C...
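A loose Python analogue of an assertion doing double duty as a synchronisation constraint: instead of raising when its precondition fails, the method waits until the condition becomes true. The Buffer class is invented for illustration and is not cSather:

import threading

class Buffer:
    def __init__(self):
        self._items = []
        self._cond = threading.Condition()

    def put(self, x):
        with self._cond:
            self._items.append(x)
            self._cond.notify()

    def get(self):
        with self._cond:
            # Precondition 'buffer not empty' used as a guard: block until it holds.
            self._cond.wait_for(lambda: len(self._items) > 0)
            return self._items.pop(0)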
, 1991
"... Schwartz et al. described an optimization to implement built-in abstract types such as sets and maps with efficient data structures. Their transformation rests on the discovery of finite
universal sets, called bases, to be used for avoiding data replication and for creating aggregate data structures ..."
Cited by 15 (0 self)
Schwartz et al. described an optimization to implement built-in abstract types such as sets and maps with efficient data structures. Their transformation rests on the discovery of finite universal
sets, called bases, to be used for avoiding data replication and for creating aggregate data structures that implement associative access by simpler cursor or pointer access. The SETL implementation
used global analysis similar to classical dataflow for typings and for set inclusion and membership relationships to determine bases. However, the optimized data structures selected by this
optimization did not include a primitive linked list or array, and all optimized data structures retained some degree of hashing. Hence, this heuristic approach did not guarantee a uniform improvement
in performance over the use of default representations. The analysis was complicated by SETL's imperative style, weak typing, and low level control structures. The implemented optimizer was large
(about 20,000 line...
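A toy Python illustration of the base idea (not the SETL optimizer itself): once several sets are known to be subsets of one finite base, each set can be stored as a bit vector indexed by position in the base, so membership and set operations become word operations rather than hashed lookups. The base and element names are invented:

base = ["a", "b", "c", "d"]                 # the discovered finite universal set
index = {x: i for i, x in enumerate(base)}  # position of each element, built once

def to_bits(subset):
    # Encode a subset of the base as an integer bit vector.
    bits = 0
    for x in subset:
        bits |= 1 << index[x]
    return bits

s = to_bits({"a", "c"})
t = to_bits({"c", "d"})
union, inter = s | t, s & t                 # constant-time word operations
c_member = bool(s & (1 << index["c"]))      # membership by indexed access, no hashing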
- Journal of Computer and System Sciences , 1996
"... Manipulation of pointers in shared data structures is an important communication mechanism used in many parallel algorithms. Indeed, many fundamental algorithms do essentially nothing else. A
Parallel Pointer Machine, (or PPM ) is a parallel model having pointers as its principal data type. PPMs hav ..."
Cited by 3 (1 self)
Manipulation of pointers in shared data structures is an important communication mechanism used in many parallel algorithms. Indeed, many fundamental algorithms do essentially nothing else. A
Parallel Pointer Machine, (or PPM ) is a parallel model having pointers as its principal data type. PPMs have been characterized as PRAMs obeying two restrictions --- first, restricted arithmetic
capabilities, and second, the CROW memory access restriction (Concurrent Read, Owner Write, a commonly occurring special case of CREW). We present results concerning the relative power of PPMs (and
other arithmetically restricted PRAMs) versus CROW PRAMs having ordinary arithmetic capabilities. First, we prove lower bounds separating PPMs from CROW PRAMs. For example, any step-by-step
simulation of an n-processor CROW PRAM by a PPM requires time Ω(log log n) per step. Second, we show that this lower bound is tight --- we give such a step-by-step simulation using O(log log n)
time per step. As a coro...
- in Logic Programming: Leftness Detection in Dynamic Search Trees. In: LPAR. (2005) 79–94
"... Abstract. We present efficient Pure Pointer Machine (PPM) algorithms to test for “leftness ” in dynamic search trees and related problems. In particular, we show that the problem of testing if a
node x is in the leftmost branch of the subtree rooted in node y, in a dynamic tree that grows and shrink ..."
Cited by 1 (0 self)
Abstract. We present efficient Pure Pointer Machine (PPM) algorithms to test for “leftness ” in dynamic search trees and related problems. In particular, we show that the problem of testing if a node
x is in the leftmost branch of the subtree rooted in node y, in a dynamic tree that grows and shrinks at the leaves, can be solved on PPMs in worst-case O((lg lg n)^2) time per operation in the semidynamic case (i.e., all the operations that add leaves to the tree are performed before any other operations), where n is the number of operations that affect the structure of the tree. We also show that the problem can be solved on PPMs in amortized O((lg lg n)^2) time per operation in the fully dynamic case.
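For orientation, here is the naive pointer-walking version of the leftness test, which costs time proportional to the depth of the tree per query rather than the paper's O((lg lg n)^2) bound (the Node class is an assumed stand-in):

class Node:
    def __init__(self):
        self.children = []  # ordered list of child nodes

def in_leftmost_branch(x, y):
    # True iff x lies on the leftmost branch of the subtree rooted at y.
    node = y
    while node is not None:
        if node is x:
            return True
        node = node.children[0] if node.children else None
    return False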
- In Proc. of the 8th International Symposium on Parallel Architectures, Algorithms, and Networks (I-SPAN , 2005
"... In this paper we propose a new class of memory models, called transparent memory models, for implementing data structures so that they can be emulated in a distributed environment in a scalable,
efficient and robust way. Transparent memory models aim at combining the advantages of the pointer model ..."
Cited by 1 (1 self)
In this paper we propose a new class of memory models, called transparent memory models, for implementing data structures so that they can be emulated in a distributed environment in a scalable,
efficient and robust way. Transparent memory models aim at combining the advantages of the pointer model and the linear addressable memory model without inheriting their disadvantages. We demonstrate
the effectiveness of our approach by looking at a specific memory model, called the hypertree memory model, and by implementing a search tree in it that matches, in an amortized sense, the
performance of the best search trees in the pointer model, yet can efficiently recover from arbitrary memory faults.
"... Contents 1 Introduction 3 2 Models of computation 6 3 The Set Union Problem 9 4 The Worst--Case Time Complexity of a Single Operation 15 5 The Set Union Problem with Deunions 18 6 Split and the
Set Union Problem on Intervals 22 7 The Set Union Problem with Unlimited Backtracking 26 1 Introduction A ..."
Contents: 1 Introduction; 2 Models of computation; 3 The Set Union Problem; 4 The Worst-Case Time Complexity of a Single Operation; 5 The Set Union Problem with Deunions; 6 Split and the Set Union Problem on Intervals; 7 The Set Union Problem with Unlimited Backtracking. 1 Introduction: An equivalence relation on a finite set S is a binary relation R that is reflexive, symmetric and transitive. That is, for s, t and u in S, we have that sRs, if sRt then tRs, and if sRt and tRu then sRu. Set S is partitioned by R into equivalence classes, where each class contains all and only the elements that obey R pairwise. Many computational problems involve representing, modifying and tracking the evolution of equivalence classes.
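The set union problem surveyed here is the one solved by a union-find structure; as a point of reference, a standard textbook sketch in Python with path halving and union by rank (not taken from the survey itself):

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts in its own class
        self.rank = [0] * n

    def find(self, x):
        # Path halving: shortcut every other pointer on the way to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # union by rank keeps trees shallow
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1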
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2488425","timestamp":"2014-04-20T19:47:58Z","content_type":null,"content_length":"33410","record_id":"<urn:uuid:946345a2-bc00-495f-b168-81e93ab84f7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Program by Special Session
Joint Mathematics Meetings Program by Special Session
Current as of Wednesday, January 16, 2008 00:26:57
Program | Deadlines | Timetable | Inquiries: meet@ams.org
Joint Mathematics Meetings
San Diego, CA, January 6-9, 2008 (Sunday - Wednesday)
Meeting #1035
Associate secretaries:
Michel L Lapidus, AMS lapidus@math.ucr.edu, lapidus@mathserv.ucr.edu
James J Tattersall, MAA tat@providence.edu
AMS Special Session on Inverse Problems in Geometry
• Tuesday January 8, 2008, 8:30 a.m.-10:50 a.m.
AMS Special Session on Inverse Problems in Geometry, I
Peter A. Perry, University of Kentucky perry@ms.uky.edu
Carolyn S. Gordon, Dartmouth College
• Tuesday January 8, 2008, 1:00 p.m.-5:50 p.m.
AMS Special Session on Inverse Problems in Geometry, II
Peter A. Perry, University of Kentucky perry@ms.uky.edu
Carolyn S. Gordon, Dartmouth College
□ 1:00 p.m.
Can you hear the shape of an analytic drum: higher dimensions.
Steve Zelditch*, Johns Hopkins University
□ 1:30 p.m.
Counting nodal lines which touch the boundary of an analytic domain.
John A. Toth*, McGill University
Steve Zelditch, Johns Hopkins University
□ 2:00 p.m.
Upper and lower bounds on resonances for manifolds hyperbolic near infinity.
David Borthwick*, Emory University
□ 2:30 p.m.
Metric Degeneration and Spectral Convergence.
J M Rowlett*, University of California at Santa Barbara
□ 3:00 p.m.
Scattering with Singular Miura Potentials on the Line.
Christopher S Frayer*, University of Kentucky
□ 3:30 p.m.
□ 4:00 p.m.
A negative mass theorem for the 2-torus.
Kate Okikiolu*, University of California, San Diego
□ 4:30 p.m.
Length and eigenvalue equivalence.
D. B. McReynolds*, University of Chicago
Christopher J Leininger, University of Illinois at Urbana-Champaign
Walter D. Neumann, Barnard College, Columbia University.
Alan W. Reid, University of Texas
□ 5:00 p.m.
Semiclassical analogues of the heat invariants.
Alejandro Uribe*, University of Michigan
□ 5:30 p.m.
Equivalence of geometric quantizations of isospectral manifolds.
William D Kirwin*, Max Planck Institute for Mathematics in the Natural Sciences
|
{"url":"http://jointmathematicsmeetings.org/meetings/national/jmm/2109_program_ss9.html","timestamp":"2014-04-18T06:30:06Z","content_type":null,"content_length":"8005","record_id":"<urn:uuid:0578cb56-639e-4841-bc4a-288741b5a7cb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Flaw in proof
September 18th 2011, 09:15 AM #1
Flaw in proof
I need to find the flaw in the following proof. Can anyone help me?
Suppose A has a right inverse (say B)
AB = I
A^(T)AB = A^(T)
B = (A^(T)A)^(-1) (A^(T))
BA = (A^(T)A)^(-1) (A^(T)A) = I
Therefore B is also a left inverse of A
September 18th 2011, 10:26 AM #2
Re: Flaw in proof
That argument assumes that $A^{\textsc t}\!A$ is invertible. In the case where A and B are square matrices, you can check (by considering determinants) that $A$ and $A^{\textsc t}$ (and hence their product) are indeed invertible. But in some other situations, for example if A and B are linear operators on an infinite-dimensional space, the result fails, and you can have operators which have a right inverse but not a left inverse.
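The same flaw can also be exhibited concretely with non-square matrices, where $A^{\textsc t}\!A$ can be singular; a quick NumPy check (illustrative, not from the thread):

import numpy as np

A = np.array([[1.0, 0.0]])       # 1x2 matrix: has a right inverse
B = np.array([[1.0], [0.0]])     # 2x1 matrix

print(A @ B)                     # [[1.]]              -- AB = I (1x1)
print(B @ A)                     # [[1. 0.] [0. 0.]]   -- BA != I (2x2)
print(np.linalg.det(A.T @ A))    # 0.0 -- A^T A is singular, so the inversion step fails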
|
{"url":"http://mathhelpforum.com/advanced-algebra/188264-flaw-proof.html","timestamp":"2014-04-18T14:28:39Z","content_type":null,"content_length":"34347","record_id":"<urn:uuid:b6eda8e7-ebfa-4385-a86e-8d7b9264d064>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 4
#1.) If the force of gravity on a human is 539 N, determine the mass of the human. #2.) Draw a free body diagram for the following situation: a.) A 0.15 kg running shoe is pushed to the right, with a constant velocity, across a wet concrete floor with an applied force of 3.5 N....
#1.) A 3.0 kg computer printer is pushed across a desk at a rate of 0.6 m/s² (towards the right). Determine the force applied to the printer. #2.) If a 0.9 kg apple falls from a tree and hits the ground with a force of 8.82 N, calculate the acceleration of the apple. #3.) Calcul...
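A quick check of the arithmetic these questions call for, as a Python sketch assuming g = 9.8 m/s² (only the F = mg and F = ma relations implied by the posts):

g = 9.8                      # m/s^2, assumed value of gravitational acceleration

mass_human = 539 / g         # F = mg  ->  m = F/g = 55.0 kg
force_printer = 3.0 * 0.6    # F = ma  ->  1.8 N
accel_apple = 8.82 / 0.9     # a = F/m ->  9.8 m/s^2

print(mass_human, force_printer, accel_apple)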
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=AMANDAAA","timestamp":"2014-04-16T11:17:39Z","content_type":null,"content_length":"7159","record_id":"<urn:uuid:67f36f00-bfe2-4e62-bc81-3af2be3240de>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability of NPV Help for Risk Analysis and Uncertainty - Transtutors
Probability of NPV
Assume that a project's NPV has a mean of Rs 40 and a standard deviation of Rs 20. The management wants to determine the probability of the NPV falling within the following ranges:
(i) Zero or less,
(ii) Greater than zero,
(iii) Between the range of Rs 25 and Rs 45,
(iv) Between the range of Rs 15 and Rs 30.
(i) Zero or less: The first step is to determine the difference between the outcome under consideration, X, and the expected net present value, X̄. The second step is to standardize this difference by the standard deviation, σ, of the possible net present values. The resulting quotient is then looked up in a statistical table of areas under the normal curve. Such a table (Table Z) is given at the end of the book; it contains values of the standard normal distribution function. Z is the value obtained through the first two steps, that is:
Z = (X - X̄)/σ = (0 - 40)/20 = -2
[Figure: normal curve centred on the expected NPV of 40, with the value 0 marked two standard deviations to the left.]
The figure of -2 indicates that an NPV of 0 lies 2 standard deviations to the left of the expected value of the probability distribution of possible NPVs. Table Z indicates that the probability of a value falling between 0 and 40 (that is, between Z = -2 and Z = 0) is 0.4772. Since the area under each half of the normal curve equals 0.5, the probability of the NPV being zero or less is 0.5 - 0.4772 = 0.0228. In other words, there is a 2.28 per cent probability that the NPV of the project will be zero or less.
(ii) Greater than zero: The probability of the NPV being greater than zero is the complement, 100 per cent - 2.28 per cent = 97.72 per cent.
(iii) Between the range of Rs 25 and Rs 45: The first step is to calculate the value of Z for the two sub-ranges: (a) between Rs 25 and Rs 40, and (b) between Rs 40 and Rs 45. The second and last step is to sum the probabilities obtained for these values of Z:
Z1 = (25 - 40)/20 = -0.75 and Z2 = (45 - 40)/20 = 0.25
The areas as per Table Z for the respective values -0.75 and 0.25 are 0.2734 and 0.0987. Summing up, we have 0.3721; in other words, there is a 37.21 per cent probability of the NPV being within the range of Rs 25 and Rs 45. (The negative sign of a Z value does not affect how Table Z is consulted; it simply indicates that the value lies to the left of the mean.)
(iv) Between the range of Rs 15 and Rs 30:
Z1 = (15 - 40)/20 = -1.25 and Z2 = (30 - 40)/20 = -0.50
According to Table Z, the areas for the respective values -1.25 and -0.50 are 0.3944 and 0.1915. The probability of a value between Rs 15 and Rs 40 is therefore 39.44 per cent, while the probability of a value between Rs 30 and Rs 40 is 19.15 per cent. Hence the probability of a value between Rs 15 and Rs 30 is 39.44 per cent - 19.15 per cent = 20.29 per cent.
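These four probabilities can also be verified directly from the normal CDF instead of Table Z; here is a short Python sketch using scipy.stats with the example's mean of 40 and standard deviation of 20 (a cross-check, not part of the original solution):

from scipy.stats import norm

npv = norm(loc=40, scale=20)             # NPV ~ Normal(mean 40, sd 20)

p_zero_or_less = npv.cdf(0)              # ~0.0228
p_positive = 1 - npv.cdf(0)              # ~0.9772
p_25_to_45 = npv.cdf(45) - npv.cdf(25)   # ~0.3721
p_15_to_30 = npv.cdf(30) - npv.cdf(15)   # ~0.2029

print(p_zero_or_less, p_positive, p_25_to_45, p_15_to_30)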
Email Based, Online Homework Assignment Help In Probability of NPV
Transtutors is the best place to get answers to all your doubts regarding the probability of NPV. Transtutors has a vast panel of experienced financial management tutors who can explain the different concepts to you effectively. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers related to the probability of NPV.
|
{"url":"http://www.transtutors.com/homework-help/financial-management/risk-analysis-and-uncertainty-/probability-of-nvp.aspx","timestamp":"2014-04-19T06:53:13Z","content_type":null,"content_length":"93286","record_id":"<urn:uuid:a9ec2243-abe8-427c-aaaf-4abb02ce849c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|