Math Forum Discussions - Re: Uncountable Diagonal Problem
Date: Jan 1, 2013 5:57 PM
Author: Virgil
Subject: Re: Uncountable Diagonal Problem
In article
"Ross A. Finlayson" <ross.finlayson@gmail.com> wrote:
> The transfinite course-of-passage in well-ordering the reals sees a
> diminishing interval. Do the endpoints of the interval meet, in the
> well-ordering? A critical point of Cantor's first is that the
> intersection is non-empty.
It is a well known property of the real number line, at least among
mathematicians, that a nested sequence of closed intervals has a non-empty
intersection. Does Ross claim otherwise?
[FOM] On Godel's Enigmatic Footnote 48a
Alasdair Urquhart urquhart at cs.toronto.edu
Wed Sep 1 14:14:30 EDT 2004
Jeffrey Ketland's conclusion that Goedel already knew in 1931 a lot
of the content of Tarski's famous truth paper of 1936 is,
I think, basically correct. One source for this is his
recollections as given to Hao Wang in 1976 ("Some Facts about Kurt
Goedel", JSL Vol. 46, 653- 659). The crucial passage here is on page
654. Goedel says that he discovered the incompleteness theorem as
follows. He set himself the task of proving analysis consistent, assuming
all of elementary number theory.
"He represented real numbers by formulas (or sentences) of number
theory and found he had to use the concept of truth for sentences
in number theory in order to verify the comprehension axiom for analysis.
He quickly ran into the paradoxes (in particular, the Liar and Richard's)
connected with truth and definability. He realized that truth in number
theory cannot be defined in number theory and therefore his plan
of proving the relative consistency of analysis did not work. He went on to
draw the conclusion that in suitably strong systems ... there are undecidable
propositions." (654)
It is interesting to notice here that Goedel started from the idea that the
notion of arithmetical truth is solid and reliable, but when he came to
publish the results, he replaced the notion of truth as far as possible by
finitary notions (hence the rather artificial notion of "omega consistency"
in the published paper).
It's often overlooked that there are really two papers on truth by Tarski.
The first was delivered in Warsaw, 21 March 1931. The second is that
published in German with a Postscript reflecting Tarski's reaction to
Goedel's incompleteness paper. The first paper says:
1. There is no exact definition of the notion of truth in ordinary
(semantically closed) languages, because of the Liar paradox;
2. In a simple type hierarchy truth for order n is definable in order n+1;
3. There is NO definition of truth for the whole typed language (because
transfinite types are not allowed).
The Postscript, reacting to Goedel, says very different things -- and this is
what most people remember as Tarski's results on truth. One of the things
that Tarski says is that he was too much under the spell of the theory of
semantical categories, and hence only considered languages of finite order.
It's only in the Postscript that he considers languages of transfinite order;
he explicitly refers to Goedel's famous footnote 48a in this connection.
Goedel was always very generous in giving credit to Tarski. As the recently
published volumes of his correspondence show, there was an extremely warm
friendship between the two logicians.
More information about the FOM mailing list
Mesquite, TX Precalculus Tutor
Find a Mesquite, TX Precalculus Tutor
...As far as the tutoring space goes, I'll hold sessions in public libraries or any other place you prefer, including your home. I don't mind traveling to a place where you feel comfortable. I
look forward to hearing from you and helping you accomplish your study goals. Currently, I am working on my master's in Chemistry at Texas Woman's University.
19 Subjects: including precalculus, chemistry, physics, geometry
...I have taken courses in pre-algebra, algebra I and II, Matrix Algebra, Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, and Differential Equations. I was a tutor in
college for students who needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years, where math was important to performing my job.
11 Subjects: including precalculus, geometry, algebra 1, algebra 2
...SCHEDULE TODAY! MATH, SCIENCE, WRITING ARE PRIORITIES! College-level Writing, Algebra, Geometry, Trigonometry, Precalculus, Calculus, Probability, Discrete Math, Chemistry, Physics, Anatomy and
Physiology, Extracurricular Activities like Publishing, Music-Piano and Vocals, French, Drafting, Programming, Chess, Basketball, Karate, etc.
39 Subjects: including precalculus, reading, writing, English
...My goal is for each student to achieve their best score on these tests to improve their chances of being admitted to the college of their choice, and increase their chances to earn college
scholarships. I live in the Richardson area and I will meet you at your home or a local library. Sign up today and begin increasing your confidence and test scores!
11 Subjects: including precalculus, algebra 1, algebra 2, SAT math
...My laboratory computer was an HP running HP-UX. After the SSC was terminated, I started at Texas Instruments as a member of a UNIX 24/7 support group. I was part of the first team to receive
SUN UNIX Team Certification (The program has since been discontinued). We supported the manufacturing facilities and were trained on both hardware and software troubleshooting of SUN computers.
25 Subjects: including precalculus, chemistry, calculus, physics
What Every Programmer Should Know About Floating-Point Arithmetic
from the gaining-understanding-bit-by-bit dept.
-brazil- writes
"Every programmer forum gets a steady stream of novice questions about numbers not 'adding up.' Apart from repetitive explanations, SOP is to link to a paper by David Goldberg which, while very
thorough, is not very accessible for novices. To alleviate this, I wrote The Floating-Point Guide, as a floating-point equivalent to Joel Spolsky's excellent introduction to Unicode. In doing so, I
learned quite a few things about the intricacies of the IEEE 754 standard, and just how difficult it is to compare floating-point numbers using an epsilon. If you find any errors or omissions, you
can suggest corrections."
• Only scratching the surface (Score:5, Interesting)
by ameline (771895) <<moc.liamg> <ta> <enilema.nai>> on Sunday May 02, 2010 @11:45AM (#32064214) Homepage Journal
You really need to talk about associativity (and the lack of it), i.e. a+b+c != c+b+a, and the problems this can cause when vectorizing or otherwise parallelizing code with fp.
And any talk about fp is incomplete without touching on catastrophic cancellation.
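Both points above are easy to demonstrate in a few lines of Python (a minimal sketch; the specific constants are just illustrative):

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c   # a and b cancel exactly, then c is added: 1.0
right = a + (b + c)  # c is absorbed by b's magnitude first: 0.0
print(left, right)   # 1.0 0.0

# Catastrophic cancellation: subtracting nearly equal numbers
# leaves mostly rounding error behind.
x = 1.0 + 1e-15
bad = (x - 1.0) / 1e-15
print(bad)           # noticeably different from 1.0
```

This is exactly why parallel reductions (which regroup the sum) can give different answers than a serial loop.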
• If you want accuracy... (Score:3, Interesting)
by Nutria (679911) on Sunday May 02, 2010 @11:57AM (#32064276)
use BCD math. With h/w support it's fast enough...
Why don't any languages except COBOL and PL/I use it?
• I'd just avoid it (Score:5, Interesting)
by Chemisor (97276) on Sunday May 02, 2010 @12:13PM (#32064396)
Given the great complexity of dealing with floating point numbers properly, my first instinct, and my advice to anybody not already an expert on the subject, is to avoid them at all cost. Many
algorithms can be redone in integers, similarly to Bresenham, and work without rounding errors at all. It's true that with SSE, floating point can sometimes be faster, but anyone who doesn't know
what he's doing is vastly better off without it. At the very least, find a more experienced coworker and have him explain it to you before you shoot your foot off.
• No, base 10 arithmetic isn't "more accurate". (Score:4, Interesting)
by Animats (122034) on Sunday May 02, 2010 @12:19PM (#32064454) Homepage
The article gives the impression that base 10 arithmetic is somehow "more accurate". It's not. You still get errors for, say, 1/3 + 1/3 + 1/3. It's just that the errors are different.
Rational arithmetic, where you carry along a numerator and denominator, is accurate for addition, subtraction, multiplication, and division. But the numerator and denominator tend to get very
large, even if you use GCD to remove common factors from both.
It's worth noting that, while IEEE floating point has an 80-bit format, PowerPCs, IBM mainframes, Cell processors, and VAXen do not. All machines compliant with the IEEE floating point standard
should get the same answers. The others won't. This is a big enough issue that, when the Macintosh went from Motorola 68xxx CPUs to PowerPC CPUs, most of the engineering applications were not
converted. Getting a different answer from the old version was unacceptable.
• Re:If you want accuracy... (Score:4, Interesting)
by TheRaven64 (641858) on Sunday May 02, 2010 @12:30PM (#32064544) Journal
also it would absolutely be very slow
Depends on the architecture. IBM's most recent POWER and System-Z chips have hardware for BCD arithmetics.
• Re:Analog Computers (Score:2, Interesting)
by cupantae (1304123) <maroneill AT gmail DOT com> on Sunday May 02, 2010 @12:48PM (#32064678)
Irrational numbers are not as much of a problem as rational numbers that can't be represented in the base used.
Let's say our computer has 6-digit decimal precision. If you add two irrational numbers, say pi and e, you'll get 5.85987. It's imprecise, but the imprecision is unavoidable, since the exact value
can't be represented in any base.
But if you add 3/7 and 2/3 you get 1.09524, which is imprecise even though a precise answer does exist.
• Hard to debug floating point when it goes wrong! (Score:5, Interesting)
by Cliff Stoll (242915) on Sunday May 02, 2010 @01:19PM (#32064930) Homepage
Over at Evans Hall at UC/Berkeley, stroll down the 8th floor hallway. On the wall, you'll find an envelope filled with flyers titled, "Why is Floating-Point Computation so Hard to Debug when it
Goes Wrong?"
It's Prof. Kahan's challenge to the passerby - figure out what's wrong with a trivial program. His program is just 8 lines long, has no adds, subtracts, or divisions. There's no cancellation or
giant intermediate results.
But Kahan's malignant code computes the absolute value of a number incorrectly on almost every computer with less than 39 significant digits.
Between seminars, I picked up a copy, and had a fascinating time working through his example. (Hint: Watch for radioactive roundoff errors near singularities!)
Moral: When things go wrong with floating point computation, it's surprisingly difficult to figure out what happened. And assigning error-bars and roundoff estimates is really challenging!
Try it yourself at:
http://www.cs.berkeley.edu/~wkahan/WrongR.pdf [berkeley.edu]
• Re:Hard to debug floating point when it goes wrong (Score:2, Interesting)
by Lord Efnar (30962) on Sunday May 02, 2010 @02:46PM (#32065604) Homepage
That is neat, but some (math oriented) languages do just fine:
Some Mathematica code:
h[x_] := Module[{y, w, k},
  y = Abs[x];
  For[k = 1, k <= 128, k++, y = Sqrt[y]];
  w = y;
  For[k = 1, k <= 128, k++, w = w^2];
  w]
Plot[Evaluate[h[x]], {x, 0, 2}]
The result: http://www.untruth.org/~josh/real-rounding.png [untruth.org]
• Only in a perfect world, what about MS Access? (Score:1, Interesting)
by Anonymous Coward on Sunday May 02, 2010 @03:33PM (#32065928)
And I've seen the addition of two money columns defined in Access get magical values. I'm sure somebody here can explain the situation here better than I can, but I've seen $1.00+.50 become
$1.49. But in MS Access's defense, a float is a poor way to define money especially in MS Access. I was just hired to bandaid the broken solution.
• Re:Analog Computers (Score:4, Interesting)
by RAMMS+EIN (578166) on Sunday May 02, 2010 @03:55PM (#32066068) Homepage Journal
``Nobody would expect someone to write down 1/3 as a decimal number, but because people keep forgetting that computers use binary floating point numbers, they do expect them not to make rounding
errors with numbers like 0.2.''
A problem which is exacerbated by the fact that many popular programming languages use (base 10) decimal syntax for (base 2) floating point literals. Which, first of all, puts people on the wrong
foot (you would think that if "0.2" is a valid float literal, it could be represented accurately as a float), and, secondly, makes it impossible to write literals for certain values that _could_
actually be represented exactly as a float.
• Thanks to Sun (Score:5, Interesting)
by khb (266593) on Sunday May 02, 2010 @04:04PM (#32066118)
Note that the cited paper location is docs.sun.com; this version of the article has corrections and improvements over the original ACM paper. Sun has provided this to interested parties for 20-odd
years (I have no idea what they paid ACM for rights to distribute).
http://www.netlib.org/fdlibm/ [netlib.org] is the Sun provided freely distributable libm that follows (in a roundabout way) from the paper.
I don't recall if K.C. Ng's terrific "infinite pi" code is included (it was in Sun's libm) which takes care of intel hw by doing the range reduction with enough bits for the particular argument
to be nearly equivalent to infinite arithmetic.
Sun's floating point group did much to advance the state of the art in deployed and deployable computer arithmetic.
Kudos to the group (one hopes that Oracle will treat them with the respect they deserve)
• Re:#1 Floating Point Rule (Score:4, Interesting)
by gnasher719 (869701) on Sunday May 02, 2010 @05:11PM (#32066478)
Repeatability. If your code and language are standard-compliant, then you'll get the same floating-point math results as someone using another compliant language on any other platform. Not
crucial for some tasks, but it certainly is for others, such as scientific work.
Wouldn't it be great if you could change a switch in your computer to change all double precision fp from 53 bit mantissa to 52 bit, and if your results are suddenly radically different then you
know your first set of results couldn't be trusted?
Repeatability is highly overrated. It's no good if you get the wrong results, and a different computer system gets you identical wrong results.
• Re:Simple, effective and useful (Score:3, Interesting)
by JWSmythe (446288) <jwsmythe@@@jwsmythe...com> on Sunday May 02, 2010 @06:49PM (#32067064) Homepage Journal
That's what I was thinking too. But hey, what do I know, I just work computers, I'm not a mathematician. :)
The way some folks do it,
0.1 + 02 = 0 + 2
0 + 2 = 2
There was a thread on here a few weeks ago, where I explained it in the calculation of payroll. If you're calculating fractional hours, then those decimals come in handy.
1 minute = 0.0166666666666667 hours.
Depending on how many decimal points you make it, it can really mess with your pay.
0.01 * 60 = 0.6
0.02 * 60 = 1.2
0.0166 * 60 = 0.996
0.0167 * 60 = 1.002
For hourly folks, check your paychecks. I'd bet the company is using the most advantageous rounding for their profit rather than for accuracy.
I was recently told about something where one interval = 0.0083333 (1/120), and that it should always be simply cut off (not rounded) at 1 decimal place. I tried to explain that that would make
the numbers totally wrong.
1 interval = 0.0083333, cut off to 0.0
10 intervals = 0.0833330, cut off to 0.0
10 instances of 10 would then total 0.0, rather than 0.8. They wanted "absolute" accuracy over thousands of instances, but still insisted chopping it off at one decimal place was the way they
wanted it. *sigh*
I do understand why floating point numbers can induce errors, but is it necessary to make it worse by adding in sloppy math?
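The payroll drift described above is easy to reproduce. This sketch uses hypothetical figures (a $15/hour rate over a 2400-minute week) purely for illustration:

```python
# One minute is 1/60 hour. Compare rounding strategies over a
# 40-hour (2400-minute) week at a hypothetical $15/hour.
RATE = 15.00
minutes = 2400

exact = minutes / 60 * RATE          # 600.00

# Truncate the per-minute fraction to 2 decimal places:
per_minute = int((1 / 60) * 100) / 100   # 0.01 hours per minute
truncated = minutes * per_minute * RATE  # 360.00 -- 40% short!

# Round to 4 decimal places instead:
per_minute4 = round(1 / 60, 4)           # 0.0167
rounded = minutes * per_minute4 * RATE   # ~601.20 -- slightly over
print(exact, truncated, rounded)
```

The truncated figure shows why "just cut it off" is not a harmless simplification.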
The Golden Ratio/Fibonacci Series or Charting the Populations of Rabbits(!)
ARTICLES & REVIEWS
The Golden Ratio/Fibonacci Series
or Charting the Populations of Rabbits(!)
Excerpts from The Message Boards
Assembled by Garrett Lambert
Louis McGrath asked, "having never seen a writeup on the Golden Rule for Dummies, I have a question that I can't answer on my own. Given a vertical measurement of 89 7/8", divided into three
different lengths, longest on the bottom, shortest at the top, what are the three lengths that fit into the golden rule? An explanation of how you came to your answer would be appreciated."
A number of very useful replies were posted—copied below—but none provided any background on the origin of the Rule. My curiosity aroused, I did a search and came across an interesting website that
introduced the Rule's originator, a 12th-century mathematician named Leonardo Fibonacci. Because I think others at WoodCentral will be equally surprised—and impressed—I contacted Karen Sadler at the
University of Arkansas, who graciously gave permission for me to use the information, which I have re-cast as follows:
Leonardo Fibonacci was born in Pisa, Italy, around 1175 and died there some time after 1240. His father was Guilielmo Bonacci, a secretary of the Republic of Pisa and also a customs
officer for the North African city of Bugia. Some time after 1192, Bonacci brought his son with him to Bugia. Guilielmo wanted Leonardo to become a merchant, and so arranged for his instruction in
calculational techniques, especially those involving the Hindu–Arabic numerals, which had not yet been introduced into Europe. Since Fibonacci was the son of a merchant, he was able to travel freely
all over the Byzantine Empire. This allowed him to visit many of the area's centers of trade, where he learned both the mathematics of the scholars and the calculating schemes in popular
use at the time. He returned to Pisa around 1200 to become the greatest European mathematician of the Middle Ages. He was the first to introduce the Hindu–Arabic number system into Europe, and in
1202 completed a book on how to do arithmetic in the decimal system (titled Liber abaci). It describes the rules we all now learn in elementary school for adding, subtracting, multiplying and dividing.
Leonardo's mathematical interests and abilities encompassed the practical as well as the theoretical. A problem in the third section of Liber abaci led to the introduction of the Fibonacci numbers: "A
certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets
a new pair which from the second month on becomes productive?"
By charting the populations of rabbits Fibonacci discovered a number series from which one can derive the Golden Section. Here's the beginning of the sequence:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, …
Each number is the sum of the two preceding numbers:
1 = 1 + 0
2 = 1 + 1
3 = 2 + 1
5 = 3 + 2
8 = 5 + 3
13 = 8 + 5
21 = 13 + 8
34 = 21 + 13
55 = 34 + 21
and it continues to grow exponentially.
French mathematician Edouard Lucas (1842–1891) gave the name Fibonacci numbers to this series and found many other important applications for them. The Fibonacci Society was founded in 1962, and a
journal, The Fibonacci Quarterly, first appeared in 1963, dedicated to unraveling its secrets. There were a lot of secrets to be found.
The Fibonacci sequence isn't special just because it is an exponential sequence. The Fibonacci sequence shows up in all sorts of weird places…and it gives us the Golden Ratio. As you go farther and
farther to the right in this sequence, the ratio of a term to the one before it converges to the Golden Ratio (which means it will get closer and closer to the Golden Ratio).
The Golden Ratio is a special number approximately equal to 1.6180339887498948482. We use the Greek letter Φ (Phi, pronounced fee) to refer to this ratio. Like π (Pi), the digits of the Golden Ratio
go on forever without repeating. It is often better to use its exact definition: Φ = (1 + √5)/2.
The Golden Rectangle also shows the Golden Ratio. A Golden Rectangle is a rectangle with proportions that are two consecutive numbers from the Fibonacci sequence. The example at right, 1, 2, is the
most common, but Golden Rectangles are all over the place—take 3 x 5 index cards, as another example.
In the diagram at left, ACDF is a golden rectangle, ABEF is a square, and BCDE is a golden rectangle. The ratio of the length of the longest side to the length of the shortest side of a golden
rectangle is the Golden Ratio.
The Golden Ratio can be mathematically derived from this relationship by the proportion shown at the right.
If we solve the equation appearing above for x, we'll find that it is the value x = (1 + √5)/2, which is the Golden Ratio! If you have a Golden Rectangle and you cut a square off it, the remaining
rectangle will also be a Golden Rectangle. You can keep cutting these squares off and getting smaller and smaller Golden Rectangles.
The Golden Spiral
The Golden Ratio may also be seen in a spiral. A spiral can be drawn from the diagram above by dividing BCDE into a square and another golden rectangle, and by continuing to separate the golden
rectangles into squares and new golden rectangles. The first part of the spiral is drawn with a compass point at B and an arc from A to E; next, the compass point at G and an arc from E to H.
Continue in this way until the spiral is completed to your satisfaction. The spiral appears in nature in shells, flower petals, and pine cones.
The Golden Spiral on the left was created by making adjacent squares of Fibonacci dimensions. An arc is then made across each square. This spiral is called equiangular: for each quarter turn
(90° or π/2), the spiral increases by a factor of Φ. That is, if you take one point, and then a second point one-quarter of a turn away from it, the second point is Φ times farther from the center
than the first point. All equiangular, or logarithmic, spirals grow by a constant factor per turn; for the golden spiral that factor is Φ per quarter turn.
When the golden spiral is graphed on a polar axis, it looks like the image to the right.
The coordinates of a point on the spiral, such as A, can be written in polar form as (r, θ). Points can also be written in terms of Φ: after n quarter turns, r = Φ^n with θ = nπ/2.
If you combine these two equations, you end up with r = Φ^(2θ/π), the polar equation of the golden spiral.
And now for the helpful responses to Louis' original question:
Dan Donaldson: You can play with these just a bit to make them fit with fractions, but here is one way to do it. I took the Fibonacci series as 1, 1, 2, 3, 5, 8, 13, 21, 34. You could go higher, but
I don't think it will make much difference in accuracy. Then I added up the last three numbers—13, 21, and 34—to get 68. I took the ratio of 89 7/8 (rounded to 90) to 68 and got 1.32. I multiplied
the last three numbers by 1.32 and got 17.2, 27.8, and 45 as the three distances.
Hoa Dinh: I got 17 3/16", 27 3/4", and 44 15/16". Call them a, b, c, from shortest to longest. The Golden Ratio gives: c = 1.618 × b and b = 1.618 × a. Thus a + b + c = a + (1.618 × a) + (1.618 ×
1.618 × a) = 5.236 × a = 89 7/8. Solve for a, b, and c.
Sam Simpson: A simple rule of thumb that will always get you very close is to remember the Greek rule of 2–3–5. With a brick having sides that are dimensioned 2, 3, 5, which create faces of 2×3,
2×5 & 3×5, you can stack any number of them in any combination to build anything.
If you take each of these numbers 2, 3, 5 and add them together, you get 10. Divide your height (let's round it to 90 for simplicity's sake) by ten and you get a factor of nine. 2×9=18, 3×9=27,
5×9=45. Thus, 18, 27, 45. See how close those are to Dan's numbers?
Dan Donaldson: Actually, that is exactly what I did; the only thing is that Sam used numbers that are a lot easier to work with. 2, 3, and 5 are three of the Fibonacci numbers. I just used slightly
bigger ones to do the same thing. Technically, the accuracy will improve as you use bigger numbers, but the difference is negligible, and Sam's numbers are easier to work with.
Warning! If you are not a math geek, you probably do not want to read any further.
Φ, the ratio that the Fibonacci series eventually converges to, is 1.618033989 rounded to nine decimal places. 5/3 gives 1.666666667, which is about 3% high (103.0056648%). The 34/21
division gives 1.619047619, which is about 6/100ths of a percent high (100.0626458%). This tells you that the series converges rather rapidly.
In practice, the 2–3–5 is a lot easier to remember, and is accurate enough for most any work, as you will have to play with any dimensions anyway to get something that you can measure.
. . . assembled by Garrett Lambert
with excerpts from The Message Boards
© 2005 by Garrett Lambert. All rights reserved.
No parts of this article may be reproduced in any form or by any means
without the written permission of
the publisher
and the author.
Optimization in economic question
October 13th 2012, 01:13 PM
Optimization in economic question
A firm that produces a single product faces the demand function q = 10,000 - 100P, where q is the quantity demanded and P is the firm's price. Its total cost function is TC(q) = 200 + 20q + q^2/300.
The questions are: What is the profit-maximising level of production for this firm? Answer: q = 3000.
What is the firm's maximum profit? Answer: 119,800.
Calculate (discrete) marginal revenue and (discrete) marginal cost at an output of 2000. MR = 60, MC = 33.3.
The correct answers are given after each question, but I am more interested in an explanation of how they were obtained. With thanks.
October 13th 2012, 05:14 PM
Re: Optimization in economic question
Hey SAM123.
What is this total cost function? How do you identify profit in this model?
October 14th 2012, 04:21 AM
Re: Optimization in economic question
Total Cost function : TC(q)= 200+20q+q2/300.
Write a formula for profit (π) in terms of q.
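As a sketch of the full calculation (assuming the demand function is q = 10,000 - 100P, which reproduces all three quoted answers):

```python
# Demand q = 10000 - 100*P implies inverse demand P = 100 - q/100.
def R(q):   # revenue = price * quantity
    return (100 - q / 100) * q

def TC(q):  # total cost, as given in the question
    return 200 + 20 * q + q ** 2 / 300

def profit(q):
    return R(q) - TC(q)

# profit(q) = 80q - (4/300)q^2 - 200, so profit'(q) = 80 - (8/300)q = 0
q_star = 80 * 300 / 8
print(q_star)             # 3000.0
print(profit(q_star))     # 119800.0

# Discrete marginal revenue and marginal cost at q = 2000:
print(R(2001) - R(2000))    # ~59.99  (MR ≈ 60)
print(TC(2001) - TC(2000))  # ~33.34  (MC ≈ 33.3)
```

Setting the derivative of profit to zero is the continuous version of the same first-order condition; the discrete marginals are just one-unit differences.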
Math and Hacking/Computers
So I was reading one of the previous posts about it being a misconception that math has much to do with hacking and computers in general, so I'm going to counteract that misconception a little.
Basically anything done in your computer is done with strings of 1's and 0's (binary); I'm sure most of you already know that little tidbit. I'm going to match up a few areas of math
with some topics you all are interested in.
Encryption - this field uses algebra, finite field theory, elliptic curve theory, linear algebra, chaos theory, calculus, and we can keep going but you get the idea.
Coding theory - this covers the transmission of information over the web - it uses linear algebra, modular arithmetic, Boolean algebra, binary conversions, fast Fourier transforms, Galois field
theory, and again the list goes on and on.
I don't know of too many pure programming languages that don't use the theories of recursive definitions, discrete structures, arrays, multidimensional space, and linear algebra just to name a few.
Networking - Topology is a field of mathematics used extensively in networking, studying connectedness and things like that. (I know a bit about topology, but I only know that it's used in
networking, not how.)
Probability theory is used a lot more in hacking than you would ever realize, for example probability is used to determine how strong or weak an encryption algorithm is, or a password is.
I could go on all day, but you get the idea.
Re: Math and Hacking/Computers
Sorry, but you're over-analyzing. You don't need to know calculus to encrypt something. Binary is easy math. Algebra is easy math. Nearly everything you named is either algebra, a variation thereof,
or common sense. Recursion isn't rocket science. Arrays aren't either. Linear algebra? Sorry, you're trying too hard.
Even toddlers know the number 1, and usually understand the concept of 0. Thus, binary isn't hard math.
Finite Theory - the idea that stuff that isn't infinite isn't infinite. No duh. Galois is very similar.
The very concept of chaos theory is just that tiny things can snowball to create bigger effects. (Such as the movement of a mouse being used to help encrypt a file securely). That's not difficult
math. Now, if you try to calculate the effect of a pebble dropping causing humidity to rise, that's hard. But that still doesn't require high-level mathematics.
Boolean algebra is just true or false. It's not a science. Toddlers understand that one too. See the 1's and 0's argument.
Binary conversion isn't hard either. It's just paying attention to which switches are on.
Probability is also an easy concept. Even kindergartners can calculate whether it'd be easy to get away with something or not.
Topology - You said yourself you don't understand the functionality behind it.
Basically what you are saying is that algebra is used a lot. Yes, it is. It's not hard math though.
Re: Math and Hacking/Computers
Heh. If it's so easy then why do we have so many buffer overflow vulns in software?
Encryption certainly is not just simple math at all.
Jgrimm is right that there are strong ties between computer science and mathematics. Most decent university courses for computer science require qualifications in maths, often with higher priority
than other computing qualifications.
You don't need to know calculus to encrypt something.
If you're talking about using built-in functions, then no - you don't. But encryption is not just A=5,B=23, etc. Read up about keypair generator functions for proof of that one.
You mention probability being an easy concept. Sure, it is. But advanced probability is still presumably very hard.
If this link works, can you do this easy enough?
Maths is absolutely crucial to becoming decent in many areas of computing.
Re: Math and Hacking/Computers
Sanddbox, I have to admit that I haven't had a good laugh like that in a while. So let's get started. The first post was off the top of my head; we'll go into a little more detail here.
1. You're right I can encrypt something using a simple shift cypher and it can be broken in an exhaustive attack by a modern computer in about 30 seconds. If that's what you're aiming for rock on.
There exists a nasty attack called a square root attack that states for many trapdoor encryption functions the amount of work to break it is equal to the square root of the cardinality of the group
you're working with, where the group is the basis for your encryption function. These are simple encryption functions mind you, nothing too crazy, public key exchange systems. So we have a minimum
size constraint, enter elliptic curve cryptography, this type of cryptography uses elliptic curve groups as a basis for an encryption system. Elliptic curves allow a much smaller sized key to be
used, but the thing is describe an elliptic curve group. Such a group is the set of all integer solutions to a given polynomial in n dimensional space. The size of such a group is very very hard to
quantify, if you can do so I would suggest you do a PhD thesis on the subject and write your own ticket in the mathematics world. Regardless this type of thing is actually a mix of algebra ( hence
group ) geometry ( hence surface in n dimensional space ) and calculus ( the polynomial has to have certain properties to be used, such as differentiable in certain circumstances ).
2. Any time you use a matrix to solve anything you are using linear algebra, and an array is a matrix; likewise, any time you solve a system of linear equations in multivariate space you are using linear algebra. So I guess the sad thing is that if you are doing these things, you are using it without even realizing it.
3. Recursion isn't rocket science, but it is a mathematical concept that many people have problems with. If you don't, congrats, but trivializing it isn't helpful to others who do have problems, and we're here to help each other, not to boost our own egos about our own intelligence... well, I'm not here to do that.
4. Binary math isn't hard when you're talking about adding, subtracting, multiplying, and dividing, I suppose. What about looking at a binary string and computing how much work it will take to verify that the string is correct, or taking an encrypted binary string and breaking it? Do you know how to break apart a data stream in binary, convert it to hex, and convert that to ASCII? If you don't know why that would be useful, look it up. I doubt a toddler can explain it to you.
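As an aside, the binary-to-hex-to-ASCII conversion described above can be sketched in a few lines of Python (the bit string here is made-up example data):

```python
# Take a raw bit stream, regroup it into bytes, render it as hex,
# then decode those bytes as ASCII text.
bits = "0100100001101001"            # 16 example bits "captured off the wire"

# Split into 8-bit chunks and convert each chunk to an integer byte value.
byte_vals = [int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)]

hex_str = "".join(f"{b:02x}" for b in byte_vals)   # -> "4869"
text = bytes(byte_vals).decode("ascii")            # -> "Hi"
print(hex_str, text)
```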
5. You misread: finite field theory, which is closely analogous to Galois theory. So let's talk about finite field theory. A practical use is BCH codes. I hate to break this to you, but they use irreducible polynomials and their roots to construct finite fields, which are then used to encode information for transmission through communications channels. The fun thing about these codes is that they can tell you whether there are errors in the information you received, without you having to open garbage in your browser; even better, they can correct those errors for you.
6. As for Galois theory being anything close to "finite stuff isn't infinite": that's funny. Galois field theory is about field extensions that include roots of polynomials that couldn't be factored in the smaller field. And that's just one aspect of many. You're oversimplifying things you obviously don't understand.
7. Don't mistake my saying I didn't know how it's used for not understanding how it's used. I didn't feel like looking it up at the time, but I do now. Network topology concerns the configuration of the terminals in a network, and connectedness is a prime concern there. The topological definition of connectedness asks whether two points can be joined by any path, much like asking whether two computers in a network can reach each other by any path. A famous problem along these lines is Euler's seven bridges of Königsberg: can a person cross each of the town's bridges exactly once? That may seem simple, but what about when there are 10,000 bridges and you want the most efficient route across each one? Sounds like a university campus network to me.
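The connectedness idea above maps straight onto graph search; a minimal breadth-first reachability check over a made-up network might look like this:

```python
from collections import deque

def connected(graph, start, goal):
    """Return True if any path links start to goal in an adjacency-list graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# A small example network: A-B-C are linked, D is isolated.
net = {"A": ["B"], "B": ["A", "C"], "C": ["B"], "D": []}
print(connected(net, "A", "C"))   # True
print(connected(net, "A", "D"))   # False
```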
And about the statement that algebra is not hard math:
Given a semigroup action of a finite semigroup G on a finite set S, where you know that for a, b in G and s in S we have as = k and bs = y, can you compare the amount of work it would take to run an exhaustive process that simply computes the value of gs for every element of G, versus the amount of work to consider the action as a subset of the symmetric group Sn (where n is the cardinality of S) and determine the possible answers by running a search through that subset?
If you can answer that, you will have my complete respect and I would be very grateful. It's nothing really important, just a side note on the work I'm doing now, but it takes someone far smarter than me to figure it out, or at least I need the help of someone smarter than I am. I would also be grateful for pointers in the right direction.
Oh, and a final note: that problem comes directly from an issue involving encryption in the general case. If you can solve it in a pertinent way, you could possibly get published, if the answer is definitive enough for the absolute general case I'm referring to.
Re: Math and Hacking/Computers
I realized that I missed a few points in the last post so I'll clear them up.
Chaos theory also includes infinite repeatability, i.e. fractals. Use the right fractal equation to generate your encryption algorithm and you know it has an inverse, but you also know that it could take years of exhaustively running values to recover your key values without knowing that inverse.
Probability - The probability of randomly guessing a key value is extremely important to encryption and computer science. If the probability of guessing the right key or password to a system is something like .9, your system sucks and you should know it. The probability of getting an error in the BCH example I used is good to know too: if the probability of more than 3 errors in a given code word is .0000000000000000000000000001, then you are simply wasting time and resources by building a code system that detects more than 4 errors and corrects more than 3; in fact, detecting that many errors is probably overkill already. Probability is also the theory behind pseudorandom number generators; since pure random number generators are a myth, you want a very low probability of reproducing the same number multiple times.
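Both probabilities mentioned above (guessing a key, and a generator repeating itself) can be sketched with the standard uniform-distribution formulas; the parameter values below are just examples:

```python
# Chance of guessing a uniformly random k-bit key in one try, and the
# birthday-style chance that n outputs of an ideal m-value generator repeat.
def guess_prob(k_bits):
    return 1 / 2**k_bits

def repeat_prob(n, m):
    """P(at least one collision among n draws from m equally likely values)."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (m - i) / m
    return 1 - p_distinct

print(guess_prob(128))          # ~2.9e-39: effectively zero
print(repeat_prob(23, 365))     # ~0.507: the classic birthday surprise
```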
Boolean algebra is the math behind logic gates and anything else that uses logic: every if/then, every while loop, every true/false test you use in actual programming (I don't know much about scripting, haven't done it much; haven't done much programming either, to be honest). It gets as complicated as you feel like making it.
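A tiny illustration of that point: the basic gates written as Boolean functions, with XOR built out of AND, OR and NOT, plus its full truth table:

```python
# Logic gates as Boolean functions, the algebra behind every branch
# and flag check; XOR is composed from AND/OR/NOT.
AND = lambda a, b: a and b
OR  = lambda a, b: a or b
NOT = lambda a: not a
XOR = lambda a, b: OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(XOR(a, b)))
```

Running it prints the XOR truth table: 0 0 0, 0 1 1, 1 0 1, 1 1 0.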
If anybody wants more information on any of these subjects I'll do my best to provide some articles
Re: Math and Hacking/Computers
Wow. i haven't seen such complete and udder ownage since... well shit...
-Special Needs
Re: Math and Hacking/Computers
Here's a paper I wrote on the subject of BCH codes. Due to different encoding between this post and the original document, some things may not display correctly: numbers to the right of an x or an alpha may be powers that lost their superscript formatting.
In Applied Algebra: Codes, Ciphers, and Discrete Algorithms by Darel W. Hardy and Carol L. Walker codes are defined as “a system of words or other symbols used to represent a given set of words or
other symbols.” The purpose of a code is to change information that may not be easily transmitted into something that is easily transmitted. Codes are different from encryption in that they are not
designed to obscure the information. Some examples of codes are ASCII and binary. ASCII is used to change the text that you input on your computer to a form that can be stored in memory and
reproduced from memory when necessary, among other things. Bar codes on products we buy at the store are an easily thought of code, all those different size lines give the cash registers the
information needed to sell you the product you want. In just these examples we can see that codes have different properties and go about their functions in different ways; I will be discussing a code
that has been designed with the property that it can detect and correct a given number of errors in information received. This type of code is naturally called an error detecting/correcting code. The
codes that I will be discussing are called BCH codes, named for Bose-Chaudhuri-Hocquenghem which are the names of the people that designed it.
BCH Codes
BCH codes are based on properties of polynomial fields. We will be generating the finite fields GF(p^n), where p is a prime and n is a natural number; these are Galois fields. The way the code works is to associate the plaintext you want to transmit with a polynomial and then encode it using another polynomial. The encoding polynomial will allow us to check and fix errors in the received information.
Theorem: Let q(x) be an irreducible polynomial over GF(p) of degree n and use q(x) to construct GF(p^n). Let α be a primitive element in GF(p^n), let m_i(x) be the minimal polynomial of α^i for i = 1, 2, …, 2t, and set g(x) = lcm[m_1(x), m_2(x), …, m_{2t}(x)]. Let deg(g(x)) = k and let a(x) represent the plaintext polynomial, of degree at most p^n − k − 2. Then the minimum weight of a nonzero codeword a(x)g(x) is at least d = 2t + 1, and at least t errors can be corrected.
There are some properties of finite fields that will let us see what is going on with these codes more clearly. We start with an irreducible polynomial q(x) in GF(p)[x]. We note that GF(p^n) is a finite-dimensional vector space over GF(p), where a vector (a_{n−1}, a_{n−2}, …, a_0), with each a_i taken mod p for 0 ≤ i < n, represents a polynomial of degree n − 1 in GF(p)[x]. We can choose the size of the finite field we work in using the property that GF(p^n) is the splitting field of the polynomial x^(p^n) − x, hence this polynomial factors completely in GF(p^n). In fact this polynomial factors into all of the distinct irreducible polynomials over GF(p) of degree dividing n. Once we have all the factors, we have a list from which to choose q(x). We then let α be a root of q(x); the powers of α form a cyclic multiplicative group of order p^n − 1, where n = deg(q(x)). What we need next is the encoding polynomial g(x). First we need to decide how many errors we want to be able to correct, which should be chosen using the formula above in such a way that the code is still useful: it wouldn't be useful if you found you could only send one bit of information at a time because g(x) was constructed too large. Since t is the number of errors we can correct, to build a code that corrects 3 errors we need 2t = 2(3) = 6 consecutive minimal polynomials; in that instance we need the minimal polynomials of α, α^2, …, α^6 to compute g(x). We note here that we would probably need a fairly large n to make such a code workable, since the maximum degree of a(x) is given by the formula above and deg(g(x)) will most likely be close to p^n for small n. The following is an example of the process of creating a BCH code.
I will construct a code in GF(8) from GF(2). Since 8 = 2^3, we want an irreducible polynomial of degree 3. We can determine the irreducible polynomials over GF(2) by factoring the polynomial x^8 + x, or by building up all the polynomials of degree 3 from polynomials of degree 1 and 2 and choosing one that doesn't fall into that list. I will use the irreducible polynomial q(x) = x^3 + x^2 + 1. Let α be a root of q(x), so q(α) = 0. We use this to generate GF(8), and we show that |α| = 7 using power notation. Since α is a root of q(x) we get α^3 + α^2 + 1 = 0, so α^3 = α^2 + 1. Taking the powers of α results in the following table:
0 = 0        α^0 = 1            α^1 = α        α^2 = α^2        α^3 = α^2 + 1
α^4 = α^2 + α + 1    α^5 = α + 1    α^6 = α^2 + α    α^7 = 1
and we have shown that α is a primitive element of GF(8).
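The table above can be generated mechanically; here is a sketch in Python, representing each element of GF(8) as a 3-bit mask (bit i is the coefficient of α^i) and reducing overflow with α^3 = α^2 + 1:

```python
# Generate the powers of alpha in GF(8) built from q(x) = x^3 + x^2 + 1.
# An element is a 3-bit mask; bit i is the coefficient of alpha^i.
# Multiplying by alpha shifts left; an overflow into bit 3 is reduced
# using alpha^3 = alpha^2 + 1, i.e. XOR with 0b101.
def alpha_powers(steps=8):
    elem, out = 1, []
    for _ in range(steps):
        out.append(elem)
        elem <<= 1
        if elem & 0b1000:
            elem = (elem ^ 0b1000) ^ 0b101
    return out

powers = alpha_powers()
print([bin(e) for e in powers])
# alpha^3 -> 0b101 (alpha^2 + 1), and alpha^7 -> 0b1 again,
# so alpha has order 7 and is primitive, matching the table.
```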
We then need to compute g(x), which is the lcm of the minimal polynomials of each α^i. Using a process similar to the one outlined in [1], we get the following results. Taking
0 = 0^2 = (α^3 + α^2 + 1)^2 = (α^3)^2 + (α^2)^2 + 1 = (α^2)^3 + (α^2)^2 + 1,
this means that m_1(α^2) = 0 and m_1(x) = m_2(x). We also have m_1(α^4) = 0; this follows from the fact that 4 = 2^2, hence α^4 = (α^2)^2, thus (m_1(α^2))^2 = 0^2. Therefore m_1(x) = m_2(x) = m_4(x). Next we find the minimal polynomial of α^3, α^5, and α^6, which share the same minimal polynomial, as shown by α^6 = (α^3)^2 and (α^6)^2 = α^12 = α^5, since 12 ≡ 5 (mod 7). We compute it by setting m_3(x) = (x − α^3)(x − α^5)(x − α^6) = x^3 + x^2(α^6 + α^5 + α^3) + x(α^11 + α^9 + α^8) + 1 = x^3 + x^2(α^6 + α^5 + α^3) + x(α^4 + α^2 + α) + 1. Reducing the polynomial we get x^3 + x^2(0) + x(1) + 1, hence m_3(x) = x^3 + x + 1. To compute g(x) we take lcm(m_1(x), …, m_6(x)), so g(x) = m_1(x)m_3(x). Therefore g(x) = (x^3 + x^2 + 1)(x^3 + x + 1) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1. Notice that we have included the minimal polynomials of 6 consecutive roots in our construction of g(x), so this code could theoretically correct 3 errors. Unfortunately, when we apply the above theorem's bound on the degree of the plaintext polynomial, p^n − k − 2 with k = deg(g(x)), we find p^n − k − 2 = 8 − 6 − 2 = 0, so deg(a(x)) ≤ 0, which means this code isn't useful, illustrating the earlier statement about the size of p^n relative to deg(g(x)).
We can now use some examples to illustrate the decoding of received information and checking it for errors.
Example 1: Assume that the polynomial q(x) = x^3 + x + 1 is used to construct a BCH code that corrects a single error, with plaintext polynomials of the form a(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0 ∈ GF(2)[x]. Assume that the message x^5 + x^4 + x^3 + 1 is received. What is the plaintext?
Solution: Since this code corrects a single error, it turns out that q(x) = g(x), because g(x) is the minimal polynomial of α and α^2, as shown above. To determine whether the received information is correct, we first reduce x^5 + x^4 + x^3 + 1 mod q(x); by the properties stated above, we want the result to be zero. The result of this operation turns out to be x + 1, showing that there is an error in our received information. However, since this is a single-error-correcting code and the remainder x + 1 has two terms, we have to find the single-error equivalent of x + 1. Examining the powers of α determined by this minimal polynomial, we find that x + 1 is associated with α^3, which gives our single error. We then subtract this error from the received polynomial, and the result is x^5 + x^4 + 1. Taking this as the intended codeword, we compute the plaintext polynomial as c(x)/g(x) = a(x), where c(x) is the code polynomial we just computed and a(x) is our plaintext. The result of this operation is a(x) = x^2 + x + 1. We can write this plaintext in binary form as a(x) = (0, 1, 1, 1).
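Both divisions in Example 1 can be checked with bit-mask polynomial arithmetic over GF(2); here the received word x^5 + x^4 + x^3 + 1 is the mask 0b111001 and q(x) = x^3 + x + 1 is 0b1011:

```python
# Polynomials over GF(2) as bit masks: bit i = coefficient of x^i.
def gf2_divmod(num, den):
    """Return (quotient, remainder) of num / den over GF(2)."""
    q = 0
    dden = den.bit_length() - 1          # degree of the divisor
    while num and num.bit_length() - 1 >= dden:
        shift = (num.bit_length() - 1) - dden
        q ^= 1 << shift
        num ^= den << shift              # subtraction == XOR in GF(2)
    return q, num

r = 0b111001                 # received word: x^5 + x^4 + x^3 + 1
g = 0b1011                   # q(x) = x^3 + x + 1
_, rem = gf2_divmod(r, g)
print(bin(rem))              # 0b11 -> x + 1, the error residue from the text

c = r ^ 0b1000               # strip the single error term x^3
a, rem2 = gf2_divmod(c, g)
print(bin(a), rem2)          # 0b111 -> x^2 + x + 1, remainder 0
```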
Example 2: Assume that the polynomial q(x) = x^4 + x + 1 is used to construct a BCH code that corrects a single error, with plaintext polynomials a(x) ∈ GF(2)[x]. What is the largest possible degree of a(x)? Assume that the polynomial x^12 + x^10 + x^8 + x^6 + x^2 + x is received. What is the plaintext?
Solution: A root α of this polynomial generates GF(16). Since we are generating a code that corrects a single error, we want t = 1 in the expression 2t + 1 from Theorem 90 on p. 215 of [1], so we need 2t = 2 consecutive roots. This happens to fall out easily, giving g(x) = q(x): since α is a root of the irreducible polynomial q(x), q(x) is the minimal polynomial of α and α^2, hence it should be used as the generating polynomial. We now know deg(g(x)), and using the formula p^n − k − 2 for the maximum degree of a(x) (also found in Theorem 90), we have 16 − 4 − 2 = 10, so deg(a(x)) ≤ 10. When we evaluate r(x)/g(x) we find that there was an error in the received text. The error residue turns out to be x^2 + x + 1, and for the same reason as above the associated single error is x^10. Proceeding in the same manner outlined in 10.2.1, the result is x^12 + x^8 + x^6 + x^2 + x, which is our codeword c(x). Then evaluating c(x)/g(x) = a(x), we find a(x) = x^8 + x^5 + x, which we can represent in binary form as (0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0).
Example 3: Let α be a root of x^2 + x + 2 in GF(9), and calculate g(x) = m_1(x)m_2(x).
Solution: We proceed in a similar fashion to the previous problems, with the exception that we are now working modulo 3 instead of modulo 2. In this problem the roots repeat themselves in multiples of 3, i.e. m_1(x) = m_3(x) and m_2(x) = m_6(x). Therefore m_1(x) = x^2 + x + 2 and m_2(x) = (x − α^2)(x − α^6) = x^2 − xα^6 − xα^2 + α^8 = x^2 − x(α^6 + α^2) + 1. Simplifying α^6 + α^2 we end up with α + 2 + 2α + 1 = 0 (mod 3). Thus m_2(x) = x^2 + 1. To find g(x) we take m_1(x)m_2(x) = (x^2 + x + 2)(x^2 + 1) = x^4 + x^3 + x^2 + 2x^2 + x + 2, and simplifying we end up with g(x) = x^4 + x^3 + x + 2.
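Example 3's product can be verified with a few lines of coefficient arithmetic mod 3:

```python
# Multiply polynomials with coefficients mod p; lists hold coefficients
# from the constant term upward, so [2, 1, 1] means x^2 + x + 2.
def polymul_mod(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

m1 = [2, 1, 1]               # x^2 + x + 2
m2 = [1, 0, 1]               # x^2 + 1
print(polymul_mod(m1, m2, 3))
# [2, 1, 0, 1, 1] -> g(x) = x^4 + x^3 + x + 2, matching the text
```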
Example 4: Assume the generator polynomial defined in Example 3 produced a codeword that was received as 2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2. What is the plaintext?
Solution: Since we are using the generator polynomial from Example 3, we have g(x) = x^4 + x^3 + x + 2, so we need to evaluate (2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2)/(x^4 + x^3 + x + 2). While doing this we need to keep in mind that we are now working in GF(3^2), so our coefficients are taken mod 3 instead of mod 2. Going through the polynomial division, we have (2x^3)(x^4 + x^3 + x + 2) = 2x^7 + 2x^6 + 2x^4 + x^3; note that 2·2 = 4 ≡ 1 (mod 3). Subtracting, (2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2) − (2x^7 + 2x^6 + 2x^4 + x^3) = −2x^6 + x^3 + x^2 + 2x + 2, and noting that −2 ≡ 1 (mod 3) we can change this to x^6 + x^3 + x^2 + 2x + 2. Then we take (x^2)(x^4 + x^3 + x + 2) = x^6 + x^5 + x^3 + 2x^2, and we have (x^6 + x^3 + x^2 + 2x + 2) − (x^6 + x^5 + x^3 + 2x^2) = −x^5 − x^2 + 2x + 2; again taking coefficients mod 3, this is 2x^5 + 2x^2 + 2x + 2. Continuing in the same manner all the way through the polynomial division, we end up with the error residue 2x^3. We now have r(x) = g(x)a(x) + e(x), therefore r(x) − e(x) = g(x)a(x), and we take (2x^7 + 2x^4 + 2x^3 + x^2 + 2x + 2) − (2x^3) = 2x^7 + 2x^4 + x^2 + 2x + 2. Evaluating (r(x) − e(x))/g(x) = (2x^7 + 2x^4 + x^2 + 2x + 2)/(x^4 + x^3 + x + 2) = 2x^3 + x^2 + 2x + 1 = a(x). As above, we can represent this plaintext word in ternary notation, similar to the binary used earlier, as a(x) = (2, 1, 2, 1). Note that there are only 4 positions in this notation because the maximum deg(a(x)) = p^n − k − 2 = 9 − 4 − 2 = 3.
We note finally that most codes have a check called a parity check matrix. These matrices allow the receiver to determine whether an error has occurred and, in some cases, correct that error; a notable example is the parity check matrix of the Hamming(7,4) code. While BCH codes have a parity check matrix, a useful property of this code allows us to skip the matrix algebra and simply use g(x) as our parity check, which reduces the number of operations necessary for the error detection/correction portion of the code. We can, however, construct the parity check matrix for a BCH code from powers of α. The first row is [1, α, α^2, …, α^(d−1)], where d − 1 is the number of coefficients in our codeword; the second row is [1, α^2, (α^2)^2, …, (α^2)^(d−1)]; the third row takes powers of α^3, and so on, until we have a matrix with p^n rows and d − 1 columns. We then take our codeword c(x) as a vector with its coefficients as entries and do the associated linear algebra, multiplying the matrix by the vector. This multiplication should result in 0 if we have received our information correctly. However, we don't need to go through this process, because it is replaced by simply finding the residue of our received polynomial mod g(x).
Through the discussion above and the concrete examples that have been provided I have demonstrated the construction and use of BCH codes.
David S. Dummit and Richard M. Foote, Abstract Algebra Third Edition, John Wiley and Sons Inc., New Jersey, 2004.
Darel W. Hardy and Carol L. Walker, Applied Algebra: Codes, Ciphers, and Discrete Algorithms, Prentice Hall, New Jersey, 2003.
Piyush Kurur, Lecture 14: BCH codes, http://www.cmi.ac.in/~ramprasad/lecturenotes/comp_numb_theory/lecture14.pdf, April 28, 2008.
-- Wed Sep 16, 2009 1:01 pm --
Some of the text formatting didn't convert correctly from the original document; superscripts and subscripts may appear flattened in places. Sorry about that; hopefully it will still give some insight.
Re: Math and Hacking/Computers
"What if C-A-T really spelled dog?"
-Revenge of The Nerds II
Re: Math and Hacking/Computers
Jgrimm wrote:Sanddbox I have to admit that I haven't had a good laugh like that in a while.
^Yes, now your argument is much more valid! [sarcasm]
Sorry for not replying; I didn't notice this thread and was away from HTS for a while.
Is math a fairly large part of the programs we create? Yes.
But the idea that you need a lot of these concepts to be able to program is absurd.
If you're going to make an encryption program, then yes, it's a damn good idea to research encryption.
Obviously, a very basic knowledge of math is needed. However, complex math itself isn't necessary to program. If you're going to make a program that is heavily laden with math concepts (encryption,
calculating fractals, whatever floats your boat), then a knowledge of the subject your program is about is necessary.
Oh, and to that other guy, it's spelled 'utter'. Sorry for being nitpicky, it's my OCD speaking.
On the 12th Day of Christmas… A KitchenAid Stand Mixer Giveaway! (Winner Announced)
UPDATE: The winner of the special edition KitchenAid mixer is:
#154 – Miss Christine: Her comment on being Brown Eyed Baker fan on Facebook was the winner. What she’s baking this holiday season:
“I am baking four recipes for holiday parties and two for the holidays! I I would LOVEE, to have that beauty in my kitchen!”
Congratulations Christine! You should have already received an email from me; make sure you reply with your mailing address so I can get your new KitchenAid mixer out to you!
Thanks everyone for entering, and thank you for making these 12 Days of Giveaways so much fun for me!
Welcome to Day 12 of the “12 Days of Giveaways”!
I’m totally bummed that today is the last day of the giveaways; I’ve had so much fun playing Santa! But, I wanted to end the twelve days with a big ol’ bang, and thought that this mixer fit the bill.
I hope you all agree :) Not only is this a KitchenAid stand mixer (my kitchen couldn’t survive without one!), but it has one of those oh-so-pretty glass bowls and, more importantly, sales of this
special edition model benefit Susan G. Komen Cook for the Cure®’s effort to end breast cancer. So not only do you get a shiny (and gorgeous!) new toy, but I also get to donate to a good cause. It’s a
holiday win-win!
Giveaway Details
The winner will receive one (1) KitchenAid Artisan Susan G. Komen Stand Mixer.
How to Enter
To enter to win, simply leave a comment on this post and answer the question:
“How many different things are you planning to bake this holiday season?”
You can receive up to FOUR additional entries to win by doing the following:
1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment.
2. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment.
3. Tweet the following about the giveaway: “Day 12 of “12 Days of Giveaways” from @browneyedbaker: Win a special edition KitchenAid Artisan Stand Mixer! http://wp.me/p1rsii-3Wz”. Come back and let me
know you’ve Tweeted in an additional comment.
4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment.
Deadline: Today (Friday), December 16, 2011 at 11:59pm EST
Winner: The winner will be chosen at random using Random.org and announced here tomorrow. If the winner does not respond within 24 hours, another winner will be selected.
Disclaimer: This giveaway is sponsored by Brown Eyed Baker.
GOOD LUCK!!
6,006 Responses to “On the 12th Day of Christmas… A KitchenAid Stand Mixer Giveaway! (Winner Announced)”
1. I subscribed to every and all things you have! Love your blog and wished you blogged more! Thanks. Hope I win!
3. I’m a fan of you on facebook!
4. I’m hosting my very first cookie swap.. so excited! Probably going to bake a bunch of different things because I’m so excited!
5. I plan on baking four different goodies for the holiday.
6. I like you on Facebook as well!
7. I’m planning to bake as many things as I can manage! A peanut butter cheesecake, gingersnaps, cinnamon rolls, mint chocolate cookies, and a slew of other christmas cookies
9. I have subscribed by email.
11. I’m moving houses (no more living with my dad, yay!), so I don’t think I’ll have time to bake very much.. :/ Especially as my kitchen appliance collection is a bit, well.. incomplete. Instead,
I’ll enjoy all the treats my family has to offer, and I don’t think that’s a bad thing – they’re delicious.
12. My baking list is long, but I don’t know if I will have time to get to it all.
13. “How many different things are you planning to bake this holiday season?”
Probably 3… ???
Christmas fruit cake is a MUST!
Cookies to follow!
Pudding will be a bonus this season!!!
Hopefully yes!
14. I am subscribed via email, too. Thanks for the giveaway!
15. I’ve baked two or three things and will bake at least half a dozen more. I love the excuse to bake at the holidays
16. I have subscribed by email!
17. I’ve tweeted about the giveaway.
19. And I’m your fan on facebook as well : )
20. Every year I bake/make something special to give out to my fellow church members for Christmas. Two years ago, it was S’mores Pops… Last year, I baked sugar cookie stockings with red glaze and sugar crystals. This year, I was planning on making red velvet whoopie pies… but you saved the day with a recipe for peppermint whoopie pies! Thank you for the inspiration!
21. at least a dozen different items… I’m always baking no matter what time of year!
22. FOLLOWing brown eyed baker on twitter!
23. Tweeted the giveaway on twitter!
24. LIKEing brown eyed baker on facebook!
25. mini peppermint cheesecakes with oreo crust… one of my favorites
26. I am planning on baking cookies next week, bread, and some brownie cupcakes! Thinking of adding in your oreo truffles you posted a little while back to mix it up this year.
Thanks for the chance to win!
27. and i am a subscriber by rss!
30. I like your use of random.org. I’ll be making at least a few things, and at least a few discovered through your site. Thanks.
31. I am going to bake 3 different kinds of Christmas cookies this weekend!
32. I would LOVE a new mixer!!
I have about 15 different things I plan on baking for the holidays!
Thanks for the great giveaway!!
33. I am subscribed to the rss feed!
34. Uf! Too many things. Cinnamon rolls, multiple pies, dinner rolls, cookies, snicker doodle bars, and who knows what else!
36. I’m planning on baking four different things this holiday season: trifle, sugar cookies, meringues, and fudge.
37. I follow you on twitter @aanie1963
40. I’m going to bake tons of things this Christmas, and they are all going to be CHOCOLATE!
41. I would like to make about 20 different things but I’ll probably only make 4 or 5!
42. I subscribe to your emails!
43. Oh hey Twitter! @emilyprevost is now following @browneyedbaker!
45. I like you, I like Kitchenaid, and I like Brown Eyed Baker on Facebook.
46. This year I will be making a sweet vanilla party cake for my cuz’s Christmas Eve party (recipe courtesy of Sweetapolita! She has such a great site!), my Mom’s Oatmeal cookies as a gift for my
dear friend Jessi, and gingerbread men for Santa with leftovers for the family on Christmas Day (maybe a gingerbread house if my sis helps).
Oh, how I love holiday baking! And baking in general!
47. I lose count over the number of goodies I make, but fudge is always on the top of the list!
49. Between family dinners, friends’ parties, and winter birthdays, I wouldn’t be surprised if I ended up baking 15 different sweets this holiday season.
50. Following you on the Twitter!
51. Oh my goodness, I’ll be baking a lot: candies, cookies, pies/cakes, bread, and specialty items for christmas dinner, not to mention all of the cooking!
52. I’m subscribed to you over e-mail!
55. I am subscribed to Brown Eyed Baker by RSS
56. And finally I subscribed through email!
FINGERS CROSSED
58. I tweeted about today’s giveaway!
59. Already a fan on facebook
60. I’m a fan of you on Facebook!
61. Oh, I’ve already made cookies and fudge. Next on my list are homemade marshmallow and buche Noel.
62. Finally off from school, so probably at least one thing a day!
63. I am an rss feed subscriber!
64. I plan on making four different types of cookies to pass out to coworkers. Still deciding on which ones to make!!
65. I like your Facebook page, too! Thanks for a great giveaway!
67. I’m a terrible baker (but am working to rectify that), so I won’t be baking anything for the holidays except cookies using my grandfather’s secret recipe.
70. Just a couple different kinds of cookies. We’re really broke this Christmas, and can’t give any presents or make anything real special.
71. One or two things. I’m pretty busy this season!
76. I subscribe via Google Reader.
77. I’m a fan on facebook too
78. Probably at least 5 different things! I love shipping holiday cookies to friends and family that are across the country. Happy Holidays!
79. Baking a few different kinds of cakes and cookies!
80. I am making 12 kinds of cookies, plus candy (homemade caramels, marshmallows)!!!!
82. Cookies, cakes, cinnamon rolls, and a pie. Does that cover all the major groups? Maybe I need some fudge too.
84. I also am now a fan on Facebook!
85. I subscribed to get your e-mails
86. I’ve subscribed to the emails!
87. following you on twitter…
88. Just tweeted about the giveaway.
89. Way too many to count! We’re making treats for family, friends, neighbors, and ourselves!
91. I follow you on Google Reader.
94. Maybe 3-5 times! For a few holiday parties and for friends and family!
95. I plan on baking some cupcakes, brownies and maybe some more cupcakes.
Today was the first time I baked cupcakes. I baked some red velvet cupcakes and they came out great. I tried to make the frosting too but without a mixer it came out too runny, I think. I ended
up using store bought frosting.
You can see a picture of my cupcakes here.
96. I only plan on baking one more thing – chocolate crinkle cookies! I think I’m done with my holiday baking otherwise
97. I’m planning to make lemon tart, chocolate pear cake, and caramelized pomegranate carrot cake next week. Oh wait, how could I forget the cookies! The stand mixer is in such a lovely shade of pink
so hopefully I win this *my fingers are crossed*
Happy holidays!
98. I want to make pumpkin bread and cookies. Thank you for this great giveaway!
100. This is my first Christmas out of college and boy am I excited to bake. I have so many things I want to bake, but I think I’m most excited to bake some different kinds of chocolate bark with my boyfriend :).
Plate thickness
Hello guys, I have a problem determining the thickness of a plate. I have a motor (m = 17 kg) attached to a plate; the design can be seen in the attachment. I need to know the dimension "t".
One way to do it is to calculate the force in each of the 4 bolts that fasten the plate to the wall. Then calculate the bending moment in the plate equal to the bolt tensile force times its
perpendicular distance to the inner set of the bolt circle which holds the motor to the plate. Then check the plate stresses from that moment.
If I convert to customary USA units, the motor weighs say 50 pounds; let's use 100 pounds to account for vibration impacts and throw a safety factor of 4 on it. That's 400 pounds. Assume a 12 inch diameter motor, 12 inches long, so the moment arm to the plate is 6 inches and the moment at the plate = 400 × 6 = 2400 inch-pounds. The load to one bolt is 2400 divided by the 12 inch bolt spread, shared by 2 bolts, so bolt tension = 100 pounds. The moment at the inner bolts is then say 1000 inch-pounds. Using a 30,000 psi allowable stress for steel or aluminum, the required section modulus is 1000/30,000 = 1/30 inches cubed; setting that equal to 10t^2/6 (for a 10 inch wide plate) and solving gives t ≈ 1/8 inch. Use 1/4 inch aluminum as suggested, or 1/4 inch steel for better plate rigidity. This is based on a lot of assumptions. Disclaimer: proceed at your own risk; the author absolves himself of all responsibility.
Scheduling for Multiple Flows Sharing a Time-Varying Channel: The Exponential Rule
Results 1 - 10 of 103
- IEEE Communications Magazine , 2003
"... As the cellular and PCS world collides with Wireless LANs and Internet-based packet data, new networking approaches will support the integration of voice and data on the composite infrastructure
of cellular base stations and Ethernet-based wireless access points. This paper highlights some of the pa ..."
Cited by 155 (3 self)
As the cellular and PCS world collides with Wireless LANs and Internet-based packet data, new networking approaches will support the integration of voice and data on the composite infrastructure of
cellular base stations and Ethernet-based wireless access points. This paper highlights some of the past accomplishments and promising research avenues for an important topic in the creation of
future wireless networks. In this paper, we address the issue of cross-layer networking, where the physical and MAC layer knowledge of the wireless medium is shared with higher layers, in order to
provide efficient methods of allocating network resources and applications over the Internet. In essence, future networks will need to provide ”impedance matching ” of the instantaneous radio channel
conditions and capacity needs with the traffic and congestion conditions found over the packet-based world of the Internet. Further, such matching will need to be coordinated with a wide range of
particular applications and user expectations, making the topic of cross-layer networking an increasingly important one for the evolving wireless build-out. 1
- In Proceedings of IEEE Infocom , 2005
"... We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel
conditions may be time-varying and different for different receivers. It is well-known that appropri-ately ..."
Cited by 128 (22 self)
We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel
conditions may be time-varying and different for different receivers. It is well-known that appropri-ately chosen queue-length based policies are throughput-optimal while other policies based on the
estimation of channel statistics can be used to allocate resources fairly (such as proportional fairness) among competing users. In this paper, we show that a combination of queue-length-based
scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.
- IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS , 2006
"... This tutorial paper overviews recent developments in optimization based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of
opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable my ..."
Cited by 128 (13 self)
This tutorial paper overviews recent developments in optimization based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of
opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned
and the main obstacles in extending the work to general resource allocation problems for multi-hop wireless networks. Towards this end, we show that a clean-slate optimization based approach to the
multi-hop resource allocation problem naturally results in a “loosely coupled” crosslayer solution. That is, the algorithms obtained map to different layers (transport, network, and MAC/PHY) of the
protocol stack, and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex and thus needs
simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the crosslayer framework and describe recently developed distributed algorithms along these
lines. We conclude by describing a set of open research problems.
- COMPUTER NETWORKS , 2003
"... We present a method, called opportunistic scheduling, for exploiting the time-varying nature of the radio environment to increase the overall performance of the system under certain quality of
service/fairness requirements of users. We first introduce a general framework for opportunistic scheduling ..."
Cited by 127 (6 self)
We present a method, called opportunistic scheduling, for exploiting the time-varying nature of the radio environment to increase the overall performance of the system under certain quality of
service/fairness requirements of users. We first introduce a general framework for opportunistic scheduling, and then identify three general categories of scheduling problems under this framework. We
provide optimal solutions for each of these scheduling problems. All the proposed scheduling policies are implementable online; we provide parameter estimation algorithms and implementation
procedures for them. We also show how previous work by us and others directly fits into or is related to this framework. We demonstrate via simulation that opportunistic scheduling schemes result in
significant performance improvement compared with non-opportunistic alternatives.
- in Proceedings of 17th International Teletraffic Congress (ITC-17
"... High Data Rate (HDR) technology has recently been proposed as an overlay to CDMA... In this paper, we study various scheduling algorithms for a mixture of real-time and non-real-time data over
HDR/CDMA and compare their performance. We study the performance with respect to packet delays and also ave ..."
Cited by 111 (0 self)
High Data Rate (HDR) technology has recently been proposed as an overlay to CDMA... In this paper, we study various scheduling algorithms for a mixture of real-time and non-real-time data over HDR/CDMA and compare their performance. We study the performance with respect to packet delays and also average throughput, where we use a token based mechanism to give minimum throughput guarantees. We find that a rule which we call the exponential rule performs well with regard to both these criteria. (In a companion paper, we show that this rule is throughput-optimal, i.e., it makes the queues stable if it is feasible to do so with any other scheduling rule.) Our main conclusion is that intelligent scheduling algorithms in conjunction with token based rate control provide an efficient framework for supporting a mixture of real-time and non-real-time data applications in a single carrier.
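As a rough illustration of the idea behind the exponential rule (a simplified sketch only; the exact form and normalization in the paper differ), a scheduler can serve, in each slot, the user whose instantaneous channel rate, boosted exponentially in its weighted queue length, is largest:

```python
import math

def exp_rule_pick(rates, queues, a=None):
    """Pick which user to serve this slot, exponential-rule style.

    Simplified sketch: each user's current channel rate r_n is boosted by
    exp(a_n * q_n / (1 + sqrt(avg))), where avg is the mean weighted queue
    length, so long queues eventually override any rate advantage.
    """
    n = len(rates)
    if a is None:
        a = [1.0] * n
    avg = sum(ai * qi for ai, qi in zip(a, queues)) / n
    denom = 1.0 + math.sqrt(avg)
    scores = [r * math.exp(ai * qi / denom)
              for r, ai, qi in zip(rates, a, queues)]
    return max(range(n), key=scores.__getitem__)

# User 1 has the worse channel (rate 1 vs 2) but a much longer queue,
# so the exponential term dominates and user 1 is served.
print(exp_rule_pick(rates=[2.0, 1.0], queues=[0.0, 10.0]))  # -> 1
```

With all queues empty the rule degenerates to serving the user with the best instantaneous rate, which is the opportunistic behavior the surrounding abstracts describe.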
- IEEE/ACM TRANSACTIONS ON NETWORKING , 2005
"... We study the problem of stable scheduling for a class of wireless networks. The goal is to stabilize the queues holding information to be transmitted over a fading channel. Few assumptions are
made on the arrival process statistics other than the assumption that their mean values lie within the cap ..."
Cited by 66 (15 self)
We study the problem of stable scheduling for a class of wireless networks. The goal is to stabilize the queues holding information to be transmitted over a fading channel. Few assumptions are made
on the arrival process statistics other than the assumption that their mean values lie within the capacity region and that they satisfy a version of the law of large numbers. We prove that, for any
mean arrival rate that lies in the capacity region, the queues will be stable under our policy. Moreover, we show that it is easy to incorporate imperfect queue length information and other
approximations that can simplify the implementation of our policy.
, 2002
"... We study the use of channel state information for random access in fading channels. Traditionally, random access protocols have been designed by assuming simple models for the physical layer
where all users are symmetric and there is no notion of channel state. We introduce a reception model that ta ..."
Cited by 60 (18 self)
We study the use of channel state information for random access in fading channels. Traditionally, random access protocols have been designed by assuming simple models for the physical layer where
all users are symmetric and there is no notion of channel state. We introduce a reception model that takes into account the channel states of various users. Under the assumption that each user has
access to his channel state information (CSI), we propose a variant of Slotted ALOHA protocol for medium access control, where the transmission probability is allowed to be a function of the CSI. The
function is called the transmission control scheme. Assuming the finite user infinite buffer model we derive expressions for the maximum stable throughput of the system. We introduce the notion of
asymptotic stable throughput (AST) that is the maximum stable throughput as the number of users goes to infinity. We consider two types of transmission control namely population independent
transmission control (PITC), where the transmission control is not a function of the size of the network, and population dependent transmission control (PDTC), where the transmission control is a function of
the size of the network. We obtain expressions for the AST achievable with PITC. For population dependent transmission control, we introduce a particular transmission control that can potentially
lead to significant gains in AST. For both PITC and PDTC, we show that the effect of transmission control is equivalent to changing the probability distribution of the channel state. The theory is
then applied to CDMA networks with Linear Minimum Mean Square Error (LMMSE) receivers and Matched Filters (MF) to illustrate the effectiveness of utilizing channel state. It is shown that through the
use of channel state, with an...
- IEEE Journal on Selected Areas in Communications , 2006
"... In this work, we describe and analyze a joint scheduling, routing and congestion control mecha-nism for wireless networks, that asymptotically guarantees stability of the buffers and fair
allocation of the network resources. The queue lengths serve as common information to different layers of the ne ..."
Cited by 58 (8 self)
In this work, we describe and analyze a joint scheduling, routing and congestion control mecha-nism for wireless networks, that asymptotically guarantees stability of the buffers and fair allocation
of the network resources. The queue lengths serve as common information to different layers of the network protocol stack. Our main contribution is to prove the asymptotic optimality of a primal-dual
congestion controller, which is known to model different versions of TCP well.
- Advances in Applied Probability , 2004
"... We consider the problem of scheduling transmissions of multiple data users (flows) sharing the same wireless channel (server). The unique feature of this problem is the fact that the capacity
(service rate) of the channel varies randomly with time and asynchronously for different users. We study a s ..."
Cited by 48 (12 self)
We consider the problem of scheduling transmissions of multiple data users (flows) sharing the same wireless channel (server). The unique feature of this problem is the fact that the capacity
(service rate) of the channel varies randomly with time and asynchronously for different users. We study a scheduling policy called Exponential scheduling rule, which was introduced in an earlier
paper. Given a system with N users, and any set of positive numbers {a_n}, n = 1, 2, ..., N, we show that in a heavy-traffic limit, under a non-restrictive complete resource pooling condition, this algorithm has the property that, for each time t, it (asymptotically) minimizes max_n a_n q̃_n(t), where q̃_n(t) is user n's queue length in the heavy-traffic regime.
- in Proceedings of IEEE INFOCOM ’05 , 2005
"... We consider the problem of scheduling multiple users sharing a time-varying wireless channel. (As an example, this is a model of scheduling in 3G wireless technologies, such as CDMA2000
3G1xEV-DO downlink scheduling.) We introduce an algorithm which seeks to optimize a concave utility H_i(R_i) o ..."
Cited by 41 (10 self)
We consider the problem of scheduling multiple users sharing a time-varying wireless channel. (As an example, this is a model of scheduling in 3G wireless technologies, such as CDMA2000 3G1xEV-DO
downlink scheduling.) We introduce an algorithm which seeks to optimize a concave utility H_i(R_i) of the user throughputs R_i, subject to certain lower and upper throughput bounds: R_i^min ≤ R_i ≤ R_i^max. The algorithm, which we call the Gradient algorithm with Minimum/Maximum Rate constraints (GMR), uses a token counter mechanism, which modifies an algorithm solving the corresponding unconstrained problem, to produce the algorithm solving the problem with throughput constraints. Two important special cases of the utility functions are log R_i and R_i, corresponding to the common Proportional Fairness and Throughput Maximization objectives.
Arizona City Geometry Tutor
Find an Arizona City Geometry Tutor
...He is both fun and serious about his work." "An inspiration. He touches the lives of everyone he teaches and gives students a new confidence in themselves. A smart, encouraging, patient and
kind man that I will forever remember and respect." I am a highly qualified and state certified high school math teacher.
15 Subjects: including geometry, calculus, statistics, algebra 1
...I have over 10 years of experience in tutoring students with ADD/ADHD. I have attended seminars covering learning styles in special needs students in the course of my tutoring. This has led me
to conduct seminars for teachers in addressing learning styles in special needs students.
29 Subjects: including geometry, reading, English, writing
...I am also a college football player and track athlete, and have tons of personal training knowledge from my years of experience. Strength training, speed training, fitness training, mass
gaining and cutting, linebacker training, and collegiate wrestling are my strong points. I am also proficient in technical and rhetorical writing, and physics. Just ask.
28 Subjects: including geometry, reading, calculus, English
...I began to tutor students in math, reading, and writing to improve their test scores. After every student finished their math homework, I reviewed each problem on the board and taught them how
to evaluate each one. When it came to reading, I sat down with students while they read and corrected them if they mispronounced a word.
67 Subjects: including geometry, chemistry, Spanish, English
...I have even been known to ask students to generate their own examples and check their solutions. I do not meet with minors unless a parent is present. When tutoring in a pupil's home, I prefer
to work in a dining room or other public area of the house.
15 Subjects: including geometry, calculus, algebra 1, algebra 2
model checking and theorem proving
Results 1 - 10 of 15
"... We propose a combination of model checking and interactive theorem proving where the theorem prover is used to represent finite and infinite state systems, reason about them compositionally and
reduce them to small finite systems by verified abstractions. As an example we verify a version of the Alt ..."
Cited by 45 (3 self)
We propose a combination of model checking and interactive theorem proving where the theorem prover is used to represent finite and infinite state systems, reason about them compositionally and
reduce them to small finite systems by verified abstractions. As an example we verify a version of the Alternating Bit Protocol with unbounded lossy and duplicating channels: the channels are
abstracted by interactive proof and the resulting finite state system is model checked.
, 1995
"... A method combining data abstraction, model checking and theorem proving is presented. It provides a semi-automatic, formal framework for proving arbitrary linear time temporal logic properties
of infinite state reactive systems. The paper contains a complete case study to prove safety and liveness o ..."
Cited by 34 (0 self)
A method combining data abstraction, model checking and theorem proving is presented. It provides a semi-automatic, formal framework for proving arbitrary linear time temporal logic properties of
infinite state reactive systems. The paper contains a complete case study to prove safety and liveness of an implementation of a scheduler for the readers/writers problem which uses unbounded queues
and sets. We argue that the proposed framework could be automated to a very large extent making this approach feasible in an industrial environment.
- IN ALUR AND HENZINGER [AH96 , 1996
"... Our goal is to use a theorem prover in order to verify invariance properties of distributed systems in a "model checking like" manner. A system S is described by a set of sequential components,
each one given by a transition relation and a predicate Init defining the set of initial states. In order ..."
Cited by 27 (5 self)
Our goal is to use a theorem prover in order to verify invariance properties of distributed systems in a "model checking like" manner. A system S is described by a set of sequential components, each
one given by a transition relation and a predicate Init defining the set of initial states. In order to verify that P is an invariant of S, we try to compute, in a model checking like manner, the
weakest predicate P0 stronger than P and weaker than Init which is an inductive invariant, that is, whenever P0 is true in some state, then P0 remains true after the execution of any possible
transition. The fact that P is an invariant can be expressed by a set of predicates (having no more quantifiers than P ) on the set of program variables, one for every possible transition of the
system. In order to prove these predicates, we use either automatic or assisted theorem proving depending on their nature. We show in this paper how this can be done in an efficient way using the
Prototype V...
- IN COMPUTER AIDED VERIFICATION : 7TH INTERNATIONAL CONFERENCE, CAV '95, LNCS 939 , 1995
"... We show how the second-order monadic theory of strings can be used to specify hardware components and their behavior. This logic admits a decision procedure and counter-model generator based on
canonical automata for formulas. We have used a system implementing these concepts to verify, or find e ..."
Cited by 25 (10 self)
We show how the second-order monadic theory of strings can be used to specify hardware components and their behavior. This logic admits a decision procedure and counter-model generator based on
canonical automata for formulas. We have used a system implementing these concepts to verify, or find errors in, a number of circuits proposed in the literature. The techniques we use make it easier
to identify regularity in circuits, including those that are parameterized or have parameterized behavioral specifications. Our proofs are semantic and do not require lemmas or induction as would be
needed when employing a conventional theory of strings as a recursive data type.
- In Proc. of the Second International Workshop on Tools and Algorithms for the Construction and Analysis of Systems , 1996
"... 2The bulk of the contribution of the first author to this work was done when he was on leave from UCLA and doing a summer job at Bell Laboratories. ..."
Cited by 20 (1 self)
2The bulk of the contribution of the first author to this work was done when he was on leave from UCLA and doing a summer job at Bell Laboratories.
- Science of Computer Programming , 1997
"... Model checking is a proven successful technology for verifying hardware. It works, however, on only finite state machines, and most software systems have infinitely many states. Our approach to
applying model checking to software hinges on identifying appropriate abstractions that exploit the nature ..."
Cited by 17 (0 self)
Model checking is a proven successful technology for verifying hardware. It works, however, on only finite state machines, and most software systems have infinitely many states. Our approach to
applying model checking to software hinges on identifying appropriate abstractions that exploit the nature of both the system, S, and the property, φ, to be verified. We check φ on an abstracted,
but finite, model of S. Following this approach we verified three cache coherence protocols used in distributed file systems. These protocols have to satisfy this property: "If a client believes that
a cached file is valid then the authorized server believes that the client's copy is valid." In our finite model of the system, we need only represent the "beliefs" that a client and a server have
about a cached file; we can abstract from the caches, the files' contents, and even the files themselves. Moreover, by successive application of the generalization rule from predicate logic, we need
only conside...
- Proceedings of the Third Intl. Workshop on Frontiers of Combining Systems, volume 1794 of LNCS , 2000
"... Abstract. The two main approaches to the formal verification of reactive systems are based, respectively, on model checking (algorithmic verification) and theorem proving (deductive
verification). These two approaches have complementary strengths and weaknesses, and their combination promises to enh ..."
Cited by 11 (0 self)
Abstract. The two main approaches to the formal verification of reactive systems are based, respectively, on model checking (algorithmic verification) and theorem proving (deductive verification).
These two approaches have complementary strengths and weaknesses, and their combination promises to enhance the capabilities of each. This paper surveys a number of methods for doing so. As is often
the case, the combinations can be classified according to how tightly the different components are integrated, their range of application, and their degree of automation. 1
, 1999
"... The use of automatic model checking algorithms to verify detailed gate or switch level designs of circuits is very attractive because the method is automatic and such models can accurately
capture detailed functional, timing, and even subtle electrical behaviour of circuits. The use of binary decisi ..."
Cited by 4 (1 self)
The use of automatic model checking algorithms to verify detailed gate or switch level designs of circuits is very attractive because the method is automatic and such models can accurately capture
detailed functional, timing, and even subtle electrical behaviour of circuits. The use of binary decision diagrams has extended by orders of magnitude the size of circuits that can be so verified,
but there are still very significant limitations due to the computational complexity of the problem. Verifying abstract versions of the model is attractive to reduce computational costs but this
poses the problem of how to build abstractions easily without losing the accuracy of the low-level model. This paper proposes a method of bridging the gap between detailed designs and abstract
- Proc. FME '96, LNCS (Springer-Verlag , 1996
"... . Provably correct software can only be achieved by basing the development process on formal methods. For most industrial applications such a development never terminates because requirements
change and new functionality has to be added to the system. Therefore a formal method that supports an incre ..."
Cited by 3 (3 self)
. Provably correct software can only be achieved by basing the development process on formal methods. For most industrial applications such a development never terminates because requirements change
and new functionality has to be added to the system. Therefore a formal method that supports an incremental development of complex systems is required. The project CoCoN (Provably Correct
Communication Networks) that is carried out jointly between Philips Research Laboratories Aachen and the University of Oldenburg takes results from the ESPRIT Basic Research Action ProCoS to show the
applicability of a more formal approach to the development of correct telecommunications software. These ProCoS-methods have been adapted to support the development of extensible specifications for
distributed systems. Throughout this paper our approach is exemplified by a case study how call handling software for telecommunication switching systems should be developed. keywords: extension of
existing formal methods, combination of methods, incremental development 1
Domain question...
January 26th 2010, 09:08 PM #1
Jan 2010
Domain question...
I'm having problems finding the domain of f(x) = ln(4 + ln(x)).
I thought the domain of ln(x) was (0,infinity) but I was wrong.
Can anyone explain how to find the domain of ln(x)?
If, by that, you mean x positive, you are right. What makes you think you are wrong?
But, of course, the domain of ln(x) is not necessarily the same as the domain of ln(4+ ln(x)). In order to be able to take the second logarithm, you must have 4+ ln(x)> 0 or ln(x)> -4. For what
value of x is ln(x)= -4?
Can anyone explain how to find the domain of ln(x)?
You already know that. That's the wrong question!
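Putting the hint together: f(x) = ln(4 + ln(x)) needs x > 0 (for the inner log) and 4 + ln(x) > 0, i.e. ln(x) > -4, i.e. x > e^(-4), so the domain is (e^(-4), ∞). A quick numeric sanity check:

```python
import math

# Domain of f(x) = ln(4 + ln(x)): need x > 0 AND 4 + ln(x) > 0,
# i.e. x > e^(-4) ~ 0.0183.
boundary = math.exp(-4)

def f(x):
    return math.log(4 + math.log(x))

print(f(2 * boundary))   # just inside the domain: evaluates fine
try:
    f(boundary / 2)      # in (0, e^-4): 4 + ln(x) < 0, outer log fails
except ValueError:
    print("outside the domain")
```

Python's `math.log` raising `ValueError` for a non-positive argument plays the role of the domain restriction here.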
Subjects to be Learned
• digraph
• vertex
• arc
• loop
• in-degree, out-degree
• path, directed path, simple path
• cycle
• connected graph
• partial digraph
• subdigraph
A digraph is short for directed graph, and it is a diagram composed of points called vertices (nodes) and arrows called arcs going from a vertex to a vertex.
For example the figure below is a digraph with 3 vertices and 4 arcs.
In this figure the vertices are labeled with numbers 1, 2, and 3.
Mathematically a digraph is defined as follows.
Definition (digraph): A digraph is an ordered pair of sets G = (V, A), where V is a set of vertices and A is a set of ordered pairs (called arcs) of vertices of V.
In the example, G[1] , given above, V = { 1, 2, 3 } , and A = { <1, 1>, <1, 2>, <1, 3>, <2, 3> } .
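The ordered pair (V, A) can be stored directly in code; a minimal sketch of G[1] (the variable names are ours, not from the text):

```python
# G1 from the example: V = {1, 2, 3}, A = {<1,1>, <1,2>, <1,3>, <2,3>}
V = {1, 2, 3}
A = {(1, 1), (1, 2), (1, 3), (2, 3)}

# Equivalent adjacency-map view: each vertex mapped to the heads of its arcs
adj = {v: {w for (u, w) in A if u == v} for v in V}
print(adj[1], adj[2], adj[3])
```

Vertex 3 maps to the empty set because no arc leaves it in G[1].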
Digraph representation of binary relations
A binary relation on a set can be represented by a digraph.
Let R be a binary relation on a set A, that is R is a subset of A A.
Then the digraph, call it G, representing R can be constructed as follows:
1. The vertices of the digraph G are the elements of A, and
2. <x, y> is an arc of G from vertex x to vertex y if and only if <x, y> is in R.
Example: The less than relation R on the set of integers A = {1, 2, 3, 4} is the set {<1, 2> , <1, 3>, <1, 4>, <2, 3> , <2, 4> , <3, 4> } and it can be represented by the following digraph.
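The two construction steps above are mechanical; for the less-than relation on {1, 2, 3, 4} they amount to:

```python
# Digraph for the "less than" relation R on A = {1, 2, 3, 4}:
# the vertices are the elements of A; <x, y> is an arc iff x < y.
A = {1, 2, 3, 4}
R = {(x, y) for x in A for y in A if x < y}
print(sorted(R))  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

The printed arc set matches the six ordered pairs listed in the example.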
Let us now define some of the basic concepts on digraphs.
Definition (loop): An arc from a vertex to itself such as <1, 1>, is called a loop (or self-loop)
Definition (degree of vertex): The in-degree of a vertex is the number of arcs coming to the vertex, and the out-degree is the number of arcs going out of the vertex.
For example, the in-degree of vertex 2 in the digraph G[2] shown above is 1, and the out-degree is 2.
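Counting degrees is a direct scan over the arc set; the sketch below reproduces the in-degree 1 and out-degree 2 of vertex 2 in G[2].

```python
# In-degree of v: number of arcs <u, v> coming into v.
# Out-degree of v: number of arcs <v, w> going out of v.
def in_degree(A, v):
    return sum(1 for (u, w) in A if w == v)

def out_degree(A, v):
    return sum(1 for (u, w) in A if u == v)

# Digraph G2 from the text: the less-than relation on {1, 2, 3, 4}.
A2 = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)}
deg_in_2 = in_degree(A2, 2)    # one arc comes into vertex 2: <1, 2>
deg_out_2 = out_degree(A2, 2)  # two arcs leave vertex 2: <2, 3> and <2, 4>
```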
Definition (path): A path from a vertex x[0] to a vertex x[n] in a digraph G = (V, A) is a sequence of vertices x[0] , x[1] , ....., x[n] that satisfies the following:
for each i, 0 ≤ i ≤ n - 1, <x[i] , x[i + 1]> ∈ A , or <x[i + 1] , x[i]> ∈ A , that is, between any pair of consecutive vertices there is an arc connecting them.
x[0] is the initial vertex and x[n] is the terminal vertex of the path.
A path is called a directed path if <x[i] , x[i + 1]> ∈ A , for every i, 0 ≤ i ≤ n - 1.
If the initial and the terminal vertices of a path are the same, that is, x[0] = x[n] , then the path is called a cycle .
If no arcs appear more than once in a path, the path is called a simple path. A path is called elementary if no vertices appear more than once in it except for the initial and terminal vertices of a
cycle. In a simple cycle one vertex appears twice in the sequence: once as the initial vertex and once as the terminal vertex.
Note: There are two different definitions for "simple path". Here we follow the definition of Berge[1], Liu[2], Rosen[3] and others. A "simple path" according to another group (Cormen et al[4],
Stanat and McAllister[5] and others) is a path in which no vertices appear more than once.
Definition(connected graph): A digraph is said to be connected if there is a path between every pair of its vertices.
Example: In the digraph G[3] given below,
1, 2, 5 is a simple and elementary path but not directed,
1, 2, 2, 5 is a simple path but neither directed nor elementary.
1, 2, 4, 5 is a simple elementary directed path,
1, 2, 4, 5, 2, 4, 5 is a directed path but not simple (hence not elementary),
1, 3, 5, 2, 1 is a simple elementary cycle but not directed, and
2, 4, 5, 2 is a simple elementary directed cycle.
This digraph is connected.
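The path and directed-path notions above can be tested mechanically. The arc set of G[3] appears only in the figure, so the sketch below assumes the directed arcs that the example paths imply: <1, 2>, the loop <2, 2>, <2, 4>, <4, 5>, and <5, 2>.

```python
# Predicates for the path notions defined above, for a digraph with arc set A.
def is_path(A, seq):
    # Between consecutive vertices there must be an arc in either direction.
    return all((u, v) in A or (v, u) in A for u, v in zip(seq, seq[1:]))

def is_directed_path(A, seq):
    # Every consecutive pair must be an arc in the forward direction.
    return all((u, v) in A for u, v in zip(seq, seq[1:]))

def is_cycle(seq):
    # Initial and terminal vertices coincide.
    return len(seq) > 1 and seq[0] == seq[-1]

# Assumed directed arcs of G3, read off the example paths above.
A3 = {(1, 2), (2, 2), (2, 4), (4, 5), (5, 2)}

ok_directed = is_directed_path(A3, [1, 2, 4, 5])                    # directed path
ok_cycle = is_directed_path(A3, [2, 4, 5, 2]) and is_cycle([2, 4, 5, 2])
```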
Sometimes we need to refer to part of a given digraph. A partial digraph of a digraph is a digraph consisting of arbitrary numbers of vertices and arcs of the given digraph, while a subdigraph is a
digraph consisting of an arbitrary number of vertices and all the arcs between them of the given digraph. Formally they are defined as follows:
Definition (subdigraph, partial digraph): Let G = ( V, A ) be a digraph. Then a digraph ( V^', A^' ) is a partial digraph of G , if V^' ⊆ V , and A^' ⊆ A ∩ ( V^' × V^' ) . It is a subdigraph of G , if V^' ⊆ V , and A^' = A ∩ ( V^' × V^' ).
A partial digraph and a subdigraph of G[3] given above are shown below.
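The definition above can be checked mechanically. Since G[3]'s arc set is only in the figure, the less-than digraph on {1, 2, 3, 4} stands in for it here.

```python
# Subdigraph of G = (V, A) induced by a vertex subset Vp: keep ALL arcs of A
# between vertices of Vp, i.e. Ap = A ∩ (Vp × Vp).  A partial digraph may
# instead keep any subset of those arcs.
def subdigraph(V, A, Vp):
    assert Vp <= V
    Ap = {(u, w) for (u, w) in A if u in Vp and w in Vp}
    return Vp, Ap

V = {1, 2, 3, 4}
A = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)}
Vp, Ap = subdigraph(V, A, {1, 2, 3})
```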
golden ratio
1. golden ratio(Noun)
The irrational number (approximately 1.618), usually denoted by the Greek letter φ (phi), which is equal to the sum of its own reciprocal and 1, or, equivalently, is such that the ratio of 1 to the number is equal to the ratio of its reciprocal to 1.
1. Golden ratio
In mathematics and the arts, two quantities are in the golden ratio if the ratio of the sum of the quantities to the larger quantity is equal to the ratio of the larger quantity to the smaller one. The figure on the right illustrates the geometric relationship. Expressed algebraically: (a + b)/a = a/b = φ, where the Greek letter phi (φ) represents the golden ratio. Its value is φ = (1 + √5)/2 ≈ 1.6180339887. The golden ratio is also called the golden section or golden mean. Other names include extreme and mean ratio, medial section, divine proportion, divine section, golden proportion, golden cut, and golden number. Many 20th century artists and architects have proportioned their works to approximate the golden ratio—especially in the form of the golden rectangle, in which the ratio of the longer side to the shorter is the golden ratio—believing this proportion to be aesthetically pleasing. Mathematicians since Euclid have studied the properties of the golden ratio, including its appearance in the dimensions of a regular pentagon and in a golden rectangle, which can be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has also been used to analyze the proportions of natural objects as well as man-made systems such as financial markets, in some cases based on dubious fits to data.
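The defining property can be verified numerically from the closed form φ = (1 + √5)/2:

```python
import math

# The golden ratio is the positive root of x**2 = x + 1.
phi = (1 + math.sqrt(5)) / 2

# Defining property: phi equals 1 plus its own reciprocal ...
err_reciprocal = abs(phi - (1 + 1 / phi))

# ... and (a + b) / a == a / b whenever a / b == phi.
a, b = phi, 1.0
err_ratio = abs((a + b) / a - a / b)
```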
NAG Library
NAG Library Routine Document
1 Purpose
F02SDF finds the eigenvector corresponding to a given real eigenvalue for the generalized problem $Ax=\lambda Bx$, or for the standard problem $Ax=\lambda x$, where $A$ and $B$ are real band matrices.
2 Specification
SUBROUTINE F02SDF ( N, MA1, MB1, A, LDA, B, LDB, SYM, RELEP, RMU, VEC, D, IWORK, WORK, LWORK, IFAIL)
INTEGER N, MA1, MB1, LDA, LDB, IWORK(N), LWORK, IFAIL
REAL (KIND=nag_wp) A(LDA,N), B(LDB,N), RELEP, RMU, VEC(N), D(30), WORK(LWORK)
LOGICAL SYM
3 Description
Given an approximation $\mu$ to a real eigenvalue $\lambda$ of the generalized eigenproblem $Ax=\lambda Bx$, F02SDF attempts to compute the corresponding eigenvector by inverse iteration.
F02SDF first computes lower and upper triangular factors, $L$ and $U$, of $A-\mu B$, using Gaussian elimination with interchanges, and then solves the equation $Ux=e$, where $e={\left(1,1,1,\dots ,1\
right)}^{\mathrm{T}}$ – this is the first half iteration.
There are then three possible courses of action depending on the input value of ${\mathbf{D}}\left(1\right)$:
1. ${\mathbf{D}}\left(1\right)=0$.
This setting should be used if $\lambda$ is an ill-conditioned eigenvalue (provided the matrix elements do not vary widely in order of magnitude). In this case it is essential to accept only a
vector found after one half iteration, and $\mu$ must be a very good approximation to $\lambda$. If acceptable growth is achieved in the solution of $Ux=e$, then the normalized $x$ is accepted as
the eigenvector. If not, columns of an orthogonal matrix are tried in turn in place of $e$. If none of these give acceptable growth, the routine fails, indicating that $\mu$ was not a
sufficiently good approximation to $\lambda$.
2. ${\mathbf{D}}\left(1\right)>0$.
This setting should be used if $\mu$ is moderately close to an eigenvalue which is not ill-conditioned (provided the matrix elements do not differ widely in order of magnitude). If acceptable
growth is achieved in the solution of $Ux=e$, the normalized $x$ is accepted as the eigenvector. If not, inverse iteration is performed. Up to $30$ iterations are allowed to achieve a vector and
a correction to $\mu$ which together give acceptably small residuals.
3. ${\mathbf{D}}\left(1\right)<0$.
This setting should be used if the elements of $A$ and $B$ vary widely in order of magnitude. Inverse iteration is performed, but a different convergence criterion is used.
See Section 8.3 for further details.
Note that the bandwidth of the matrix $A$ must not be less than the bandwidth of $B$. If this is not so, either $A$ must be filled out with zeros, or matrices $A$ and $B$ may be reversed and $1/\mu$
supplied as an approximation to the eigenvalue $1/\lambda$. Also it is assumed that $A$ and $B$ each have the same number of subdiagonals as superdiagonals. If this is not so, they must be filled out
with zeros. If $A$ and $B$ are both symmetric, only the upper triangles need be supplied.
4 References
Peters G and Wilkinson J H (1979) Inverse iteration, ill-conditioned equations and Newton's method SIAM Rev. 21 339–360
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H (1972) Inverse iteration in theory and practice Symposia Mathematica Volume X 361–379 Istituto Nazionale di Alta Matematica, Monograf, Bologna
Wilkinson J H (1974) Notes on inverse iteration and ill-conditioned eigensystems Acta Univ. Carolin. Math. Phys. 1–2 173–177
Wilkinson J H (1979) Kronecker's canonical form and the $QZ$ algorithm Linear Algebra Appl. 28 285–303
5 Parameters
1: N – INTEGERInput
On entry: $n$, the order of the matrices $A$ and $B$.
Constraint: ${\mathbf{N}}\ge 1$.
2: MA1 – INTEGERInput
On entry: the value ${m}_{A}+1$, where ${m}_{A}$ is the number of nonzero lines on each side of the diagonal of $A$. Thus the total bandwidth of $A$ is $2{m}_{A}+1$.
Constraint: $1\le {\mathbf{MA1}}\le {\mathbf{N}}$.
3: MB1 – INTEGERInput
On entry: if ${\mathbf{MB1}}\le 0$, $B$ is assumed to be the unit matrix. Otherwise MB1 must specify the value ${m}_{B}+1$, where ${m}_{B}$ is the number of nonzero lines on each side of the diagonal of $B$. Thus the total bandwidth of $B$ is $2{m}_{B}+1$.
Constraint: ${\mathbf{MB1}}\le {\mathbf{MA1}}$.
4: A(LDA,N) – REAL (KIND=nag_wp) arrayInput/Output
On entry: the $n$ by $n$ band matrix $A$. The ${m}_{A}$ subdiagonals must be stored in the first ${m}_{A}$ rows of the array; the diagonal in the $\left({m}_{A}+1\right)$th row; and the ${m}_{A}$ superdiagonals in rows ${m}_{A}+2$ to $2{m}_{A}+1$. Each row of the matrix must be stored in the corresponding column of the array. For example, if $n=6$ and ${m}_{A}=2$ the storage scheme is:
*    *    a31  a42  a53  a64
*    a21  a32  a43  a54  a65
a11  a22  a33  a44  a55  a66
a12  a23  a34  a45  a56  *
a13  a24  a35  a46  *    *
Elements of the array marked * need not be set. The following code assigns the matrix elements within the band to the correct elements of the array:
DO 20 J = 1, N
   DO 10 I = MAX(1,J-MA1+1), MIN(N,J+MA1-1)
      A(I-J+MA1,J) = matrix(J,I)
10 CONTINUE
20 CONTINUE
If ${\mathbf{SYM}}=\mathrm{.TRUE.}$ (i.e., both $A$ and $B$ are symmetric), only the lower triangle of $A$ need be stored in the first MA1 rows of the array.
On exit: details of the factorization of $A-\stackrel{-}{\lambda }B$, where $\stackrel{-}{\lambda }$ is an estimate of the eigenvalue.
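For what it's worth, the band storage scheme just described can be mirrored in plain Python to check one's understanding of the layout. This is an illustrative sketch with 0-based indexing, not NAG code: array column j holds matrix row j, and matrix element (j, i) lands at array row i - j + ma1 - 1, so row ma1 - 1 of the packed array holds the diagonal.

```python
# Sketch (not NAG code) of the band storage scheme above, 0-based.
def pack_band(matrix, ma1):
    n = len(matrix)
    packed = [[0.0] * n for _ in range(2 * ma1 - 1)]
    for j in range(n):
        # Only columns of matrix row j that lie within the band.
        for i in range(max(0, j - ma1 + 1), min(n, j + ma1)):
            packed[i - j + ma1 - 1][j] = matrix[j][i]
    return packed

# A 4x4 tridiagonal example (ma1 = 2, total bandwidth 3).
m = [[1.0, 5.0, 0.0, 0.0],
     [6.0, 2.0, 7.0, 0.0],
     [0.0, 8.0, 3.0, 9.0],
     [0.0, 0.0, 4.0, 4.0]]
packed = pack_band(m, 2)
```

Row 1 of `packed` then carries the diagonal 1, 2, 3, 4, with the sub- and superdiagonal entries in rows 0 and 2.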
5: LDA – INTEGERInput
On entry: the first dimension of the array A as declared in the (sub)program from which F02SDF is called.
Constraint: ${\mathbf{LDA}}\ge 2×{\mathbf{MA1}}-1$.
6: B(LDB,N) – REAL (KIND=nag_wp) arrayInput/Output
On entry: if ${\mathbf{MB1}}>0$, B must contain the $n$ by $n$ band matrix $B$, stored in the same way as $A$. If ${\mathbf{SYM}}=\mathrm{.TRUE.}$, only the lower triangle of $B$ need be stored in the first MB1 rows of the array.
If ${\mathbf{MB1}}\le 0$, the array is not used.
On exit: elements in the top-left corner, and in the bottom right corner if ${\mathbf{SYM}}=\mathrm{.FALSE.}$, are set to zero; otherwise the array is unchanged.
7: LDB – INTEGERInput
On entry: the first dimension of the array B as declared in the (sub)program from which F02SDF is called.
□ if ${\mathbf{SYM}}=\mathrm{.FALSE.}$, ${\mathbf{LDB}}\ge 2×{\mathbf{MB1}}-1$;
□ if ${\mathbf{SYM}}=\mathrm{.TRUE.}$, ${\mathbf{LDB}}\ge {\mathbf{MB1}}$.
8: SYM – LOGICALInput
On entry: if ${\mathbf{SYM}}=\mathrm{.TRUE.}$, both $A$ and $B$ are assumed to be symmetric and only their upper triangles need be stored. Otherwise SYM must be set to .FALSE..
9: RELEP – REAL (KIND=nag_wp)Input
On entry: the relative error of the coefficients of the given matrices $A$ and $B$. If the value of RELEP is less than the machine precision, the machine precision is used instead.
10: RMU – REAL (KIND=nag_wp)Input
On entry: $\mu$, an approximation to the eigenvalue for which the corresponding eigenvector is required.
11: VEC(N) – REAL (KIND=nag_wp) arrayOutput
On exit: the eigenvector, normalized so that the largest element is unity, corresponding to the improved eigenvalue ${\mathbf{RMU}}+{\mathbf{D}}\left(30\right)$.
12: D($30$) – REAL (KIND=nag_wp) arrayInput/Output
On entry: ${\mathbf{D}}\left(1\right)$ must be set to indicate the type of problem (see Section 3):
${\mathbf{D}}\left(1\right)>0.0$ indicates a well-conditioned eigenvalue.
${\mathbf{D}}\left(1\right)=0.0$ indicates an ill-conditioned eigenvalue.
${\mathbf{D}}\left(1\right)<0.0$ indicates that the matrices have elements varying widely in order of magnitude.
On exit: if ${\mathbf{D}}\left(1\right)\ne 0.0$ on entry, the successive corrections to $\mu$ are given in ${\mathbf{D}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,k$, where $k$ is the total number of iterations performed. The final correction is also given in the last position, ${\mathbf{D}}\left(30\right)$, of the array. The remaining elements of D are set to zero.
If ${\mathbf{D}}\left(1\right)=0.0$ on entry, no corrections to $\mu$ are computed and ${\mathbf{D}}\left(\mathit{i}\right)$ is set to $0.0$, for $\mathit{i}=1,2,\dots ,30$. Thus in all three
cases the best available approximation to the eigenvalue is ${\mathbf{RMU}}+{\mathbf{D}}\left(30\right)$.
13: IWORK(N) – INTEGER arrayWorkspace
14: WORK(LWORK) – REAL (KIND=nag_wp) arrayWorkspace
15: LWORK – INTEGERInput
On entry: the dimension of the array WORK as declared in the (sub)program from which F02SDF is called.
□ if ${\mathbf{D}}\left(1\right)\ne 0.0$, ${\mathbf{LWORK}}\ge {\mathbf{N}}×\left({\mathbf{MA1}}+1\right)$;
□ if ${\mathbf{D}}\left(1\right)=0.0$, ${\mathbf{LWORK}}\ge 2×{\mathbf{N}}$.
16: IFAIL – INTEGERInput/Output
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$.
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
On entry, ${\mathbf{N}}<1$,
or ${\mathbf{MA1}}<1$,
or ${\mathbf{MA1}}>{\mathbf{N}}$,
or ${\mathbf{LDA}}<2×{\mathbf{MA1}}-1$,
or ${\mathbf{LDB}}<{\mathbf{MB1}}$ when ${\mathbf{SYM}}=\mathrm{.TRUE.}$,
or ${\mathbf{LDB}}<2×{\mathbf{MB1}}-1$ when ${\mathbf{SYM}}=\mathrm{.FALSE.}$ (LDB is not checked if ${\mathbf{MB1}}\le 0$).
On entry, ${\mathbf{MA1}}<{\mathbf{MB1}}$. Either fill out A with zeros, or reverse the roles of A and B, and replace RMU by its reciprocal, i.e., solve $Bx={\lambda }^{-1}Ax\text{.}$
On entry, ${\mathbf{LWORK}}<2×{\mathbf{N}}$ when ${\mathbf{D}}\left(1\right)=0.0$,
or ${\mathbf{LWORK}}<{\mathbf{N}}×\left({\mathbf{MA1}}+1\right)$ when ${\mathbf{D}}\left(1\right)\ne 0.0$.
$A$ is null. If $B$ is nonsingular, all the eigenvalues are zero and any set of $n$ orthogonal vectors forms the eigensolution.
$B$ is null. If $A$ is nonsingular, all the eigenvalues are infinite, and the columns of the unit matrix are eigenvectors.
On entry, $A$ and $B$ are both null. The eigensolution is arbitrary.
${\mathbf{D}}\left(1\right)\ne 0.0$ on entry and convergence is not achieved in $30$ iterations. Either the eigenvalue is ill-conditioned or RMU is a poor approximation to the eigenvalue. See Section 8.3.
${\mathbf{D}}\left(1\right)=0.0$ on entry and no eigenvector has been found after all the trial vectors have been tried: RMU is not a sufficiently good approximation to the eigenvalue.
${\mathbf{D}}\left(1\right)<0.0$ on entry and RMU is too inaccurate for the solution to converge.
7 Accuracy
The eigensolution is exact for some perturbed problem $\left(A+E\right)x=\lambda \left(B+F\right)x$, where $‖E‖$ and $‖F‖$ are of the order of $\eta \left(‖A‖+\mu ‖B‖\right)$, where $\eta$ is the value used for RELEP.
The time taken by F02SDF is approximately proportional to $n{\left(2{m}_{A}+1\right)}^{2}$ for factorization, and to $n\left(2{m}_{A}+1\right)$ for each iteration.
The storage of the matrices $A$ and $B$ is designed for efficiency on a paged machine.
F02SDF will work with full matrices but it will do so inefficiently, particularly in respect of storage requirements.
Inverse iteration is performed according to the rule
$\left(A-\mu B\right){y}_{r+1}=B{x}_{r},\quad {x}_{r+1}=\frac{1}{{\alpha }_{r+1}}{y}_{r+1},$
where ${\alpha }_{r+1}$ is the element of ${y}_{r+1}$ of largest magnitude.
Hence the residual corresponding to ${x}_{r+1}$ is very small if $\left|{\alpha }_{r+1}\right|$ is very large (see Peters and Wilkinson (1979)). The first half iteration, $Ux=e$, corresponds to taking one step of this process with a special initial vector. If $\mu$ is a very accurate eigenvalue, then there should always be an initial vector $e$ such that one half iteration gives a small residual and thus a good eigenvector. If the eigenvalue is ill-conditioned, then second and subsequent iterated vectors may not be even remotely close to an eigenvector of a neighbouring problem (see pages 374–376 of Wilkinson (1972) and Wilkinson (1974)). In this case it is essential to accept only a vector obtained after one half iteration.
However, for well-conditioned eigenvalues, there is no loss in performing more than one iteration (see page 376 of Wilkinson (1972)), and indeed it will be necessary to iterate if RMU is not such a good approximation to the eigenvalue. When the iteration has converged, ${y}_{r+1}$ will be some multiple of ${x}_{r}$, ${y}_{r+1}={\beta }_{r+1}{x}_{r}$, say. Thus $\mu +\frac{1}{{\beta }_{r+1}}$ is a better approximation to the eigenvalue. ${\beta }_{r+1}$ is obtained as the element of ${y}_{r+1}$ which corresponds to the element of largest magnitude, unity, in ${x}_{r}$. The routine terminates when $‖\left(A-\left(\mu +\frac{1}{{\beta }_{r}}\right)B\right){x}_{r}‖$ is of the order of the machine precision relative to $‖A‖+\left|\mu \right|‖B‖$.
If the elements of $A$ and $B$ vary widely in order of magnitude, then $‖A‖$ and $‖B‖$ are excessively large and a different convergence test is required. The routine terminates when the difference
between successive corrections to $\mu$ is small relative to $\mu$.
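The iteration and the $\mu + 1/\beta$ refinement described above can be sketched for a small dense problem. This standalone Python sketch uses a naive dense solver and is purely illustrative; F02SDF itself works with the banded $L$ and $U$ factors.

```python
# Minimal dense sketch (not the NAG routine) of inverse iteration for
# A x = lambda B x: repeatedly solve (A - mu B) y = B x and normalise by
# the largest-magnitude element alpha; mu + 1/alpha refines the eigenvalue.
def solve(M, b):
    # Gaussian elimination with partial pivoting on a small dense system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def inverse_iteration(A, B, mu, iters=30):
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        shifted = [[A[i][j] - mu * B[i][j] for j in range(n)] for i in range(n)]
        rhs = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        y = solve(shifted, rhs)
        alpha = max(y, key=abs)          # largest-magnitude element of y
        x = [v / alpha for v in y]       # normalise: largest element is unity
        mu_refined = mu + 1.0 / alpha    # refined eigenvalue estimate
    return mu_refined, x

# Standard problem (B = identity): A has eigenvalues 1 and 3.
mu_refined, vec = inverse_iteration([[2.0, 1.0], [1.0, 2.0]],
                                    [[1.0, 0.0], [0.0, 1.0]], 2.5)
```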
In practice one does not necessarily know if the given problem is well-conditioned or ill-conditioned. In order to provide some information on the condition of the eigenvalue or the accuracy of RMU in the event of failure, successive values of $\frac{1}{{\beta }_{r}}$ are stored in the vector D when ${\mathbf{D}}\left(1\right)$ is nonzero on input. If these values appear to be converging steadily, then it is likely that RMU was a poor approximation to the eigenvalue and it is worth trying again with ${\mathbf{RMU}}+{\mathbf{D}}\left(30\right)$ as the initial approximation. If the values in D vary considerably in magnitude, then the eigenvalue is ill-conditioned.
A discussion of the significance of the singularity of $A$ and $B$ is given in relation to the $QZ$ algorithm in Wilkinson (1979).
9 Example
Given the generalized eigenproblem $Ax=\lambda Bx$ where
A = (  1   1   2   0   0 )        B = (  5   1   0   0   0 )
    ( -1   2   1   2   0 )            (  1   4   2   0   0 )
    (  0  -1   3   1   2 )            (  0   2   3   2   0 )
    (  0   0  -1   4   1 )            (  0   0   2   2   1 )
    (  0   0   0  -1   5 )            (  0   0   0   1   1 )
find the eigenvector corresponding to the approximate eigenvalue given in the data. $B$ is symmetric, but $A$ is not, so SYM must be set to .FALSE. and all the elements of $B$ in the band must be supplied to the routine. $A$ (as written above) has $1$ subdiagonal and $2$ superdiagonals, so MA1 must be set to $3$ and $A$ filled out with an additional subdiagonal of zeros. Each row of the matrices is read in as data in turn.
9.1 Program Text
9.2 Program Data
9.3 Program Results
Please tell me: what is the difference between the neutron energy spectrum and the neutron flux?
Neutron energy spectrum is simply a description of the neutron population by energy, without any spatial reference. Neutron flux is a spatial description, the number of neutrons passing through a
unit area per unit time.
However, neutron flux could be energy dependent, i.e. one can refer to a thermal flux, that is the flux of neutrons whose energies are below some particular energy (e.g. 0.025 eV). Or one can refer
to a fast flux with E > 0.82 MeV or 1 MeV. The energy cut off is arbitrary.
If one has an energy-dependent flux, [itex]\phi(x,y,z,E)[/itex], then one can integrate over the entire energy spectrum or a portion of the energy spectrum to obtain the spatial flux for that range of energies, which would be [itex]\phi(x,y,z)[/itex].
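That integration over an energy range is easy to sketch numerically. The tabulated spectrum below is purely illustrative, not reactor data, and the 0.025 eV thermal cut-off is the arbitrary choice mentioned above.

```python
# Form group (energy-integrated) fluxes at one spatial point from an
# energy-dependent flux phi(E), tabulated on an energy grid.
def group_flux(energies, phi, e_lo, e_hi):
    # Trapezoidal integral of phi(E) dE over segments inside [e_lo, e_hi].
    total = 0.0
    for i in range(len(energies) - 1):
        e0, e1 = energies[i], energies[i + 1]
        if e0 >= e_lo and e1 <= e_hi:
            total += 0.5 * (phi[i] + phi[i + 1]) * (e1 - e0)
    return total

energies = [0.0, 0.01, 0.025, 1.0, 2.0]   # eV grid (illustrative)
phi = [0.0, 4.0, 5.0, 1.0, 0.5]           # neutrons / cm^2 / s / eV
thermal = group_flux(energies, phi, 0.0, 0.025)   # E <= 0.025 eV cut-off
total = group_flux(energies, phi, 0.0, 2.0)       # whole tabulated range
```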
Friction problem involving 2 blocks sliding in 2 directions with 2 frictions
1. The problem statement, all variables and given/known data
A 1-kg block is pushed against a 4-kg block on a horizontal surface of coefficient of friction 0.25, as shown in the figure. Determine the minimum force needed to ensure that the 1-kg block does not
slip down. Assume that the coefficient of friction at the interface between the block is 0.4. Hint: The two blocks exert equal and opposite forces on each other.
2. Relevant equations
3. The attempt at a solution
In class we have done similar problems, only with a frictionless horizontal surface. I don't know how to account for the horizontal coefficient of friction of .25 in this problem. Using the above equations and ignoring the horizontal coefficient of friction, I get this:
Now how do I account for the horizontal coefficient of friction of .25? Thanks in advance for any help you can provide.
Well, examining block one, we notice that we want [tex]\sum\vec{F}=m\vec{a}=0=\vec{F}_f+\vec{F}_g\rightarrow\vec{F}_f=-\vec{F}_g[/tex]. From this, we can examine the frictional force, specifically:
[tex]\vec{F}_f=\mu_i\vec{N}\rightarrow m\vec{g}/\mu_i=\vec{N}[/tex].
What next? :)
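For what it's worth, one consistent reading of the hint (equal and opposite interface forces) gives a closed-form minimum force. The geometry used below (the 1-kg block held off the floor against the side of the 4-kg block, so floor friction acts only on the 4-kg block, which also carries the 1-kg block's weight through the interface friction) and g = 9.8 m/s^2 are assumptions, since the figure is not reproduced here.

```python
# Hedged sketch of one reading of the two-block problem.
g = 9.8                      # m/s^2 (assumed)
m1, m2 = 1.0, 4.0            # kg
mu_floor, mu_i = 0.25, 0.4

# At the minimum force the interface friction is at its limit:
#   mu_i * N = m1 * g   ->   N = m1 * g / mu_i
N = m1 * g / mu_i

# Block 1 (horizontal):  F - N = m1 * a
# Whole system:          F - mu_floor * (m1 + m2) * g = (m1 + m2) * a
# Eliminating a gives F = ((m1 + m2) * N - m1 * f_floor) / m2.
f_floor = mu_floor * (m1 + m2) * g
F_min = ((m1 + m2) * N - m1 * f_floor) / m2
a = (F_min - N) / m1
```

Under these assumptions the required normal force is 24.5 N and the minimum applied force comes out just under 28 N.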
1. Field
This disclosure relates generally to memories, and more specifically, to determining a hit in a content addressable memory.
2. Related Art
Cache memories are common for improving speed. The improved speed is achieved with a high speed memory that is small but fast compared to main memory. Cache accesses may be performed with a variety
of techniques. Sometimes these techniques involve the sum of a first operand and a second operand. Caches are useful only when there is a hit in the cache. Because the cycle time of a system may be
limited by the cache access speed, it is useful to improve the speed of operation in such a case where the stored data is accessed using the sum of the first and second operands. This improves access time and system speed, or allows more entries for a given speed. There are also benefits to having a cache that can variably size its entries. For example, it may be desirable to allow variation in how a hit is determined.
Thus, there is a need for a technique that improves upon one or more of the issues described above.
The present invention is illustrated by way of example and is not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for
simplicity and clarity and have not necessarily been drawn to scale.
FIG. 1 is a block diagram of a system according to an embodiment of the invention;
FIG. 2 is a block diagram of first portion of the system of FIG. 1;
FIG. 3 is a block diagram of a portion of the first portion shown in FIG. 2;
FIG. 4 is a circuit diagram of the portion of the first portion shown in FIG. 3 and a portion of the block diagram shown in FIG. 1;
FIG. 5 is a block diagram of a second portion of the system of FIG. 1; and
FIG. 6 is a block diagram of a portion of a system according to an alternative embodiment;
A system is used to determine if a sum of a first operand and a second operand is the same as a third operand wherein a comparison to the third operand is of variable length. This is particularly
useful in a content addressable memory (CAM) where the likelihood of hit is commonly improved over a limited set associative cache and allows for the CAM to identify different things in different
entries. For example, one entry can be one length to identify a page of a memory and another entry can be a different length to identify a page of memory of a different size. This is better
understood by reference to the following description and the drawings.
The terms “assert” or “set” and “negate” (or “deassert” or “clear”) are used herein when referring to the rendering of a signal, status bit, or similar apparatus into its logically true or logically
false state, respectively. If the logically true state is a logic level one, the logically false state is a logic level zero. And if the logically true state is a logic level zero, the logically
false state is a logic level one.
Each signal described herein may be designed as positive or negative logic, where negative logic can be indicated by a bar over the signal name or an asterisk (*) following the name. In the case of a
negative logic signal, the signal is active low where the logically true state corresponds to a logic level zero. In the case of a positive logic signal, the signal is active high where the logically
true state corresponds to a logic level one. Note that any of the signals described herein can be designed as either negative or positive logic signals. Therefore, in alternate embodiments, those
signals described as positive logic signals may be implemented as negative logic signals, and those signals described as negative logic signals may be implemented as positive logic signals.
Shown in FIG. 1 is a system 10 comprising propagate and generate logic 12, a CAM 14, a carry-in circuit 20, and a controller 22. CAM 14 comprises a RAM select circuit 16, a RAM array 18, and a
plurality of rows of which a row 24, a row 26, a row 28, and a row 30 are shown in FIG. 1. Rows 24, 26, 28, and 30 provide inputs h0, h1, h2, and h3, respectively, to RAM select circuit 16. Row 24
comprises compare logic 32 and bitcell portion 34. Row 26 comprises compare logic 36 and bitcell portion 38. Row 28 comprises compare logic 40 and bitcell portion 42. Row 30 comprises compare logic
44 and bitcell portion 46. RAM 18 has a plurality of readable segments of which RAM0, RAM1, RAM2, and RAMj are shown coupled to RAM select circuit 16. Controller 22 provides signals mask0, mask1,
mask2, and mask j to logic circuits 32, 36, 40, and 44, respectively.
In operation, system 10 determines if the sum of input signals A and B is the same as the contents of one of the bitcell portions such as bitcell portions 34, 38, 42, and 46 and if there is a hit,
then RAM 18 will provide the selected entry as an output. The determination is made concurrently with all of the bitcell portions. As an example, the determination will be explained with regard to
bitcell portion 38. Inputs A and B are multi-bit and are shown as a0-ai and b0-bi, respectively. The number of bits is the same as the number of bits of the bitcell portions 34, 38, 42, and 46. For
each bit location, propagate and generate circuit 12 provides a propagate signal p and a generate signal g. For example, corresponding to bit location k1 of bitcell portion 38, a propagate signal p1
and a generate signal g1 are generated in response to signal a1 and signal b1. Similarly, propagate signal p2 and generate signal g2 are generated in response to signals a2 and b2. Compare logic 36
receives mask1 which includes bits m10-m1i which identify which portion of bitcell portion 38 is relevant and required carry information. Carry-in logic 20 receives inputs A and B and determines a
true carry for each bit location for the most significant bit location to next to least significant bit location shown as signals tc0-tc(i-1). Simply providing the carry information for each bit
location is much simpler and faster than having to perform a full add. Thus, carry-in circuit 20 provides the carry information in less than a clock cycle. Compare logic circuit 36 thus uses
propagate signals p0-pi, generate signals g0-gi, stored information k10-k1i, mask1 signals, and true carry information tC0-tC(i-1) to determine if there is a match between the sum of A and B and the
contents of bitcell portion 38. If there is a match, this is considered a hit and signal h1 is asserted, which causes RAM select circuit 16 to cause the contents of RAM1 to be provided as the selected output. If there is not a match, this is considered a miss and signal h1 is deasserted so that RAM select circuit 16 does not cause RAM1 to provide its contents as the selected entry. A similar operation is performed with regard to all of the bit locations to determine if any contents of RAM 18 are output as the selected entry.
Shown in FIG. 2 is logic circuit 36 comprising bit logic 39, bit logic 41, bit logic 43, bit logic 45, and AND circuit 47. Additional logic circuits are present but not shown. Bit logic 39 receives
propagate signal p0, generate signal g0, bitcell bit k10, bitcell bit bar kb10, mask m10, mask m11, true carry tc0, and required carry rc0. Bit logic 41 receives propagate signal p1, generate signal
g1, bitcell bit k11, bitcell bit bar kb11, mask m11, mask m12, true carry tc1, and required carry rc1. Bit logic 43 receives propagate signal p2, generate signal g2, bitcell bit k12, bitcell bit bar
kb12, mask m12, mask m13, true carry tc2, and required carry rc2. Bit logic 45 receives propagate signal pi, generate signal gi, bitcell bit k1i, bitcell bit bar kb1i, mask m1i, a deasserted signal 0 for a second mask signal, and deasserted signals 0 for the carry bits. Bit logic circuits 39, 41, 43, and 45 correspond to bit locations 0, 1, 2, and i, respectively. Bit location 0 in this example
is the most significant bit. Thus bit location i is the least significant bit, which is why no carry is provided for that bit.
In operation, each bit logic circuit 39, 41, 43, and 45 may determine if there is a match for its corresponding bit location. Using bit logic circuit 41 as an example, mask signal m11 determines if a
compare function is to be performed by bit logic circuit 41. If mask m11 is asserted, then a compare is performed. If mask m11 is deasserted, then the output z1 is asserted and no comparison is
required. When the mask signal associated with the bit logic is deasserted, there is no need for a carry. If mask m12 is asserted, the required carry rc1 is used. If mask m12 is deasserted, then true
carry tc1 is used. When the preceding bit location is not performing a compare but the present bit location is, then the true carry must be used because the required carry is not valid. If both the
preceding and present bit location are performing a compare then the required carry is valid. The required carry can be generated faster than the true carry so is preferred when available. Also the
true carry may become prohibitively slow when near the most significant bit. The compare is performed by adding the bitcell value to the propagate value and comparing to the carry. If they are the
same that is a hit for that bit location. If they are different then there is a miss for that bit location which also means there is a miss for the compare logic. Thus, for bit logic 41, propagate
signal p1 is added to signal k11 and compared to either required carry rc1 or true carry tc1. If the comparison is that they are the same, then that is a hit. If they are different, then that is a
miss. Generate signal g1 is used to provide the required carry signal for the next bit location. Thus, if generate signal g1 is asserted, then required carry signal rc0 is asserted to bit logic 39.
Required carry rc0 can also be asserted by signal k11 being deasserted and propagate signal p1 being asserted.
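The per-bit test just described (add the stored bit to the propagate signal and compare against the incoming carry) can be simulated. The sketch below is an illustration of the general sum-addressed-compare idea, not the patented circuit: bit 0 here is the least significant bit (the figure numbers bit 0 as most significant), and the carry recurrence plays the role of the "required carry" chain when every less significant bit has already matched.

```python
# Illustrative sum-addressed compare: decide whether A + B equals a stored
# word K bit by bit, without first completing the full add.  At each bit i,
# the sum bit is p_i XOR c_i, so a match requires (k_i XOR p_i) == c_i.
def sum_matches(a, b, k, width):
    c = 0  # no carry into the least significant bit
    for i in range(width):
        a_i, b_i, k_i = (a >> i) & 1, (b >> i) & 1, (k >> i) & 1
        p_i = a_i ^ b_i          # propagate
        g_i = a_i & b_i          # generate
        if (k_i ^ p_i) != c:     # sum bit would differ from stored bit: miss
            return False
        c = g_i | (p_i & c)      # carry into the next bit
    return True

hit = sum_matches(0x2F, 0x11, 0x40, 8)    # 0x2F + 0x11 == 0x40
miss = sum_matches(0x2F, 0x11, 0x41, 8)
```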
Shown in FIG. 3 is a logic diagram of bit logic 41. Bit logic 41 comprises a multiplexer 48, an Exclusive OR gate 50, an Exclusive NOR gate 52, an OR gate 54, an AND gate 56, and an OR gate 58.
Multiplexer 48 has a first input for receiving true carry tc1, a second input for receiving required carry rc1, a select input for receiving mask m12, and an output. Exclusive OR gate 50 has a first
input for receiving signal k11, a second input for receiving propagate signal p1, and an output. Exclusive NOR gate 52 has a first input coupled to the output of multiplexer 48, a second input
coupled to the output of Exclusive OR gate 50, and an output. OR gate 54 has a first input for receiving mask signal m11, a second input coupled to the output of Exclusive NOR gate 52, and an output
providing signal z1. AND gate 56 has a first input for receiving signal k11b, which is the logical complement of signal k11, a second input for receiving signal p1, and an output. OR gate 58 has a
first input coupled to the output of AND gate 56, a second input for receiving generate signal g1, and an output for providing required carry signal rc0.
In operation, Exclusive OR gate 50 functions as an adder so that a zero (logic low) is output when signals p1 and k11 are both a one or both a zero. Otherwise a one is output by Exclusive OR gate 50. Multiplexer 48 provides the proper carry to the first input of Exclusive NOR gate 52. Exclusive NOR gate 52 responds by providing a logic high output if the carry (signal tc1 or rc1) and the output of Exclusive OR gate 50 are the same. This is a match, which is also commonly called a hit. If the inputs to Exclusive NOR gate 52 are different, a logic low is output, which indicates a miss. OR gate 54 provides hit signal z1 at a logic low, which indicates a miss, if both the output of Exclusive NOR gate 52 and mask m11 are a logic low. Mask m11 can force a hit by being a logic high. This would occur, for example, if the bit location corresponding to bit logic 41 does not have data that is relevant. AND gate 56 provides a logic high if both signal k11b and propagate signal p1 are a logic high. OR gate 58 provides required carry rc0 at a logic high, representing a one, if the output of AND gate 56 is a logic high or generate signal g1 is a logic high. Thus a carry is indicated for the next bit location if either the generate signal is a logic high or the stored logic state is a logic low and the propagate signal is a logic high.
Shown in FIG. 4 is a circuit using N and P channel transistors implementing bit logic 41. The P channel transistors are indicated by a small circle on the gate symbol.
Shown in FIG. 5 is a combination block diagram and logic diagram of propagate and generate circuit 12 comprising a propagate and generate block 60, a propagate and generate block 62, a propagate and
generate block 64, and a propagate and generate block 66. Propagate and generate block 62 is shown comprising an AND gate 68 and an Exclusive OR gate 70. AND gate 68 has a first input for receiving
signal a1, a second input for receiving signal b1, and an output for providing generate signal g1. Exclusive OR gate 70 has a first input for receiving signal a1, a second input for receiving signal b1, and an output for providing propagate signal p1. Generate signal g1 is provided as a logic high when both signals a1 and b1 are a logic high and otherwise is provided as a logic low. Propagate signal p1 is provided as a logic high when signals a1 and b1 are different and as a logic low when signals a1 and b1 are the same. Blocks 60, 64, and 66 operate in the same way with regard to the a and b signals they receive. Each of blocks 60, 62, 64, and 66 corresponds to a bit location in CAM 14. Block 62, for example, corresponds to bit location 1 in bitcell portions 34, 38, 42, and 46. Similarly, blocks 60, 64, and 66 correspond to bit locations 0, 2, and l, respectively. Blocks 60, 62, 64, and 66 each provide the propagate and generate signals to compare logic 32, 36, 40, and 44,
respectively, for the bit location to which they correspond.
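In code, one propagate and generate block reduces to two gates (an illustrative sketch with bit values 0/1):

```python
def propagate_generate(a, b):
    """Model of one propagate/generate block, e.g. block 62 (FIG. 5):
    the propagate bit is a XOR b, the generate bit is a AND b."""
    return a ^ b, a & b  # (p, g)
```

A carry is generated outright when both operand bits are one (g = 1) and is propagated onward when exactly one of them is one (p = 1).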
The combination of a propagate and generate block and a bit logic circuit effectively form an adder of a bit location of signals A and B and a comparison to that same bit location in stored entry K.
For example, bit logic 41 and propagate and generate block 62 correspond to bit location 1. Thus, propagate and generate block 62 and bit logic 41 together function to be equivalent to an add of
signals A and B that results in a value for bit location 1 and a compare of that value to the value of k11, which is the value stored in bit location 1 of bitcell portion 38. This function of propagate
and generate of block 62 and bit logic 41 can be considered a logic operation on a bit location. This logic operation can also be viewed as achieving this result by performing a full add of single
bits of signal A, signal B, and Kb (complement of K) at the particular bit location with the real carry signal for the next most significant bit location being the carry of the add. This logic
operation is performed on each bit location of a bitcell portion that is within the selected length for that bitcell portion. For example, if a bitcell portion includes bit locations 0-7, then the
logic operation is performed on each of bit locations 0-7, which is eight bit locations. These bit operations can be performed concurrently by the bit logic circuits, such as bit logic 39, 41, 43 and
45 shown in FIG. 2. The generation of the true carry is a little slower for the more significant bits, but the bit logic circuits are not in a chain in which a signal propagates through them
serially. Thus, they can function concurrently.
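The combined effect of the propagate/generate blocks and the bit logic circuits can be checked with a short model. The sum A + B equals the stored entry K exactly when, at every bit location, p XOR k equals the required carry produced by the adjacent less significant location, with a carry-in of zero at the bottom. The sketch below is illustrative only; the names are ours and, unlike the figures, bit 0 here is the least significant bit:

```python
def sum_matches(a, b, k, width):
    """Return True iff ((a + b) mod 2**width) == k, computed in the
    style of the compare logic: every bit location checks
    (p XOR k) against the required carry from the bit below."""
    hit = True
    rc = 0  # carry into the least significant bit
    for i in range(width):
        ai, bi, ki = (a >> i) & 1, (b >> i) & 1, (k >> i) & 1
        p, g = ai ^ bi, ai & bi         # propagate and generate
        hit = hit and ((p ^ ki) == rc)  # per-bit hit test
        rc = g | (p & (1 - ki))         # required carry for the next bit
    return hit
```

Each iteration uses only signals local to one bit location plus the required carry from its neighbour, which is why the hardware can evaluate all bit locations concurrently; the loop here is sequential only because software is.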
Shown in FIG. 6 is compare logic 80 that may be used in an alternative embodiment. In the case of system 10, the various compare logic circuits, such as compare logic 36, have the ability to define the length of an operand between any two bits. This can cause delays that may be excessive. Compare logic 80 is limited to defining the operand length on every fourth bit location. Compare logic 80 has bit logic for each bit location. Shown in FIG. 6 are bit logic 82, bit logic 84, bit logic 86, bit logic 88, bit logic 92, bit logic 94, bit logic 96, bit logic 98, and bit logic 102 corresponding to bit locations k0, k1, k2, k3, k4, k5, k6, k7, and k8, respectively. Each bit logic circuit provides the required carry to the next most significant bit location except at every fourth bit location, where the true carry may be used instead. Between every fourth bit logic is a size logic circuit. Shown in FIG. 6 are size logic 90, between bit logic 92 and bit logic 88 (the bit logic circuits corresponding to bit location 4 and bit location 3), and size logic 100, between bit logic 102 and bit logic 98. Bit logic 92 outputs required carry rc3 to size logic 90. Size logic 90 also receives true carry tc3. Size logic 90 also receives a mask signal m0 from the controller. In this case mask signal m0 indicates whether the size includes those locations and thus also whether the true carry should be coupled to bit logic 88. Size logic 90 outputs carry signal c3 to bit logic 88, which thus may be either true carry tc3 or required carry rc3 in response to mask m0. Size logic 100 similarly performs a selection between true carry tc7 and required carry rc7 for coupling as carry signal c7 to bit logic 98. At the point at which the boundary is set, all less significant bits are considered a hit. Shown in FIG. 6 is an AND gate 108 which has inputs connected to the hit/miss outputs z4, z5, z6, and z7 of the corresponding bit logic circuits and asserts an output if all of these hit/miss outputs are asserted. Also shown is an OR gate 106 that has an input coupled to the output of AND gate 108 and an input coupled to mask m0. If mask m0 is deasserted, meaning that the boundary does not include the previous locations, then OR gate 106 asserts an output indicating a hit for bit locations 4-7. That is to say, bit locations not within the length of the operand are forced to indicate a hit. The most significant four bits in this example are always considered to be within the operand length, so they only have a single AND gate 104 to indicate a hit or miss for those four bit locations.
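An 8-bit model with a single four-bit boundary illustrates the scheme (again an illustrative sketch with our own names, numbering bit 0 as least significant, opposite to the figures): when the low group is outside the operand length its hits are forced, and the boundary bit location takes the true carry, the real carry of a + b out of the low group, instead of the required carry.

```python
def boundary_compare(a, b, k, include_low_group):
    """8-bit compare with one 4-bit boundary, in the style of FIG. 6.
    If include_low_group is False, bit locations 0-3 are outside the
    operand: their hits are forced, and bit location 4 uses the true
    carry out of bit 3 rather than the required carry."""
    rc, low_hit = 0, True
    for i in range(4):                       # low 4-bit group
        ai, bi, ki = (a >> i) & 1, (b >> i) & 1, (k >> i) & 1
        p, g = ai ^ bi, ai & bi
        low_hit = low_hit and ((p ^ ki) == rc)
        rc = g | (p & (1 - ki))
    tc = ((a & 0xF) + (b & 0xF)) >> 4        # true carry out of bit 3
    carry = rc if include_low_group else tc  # size logic selection
    hit = low_hit or not include_low_group   # force hits outside length
    for i in range(4, 8):                    # high 4-bit group
        ai, bi, ki = (a >> i) & 1, (b >> i) & 1, (k >> i) & 1
        p, g = ai ^ bi, ai & bi
        hit = hit and ((p ^ ki) == carry)
        carry = g | (p & (1 - ki))
    return hit
```

With the low group included, this reduces to a full 8-bit compare of (a + b) against k; with it excluded, only the upper four sum bits are compared against the upper four bits of k.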
By now it should be appreciated that there has been provided a circuit comprising an input to receive a first operand, an input to receive a second operand, a first circuit for providing a length
indication, and a logic circuit for providing an indication of a match between a logical sum of the first operand and the second operand with a set of bits of a third operand. A length of the set of
bits is determined by the length indication. The first operand includes a first plurality of bits. The second operand includes a second plurality of bits. The logic circuit generates the indication
of the match by performing a plurality of bit logic operations. Each of the plurality of bit logic operations includes a logic operation on a bit of the first operand of a corresponding bit location,
a bit of a second operand of the corresponding bit location, and a bit of the third operand of the corresponding bit location to generate a result and comparing the result with a carry indication
generated from a logic operation on bits including bits of an immediately less significant bit location from the corresponding bit location. Each of the bit logic operations of the plurality of bit
logic operations is performed for a different corresponding bit location of a plurality of bit locations. The circuit may further comprise a first carry determination circuit, the first carry
determination circuit generating a first carry indication of at least one bit of carry bits of an add operation of the first operand and the second operand; and a selection circuit; wherein the
selection circuit selects, based on the length indication, one of a group consisting of a bit of the first carry indication and a second carry indication generated from a logic operation on one bit
of the first operand of a first bit location, one bit of the second operand of the first bit location, and one bit of the third operand of the first bit location, as the carry indication for the bit
logic operation of an adjacent less significant bit location to the first bit location. The circuit may be further characterized by, for the bit location, a bit logic operation of the plurality of
bit logic operations being for generating a result of a bit addition of a bit of the first operand, a bit of a second operand, and a bit of the third operand, and a comparison of the result with the
one of the group. The circuit may be further characterized by the bit logic operation for generating the result using a first representation of a bit of the third operand, wherein the logic operation
to generate the second carry indication uses a second representation of the bit of the third operand, wherein the second representation is complementary to the first representation. The circuit may
be further characterized by each of the plurality of bit logic operations performed by the logic circuit generating a bit result of a plurality of bit results, the logic circuit masks the plurality
of bit results as per the length indication. The circuit may further comprise a first carry determination circuit, the first carry determination circuit generating a first carry indication of at
least one bit of carry bits of an add operation of the first operand and the second operand; wherein for each bit location of a first subset of the plurality of bit locations, the carry indication is
selected, based on the length indication, from one of a group consisting of a bit of the first carry indication and a second carry indication generated from a logic operation on one bit of the first
operand of an immediately less significant bit location to each bit location of the first subset, one bit of the second operand of the immediately less significant bit location to each bit location
of the first subset, and one bit of the third operand of the immediately less significant bit location to each bit location of the first subset; and wherein for each bit of a second subset of the
plurality of bit locations, the carry indication is generated from a logic operation on one bit of the first operand of an immediately less significant bit location to each bit location of the second
subset, one bit of the second operand of the immediately less significant bit location to each bit location of the second subset, and one bit of the third operand of the immediately less significant
bit location to each bit location of the second subset. The circuit may be further characterized by a bit location of the first subset occurring at every fourth bit location of the plurality of bit
locations wherein intervening three bit locations are bit locations of the second subset. The circuit may be further characterized by the second subset having a greater number of bit locations of the
plurality of bit locations than the first subset. The circuit may be further characterized by the logic circuit being implemented in a content addressable memory, the third operand being stored in a
storage location of a plurality of storage locations of the content addressable memory, the logic circuit generating a plurality of indications wherein each indication of the plurality is an
indication of a match between a logical sum of the first operand and the second operand with a set of bits of a value from each storage location of the plurality of storage locations, wherein the
length of the set of bits for each storage location is determined by the length indication. The circuit may be further characterized by the length indication for a first storage location of the
plurality of storage locations being capable of indicating a different length than for a second storage location of the plurality of storage locations.
Also described is a method of comparing a logical sum of a first operand and second operand with a third operand. The method includes receiving a first operand by logic circuitry. The method further
includes receiving a second operand by the logic circuitry. The method further includes receiving a length indication by the logic circuitry. The method further includes generating, by the logic
circuitry, an indication of a match between a logical sum of the first operand and the second operand with a set of bits of a third operand, wherein the length of the set of bits is determined by the
length indication. The generating includes performing a plurality of bit logic operations by the logic circuitry, wherein each bit logic operation of the plurality of bit logic operations corresponds
to a bit position of a plurality of bit positions, wherein each of the plurality of bit logic operations includes performing a logic operation on a bit of the first operand of a corresponding bit
location of the plurality of bit locations, a bit of a second operand of the corresponding bit location, and a bit of the third operand of the corresponding bit location to generate a result and
comparing the result with a carry indication generated from a logic operation on bits including bits of an immediately less significant bit location from the corresponding bit location. The method
may further comprise generating a first carry indication of at least one bit of a plurality of carry bits of an add operation of the first operand and the second operand; selecting, based on the
length indication, one of a group consisting of a bit of the first carry indication and a second carry indication generated from a logic operation on one bit of the first operand of a first bit
location, one bit of the second operand of the first bit location, and one bit of the third operand of the first bit location; wherein for a next significant bit location to the first bit location, the carry indication used for the bit logic operation of the plurality of bit logic operations is the selected one of the group. The method may be further characterized by the logic operation for
generating the second carry indication including an addition operation of the one bit of the first operand of a first bit location, the one bit of the second operand of the first bit location, and
the one bit of the third operand of the first bit location. The method may be further characterized by the performing a plurality of bit logic operations generating a plurality of bit results; and
the generating an indication of a match including masking the plurality of the bit results as per the length indication. The method may further comprise generating a first carry indication of at
least one bit of a plurality of carry bits of an add operation of the first operand and the second operand; wherein for each bit location of a first subset of the plurality of bit locations, the
carry indication is selected, based on the length indication, from one of a group consisting of a bit of the first carry indication and a second carry indication generated from a logic operation on
one bit of the first operand of an immediately less significant bit location to each bit location of the first subset, one bit of the second operand of the immediately less significant bit location
to each bit location of the first subset, and one bit of the third operand of the immediately less significant bit location to each bit location of the first subset; and wherein for each bit of a
second subset of the plurality of bit locations, the carry indication is generated from a logic operation on one bit of the first operand of an immediately less significant bit
location to each bit location of the second subset, one bit of the second operand of the immediately less significant bit location to each bit location of the second subset, and one bit of the third
operand of the immediately less significant bit location to each bit location of the second subset.
Described also is a content addressable memory that includes an input to receive a first operand, an input to receive a second operand, a plurality of storage locations, and logic circuitry for
providing a plurality of indications where each indication of the plurality of indications corresponds to a storage location of the plurality of storage locations, wherein each indication of the
plurality of indications is an indication of a match between a logical sum of the first operand and the second operand with a set of bits of a value from a corresponding storage location of the
plurality of storage locations. For each indication of the plurality of indications, the logic circuit generates the indication of the match by performing a plurality of bit logic operations,
wherein each of the plurality of bit logic operations includes a logic operation on a bit of the first operand of a corresponding bit location, a bit of a second operand of the corresponding bit
location, and a bit of the corresponding bit location of a value from the corresponding storage location to generate a result and comparing the result to a carry indication generated from a logic
operation on bits including bits of an immediately less significant bit location from the corresponding bit location, wherein each of the bit operations of the plurality is performed for a different
corresponding bit location of a plurality of bit locations. The content addressable memory may be further characterized by, for each indication of the plurality of indications, each of the plurality
of bit logic operations including a bit addition of the bit of the first operand of a corresponding bit location, the bit of a second operand of the corresponding bit location, and the bit of the
corresponding bit location of the value from the corresponding storage location. The content addressable memory may further comprise a first circuit for providing a length indication, wherein for each
indication of the plurality of indications, the bit length of the set of bits is determined by the length indication. The content addressable memory may further comprise a first carry determination
circuit, the first carry determination circuit generating a first carry indication of at least one bit of carry bits of an add operation of the first operand and the second operand, wherein for each
indication of the plurality of indications, for each bit location of at least some bit locations of the plurality of bit locations, the carry indication is selected, based on the length indication,
from one of a group consisting of a bit of the first carry indication and a second carry indication generated from a logic operation on one bit of the first operand of an immediately less significant
bit location to each bit location, one bit of the second operand of the immediately less significant bit location to each bit location, and one bit of a value from a corresponding storage location of
the immediately less significant bit location to each bit location. The content addressable memory may further comprise a first carry determination circuit, the first carry determination circuit
generating a first carry indication of at least one bit of carry bits of an add operation of the first operand and the second operand, wherein for each indication of the plurality of indications, for
each bit location of a first subset of the plurality of bit locations, the carry indication is selected, based on the length indication, from one of a group consisting of a bit of the first carry
indication and a second carry indication generated from a logic operation on one bit of the first operand of an immediately less significant bit location to each bit location of the first subset, one
bit of the second operand of the immediately less significant bit location to each bit location of the first subset, and one bit of a value from a corresponding storage location of the immediately
less significant bit location to each bit location of the first subset, and for each bit of a second subset of the plurality of bit locations, the carry indication is generated from a logic operation
on one bit of the first operand of an immediately less significant bit location to each bit location of the second subset, one bit of the second operand of the immediately less significant bit
location to each bit location of the second subset, and one bit of a value from a corresponding storage location of the immediately less significant bit location to each bit location of the second subset.
Because the apparatus implementing the present invention is, for the most part, composed of electronic components and circuits known to those skilled in the art, circuit details will not be explained
in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or
distract from the teachings of the present invention.
Although the invention is described herein with reference to specific embodiments, various modifications and changes can be made without departing from the scope of the present invention as set forth
in the claims below. For example, a different circuit may be used for implementing the logic shown in FIG. 3. Accordingly, the specification and figures are to be regarded in an illustrative rather
than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. Any benefits, advantages, or solutions to problems that are described
herein with regard to specific embodiments are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
The term “coupled,” as used herein, is not intended to be limited to a direct coupling or a mechanical coupling.
Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be
construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing
only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of
definite articles.
Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate
temporal or other prioritization of such elements.
Physics Forums - View Single Post - Is force of gravity dependent on speed of one of the body?
I am not sure whether this is what you are looking for, but here's a paper discussing Einstein’s paper “Explanation of the Perihelion Motion of Mercury from General Relativity Theory”:
You will recognize the simplest post-Newtonian terms (for this special case).
I am not sure if this is what you are looking for; I guess instead of "slow motion + strong gravitational field" you are interested in "fast motion + weak gravitational field"; anyway - the starting
point is always the geodesic e.o.m.
Stability experimental
Maintainer byorgey@cis.upenn.edu
The Species type class, which defines a small DSL for describing combinatorial species. Other modules in this library provide specific instances which allow computing various properties of
combinatorial species.
The Species type class
class C s => Species s where
The Species type class. Note that the Differential constraint requires s to be a differentiable ring, which means that every instance must also implement instances for Algebra.Additive (the species 0
and species addition, i.e. disjoint sum), Algebra.Ring (the species 1 and species multiplication, i.e. partitional product), and Algebra.Differential (species differentiation, i.e. adjoining a
distinguished element).
Minimal complete definition: singleton, set, cycle, o, cartesian, fcomp, ofSize.
Note that the o operation can be used infix to suggest common notation for composition, and also to be read as an abbreviation for "of", as in "top o' the mornin'": set `o` nonEmpty sets.
singleton :: s
The species X of singletons. Puts a singleton structure on an underlying label set of size 1, and no structures on any other underlying label sets. x is also provided as a synonym.
set :: s
The species E of sets. Puts a singleton structure on any underlying label set.
cycle :: s
The species C of cyclical orderings (cycles/rings).
linOrd :: s
The species L of linear orderings (lists). Since linear orderings are isomorphic to cyclic orderings with a hole, we may take linOrd = oneHole cycle as the default implementation; linOrd is included
in the Species class so it can be special-cased for enumeration.
subset :: s
The species p of subsets is given by subset = set * set. subset is included in the Species class so it can be overridden when enumerating structures: by default the enumeration code would generate a
pair of the subset and its complement, but normally when thinking about subsets we only want to see the elements in the subset. To explicitly enumerate subset/complement pairs, you can use set * set
ksubset :: Integer -> s
Subsets of size exactly k, ksubset k = (set `ofSizeExactly` k) * set. Included with a default definition in the Species class for the same reason as subset.
element :: s
Structures of the species e of elements are just elements of the underlying set, element = singleton * set. Included with a default definition in the Species class for the same reason as subset and ksubset.
o :: s -> s -> s
Partitional composition. To form all (f `o` g)-structures on the underlying label set U, first form all set partitions of U; for each partition p, put an f-structure on the classes of p, and a
separate g-structure on the elements in each class.
(><) :: s -> s -> s
Cartesian product of two species. An (f >< g)-structure consists of an f-structure superimposed on a g-structure over the same underlying set.
(@@) :: s -> s -> s
Functor composition of two species. An (f @@ g)-structure consists of an f-structure on the set of all g-structures.
ofSize :: s -> (Integer -> Bool) -> s
Only put a structure on underlying sets whose size satisfies the predicate.
ofSizeExactly :: s -> Integer -> s
Only put a structure on underlying sets of the given size. A default implementation of ofSize (==k) is provided, but this method is included in the Species class as a special case since it can be
more efficient: we get to turn infinite lists of coefficients into finite ones.
nonEmpty :: s -> s
Don't put a structure on the empty set. The default definition uses ofSize; included in the Species class so it can be overriden in special cases (such as when reifying species expressions).
rec :: ASTFunctor f => f -> s
'rec f' is the least fixpoint of (the interpretation of) the higher-order species constructor f.
omega :: s
Omega is the pseudo-species which only puts a structure on infinite label sets. Of course this is not really a species, but it is sometimes a convenient fiction to use Omega to stand in for recursive
occurrences of a species.
Species CycleIndex An interpretation of species expressions as cycle index series. For the definition of the CycleIndex type, see Math.Combinatorics.Species.Types.
Species GF
Species EGF
Species ESpeciesAST
Species SpeciesAST Species expressions are an instance of the Species class, so we can use the Species class DSL to build species expression ASTs.
Convenience methods
oneHole :: Species s => s -> s
A convenient synonym for differentiation. oneHole f-structures look like f-structures on a set formed by adjoining a distinguished "hole" element to the underlying set.
It can be grammatically convenient to define plural versions of species as synonyms for the singular versions. For example, we can use set `o` nonEmpty sets instead of set `o` nonEmpty set.
Derived operations
Some derived operations on species.
pointed :: Species s => s -> s
Intuitively, the operation of pointing picks out a distinguished element from an underlying set. It is equivalent to the operator x d/dx: pointed s = singleton * differentiate s.
Derived species
Some species that can be defined in terms of the primitive species operations.
octopus :: Species s => s
An octopus is a cyclic arrangement of lists, so called because the lists look like "tentacles" attached to the cyclic "body": octopus = cycle `o` nonEmpty linOrds.
simpleGraph :: Species s => s
Simple graphs (undirected, without loops). A simple graph is a subset of the set of all size-two subsets of the vertices: simpleGraph = subset @@ (ksubset 2).
directedGraph :: Species s => s
A directed graph (with loops) is a subset of all pairs drawn (with replacement) from the set of vertices: subset @@ (element >< element). It can also be thought of as the species of binary relations.
Springfield, PA Prealgebra Tutor
Find a Springfield, PA Prealgebra Tutor
...I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely
score 800/800 on practice tests.
19 Subjects: including prealgebra, calculus, statistics, geometry
...My family of now five and I reside in Mullica Hill. My husband and I have a five year old (going on 20), a three year old and a one year old, along with our first born, our dog. As a teacher,
I believe in a balanced based approach between the "new math" and traditional teaching methods.
12 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I performed well on the SAT (800 point scale), scoring over a 600 in 1995. I retook it recently for a tutoring agency and scored well enough to teach their program for SAT prep. All teachers
have had some training in the writing section of the SAT.
17 Subjects: including prealgebra, reading, writing, English
...I have tutored students in this area before. Furthermore, I have helped develop curriculum for students K-10 for a summer camp before, focusing on math and language arts. I am stronger in
Molecular Biology than general biology since my undergraduate degree was in Biochemistry and Molecular Biology.
12 Subjects: including prealgebra, chemistry, geometry, ESL/ESOL
For the last 5 years I have been at Temple University studying for my PhD in organic chemistry. Two of those years I was involved in bringing science to local high school classrooms. I taught
physical science, biology, and chemistry.
6 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
Related Springfield, PA Tutors
Springfield, PA Accounting Tutors
Springfield, PA ACT Tutors
Springfield, PA Algebra Tutors
Springfield, PA Algebra 2 Tutors
Springfield, PA Calculus Tutors
Springfield, PA Geometry Tutors
Springfield, PA Math Tutors
Springfield, PA Prealgebra Tutors
Springfield, PA Precalculus Tutors
Springfield, PA SAT Tutors
Springfield, PA SAT Math Tutors
Springfield, PA Science Tutors
Springfield, PA Statistics Tutors
Springfield, PA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Springfield_PA_prealgebra_tutors.php","timestamp":"2014-04-18T18:50:28Z","content_type":null,"content_length":"24252","record_id":"<urn:uuid:b3681666-5bf9-4641-a9ae-4de8eddfc936>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Education and Nemeth Code
Return to Archive
from Fall 97 issue
Math Education and Nemeth Code: Nemeth & Adventitiously Blind High School Students
Editor's Note: Ever wonder what it would be like to visit the TSBVI Website? Here's a taste of that experience, even if your modem is shut down for the day. Learn a little about math education and
the Nemeth Code. If after reading this article you would like to know more simply log on to http://www.tsbvi.edu.
A parent writes
I have been working with my daughter on math, and I know math reasonably, but it is visual in nature and a challenge to know the best way to present it. My daughter is not exactly "resisting" Nemeth,
but rather until last year, she was able to pretty much do everything in a print medium, but lost more of her vision making that impossible. She went to a residential school for the blind where she
learned Braille reasonably efficiently, and she knows Nemeth to "read" it, but writing it is often slow and she makes occasional mistakes -- which, of course, makes it difficult.
The school she is in now is a "regular" school that has no experience in dealing with blind students. They have provided the math text (as well as her other textbooks) in braille.
The problem comes in attending classes, where blackboard work to the class is effectively useless, and taking tests, etc. where translating back and forth between braille and print to have effective
communication between her and the teacher is proving very difficult. She has traditionally done everything in her head in math (she can do amazingly complex calculations in her head) but obviously,
at some point that is an unworkable strategy.
She likes math, she is very good at it, and would like to continue in it. My goal, I suppose, is to try to find the best way to go about this ... should we concentrate on Nemeth alone? Or are there
other technologies that might make this easier? I, of course, don't have a clue, and rather than "reinventing the wheel" here, I am hoping to research to find the best way for her to achieve the best
she can.
Susan Osterhaus's Response
I teach secondary mathematics at the Texas School for the Blind and Visually Impaired in Austin. In my opinion, learning to read and write Nemeth Code is absolutely essential for your daughter to be
able to continue in higher mathematics. I am surprised that she is better able to read than write. My adventitiously blind students are usually faster at writing than reading. Of course, they do all
of their homework for me in Nemeth, so I guess they get LOTS of practice! They use Perkins braille writers and can therefore easily read their own work - especially with all those steps in Algebra.
They use either an abacus or a talking calculator to perform long computations. Using the braille writer for computations is too time-consuming. Previously, our standardized tests did not allow any
students to use calculators. Now, the TAAS (our state test required for a high school diploma), SAT, and ACT are allowing braille students (and sometimes all students) to use calculators. I still
value the use of the abacus as a braille student's equivalent to paper and pencil for a sighted student.
Here in Texas, a blind student in elementary or secondary school can obtain instruction in Nemeth Code. After high school graduation, they are on their own, and I get frequent calls from college
students and their professors on how they can learn Nemeth Code. There are few opportunities for blind college students to learn Nemeth code. So, try this as an incentive for your daughter to learn
it now while she still can - assuming of course that she would like to go to college.
I am a user of technology for preparing materials for my students and for correspondence, but the field is way behind for blind individuals, especially in the areas of math, science, and engineering.
Although it is easy to translate print into Grade II literary braille, research is still continuing on how to get from mathematical print equations to Nemeth Code and vice versa. I use MegaDots for
my worksheets, tests, etc., but it can only do "baby" Nemeth. I still have to braille all my algebra equations, etc. into the computer after I convert the keyboard to six-key pad mode. (I am
currently beta testing their advanced Nemeth Translator.) For complete information on the project visit Raised Dots' Web site. However, I do allow one type of technology, if the braillewriter is not
acceptable in the mainstream classroom. A few of my students have used a Braille-Lite which has one row of refreshable braille. The student doesn't use the translation mode and simply brailles in
Nemeth Code and outputs in Nemeth. However, they can always go back a line and reread their last step as they are progressing through an algebra equation or a trig identity. The key features here are
that it is a braille device and it has a row of refreshable braille. Other manufacturers have similar devices. I do not advocate the Braille `N Speak (made by the same company) as the student only
receives voice-output as they make entries into the equipment.
There are many tools, aids, and supplies for teaching math to blind students, and I hope your daughter has had (and will continue to have) the opportunity to use them. Does she know how to graph on a
number line? Does she know how to graph on a rubber graph board (Graphic Aid for Mathematics by APH) or raised line graph paper on a cork board independently? Does she know how to measure an angle
using a braille protractor (modified goniometer)? Can she (or will she) learn how to do constructions in Geometry using a braille compass and straightedge? Is she provided manipulatives, especially
in Geometry?
An opposing view
Hi. I have been totally blind from birth. I remember math being one of the most difficult subjects because of its visual nature. There are a couple suggestions I would have to help deal with this
problem. First, it is my opinion that Nemeth code is an absolute nightmare. It looks like jumbled up nonsense under the fingertips. I took a course just so I could learn to read my math books, and it
was still ridiculously difficult. I realize this is going to stir up some controversy, but I feel that private tutoring in math is the best way to approach this, and it gives your daughter the best
chance for really understanding the concepts. I recommend the use of what is known as a raised line drawing kit to help your daughter attempt to visualize how math problems are arranged. This is
particularly important when dealing with fractions. You can obtain the raised line drawing kits from suppliers of blindness-related equipment. I learned the shape of the numbers so that sighted folks
could demonstrate concepts for me with the raised line drawing kit. There is also something called a cube slate which also can be helpful. I don't know if the cube slates are sold anymore, but they
have cubes with all the braille number combinations and a rubber board so that the cubes can be arranged to help keep track of what one is doing. Maybe a combination of these tools would be the best approach.
Susan replies
I'm sorry to hear that you had such a negative reaction to Nemeth Code. I do not find it to be a "jumbled up nonsense under the fingertips"; on the other hand, I think for the most part that it is
very logical, systematic, and an absolute miracle for braille readers wishing to continue in higher mathematics. I am not a tactual reader though. As a math teacher with visually impaired students, I
taught myself to read Nemeth Code visually (and braille it) out of necessity to be able to teach my students. There were no courses at the university in Nemeth above the basic numbers and operations,
and I needed to be able to teach Pre-Algebra, Algebra I, Informal Geometry, Geometry, Algebra II, Math of Money, Trigonometry, etc. As I would introduce each new print mathematical symbol, the
students and I would learn the corresponding Nemeth symbol; as I said earlier, I really learned to appreciate the logic of why Dr. Nemeth did what he did. Perhaps the key here is that students learn
Nemeth Code most easily if they learn each new symbol as they progress through the mathematics. Learning Nemeth as a separate course from mathematics is as logical as a sighted person learning all
the print mathematical symbols in a separate course. However, sometimes lack of time necessitates the Nemeth Code class.
I do agree that tactile graphics made using the tactile graphics kit by APH can be extremely useful - especially when created by certain people more artistic than I am, such as the Region IV Service
Center in Houston, Texas. I was on a panel of experts called in to help facilitate the improvement of such graphics for our TAAS (state test required to graduate from high school) and for our math
I do not like the graphics produced from the Sewell raised line drawing kits, except for emergency situations. They are too flimsy when using the plastic wrap type film that comes with the kit.
However, when a piece of braille paper is placed on the drawing board and a tracing wheel and braille compass are used along with a straightedge, even I (no artist) can make an excellent quick-fix
graphic that any Math teacher (non-VI certified) can use to communicate with a blind student.
If you have access to a stereocopier machine (newer versions are less expensive), you can transform black-lined print graphics into raised line drawings within a matter of seconds. I have also had
great success using sturdy manipulatives to introduce many math concepts.
A successful blind Nemeth Code user replies
Actually, I had no trouble with the Nemeth code at all. I was first introduced to it in second or third grade. (When do we start doing math these days?) Anyway, my itinerant teacher did not know
Nemeth at all, so it was up to me to learn it. And learn it I did, as I went along. I had very little difficulty with it, and math in general was no trouble (until I reached trig in 12th grade).
Algebra was only minimally annoying with the graphed equations, but trig has lots and lots of them, and I'm sorry to say, that is the first math class I did not get at least a B in. <sigh> Oh well.
That's ok though, because if I need something like that done now, I just use my computer. *grin* Well, guess that's it. Nemeth isn't all that bad, it just takes some time. It's actually not all that
different from regular braille (whatever that is) and I found it very easy to learn.
From a Network Specialist in a data communications group
I read your messages to the list with much interest. I fully agree with your statements about the Nemeth Code and wonder what sort of educational hiccough occurred which broke the learning process
for the person who did not do well with it.
I find it alarming and totally unnecessary that so much of the blindness community seems to think that science and math are to be avoided at all possible cost. There certainly are problems in
communicating mathematical ideas using tactile methods, but it is sure not impossible by any means. I know that there are blind engineers and people should think of at least one blind mathematician
every time they use natural logarithms. There is just no excuse for a blind kid graduating from high school without even having had Algebra.
Yet another supportive user
I really enjoyed your messages! Would you consider giving a summer crash course in Nemeth and Math. I've done the Hadley course, read the Bana computer code, but really have little confidence in my
math skills such as Algebra, and the stats I took in Grad school. You ought to consider a math camp for adult blind I know you'd get a result. I'd come!
A returning student
I lost my sight 7 years ago as a result of diabetic retinopathy. In January I will be returning to school at the University to pursue simultaneous bachelor's and master's degrees in computer science
(I already have about 3/4 of my EE degree, but haven't been to school in over 15 years), and for the first couple of semesters I will be concentrating mostly on my math courses. After talking with
many people about this, I have decided to approach this by using Nemeth braille. I have talked to a few who have managed to "pass" their math requirements without braille, but most of them admit
that it was a struggle, and once the course was completed, they quickly forgot about it. I want more than that; I want mastery, and I'm convinced that braille is the only way to go to get to this
level. In case you're wondering, yes, I do read grade II braille, and do have enough sensation in my fingers to do the job (not very fast, but that will come with more practice).
Once again, I want to say thanks for your positive approach to math and technology for blind students: the more things like this that I read, the more convinced I am that I am making the right choice.
Please complete the comment form or send comments and suggestions to: Jim Allan (Webmaster-Jim Allan)
Last Revision: July 30, 2002
|
{"url":"http://www.tsbvi.edu/seehear/fall97/math.htm","timestamp":"2014-04-20T23:59:27Z","content_type":null,"content_length":"15789","record_id":"<urn:uuid:53e00685-8b40-4cc7-8b97-08f60548fb27>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wind flutter; what is the reduced frequency, K?
I'm trying to do a 2D-model of a bridge section subjected to wind, i.e. the bridge deck can have angular and vertical displacement due to the wind. I'm having some problem with understanding the
The equations that I use to describe the aerodynamic lift and moment are (sorry for the messy equations...):
Lh = 1/2 * rho * U^2 * B * [K*(H1*)*h_prim/U + K*(H2*)*B*alpha_prim/U + K^2*(H3*)*alpha + K^2*(H4*)*h/B]
M_alpha = 1/2 * rho * U^2 * B * [K*(A1*)*h_prim/U + K*(A2*)*B*alpha_prim/U + K^2*(A3*)*alpha + K^2*(A4*)*h/B]
Where rho is the air density, U is the wind speed, B is the bridge deck width, K is the reduced frequency, Hi* and Ai* are the flutter coefficients, and alpha and h are the angular and vertical displacements (h_prim and alpha_prim being their time derivatives).
K = omega*B/U, where U is the wind speed, B is the width of the bridge deck and omega is the circular frequency (omega = 2*pi*n, with n the frequency of oscillation).
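For concreteness, here is a short Python sketch (with entirely made-up numbers: a B = 30 m deck in a U = 40 m/s wind, a 0.10 Hz vertical mode and a 0.25 Hz torsional mode) showing the two candidate K values, one per mode; which omega to use is exactly the question below:

```python
import math

def reduced_frequency(n_hz, B, U):
    # K = omega * B / U, with omega = 2 * pi * n
    return 2 * math.pi * n_hz * B / U

B, U = 30.0, 40.0                        # deck width [m], wind speed [m/s]
K_h = reduced_frequency(0.10, B, U)      # vertical (heave) mode
K_alpha = reduced_frequency(0.25, B, U)  # torsional mode
print(K_h, K_alpha)  # ~0.471 and ~1.178
```

For context only (general flutter theory, not an answer specific to this thread): in classical coupled-flutter analysis the two degrees of freedom oscillate together at a single flutter frequency at the critical wind speed, so a single K evaluated at that common frequency is normally used in both force equations.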
My question is then; what is omega (or n)? Since it is a 2D-model of the bridge it can oscillate in either vertical direction or angular. Is omega connected to these oscillations? And if so, how? One
idea that I thought of was to calculate 2 different K (one for h and one for alpha), this would solve my problem, but this approach has not been mentioned in any of the aeroelasticity books I've
If anyone can answer my question or have any thoughts around it I would very much appreciate it.
Thank you,
|
{"url":"http://www.physicsforums.com/showthread.php?t=345918","timestamp":"2014-04-18T03:15:56Z","content_type":null,"content_length":"20798","record_id":"<urn:uuid:9bde97b7-2315-49d1-824b-a3eb23dbdcee>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search
Note that the impedance of the terminated string, seen from one of its endpoints, is not the same thing as the wave impedance R0 of the string itself. Since the string is terminated, there are reflections; they must be included in the impedance calculation, giving it an imaginary part. We may say that the impedance has a ``reactive'' component. The driving-point impedance of a rigidly terminated string is ``purely reactive,'' and may be called a reactance (§7.1). If F(s) denotes the Laplace transform of the force at the driving-point of the string and V(s) the Laplace transform of the velocity, then the driving-point impedance is given by (§7.1)

R(s) = F(s) / V(s).

Because the rigid termination reflects each traveling force wave back to the driving point after a round-trip delay of T seconds, this becomes

R(s) = R0 (1 + e^{-sT}) / (1 - e^{-sT}) = R0 coth(sT/2),

where T denotes the period of string vibration. Then on the frequency axis s = j*omega,

R(j*omega) = -j R0 cot(omega*T/2).

Thus, the driving-point impedance of a rigidly terminated string is purely reactive (imaginary), with alternating poles and zeros along the frequency axis, as shown in Fig. 7.1 below.
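A small numeric sketch (my addition, assuming unit wave impedance R0 and unit loop delay T, and using the standard rigidly terminated driving-point impedance R(s) = R0 (1 + e^{-sT}) / (1 - e^{-sT})) confirms that on the frequency axis the impedance is purely imaginary:

```python
import cmath
import math

def driving_point_impedance(omega, R0=1.0, T=1.0):
    # R(s) = R0 * (1 + e^{-sT}) / (1 - e^{-sT}) = R0 * coth(sT/2),
    # evaluated on the frequency axis s = j*omega
    s = 1j * omega
    return R0 * (1 + cmath.exp(-s * T)) / (1 - cmath.exp(-s * T))

z = driving_point_impedance(1.0)
# purely reactive: zero real part, imaginary part -cot(omega*T/2)
print(z.real, z.imag)  # ~0.0 and ~-1.8305
```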
|
{"url":"https://ccrma.stanford.edu/~jos/pasp/Terminated_String_Impedance.html","timestamp":"2014-04-20T17:17:26Z","content_type":null,"content_length":"13120","record_id":"<urn:uuid:2916e3f2-1761-4f1e-b8ab-b5092afcd8d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Implementation of Latent Variable Model with SEM Builder
Re: st: Implementation of Latent Variable Model with SEM Builder
From Stas Kolenikov <skolenik@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Implementation of Latent Variable Model with SEM Builder
Date Sat, 14 Apr 2012 11:54:15 -0500
On Sat, Apr 14, 2012 at 12:55 AM, Samantha Molbach
<samantha.molbach@gmail.com> wrote:
> I need some help in implementing a Structural Equation Model in Stata
> 12. I want to create a health index according to Bound (1999): "The
> dynamic effects of health on the labor force transitions of older
> workers".
> I have the following variables available: Self-assessed health on a
> five-point scale (SAH), Age, Education, different objective measures
> of health such as blood pressure (Blood), chronic diseases (chronic)
> and physical limitations (limit).
> The theoretical model is the following:
> H = X*ß1 + Z*ß2 + u
> with H=true health; X= socioeconomic variables; Z=objective health
> measures; u=error term
This is a regression, you don't need SEM for this.
> I do not observe the true health, but only the self-assessed health
> which includes a reporting error e, thus:
> SAH = H + e
> SAH = X*ß1 + Z*ß2 + v (with v=u+e)
This is still a regression with a single response variable. You don't
need SEM for this.
> I estimate the last equation via SEM the following way:
> sem (age -> sah) (education -> sah) (blood -> sah) (chronic -> sah)
> (limit -> sah)
This is still a regression... am I repeating myself???
> Then, I'm stuck - how do I get back to the first equation and model
> the health indicator H? Also, can I estimate an ordered Probit model
> in SEM?
OK, this is an ordered probit regression, then. Note that I am not
repeating myself here! Run -oprobit- and -predict, xb- if you really
want to get some sort of continuous scores for the health variable.
However, just using this information like that will not lead you
terribly far; you will probably have a somewhat finer gradation of
your health status variable, but the amount of measurement error in it
is not quantifiable. There is no way to break down the total error v =
u + e into individual components.
What you may want to consider instead is a MIMIC (multiple indicators
- multiple causes) model, in which the true health is determined by
demographics (and health behaviors like exercise level and smoking and
what not which would have been nice to have), and has objective
measures as indicators. Ignoring the ordinal nature of SAH, your model
will then be
sem (age educ gender smoke exercise -> Health) (Health -> SAH blood
chronic limit)
I suspect that -chronic- and -limit- are also categorical though. A
more appropriate tool to account for the categorical nature of the
data is -gllamm-. I think I mentioned this in my talk on (pre-sem)
ways of analyzing structural equation models -- see
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-04/msg00612.html","timestamp":"2014-04-17T16:15:15Z","content_type":null,"content_length":"10529","record_id":"<urn:uuid:a2f0b78f-e56c-40e7-9986-c8ab826c36de>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
David Ayala
gmail account: davidayala.math
office: 464C, KAP
Welcome. As of August 2012, I am a postdoc in mathematics at University of Southern California supported in part by an NSF grant. Previously, I was a postdoc at Harvard University as well as the
University of Copenhagen. I received my PhD through Stanford University under the supervision of Ralph Cohen.
See my Curriculum Vitæ (5 pages) for specifics.
My background is in algebraic topology with an emphasis on the homotopy theory of manifolds. My research concerns the differential topology of locally defined entities, such as manifolds, and their
moduli as informed by conceptual and computational techniques from (derived) algebraic geometry and homotopy theory. Slightly more specifically, my work characterizes invariants of things like
manifolds or links which are obtained from algebraic or categorical data -- the goal is to facilitate the construction of a wealth of robust invariants, equipped with desired local-to-global
functorialities, with an eye toward calculations among them. Instances of such local invariants include sheaves, cosheaves, motives, and topological field theories; and proposed examples of such
invariants include the Turaev-Viro TFT invariants, versions of Heegaard Floer homology, Rozansky-Witten and Khovanov homology, and finite type knot invariants.
I am not teaching this academic year. In the past I have taught two calculus courses, a basic topology course, and a course on the homotopy theory of manifolds, as well as various teaching
assistantships. See my CV for specifics.
I supervised a master's project of Casper Guldberg's (University of Copenhagen) on basic constructions in quasi-categories (29 pages).
I supervised a side PhD project of Emanuele Dotto's (University of Copenhagen) in which he used simplicial techniques to consider h-principles with boundary conditions (33 pages).
Other stuff:
Some juggling videos
Some photos of outrageous natural features
Some modest comics
Mojave Run
The University of Southern California does not screen or control the content on this website and thus does not guarantee the accuracy, integrity, or quality of such content. All content on this
website is provided by and is the sole responsibility of the person from which such content originated, and such content does not necessarily reflect the opinions of the University administration or
the Board of Trustees
|
{"url":"http://www-scf.usc.edu/~davidaya/","timestamp":"2014-04-19T04:24:05Z","content_type":null,"content_length":"3809","record_id":"<urn:uuid:0578ca41-de9e-4da1-8cf0-3fa026da7140>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Create 3-D stream tube plot
streamtube(X,Y,Z,U,V,W,startx,starty,startz)
streamtube(U,V,W,startx,starty,startz)
streamtube(vertices,X,Y,Z,divergence)
streamtube(vertices,divergence)
streamtube(vertices,width)
streamtube(vertices)
streamtube(...,[scale n])
streamtube(axes_handle,...)
h = streamtube(...)
streamtube(X,Y,Z,U,V,W,startx,starty,startz) draws stream tubes from vector volume data U, V, W.
The arrays X, Y, and Z, which define the coordinates for U, V, and W, must be monotonic, but do not need to be uniformly spaced. X, Y, and Z must have the same number of elements, as if produced by meshgrid.
startx, starty, and startz define the starting positions of the streamlines at the center of the tubes. The section Specifying Starting Points for Stream Plots provides more information on defining
starting points.
The width of the tubes is proportional to the normalized divergence of the vector field.
streamtube(U,V,W,startx,starty,startz) assumes X, Y, and Z are determined by the expression
[X,Y,Z] = meshgrid(1:n,1:m,1:p)
where [m,n,p] = size(U).
streamtube(vertices,X,Y,Z,divergence) assumes precomputed streamline vertices and divergence. vertices is a cell array of streamline vertices (as produced by stream3). X, Y, Z, and divergence are 3-D
streamtube(vertices,divergence) assumes X, Y, and Z are determined by the expression
[X,Y,Z] = meshgrid(1:n,1:m,1:p)
where [m,n,p] = size(divergence).
streamtube(vertices,width) specifies the width of the tubes in the cell array of vectors, width. The size of each corresponding element of vertices and width must be equal. width can also be a
scalar, specifying a single value for the width of all stream tubes.
streamtube(vertices) selects the width automatically.
streamtube(...,[scale n]) scales the width of the tubes by scale. The default is scale = 1. When the stream tubes are created, using start points or divergence, specifying scale = 0 suppresses
automatic scaling. n is the number of points along the circumference of the tube. The default is n = 20.
streamtube(axes_handle,...) plots into the axes object with the handle axes_handle instead of into the current axes object (gca).
h = streamtube(...) returns a vector of handles (one per start point) to surface objects used to draw the stream tubes.
This example uses stream tubes to indicate the flow in the wind data set. Inputs include the coordinates, vector field components, and starting location for the stream tubes.
load wind
[sx sy sz] = meshgrid(80,20:10:50,0:5:15);
streamtube(x,y,z,u,v,w,sx,sy,sz);
% Define viewing and lighting
view(3)
axis tight
shading interp;
camlight; lighting gouraud
This example uses precalculated vertex data (stream3) and divergence (divergence).
load wind
[sx sy sz] = meshgrid(80,20:10:50,0:5:15);
verts = stream3(x,y,z,u,v,w,sx,sy,sz);
div = divergence(x,y,z,u,v,w);
streamtube(verts,x,y,z,div);
% Define viewing and lighting
view(3)
axis tight
shading interp
camlight; lighting gouraud
See Also
divergence | meshgrid | stream3 | stream3 | streamline | streamribbon
|
{"url":"http://www.mathworks.nl/help/matlab/ref/streamtube.html?s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-24T02:04:14Z","content_type":null,"content_length":"48512","record_id":"<urn:uuid:3e7b0c79-2972-4806-8dd6-fa2392b572bf>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Area of shaded region
May 7th 2008, 03:25 PM #1
Feb 2008
Area of shaded region
I posted a question similar to this a minute ago, but I wanted to double check and make sure I'm doing it right. The question is to find the shaded region of $7x(x^2-64)$. There is a picture, but I can't display it here. The bounds are 0 to 8 and the second equation is y = 0. So, you take the top curve (0) minus the bottom curve $7x(x^2-64)$, leaving you with $-7x^3+448x$. From there you integrate, which gives $-\frac{7}{4}x^4 + 224x^2$. This is where I get confused with definite integration. Do I put in 8 for the first x and then 0 for the second x? E.g. $-\frac{7}{4}(8)^4 + 224(0)^2$. Or do I do two whole separate equations, e.g. $(-\frac{7}{4}(8)^4 + 224(8)^2) - (-\frac{7}{4}(0)^4 + 224(0)^2)$? Can someone explain? Thanks
Last edited by zsig013; May 7th 2008 at 03:57 PM.
I posted a question similar to this a minute ago, but I wanted to double check and make sure I'm doing it right. The question is to find the shaded region of $7x(x^2-64)$. There is a picture, but I can't display it here. The bounds are 0 to 8 and the second equation is y = 0. So, you take the top curve (0) minus the bottom curve $7x(x^2-64)$, leaving you with $-7x^3+448x$. From there you integrate, which gives $-\frac{7}{4}x^4 + 224x^2$. This is where I get confused with definite integration. Do I put in 8 for the first x and then 0 for the second x? E.g. $-\frac{7}{4}(8)^4 + 224(0)^2$. Or do I do two whole separate equations, e.g. $(-\frac{7}{4}(8)^4 + 224(8)^2) - (-\frac{7}{4}(0)^4 + 224(0)^2)$? Can someone explain? Thanks
Remember that to evaluate the definite integral, we need to use the Fundamental Theorem of Calculus:

$\int_a^b f(x)\,dx = F(b) - F(a)$

where $F(x)$ is the antiderivative of $f(x)$.

In your case, we have the antiderivative $-\frac{7}{4}x^4+224x^2$. Then we need to find $F(8)-F(0)=(-\frac{7}{4}(8)^4+224(8)^2)-(-\frac{7}{4}(0)^4+224(0)^2)$.
I hope this clarified things!!!
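As a quick numerical sanity check (my addition): the top-minus-bottom integrand 0 - 7x(x^2 - 64) expands to -7x^3 + 448x, with antiderivative F(x) = -(7/4)x^4 + 224x^2, so the shaded area comes out to F(8) - F(0) = 7168. In Python:

```python
def F(x):
    # antiderivative of -7x^3 + 448x
    return -7 / 4 * x ** 4 + 224 * x ** 2

area = F(8) - F(0)
print(area)  # 7168.0
```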
May 7th 2008, 07:02 PM #2
|
{"url":"http://mathhelpforum.com/calculus/37562-area-shaded-region.html","timestamp":"2014-04-17T05:09:06Z","content_type":null,"content_length":"37081","record_id":"<urn:uuid:22814952-b810-41de-b350-3fb01e1308de>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
QUOTIENT function
This article describes the formula syntax and usage of the QUOTIENT function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use
functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Excel.
Returns the integer portion of a division. Use this function when you want to discard the remainder of a division.
QUOTIENT(numerator, denominator)
The QUOTIENT function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.):
● Numerator Required. The dividend.
● Denominator Required. The divisor.
If either argument is nonnumeric, QUOTIENT returns the #VALUE! error value.
The example may be easier to understand if you copy it to a blank worksheet.
1. Select the example in this article. If you are copying the example in Excel Online, copy and paste one cell at a time.
   Important: Do not select the row or column headers.
   [Image: Selecting an example from Help]
2. Press CTRL+C.
3. Create a blank workbook or worksheet.
4. In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Online, repeat copying and pasting for each cell in the example.
   Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
After you copy the example to a blank worksheet, you can adapt it to suit your needs.
Formula                Description (Result)
=QUOTIENT(5, 2)        Integer portion of 5/2 (2)
=QUOTIENT(4.5, 3.1)    Integer portion of 4.5/3.1 (1)
=QUOTIENT(-10, 3)      Integer portion of -10/3 (-3)
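A sketch of the same behaviour in Python (the helper name `quotient` is ours; note that QUOTIENT truncates toward zero, which for negative operands differs from Python's floor division):

```python
import math

def quotient(numerator, denominator):
    """Mimic Excel's QUOTIENT: the integer portion of a division,
    truncated toward zero (the remainder is simply discarded)."""
    return math.trunc(numerator / denominator)

print(quotient(5, 2))      # 2
print(quotient(4.5, 3.1))  # 1
print(quotient(-10, 3))    # -3  (floor division, -10 // 3, would give -4)
```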
Applies to:
Excel 2010, Excel Web App, SharePoint Online for enterprises, SharePoint Online for professionals and small businesses
|
{"url":"http://office.microsoft.com/en-us/starter-help/quotient-function-HP010342813.aspx?CTT=5&origin=HA010342655","timestamp":"2014-04-23T07:15:29Z","content_type":null,"content_length":"24206","record_id":"<urn:uuid:26d30db7-9709-4b18-9cec-ccaea167418d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SAS-L archives -- January 2007, week 1 (#433), LISTSERV at the University of Georgia
Date: Sun, 7 Jan 2007 00:50:34 -0500
Reply-To: Arthur Tabachneck <art297@NETSCAPE.NET>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Arthur Tabachneck <art297@NETSCAPE.NET>
Subject: Re: Help with formatting dataset.
Comments: To: SAS-L Nirmal <lazybone2k@GMAIL.COM>
Nirmal wrote to me, off-line, indicating that I had misunderstood the question:
>Thanks for the reply, but A1,B1, B2, C1.....are various values of the
>variables...i used A1, B1, B2......so as to explain the levels of the
>variables. It can be anything. I want to create a dataset which contains
>the levels of the variables ....just as i mentioned in the earlier
>post.....for example the input dataset contains 1 market, 2 buyer, 4
>products and 2 types in their original values......in the output dataset
>shown below explains the various levels of the variables in numbers, ----
>market has 1 level, buyer has 2 levels, .....
In that case, I would use proc sql. For example:
data have;
  input Market $ Buyer $ Product $ Type $;
  cards;
A1 B1 C1 D1
A1 B1 C1 D2
A1 B1 C2 D1
A1 B1 C2 D2
A1 B1 C3 D1
A1 B1 C3 D2
A1 B1 C4 D1
A1 B1 C4 D2
A1 B2 C1 D1
A1 B2 C1 D2
A1 B2 C2 D1
A1 B2 C2 D2
A1 B2 C3 D1
A1 B2 C3 D2
A1 B2 C4 D1
A1 B2 C4 D2
;
run;

proc sql;
  create table want as
    select count(distinct Market)  as Market,
           count(distinct Buyer)   as Buyer,
           count(distinct Product) as Product,
           count(distinct Type)    as Type
      from have;
quit;
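For readers who don't use SAS, the same distinct-level counting can be sketched in plain Python (illustrative helper code, using an abbreviated subset of the rows):

```python
# Distinct-level counts per variable, mirroring the count(distinct ...)
# query above (an abbreviated subset of the rows, for illustration).
rows = [
    ("A1", "B1", "C1", "D1"), ("A1", "B1", "C2", "D2"),
    ("A1", "B2", "C3", "D1"), ("A1", "B2", "C4", "D2"),
]
names = ["Market", "Buyer", "Product", "Type"]
levels = {name: len({row[i] for row in rows}) for i, name in enumerate(names)}
print(levels)  # {'Market': 1, 'Buyer': 2, 'Product': 4, 'Type': 2}
```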
On Sat, 6 Jan 2007 20:01:11 -0500, SAS-L Nirmal <lazybone2k@GMAIL.COM> wrote:
>Dear SAS-l users,
>I have a dataset which contains 4 variables. The sample dataset looks
>like this
>Market Buyer Product Type
>A1 B1 C1 D1
>A1 B1 C1 D2
>A1 B1 C2 D1
>A1 B1 C2 D2
>A1 B1 C3 D1
>A1 B1 C3 D2
>A1 B1 C4 D1
>A1 B1 C4 D2
>A1 B2 C1 D1
>A1 B2 C1 D2
>A1 B2 C2 D1
>A1 B2 C2 D2
>A1 B2 C3 D1
>A1 B2 C3 D2
>A1 B2 C4 D1
>A1 B2 C4 D2
>and i want to format this dataset into something like this.
>Market Buyer Product Type
>1 1 1 1
>1 1 1 2
>1 1 2 1
>1 1 2 2
>1 1 3 1
>1 1 3 2
>1 1 4 1
>1 1 4 2
>1 2 1 1
>1 2 1 2
>1 2 2 1
>1 2 2 2
>1 2 3 1
>1 2 3 2
>1 2 4 1
>1 2 4 2
>The problem is I cannot hard code the format values. I did quite some work
>on this (I am not lazy!!!!!)... I counted the various variables and the
>levels and used a do loop to construct the output. But it's not working. I
>also tried proc format. But I want to write a program which is data
>driven. It should read the values and construct this. I would really
>appreciate any suggestions or tips on this one. Thanks again.
|
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0701a&L=sas-l&F=&S=&P=49898","timestamp":"2014-04-17T18:51:25Z","content_type":null,"content_length":"11470","record_id":"<urn:uuid:15863ff6-53e4-4837-ba7b-634151e629df>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Strategies for classroom teachers: A lesson from Mathematics Intervention
Catherine A. Pearn, Marguerite Merrifield
La Trobe University, Boroondara Park Primary School
Introduction and Background to the Program
There is some debate about the best way to implement programs designed to assist children "at risk" of not being able to fully participate in the regular class mathematics program. There are three
ways that assistance may be given to low achieving students: in a withdrawal group, individually, or as a group or individual within the class setting.
One example of a withdrawal program is the Mathematics Intervention program which is a collaborative project involving the Principal and staff of Boroondara Park Primary and mathematics educators
from La Trobe University (Pearn, 1994; Pearn, Merrifield & Mihalic, 1994a; Pearn, Merrifield & Mihalic, 1994b). This program aims to identify, then assist, those children in Years 1 at risk of not
coping with the mathematics curriculum as documented in the National Statement on Mathematics for Australian Schools (Australian Education Council, 1991). The program features elements of both
Reading Recovery (Clay, 1987) and Mathematics Recovery (Wright, 1991) and offers children the chance to experience success in mathematics by developing the basic concepts of number upon which they
build their understanding of mathematics. Children are withdrawn from their classes and work in small groups to assist with the development of their mathematical language skills and co-operation
While individualised instruction by a qualified teacher is extremely beneficial to the student (see for example, Wright, 1996), such instruction does not take account of the common language
difficulties identified by clinical interviews, used at Boroondara Park, to assess the children at the beginning of each year. We have noted several occasions where the student-student verbal
interaction was more beneficial to the student than the teacher-student interaction. For extra assistance to be made available to students experiencing difficulties in the classroom, the teacher must be able to recognise that a student has a problem, identify it, and have the expertise to help the child overcome it.
Previous Research
Mathematics Intervention incorporates learning activities based on recent research about children's early arithmetical learning (Steffe, von Glasersfeld, Richards and Cobb, 1983; Wright, 1991). The
counting stages as determined by Steffe and his colleagues (1983, 1988) are summarised below:
Stage 0. Preperceptual. When attempting to count, the child is unable to coordinate number words with the items being counted.
Stage 1. Perceptual. Children are limited to counting those items they can perceive, for example, see, feel and hear.
Stage 2. Figurative. Children count from one when solving addition problems with screened collections. They appear to visualise the items and all movements are important. (Often typified by hand
waving over hidden objects).
Stage 3. Initial number sequence. Children count-on to solve addition and missing addend problems with screened collections. Children no longer count from one but begin from the appropriate number.
Stage 4. Implicitly nested number sequence. Children are able to focus on the collection of unit items. They can count-on and count-down, choosing the most appropriate to solve problems. They
generally count-down to solve subtraction problems.
Stage 5. Explicitly nested number sequence. Children are simultaneously aware of two number sequences and can disembed smaller composite units from the composite unit that contains it, and then
compare them. That is, they can conceptualise a whole (for example 12), a part (for example, 9) and the remainder (for example, 3) simultaneously. They understand that addition and subtraction are
inverse operations. They use a variety of strategies other than counting by ones, for example: using a known result, adding to ten, and commutativity.
Wright and his colleagues (1996) have documented the results from more than 200 interviews of children aged between three and eight years. The results of these interviews were used to assess young
children's arithmetical knowledge. Three conclusions arising from his research include:
* Reasonable educational goals for children are to reach Arithmetical Stage 2 or 3 at the end of the Prep year and Stage 5 at the end of Grade One.
* Only a very small percentage (probably less than 5%) of beginning first grade children have not attained at least Stage 2.
* Grade one children who begin at lower than Stage 3 are less likely to advance to Stage 5 than are children who begin at Stage 3 or 4.
Identification of Students "at risk"
Both the initial assessment and the Mathematics Intervention program require the teacher to observe and interpret the child's actions as he/she works on a set task. The initial interview requires the
teacher to assess the extent of the child's mathematical knowledge while the intervention program relies on the teacher's ability to interpret the child's mathematical knowledge and then design or
adapt tasks and problems that enable the child to progress mathematically.
All teachers involved with the Mathematics Intervention program have attended a course in Clinical Mathematics Methods at La Trobe University (Gibson, Doig & Hunting, 1993) to develop and refine
their observational and interpretative skills.
Initial Clinical Interview
Analysis of the 1993 clinical interview results showed that children requiring Mathematics Intervention were experiencing difficulties with the verbal counting sequence; specifically in counting
backwards, counting forwards by numbers larger than one and were at either Stage 1 or Stage 2 of the Counting Stages. This meant that an easier and shorter test could be administered to identify the
children who needed to participate in a Mathematics Intervention program. The modified interview (Pearn, Merrifield, Mihalic & Hunting, 1995) included simplified verbal counting tasks. For example,
"Can you count out loud for me, beginning at one, until I tell you to stop?"
"Can you count forwards by 10's starting with 10?"
and only two tasks based on the counting stages. The first task was designed to determine whether the child could count-back.
Ten counters are displayed.
"Here are some counters. Count them."
(Cover all the counters, remove two and display).
"How many counters are under the paper?"
The second task was designed to determine the strategy the child used: guess, count-all, count-on, or "just know it". It is an easier task than the previous one and allows the child to leave the
interview feeling positive.
Six counters are displayed and three hidden.
"There are six counters on the table. Can you count them?"
"Under this paper are three counters." (Lift paper briefly).
"How many counters do I have altogether?"
Two tasks were added to the earlier 1993 interview:
"Can you count out fourteen beads?"
Cards were shown of the following numbers and the children were asked to name them: 13, 31, 15, 51, 14, 41.
Over the last three years children from both Year 1 and 2 at Boroondara Park have been individually interviewed by either the teacher or researcher associated with the project. In 1994 Year 1 and 2
children participated in the program but in 1995 and 1996 the program has been restricted to Year 1 children. At the beginning of each school year children are clinically interviewed. By carefully
observing the children's solution methods the teacher ensures that she is aware of the strategies being used and if needed the following prompts are given: "How did you work that out?" or "How did
you do that?" The children enjoyed coming out of class and working on a one-to-one basis with a teacher.
One of the most significant findings from three years of testing is an apparent link between children needing Mathematics Intervention and those needing Reading Recovery. This suggests a need for further research and for a more integrated approach to teaching mathematics.
The Intervention Program
In 1996 children are withdrawn from their classes for seven half-hour sessions per fortnight with a maximum participation of twenty weeks. Emphasis is placed on the verbal interaction between teacher
and students, and between students. Each session is planned to build on previous understandings as interpreted by the teacher during the session. Many games have been adapted to ensure that concepts
are presented in an informal but engaging way. Each lesson includes:
* counting activities using concrete materials such as blocks, counters, bead frames, straws.
* games designed to highlight and correct a perceived weakness.
* oral work, using concrete materials.
* questions that expected the children to reflect on their strategies.
* the expectation that all children would explain their strategies and would listen when some-one else was explaining solutions and/or strategies
* a written activity
Classroom teachers have commented on the improvement in both the attitude and skills of the children in the program. The program depends on the teacher making an instant appraisal of the child's
needs and providing the appropriate activities. This is in line with the National Statement (Australian Education Council, 1991): "Whatever their particular needs or abilities, all students have the
right to learn mathematics in a way that is personally challenging and stretches their capabilities. Achievable and satisfying tasks are an important prerequisite for success" (p.10).
The major difference between the 1993 program and those of subsequent years is the greater emphasis on written work. This was added specifically in response to a request from classroom teachers and
assists the transition back into the classroom Mathematics program.
While we believe that children experiencing difficulties with mathematics in the early years of schooling need to be withdrawn from the mainstream classroom for lessons with a teacher who has
undertaken special training we acknowledge this is not always possible. An alternative approach is to incorporate specific strategies within the context of the mainstream classroom. Hopefully these
strategies will be informed and determined by the research into special programs like Mathematics Intervention. Before deciding on appropriate strategies for use in classrooms we need to determine
specific problems that are experienced by children needing further assistance in mathematics.
Common Difficulties
Over the last four years there have been common problems exhibited by children considered to be mathematically "at risk". These problems have been noted both in the assessment procedure and during
the Mathematics Intervention lessons and include:
* difficulty elaborating the number sequence.
* difficulty in coordinating their spoken number sequence with the actual counting of objects.
* confusion with the "teen" words and "ty " words
* difficulty in counting backwards from 20.
* bridging of the decades.
* lack of understanding of the symbols.
Classroom Teaching Strategies
Experience with the Mathematics Intervention program has highlighted several strategies that could be used by classroom teachers that will allow all children to experience success with mathematics.
Verbal Counting
To facilitate the improvement of children's counting skills time must be spent each lesson counting both orally and with structured materials. For example, counting beads on a bead frame, collections
of counters, beads, bears and in fact anything countable. Emphasis must also be placed on the pronunciation of the number words. Every year Mathematics Intervention teachers have observed that
children experience difficulties with the number sequence due to poor speech, especially with the "teen" and "ty" words. Quite frequently the mispronunciation has been missed by classroom teachers. As
Fuson (1988) wrote,
... children's ability to say the correct sequence of number words is very strongly affected by the opportunity to learn and practice this sequence. Children within a given age group show
considerable variability in the length of the correct sequence they can produce. Frequent exposure to "Sesame Street" or to parents, older siblings, or teachers who provide frequent counting practice
undoubtedly enables a child to say longer accurate sequences at a younger age (p. 57).
To emphasise and reinforce the difference in numbers like seventeen and seventy, a memory game was introduced (see Figure 1). This game assisted with numerical recognition, and the children became
very proficient in counting by twos to determine the number of cards they had won.
Teachers need to be skilled in questioning and able to ask mathematical questions using the correct mathematical language. Skilful questioning by the teacher is imperative to ensure that the
children's mathematical knowledge can be used to form a strong foundation on which to build further mathematical knowledge. Children should be expected to explain their strategies to both the teacher
and other students and where necessary prompts should be given such as: "How did you do that?"
Alternative Solutions
Children should be encouraged to think of and discuss different ways each task could be solved. Teachers must refrain from saying whether an answer is correct or incorrect, or that one procedure is better than another. Teachers should encourage children to explain their solutions and to tell each other whether or not an explanation makes sense to them.
Young children will eventually construct the algorithms that are now prematurely imposed on them. By letting them change their minds only when they are convinced that another idea makes better sense,
we encourage them to build a solid foundation that will enable them to go on constructing higher-level thinking (Kamii, 1990, p. 30).
To ensure active participation in the Intervention program, games are used wherever appropriate. The variety of the games depended on the imagination and skill of the teachers. This is another
activity that can be used successfully in the classroom by classroom teachers.
Games are excellent activities because children play them to please themselves rather than the teacher. They are desirable because in games children care about sums, supervise each other, and give
immediate feedback. ... Games are good also because the social interaction they require contributes greatly to children's social and moral development (Kamii, 1990, p. 29).
Games using dice are used to compare numbers, to add and subtract numbers, and to let children make up their own sums. It is this ownership of the mathematics that becomes a very powerful tool in learning. Different-sized dice can be used depending on the child's ability. A game called Twenty was devised to assist children to make the transition from counting all the counters (Stages 1 and 2) to counting on (Stage 3) or counting back, which are much more powerful strategies and are necessary if the child is to succeed with addition and subtraction.
Implications for Classroom Teachers
The importance of the Mathematics intervention program to students "mathematically at risk" cannot be over-emphasised. As stated by the National Statement (Australian Education Council, 1991):
"Whether a particular student gains the full benefit from mathematics may be influenced by a range of personal characteristics and circumstances. It will also depend on the quality of the mathematics
offered" (p.8). Steffe and his colleagues (Steffe et al., 1983; 1988) have indicated that 6 year-old children below Stage 3 of the counting stages may require up to two years to progress to Stage 5
and even then there is no guarantee that all children will attain this level. Considered in this light, the results achieved by children in a quarter of that time are a positive indication of the
viability of the Mathematics Intervention program.
Strategies used by teachers in the Mathematics Intervention Program are transferable to classroom teachers. However, no matter how effectively a teacher uses these strategies, there will always be a
need for a program such as Mathematics Intervention which is specifically designed to cater for those children who are "at risk". Mathematics Intervention teachers need to be confident and competent
in mathematics and need to share their knowledge of these special students with the classroom teacher. Both class teacher and Intervention teacher need to be aware of the child's knowledge and
strategies and able to design appropriate activities to extend their mathematical understanding together.
With the increase in Victorian class sizes, teachers are going to have even less time to spend with these children who are "at risk". If children are unable to count accurately, it will be difficult
for them to succeed with other mathematical problems and processes. A clinically trained mathematics teacher, working with a small group of children of similar mathematical ability, is more likely to
observe the difficulties experienced by these children and be able to work towards strengthening their basic numerical concepts.
Australian Education Council (1991). A National Statement on Mathematics for Australian Schools. Carlton: Curriculum Corporation.
Clay, M. M. (1987). Implementing Reading Recovery: Systematic adaptations to an educational innovation. New Zealand Journal of Educational Studies, 22 (1), 35-58
Directorate of School Education (1992). Mathematics Course Advice -- Primary. Melbourne: Author.
Fuson, K. (1988). Children's Counting and Concepts of Number. New York: Springer-Verlag.
Gibson, S. J., Doig, B. A., & Hunting, R. P. (1993). Inside their heads -- the clinical interview in the classroom. In J. Mousley & M. Rice (Eds.), Mathematics: Of primary importance (pp. 30-35).
Brunswick: Mathematical Association of Victoria.
Kamii, C. (1990). Constructivism and beginning arithmetic. In T. J. Cooney & C. R. Hirsch (Eds.), Teaching and learning mathematics in the 1990s. Reston, VA: National Council of Teachers of Mathematics.
Pearn, C. A. (1994). A connection between mathematics and language development in early mathematics. In G. Bell, R. Wright, N. Leeson, & J. Geake (Eds.),Challenges in mathematics education:
Constraints on construction (Vol 2, pp. 463-470) Lismore, NSW: Southern Cross University.
Pearn, C. A., Merrifield, M., & Mihalic, H. (1994a). Mathematics Intervention: A pilot program in mathematics recovery education. Paper presented at the Emilio Reggio Conference, Melbourne University,
September, 1994.
Pearn, C. A., Merrifield, M., & Mihalic, H., (1994b). Intensive strategies with young children: A mathematics intervention program. In D. Rasmussen & K. Beesey (Eds), Mathematics without limits.
Brunswick: Mathematical Association of Victoria.
Steffe, L. P., Von Glasersfeld, E., Richards, J.& Cobb, P. (1983). Children's counting types: Philosophy, theory, and application. New York: Praeger.
Steffe, L. P., Cobb, P., and von Glasersfeld, E. (1988). Construction of arithmetical meanings and strategies. New York: Springer-Verlag.
Wright, R. J. (1991). The role of counting in children's numerical development. The Australian Journal of Early Childhood, 16 (2), 43-48.
Wright, R. J., Stanger, G., Cowper, M., & Dyson, R. (1996). First-graders' progress in an experimental mathematics recovery program. In J. Mulligan & M. Mitchelmore (Eds.), Children's number learning
(pp. ). Adelaide: The Australian Association of Mathematics Teachers.
|
{"url":"http://www.crme.soton.ac.uk/publications/gdpubs/cath.html","timestamp":"2014-04-16T22:37:51Z","content_type":null,"content_length":"22638","record_id":"<urn:uuid:2ad9e6bb-d67e-4395-8d4f-8c1ccc7a4fca>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physical and chemical properties
Main article: Thermal radiation
Radiation is the transfer of heat energy through empty space. All objects with a temperature above absolute zero radiate energy at a rate equal to their emissivity multiplied by the rate at which
energy would radiate from them if they were a black body. No medium is necessary for radiation to occur, for it is transferred through electromagnetic waves; radiation works even in and through a
perfect vacuum. The energy from the Sun travels through the vacuum of space before warming the earth.
Both the reflectivity and the emissivity of all bodies are wavelength dependent. The temperature determines the wavelength distribution of the electromagnetic radiation, as limited in intensity by Planck's law of black-body radiation. For any body, the reflectivity depends on the wavelength distribution of the incoming electromagnetic radiation and therefore on the temperature of the source of the radiation, while the emissivity depends on the wavelength distribution and therefore on the temperature of the body itself. For example, fresh snow, which is highly reflective to visible light (reflectivity about 0.90), appears white because it reflects sunlight with a peak energy wavelength of about 0.5 micrometres. Its emissivity at a temperature of about -5°C, where the peak energy wavelength is about 12 micrometres, is 0.99.
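The rule stated above (emission rate = emissivity times the black-body rate, the latter given by the Stefan-Boltzmann law) can be sketched numerically; the snow figures come from the paragraph above, while the function name is ours:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(emissivity, area, T):
    """Rate of thermal radiation from a surface at absolute temperature T:
    emissivity times the black-body rate sigma * A * T**4."""
    return emissivity * SIGMA * area * T ** 4

# 1 m^2 of fresh snow (emissivity about 0.99) at -5 C, i.e. 268.15 K:
print(round(radiated_power(0.99, 1.0, 268.15)))  # about 290 W
```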
Gases absorb and emit energy in characteristic wavelength patterns that are different for each gas.
Visible light is simply another form of electromagnetic radiation with a shorter wavelength (and therefore a higher frequency) than infrared radiation. The difference between visible light and the
radiation from objects at conventional temperatures is a factor of about 20 in frequency and wavelength; the two kinds of emission are simply different "colours" of electromagnetic radiation.
Clothing and building surfaces, and radiative transfer
Lighter colors and also whites and metallic substances absorb less illuminating light, and thus heat up less; but otherwise color makes little difference as regards heat transfer between an object at
everyday temperatures and its surroundings, since the dominant emitted wavelengths are nowhere near the visible spectrum, but rather in the far infrared. Emissivities at those wavelengths have little
to do with visual emissivities (visible colors); in the far infrared, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth;
likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit. The main exception to this is shiny metal surfaces, which have low emissivities both in the
visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft.
Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light.
Physical Transfer
Finally, it is possible to move heat by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle to warm a bed, or as large-scale as the movement of an iceberg or a change in ocean currents.
Newton's law of cooling
A related principle, Newton's law of cooling, states that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings. The law is
\frac{dQ}{dt} = h \cdot A\,(T_{\text{env}} - T(t)) = -h \cdot A\,\Delta T(t)
Q = Thermal energy in joules
h = Heat transfer coefficient
A = Surface area of the heat being transferred
T = Temperature of the object's surface and interior (since these are the same in this approximation)
Tenv = Temperature of the environment
ΔT(t) = T(t) − Tenv is the time-dependent thermal gradient between environment and object
This form of heat loss principle is sometimes not very precise; an accurate formulation may require analysis of heat flow, based on the (transient) heat transfer equation in a nonhomogeneous, or else
poorly conductive, medium. An analog for continuous gradients is Fourier's Law.
The following simplification (called lumped system thermal analysis and other similar terms) may be applied, so long as it is permitted by the Biot number, which relates surface conductance to
interior thermal conductivity in a body. If this ratio permits, it shows that the body has relatively high internal conductivity, such that (to good approximation) the entire body is at the same
uniform temperature throughout, even as this temperature changes as it is cooled from the outside, by the environment. If this is the case, these conditions give the behavior of exponential decay
with time, of temperature of a body.
In such cases, the entire body is treated as a lumped-capacitance heat reservoir, with a total heat content proportional to its total heat capacity C and its temperature T, or
Q = C T. From the definition of heat capacity C comes the relation C = dQ/dT. Differentiating this equation with regard to time gives the identity (valid so long as temperatures in the object are
uniform at any given time): dQ/dt = C (dT/dt). This expression may be used to replace dQ/dt in the first equation which begins this section, above. Then, if T(t) is the temperature of such a body at
time t , and Tenv is the temperature of the environment around the body:
\frac{dT(t)}{dt} = -r\,(T(t) - T_{\mathrm{env}}) = -r\,\Delta T(t)
r = hA/C is a positive constant characteristic of the system, with units of 1/time; it is therefore sometimes expressed in terms of a characteristic time constant t0 given by 1/r = t0 = -ΔT/[dT(t)/dt]. Thus, in thermal systems, t0 = C/(hA). (The total heat capacity C of a system may be further represented by its mass-specific heat capacity cp multiplied by its mass m, so that the time constant t0 is also given by mcp/(hA).)
Thus the above equation may also be usefully written:
\frac{dT(t)}{dt} = -\frac{1}{t_0}\,\Delta T(t)
The solution of this differential equation, by standard methods of integration and substitution of boundary conditions, gives:
T(t) = T_{\mathrm{env}} + (T(0) - T_{\mathrm{env}})\, e^{-r t}.
Here, T(t) is the temperature at time t, and T(0) is the initial temperature at zero time, or t = 0.
If \Delta T(t) is defined as T(t) - T_{\mathrm{env}}, where \Delta T(0) is the initial temperature difference at time 0,
then the Newtonian solution is written as:
\Delta T(t) = \Delta T(0)\, e^{-r t} = \Delta T(0)\, e^{-t/t_0}.
Uses: For example, simplified climate models may use Newtonian cooling instead of a full (and computationally expensive) radiation code to maintain atmospheric temperatures.
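A minimal numerical sketch of the exponential solution (the body and time-constant values below are illustrative, not from the text):

```python
import math

def newton_cooling(T0, T_env, t, t0):
    """T(t) = T_env + (T(0) - T_env) * exp(-t / t0),
    where t0 = C / (h * A) is the characteristic time constant."""
    return T_env + (T0 - T_env) * math.exp(-t / t0)

# A body at 90 degrees in 20-degree surroundings, time constant 10 minutes:
# after one time constant the excess temperature has fallen by a factor of e.
print(round(newton_cooling(90.0, 20.0, 10.0, 10.0), 2))  # 45.75
```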
One dimensional application, using thermal circuits
A very useful concept used in heat transfer applications is the representation of thermal transfer by what is known as thermal circuits. A thermal circuit is the representation of the resistance to
heat flow as though it were an electric resistor. The heat transferred is analogous to the current and the thermal resistance is analogous to the electric resistor. The value of the thermal
resistance for the different modes of heat transfer are calculated as the denominators of the developed equations. The thermal resistances of the different modes of heat transfer are used in
analyzing combined modes of heat transfer. The equations describing the three heat transfer modes and their thermal resistances, as discussed previously are summarized in the table below:
[Figure: table of the three heat transfer modes and their thermal resistances (Thermal Circuits.png)]
In cases where there is heat transfer through different media (for example through a composite), the equivalent resistance is the sum of the resistances of the components that make up the composite.
Likewise, in cases where there are different heat transfer modes, the total resistance is the sum of the resistances of the different modes. Using the thermal circuit concept, the amount of heat
transferred through any medium is the quotient of the temperature change and the total thermal resistance of the medium. As an example, consider a composite wall of cross- sectional area A. The
composite is made of an L1 long cement plaster with a thermal coefficient k1 and L2 long paper faced fiber glass, with thermal coefficient k2. The left surface of the wall is at Ti and exposed to air
with a convective coefficient of hi. The right surface of the wall is at To and exposed to air with convective coefficient ho.
[Figure: thermal circuit for the composite wall (Thermal Circuits2.jpg)]
Using the thermal resistance concept heat flow through the composite is as follows:
[Figure: heat flow expression for the composite wall (Thermal Circuits3.jpg); note that hi and h0 in the figure should read ki and k0]
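A minimal sketch of the thermal-circuit calculation just described (Python; the wall dimensions and coefficients below are made-up illustrative values, not taken from the figure). Resistances in series simply add: 1/(hA) for each convective film and L/(kA) for each conductive layer.

```python
def composite_wall_heat_flow(T_i, T_o, A, h_i, h_o, layers):
    """Heat flow (W) through a plane composite wall with convection on both
    faces: Q = (T_i - T_o) / R_total, where the series resistances are
    R_conv = 1/(h*A) for each film and R_cond = L/(k*A) for each layer."""
    R_total = 1.0 / (h_i * A) + sum(L / (k * A) for (L, k) in layers) + 1.0 / (h_o * A)
    return (T_i - T_o) / R_total

# Hypothetical numbers: 2 cm plaster (k = 0.7 W/m.K) + 10 cm fiberglass
# (k = 0.04 W/m.K), 1 m^2 of wall, 25 degC inside air and 0 degC outside air,
# with film coefficients h_i = 10 and h_o = 25 W/m^2.K.
Q = composite_wall_heat_flow(25.0, 0.0, 1.0, 10.0, 25.0, [(0.02, 0.7), (0.10, 0.04)])
```

For these assumed values the fiberglass layer dominates the total resistance, and Q comes out to roughly 9.4 W.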
Insulation and radiant barriers
Main articles: Thermal insulation and Radiant barrier
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Radiant barriers are materials which reflect radiation and therefore
reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and poor insulator.
The effectiveness of an insulator is indicated by its R- (resistance) value. The R-value of a material is its thickness (d) divided by its conduction coefficient (k). The SI units of
resistance value are K·m²/W:
R = \frac{d}{k}
C = \frac{Q}{m \Delta T}
Rigid fiberglass, a common insulation material, has an R-value of 4 per inch, while poured concrete, a poor insulator, has an R-value of 0.08 per inch.[7]
The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity
(at that same wavelength), and vice versa (at any specific wavelength, reflectivity = 1 - emissivity). An ideal radiant barrier would have a reflectivity of 1 and would therefore reflect 100% of
incoming radiation. Vacuum bottles (Dewars) are 'silvered' to approach this. In space vacuum, satellites use multi-layer insulation which consists of many layers of aluminized (shiny) mylar to
greatly reduce radiation heat transfer and control satellite temperature.
Critical insulation thickness
To reduce the rate of heat transfer, one adds insulating materials, i.e. materials with low thermal conductivity (k). The smaller the k value, the larger the corresponding thermal resistance (R) value.
The units of thermal conductivity (k) are W·m⁻¹·K⁻¹ (watts per metre per kelvin); since R = d/k, increasing the thickness of insulation (x metres) increases the resistance.
This follows logic, as increased resistance is created by an increased conduction path (x).
However, adding this layer of insulation also has the potential of increasing the surface area and hence the thermal convection area (A).
An obvious example is a cylindrical pipe:
* As insulation gets thicker, outer radius increases and therefore surface area increases.
* The point where the added resistance of increasing insulation width becomes overshadowed by the effects of surface area is called the critical insulation thickness. In simple cylindrical pipes:[8]
r_{critical} = \frac{k}{h}
(note that this critical value is an outer radius, with units of length)
For a graph of this phenomenon in a cylindrical pipe example, see the external link: Critical Insulation Thickness diagram (as at 26/03/09)
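The effect can be sketched numerically (Python; the pipe dimensions and coefficients are hypothetical). For a cylindrical pipe, the conduction resistance grows with ln(r_out/r_in) while the outer convection resistance shrinks as 1/r_out, so the total resistance passes through a minimum at r_out = k/h:

```python
import math

def critical_radius(k, h):
    """Critical insulation radius for a cylindrical pipe: r_crit = k / h."""
    return k / h

def pipe_resistance(r_in, r_out, k, h, length=1.0):
    """Conduction through the insulation shell plus convection at the outer
    surface, per unit length of pipe by default."""
    R_cond = math.log(r_out / r_in) / (2 * math.pi * k * length)
    R_conv = 1.0 / (h * 2 * math.pi * r_out * length)
    return R_cond + R_conv

# Hypothetical values: insulation k = 0.2 W/m.K, outside air h = 10 W/m^2.K,
# bare pipe radius 1 cm.  The critical radius is k/h = 2 cm.
r_c = critical_radius(0.2, 10.0)
R_below = pipe_resistance(0.01, 0.015, 0.2, 10.0)   # thinner than critical
R_at    = pipe_resistance(0.01, r_c,   0.2, 10.0)   # at the critical radius
R_above = pipe_resistance(0.01, 0.03,  0.2, 10.0)   # thicker than critical
```

For these assumed numbers the total resistance is smallest at the critical radius and rises on either side of it, which is exactly the dip the text describes.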
Heat exchangers
Main article: Heat exchanger
A heat exchanger is a device built for efficient heat transfer from one fluid to another, whether the fluids are separated by a solid wall so that they never mix, or the fluids are directly
contacted. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is the radiator in a
car, in which the hot radiator fluid is cooled by the flow of air over the radiator surface.
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids
move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common constructions for heat exchangers include shell and tube, double pipe, extruded finned pipe,
spiral fin pipe, u-tube, and stacked plate.
When engineers calculate the theoretical heat transfer in a heat exchanger, they must contend with the fact that the driving temperature difference between the two fluids varies with position. To
account for this in simple systems, the log mean temperature difference (LMTD) is often used as an 'average' temperature. In more complex systems, direct knowledge of the LMTD is not available and
the number of transfer units (NTU) method can be used instead.
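The LMTD itself is a one-line formula; the sketch below (Python) applies it to made-up counter-flow stream temperatures, chosen only for illustration:

```python
import math

def lmtd(dT1, dT2):
    """Log mean temperature difference between the two ends of an exchanger."""
    if dT1 == dT2:
        return dT1  # limiting value when both end differences coincide
    return (dT1 - dT2) / math.log(dT1 / dT2)

# Hypothetical counter-flow exchanger: hot stream 100 -> 60 degC,
# cold stream 30 -> 50 degC, so the end differences are 50 K and 30 K.
avg_dT = lmtd(50.0, 30.0)
```

For these assumed streams the LMTD comes out near 39 K — always between the two end differences, and slightly below their arithmetic mean of 40 K.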
Boiling heat transfer
See also: boiling and critical heat flux
Heat transfer in boiling fluids is complex but of considerable technical importance. It is characterised by an s-shaped curve relating heat flux to surface temperature difference (see say Kay &
Nedderman 'Fluid Mechanics & Transfer Processes', CUP, 1985, p. 529).
At low driving temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapour
bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling and is a very efficient heat transfer mechanism. At high bubble generation rates the
bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling DNB). At higher temperatures still, a maximum in the
heat flux is reached (the critical heat flux). The regime of falling heat transfer which follows is not easy to study but is believed to be characterised by alternate periods of nucleate and film
boiling. Nucleate boiling slows the heat transfer because of gas-phase (bubble) formation on the heater surface; as mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal
conductivity, so the outcome is a kind of "gas thermal barrier".
At higher temperatures still, the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapour layers are low, but rise slowly with temperature. Any contact
between fluid and the surface which may be seen probably leads to the extremely rapid nucleation of a fresh vapour layer ('spontaneous nucleation').
Condensation heat transfer
Condensation occurs when a vapor is cooled and changes its phase to a liquid. Condensation heat transfer, like boiling, is of great significance in industry. During condensation, the latent heat of
vaporization must be released. The amount of the heat is the same as that absorbed during vaporization at the same fluid pressure.
There are several modes of condensation:
* Homogeneous condensation (as during a formation of fog).
* Condensation in direct contact with subcooled liquid.
* Condensation on direct contact with a cooling wall of a heat exchanger-this is the most common mode used in industry:
o Filmwise condensation (when a liquid film is formed on the subcooled surface, usually occurs when the liquid wets the surface).
o Dropwise condensation (when liquid drops are formed on the subcooled surface, usually occurs when the liquid does not wet the surface). Dropwise condensation is difficult to sustain reliably;
therefore, industrial equipment is normally designed to operate in filmwise condensation mode.
Heat transfer in education
Heat transfer is typically studied as part of a general chemical engineering or mechanical engineering curriculum. Typically, thermodynamics is a prerequisite for a course in heat
transfer, as the laws of thermodynamics are essential to understanding the mechanisms of heat transfer. Other courses related to heat transfer include energy conversion, thermofluids and mass transfer.
Heat transfer methodologies are also used in many other disciplines.
Main articles: convection and convective heat transfer
Convection is the transfer of thermal energy by the movement of molecules from one part of a material to another. The faster the fluid motion, the greater the convective heat transfer. The presence
of bulk motion of fluid enhances the heat transfer between the solid surface and the fluid.[2]
There are two types of Convective Heat Transfer:
* Natural convection occurs when fluid motion is caused by buoyancy forces resulting from density variations due to temperature variations in the fluid. For example, in the absence of an
external source, when a mass of fluid is in contact with a hot surface its molecules separate and scatter, causing that mass of fluid to become less dense. The less dense fluid rises and is
displaced, while the cooler, denser fluid sinks. Thus the hotter volume transfers heat toward the cooler volume of that fluid.[3]
* Forced convection occurs when the fluid is forced to flow over the surface by an external source such as a fan or pump, creating an artificially induced convection current.[4]
Internal and external flow can also classify convection. Internal flow occurs when the fluid is enclosed by a solid boundary such as a flow through a pipe. An external flow occurs when the fluid
extends indefinitely without encountering a solid surface. Both these convections, either natural or forced, can be internal or external as they are independent of each other.[3]
The formula for Rate of Convective Heat Transfer:[5]
q = hA(Ts − Tb)
A is the surface area of heat transfer, Ts is the surface temperature, and Tb is the bulk temperature of the fluid — the temperature of the fluid "far" away from the surface, which varies with each
situation. The heat transfer coefficient h depends on physical properties of the fluid, such as temperature, and on the physical situation in which
convection occurs. Therefore, the heat transfer coefficient must be derived or found experimentally for every system analyzed. Formulas and correlations are available in many references to calculate
heat transfer coefficients for typical configurations and fluids. For laminar flows the heat transfer coefficient is rather low compared to turbulent flows; this is because turbulent flows have
a thinner stagnant fluid film layer on the heat transfer surface.[6]
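The rate equation q = hA(Ts − Tb) is easy to sketch in code (Python; the plate and coefficient values below are hypothetical illustrations, not from any reference):

```python
def convective_heat_rate(h, A, T_s, T_b):
    """q = h * A * (T_s - T_b): the rate equation for convective heat transfer."""
    return h * A * (T_s - T_b)

# Hypothetical values: h = 25 W/m^2.K, a 0.5 m^2 plate at 80 degC in 20 degC air.
q = convective_heat_rate(25.0, 0.5, 80.0, 20.0)  # 750 W
```

Note that when the surface and bulk temperatures coincide the rate is zero, and the sign of q follows the sign of the temperature difference.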
* Steady state conduction is the form of conduction which happens when the temperature difference driving the conduction is constant so that after an equilibration time, the spatial distribution of
temperatures (temperature field) in the conducting object does not change any further. For example, a bar may be cold at one end and hot at the other, but the gradient of temperatures along the bar
does not change with time. The temperature at any given section of the rod remains constant, and this temperature varies linearly along the direction of heat transfer.
In steady state conduction, the amount of heat entering a section is equal to amount of heat coming out. In steady state conduction, all the laws of direct current electrical conduction can be
applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances. Temperature plays the role of voltage and heat transferred is the
analog of electrical current.
* Transient conduction There also exist non-steady-state situations, in which the temperature drop or increase occurs more drastically, such as when a hot copper ball is dropped into oil at a low
temperature. Here the temperature field within the object changes as a function of time, and the interest lies in analysing this spatial change of temperature within the object over time. This mode
of heat conduction can be referred to as transient conduction. Analysis of these systems is more complex and (except for simple shapes) calls for the application of approximation theories, and/or
numerical analysis by computer.
Lumped system analysis
A common approximation in transient conduction, which may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object, is lumped system
analysis. This is a method of approximation that suitably reduces one aspect of the transient conduction system (that within the object) to an equivalent steady state system (that is, it is assumed
that the temperature within the object is completely uniform, although its value may be changing in time).
In this method, a term known as the Biot number is calculated, which is defined as the ratio of the resistance to heat transfer across the object's boundary with a uniform bath of different temperature,
to the conductive heat resistance within the object. When the thermal resistance to heat transferred into the object is less than the resistance to heat being diffused completely within the object,
the Biot number is less than 1. In this case, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time
to distribute itself uniformly, since the resistance to doing so is lower than the resistance to heat entering the object. As this is a mode of approximation, the Biot number should be less
than 0.1 for accurate approximation and heat transfer analysis. The mathematical solution to the lumped-system approximation gives Newton's law of cooling, discussed below.
This mode of analysis has been applied to forensic science to analyze the time of death of humans. It can also be applied to HVAC (heating, ventilating and air-conditioning, or building climate
control), to ensure more nearly instantaneous effects of a change in comfort level setting.[1]
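The Biot-number check described above can be sketched as follows (Python; the sphere and coefficient values are hypothetical, with the characteristic length taken as volume/surface area = r/3 for a sphere):

```python
def biot_number(h, L_c, k):
    """Bi = h * L_c / k, where L_c is a characteristic length of the body
    (commonly volume / surface area)."""
    return h * L_c / k

def lumped_ok(h, L_c, k, threshold=0.1):
    """Lumped-system analysis is conventionally accepted when Bi < 0.1."""
    return biot_number(h, L_c, k) < threshold

# Hypothetical copper sphere: radius 1 cm, so L_c = r/3; k = 400 W/m.K,
# h = 50 W/m^2.K.  Bi is tiny, so the uniform-temperature assumption holds.
Bi = biot_number(50.0, 0.01 / 3, 400.0)
```

For a poorly conducting body in the same bath (say k = 1 W/m·K with a larger characteristic length) the same check fails, and a full transient analysis would be needed instead.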
Heat transfer
From Wikipedia, the free encyclopedia
Heat transfer is the transition of thermal energy from a hotter mass to a cooler mass. When an object is at a different temperature than its surroundings or another object, transfer of thermal
energy, also known as heat flow, or heat exchange, occurs in such a way that the body and the surroundings reach thermal equilibrium; this means that they are at the same temperature. Heat transfer
always occurs from a higher-temperature object to a cooler-temperature one as described by the second law of thermodynamics or the Clausius statement. Where there is a temperature difference between
objects in proximity, heat transfer between them can never be stopped; it can only be slowed.
Conduction is the transfer of heat by direct contact of particles of matter. The transfer of energy could be primarily by elastic impact as in fluids or by free electron diffusion as predominant in
metals or phonon vibration as predominant in insulators. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to
another. Conduction is greater in solids, where a network of relatively fixed spatial relationships between atoms helps to transfer energy between them by vibration.
Heat conduction is directly analogous to diffusion of particles into a fluid, in the situation where there are no fluid currents. This type of heat diffusion differs from mass diffusion in behavior,
only in as much as it can occur in solids, whereas mass diffusion is mostly limited to fluids.
Metals (e.g. copper, platinum, gold, iron, etc.) are usually the best conductors of thermal energy. This is due to the way that metals are chemically bonded: metallic bonds (as opposed to covalent or
ionic bonds) have free-moving electrons which are able to transfer thermal energy rapidly through the metal.
As density decreases so does conduction. Therefore, fluids (and especially gases) are less conductive. This is due to the large distance between atoms in a gas: fewer collisions between atoms means
less conduction. Conductivity of gases increases with temperature. Conductivity increases with increasing pressure from vacuum up to a critical point that the density of the gas is such that
molecules of the gas may be expected to collide with each other before they transfer heat from one surface to another. After this point in density, conductivity increases only slightly with
increasing pressure and density.
To quantify the ease with which a particular medium conducts heat, engineers employ the thermal conductivity, also known as the conductivity constant or conduction coefficient, k. The thermal conductivity
k is defined as "the quantity of heat, Q, transmitted in time (t) through a thickness (L), in a direction normal to a surface of area (A), due to a temperature difference (ΔT) [...]". Thermal
conductivity is a material property that is primarily dependent on the medium's phase, temperature, density, and molecular bonding.
A heat pipe is a passive device that is constructed in such a way that it acts as though it has extremely high thermal conductivity.
Steady-state conduction vs. Transient conduction
Air Cooled Heat Exchanger - Thermo Q
Fin Tube
Radiators are used in many industrial applications and are an important part of many processes, for both cooling and heating. We design and manufacture many types of radiators, for low or high
temperatures, used in many kinds of process. Because radiators are used so widely in industry, in so many sizes, types, and materials, we always keep a wide range of finned tube in stock for fast delivery.
Brochure ... ( PDF Format , 296 KB )
Plate Fin Coil
Plate fin coils are applied very widely in cooling and heating processes, such as:
* Evaporator
* Condenser
* Air Cooler
* Air Heater
* Air dryer
* Compressed air cooler
* Gas cooler
Brochure ... ( PDF Format , 183 KB )
Shell and Tube Heat Exchanger - Thermo Q
PT. Metalindo E Engineering has custom designed and fabricated Shell and Tube Heat Exchangers of various metallurgies for installation in various industries.
Many units are designed for operating under high pressure and critical conditions.
We have extensive experience in the design and manufacture of Shell and Tube Heat Exchangers of carbon steel, stainless steel, chrome, high alloys, nickel alloys and other special metals.
All units are designed with the aid of computer software to meet the specific requirements of each end user. They are also designed in accordance with international codes and standards such as ASME and TEMA.
Shell and Tube HE
We can partially or completely re-tube a heat exchanger if it cannot otherwise be restored. This applies to all types of shell and tube heat exchangers. We can rapidly re-manufacture a new cooler
from our huge database of engineering drawings.
Brochure ... ( PDF Format , 399 KB ) NH 3 Condenser
Production of Alkali Metals
The alkali metals are produced by electrolysis of their molten salts.
Example: NaCl (l) → Na⁺ (l) + Cl⁻ (l)
Cathode: Na⁺ (l) + e⁻ → Na (s)
Anode: Cl⁻ (l) → ½ Cl₂ (g) + e⁻
Overall: Na⁺ (l) + Cl⁻ (l) → Na (s) + ½ Cl₂ (g)
Physical and Chemical Properties
a. WITH AIR: Li is oxidized slowly to Li₂O; Na quickly forms Na₂O and Na₂O₂; K quickly forms K₂O; Rb and Cs burn, forming Rb₂O and Cs₂O.
b. WITH WATER:
2L + 2H₂O → 2LOH + H₂ (g)
(the reaction becomes more vigorous in the direction of the arrow, i.e. down the group)
c. WITH STRONG ACIDS:
2L + 2H⁺ → 2L⁺ + H₂ (g)
d. WITH HALOGENS:
2L + X₂ → 2LX
Salts or bases that are sparingly soluble in water contain the anions:
OH⁻, PO₄³⁻,
ClO₄⁻, and
[Co(NO₂)₆]³⁻
Group Properties of the Alkali Elements
1. Electron configuration: [X] ns¹
2. Atomic mass
3. Atomic radius (nm)
4. Electronegativity: low (between 0.7 and 1.0)
5. Melting point (°C): above room temperature (between 28.7° and 180.5°)
6. Ionization energy (kJ/mol): between 376 and 519
7. Oxidation potential (volts): positive, between 2.71 and 3.02 (the metals are reducing agents)
8. Oxidation state: +1 for all members
Notes:
[X] = a noble-gas element (He, Ne, Ar, Kr, Xe, Rn)
n = period number (2, 3, 4, 5, 6, 7)
→ = increases in the direction of the arrow
Hydrogen Chloride, Bromide, and Iodide (HCl, HBr, HI)
Note: properties become larger/stronger in the direction of the arrow.
Reducing strength: increases from HCl to HI
Stability toward heat: decreases from HCl to HI
1. Form at ordinary temperature: colorless gases
2. In non-polar solvents (benzene/toluene): soluble; the solution does not conduct electricity
3. In water: soluble; the solution conducts electricity
4. With concentrated H₂SO₄ (an oxidizer): HCl is not oxidized; HBr is oxidized to Br₂; HI is oxidized to I₂
5. Stability on heating: HCl does not decompose; HBr decomposes slightly; HI decomposes into H₂ and I₂
Physical and Chemical Properties of the Halogen Elements
(table columns: Fluorine (F₂), Chlorine (Cl₂), Bromine (Br₂), Iodine (I₂))
1. Molecule: F₂, Cl₂, Br₂, I₂
2. Physical state (room temperature)
3. Color of the gas/vapor: pale yellow; yellow-green; red-brown
4. Organic solvents: CCl₄, CS₂
5. Color of the solution (in solvent 4): colorless; colorless
6. Solubility / oxidizing strength: increases in the direction of the arrow
7. Reactivity toward H₂ gas
8. Displacement reactions of halide compounds:
F₂ displaces X = Cl, Br, I: F₂ + 2KX → 2KF + X₂
Cl₂ displaces X = Br and I: Cl₂ + 2KX → 2KCl + X₂
Br₂ displaces X = I: Br₂ + 2KX → 2KBr + X₂
I₂ cannot displace F, Cl, or Br.
9. Reaction with metals (M): 2M + nX₂ → 2MXₙ (n = highest valence of the metal)
10. With a cold strong base MOH: X₂ + 2MOH → MX + MXO + H₂O (disproportionation)
11. With a hot strong base: 3X₂ + 6MOH → 5MX + MXO₃ + 3H₂O (disproportionation)
12. Formation of oxyacids: all form oxyacids except F
Note:
I₂ dissolves in KI solution, forming a polyiodide salt: I₂ + KI → KI₃
I₂ dissolved in alcohol gives a brown solution.
Generalization of a Result on Solvable Groups
This question concerns finite groups.
It is a well-known fact that every subgroup of a solvable group must again be solvable; this is easily proven by looking at the derived series of a given subgroup.
What I have been thinking about for awhile now is if/how this generalizes to arbitrary finite groups? Specifically, given some set $S$ of finite simple groups, consider the set $\Gamma_S$ of all
finite groups whose composition series only includes groups in $S$. Then the above statement on solvable groups can be rephrased as:
THEOREM: Let $S = \{ \mathbb{Z}/p\mathbb{Z}\}_{p\in P}$. If $G\in \Gamma_S$ then $\forall H \leq G$ one has $H\in\Gamma_S$.
Trying to generalize to arbitrary $S$, the obvious generalization to try is:
CONJECTURE (1): For $S$ an arbitrary set of finite simple groups, if $G\in \Gamma_S$ then $\forall H \leq G$ one has $H\in\Gamma_S$.
Unfortunately, this is easily seen to be false; let $S = \{A_6\}$. Then $A_6\in \Gamma_S$ but $A_5\leq A_6$ and $A_5\notin \Gamma_S$. The failure in this example then leads me to the following second
attempt at a generalization:
CONJECTURE (2): Let $S$ be an arbitrary set of finite simple groups, and $c(S)$ denote the set of all finite simple groups which appear as composition factors of some subgroup of some element of
$S$. If $G\in \Gamma_S$ then $\forall H \leq G$ one has $H\in \Gamma_{c(S)}$.
I have been trying to figure out how to prove Conjecture (2). One thought is to use some appropriate analogue of the derived series for the general case, although coming up with the right analogue
seems elusive. I have also thought about using the characters of elements of $S$, but this too does not lead to any immediate insights.
So, does anyone know whether Conjecture (2) is indeed true, and if so have either a (short enough for a post) proof or reference to where this question or similar ones might have been considered
gr.group-theory finite-groups solvable-groups
I don't know whether conjecture 2 is true. I feel it is false because (I suspect) one can engineer an (arbitrarily long but finite) sequence of groups H_1 <= H_2, ... such that H_i is not in the
composition series of any decomposition of H_{i+1}. Gerhard "Ask Me About Tame Congruences" Paseman, 2011.12.15 – Gerhard Paseman Dec 15 '11 at 22:52
@Gerhard: That's pretty much why I'm worried about Conjecture (2)'s validity; in the simplest case I'm wondering if something like taking $S = \{A_5 ,A_6 ,A_7\}$ then having an $A_8$ show up as a
subgroup of some $G\in\Gamma_S$; there are no obvious obstructions that I can see. – ARupinski Dec 15 '11 at 23:07
I am confused why this would be false. If $1=N_n\lhd N_{n-1}\lhd\cdots\lhd N_1\lhd G$ is a composition series, then induction on $n$ should be enough. Let $H$ be a subgroup of $G$. If $n=1$, then
clearly we're OK. Otherwise, $H/(H\cap N_1)$ is a subgroup of $G/N_1$, so that's OK, and by induction so is $H\cap N_1\le N_1$. – Steve D Dec 15 '11 at 23:29
In more generality, let $\mathbf{P}$ be the closure operator of "poly-" and $\mathbf{S}$ the closure operator of "subgroups". Then $\Gamma_S=\mathbf{P}S$, and you're asking if $\mathbf{S}\mathbf
{P}S\le \Gamma_{c(S)}$. But the Schreier refinement theorem guarantees that $\mathbf{P}\mathbf{S}S\le \Gamma_{c(S)}$, and it is always true, on the level of closure operators, that $\mathbf{S}\
mathbf{P}\le \mathbf{P}\mathbf{S}$. – Steve D Dec 15 '11 at 23:58
What Steve said. You don't even need induction: $H\cap N_1$ is a subgroup of $N_1$, which in turn is in $S$, so all terms in the composition series of $H\cap N_1$ are in $c(S)$. I actually don't
think that this question is at "MO-level": it would be a nice exercise for an introductory group theory course. – Alex B. Dec 16 '11 at 0:34
1 Answer
Say that a group H is a divisor or factor of a group G if H is a quotient group of a subgroup of G. Let C be a family of finite simple groups and let C' be the smallest class of
finite groups containing C that is closed under finite direct products and divisors.
The following are equivalent for a finite group G:
1. Each simple group divisor of G belongs to C.
2. G has a subnormal series ending at 1 with successive quotients in C'.
3. Same as 2 but with a normal series.
4. G embeds in an iterated wreath product of simple group divisors of groups in C.
5. Each composition factor of G is a divisor of a group in C.
The class of groups above is closed under extensions and subgroups.
The book of Ribes and Zalesskii on profinite groups has information about formations and generalizations, which seems what the question is about. – Benjamin Steinberg Dec 16 '11 at
I think another common term for "divisor" is "section". – Steve D Dec 16 '11 at 0:44
Steve, thanks! I knew group theorists had another name for it, but I couldn't recall it. – Benjamin Steinberg Dec 16 '11 at 1:02
"Factor group" is an old-fashioned synonym for "quotient group". Subgroups of quotient groups are the same as quotient groups of subgroups. I like to call them subquotients. – Tom
Goodwillie Dec 16 '11 at 1:33
I had meant section where I wrote factor. I remembered the term chief factors in Marshall Hall for composition factors and thought that he must use factor for what I would call
divisor. A subgroup of a quotient is of course a quotient of a subgroup but the converse is false. If G is a simple group, then it has no subquotients because it has no quotients.
But it can have many divisors. – Benjamin Steinberg Dec 16 '11 at 1:44
There is much debate about who is more awesome...right-handers or south-paws. Would you want Shoeless Joe Jackson on your team, or Nomar Garciaparra? That's why it's better to have a switch hitter
like Chipper Jones at bat; we don't have to pick.
In calculus, right-hand sums are similar to left-hand sums. However, instead of using the value of the function at the left endpoint of a sub-interval to determine rectangle height, we use the value
of the function at the right endpoint of the sub-interval.
As with left-hand sums we can find the values of the function that we need using formulas, tables, or graphs.
Right-Hand Sums with Formulas
If we're given a formula for the function, we can use this formula to calculate the value of the function at the right endpoint of each sub-interval.
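For instance, a right-hand sum from a formula can be sketched in a few lines of code (Python; f(x) = x² on [0, 1] is an illustrative choice, with exact area 1/3):

```python
def right_hand_sum(f, a, b, n):
    """Right-hand Riemann sum of f on [a, b] using n equal sub-intervals:
    each rectangle's height is f evaluated at the sub-interval's right endpoint."""
    dx = (b - a) / n
    return sum(f(a + (i + 1) * dx) for i in range(n)) * dx

# Example: f(x) = x^2 on [0, 1].  Because f is increasing here, every
# right-hand sum overestimates the exact area of 1/3.
estimate = right_hand_sum(lambda x: x ** 2, 0.0, 1.0, 1000)
```

As n grows, the estimate approaches 1/3 from above; with only 4 sub-intervals the same routine gives 0.46875.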
Right-Hand Sums with Tables
In order to find a right-hand sum we need to know the value of the function at the right endpoint of each sub-interval. We can take a right-hand sum if we have a table that contains the appropriate
function values.
Sample Problem
Some values of the decreasing function f ( x ) are given by the following table:

x    | 0  | 1  | 2  | 3  | 4
f(x) | 20 | 18 | 17 | 11 | 3
• Use a Right-Hand Sum with 2 sub-intervals to estimate the area between the graph of f and the x-axis on the interval [0,4].
We don't know what the function f looks like, but we know these points are part of it:
Dividing the interval [0,4] into 2 equally-sized sub-intervals gives us sub-intervals of length 2. The height of the rectangle on [0,2] is f ( 2 ) = 17, so the area of this rectangle is
height ⋅ width = 17(2) = 34.
The height of the rectangle on [2,4] is f (4) = 3, so the area of this rectangle is
height ⋅ width = 3(2) = 6.
Adding the areas of these rectangles, we estimate the area between the graph of f and the x-axis on [0,4] to be
34 + 6 = 40.
• Use a Right-Hand Sum with 4 sub-intervals to estimate the area between the graph of f and the x-axis on the interval [0,4].
Answer. Dividing the interval [0,4] into 4 evenly-sized sub-intervals produces sub-intervals of length 1.
Sub-interval [0,1]: This rectangle has height f ( 0 ) = 20 and width 1, so its area is 20.
Sub-interval [1,2]: This rectangle has height f (1) = 18 and width 1, so its area is 18.
Sub-interval [2,3]: This rectangle has height f ( 2 ) = 17 and width 1, so its area is 17.
Sub-interval [3,4]: This rectangle has height f (3) = 11 and width 1, so its area is 11.
Adding the areas of these rectangles, we estimate the area between the graph of f and the x-axis on [0,4] to be
20 + 18 + 17 + 11 = 66.
• Are the estimates in parts (b) and (c) over- or under-estimates for the area between the function f and the x-axis on the interval [0,4]?
Answer. We don't know what the function f looks like exactly, but we know it's a decreasing function that passes through these points.
That means it must look something like this:
Our estimates in (b) and (c) were both underestimates, because the rectangles didn't cover all of the area between the graph of f and the x-axis on [0,4]:
• Could we use a right-hand sum with more than 4 sub-intervals to estimate the area between the function f and the x-axis on the interval [0,4]?
Answer. No. The table doesn't contain enough data for us to divide the interval [0,4] into more than 4 sub-intervals. If we tried to use 8 sub-intervals, for example, we would need to know f (0.5),
and that value isn't in the table.
Right-Hand Sums with Graphs
When finding a right-hand sum, we need to know the value of the function at the right endpoint of each sub-interval. We can find these values by looking at a graph of the function.
Right-Hand Sum Calculator Shortcuts
For a LHS, we only use values of the function at left endpoints of subintervals. We never use the value of the function at the right-most endpoint of the original interval.
For Right-Hand Sums, it's the other way around. For a RHS we only use values of the function at right endpoints, so we'll never use the value of the function at the left-most endpoint of the original interval.
Right-Hand Sums with Math Notation
After learning the notation for left-hand sums, the notation for right-hand sums requires a very slight adjustment. Assume that we're using sub-intervals all of the same length and we want to
estimate the area between the graph of f ( x ) and the x-axis on the interval [a,b].
An interval of the form [a,b] has length ( b – a ). If we wish to divide the interval [a,b] into n equal sub-intervals, each sub-interval will have length Δx = ( b – a )/n.
The endpoints of the sub-intervals are x[0] = a, x[1] = a + Δx, x[2] = a + 2Δx, ..., x[n] = b.
To take a RHS with n sub-intervals we find the value of f at every endpoint but the first, add these values, and multiply by Δx.
RHS(n) = [f (x[1]) + f (x[2]) + ... + f (x[ n – 1]) + f (x[n])]Δx
Summation notation for extra fanciness is optional: RHS(n) = Σ (from i = 1 to n) of f (x[i]) Δx.
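The right-hand sum formula translates directly into code. Here is a minimal Java sketch (the method name `rhs` and the use of `DoubleUnaryOperator` are our own choices, not part of the text):

```java
import java.util.function.DoubleUnaryOperator;

public class RightHandSum {
    // RHS(n): sample f at the right endpoint of each of n equal
    // sub-intervals of [a, b] and multiply the sum by dx = (b - a)/n.
    public static double rhs(DoubleUnaryOperator f, double a, double b, int n) {
        double dx = (b - a) / n;
        double sum = 0.0;
        for (int i = 1; i <= n; i++) {        // right endpoints x_1 .. x_n
            sum += f.applyAsDouble(a + i * dx);
        }
        return sum * dx;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = x -> x * x + 1;   // f(x) = x^2 + 1, as in the exercises
        System.out.println(rhs(f, 0, 4, 2));      // prints 44.0
        System.out.println(rhs(f, 0, 4, 4));      // prints 34.0
    }
}
```

For f ( x ) = x^2 + 1 on [0,4] this reproduces the hand computation: with 2 sub-intervals the right endpoints are 2 and 4, giving (5 + 17)·2 = 44; with 4 sub-intervals, (2 + 5 + 10 + 17)·1 = 34.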
Right-Hand Sums with Sub-Intervals of Different Lengths
As with left-hand sums, we can take right-hand sums where the sub-intervals have different lengths.
Sample Problem
Values of the function f are shown in the table below. Use a right-hand sum with the sub-intervals indicated by the data in the table to estimate the area between the graph of f and the x-axis on the
interval [1,8].
Answer. The sub-intervals given in this table aren't all the same length. Most have length 2, but one has length 1.
On sub-interval [1,3] the height of the rectangle is f (3) = 5 and the width is 2, so the area is
5(2) = 10.
On sub-interval [3,4] the height of the rectangle is f (4) = 3 and the width is 1, so the area is
3(1) = 3.
On sub-interval [4,6] the height of the rectangle is f (6) = 5 and the width is 2, so the area is
5(2) = 10.
On sub-interval [6,8] the height of the rectangle is f (8) = 1 and the width is 2, so the area is
1(2) = 2.
Adding the areas of the rectangles, we estimate the area between f and the x-axis on [1,8] to be
10 + 3 + 10 + 2 = 25.
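The same computation works for unequal sub-intervals if we keep the endpoints in an array. A hedged Java sketch (the names are illustrative, and the value used for f(1) below is a placeholder, since a right-hand sum never reads the left-most value):

```java
public class TableRhs {
    // Right-hand sum from tabulated data: xs holds the sub-interval
    // endpoints x_0 < x_1 < ... < x_n and fs holds f(x_0) .. f(x_n).
    // Each rectangle uses the RIGHT endpoint's value, fs[i], with
    // width xs[i] - xs[i-1]; the sub-intervals need not be equal.
    public static double rhsFromTable(double[] xs, double[] fs) {
        double sum = 0.0;
        for (int i = 1; i < xs.length; i++) {
            sum += fs[i] * (xs[i] - xs[i - 1]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // Data from the sample problem on [1, 8]; f(1) is not given in
        // the text and is never used, so 9 is just a placeholder.
        double[] xs = {1, 3, 4, 6, 8};
        double[] fs = {9, 5, 3, 5, 1};
        System.out.println(rhsFromTable(xs, fs));   // prints 25.0
    }
}
```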
Let R be the region between the graph y = f ( x ) = x^2 + 1 and the x-axis on the interval [0,4]:
Use a right-hand sum with two sub-intervals to approximate the area of R.
Let R be the region between the graph y = f ( x ) = x^2 + 1 and the x-axis on the interval [0,4]. Use a Right-Hand Sum with 4 sub-intervals to estimate the area of R.
Let S be the region between the graph of g and the x-axis on the interval [0,4].
Use a right-hand sum with 2 sub-intervals to estimate the area of S. Is this an under-estimate or an over-estimate?
Let f ( x ) = 2 + x^2 and let R be the region between the graph of f and the x-axis on the interval [0,8].
Use a right-hand sum with 4 sub-intervals to estimate the area of R.
Let f ( x ) = 4x and let R be the region between the graph of f and the x-axis on the interval [1,2]. Use a right-hand sum with 4 sub-intervals to estimate the area of R.
Let f (x) = 2x on [2,10]. Find RHS(5). That is, use a right-hand sum with 5 sub-intervals to estimate the area between the graph of f and the x-axis on [2,10].
Let R be the region between the graph y = f ( x ) = x^2 + 1 and the x-axis on the interval [0,4].
• Draw R and the 8 rectangles that result from using a right-hand sum with 8 sub-intervals to approximate the area of R.
• Use a Right-Hand Sum with 8 sub-intervals to approximate the area of R.
• Is your approximation an under-estimate or an over-estimate to the actual area of R?
Let S be the area between the graph of y = f ( x ) = 2^x and the x-axis on the interval [1,6].
• Draw S.
• Use a Right-Hand Sum with 2 subintervals to approximate the area of S. Draw S and the rectangles used in this Right-Hand Sum on the same graph.
• Use a Right-Hand Sum with 5 subintervals to approximate the area of S. Draw S and the rectangles used in this Right-Hand Sum on the same graph.
• Are your approximations in parts (b) and (c) bigger or smaller than the actual area of S?
Let W be the area between the graph of the function shown and the x-axis on the interval [1,4].
• Draw W.
• Use a Right-Hand Sum with 3 subintervals to approximate the area of W. Draw W and the rectangles used in this Right-Hand Sum on the same graph.
• Use a Right-Hand Sum with 6 subintervals to approximate the area of W. Draw W and the rectangles used in this Right-Hand Sum on the same graph.
• Are your approximations in parts (b) and (c) bigger or smaller than the actual area of W?
The table below shows some values of the increasing function f ( x ).
• Use a right-hand sum with one sub-interval to estimate the area between the graph of f and the x-axis on the interval [2,8].
• Use a right-hand sum with three sub-intervals to estimate the area between the graph of f and the x-axis on the interval [2,8].
• Are your answers in (a) and (b) over- or under-estimates of the actual area between the graph of f and the x-axis on the interval [2,8]?
Some values of the decreasing function g are given in the table below:
• Use a right-hand sum with 3 sub-intervals to estimate the area between the graph of g and the x-axis on the interval [-1,2].
• Use a right-hand sum with 2 sub-intervals to estimate the area between the graph of g and the x-axis on the interval [-1,2].
• Are your answers in (a) and (b) over- or under-estimates for the actual area between the graph of g and the x-axis on the interval [-1,2]?
• Let W be the region between the graph of f and the x-axis on the interval [-20,20].
Use a right-hand sum with 4 sub-intervals to estimate the area of W.
• Let Z be the region between the graph of g and the x-axis on the interval [-4,0].
• Use a right-hand sum with 2 sub-intervals to estimate the area of Z.
• Use a right-hand sum with 4 sub-intervals to estimate the area of Z.
• Are your answers in (a) and (b) over- or under- estimates for the area of Z?
Let f ( x ) = x^2 + 6x + 9. Use a right-hand sum with 6 sub-intervals to estimate the area between the graph of f and the x-axis on the interval [-6,-3].
Let f ( x ) = -x^2 + 2x + 8. Use a right-hand sum with 8 sub-intervals to estimate the area between the graph of f and the x-axis on the interval [0,4].
Let g be a function with values given by the table below. Use a right-hand sum with 3 sub-intervals to estimate the area between the graph of g and the x-axis on the interval [0,12].
Let h be a function with values given by the table below. Use a right-hand sum with 9 sub-intervals to estimate the area between the graph of h and the x-axis on the interval [-9,9].
The function f ( x ) on the interval [0,30] is graphed below. Use a right-hand sum with 3 sub-intervals to estimate the area between the graph of f and the x-axis on this interval.
Use a right-hand sum with the sub-intervals indicated by the data in the table to estimate the area between the graph of f and the x-axis on the interval [-10,1].
PICList Thread
'Big positioning system.'
1998\08\29@172429 by Gabriel Caffese
Dear Piclisters, I have this problem:
I have to get the X,Y,Z position of two cranes (in Spanish: grúas) that run
on parallel rails and are used to lift very big machines (most the size of
a medium car). There are collision possibilities, which now are avoided with the
use of the EYE!!
I have received the proposal to solve the problem with an electronic
device, but I have the problem of getting the position on the X, Y and Z axes to
calculate the possibility of collision.
Have you got any idea about it??
1998\08\29@185510 by Mark Willis
A numerical analysis problem! I love it! <G>
One Quick idea; I'll call it the "node evaluation method":
For each combination of parts of the cranes that could collide;
Evaluate: (A * (Abs(X1-X2))) + (B * (Abs(Y1-Y2))) + (C * (Abs(Z1-Z2)))
If this equation gets near zero, panic <G> Pick A, B, and C values
for weighting (i.e. if the cranes often pass near each other, with OK
clearance, then you want that to not false trigger! But if they're
likely to collide you want enough weight to stop the system in time.)
For "extra credit" (i.e. faster speed) you can predict ahead the
position of each of the cranes' parts, and thus get advanced warning of
a problem; Slow down if a collision is predicted (stop before
collision.) This kind of system has inertia, and takes a while to stop,
so this is a good idea, perhaps... (Could even do look-ahead if pieces
are moving towards each other and closer than a certain threshold.)
On a "mainframe" (AT or better desktop machine) I'd probably go with a
more complex method (calculate wire frame models of the crane parts,
track the closest approaches of the pieces, something like that? I'm
out of practice right now.) The node method would work, I haven't
implemented that on a PIC though. I'll love watching this thread <G> I
think that'll work find on a PIC. I can help with the algorithm if you
need, I'm a little backlogged in coding though so I'd best stay with
algorithm help for now...
Speedup hacks:
Sorting which pieces are closest is a good idea; two pieces of the
crane that are far from each other are quite unlikely to collide! This
could speed up things (prune the evaluations needed to be performed.)
Way to do this: When you evaluate Abs(), you know the whole sum of
three Abs(x, y & Z's) will be quite non-zero if ANY Abs(whatever) is
larger than a certain value - so go on to the next node comparison.
(pick the Abs() value likely to be largest but most critical for
collisions - if your nodes are each always at the same height, start
with Z values of the nodes, as something that's lots higher or lower
than the other something, cannot collide with that other node, except in
case of severe earthquake or other disaster! {Some would have the
system turn itself off, if an earthquake hits. Worth thinking about!
Cheap to do, weight on a microswitch, this can be bought for gas mains
etc. nowadays.})
Abs(x) can be done as Square Root (Square (x)) but quicker to
implement it as (x >= 0 ? x : (-x)) in C (i.e. if the number's positive
return the number, else return 0 - the number <G>) I once saw this
implemented as a sum of Squared numbers, to speed up the process (that
does work, but just sign bit flipping makes more sense!)
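For concreteness, here is a minimal sketch of the node-evaluation check described above, written in Java for readability (on a PIC this would be fixed-point C or assembly; the names, unit weights, and threshold are illustrative choices, not from the thread):

```java
public class NodeCheck {
    // Weighted L1 "proximity" between two crane nodes, as in the
    // node-evaluation method: A*|x1-x2| + B*|y1-y2| + C*|z1-z2|.
    // A, B, C weight each axis; values near zero mean the nodes are close.
    public static double proximity(double a, double b, double c,
                                   double x1, double y1, double z1,
                                   double x2, double y2, double z2) {
        return a * Math.abs(x1 - x2) + b * Math.abs(y1 - y2) + c * Math.abs(z1 - z2);
    }

    // Prune cheaply, as suggested above: with weights >= 1, if any single
    // axis gap already exceeds the alarm threshold, the total sum must too,
    // so the nodes cannot be "too close". Check the height (Z) axis first.
    public static boolean tooClose(double threshold,
                                   double x1, double y1, double z1,
                                   double x2, double y2, double z2) {
        if (Math.abs(z1 - z2) > threshold) return false;
        return proximity(1, 1, 1, x1, y1, z1, x2, y2, z2) < threshold;
    }
}
```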
Mark Willis, spam_OUTmwillisTakeThisOuTnwlink.com
Gabriel Caffese wrote:
{Quote hidden}
1998\08\29@203937 by Michael Hagberg
I think I'd go a different approach. Linear algebra is my forte, so I tend
to use it a lot.
You sample your positional data, and turn that into a velocity curve. You
can fit this, for most cases, to a linear or quadratic formula. All the
other parts of the crane, which may rotate or telescope, etc. can be
calculated by summing vectors. In order to predict where the parts will
be, you check the values of the first and second order derivatives (I know,
calculus! ugh!, but this one's easy, really). If f' is acceleration, and
f" is the rate of change of acceleration. You try to solve the system of
equations, if the formulas equate, then you collide. You look at f' to
figure out what the positional data will be like at some point in the
future, and use f" to figure out whether to over or under-estimate your
coordinate values.
Sounds complex, but I used this system in a model train world once. I had
6 trains wizzing around the tracks, and the switching, braking, and power
were all controlled by a TRS-80 CoCo2. Not a whole lot of CPU there (Z80,
of course), but it got the job done admirably.
You may lose some elegance with the lack of an FPU, but the most you have
to do is the occasional square root. All the rest is integer math.
At 03:53 PM 8/29/98 -0700, you wrote:
>A numerical analysis problem! I love it! <G>
1998\08\29@233950 by Alberto Smulders
Howdy Gabriel
give us some more info.
1) like how long is the X, the Y and the Z?
2) do the rails operate in weather? at night?
3) how much dinero can you spend?
Gus Calabrese Lola Montes WFT Electronics
4337 Raleigh Street Denver, CO 80212
303 964-9670......voicemail wftspam_OUTfrii.com
Alternate: 791 High Street Estes Park, CO 80517
if no success with @spam@wftKILLspamfrii.com, try ....
Liliana Borcea
Peter Field Collegiate Professor of Mathematics.
• Ph.D. (1996) Stanford University. Scientific Computing and Computational Mathematics.
• M.S. (1994) Stanford University. Scientific Computing and Computational Mathematics.
• B.S. (1987) University of Bucharest, Romania. Department of Applied Physics.
Research interests:
• Inverse scattering in random media.
• Electro-magnetic inverse problems.
• Effective properties of composite materials, transport in high contrast, heterogeneous media.
How to reach me:
Department of Mathematics
University of Michigan
4087 East Hall
530 Church Street
Ann Arbor, MI 48109-1043
E-mail : borcea(at)umich.edu
Telephone: (734) 647-6579
FAX : (734) 763-0937
Topics in Mathematical Modeling is an introductory textbook on mathematical modeling. The book teaches how simple mathematics can help formulate and solve real problems of current research interest
in a wide range of fields, including biology, ecology, computer science, geophysics, engineering, and the social sciences. Yet the prerequisites are minimal: calculus and elementary differential
equations. Among the many topics addressed are HIV; plant phyllotaxis; global warming; the World Wide Web; plant and animal vascular networks; social networks; chaos and fractals; marriage and
divorce; and El Niño. Traditional modeling topics such as predator-prey interaction, harvesting, and wars of attrition are also included. Most chapters begin with the history of a problem, follow
with a demonstration of how it can be modeled using various mathematical tools, and close with a discussion of its remaining unsolved aspects.
Designed for a one-semester course, the book progresses from problems that can be solved with relatively simple mathematics to ones that require more sophisticated methods. The math techniques are
taught as needed to solve the problem being addressed, and each chapter is designed to be largely independent to give teachers flexibility.
The book, which can be used as an overview and introduction to applied mathematics, is particularly suitable for sophomore, junior, and senior students in math, science, and engineering.
"This beautifully produced book should provide a joyful and stimulating reading experience for any layman who is curious about real-life events in the context of mathematical modelling, and it
provides an excellent entry point to more advanced areas such as mathematical biology or climate modelling."--Z. Q. John Lu, Significance
"What do global warming, predator-prey interactions, and the World Wide Web have in common? All of these disparate phenomena can be modeled using mathematics. In Topics in Mathematical Modeling, K.
K. Tung demonstrates math's relevance to problems of current research interest in biology, ecology, computer science, geophysics, engineering, and the social sciences."--Scientific American Book Club
"[T]his is a good introductory book about the nature and purpose of mathematical modeling. The topics chosen and the way in which they have been motivated and presented will help a wide range of
students to 'see the point' and thereby arouse and stimulate their confidence about their mathematical problem solving skills."--Bob Anderssen, Australian Mathematics Society
"I was so impressed by the breadth of examples contained in its 336 pages that I immediately set about using it to update one of my own undergraduate courses. . . . A wonderful source book for all
kinds of undergraduate mathematical activities. . . . Extremely clear. . . . It is highly recommended."--Chris Howls, Times Higher Education
Subject Area:
• Mathematics
Summary: On the Solution-Space Geometry
of Random Constraint Satisfaction Problems
Dimitris Achlioptas
Department of Computer Science
University of California Santa Cruz
Federico Ricci-Tersenghi
Department of Physics
University of Rome "La Sapienza"
For a number of random constraint satisfaction problems, such as random k-SAT and random graph/hypergraph coloring, there are very good estimates of the largest constraint density for which solutions exist. Yet, all known polynomial-time algorithms for these problems fail to find solutions even at much lower densities. To understand the origin of this gap we study how the structure of the space of solutions evolves in such problems as constraints are added. In particular, we prove that much before solutions disappear, they organize into an exponential number of clusters, each
Famous Theorems of Mathematics/Law of large numbers
Given X1, X2, ... an infinite sequence of i.i.d. random variables with finite expected value E(X1) = E(X2) = ... = µ < ∞, we are interested in the convergence of the sample average $\overline{X}_n = \tfrac{1}{n}(X_1 + \cdots + X_n).$
The weak law
Theorem: $\overline{X}_n \, \xrightarrow{P} \, \mu \qquad\textrm{for}\qquad n \to \infty.$
This proof uses the assumption of finite variance $\operatorname{Var} (X_i)=\sigma^2$ (for all $i$). The independence of the random variables implies no correlation between them, and we have that
$\operatorname{Var}(\overline{X}_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$
The common mean μ of the sequence is the mean of the sample average:
$E(\overline{X}_n) = \mu.$
Using Chebyshev's inequality on $\overline{X}_n$ results in
$\operatorname{P}( \left| \overline{X}_n-\mu \right| \geq \varepsilon) \leq \frac{\sigma^2}{n\varepsilon^2}.$
This may be used to obtain the following:
$\operatorname{P}( \left| \overline{X}_n-\mu \right| < \varepsilon) = 1 - \operatorname{P}( \left| \overline{X}_n-\mu \right| \geq \varepsilon) \geq 1 - \frac{\sigma^2}{n \varepsilon^2 }.$
As n approaches infinity, the expression approaches 1. And by definition of convergence in probability (see Convergence of random variables), we have obtained
$\overline{X}_n \, \xrightarrow{P} \, \mu \qquad\textrm{for}\qquad n \to \infty.$
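The statement is easy to see numerically. A small Java simulation (illustrative only, not part of the proof) with $X_i \sim$ Uniform(0,1), so $\mu = 0.5$:

```java
import java.util.Random;

public class WeakLawDemo {
    // Sample mean of n i.i.d. Uniform(0,1) draws; here mu = 0.5.
    public static double sampleMean(Random rng, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += rng.nextDouble();
        }
        return sum / n;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);   // fixed seed for a reproducible run
        for (int n = 10; n <= 100000; n *= 10) {
            // The deviation |mean - mu| shrinks as n grows,
            // as the weak law (and the Chebyshev bound above) predicts.
            System.out.printf("n = %6d   |mean - 0.5| = %.5f%n",
                              n, Math.abs(sampleMean(rng, n) - 0.5));
        }
    }
}
```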
Last modified on 24 July 2009, at 15:43
Trigonometric Equations
Trigonometric Equations Word Problems
• A pharmacist is to prepare 15 milliliters of special eye drops for a glaucoma patient. The eye-drop solution must have a 2% active ingredient, but the pharmacist only has 10% solution and 1% solution in stock. How much of each type of solution should be used to fill the prescription?
• 4th root of 3 times the 4th root of -3
• The decay rate of a certain chemical is 9.7%. What is its half life?
• There is a 50 ft. beam that has cast a shadow of 13 ft. across the ground; what is the angle at which the shadow was cast?
• The depth of water at the end of a pier varies with the tides throughout the day. Today the high tide occurs at 2:17 a.m. with a depth of 9 m. The low tide occurs at 8:31 a.m. with a depth of 3.5 m. Assuming the tide ebbs and flows over time in a periodic manner, find a trigonometric equation that models the depth of the water x hours after midnight.
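One possible model for the tide problem (our own worked solution, not from the source): take the midline (9 + 3.5)/2 = 6.25 m, amplitude (9 - 3.5)/2 = 2.75 m, and period twice the gap from high tide to low tide, 2·(8:31 - 2:17) ≈ 12.47 h, with a cosine that peaks at high tide:

```java
public class TideModel {
    static final double T_HIGH = 2 + 17 / 60.0;        // 2:17 a.m., in hours
    static final double T_LOW  = 8 + 31 / 60.0;        // 8:31 a.m., in hours
    static final double MID    = (9 + 3.5) / 2;        // midline depth, 6.25 m
    static final double AMP    = (9 - 3.5) / 2;        // amplitude, 2.75 m
    static final double PERIOD = 2 * (T_LOW - T_HIGH); // about 12.47 h

    // Depth of the water x hours after midnight.
    public static double depth(double x) {
        return MID + AMP * Math.cos(2 * Math.PI * (x - T_HIGH) / PERIOD);
    }
}
```

The model returns 9 m at 2:17 a.m. and 3.5 m at 8:31 a.m., matching the given data.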
Forty seventh British Mathematical Colloquium
This was held at Heriot-Watt: 3 - 6 April 1995
The enrolment was 340. The chairman was J Howie and the secretary was A R Prince.
Minutes of meetings, etc. are available by clicking on a link below
General Meeting Minutes for 1995
Scientific Committee Meeting Minutes for 1995
The plenary speakers were:
Bombieri, E How many rational points can there be on a curve?
Carter, R W New developments in the representation theory of Lie algebras, algebraic groups and quantum groups
Pisier, G Operator spaces and group representations
Serre, J-P Finite subgroups of Lie groups
The morning speakers were:
Archbold, R J Multiplicity and primality in spectra of (group) C*-algbras
Boettcher, A Toeplitz operators with piecewise continuous symbols - a neverending story ?
Bruns, W On multi-graded resolutions
Cremona, J E The arithmetic of elliptic curves
Cuntz, J Excision in bivariant periodic cyclic cohomology
Etheridge, A M SuperBrownian motion
Gilbert, N D Groups that are knot-like groups
Gowers, W T Ramsey theory, games and the structure of Banach space
Hinz, A M Self-adjoint Schrödinger operators
Jerrum, M R Random walks and approximate computation
Kropholler, P H Homological methods for group-graded rings
Lenagan, T H Catenarity in quantum algebras
Levy, L S Krull-Schmidt theorems in dimension 1
Lewis, J T Entropy, large deviations and the thermodynamical formalism
Mason, L J Twistor theory, self-duality and integrability
Paris, J B Non-monotone reasoning
Reid, A W The geometry of incompressible surfaces in hyperbolic 3-manifolds
Reid, M The McKay correspondence for finite subgroups of SL(3,C)
Ringel, C The Hall algebra approach to quantum groups
Smith, R Cohomology of operator algebras
Taylor, R L Galois groups and Fermat's last theorem
Toland, J F Self-adjoint operators, cones and polynomials
Special session: Functional analysis Organiser: A M Sinclair
Boettcher, A Toeplitz operators with piecewise continuous symbols -- a never-ending story?
Cuntz, J Excision in bivariant periodic cyclic cohomology
Hinz, A Self-adjoint Schrödinger operators
Smith, R Cohomology of operator algebras
Special session: Ring theory Organiser: K A Brown
Bruns, W On multi-graded resolutions
Kropholler, P H Homological methods for group-graded rings
Lenagan, T H Catenarity in quantum algebras
Levy, L S Krull-Schmidt theorems in dimension 1
Need help solving this Java program exercise
January 31st, 2013, 03:45 PM #1
Junior Member
Join Date
Jan 2013
Thanked 0 Times in 0 Posts
Assignment objectives:
- Input / output
- Decision making statements
- Loops
- Methods
Shoot a watermelon from a cannon and write a program in JAVA to compute the following given the initial velocity and initial angle of trajectory:
1. Maximum horizontal distance
2. Maximum vertical distance
3. Total travel time
4. Elapsed time when it reaches maximum vertical distance
5. In addition if there is an obstacle in its path, would the cannon ball clear it or not.
Initial velocity, initial angle (in degrees), the obstacle distance from the cannon and the height of the obstacle are given by the user; use any way you like to ask for the data.
The user must be allowed to try different sets of inputs as many times as desired.
Formulas are:
x = v·cos(θ) · t
y = v·sin(θ) · t - (g·t²)/2
Angles are in radians for the formulas.
Conversion:
radian = (degree * 3.14)/180
1. You must use methods for each of the above calculations.
2. You must use Dialog box for inputs from the user
3. Check the validity of input data
Sample output:
Initial velocity = ???
Initial angle = ????
Obstacle distance from the cannon = ????
Obstacle height = ?????
Maximum horizontal distance = ?????
Do you have any specific questions about your assignment?
Please post your code and any questions about problems you are having.
If you don't understand my answer, don't ignore it, ask a question.
Yes, I am very new to Java programming and I think this site will be very helpful in guiding me on the correct path. This is my first course regarding the program. I have no clue how to set up the correct methods for this program; can you please help me? I am so confused, thank you.
Here's some links with info on how to write a method:
Defining Methods (The Java™ Tutorials > Learning the Java Language > Classes and Objects)
Passing Information to a Method or a Constructor (The Java™ Tutorials > Learning the Java Language > Classes and Objects)
Returning a Value from a Method (The Java™ Tutorials > Learning the Java Language > Classes and Objects)
If you have any specific questions about your assignment, post them.
Before writing any code, you need to make a list of the simple steps that the program needs to do to solve the problem. Then work on the steps one at a time: Code it, compile it, fix errors and
execute it.
Then move to the next step in the list.
If you don't understand my answer, don't ignore it, ask a question.
Thank you so much. By any chance, can you please help me with the first part: getting started with computing the values given the initial velocity and initial angle of trajectory, or asking for the input so the user can enter different sets of inputs as many times as desired? Thank you.
asking for the input
You ask a question with a call to the println() method that will display a message on the console
and read the user's keyed in response with a method of the Scanner class.
If you don't understand my answer, don't ignore it, ask a question.
The Following User Says Thank You to Norm For This Useful Post:
DuaneLewis (January 31st, 2013)
I'm sorry, I am really new to this. Can you please show me a random example of asking a question with a call to the println() method?
Have you tried searching here on the forum? There are many examples of code here.
If you don't understand my answer, don't ignore it, ask a question.
What I Do When I'm Not Doing This Blog
From the department of self-promotion, let me call attention to the current volume of The Electronic Journal of Combinatorics. If you click on the link and scroll down to P164, you will find a
barn-burning, rhetorical masterpiece of a paper entitled, “Isoperimetric Numbers of Regular Graphs of High Degree With Applications to Arithmetic Riemann Surfaces.” And who wrote this paper, you ask,
the suspense practically killing you? That would be my long time friend and collaborator Dominic Lanphier, of Western Kentucky University, along with yours truly. Yay! Always nice to have one more
line item for the CV.
It occurs to me that while I occasionally do math posts around here, I've never actually said anything about my research. So let me say a few words regarding what this paper is about. Here's the abstract:
We derive upper and lower bounds on the isoperimetric numbers and bisection widths of a large class of regular graphs of high degree. Our methods are combinatorial and do not require a knowledge
of the eigenvalue spectrum. We apply these bounds to random regular graphs of high degree and the Platonic graphs over the rings Z[n]. In the latter case we show that these graphs are generally
non-Ramanujan for composite n and we also give sharp asymptotic bounds for the isoperimetric numbers. We conclude by giving bounds on the Cheeger constants of arithmetic Riemann surfaces. For a
large class of these surfaces these bounds are an improvement over the known asymptotic bounds.
Well, that’s a lot to parse. But let’s see if I can give you the gist of what’s going on, one sentence at a time. We start with:
We derive upper and lower bounds on the isoperimetric numbers and bisection widths of a large class of regular graphs of high degree.
In this context, a graph refers to a diagram in which certain pairs of large dots (called vertices) are connected by line segments called edges. A simple example is a square. The four corners of the
square are the vertices, and the sides of the square are the edges.
In general, a “graph” is simply a pictorial representation of data. You’re probably familiar with the idea of the graph of a function, which is a visual presentation of some abstract mathematical
relation. We sometimes refer to computer graphics, by which we mean a visual presentation of image data stored in a computer.
In chemistry, it has long been commonplace to draw diagrams depicting the structure of molecules in which the individual atoms are represented by their standard abbreviations, with the lines between
them denoting different types of bonds. These diagrams used to be called “chemicographs.” Writing in the 1870′s, mathematician Arthur Cayley shortened this to “graph” in the context of the more
general dot-line diagrams, and the name stuck.
Why study graphs? Simply put, there are a lot of physical situations that are conveniently represented by graphs. The dots might represent cities and the lines might represent airplane flights
between them. Or the dots might represent land masses and the lines represent bridges connecting them. Or the dots could represent countries and the lines represent countries sharing boundaries.
Basically, graphs just kept coming up in one application after another, and that’s the sort of thing that makes mathematicians suspect they are worth studying for their own sake.
Moving on, a graph is “regular” if every vertex has the same number of edges coming out of it. Each corner of the square has two edges coming out of it, so it is regular. If we now draw in one of the
diagonals, then two corners will have three edges while the other two corners have just two edges, so this graph is not regular. But if we now also draw in the second diagonal, then every corner has three edges and the graph is regular once again.
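To make the definition concrete, here is a small sketch (mine, not from the paper) that checks regularity for the square examples above. The edge lists and helper names are my own illustration:

```python
# Check regularity of a graph given as an edge list.
from collections import Counter

def degrees(edges):
    """Count how many edges touch each vertex."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def is_regular(edges):
    """A graph is regular when every vertex has the same degree."""
    return len(set(degrees(edges).values())) == 1

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_regular(square))                      # square: 2-regular
print(is_regular(square + [(0, 2)]))           # one diagonal: not regular
print(is_regular(square + [(0, 2), (1, 3)]))   # both diagonals: 3-regular
```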
In referring to a "large class" of regular graphs, we mean that our big fancy theorem doesn't quite apply to all regular graphs (D'oh!), but it does at least apply to an awful lot of them.
The term “high degree” is slightly vague at this point, though we make it very precise in the body of the paper. The degree of a vertex is simply the number of edges coming out of it. Since our
graphs are regular every vertex has the same degree, and we can refer simply to the degree of the graph. In referring to high degree we mean, roughly, that the graph has a lot of edges relative to
the number of vertices. If you imagine 1000 dots in a circle, each connected only to its two neighbors, then you have a regular graph of degree two on one thousand vertices. Pathetic. But now imagine
that each dot is connected to every other dot (except for itself). That’s a graph of degree 999. Now we’re talking! Basically, for our trick to work we need lots of edges in the graph.
Now, on to the isoperimetric number. One question you might ask about a graph is how easy it is to fracture. If the dots represent cities and the lines represent transmission channels, you don’t want
a situation where if one line goes down large numbers of people are cut off from one another.
Picture a barbell. Two enormous weights on either end connected by a thin bar. If you make a small cut through the center of the bar then suddenly big chunks of weight go flying off into space.
That’s a small isoperimetric number. Easy to fracture. A big isoperimetric number means that you are going to have to do an awful lot of cutting to splinter off a significant number of vertices. The
bisection width, meanwhile, is simply another way of measuring the resiliency of a graph. (There are other measures, with names like “toughness” and “integrity”, but that’s a different post.)
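For tiny graphs the isoperimetric number can be computed directly by brute force, which makes the barbell intuition concrete. The sketch below (my own, not from the paper) uses one common definition: the minimum, over vertex sets S containing at most half the vertices, of (edges leaving S) divided by |S|. The paper's bounds matter precisely because this enumeration is hopeless for large graphs.

```python
# Brute-force isoperimetric number: minimize (edges leaving S) / |S|
# over all nonempty vertex sets S with |S| <= n/2.
from itertools import combinations

def isoperimetric_number(vertices, edges):
    n = len(vertices)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for subset in combinations(vertices, k):
            S = set(subset)
            # Count edges with exactly one endpoint inside S.
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# A "barbell": two triangles joined by a single edge -- easy to fracture.
barbell = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(isoperimetric_number(range(6), barbell))   # ~1/3: one cut loses 3 vertices

# The complete graph on 6 vertices -- hard to fracture.
k6 = list(combinations(range(6), 2))
print(isoperimetric_number(range(6), k6))        # 3.0
```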
Sadly, if your graph is at all big it is generally not possible to determine the isoperimetric number precisely. So you make do with saying things like, "It is no bigger than this, and no smaller
than that.” In referring to upper and lower bounds we are saying that we have the exact isoperimetric number sandwiched in between two other numbers. Happily, our upper and lower bounds are quite
close together. So even though we can’t measure the isoperimetric numbers exactly, we can, at least, get pretty darn close.
The second sentence is:
Our methods are combinatorial and do not require a knowledge of the eigenvalue spectrum.
If you took a linear algebra class at some point in your life then you might recall the term eigenvalues. Matrices have them, and they play a central role in understanding linear transformations,
among other things.
As it happens, any graph can be represented by a matrix. The trick is simple. Number the vertices. Then make a table in which row a and column b has a 1 in it if vertex a is connected to vertex b and
has a 0 in it otherwise. We now have a table of zeros and ones that records precisely what is connected to what. All of the important information from the graph is recorded in that matrix. That
matrix has eigenvalues. In this way we can employ linear algebra as a tool for studying graphs. This line of thought leads to a branch of mathematics known as “algebraic graph theory,” and it just so
happens to be my specialty.
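The numbering trick from the previous paragraph takes only a few lines of code. A minimal sketch (the edge list and function names are my own illustration):

```python
# Build the 0/1 adjacency matrix of a graph from a numbered edge list.
def adjacency_matrix(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1
        A[v][u] = 1   # undirected graph, so the matrix is symmetric
    return A

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
for row in adjacency_matrix(4, square):
    print(row)
# [0, 1, 0, 1]
# [1, 0, 1, 0]
# [0, 1, 0, 1]
# [1, 0, 1, 0]
```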
Of course, the precise structure of the matrix depends on how you assigned the numbers. It turns out, though, that changing the ordering permutes the rows and the columns of the matrix simultaneously. And one thing you learn in linear algebra is that a simultaneous permutation of the rows and columns is a similarity transformation, which does not change the eigenvalues. So we can talk about "the eigenvalues" of a graph, even though there are many possible matrices.
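One way to see the relabeling invariance without any eigenvalue solver: renumbering the vertices replaces the matrix A by PAP^T for a permutation matrix P, which leaves every trace(A^k) unchanged, and the traces of powers determine the spectrum. A pure-Python check on a small example (the sample graph and renumbering are my own):

```python
# Traces of matrix powers are invariant under vertex relabeling.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_powers(A, kmax):
    """Return [trace(A), trace(A^2), ..., trace(A^kmax)]."""
    traces, M = [], A
    for _ in range(kmax):
        traces.append(sum(M[i][i] for i in range(len(A))))
        M = matmul(M, A)
    return traces

def adjacency(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # square plus one diagonal
relabel = {0: 2, 1: 0, 2: 3, 3: 1}                 # an arbitrary renumbering
relabeled = [(relabel[u], relabel[v]) for u, v in edges]

print(trace_powers(adjacency(4, edges), 4))
print(trace_powers(adjacency(4, relabeled), 4))    # same list
```

Note that trace(A^2) is the sum of the degrees, and trace(A^3) counts closed triangles (six times each), so these traces really do carry structural information about the graph.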
It turns out that the eigenvalues encode information about how easy it is to fracture the graph. So if you care about the isoperimetric number, and you know about the eigenvalues, then you’re good to
go! And for certain kinds of graphs you actually can say terribly clever things about the eigenvalues. For people who know some abstract algebra, this is especially true of Cayley graphs, which are
graphs that arise from the study of groups. (In this case we can employ a branch of mathematics called representation theory to work out the eigenvalues of the graph, but that is definitely a
different post.)
Sadly, it is generally quite difficult to determine the eigenvalues of a graph. So in this sentence of our abstract we are emphasizing that we are able to say something interesting about the
isoperimetric numbers without getting our hands dirty with eigenvalues. Not a bad trick.
To say our methods are “combinatorial” is to say that ultimately they are based on counting arguments. Subtle and difficult counting arguments (I say with all due modesty) but counting nonetheless.
Counting’s not so bad, right? (In fact, as bad and as dense as some of the calculations in the paper look, a lot of it, seriously, is just high school algebra. We even use the quadratic formula at
one point!)
Finally, in referring to the "eigenvalue spectrum," as opposed to simply the set of eigenvalues, we mean that we care about something more than just the eigenvalues themselves. We also want to know about how these numbers are distributed, which ones occur multiple times, the span between the largest and the smallest and so on. So, the word "spectrum" indicates that this is a set with some additional structure.
Next sentence:
We apply these bounds to random regular graphs of high degree and the Platonic graphs over the rings Z[n].
This is the sentence where we show how our abstract theorem applies in certain concrete cases. Suffice it to say that “random regular graphs” and “Platonic graphs” refer to families of graphs that,
for various reasons, have attracted considerable interest among mathematicians. Since the abstract is, in part, an advertisement for the paper, we’re basically saying that you should pay attention to
our nifty new theorem because it tells us stuff about these graphs that have been deemed interesting by mathematicians.
I won’t define the technical jargon, beyond noting that Z[n] is a gadget from number theory. So we now have combinatorics, linear algebra and number theory all represented in our paper.
In the latter case we show that these graphs are generally non-Ramanujan for composite n and we also give sharp asymptotic bounds for the isoperimetric numbers.
A Ramanujan graph is one whose eigenvalue spectrum attains a certain theoretical maximum. We have already noted that the eigenvalues encode information about how hard it is to fracture the graph.
Ramanujan graphs are basically graphs that are very hard to fracture, even though they have few edges relative to the number of vertices. If those edges represent physical connections that are
expensive to build, you can understand why you might be interested in graphs that are hard to fracture despite having few connections.
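As a toy sanity check (my own example, not from the paper): under the usual definition, a d-regular graph is Ramanujan when every adjacency eigenvalue other than ±d has magnitude at most 2√(d−1). For the complete graph on n vertices the spectrum is known in closed form, n−1 once and −1 with multiplicity n−1, so the check needs no eigenvalue solver at all:

```python
# Ramanujan check for complete graphs K_n, degree d = n - 1.
# Spectrum of K_n: {n-1 (once), -1 (n-1 times)}, so the only
# nontrivial eigenvalue magnitude is 1.
import math

def complete_graph_is_ramanujan(n):
    d = n - 1
    nontrivial = 1.0   # |-1|, the only eigenvalue besides d
    return nontrivial <= 2 * math.sqrt(d - 1)

print(all(complete_graph_is_ramanujan(n) for n in range(3, 50)))   # True
```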
Alas, it turns out that when we apply our nifty new theorem to our number theory themed graphs, they turn out not to be Ramanujan. Too bad, so sad. On the other hand, since some previous authors had
wondered about precisely this question, we have a reason for noting that we have partly resolved it.
A "composite" number is one that is not prime. It turns out that the sorts of questions we are considering are simpler in the case of prime numbers, and they had largely been resolved in previous work.
By “asymptotic bounds” we mean, roughly, this: We are actually studying infinite families of graphs in which the number of vertices gets bigger and bigger. Our bounds get increasingly accurate as the
number of vertices go up. To say that our bounds are “sharp” means essentially that no better such bound is possible. So don’t even try looking for one!
We conclude by giving bounds on the Cheeger constants of arithmetic Riemann surfaces.
For our purposes, a Riemann surface is really just any closed surface. A sphere and a torus (doughnut) are two simple examples. We have now introduced some geometry into our paper. Our surfaces are
“arithmetic” because they arise from certain considerations in number theory.
The Cheeger constant is a gadget introduced by Jeff Cheeger in 1970. It turns out that surfaces, no less than matrices, can be said to have eigenvalues, but they are hard to study directly. The
Cheeger constant is based on the geometry of the surface. It is related to the eigenvalues, and can often be approximated with reasonable accuracy. It remains a useful tool for geometers studying the
eigenvalues of surfaces. Want to know something about the reclusive eigenvalues? Just study the less publicity-shy Cheeger constant!
Later, mathematician Peter Buser came up with the idea of discretizing this process. Basically, he noted that in many circumstances our surfaces are triangulated, meaning simply that they arise from
gluing together triangles along their boundary edges. In this case we can attach a graph to the surface. Each triangle is a vertex in the graph, and two vertices are connected if they represent
triangles sharing an edge. This graph captures much of the information about the surface. Which is cool, since graphs are easier to study than surfaces. What I called the isoperimetric number is
actually a discrete version of the Cheeger constant.
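Buser's construction is easy to sketch in code. Given a triangulation as a list of vertex triples, the graph has one vertex per triangle and an edge whenever two triangles share a side. The tetrahedron example and the names below are my own illustration:

```python
# Dual graph of a triangulated surface: vertices are triangles,
# edges join triangles that share a side.
from collections import defaultdict

def dual_graph(triangles):
    side_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for side in [(a, b), (b, c), (a, c)]:
            side_to_tris[frozenset(side)].append(t)
    edges = set()
    for tris in side_to_tris.values():
        for i in range(len(tris)):
            for j in range(i + 1, len(tris)):
                edges.add((tris[i], tris[j]))
    return sorted(edges)

# The four faces of a tetrahedron: every pair of faces shares an edge,
# so the dual graph is the complete graph on 4 vertices.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(dual_graph(tetra))   # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```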
One reason our families of graphs are interesting is that they are related to arithmetic surfaces. Since our nifty new theorem provides information about these graphs, they automatically provide
information about these surfaces as well. Which brings us to the final sentence:
For a large class of these surfaces these bounds are an improvement over the known asymptotic bounds.
That one seems clear enough. We’re basically saying that while the previous known bounds were just adorable, they have now been exposed as amateurish hackwork in light of our nifty new theorem.
Append the word “mofo” to the end of that sentence to get the full effect. (I kid, of course.)
So, on the off chance that anyone has read this far, let’s bring it all home by providing a somewhat translated version of the abstract:
We prove a strong theorem about how easy it is to fracture a large class of graphs. We use methods that are simpler than the more familiar techniques. Among our large class of graphs are two
specific families that have been well-studied in the literature, and we can provide some new information about them. In particular, we can show that one of our families, sadly, does not provide
good examples of resilient graphs. Since our graphs arise naturally in the study of certain surfaces, we can say something new about them as well.
So there you go! Of course, it was a long, winding road from the initial idea (for which Dominic deserves the lion's share of the credit) to the finished paper. Time well spent, every second.
1. #1 GAZZA August 24, 2011
Congratulations, etc.
Is Peter Buser’s work the reason that 3d computer graphics are basically modelled with lots of triangles then? It would be nice to know who to thank for basically inventing the math that allows
most modern computer games to exist.
2. #2 Jesse Parrish August 24, 2011
Awwww you didn’t want to introduce binary relations and modules?
Also, cool posts. Most math undergraduates aren’t introduced to the really-cool-deep-stuff that happens around combinatorics and number theory.
3. #3 Reginald Selkirk August 24, 2011
Does this paper explain why there are PYGMIES + DWARFS?
4. #4 rob August 24, 2011
I started to read your paper and had to wikipedia “infimum.”
i learned something new today!
5. #5 Valhar2000 August 25, 2011
#1: Probably not. Working with triangles means that you can render a scene by computing a simple vector operation for each pixel, whereas with other methods, like ray-tracing, you might have to
compute much more. Modern GPUs do much more than that, of course, but that’s how it started.
What seems very likely to me is that people who work with these systems benefit from learning this kind of math.
6. #6 abb3w August 30, 2011
“…so what’s it good for?”
No, I’m not serious. (I suspect networking applications.) I am bothered by the number of people who would be likely to think that, however.
Also: really cool that you provide a translation from math-jargon into English (or at least most of the way).
7. #7 Bhaskar September 3, 2011
Your explanation of the abstract is simply superb. Great post.
8. #8 Simon November 2, 2011
Fantastic post I very much enjoyed it, keep up the good work.
On the Role of Mathematics and Mathematical Knowledge in the Invention of Vannevar Bush's Early Analog Computers
Winter 1996 (vol. 18 no. 4)
pp. 49-59
The technological, professional, and intellectual context out of which the development of the continuous integraph or product integraph—as the immediate forerunner of Vannevar Bush's differential
analyzer—evolved is outlined. In particular, the affinity between transmission line research and teaching at MIT's electrical engineering department under Bush's guidance, on the one hand, and the
creation of the product integraph for evaluating integrals, which resulted from the appropriate differential equations of the transmission problems, on the other hand, is detailed. I emphasize Bush's
perception of promoting engineering by easing the applied mathematics in this field as it appeared in his contribution to the development of operational circuit analysis as an appropriate engineering
mathematics as well as in creating analog machinery that was inspired by the formulation of transmission line problems in terms of those very operational methods after Oliver Heaviside.
Susann Puchta, "On the Role of Mathematics and Mathematical Knowledge in the Invention of Vannevar Bush's Early Analog Computers," IEEE Annals of the History of Computing, vol. 18, no. 4, pp. 49-59,
Winter 1996, doi:10.1109/85.539916
Skillman Trigonometry Tutor
...I also took two classes on partial differential equations in graduate school. Differential equations are directly related to calculus (integration vs. differentiation). Viewing it this way
helps students understand what they are trying to accomplish when solving these problems. I took a class on linear algebra in college as part of the requirements for my BS in mathematics.
12 Subjects: including trigonometry, calculus, algebra 2, geometry
I have over 15 years of experience teaching and tutoring physics, and have a PhD. I am very patient and can teach at all levels - from high school through college. I believe in active learning -
using many examples, pictures, and simulations to illustrate key concepts while making it fun to learn as well.
15 Subjects: including trigonometry, calculus, geometry, algebra 1
...I have a Master's degree in Chemistry from Columbia University and an undergraduate degree in Biophysical Chemistry with a minor in Genetics from Dartmouth College. I love math and the hard
sciences for the ability to explain natural occurrences and break them down into clear rules and equations...
39 Subjects: including trigonometry, chemistry, writing, reading
...I remember I got the chance to instruct a small group of children on the names of all the parts of the rocket and a basic explanation of how they functioned. It was amazing to see how excited
these kids were to learn about science and now, reflecting on this experience, I see the impact I can ha...
16 Subjects: including trigonometry, Spanish, calculus, physics
...I understand their social skill deficits, so I have techniques to help them understand subtle cues. I can rephrase questions that are difficult in a way that helps them relate to the topics. I
am a special education teacher.
43 Subjects: including trigonometry, reading, writing, English
Nutley Trigonometry Tutor
Find a Nutley Trigonometry Tutor
...I also encourage you to ask as many questions as possible, no matter how silly, so that you can get your own head around the problem. I then find it useful to ask questions to test your
understanding, or perhaps get you to explain the concept to someone else. I also teach solid, repeatable methods for solving problems in physics, which sorts out the first type of issue I
described above.
8 Subjects: including trigonometry, physics, geometry, algebra 1
...This includes SHSAT, PSAT, and SAT Math! I graduated from college with a bachelor's degree in Biological Sciences. Moreover, I relate to many younger students.
29 Subjects: including trigonometry, chemistry, reading, English
...I provide TOEFL/IELTS preparation and edit professional and academic writing in conjunction with tutorials in PowerPoint and Word as needed. Finally, I can assist you with public speaking for
lectures, presentations, peer reviews and oral dissertation defenses, as well as accent modification for...
39 Subjects: including trigonometry, reading, English, Spanish
...I explain the ideas of calculus in such a way that you actually understand them. I highlight the key points (most teachers just lecture and then you are stuck looking at your notes wondering
"how/why did the professor do that?" or "what the ____ did the teacher do there?"). When I tutor you in c...
23 Subjects: including trigonometry, calculus, geometry, ASVAB
...The math is the same, but it is easier to do now. The difficulty of math is learning it, not using it once you know how it works. This is how all math works!
12 Subjects: including trigonometry, physics, MCAT, calculus
The Semantics of Destructive Lisp
- See Gordon and Pitts, 1997
"... ing and using (L-unif) we have that any two lambdas that are everywhere undefined are equivalent. The classic example of an everywhere undefined lambda is Bot 4 = x:app(x:app(x; x); x:app(x; x))
In f , another example of an everywhere undefined lambda is the "do-forever" loop. Do 4 = f:Yv(Dox ..."
Cited by 13 (1 self)
ing and using (L-unif) we have that any two lambdas that are everywhere undefined are equivalent. The classic example of an everywhere undefined lambda is Bot 4 = x:app(x:app(x; x); x:app(x; x)) In f
, another example of an everywhere undefined lambda is the "do-forever" loop. Do 4 = f:Yv(Dox:Do(f(x)) By the recursive definition, for any lambda ' and value v Do(')(v) \Gamma!Ø Do(')('(v))
Reasoning about Functions with Effects 21 In f , either '(v) \Gamma!Ø v 0 for some v 0 or '(v) is undefined. In the latter case the computation is undefined since the redex is undefined. In the
former case, the computation reduces to Do(')(v 0 ) and on we go. The argument for undefinedness of Bot relies only on the (app) rule and will be valid in any uniform semantics. In contrast the
argument for undefinedness of Do(') relies on the (fred.isdef) property of f . Functional Streams We now illustrate the use of (L-unif-sim) computation to reason about streams represented as
functions ...
, 1990
"... objects Abstract objects exhibit the non-inheritance aspects of object-oriented programming. An abstract object is a function with local store. Abstract objects provide a means of encapsulating
features of a structure and controlling access to that structure. The idea is that the local store can on ..."
Cited by 11 (6 self)
objects Abstract objects exhibit the non-inheritance aspects of object-oriented programming. An abstract object is a function with local store. Abstract objects provide a means of encapsulating
features of a structure and controlling access to that structure. The idea is that the local store can only be changed by sending a message to the object. The operations on the encapsulated structure
are determined by the messages accepted by the object. We illustrate these ideas for the special case of accumulators. An accumulator object accumulates a sequence of the things sent to it (via a ⟨put; x⟩ message) and responds to a ⟨get⟩ message by returning the sequence collected. If mkac(y) creates an accumulator object with initial contents the elements of y, then it must satisfy the following three laws: Specification (Accumulator behavior): (put) let{a := mkac(y)}seq(a(⟨put; x⟩); e) ≃ let{a := mkac(append (y; cons (x; Nil)))}e (get) let{a := mkac(y)}let{z := a(⟨get⟩)}e ≃ let{a := mkac(...
- In Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM , 1995
"... We investigate the soundness of a specialisation technique due to Scherlis, expression procedures, in the context of a higher-order non-strict functional language. An expression procedure is a
generalised procedure construct providing a contextually specialised definition. The addition of expression ..."
Cited by 8 (2 self)
We investigate the soundness of a specialisation technique due to Scherlis, expression procedures, in the context of a higher-order non-strict functional language. An expression procedure is a
generalised procedure construct providing a contextually specialised definition. The addition of expression procedures thereby facilitates the manipulation and specialisation of programs. In the
expression procedure approach, programs thus generalised are transformed by means of three key transformation rules: composition, application and abstraction. Arguably, the most notable, yet most
overlooked feature of the expression procedure approach to transformation, is that the transformation rules always preserve the meaning of programs. This is in contrast to the unfold-fold
transformation rules of Burstall and Darlington. In Scherlis' thesis, this distinguishing property was shown to hold for a strict first-order language. Rules for call-by-name evaluation order were
stated but not proved correct....
- In ACM/IFIP Symposium on Partial Evaluation and Semantics-based Program Manipulation , 1991
"... In this paper we report progress in the development of methods for reasoning about the equivalence of objects with memory, and the use of these methods to describe sound operations on objects in
terms of formal program transformations. We also formalize three different aspects of objects: their spec ..."
Cited by 6 (5 self)
In this paper we report progress in the development of methods for reasoning about the equivalence of objects with memory, and the use of these methods to describe sound operations on objects in
terms of formal program transformations. We also formalize three different aspects of objects: their specification, their behavior, and their canonical representation. Formal connections among these
aspects provide methods for optimization and reasoning about systems of objects. To illustrate these ideas we give a formal derivation of an optimized specialized window editor from generic
specifications of its components. A new result in this paper enables one to make use of symbolic evaluation (with respect to a set of constraints) to establish the equivalence of objects. This form
of evaluation is not only mechanizable, it is also generalizes the conditions under which partial evaluation usually takes place. 1 Overview In [19] a general challenge for partial evaluation
technology was presented ...
, 1996
"... The nu-calculus of Pitts and Stark is a typed lambda-calculus, extended with state in the form of dynamically-generated names. These names can be created locally, passed around, and compared
with one another. Through the interaction between names and functions, the language can capture notions of sc ..."
Cited by 6 (0 self)
The nu-calculus of Pitts and Stark is a typed lambda-calculus, extended with state in the form of dynamically-generated names. These names can be created locally, passed around, and compared with one
another. Through the interaction between names and functions, the language can capture notions of scope, visibility and sharing. Originally motivated by the study of references in Standard ML, the
nu-calculus has connections to other kinds of local declaration, and to the mobile processes of the π-calculus. This
- In Proceedings of the Australasian Theory Symposium, CATS ’96 , 1996
"... In this paper we describe our progress towards an operational implementation of a modern programming logic. The logic is inspired by the variable type systems of Feferman, and is designed for
reasoning about imperative functional programs. The logic goes well beyond traditional programming logics, s ..."
Cited by 5 (2 self)
In this paper we describe our progress towards an operational implementation of a modern programming logic. The logic is inspired by the variable type systems of Feferman, and is designed for
reasoning about imperative functional programs. The logic goes well beyond traditional programming logics, such as Hoare's logic and Dynamic logic in its expressibility, yet is less problematic to
encode into higher-order logics. The main focus of the paper is to present an axiomatization of the base first-order theory, and an implementation of the logic into the generic proof assistant
Isabelle. We also indicate the directions of our current research to blend these two advances into an operational whole. Keywords semantics, logic, derivation, verification, specification, theorem
proving. 1 Introduction In this paper we continue the investigations into a Variable Typed Logic of Effects that began in [20, 11, 21, 23, 12]. In particular we present an axiomatization of the base
first-order theory...
- Theoretical Computer Science , 1996
"... In this paper we describe some of our progress towards an operational implementation of a modern programming logic. The logic is inspired by the variable type systems of Feferman, and is
designed for reasoning about imperative functional programs. The logic goes well beyond traditional programming l ..."
Cited by 4 (0 self)
In this paper we describe some of our progress towards an operational implementation of a modern programming logic. The logic is inspired by the variable type systems of Feferman, and is designed for
reasoning about imperative functional programs. The logic goes well beyond traditional programming logics, such as Hoare's logic and Dynamic logic in its expressibility, yet is less problematic to
encode into higher order logics. The main focus of the paper is to present an axiomatization of the base first order theory. 1 Introduction VTLoE [34, 23, 35, 37, 24] is a logic for reasoning about
imperative functional programs, inspired by the variable type systems of Feferman. These systems are two sorted theories of operations and classes initially developed for the formalization of
constructive mathematics [12, 13] and later applied to the study of purely functional languages [14, 15]. VTLoE builds upon recent advances in the semantics of languages with effects [16, 19, 28, 32,
33] and go...
- ACM Sigsam Bulletin , 1993
"... INTRODUCTION Two early major applications of Lisp are the Reduce and Macsyma symbolic algebra systems. Due to the complexity of algebraic expressions, the Lisp list constructed from "cons" cells
provides an elegant and efficient representation for the manipulations that must be performed. Therefore ..."
Cited by 1 (1 self)
INTRODUCTION Two early major applications of Lisp are the Reduce and Macsyma symbolic algebra systems. Due to the complexity of algebraic expressions, the Lisp list constructed from "cons" cells
provides an elegant and efficient representation for the manipulations that must be performed. Therefore, when Richard Gabriel selected a set of programs for benchmarking Lisp systems [Gabriel85], he
included the FRPOLY program, which is a fragment of the Macsyma system that adds and multiplies polynomials in several variables. Polynomial manipulations are the heart of any symbolic algebra
system, and as polynomial multiplication is both common and expensive, the FRPOLY benchmark raises the simple polynomial r=x+y+z+1 to the powers 2, 5, 10 and 15 by successive squaring. As the size of
the answer explodes exponentially with larger powers, this expansion of (x+y+z+1) 15 provides an interesting benchmark. "
, 1994
"... We present a program to copy bytes, together with a formal specification and a proof that the code satisfies the specification. The program, which is in the critical path for a network
implementation, has been tuned carefully over a period of time; the proof covers the entire program, and is easily ..."
We present a program to copy bytes, together with a formal specification and a proof that the code satisfies the specification. The program, which is in the critical path for a network
implementation, has been tuned carefully over a period of time; the proof covers the entire program, and is easily updated if the program is modified. The program is written in the Standard ML
programming language and was produced as part of the Fox Project implementation of the TCP/IP protocol suite. The author's electronic mail address is: esb@cs.cmu.edu This research was sponsored by
the Defense Advanced Research Projects Agency, CSTO, under the title "The Fox Project: Advanced Development of Systems Software", ARPA Order No. 8313, issued by ESD/AVS under Contract No.
F19628-91-C-0168. The views and conclusions contained in this document are those of the author and should not be interpreted as representing official policies, either expressed or implied, of the
Defense Advanced Research Projects Agency...
, 1999
"... LISP has survived for 21 years because it is an approximate local optimum in the space of programming languages. However, it has accumulated some barnacles that should be scraped off, and some long-standing opportunities for improvement have been neglected. It would benefit from some co-operative ma ..."
LISP has survived for 21 years because it is an approximate local optimum in the space of programming languages. However, it has accumulated some barnacles that should be scraped off, and some long-standing opportunities for improvement have been neglected. It would benefit from some co-operative maintenance especially in creating and maintaining program libraries. Computer checked proofs of
program correctness are now possible for pure LISP and some extensions, but more theory and some smoothing of the language itself are required before we can take full advantage of LISP's mathematical
basis. 1999 note: This article was included in the 1980 Lisp conference held at Stanford. Since it almost entirely corresponds to my present opinions, I should have asked to have it reprinted in the
1998 Lisp users conference proceedings at which I gave a talk with the same title. 1 1 Introduction On LISP's approximate 21st anniversary, no doubt something could be said about coming of ag...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1112187&sort=cite&start=10","timestamp":"2014-04-21T12:01:20Z","content_type":null,"content_length":"38878","record_id":"<urn:uuid:e47a1c1c-f97c-4520-87cd-2a4f2f2d9ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arithmetic {base}
These unary and binary operators perform arithmetic on numeric or complex vectors (or objects which can be coerced to them).
+ x
- x
x + y
x - y
x * y
x / y
x ^ y
x %% y
x %/% y
x, y
numeric or complex vectors or objects which can be coerced to such, or other objects for which methods have been written.
The unary and binary arithmetic operators are generic functions: methods can be written for them individually or via the Ops group generic function. (See Ops for how dispatch is computed.)
If applied to arrays the result will be an array if this is sensible (for example it will not if the recycling rule has been invoked).
Logical vectors will be coerced to integer or numeric vectors, FALSE having value zero and TRUE having value one.
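As a minimal sketch of the logical-coercion rule just stated (expected values shown in comments):

```r
TRUE + TRUE                # 2: logicals coerce to integer, TRUE -> 1, FALSE -> 0
FALSE * 10                 # 0
sum(c(TRUE, FALSE, TRUE))  # 2: counting TRUEs via coercion
```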
1 ^ y and y ^ 0 are 1, always. x ^ y should also give the proper limit result when either argument is infinite (i.e., +- Inf).
Objects such as arrays or time-series can be operated on this way provided they are conformable.
For double arguments, %% can be subject to catastrophic loss of accuracy if x is much larger than y, and a warning is given if this is detected.
x %% y and x %/% y can be used for non-integer y, e.g. 1 %/% 0.2, but the results are subject to representation error and so may be platform-dependent. Because the IEC 60559 representation of 0.2 is a binary fraction slightly larger than 0.2, the answer to 1 %/% 0.2 should be 4 but most platforms give 5.
Users are sometimes surprised by the value returned, for example why (-8)^(1/3) is NaN. For double inputs, R makes use of IEC 60559 arithmetic on all platforms, together with the C system function
pow for the ^ operator. The relevant standards define the result in many corner cases. In particular, the result in the example above is mandated by the C99 standard. On many Unix-alike systems the
command man pow gives details of the values in a large number of corner cases.
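A short illustration of the corner case above; the sign/abs workaround is a common idiom, not part of this page:

```r
(-8)^(1/3)                # NaN: pow() has no real result for a negative
                          # base with a fractional exponent (C99 rules)
sign(-8) * abs(-8)^(1/3)  # -2: real cube root computed explicitly
```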
Arithmetic on type double in R is supposed to be done in ‘round to nearest, ties to even’ mode, but this does depend on the compiler and FPU being set up correctly.
Unary + returns x unchanged (without coercing logical vectors).
Unary - returns a numeric or complex vector with the same attributes as x: logical x are coerced to integer. The binary operators return vectors containing the result of the element by element
operations. The elements of shorter vectors are recycled as necessary (with a warning when they are recycled only fractionally). The operators are + for addition, - for subtraction, * for
multiplication, / for division and ^ for exponentiation.
%% indicates x mod y and %/% indicates integer division. It is guaranteed that x == (x %% y) + y * ( x %/% y ) (up to rounding error) unless y == 0 where the result of %% is NA_integer_ or NaN
(depending on the typeof of the arguments).
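A brief sketch of the identity above with sample values (note that the result of %% takes the sign of y):

```r
x <- c(7, -7, 7.5); y <- 3
x %% y                           # 1  2  1.5
x %/% y                          # 2 -3  2
x == (x %% y) + y * (x %/% y)    # TRUE TRUE TRUE
```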
If either argument is complex the result will be complex, otherwise if one or both arguments are numeric, the result will be numeric. If both arguments are of type integer, the type of the result of
/ and ^ is numeric and for the other operators it is integer (with overflow, which occurs at +/- (2^31 - 1), returned as NA_integer_ with a warning).
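The integer overflow and type-promotion rules above can be seen directly (a sketch; the exact warning wording may differ between R versions):

```r
.Machine$integer.max   # 2147483647
2147483647L + 1L       # NA, with a warning about integer overflow
2147483647L + 1        # 2147483648: one operand is double, so the
                       # result is numeric and no overflow occurs
```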
The rules for determining the attributes of the result are rather complicated. Most attributes are taken from the longer argument. Names will be copied from the first if it is the same length as the
answer, otherwise from the second if that is. If the arguments are the same length, attributes will be copied from both, with those of the first argument taking precedence when the same attribute is
present in both arguments. For time series, these operations are allowed only if the series are compatible, when the class and tsp attribute of whichever is a time series (the same, if both are) are
used. For arrays (and an array result) the dimensions and dimnames are taken from first argument if it is an array, otherwise the second.
S4 methods
These operators are members of the S4 Arith group generic, and so methods can be written for them individually as well as for the group generic (or the Ops group generic), with arguments c(e1, e2)
(with e2 missing for a unary operator).
Implementation limits
R is dependent on OS services (and they on FPUs) for floating-point arithmetic. On all current R platforms IEC 60559 (also known as IEEE 754) arithmetic is used, but some things in those standards
are optional. In particular, the support for denormal numbers (those outside the range given by .Machine) may differ between platforms and even between calculations on a single platform.
Another potential issue is signed zeroes: on IEC 60559 platforms there are two zeroes with internal representations differing by sign. Where possible R treats them as the same, but for example direct
output from C code often does not do so and may output -0.0 (and on Windows whether it does so or not depends on the version of Windows). One place in R where the difference might be seen is in
division by zero: 1/x is Inf or -Inf depending on the sign of zero x.
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
D. Goldberg (1991) What Every Computer Scientist Should Know about Floating-Point Arithmetic ACM Computing Surveys, 23(1).
Postscript version available at http://www.validlab.com/goldberg/paper.ps Extended PDF version at http://www.validlab.com/goldberg/paper.pdf
** is translated in the parser to ^, but this was undocumented for many years. It appears as an index entry in Becker et al (1988), pointing to the help for Deprecated but is not actually mentioned
on that page. Even though it had been deprecated in S for 20 years, it was still accepted in R in 2008.
See Also
sqrt for miscellaneous and Special for special mathematical functions.
Syntax for operator precedence.
%*% for matrix multiplication.
x <- -1:12
x + 1
2 * x + 3
x %% 2 #-- is periodic
x %/% 5
Documentation reproduced from R 3.0.2. License: GPL-2.
|
{"url":"http://www.inside-r.org/r-doc/base/Arithmetic","timestamp":"2014-04-19T14:41:52Z","content_type":null,"content_length":"28360","record_id":"<urn:uuid:9334e24f-6410-4ddf-9c69-298c61cf4926>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
dailysudoku.com :: View topic - Overlapping X-Wing Solutions
Discussion of Daily Sudoku puzzles
Author Message
Asellus Posted: Tue Dec 11, 2007 9:54 am Post subject: Overlapping X-Wing Solutions
In solving yesterday's (10-Dec-2007) One-Trick Pony from sudocue.net, I came across the following:
Joined: 05
Jun 2007 +------------------+--------------------+------------------+
Posts: 865 | 28 28 9 | 3 7 6 | 5 1 4 |
Location: | 6 5 7 | 9 1 4 | 8 2 3 |
Sonoma | 1 3 4 | 5 8 2 | 6 79 79 |
County, CA, +------------------+--------------------+------------------+
USA | 3 9 28 | 6 4 5 | 1 78 27 |
| 7 4 6 | 1 a23 a38 | 9 58 25 |
| 5 28 1 | 28 9 7 | 3 4 6 |
| 9 7 3 | 248 25 18 | 24 6 15 |
| 248 b16 258 | 24 b256 19 | 7 3 159 |
| 24 B16 25 | 7 ab-23-56 A13-9 | 24 59 8 |
There is an X-Wing on <3> marked "aA" and an X-Wing on <6> marked "bB" and they overlap in a single cell (r9c5). There is also a strongly linked pair of <1>s in r9, one <1> in each of
the two X-Wings, the cells marked "A" and "B". I realized that this led to the elimination of <2> and <5> in r9c5 and <9> in r9c6, as I will explain below.
I hadn't encountered this pattern before and it made me think about the general case, which I describe below. No doubt all this has been described elsewhere. Yet, since I hadn't
encountered it, I thought I'd post it. If I've made any errors, I'm certain someone will post corrections.
Overlapping X-Wing Solutions Involving a Third Digit Strong Link
First, the case of a single-cell overlap. Then, the much less interesting case of a two-cell overlap.
ONE-CELL OVERLAP:
Two X-Wings, one on "y" and one on "z", overlap on a single cell. A third digit, "x", is strongly linked between two of these cells, one in the "y" X-Wing and one in the "z" X-Wing.
(These "x" cells can be remote and the link can be strongly inferential, as in the pincer ends of a wing or chain.) Each X-Wing has two possible solutions, which I will denote with "Y"
and "y" and with "Z" and "z", respectively. In each X-Wing, there is a diagonal solution that includes the overlap cell, and a diagonal solution that excludes the overlap cell. There
are four possible configurations, one of which is trivial, based on the locations of the strongly linked x's:
POSSIBILITY 1: Both linked x's occur in cells on the diagonals that exclude the overlap cell. Result: The overlap cell and the "x" cells become bivalues ({yx}, {zx}, and {yz}) with all
other digits in these three cells eliminated.
Xy Y The diagonals that exclude the overlap are y-y and Z-Z.
Here, one x is in an y cell and the other in a Z cell.
z xZ Polarity ("color") is induced as shown by the capitalization.
The Xy, xZ and Yz cells become bivalues; all other candidates
Z Yz y are eliminated from these three cells.
POSSIBILITY 2: One linked "x" is in a diagonal excluding the overlap cell and the other "x" in a diagonal including the overlap cell. Result: The diagonal that contains neither the
overlap cell nor one of the linked x's is True and those digits can be placed.
xy Y The y diagonal contains one of the linked x's.
The Y diagonal contains the overlap cell.
xz Z The z diagonal contains one of the linked x's and the overlap cell.
The Z diagonal contains no linked x and no overlap cell.
Z Yz y The two Z values are True and can be placed in those two cells.
y Y Only the y diagonal contains neither a linked x
nor the overlap cell.
z xZ The two y values are True and can be placed in those two cells.
Z xYz y
POSSIBILITY 3: The linked x's are each diagonally opposite the overlap cell. No eliminations or placements result from this configuration.
a xA The "x" cells are both diagonally opposite the overlap cell.
No eliminations or placements result from this configuration.
xb B
B Ab a
POSSIBILITY 4: The linked x's are in the overlap cell and a cell diagonally opposite it. The diagonal opposite that of the "x" cells is True and can be placed. (This is the trivial
possibility since it really involves only a single X-Wing.)
TWO-CELL OVERLAP:
This is not so interesting. The two overlapping cells are necessarily a locked pair. So, any linked "x" pair can only occur in the non-overlapping cells of the X-Wings. If the linked
x's do not share a row or column, then the diagonals of each X-Wing without one of these x's are true and can be placed. (Since these cells are never peers, the x's would have to be
linked by a wing or coloring or some other implication chain.) If the linked x's share a row or column, no eliminations or placements result.
Myth Jellies Posted: Mon Dec 17, 2007 9:22 am Post subject:
I don't wish to stifle your creativity, but I note that your example (as well as theoretical possibility 1) works out to be an elaborate way to find a 136-hidden triple in row 9. Some
of your other theoretical setups might be more interesting though.
Joined: 27
Jun 2006
Posts: 64
Asellus Posted: Mon Dec 17, 2007 12:08 pm Post subject:
Well, I could say, "Who wants to find a Hidden Triple in the same old way all the time, anyway?"
Yes, it is (now) obvious that "Possibility 1" is necessarily a Locked Triple when the "x" cells and the overlap cell are collinear. However, I believe Possibility 1 (might?) still
Joined: 05 have value when the cells are not colinear:
Jun 2007
Posts: 865 Code:
Sonoma Example:
County, CA,
USA Xy Y The diagonals that exclude the overlap are y-y and Z-Z.
Here, one x is in an y cell and the other in a Z cell.
z Z Polarity ("color") is induced as shown by the capitalization.
The Xy, xZ and Yz cells become bivalues; all other candidates
xZ Yz y are eliminated from these three cells.
I don't believe that these cells are inherently a Hidden Triple provided they don't share a box. Since the "x" pair would be remote in that case, the strong link would need to be
induced externally, by a wing or chain for instance.
|
{"url":"http://www.dailysudoku.com/sudoku/forums/viewtopic.php?p=8897","timestamp":"2014-04-16T16:10:23Z","content_type":null,"content_length":"39184","record_id":"<urn:uuid:ca09b082-b0bd-4c47-ac14-8dcf5691cc9c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Performing a for-loop on a retrieved array
up vote 0 down vote favorite
Continuing on my attempt at an app and I'm stuck again on the very next object.
Last time I asked about my Performer.java object which receives an array of 5 integers.
This time I am trying to manipulate the data through my Calculations object.
(Please correct me if I am labelling the wrong things objects)
import java.util.List;
public class Calculations {
public static int performCalcs() {
Performer getInput = new Performer();
List<Integer> arrayOne = getInput.getUnit();
for(int i=0 ; i<=arrayOne.size() ; i++) {
int sumTotal = 0;
return sumTotal;
I am literally stuck. I know it doesn't make any sense, but what I am trying to do is add up all the numbers in my array and also calculate the average of the numbers in my array.
I think I did the "i" correctly, and I was able to retrieve my array into arrayOne, but don't know how to implement the "i" and make it perform either addition or multiplication until i<=
Thanks guys!
java list for-loop
Don't rush it. Go through some tutorials, and read some code. – ᴋᴇʏsᴇʀ May 7 '13 at 8:22
Bingo, go read a book. It will make life so much easier and programming a lot more fun. – Adam May 7 '13 at 8:24
Another thing: i<=arrayOne.size() you might want to replace <= with < – Maroun Maroun May 7 '13 at 8:25
An important note: You don't use an array, you use a list. In Java -and I think also in C++ and in .Net- it is a complete different object with different methods and behavior. – reporter May 7 '13
at 8:30
Not sure this deserves the -1s, the question is basic yes, but well formatted and clear – Richard Tingle May 7 '13 at 8:33
9 Answers
for(int i=0 ; i<=arrayOne.size() ; i++){
int sumTotal = 0;
return sumTotal;
What you are doing in the above code is that you are entering the loop, and after the first iteration you return sumTotal (which will be 0)
up vote
2 down This is what you want to do
accepted int sumTotal = 0; // declare the variable you want to sum into outside the loop. If you declare it inside the loop it is re-initialized on every iteration, so the running total is lost
for(int i=0 ; i<arrayOne.size() ; i++){
sumTotal += arrayOne.get(i); // add the value that is at the current index each iteration
}
//After the loop, sumTotal will have the total added value of the integers in the `List`. Now you can continue from here
<= should be < – Maroun Maroun May 7 '13 at 8:26
Thanks for spoting that – John Snow May 7 '13 at 8:28
Thanks for the simple explanations. I gotta go back and find reviews for for loops instead of asking such broad questions. Sorry everyone, and thanks for the help! – LearnIT May 7 '13 at 8:41
add comment
• You should move sumTotal outside the for loop. Otherwise the sumTotal variable is re-initialized every time.
• You should loop till i < arrayOne.size(). O.w. you'll get an IndexOutOfBoundsException
• You do retrieve the i-th number in this way: arrayOne.get(i)
• Then compute the average value by dividing sumTotal by the number of values in the arrayOne list.
Here's the code:
import java.util.List;
public class Calculations {
public static double performCalcs() {
Performer getInput = new Performer();
List<Integer> arrayOne = getInput.getUnit();
up vote 1 down vote int sumTotal = 0;
// Sum the numbers in the array
// Repeat until i < arrayOne.size()
for (int i = 0; i < arrayOne.size(); i++) {
int num = arrayOne.get(i);
sumTotal += num;
// Check that the array is not empty.
// If it is not, compute the average, o.w. return 0.
double avg = arrayOne.isEmpty() ? .0 : ((double) sumTotal / arrayOne.size());
System.out.println("average: " + avg);
return avg;
Return. :) So much for the race! – renz May 7 '13 at 8:22
@renz: Fixed! :) – user278064 May 7 '13 at 8:24
1 Another thing, <= is wrong, should be < – Maroun Maroun May 7 '13 at 8:29
@MarounMaroun: Thanks Maroun Maroun. I see now. :) – user278064 May 7 '13 at 8:32
add comment
The best way is to use for-each loop:
int sumTotal = 0;
up vote 0 down vote for(Integer val: arrayOne) {
sumTotal += val.intValue();
}
add comment
To sum all the numbers in any Iterable you need to do this:
//Declare the variable to hold the total outside the loop.
int total = 0;
//Loop over the integers to sum.
for (Integer i : integers)
//Add the current integer to the total.
total += i;
up vote 0 down vote
//The total will now equal the sum of all the integers.
System.out.println("Total: " + total);
System.out.println("Average: " + ((double) total / integers.size()));
The two important bits that your code was missing are:
• the total variable must be declared outside the loop else it is simple reinitialized with each iteration.
• you need to add the integers values together.
add comment
I guess it should look like this
public class Calculations {
public static int performCalcs(){
Performer getInput = new Performer();
List<Integer> arrayOne = getInput.getUnit();
up vote 0 down vote for(int i=0 ; i<=arrayOne.size() ; i++){
int sumTotal += i;
return sumTotal;
for(int i=0 ; i<=arrayOne.size() ; i++) something is wrong here.. – Maroun Maroun May 7 '13 at 8:27
add comment
The issue here is that you are using "return" inside your loop. The i++ part is fine. Use the return outside of the loop. All of the answers above show this.
up vote 0 down vote If you use "return" in your loop, it will only iterate once.
add comment
Move int sumTotal = 0; out of the loop (above) and return ... also (below). Inside the loop put sumTotal += arrayOne.get(i). You are learning the hard way, aren't you?
up vote 0 down vote
add comment
you should pull the sum init and the return out of your loop
public static int performCalcs(){
Performer getInput = new Performer();
List<Integer> arrayOne = getInput.getUnit();
int sumTotal = 0;
up vote 0 down vote
for(int i=0 ; i < arrayOne.size(); i++){
    sumTotal += arrayOne.get(i);
}
return sumTotal;
2 i<=arrayOne.size() - replace <= with < – Maroun Maroun May 7 '13 at 8:28
thx edited that – Marco Forberg May 7 '13 at 8:31
add comment
Note how sumTotal is returned within the loop, meaning that on the very first time through the loop it returns the first value. You want it to say
import java.util.List;

public class Calculations {
    public static int performCalcs(){
        Performer getInput = new Performer();
        List<Integer> arrayOne = getInput.getUnit();
        int sumTotal = 0;
        for(int i=0 ; i<arrayOne.size() ; i++){
            sumTotal += arrayOne.get(i);
        }
        return sumTotal;
    }
}
The way returns work in java (and most programming languages) is that there can be many in a function, and at the first one it finds the function ends and returns whatever's after the return, for example
public double someFunction(int a){
    if (a!=5){
        return a;
    }
    return 12;
    return 999; //this return is never used under any circumstances because the function has already returned, good IDEs won't even allow it
}
Two returns and it uses the first one it finds
Edit: As Maroun Maroun correctly states, the for loop should be

for(int i=0 ; i<arrayOne.size() ; i++){

not

for(int i=0 ; i<=arrayOne.size() ; i++){
This is because java array indexes start at 0, so an array with 5 entries will have indexes 0,1,2,3,4
1 <= should be < – Maroun Maroun May 7 '13 at 8:26
Quite correct, fixed – Richard Tingle May 7 '13 at 8:26
|
{"url":"http://stackoverflow.com/questions/16414379/performing-a-for-loop-on-a-retrieved-array/16414445","timestamp":"2014-04-18T17:44:00Z","content_type":null,"content_length":"111560","record_id":"<urn:uuid:88138a9c-0691-4799-a630-0d71ce01cecf>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
587: Crime Scene
Explain xkcd: It's 'cause you're dumb.
Crime Scene
Title text: I think I see a Mandelbrot set! No, that's just blood spatters. Golly.
Explanation
Mathnet was a segment on the children's television show "Square One Television", where police mathematicians solved crimes and other mysteries by math. This comic plays on that by implying, when the
show was cancelled, the Mathnet department of the Los Angeles Police Department (LAPD) was shut down, forcing the mathematicians to become regular detectives.
Here, George Frankly, one of the two lead detectives on the show, tries to glean some sort of mathematical meaning out of the murders, saying that the number of bodies, two, is the third Fibonacci number. The Fibonacci sequence starts with 0 and 1, and each subsequent number is the sum of the previous two, looking like this: 0, 1, 1, 2, 3, 5, 8... (Sometimes the sequence is considered to start with 1 and 1, or 0 is considered the zeroth term in the sequence; either of which would explain why Frankly calls 2 the third number rather than the fourth.)
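Generating the sequence as described is straightforward; a minimal sketch (Python, purely illustrative):

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting from 0 and 1."""
    sequence = []
    a, b = 0, 1
    for _ in range(count):
        sequence.append(a)
        a, b = b, a + b  # each number is the sum of the previous two
    return sequence

print(fibonacci(7))  # [0, 1, 1, 2, 3, 5, 8]
```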
The title text goes on to extrapolate, saying that George saw a Mandelbrot set in the blood spatters. The Mandelbrot set is a famous fractal, produced by a simple iterated formula, whose intricate boundary can look somewhat like blood spatters.
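For the curious, membership in the Mandelbrot set comes from iterating z → z² + c from z = 0 and checking whether the sequence stays bounded; a minimal, illustrative check:

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to lie in the Mandelbrot set."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2, the sequence escapes to infinity
            return False
    return True

print(in_mandelbrot(0))   # True: z stays at 0 forever
print(in_mandelbrot(1))   # False: 0, 1, 2, 5, ... escapes
print(in_mandelbrot(-1))  # True: z oscillates between -1 and 0
```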
Transcript
[A crime scene is surrounded in tape. A large black pool is on the ground, with splashes around it, and some sort of tool. Two people are standing outside the tape.]
Policeman: Looks like a murder-suicide.
George: Any interesting mathematical patterns?
Policeman: No, George, just two dead bodies and a lot of blood.
George: Two... that's the third Fibonacci number!
Policeman: Not now, George.
When Mathnet shut down, the officers had trouble reintegrating into the regular L.A.P.D.
That officer is a fool. I'd say it'd be much more likely to relate to the first prime number (assuming you ignore 1, as apparently you're supposed to) than the third Fibonacci one, barring any prior
incidents that might or might not be attributed to the same killer. Of course, we'd perhaps have to wait until three crime-scenes later to work out which of these patterns our Malevolently
Mathematical Mastermind of Murder has memetically manipulated for us... Holy Torii, Batman! (And no wonder the policemen like both donuts and coffee cups... They're the same...) 178.105.100.250
00:20, 24 May 2013 (UTC)
" a man (presumably a former Mathnet member" - Not just anyone, the officer calls him George. George Frankly was the main character on the show. Just putting it out there. --Alcatraz ii (talk) 22:43,
6 June 2013 (UTC)
You know, this being a wiki and all, you could have added that yourself. Never mind, I've done it for you. 71.201.53.130 20:42, 20 August 2013 (UTC)
The original Fibonacci problem was formulated about the count of multiplying pairs of rabbits, starting with one pair. So 2 is definitely the 3rd number, not 4th, in that formulation.
22:21, 31 January 2014 (UTC)
|
{"url":"http://www.explainxkcd.com/wiki/index.php?title=587:_Crime_Scene","timestamp":"2014-04-18T03:20:36Z","content_type":null,"content_length":"29983","record_id":"<urn:uuid:3014ae55-03d7-424c-b5b4-37ec9094a490>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lagrange's Theorem for Hopf Monoids in Species
Canad. J. Math. 65(2013), 241-265
Printed: Apr 2013
• Marcelo Aguiar,
• Aaron Lauve,
Following Radford's proof of Lagrange's theorem for pointed Hopf algebras, we prove Lagrange's theorem for Hopf monoids in the category of connected species. As a corollary, we obtain necessary
conditions for a given subspecies $\mathbf k$ of a Hopf monoid $\mathbf h$ to be a Hopf submonoid: the quotient of any one of the generating series of $\mathbf h$ by the corresponding generating
series of $\mathbf k$ must have nonnegative coefficients. Other corollaries include a necessary condition for a sequence of nonnegative integers to be the dimension sequence of a Hopf monoid in the
form of certain polynomial inequalities, and of a set-theoretic Hopf monoid in the form of certain linear inequalities. The latter express that the binomial transform of the sequence must be
Keywords: Hopf monoids, species, graded Hopf algebras, Lagrange's theorem, generating series, Poincaré-Birkhoff-Witt theorem, Hopf kernel, Lie kernel, primitive element, partition, composition,
linear order, cyclic order, derangement
MSC Classifications: 05A15 - Exact enumeration problems, generating functions [See also 33Cxx, 33Dxx]
05A20 - Combinatorial inequalities
05E99 - None of the above, but in this section
16T05 - Hopf algebras and their applications [See also 16S40, 57T05]
16T30 - Connections with combinatorics
18D10 - Monoidal categories (= multiplicative categories), symmetric monoidal categories, braided categories [See also 19D23]
18D35 - Structured objects in a category (group objects, etc.)
|
{"url":"http://cms.math.ca/10.4153/CJM-2011-098-9","timestamp":"2014-04-16T07:25:02Z","content_type":null,"content_length":"35794","record_id":"<urn:uuid:f7f6febb-89a4-46bd-bf3a-4dc01f463152>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
Numerical methods for nonlinear stochastic differential equations with jumps.
(English) Zbl 1186.65010
Summary: We present and analyse two implicit methods for Ito stochastic differential equations (SDEs) with Poisson-driven jumps. The first method, SSBE, is a split-step extension of the backward
Euler method. The second method, CSSBE, arises from the introduction of a compensated, martingale, form of the Poisson process. We show that both methods are amenable to rigorous analysis when a
one-sided Lipschitz condition, rather than a more restrictive global Lipschitz condition, holds for the drift.
Our analysis covers strong convergence and nonlinear stability. We prove that both methods give strong convergence when the drift coefficient is one-sided Lipschitz and the diffusion and jump
coefficients are globally Lipschitz. On the way to proving these results, we show that a compensated form of the Euler-Maruyama method converges strongly when the SDE coefficients satisfy a local
Lipschitz condition and the $p$-th moment of the exact and numerical solution are bounded for some $p>2$.
Under our assumptions, both SSBE and CSSBE give well-defined, unique solutions for sufficiently small stepsizes, and SSBE has the advantage that the restriction is independent of the jump intensity.
We also study the ability of the methods to reproduce exponential mean-square stability in the case where the drift has a negative one-sided Lipschitz constant.
This work extends the deterministic nonlinear stability theory in numerical analysis. We find that SSBE preserves stability under a stepsize constraint that is independent of the initial data. CSSBE
satisfies an even stronger condition, and gives a generalization of B-stability.
Finally, we specialize to a linear test problem and show that CSSBE has a natural extension of deterministic A-stability. The difference in stability properties of the SSBE and CSSBE methods
emphasizes that the addition of a jump term has a significant effect that cannot be deduced directly from the non-jump literature.
65C30 Stochastic differential and integral equations
60H10 Stochastic ordinary differential equations
60H35 Computational methods for stochastic equations
34F05 ODE with randomness
65L20 Stability and convergence of numerical methods for ODE
65L50 Mesh generation and refinement (ODE)
[1] Baker, C.T.H., Buckwar, E.: Exponential stability in p-th mean of solutions, and of convergent Euler-type solutions, to stochastic delay differential equations. J. Computat. Appl. Math. (to
[2] Burrage, K., Burrage, P.M., Tian, T.: Numerical methods for strong solutions of stochastic differential equations: an overview. Proceedings: Mathematical, Physical and Engineering, Royal Society
of London 460, 373–402 (2004) · Zbl 1048.65004 · doi:10.1098/rspa.2003.1247
[3] Cont, R., Tankov, P.: Financial Modelling With Jump Processes. Chapman & Hall/CRC, Florida (2004)
[4] Cyganowski, S., Grüne, L., Kloeden, P.E.: MAPLE for jump-diffusion stochastic differential equations in finance. In: Programming Languages and Systems in Computational Economics and Finance,
S.S. Nielsen (ed.), Kluwer, Boston (2002), pp. 441–460
[5] Dekker, K., Verwer, J.G.: Stability of Runge–Kutta Methods for Stiff Nonlinear Equations. North Holland, Amsterdam (1984)
[6] Gardoń, A.: The order of approximation for solutions of Itô-type stochastic differential equations with jumps. Stochastic Anal. Appl. 22, 679–699 (2004) · Zbl 1056.60065 · doi:10.1081/
[7] Gikhman, I.I., Skorokhod, A.V.: Stochastic Differential Equations. Springer-Verlag, Berlin (1972)
[8] Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer-Verlag, Berlin, second ed. (1996)
[9] Higham, D.J., Kloeden, P.E.: Convergence and stability of implicit methods for jump-diffusion systems. International Journal of Numerical Analysis & Modeling (to appear)
[10] Higham, D.J., Mao, X., Stuart, A.M.: Strong convergence of Euler-like methods for nonlinear stochastic differential equations. SIAM J. Numer. Anal. 40, 1041–1063 (2002) · Zbl 1026.65003 ·
[11] Higham, D.J., Mao, X., Stuart, A.M.: Exponential mean square stability of numerical solutions to stochastic differential equations. London Mathematical Society J. Comput. Math. 6, 297–313 (2003)
[12] Hu, Y.: Semi-implicit Euler-Maruyama scheme for stiff stochastic equations. In: Stochastic Analysis and Related Topics, V; The Silivri Workshop, Progr. Probab., 38, H. Koerezlioglu, (ed.),
Birkhauser, Boston (1996), pp. 183–202
[13] Maghsoodi, Y.: Mean square efficient numerical solution of jump-diffusion stochastic differential equations. Indian J. Statistics 58, 25–47 (1996)
[14] Maghsoodi, Y.: Exact solutions and doubly efficient approximations and simulation of jump-diffusion Ito equations. Stochastic Anal. Appl. 16, 1049–1072 (1998) · Zbl 0920.60041 · doi:10.1080/
[15] Mao, X.: Stability of Stochastic Differential Equations with respect to Semimartingales. Longman Scientific and Technical, Pitman Research Notes in Mathematics Series 251 (1991)
[16] Mattingly, J., Stuart, A.M., Higham, D.J.: Ergodicity for SDEs and approximations: Locally Lipschitz vector fields and degenerate noise. Stochastic Processes and their Appl. 101, 185–232 (2002)
· Zbl 1075.60072 · doi:10.1016/S0304-4149(02)00150-3
[17] Milstein, G.N., Tretyakov, M.V.: Numerical integration of stochastic differential equations with nonglobally Lipschitz coefficients. SIAM J. Numer. Anal. (to appear)
[18] Schurz, H.: Stability, Stationarity, and Boundedness of some Implicit Numerical Methods for Stochastic Differential Equations and Applications. Logos Verlag (1997)
[19] Sobczyk, K.: Stochastic Differential Equations with Applications to Physics and Engineering. Kluwer Academic, Dordrecht (1991)
[20] Stuart, A.M., Humphries, A.R.: Dynamical Systems and Numerical Analysis. Cambridge University Press, Cambridge (1996)
|
{"url":"http://zbmath.org/?q=an:1186.65010","timestamp":"2014-04-21T04:44:51Z","content_type":null,"content_length":"29128","record_id":"<urn:uuid:47140ce1-4681-4471-8e1b-6dc8e68d149c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Yahoo Groups
Re: base-2 mega & giga Gentlemen, Approximations of this variety seem to have stopped being taught about the time I hit High School. Every time I see "The list can go on" I salivate.
William Brohinsky
6:32 PM
Re: base-2 mega & giga I would love to see some of the trig approximations. (As many as you can find?) These are the little extras which made sliderules much more useful than most
Charles Quinlan
6:31 PM
Sighting - Cosmos This week's episode covers the life of Clair Patterson, investigating lead contamination in the environment as a byproduct of his research into determining the
6:29 PM
Re: Spirule The Spirule is a fun toy, .... errr... tool. I have mine from my control systems course at Virginia Tech and got another from eBay that I plan to pass on to
Ron McConnell
6:28 PM
Spirule I have an original Spirule with original instructions and a copy of Walt Evan's book "Control System Dynamics" which has an Appendix devoted to the use of his
Apr 21
Re: base-2 mega & giga this approximation is a particular case of : general approximation (1+x)^n = 1+nx if x is fairly small compare to 1. in your case (1+(-x))^(-1) = 1 +
Fabrice 94
Apr 21
Re: base-2 mega & giga Back before computers and hand held calculators, the use of approximations or the teaching of such was very common. I recall in my undergraduate days learning
Apr 20
Re: base-2 mega & giga Thanks!!!! I had not thought about using this "approximation" rule, but it sure hit the spot for some calculations I was doing... and, just in time. Q ... On
Charles Quinlan
Apr 19
base-2 mega & giga Applying Moore's Law on a slide rule recently, I needed a value for 2^20, the base-2 "mega". Of course in many cases one million is close enough, but I wanted
Paul Hirose
Apr 18
Re: ST introducrtion Thanks, Clark, sorry for the delay. I didn't mean to imply that Rietz was the first to use the CI scale. I only meant that he added the CI to his scale
Apr 18
Re: ST introducrtion Thanks, Steve, I came to that conclusion a few hours ago. I used m which is the common nomographic term for modulus or length of the scale being used. marion
Apr 17
Re: ST introducrtion Hi Marion, This equation would be to calculate the distance from the left index, where L is the total scale length. Using 1.8 keeps the argument of the log
Steve Treadwell
Apr 17
Re: ST introducrtion Chris Redding some time ago used the expression d = L*log[x*pi/1.8]. Where does this come from? It seem strange to use 1.8 instead of 180 which is the proper
Apr 17
Re: ST introducrtion It would probably be best to use the radian as the basis for the ST or SRT scale as the tangent converges toward x from greater than x as x becomes much less
Maynard Wright
Apr 16
Re: ST introducrtion Thanks! Q ... On Wed, 4/16/14, Steve Treadwell wrote: Subject: Re: SR ST introducrtion To: sliderule@yahoogroups.com Date: Wednesday,
Charles Quinlan
Apr 16
Re: ST introducrtion Which would make sense as the trig scales are based on the "unit circle" and the measure of the arc would be more accurate at that small an angle. Q ... On
Charles Quinlan
Apr 16
Re: ST introducrtion It seems that different manufacturers used different values - some used the radian, some the sine, and some an intermediate value. Go to the group web site
Steve Treadwell
Apr 16
Re: ST introducrtion And FR wrote definitely arc0.01x for ST scale, and there are two small marks by 6° left for sine and right for tangent. Zoltán On Wednesday, April 16, 2014
Apr 15
Re: ST introducrtion That is why K&E called it the SRT scale after 1955 on their decimal-trig rules after 1955. Clark From: sliderule@yahoogroups.com
Apr 15
Re: ST introducrtion I haven’t got any written proof but as far as I know the ST scale is neither sine nor tangent, it’s the length of the arc: sin5.5°=0.09585 ≈
Apr 15
Re: ST introducrtion Hi Marion I had not thought about the origin of the ST scale. K&E did not use it until the introduction of the 4080-3 and 4081-3 in 1936. K&E first put a CI
Apr 15
ST introducrtion Does anyone know what scale factor was used on the original introduction of the ST scale? I understand Dr. Rietz introduced the ST scale around 1925 or 1929 on
Apr 15
Thomas Wetmore
Apr 11
Re: Spare Parts and Trading Tom, If you just want to buy a cursor you can try here: http://srtco.us He has some K&E parts - a window for your 4083 lists for $16. I've bought parts and
Steve Treadwell
Apr 10
Re: Spare Parts and Trading Hi Tom If you are just looking for the glass I have 6 to 8 pieces for that frame size. They are used but in good condition. Clark McCoy From:
Apr 10
Re: Spare Parts and Trading Tom, I have recently seen on Ebay an advertisment for replacment cursor windows that I believe are for that size rule. I seem to remember an asking price
Not Here
Apr 10
Re: Spare Parts and Trading There is/was a guy on eBay selling decent repro parts for K&E. I've bought some cursors, etc. and was very satisfied. Hans.
Hans E. Hansen
Apr 10
Spare Parts and Trading I'm a new member and a relatively new collector. I have about 30 rules, concentrating on K+E, but with representatives from other manufacturers. I have a
Apr 10
Re: Slide rule simulations Not exactly simulation software, but I have online virtual rules for Sun-Hemmi 153 K+E 4093-3 K+E Analon
Mark Armbrust
Apr 8
Robert Wolf
Apr 8
Loading 1 - 30 of total 45,122 messages
|
{"url":"https://groups.yahoo.com/neo/groups/sliderule/conversations/messages","timestamp":"2014-04-23T12:15:54Z","content_type":null,"content_length":"55533","record_id":"<urn:uuid:297323e6-4762-4c12-b925-960d4f4446e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dictionaries
Python dictionaries are one of its more powerful built-in types. They are generally used for look-up tables and many similar applications.
A Python dictionary represents a set of zero or more ordered pairs (k[i], v[i]) such that:
● Each k[i] value is called a key;
● each key is unique and immutable; and
● the associated value v[i] can be of any type.
Another term for this structure is mapping, since it maps the set of keys onto the set of values (in the algebraic sense).
To create a new dictionary, use this general form:
{ k[0]: v[0], k[1]: v[1], ... }
There can be any number of key-value pairs (including zero). Each key-value pair has the form “k[i]:v[i]”, and pairs are separated by commas. Here are some examples of dictionaries:
{'Bolton': 'Notlob', 'Ipswich': 'Esher'}
{(1,1):48, (8,20): 52}
For efficiency reasons, the order of the pairs in a dictionary is arbitrary: it is essentially an unordered set of ordered pairs. If you display a dictionary, the pairs may be shown in a different
order than you used when you created it.
>>> signals = {0:'red', 1: 'yellow', 2:'green'}
>>> signals
{2: 'green', 0: 'red', 1: 'yellow'}
|
{"url":"http://infohost.nmt.edu/tcc/help/pubs/python/web/dict-type.html","timestamp":"2014-04-21T04:33:04Z","content_type":null,"content_length":"4781","record_id":"<urn:uuid:cef3ac60-c7c4-43ac-9207-19133e08c1a8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tidal Effects of the Moon
Well, they could save all that orbit-maintaining fuel now, couldn't they?
In a sense, they do. The Station is in a "torque equilibrium attitude". The aerodynamic drag on the ISS does not act through the center of mass: it induces a torque as well as a force on the ISS. The gradient of the gravitational acceleration also induces a torque on the vehicle (this is yet another tide-like effect and varies with 1/R^3, as do most tidal effects). The aero torque and gravity gradient torque will counterbalance when the vehicle is in just the right attitude. This torque equilibrium attitude is the ISS' nominal attitude.
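That 1/R^3 falloff is also why the Moon raises larger tides on Earth than the far more massive Sun; a rough check with standard textbook values (illustrative only):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_factor(mass_kg, distance_m):
    # Gravity-gradient (tidal) acceleration scales as 2*G*M / R^3.
    return 2 * G * mass_kg / distance_m ** 3

moon = tidal_factor(7.35e22, 3.84e8)   # Moon's mass and mean Earth distance
sun = tidal_factor(1.99e30, 1.496e11)  # Sun's mass and mean Earth distance

# Despite the Sun's far stronger overall pull, the 1/R^3 falloff makes
# the Moon's tidal effect on Earth roughly twice the Sun's.
print(moon / sun)  # roughly 2.2
```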
There is one tidal effect that is smaller than 1/R^3. The moon and sun create the sea tides we all know about. The not-so-solid earth also undergoes deformations, smaller in height than the sea tides, but much greater in mass. These solid-body tides subtly affect the orbits of low-earth orbit satellites. For more on this, google "k2 Love number".
Note: k2 is the most significant of several Love numbers. Googling "Love number" without the k2 results in TMI.
|
{"url":"http://www.physicsforums.com/showthread.php?p=1420448","timestamp":"2014-04-17T12:42:07Z","content_type":null,"content_length":"37644","record_id":"<urn:uuid:4c406bed-8a76-45ce-ac17-b149ef3b68dd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reveal the Cards
Below you should see a drawing (if my computer-skills were up to the task, that is) of four cards, each having one half (the left or the right) covered. On the other half, you can see whether the card is clear or has a circle on it.
We shall number the cards 1, 2, 3 and 4, 1 being the topmost card, 4 being the card on the bottom.
I put to you the following proposition:
If a card has a circle in its left half, then it also has a circle in its right half.
1) What's the least number of cards that must be completely uncovered to see that the proposition is true?
2) Which cards are these?
Last edited by Dross (2007-02-14 01:14:11)
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=6042","timestamp":"2014-04-20T23:37:27Z","content_type":null,"content_length":"13666","record_id":"<urn:uuid:b3f532e6-21fc-4ddf-b8e1-e9db7f20168a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: abstract algebra question
Replies: 2 Last Post: Jul 3, 2013 12:22 PM
abstract algebra question
Posted: Sep 14, 2010 6:42 PM
Trying to solve the following: Let G be a group with a finite number of elements. Show that for any a belonging to G, there exists n belonging to the positive integers such that a^n=e (identity element). [Hint: consider e, a, a^2, a^3, ..., a^m where m is the number of elements in G, and use the cancellation laws.]
Any suggestions welcome, this is a new subject to me.
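Not a full solution, but the hint can be seen in action with a concrete finite group — here the nonzero integers mod 7 under multiplication (an example of my own choosing, not from the textbook):

```python
# The nonzero residues mod 7 form a group of order 6 under multiplication.
group = [1, 2, 3, 4, 5, 6]
modulus = 7

a = 3
powers = [pow(a, n, modulus) for n in range(1, len(group) + 1)]
print(powers)  # [3, 2, 6, 4, 5, 1] -- some power of a is the identity

# The hint's pigeonhole idea: listing e, a, a^2, ..., a^m gives m+1
# elements in a group of only m elements, so two powers must coincide;
# cancellation then yields a^n = e for some positive n.
assert 1 in powers
```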
Date Subject Author
9/14/10 abstract algebra question Guest
7/3/13 Re: abstract algebra question Frederick Williams
7/3/13 Re: abstract algebra question Guest
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2139832","timestamp":"2014-04-19T17:25:04Z","content_type":null,"content_length":"18550","record_id":"<urn:uuid:10cfd177-860e-44db-a18b-cd71d3fea03f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parker, TX Algebra 2 Tutor
Find a Parker, TX Algebra 2 Tutor
...I graduated from Texas A&M University in 2005 with a bachelor’s degree in Interdisciplinary Studies. From 2005-2007 I taught 5th grade science at Thurgood Marshall Elementary, a Title 1
school, in Richardson ISD. In 2007 I moved to Westwood Junior High, also in Richardson ISD, to teach 7th grade math and begin my career as a coach.
15 Subjects: including algebra 2, chemistry, geometry, algebra 1
...I use the step approach so the students have a road map for where they are going and how to get there as the journey is more important than the destination. We also discuss the different
routes that they can take as there is rarely just one way to go. Whether your child needs to re-mediate, retain, or raise their math skills, get them started today.
10 Subjects: including algebra 2, geometry, algebra 1, GED
...Students improve an average of 6+ points. I show students multiple methods so they learn new approaches and pick the one that makes most sense. I teach an ACT prep class at a private high
school and students improve an average of 6+ points by the end of the course.
11 Subjects: including algebra 2, algebra 1, precalculus, SAT math
...I'm more than willing to work with parents to find an arrangement that works for everyone involved. My goal for every session is to ensure that both student and tutor are satisfied with our
progress. I'll be learning from them as much (or more) as they learn from me.
28 Subjects: including algebra 2, chemistry, English, biology
...Particular strengths are in unfamiliar word meanings deduced from context clues and in mathematics problems requiring both reading comprehension and correct algebraic representations and
setups. I have ACT math resources which can be used to diagnose students' particular strengths and weaknesses...
17 Subjects: including algebra 2, chemistry, geometry, GRE
Related Parker, TX Tutors
Parker, TX Accounting Tutors
Parker, TX ACT Tutors
Parker, TX Algebra Tutors
Parker, TX Algebra 2 Tutors
Parker, TX Calculus Tutors
Parker, TX Geometry Tutors
Parker, TX Math Tutors
Parker, TX Prealgebra Tutors
Parker, TX Precalculus Tutors
Parker, TX SAT Tutors
Parker, TX SAT Math Tutors
Parker, TX Science Tutors
Parker, TX Statistics Tutors
Parker, TX Trigonometry Tutors
|
{"url":"http://www.purplemath.com/parker_tx_algebra_2_tutors.php","timestamp":"2014-04-17T04:09:51Z","content_type":null,"content_length":"23938","record_id":"<urn:uuid:582cb999-9524-4279-82da-35ec9254de3f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Time-Varying versus Time-Invariant Finite-Length MMSE-DFE on Stationary Dispersive Channels

Naofal Al-Dhahir, Member IEEE
A time-varying and a time-invariant structure for the finite-length MMSE-DFE are presented and their performances are compared on a stationary channel impaired by ISI and additive Gaussian noise. The time-varying structure has an innovations error sequence but incurs a throughput loss on ISI channels because of its block-processing nature. Conditions under which the time-invariant structure exhibits near-optimal performance are described. Both structures converge to the canonical MMSE-DFE of [3] as their filters' lengths become infinite.

I. Introduction

The infinite-length minimum-mean-square-error decision feedback equalizer (MMSE-DFE) was shown to be a canonical (information lossless) receiver structure in [3]. In practice, the MMSE-DFE is implemented using finite-impulse-response (FIR) filters whose lengths are set by implementational complexity constraints. These constraints cause the conventional MMSE-DFE structure to lose its canonical property since the error sequence is no longer stationary and hence cannot be whitened by the joint action of the finite-length time-invariant feedforward and feedback filters, as in the infinite-length case. This non-stationarity was extensively studied and shown to have a well-defined structure in [1]. To restore the canonical property of the MMSE-DFE under the finite-length constraint, time variance was introduced in the feedforward and feedback
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/1010/3901790.html","timestamp":"2014-04-18T21:55:48Z","content_type":null,"content_length":"8691","record_id":"<urn:uuid:ae472555-dd21-4215-bb9a-253394096e5f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Competitive loan terms
Choose from a range of loan terms^1
By letting you choose loan repayment terms, the prepGATE Loan Program makes it simple to find a loan that's right for your family.^1
Rewarding responsible borrowing
• 0.25% interest rate reduction for automatic withdrawal^2
Competitive rates and fees
• Rates from one-month LIBOR + to LIBOR + (with Annual Percentage Rates (APRs) from to )^3,4
• Origination fees from to , depending on the repayment term chosen by the applicant(s)^1
Check out the information and documents you'll need to complete your application.
Compare your options in 2 easy steps
Pick a repayment period
How many years do you want to take to repay your loan? The longer you stretch out your repayment plan, the lower your monthly payments—but you'll also pay more interest over the life of your loan.
Choose the repayment term that works for your needs.
View Estimated Repayment Examples
Want to see the impact your choices may have on your repayment? Make changes above to see the new estimates in the boxes below.
Lowest Rate Highest Rate
Loan Amount^6 $10,000 $10,000
Interest Rate Type^3 Variable Variable
Current Interest Rate^3 - -
APR^7 - -
Origination Fee^8 - -
Monthly Payment - -
(during repayment)^9
Repayment Period - -
(in months)^10
Estimated Loan Payment Total^11 - -
|
{"url":"http://www.prepgate.com/rates-and-repayment/","timestamp":"2014-04-19T07:07:09Z","content_type":null,"content_length":"18594","record_id":"<urn:uuid:f82c2930-defa-422d-b13a-565da12863fb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topology for test functions
up vote 1 down vote favorite
One naive way to define a topology on test functions ${\mathcal D}(\Omega)$ would be to exhaust $\Omega$ by compacts $(K_n)$ and to take the metric induced by the semi-norm system $$ {\| f \|} _ {n}
:= \| f \|_{C^n(K _ {n} )}, $$ i.e. $$ d(f, g) = \sum _ n 2^{-n} \frac{ \|f-g\| _ {n} }{ 1+\|f-g\| _ {n} } $$ I read (without any reference) that this yields a non-complete space.
Do you know a reference or a concrete example how to show non-completeness?
1 Oops, the answer is simple. Take $f_n$ such that $0 \le f_n \le 1$ with $f_n = 1$ on $K_n$ but $f_n = 0$ outside $K_{n+1}$. Then $\| f_n-f_m \|_k = 0$ for all $k \le \min(n, m)$. Therefore, letting $N$ be such that $\sum_N^\infty 2^{-j} < \epsilon$ yields $d(f_n, f_m) < \epsilon$ for all $n, m > N$. – Bernhard Jan 8 '11 at 12:47
Do you wish your question to be closed? – Wadim Zudilin Jan 8 '11 at 12:55
add comment
closed as no longer relevant by S. Carnahan♦ Jan 8 '11 at 13:40
Re: Date's First Great Blunder
From: mAsterdam <mAsterdam_at_vrijdag.org> Date: Thu, 15 Apr 2004 21:17:03 +0200 Message-ID: <407edfb2$0$562$e4fe514c@news.xs4all.nl>
Dawn M. Wolthuis wrote:
> Alfredo Novoa wrote:
>> Dawn M. Wolthuis wrote:
>>>...Mathematical models are simply models/metaphors.
>>They are formal representations.
> Yes, formal representations of a metaphor for whatever in the "real world" > they are applied to >
>>>The discipline of "doing" relational theory
>>>could be seen as mathematics, but the application of this
>>>mathematics to the discipline of application software
>>>development has to do with a belief that this mathematical model
>>>has something to do with the engineering effort underway.
>>It should not have relationship with beliefs. It is proven that it is
>>simpler than the known alternatives.
> > > Simpler to human beings? Simpler to computers? Simpler to work with > mathematically? This is a RELIGIOUS BELIEF that if you take the simplest > mathematical model (aka metaphor) for something,
then that is the best. I > could use a point as a mathematical model for God or use a more complicated > model of a triangle. I prefer the triangle metaphor because it is helpful > for describing a
trinitarian view of God. A point is surely simpler than a > triangle, however. That doesn't mean it is the best metaphor to choose.
Yup. Unfortunately some of these religions are so secretive that even the believers don't all know that they are. I am sure I have some (a lot - who knows?) of those beliefs. One of my unfounded beliefs I *am* aware of is that if people try to understand and respect each other, they will.
> That's really a matter of defining terms. ...
How about a c.d.theory glossary?
Received on Thu Apr 15 2004 - 14:17:03 CDT
Thermodynamics: Enthalpy of Sucrose
1. The problem statement, all variables and given/known data
A sample of sucrose, C12H22O11, with mass 0.1265 g, is burned in a bomb calorimeter initially at 298 K. The temperature rises by 1.743 K. To produce the same temperature increase with an electrical
heater in this apparatus, it is found to require 2.0823 kJ of energy.
(1) Determine Δ H0 (298) for combustion of sucrose.
(2) Use data in Table 19.2 to calculate Δ H0 (298) for combustion of sucrose, and compare
your answer to (1).
(Table 19.2 states that Sucrose has a Molar Enthalpy of formation of -2220 kJ/Mol)
2. Relevant equations
DeltaH = DeltaU + Delta n R T
DeltaH = DeltaU + P DeltaV
U = Heat(constant v)
H = Heat(constant p)
Molar mass of Sucrose
Personal Assumption (Ideal gas?)
3. The attempt at a solution
So, I've tried a few things.
First I tried saying that the combustion of Sucrose is from 12 mol O2+ 1 mol Sucrose = 12 mol CO2 + H2O, thus delta n = 11 moles. Delta U is the energy from the problem statement because in the
calorimeter the combustion is taking place at constant volume.
But if I use this, then my Delta H is in terms of a constant volume measurement, which apparently isn't what we're supposed to use, because Delta H is usually calculated via constant pressure. Even
then, the molar value of Enthalpy is 79314.6 kJ/mol, which even for a big molecule, seems absurd compared to the value in the table for the next part of the problem.
I think the big part is that I'm assuming Ideal gas for the product gases. Is there any way out of this mess?
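For what it's worth, part (1) can be checked numerically. The sketch below is mine, not the poster's: it assumes the water product condenses in the bomb at 298 K (so Δn(gas) = 0 and ΔH ≈ ΔU) and ideal-gas behaviour for CO2 and O2. It lands near the literature combustion enthalpy of sucrose (about -5.6×10³ kJ/mol), which suggests the 79314.6 kJ/mol figure came from a units/moles slip rather than from the ideal-gas assumption.

```python
# Sketch of part (1) -- assumptions are mine, not from the thread:
#   C12H22O11(s) + 12 O2(g) -> 12 CO2(g) + 11 H2O(l),  so Delta n_gas = 12 - 12 = 0.
M_sucrose = 12 * 12.011 + 22 * 1.008 + 11 * 15.999  # g/mol, about 342.3
n_sucrose = 0.1265 / M_sucrose                      # mol of sucrose burned
dU_molar = -2.0823 / n_sucrose                      # kJ/mol, constant-volume heat (exothermic)
R = 8.314e-3                                        # kJ/(mol K)
dn_gas = 12 - 12                                    # mol gas produced minus consumed
dH_molar = dU_molar + dn_gas * R * 298.0            # kJ/mol
print(f"dH(298) = {dH_molar:.0f} kJ/mol")           # about -5.6e3 kJ/mol

# Part (2), using Table 19.2's -2220 kJ/mol for sucrose; the CO2 and H2O(l)
# formation enthalpies below are standard-table values I supplied myself.
dHf = {"CO2": -393.5, "H2O_l": -285.8, "sucrose": -2220.0}  # kJ/mol
dH_from_table = 12 * dHf["CO2"] + 11 * dHf["H2O_l"] - dHf["sucrose"]
print(f"dH from formation data = {dH_from_table:.0f} kJ/mol")
```

With Δn(gas) = 0 the PV correction vanishes, so the constant-volume heat is, to a good approximation, the enthalpy of combustion directly, and the two routes agree to within a fraction of a percent.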
Increment Numpy multi-d array with repeated indices
I'm interested in the multi-dimensional case of Increment Numpy array with repeated indices.
I have an N-dimensional array and a set of N index arrays selecting the entries whose values I want to increment. The index arrays might have repeated entries.
Without repeats, the solution is
a = arange(24).reshape(2,3,4)
i = array([0,0,1])
j = array([0,1,1])
k = array([0,0,3])
a[i,j,k] += 1
With repeats (e.g. j = array([0,0,2])), I'm unable to make numpy increment the repeated positions more than once.
python arrays indexing numpy
2 Answers
How about this:
import numpy as np
a = np.zeros((2,3,4))
i = np.array([0,0,1])
j = np.array([0,0,1])
k = np.array([0,0,3])
ijk = np.vstack((i,j,k)).T
H,edge = np.histogramdd(ijk,bins=a.shape)
a += H
I'm using this with cubic bins and flatten cubes of the same size for i,j and k. Any idea why it starts to break down on arrays larger than 27x27x27? – ajwood Sep 16 '11
at 2:31
I don't know if there is an easier solution with direct array indexing, but this works:
for x,y,z in zip(i,j,k):
    a[x,y,z] += 1
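For completeness: newer NumPy (1.8 and later) has a direct primitive for this. A sketch using `np.add.at` — `ufunc.at` applies the operation unbuffered, so repeated index triples accumulate instead of being collapsed by fancy-indexing's buffering:

```python
import numpy as np

# Unbuffered in-place addition: every occurrence of a repeated triple counts.
a = np.zeros((2, 3, 4))
i = np.array([0, 0, 1])
j = np.array([0, 0, 1])
k = np.array([0, 0, 3])
np.add.at(a, (i, j, k), 1)   # the triple (0, 0, 0) appears twice
print(a[0, 0, 0])            # 2.0
print(a[1, 1, 3])            # 1.0
```

Unlike `a[i, j, k] += 1`, which writes each selected element only once, `np.add.at` is equivalent to the explicit `zip` loop above but stays inside NumPy.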
MathGroup Archive: August 2010 [00481]
Re: wrong result in solving equation and parametric ploting
• To: mathgroup at smc.vnet.net
• Subject: [mg111940] Re: wrong result in solving equation and parametric ploting
• From: Bill Rowe <readnews at sbcglobal.net>
• Date: Fri, 20 Aug 2010 07:06:38 -0400 (EDT)
On 8/19/10 at 7:20 AM, tmatsoukas at me.com (Themis Matsoukas) wrote:
>I don't claim to understand the equations you are solving but aren't
>x,y,z supposed to be real numbers?
>range = 300000000;
>intersec = {x, y, z} /. Solve[{1*((x - y)^2 + (y - z)^2 + (z - x)^2)^2 +
>    1000*((x - y)^3 + (y - z)^3 + (z - x)^3)^1 == 2*(250*10^6)^4,
>    0 == x + y + z}, {z, y}];
>intersec /. x -> range/10.0
>{{3.*10^7, -1.5*10^7 + 1.37486*10^8 I, -1.5*10^7 - 1.37486*10^8 I},
> {3.*10^7, -1.5*10^7 - 1.37486*10^8 I, -1.5*10^7 + 1.37486*10^8 I},
> {3.*10^7, 1.17486*10^8, -1.47486*10^8},
> {3.*10^7, -1.47486*10^8, 1.17486*10^8}}
>For some values of x (x=0, for example), all roots are complex.
The reason complex numbers arise is the use of machine
precision when substituting for x. Specifically, range/10.0
is machine precision, and that simply isn't enough precision to
get accurate answers when the exact solution involves
differences of values with large exponents.
>Moreover, intersec is not a list of three numbers, as ParametricPlot3D assumes, but a list of four sets of {x,y,z} roots. It seems to me that you should be choosing the appropriate roots somehow.
This is definitely an issue that would have to be corrected. But
it will not resolve the need for higher precision arithmetic to
get accurate answers and an accurate plot.
And note
In[26]:= Log[2, 2*(250*10^6)^4] // N
Out[26]= 112.589
That is, the constant the equation is set equal to is a 113-bit integer.
Looking at one of the higher power terms in intersec before x is
given a value results in
In[30]:= Log[2, 387420489000000 x^10 /. x -> range/10] // N
Out[30]= 296.845
which is a 297 bit integer using exact arithmetic.
It should be clear no simple conversion of such large integers
to a machine representation with only 64 bits can result in
accurate answers.
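Bill Rowe's bit-size estimates can be reproduced outside Mathematica. A sketch in Python (my code, using the numbers quoted in the In/Out lines above); since Python integers are exact, `math.log2` yields the same counts:

```python
import math

# Reproduce the bit-size estimates with exact integers.
# Here x = range/10 with range = 300000000, i.e. x = 3*10**7.
c = 2 * (250 * 10**6)**4           # right-hand-side constant, cf. In[26]
x = 300000000 // 10
t = 387420489000000 * x**10        # the high-order term quoted in In[30]
print(math.log2(c))                # about 112.589 -- a 113-bit integer
print(math.log2(t))                # about 296.845 -- far beyond a 53-bit mantissa
```

A double's 53-bit significand cannot represent a ~297-bit integer exactly, which is why machine-precision substitution produces the spurious complex parts.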
Ryszard Wójcicki
Institute of Philosophy and Sociology, Polish Academy of Sciences
in collaboration with
Jan Zygmunt
University of Wrocław
NOTE (added on May 30, 2003). The Polish version of this paper was published in LOGIKA POLSKA OKRESU POWOJENNEGO Próba rzutu oka wstecz, Nauka Nr 4 2002, s. 157-175. This version (the translation was
done by Jan Zygmunt) is supposed to be a faithful counterpart of the original. Nonetheless, we do not rule out corrections or minor improvements, wherever they are indisputable. Fairly recently, various such corrections and improvements have been suggested to us by Z. Adamowicz, J. M. Dunn, Wiktor Marek, Roman Murawski, Zdzisław Pawlak, Johan van Benthem. Many thanks for the assistance. We
plan to complete correcting this paper by the end of September.
1. Introductory remarks
During the 10th Congress of Logic, Methodology and Philosophy of Science (Florence, August, 19-25, 1995) I took part in a discussion panel on the situation of logic in communist countries. This
essay, written to celebrate the 50th anniversary of the Polish Academy of Sciences, is based mainly on the paper I presented there.1 The list of people I stayed in touch with while writing both this paper and its previous version is fairly long.2 By saying „thank you” to all of them I wish to express my gratitude especially to Wojciech Buszkowski, Andrzej Grzegorczyk, Witold Marciszewski,
Wiktor Marek, Roman Murawski, Jerzy Tiuryn, and Jan Zygmunt. The comments and materials sent by them were especially helpful.
The scientific outcome of postwar Polish logic is rich and tremendously varied. It consists of both logic in its basic meaning (cf. section 2) and numerous applications of logical methods. Knowledge of all the results, or even a general orientation in all the branches of logic developed in Poland, requires a competence to which I unfortunately cannot aspire. That is why I asked so many people for help while preparing this paper as well as the earlier one. Not all of the comments, suggestions, or even critical remarks was I able or willing to take into consideration. That is why the full responsibility for the final content of this essay is mine. I hope, however, that this paper does not contain any serious errors, which would have been hard to avoid without the assistance I was given. Once again I wish to express my thanks for all the help I received.
2. Definition
The notion „logic”, as the name of a science, is understood in various ways. The answer to the question of what logic is determines the answer to the question about the scope of the investigations and the results obtained. There is also another reason for dealing with this problem. This paper is intended both for logicians, who – as I hope – will treat it as a useful attempt to look at their achievements as a whole, and for specialists in other areas who would like to become acquainted with Polish logic of the second half of the 20th century. The latter will find in this section the explanations which will make it easier to understand the „proper” parts of the paper.
Logic in its basic meaning is the formal theory of reasoning. Thus, it is a theory whose chief concern is the conditions which an argument should satisfy in order to be carried out in a sound way. The soundness of the „rules of inference” is one of the criteria for the correctness of reasoning. Those rules should ensure that the conclusion is valid if it has been derived from valid premises. In the course of logical analyses of the reasoning process the premises and the conclusion are sentences. They can alternatively be regarded as someone’s beliefs or suppositions. A sentence is a syntactic notion. Analysis of the shape of a sentence (logicians also have to deal with it) requires the notion of syntactical rules. On the other hand, the notion of truth is a semantic one. One cannot define it by syntactical means alone. It requires the meaning of the expressions we use in sentences, since one constantly needs to refer to the language customs and conventions.
Logical issues are strongly related to and often overlap with linguistic issues, both syntactically and semantically. It is true that logicians limit their linguistic analyses to ideal languages
(satisfying conditions that are not satisfied by any natural language), but those languages are not at all „artificial”, as it might be suggested by the often used and unfortunate terminology. The
relation between them and natural languages is similar to that between material points and real three-dimensional physical objects in physics.
If logicians, instead of analysing sentences and relations between them, decided to analyse statements that are understood as someone’s beliefs or suppositions, then the terminology used would reveal
connections between logic and psychology. From the point of view of the latter, reasoning – regarded by logic as the operation on sentences – is a mental activity of some special kind. Logic,
however, is not a part of psychology, as it was thought to be until the beginning of the 20th century. Not because it avoids using the notions of psychology, but because it is a formal discipline.
Like every formal discipline it is concerned with some „arbitrarily” chosen set of assumptions that define „logical systems”. Those systems are the subject of logic. „Arbitrarily chosen” does not
mean „any possible”. It only means that if logicians define a system, then they have a right to follow their own intuitions concerning the virtues of the system they build – the virtues that are
sometimes theoretical, and often practical. Logicians, like other scientists, do not disdain the problem of usefulness of their work.
Usually (not always though) a system of logic is defined as a set of language formulae that are „logical tautologies”, i.e. that are true regardless the way we interpret them. This approach to logic
is called „sentential”. Alternative way of defining logic consists in searching for the assumptions that define in the intuitively suitable manner the notion of logical inference. This approach to
logic is called „inferential”.
When one is led by different intuitions, while defining a system of logic, one obtains different „logics”. Besides classical logic, which is a formal base for the whole standard mathematics, there
were many other logics developed, such as intuitionistic (the one that rejects the law of excluded middle), modal or many-valued logics.
3. On the outskirts of logic
Logic, quite reasonably, is regarded by mathematicians as one of the parts of the foundations of mathematics. Other parts are set theory and the theory of algorithms or, more generally, recursion theory. The foundations of arithmetic also belong to the foundations of mathematics, as they are concerned with the logical analysis of various kinds of formalizations of arithmetic. In this way it is possible to classify the foundations of geometry, the foundations of algebra, etc., as parts of logic (and this is sometimes done). Category theory, universal algebra, and a relatively young discipline called complexity theory are also regarded as branches of modern logic.3
The logical foundations of computer science are another important discipline strongly related to logic. The tremendous importance of logic for computer science has several reasons. Firstly, the structure of computing machines is based on the laws of logic. Secondly, logic provides the tools needed for the design and analysis of programming languages. Thirdly, „cognitive engineering”, that is, the design of databases and expert systems, makes fruitful use of logical tools. The language of the predicate calculus and the laws of logical deduction set the general frame for computer languages. As in the case of the foundations of mathematics, the foundations of computer science are also regarded as a part of logic in a general sense.
The foundations of mathematical linguistics and formal foundations of communication theory are two other disciplines which employ the notions and techniques of logic. Also in these two cases it is
extremely difficult to draw the border line between logic and those issues that do not belong to logic. Finally, there is another very important neighbour of logic, i.e. the theory of knowledge. If I
do not write about it here in more detail, it is because the theory of knowledge is still a conglomerate of many concepts without a well established core.
In this survey I will try to keep a reasonable balance between the narrow and the wide notion of logic. It is not easy. I have to ask in advance for the understanding of all those who will think that I mention results which, according to them, do not belong to logic, and of those who will reckon that some important achievements of Polish logicians are not mentioned at all.
4. Interwar period
The achievements of Polish logicians of the years 1920-1939 have won tremendous worldwide recognition. As a result, the term „the Polish School of Logic” became popular within the international
community of logicians. Let us note that the work of Polish logicians overlapped in a very substantial way with the work of another formation known as the „Lvov-Warsaw School of Philosophy”.4
The Polish school of logic, seen from today’s perspective, appears as a period of achievements that are hard to overestimate. At that time K. Ajdukiewicz formulates his conception of categorial
grammar, J. Łukasiewicz, inspired by some ideas of T. Kotarbiński, develops his idea of many-valued logic, A. Lindenbaum and A. Tarski introduce the method of algebra of language known today as the
method of Lindenbaum algebra, A. Tarski publishes his fundamental papers on the conception of truth and deductive systems, S. Leśniewski following T. Kotarbiński’s reism idea makes an attempt to form
ontology meant to be a system alternative to set theory, L. Chwistek publishes his works on simplified theory of types, Janina Hosiasson-Lindenbaum examines methodological and logical aspects of
probability theory, J. Łukasiewicz and A. Tarski develop an algebraic treatment of logical matrices, S. Jaśkowski (independently of the German logician G. Gentzen) introduces the system of „natural
deduction”, that is a system of rules of inference that makes the „formalized” inference similar to the „natural” one. This list consists of the most spectacular achievements and obviously is not complete.
Logic of the interwar period was developed both in the philosophy and in the mathematics departments. In particular, Leśniewski and Tarski worked in the mathematics departments. Ajdukiewicz,
Kotarbiński, and Łukasiewicz represented a „philosophical wing” of logic. But the cooperation between „mathematicians” and „philosophers” was systematic and quite close.5
5. World War II
The Second World War was a disaster for both Polish logic and Polish science as a whole, for three obvious reasons.
Some of the logicians, including those who made a substantial contribution to the development of the discipline, did not survive. Father Jan Salamucha, a historian of logic and a close collaborator of
Łukasiewicz and Bocheński, was killed during the Warsaw Uprising in 1944. Because of their Jewish origin Adolf Lindenbaum, his wife Janina Hosiasson-Lindenbaum, Mojżesz Presburger, and Mordechaj
Wajsberg (author of the important papers on many-valued logics) were murdered by the Nazis. A. Tarski staying abroad avoided this tragic fate.
When the war was over many Polish logicians who were lucky enough to leave Poland earlier decided not to come back to the country. Thus, for instance, Józef M. Bocheński settled in Switzerland,
Czesław Lejewski in England, Jan Łukasiewicz in Ireland, and Henryk Hiż, Bolesław Sobociński, and Alfred Tarski in the USA. A. Ehrenfeucht and J. Mycielski decided to emigrate later on.
And finally, the war destroyed the whole structure of cooperation among scientific institutions. The Jan Kazimierz University of Lvov and the Stefan Batory University of Vilnius ceased to exist. So did numerous research teams. The continuity of teaching, publishing, and organizational work was broken. The manuscripts of many papers prepared for publication were destroyed. But still, during the
war, the illegal „underground university” worked in full swing, and logic was taught or studied there by: K. Ajdukiewicz, Janina and Tadeusz Kotarbiński, A. Mostowski, J. Salamucha, Z. Zawirski, H.
Hiż, Z. Czerwiński, A. Grzegorczyk, J. Pelc, R. Suszko, K. Szaniawski, and many others.
6. Early postwar period – logic in philosophy departments
Although nobody doubted that both political and social life would undergo dramatic changes, the end of the war brought new hopes. Despite tremendous losses, Polish logic did not cease to exist and
quickly began to revive. This applies to logic both in philosophy departments and in mathematics departments (cf. the next section).
Tadeusz Kotarbiński and his wife Janina started their scientific activity in Łódź which became one of the main scientific centres (the Łódź University was founded in 1945) at the time of the
rebuilding of Warsaw. Among many young people grouped around them were Jerzy Pelc, Marian Przełęcki, and Klemens Szaniawski. Kazimierz Ajdukiewicz became the Rector of the Poznań University. Besides
his rector duties he gathered together a team of people who dealt with various aspects of logic. Under his leadership worked Marcin Czerwiński, Jerzy Giedymin, Seweryna Łuszczewska-Romahnowa, and
Roman Suszko. Maria Kokoszyńska-Lutman, well known and recognized for her analyses of Tarski’s notion of truth, settled in Wrocław, where Henryk Mehlberg also worked until he decided to emigrate to the USA. In Cracow there was a group of logicians led by Izydora Dąmbska and Roman Ingarden. Tadeusz Czeżowski settled in Toruń, where Leon Gumański was one of his students.
Except for Ingarden (one of the most eminent Polish philosophers of the 20th century) all the mentioned leaders of the research groups were outstanding representatives of the Lvov-Warsaw School, the
formation whose founding father was Kazimierz Twardowski. Ajdukiewicz and Kotarbiński were its most influential members. Not all of the leaders were logicians, even in the wide sense of the notion of logic. Izydora Dąmbska was not close to the subject. All of them, however, appreciated the significance of logic and were ready to support its development. Zygmunt Zawirski was an outstanding example
of such an attitude; he was a philosopher and a methodologist, very familiar with the issues of contemporary logic, and was an editor of „Kwartalnik Filozoficzny” („Philosophical Quarterly”), and
worked as a professor at the mathematics department of the Jagiellonian University in Cracow. Under his supervision at the J.U. S. Jaśkowski (in 1945), A. Mostowski (in 1945), and J. Słupecki (in
1947) completed their habilitation theses, and R. Suszko (in 1945) received his master degree.
7. Early postwar period – logic in mathematics departments
Groups of logicians formed in the mathematics departments usually were labelled as those dealing with the foundations of mathematics or were parts of differently labelled institutions. Such a way of
naming the institutions surely had a reasonable justification. At the same time it was a camouflage, since no scientific discipline was perceived by the communist ideologists as neutral; logic in particular (as a part of philosophy, which was the main ideological discipline) became the subject of their special attention. In the Soviet Union, for instance, logicians were required to replace formal logic with so-called „dialectical logic”, rediscovered from the writings of Hegel and other „classics” of Marxism. The same was expected from other countries of the „block”, especially from Poland.
In Wrocław logic was developed in two groups: one led by professor Czesław Ryll-Nardzewski and the other by professor Jerzy Słupecki. C. Ryll-Nardzewski, who was already the author of a few extremely
important papers on logic, grouped around himself several talented people, among them L. Pacholski, B. Węglorz, A. Wojciechowska. J. Słupecki, known for his work with many-valued logics, cooperated
with young talents such as Witold A. Pogorzelski, Ludwik Borkowski, Bogusław Iwanuś, Tadeusz Prucnal. One of the most brilliant logicians of the postwar period was Jerzy Łoś, whose talent was
discovered by J. Słupecki, although their cooperation did not last long.
Stanisław Jaśkowski began his work in the mathematics department of the Nicolaus Copernicus University in Toruń in 1945. Later among his collaborators there were Jerzy Kotas and August Pieczkowski.
Polish postwar logic owes a great deal to professor Andrzej Mostowski, who pursued his academic career in Warsaw. He was such a prominent figure that I shall describe his achievements in a separate section.
8. Andrzej Mostowski
There are a few factors that make A. Mostowski an exceptional scholar.
Firstly, he was an unquestionable scientific authority. Some of his results concerning the foundations of mathematics were of breakthrough significance. His contribution to set theory was immense. He was one of those who, together with A. Tarski, developed the theory of decidability (the search for algorithms which enable us to identify the theorems of particular theories). He suggested an algebraic interpretation of the quantifiers and initiated research on so-called generalized quantifiers.6 He also made a substantial contribution to model theory.
Secondly, he was a scientist of a very well established reputation in many international institutions. Thanks to his connections Poland was visited by the most eminent researchers in logic and
foundations of mathematics.
Thirdly, he had a unique skill of looking at logic as a strictly mathematical discipline without losing its deep philosophical content. His handbook Logika Matematyczna („Mathematical Logic”), published in 1948, is a good example of presenting logical investigations in a way that shows their technical difficulties and philosophical significance at the same time. Two other works by him are of a similar character: The Present State of Investigations on the Foundations of Mathematics and Thirty Years of Foundational Studies.7 The latter will be discussed in section 14.
Of those who were his students and close collaborators one should mention the most important names of the postwar logic and foundations of mathematics: Zofia Adamowicz, A. Ehrenfeucht, A.
Grzegorczyk, W. Guzicki, W. Marek, H. Rasiowa, R. Sikorski, P. Zbierski, and many others.8 Thanks to Mostowski research in logic has gained high prestige in the department of mathematics at the
Warsaw University, where he was the head of the Section of Algebra, the Section of the Foundations of Mathematics, and of the Institute of Mathematics at the Polish Academy of Sciences. He was a
member of the Polish Academy of Sciences.
9. Alfred Tarski
During my talk at the congress in Florence I said the following: „One might say, not being entirely wrong, that for a long period of time the chief Polish seminar on logic was held in Berkeley,
California, in the residence of Alfred Tarski who kept very close cooperation with his friends and colleagues from Poland, Andrzej Mostowski in particular.” From the 40s Berkeley was visited by W.
Szmielew, A. Mostowski, A. Ehrenfeucht, J. Łoś, J. Mycielski, L. Szczerba, L. Pacholski, and others, who took part in various research programmes coordinated by Tarski.9
There is no doubt that Alfred Tarski was one of the most outstanding logicians of the 20th century. He also influenced immensely the development of the research in semiotics and philosophy.
Philosophical aspects of his theory of truth (Tarski was not only fully aware of them, but also put them forward in his papers published in philosophical periodicals) are even at present the subject of heated discussions and analyses.10 Intuitions associated with the notion of truth formed a background for his concept of the consequence operation, as well as (cf. section 15) for model theory.
The research center created by Tarski in Berkeley was one of the most influential centers of the foundations of mathematics in the world. Despite many difficulties (in getting the permission to travel abroad) Polish logicians were able to keep in touch with him and his students (who are among the most outstanding American logicians nowadays), which was hard to overestimate for the Polish postwar logic.
10. First Congress of Polish Science
In the four previous sections I tried to give some evidence that despite severe war losses Polish logic continued to develop.12 But the growing ideological pressure and some decisions of a political or administrative nature were a real danger. Those factors were bound to affect the situation of logic.
Nowadays, hardly anybody is aware that Jaśkowski’s discussive logic, put forward in 1948, was an attempt to form a logical system which admits controversies and contradictions in discussions. Jaśkowski did not bother to call those controversies „dialectical” or to use the word „dialectical” at every opportunity in his papers, not to mention that he never cared to quote any of the so-called „classics” of Marxism (this was possible in Poland, while in Russia it would have been an act of desperate courage).
Jaśkowski’s papers, as well as a few others published at that time by Maria Kokoszyńska-Lutman, L. S. Rogowski, and T. Kubiński, were an attempt to demonstrate implicitly that some ideas of
dialectical logic can be stated reasonably and analysed with the use of formal methods. This attempt was bound to fail, since dialectical logic was supposed to be a part of the so-called „Marxist
dialectic” and not one of the systems of logic.
The indoctrination of science, along with some institutional changes, was to be launched at the First Congress of Polish Science in 1953. Logic was not the main aim of the „ideological offensive” there. The Lvov-Warsaw School was. Party ideologists had to open the process of full eradication of any politically wrong philosophy (different from „diamat” – dialectical materialism – and „hismat” – historical materialism) from academic life. Poland could not remain an oasis, free from the rules that were obeyed in other countries of the block.
The public criticism of the Lvov-Warsaw School was directed against its most eminent representatives: K. Ajdukiewicz, T. Kotarbiński, M. Ossowska, and S. Ossowski. Let us note however that as a
result of this campaign none of the „bourgeois” philosophers was expelled from the academic life. But there were two precautions taken to prevent them from teaching philosophy and other social
sciences. Some of the philosophy professors were employed as the chairs of logic. Others got their jobs in the Polish Academy of Sciences.
11. Ajdukiewicz’s programme for logic
Although the future of logic was more and more jeopardized, one has to notice that one of the Party’s chief ideologists – Adam Schaff, who opposed Ajdukiewicz in the 1953 debate – supported him in organizing the Section of Logic at the Polish Academy of Sciences. That section eventually became a part of the Institute of Philosophy and Sociology of the Academy.
At the same time K. Ajdukiewicz starts Studia Logica, a periodical with the following editorial board: Kazimierz Ajdukiewicz (editor in chief), Leszek Kołakowski, Tadeusz Kotarbiński, Andrzej
Mostowski, and Roman Suszko (secretary). The presence of L. Kołakowski unfortunately does not mean that this now world-famous philosopher began his career as a logician. He was simply the only person on the editorial board who was a Party member.13
12. Fifty Years of Studia Logica
Soon the periodical started to appear regularly, with an extended editorial board, and its papers were more and more often published in other languages. The papers of SL were written by
logicians from both philosophy and mathematics departments. Mathematical logicians were also among the members of the editorial board. From the very beginning A. Mostowski collaborated with SL very closely.
After K. Ajdukiewicz’s death in 1963 Jerzy Słupecki becomes the editor in chief. In 1971 Zdzisław Pawlak and in 1978 Helena Rasiowa join the editorial board; Rasiowa takes the position of one of the
chief editors in 1979.
In 1976 the periodical undergoes an important change. Thanks to the efforts of the editors of that time – Stanisław Surma, Klemens Szaniawski, Ryszard Wójcicki (editor in chief), and Jan Zygmunt
(secretary) – the international editorial board is established.14 The process of transforming the periodical into an international one was not easy and had to be cleared by the State Security Police.15
Over 150 issues of SL have been published since 1953. It becomes an international periodical in 1976 not only because English is the sole language of publication and because of the
international editorial board, but also because of the new publisher. For some time it was published by Ossolineum with North-Holland as a co-publisher, replaced later by Kluwer Academic Publishers.
In 1991 (after losing some financial support), Kluwer becomes the sole publisher of SL.16 It is currently edited by Ryszard Wójcicki and Jacek Malinowski. Since 2000 it has been published in three
volumes a year, and has been accompanied by the series of books Trends in Logic, Studia Logica Library since 1995.
13. Publications on Logic
Studia Logica has not been the only periodical suitable for papers on logic. Mathematically oriented papers have been published in Fundamenta Mathematicae, an eminent journal established in
1920 which quickly became the leading international publication on the foundations of mathematics. This journal was a vehicle for a vast number of papers written by logicians from all over the world.
Another important periodical on logic in Poland has been Reports on Mathematical Logic, formed in 1973 by Stanisław Surma. A newsletter called Bulletin of the Section of Logic of the Institute of
Philosophy and Sociology of the Polish Academy of Sciences 17 has been active in promoting both research and international cooperation. In 1993 Jerzy Perzanowski started a periodical on
philosophical logic called Logic and Logical Philosophy.
Since 1974 Fundamenta Informaticae (a journal initiated by Zdzisław Pawlak) has been published under the editorship of Helena Rasiowa (and Andrzej Skowron, after her death). This journal covers
applications of logic to computer science and is published by IOS Press.
This survey of publications on logic would not be complete without mentioning the series Biblioteka Myśli Semiotycznej (The Library of Semiotics Ideas) initiated by Jerzy Pelc. It is hard to
overestimate its importance, also for logic. The 46 volumes of this series give an idea of the richness and variety of the research in the areas where logic meets linguistics.18
14. The Foundations of Mathematics
Even an attempt to cover only the most important achievements of the research in the foundations of mathematics would require much more space than this section offers. One can find a highly competent
survey of this matter in A. Mostowski's work Thirty Years of Foundational Studies (cf. 7), concerning the development of the subject in 1930-1964. The list of references in this book consists of 244
items. Here are the names of Poles that are included there.
A. Ehrenfeucht is an author of the paper on the methods of game theory applied to the problem of decidability of the first order theories, and co-author (together with A. Mostowski) of the paper on
model theory. There are six papers by A. Grzegorczyk mentioned there (one of them is a joint paper written with A. Mostowski and C. Ryll-Nardzewski). Grzegorczyk’s papers are, with one exception, on
the problems of decidability and computability. The joint paper is concerned with the foundations of arithmetic. The decidability theory is also a subject of the paper by A. Janiczak. The name of
Jerzy Łoś opens a list of six papers, mainly on model theory. One of them written with C. Ryll-Nardzewski explores the theory of representation of Boolean algebras and the problem of Stone theorem’s
equivalents. The paper by Łoś and Suszko is concerned with the operation of summing models. There is E. Marczewski's paper on abstract algebra mentioned there, and a large work by S. Mazur on
computational analysis. Mostowski mentions four of his own works. H. Rasiowa is the author of papers on the algebraic representation of some non-classical logics. Algebraic methods are the subject of
a work by H. Rasiowa and R. Sikorski. They are also the authors of the monograph The Mathematics of Metamathematics (PWN, 1963, position [173] on the list). Two papers of C. Ryll-Nardzewski are
quoted there. One of them presents a characterization of categorical theories, in the other one the axiom of induction is examined and it is proved that Peano arithmetic is not finitely
axiomatizable. In the paper by R. Sikorski the notion of a metric space is applied to the analysis of intuitionistic logic. W. Szmielew in her paper proves the famous theorem stating the decidability
of the elementary theory of abelian groups. Tarski's name appears nine times as an author and several times as a co-author. The monograph Undecidable Theories (North-Holland) is quoted as a work of A.
Tarski, A. Mostowski, and R. M. Robinson. This book set an important direction in research concerned with the search for general methods of proving the undecidability of formalized theories in
general and various mathematical theories in particular.
If one wanted to extend the list of the names and results that have been important since 1964, one would need to include the following. Z. Adamowicz (results concerned with „weak” arithmetic and
arithmetic with open induction; she is a co-author (with P. Zbierski) of Logika Matematyczna, PWN, 1991; translated as Logic of Mathematics. A Modern Course of Classical Logic, John Wiley & Sons,
1997), K.R. Apt (second order arithmetic and foundations of computer science), A. Ehrenfeucht, A. Grzegorczyk (author of Zarys Logiki Matematycznej, 3rd ed., PWN, 1973; translated as An Outline of
Mathematical Logic, Kluwer 1974), W. Guzicki (forcing), M. Krynicki (generalized quantifiers), A. Mostowski (reflexivity of Peano arithmetic), R. Murawski (expandability of models of Peano
arithmetic to models of second order arithmetic), S. Krajewski (nonstandard satisfaction classes), W. Marek and M. Srebrny (their investigations were concerned with higher-order arithmetic and set
theory, in particular with relations between Zermelo-Frankel theory and Kelley-Morse class theory), H. Kotlarski (automorphisms of nonstandard models of Peano arithmetic), J. Mycielski (infinite
combinatorics, universal algebra, and the analysis of Hugo Steinhaus's axiom of determinacy), Z. Ratajczyk, C. Ryll-Nardzewski, L. Szczerba (foundations of geometry, the notion of interpretability of
theories formulated in various languages), T. Traczyk (Hilbert spaces, quantum logic), P. Zbierski (descriptive set theory), Z. Vetulani (foundations of second and higher-order arithmetic and
artificial intelligence).
It is obvious that also after 1964 the leading role in the development of the foundations of mathematics and its important branch – model theory (comp. section 15) – was played by A. Mostowski 20 and
indirectly by A. Tarski.
15. Model Theory
This theory has a distinguished position within the foundations of mathematics. Initiated in the 40s and the 50s by the works of L. Henkin, A. Robinson, and A. Tarski, it forms an important part of
mathematical logic. It deals with the relations between a language (in its formalized version) and its „models”, i.e. the structures to which the language expressions (if they are
properly interpreted) refer.
As was pointed out in the previous section, many important results in model theory were obtained by Poles. Especially significant, because of its numerous applications, was Łoś's
ultraproduct theorem (based on some of Skolem's ideas of the 30s). The Boolean models method of H. Rasiowa and R. Sikorski 21 was another important result. It made it possible to extend the field of model theory to
non-classical logics. In the 60s and the 70s model theory was developed by L. Pacholski and B. Węglorz, later by H. Kotlarski, and quite recently important and widely recognized results were obtained
by Ludomir Newelski.
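To give the flavour of the results mentioned above: Łoś's ultraproduct theorem, in its standard modern formulation (the notation below is the usual textbook one, not necessarily that of the original paper), states that truth in an ultraproduct reduces to truth „almost everywhere” with respect to the ultrafilter:

```latex
% U is an ultrafilter on the index set I; the A_i are structures for a
% common first-order language, and [f] denotes the class of f modulo U.
\[
\prod_{i \in I} \mathfrak{A}_i \,\big/\, \mathcal{U} \;\models\;
\varphi\bigl([f_1],\dots,[f_n]\bigr)
\quad\Longleftrightarrow\quad
\bigl\{\, i \in I : \mathfrak{A}_i \models \varphi\bigl(f_1(i),\dots,f_n(i)\bigr) \,\bigr\} \in \mathcal{U}.
\]
```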
16. Algebraic logics
This branch of logic (overlapping to some extent with universal algebra) goes back in its history to the Lindenbaum-Tarski idea of the algebra of language. To make a long story short: it deals with the
relations between the laws of logic and the theorems that characterize algebraic operations. 22
The basic tools for examining the relations between algebra and logic (more precisely, between algebraic structures and particular systems of logic) were established by Alfred Tarski, his
collaborators and followers. The monograph by H. Rasiowa and R. Sikorski The Mathematics of Metamathematics, PWN, 1963 (cf. 20), an outstanding source of information in this area, was also a source
of inspiration for many authors from all over the world to carry out research in algebraic logic. In 1974 H. Rasiowa published her monograph An Algebraic Approach to Non-Classical Logic 23 in which
she extended her earlier results to the large class of the so-called non-implicative logics. This class includes modal logics among others.
Besides Rasiowa and Sikorski algebraic logic was the research subject of Cecylia Rauszer, who was interested in logics with „constructive falsehood”. A lot of important and difficult results were
obtained by A. Wroński and his group of logicians (Section of Logic, Philosophy Department of the Jagiellonian University in Cracow). Their research, carried out partly in cooperation with the
Japanese logicians, made important contributions to the theory of pseudo-Boolean algebras, BCK algebras, equivalential algebras, and logical systems related to these classes of algebras. In Toruń,
algebraic methods were developed by J. Kotas and his group.
The issues of algebraic logics are closely related to those of logical matrices (cf. section 19).
17. Foundations of Computer Science
The research in this branch of logic, which is still of growing importance, was initiated by H. Rasiowa and her group (Grażyna Mirkowska, Ewa Orłowska, Cecylia Rauszer, Andrzej Salwicki, Andrzej
Skowron, Jerzy Tiuryn, Anita Wasilewska, to mention but a few). The activity of those people was oriented toward developing applied logic such as algorithmic logic, logic of programming (correctness
of programs), logic of information (data bases), and logic of knowledge (expert systems). An important role in the development of the foundations of computer science was played by Zdzisław Pawlak
(„rough sets” theory, Pawlak machines), and also by Andrzej Blikle, Beata Konikowska, Józef Winkowski.
The research in the mathematical foundations of computer science, initiated in Poland by H. Rasiowa and Zdzisław Pawlak, has since the 90s been developed by the group of J. Tiuryn in the Department
of Mathematics, Mechanics, and Computer Science at Warsaw University. This group consists of Damian Niwiński, Jerzy Tyszkiewicz, Paweł Urzyczyn, and Igor Walukiewicz, and deals with numerous
aspects of modern computer science: lambda calculus, functional programming, type theory, modal logics, and theory of finite models.
18. Theory of Consequence
The first postwar papers with reference to Tarski’s consequence theory were published by J. Łoś, J. Słupecki and W. A. Pogorzelski. One of the key papers in this subject was „The algebraic treatment
of the methodology of elementary deductive systems” 24 by J. Łoś. The research in this area was carried out also by A. Grzegorczyk and R. Suszko among others. Suszko was the one who coined the notion
„abstract logic”.25
There are two important aspects of the theory of consequence. Firstly, a methodological aspect. Tarski developed the consequence theory as a theory of „deductive systems”. This notion is a formal
equivalent of the notion of „theory” or even „a system of beliefs”. Logical analysis of a deductive system has numerous and significant methodological implications. Secondly, the notion of consequence
lets us see the limitations of the propositional conception of a system of logic and brings into prominence the inferential conception of such systems.26
Many deep results concerning the so-called structural completeness of logical calculi (the notion was also examined abroad) were obtained by W.A. Pogorzelski (who introduced this notion) and his
students: T. Prucnal and P. Wojtylak among others.
The area of the consequence theory also includes the research concerned with the so-called non-monotonic reasoning. This kind of research was carried out by Witold Łukasiewicz, Wiktor Marek and
Mirosław Truszczyński.
19. Logical Matrices
The notion of a logical matrix, understood as a set of „logical values” (e.g. truth and falsehood in the simplest case) that can be taken by propositions, appeared at the beginning of the 20th century.
J. Łukasiewicz introduced many-valued matrices while defining many-valued logics. A. Lindenbaum proved that each propositional logic (comp. section 2), i.e. the set of tautologies, has an adequate
logical matrix that characterizes it in a unique way. This result as well as one of the papers by A. Tarski and J. Łukasiewicz show that logical matrices can be used as certain generalizations of
algebraic methods (the latter were discussed in section 16).
J. Łoś and R. Suszko tried to find a matrix representation of an arbitrary consequence operation. Their result, however, required some corrections (given by R. Wójcicki).27 These two results (the former is of
vital importance) gave a new direction in a very general, technically difficult, and philosophically interesting field of research.
This problem was my special concern in 1970 when I initiated a separate research project. I started to examine the representations of consequence operations together with J. Czelakowski, W. Dziobiak,
J. Hawranek, M. Tokarz, T. Prucnal, R. Suszko, J. Zygmunt, A. Wroński, and P. Wojtylak. Some contributions to this project were made by the logicians from abroad (e.g. W. Rautenberg), although this
subject is regarded as typically „Polish”.28 It was continued and developed later on by J. Czelakowski and W. Dziobiak as well as by Spanish and American logicians, especially by J.M. Font, W.J.
Blok, and D. Pigozzi.29
20. Type Logics and Categorial Grammars
These areas belong to the foundations of mathematical linguistics. They were explored in Poznań by the logicians grouped around Wojciech Buszkowski. The notion „type logics” refers to the set of
operations that are used while creating the complex language expressions out of simple ones. „Type logics” belong to the family of substructural logics.
From the historical point of view this kind of research was initiated by some of the results of Ajdukiewicz and Bar-Hillel, and refers to the calculus of Lambek (1958). Buszkowski's results (as well as
those of M. Kandulski and W. Zielonka) brought a thorough grasp of some of the fundamental problems of mathematical linguistics and its applications. They explored the algebraic and computational
properties of various kinds of languages, especially the so-called „tree-like languages” generated by categorial grammars, the algorithms for finding minimal categorial grammars, and the relations
between categorial grammars and Chomsky's generative grammars.30
In the Poznań center research in logic and linguistics is carried out also by T. Batóg and J. Pogonowski.
21. Suppositional Logic
In 1934 S. Jaśkowski published an article On the Rules of Suppositions in Formal Logic (published as the first issue of Studia Logica series 31). Jaśkowski’s conception and Gentzen’s conception of
natural deduction were discovered independently of each other and published almost simultaneously.32 The suppositional system of logic was the basis for the computer program „Mizar” designed by Andrzej
Trybulec (Warsaw University's branch in Białystok, currently an independent university). Its function is well expressed by the title of one of the research projects (coordinated by W. Marciszewski):
„Systems of logic and algorithms for the computer testing of the correctness of proofs”.33
22. Others
A very important institution, organized under the auspices of the Polish Academy of Sciences, was the Conference on the History of Logic. It was chaired by T. Czeżowski, J. Słupecki, S. Surma, and
since the 80s it has been organized exclusively by A. Wroński, Section of Logic of the Jagiellonian University in Cracow. Substantial contribution to the subject of the history of logic was made by
W. Marciszewski, R. Murawski, S. Surma, J. Woleński, and J. Zygmunt among others.
Creation of logical systems was not a Polish speciality. Besides the discussive logic of Jaśkowski (which was mentioned in section 10) there are two exceptions: „non-Fregean logic” of R. Suszko – a
system of logic inspired by some intuitions taken from Tractatus ... of L. Wittgenstein, and the system Grz (for Grzegorczyk, its author) – one of the most widely studied systems of modal logic.
Polish logicians (J. Słupecki, H. Rasiowa, T. Traczyk, G. Malinowski, W.A. Pogorzelski, T. Prucnal, M. Tokarz, K. Trzęsicki, R. Wójcicki, not to mention the eminent precursors of the subject – J.
Łukasiewicz, A. Tarski, and M. Wajsberg) contributed tremendously to the research in many-valued logics 34. Important results were obtained in the field of intuitionistic logic (H. Rasiowa, R.
Sikorski, P. Wojtylak, A. Wroński), modal logic (A. Grzegorczyk, E. Orłowska, J. Perzanowski, H. Rasiowa, C. Rauszer, J. Hawranek, and T. Skura), relevance logic (W. Dziobiak, M. Tokarz, K.
Quite a large number of papers were concerned with deontic logic (J. Gregorowicz, L. Gumański, J. Kalinowski, T. Kubiński, W. Suchoń, K. Świrydowicz, J. Woleński, Z. Ziemba, Z. Ziembiński) and causal
logic (H. Greniewski, A. Pieczkowski, K. Trzęsicki, and M. Urchs 35). The latter were mostly based on some ideas initiated by S. Jaśkowski.
Polish authors also dealt with the logic of questions (erotetic logic). It was initiated by K. Ajdukiewicz and substantially developed by T. Kubiński, L. Koj, and A. Wiśniewski.
The area of logic and theory of communication is explored by M. Tokarz.
1 „The postwar panorama of logic in Poland”, in: Logic and Scientific Methods, eds. M.L. Dalla Chiara et al., Kluwer 1997, pp. 597-608.
2 Various suggestions to the earlier „Florence” version of this survey were offered by: Janusz Czelakowski, Andrzej Grzegorczyk, Jacek Malinowski, Marcin Mostowski, Roman Murawski, Ewa Orłowska,
Witold A. Pogorzelski, Kazimierz Świrydowicz, Max Urchs, Jan Woleński, Andrzej Wójcik, Jan Zygmunt. While preparing this version of the survey I obtained the assistance from: Zofia Adamowicz,
Wojciech Buszkowski, Janusz Czelakowski, Witold Marciszewski, Wiktor Marek, Roman Murawski, Jan Mycielski, Mieczysław Omyła, Jerzy Pogonowski, Jerzy Tiuryn, Anita Wasilewska, Andrzej Wiśniewski, Jan
Woleński, and Jan Zygmunt.
3 One could think that by extending to the limits the notion of logic the logicians behave like „logical imperialists” who try to invade other branches of mathematics. It is not so. Logic and its
methods are the source of inspiration and the basic research tool for mathematics. To call the foundations of mathematics logic enables the specialists of the foundations of mathematics to establish
their scientific identity. It also enables them to see their very much varied field of research as a whole which differs from the rest of mathematics.
4 It was formed by the group of philosophers, logicians, sociologists, and other scientists who upheld a tradition started by the seminars and papers of the eminent psychologist and philosopher from
Lvov – Kazimierz Twardowski. Precise analysis and lucid argument were the virtues that Twardowski considered the basis of both scientific research and philosophical writing. Logic was regarded
as the basic tool that helps to accomplish this. It is not odd then that the Lvov-Warsaw School attracted logicians, and at the same time the logicians and their works essentially influenced the School.
There is a monograph on the School by Jan Woleński, Logic and Philosophy in the Lvov-Warsaw School, Kluwer 1989. A brief and informative article on Polish logic of the interwar period can be found in
The Routledge Encyclopaedia of Philosophy, vol 7, Routledge, London and New York, 1998, pp. 498-500, „Polish Logic” by J. Zygmunt.
5 Alfred Tarski – undoubtedly the most significant person of the interwar period in Polish logic, and one of the greatest logicians of the 20th century. When he published the collection of his papers
Logic, Semantics, Metamathematics, papers from 1923 to 1938 (Clarendon Press, Oxford, 1956) he dedicated it to T. Kotarbiński, whom he calls his teacher.
6 This subject was examined in many Polish and foreign centers. In Poland it was examined by: A. Krawczyk, M. Krynicki, L. Szczerba, W. Szmielew, M. Zawadowski, and others. Its computational aspects
were analysed by A. Pawlak, H. Rasiowa, and E. Orłowska.
7 A. Mostowski (in collaboration with A. Grzegorczyk, S. Jaśkowski, J. Łoś, S. Mazur, H. Rasiowa, and R. Sikorski), „The present State of Investigations on the Foundations of Mathematics”,
Dissertationes Mathematicae 9 (1955), pp. 1-48.
A. Mostowski, „Thirty Years of Foundational Studies; Lectures on the Development of Mathematical Logic and the Study of the Foundations of Mathematics in 1930-1964”, Acta Philosophica Fennica 17
(1965), 1-180.
8 To the group of close collaborators of A. Mostowski belonged Janusz Onyszkiewicz, Stanisław Krajewski, and Konrad Bieliński. These names are well-known to all who witnessed democratic changes in
Poland (J. Onyszkiewicz was the Defence Secretary in two cabinets, S. Krajewski is one of the most eminent members of the Jewish community in Poland, and K. Bieliński was one of the leaders in the
underground Solidarity movement). There were also other logicians who were political dissidents. One of the most significant roles was played by Klemens Szaniawski. Jan Waszkiewicz was especially
9 In a letter that I received as a response to my request concerning the remarks to this paper J. Mycielski wrote: „Since there was a close cooperation (exchange of papers and ideas) between
Mostowski’s and Tarski’s schools, one can say that there was just one Berkeley-Warsaw school and it is impossible to discuss one without discussing the other.”
10 Not only philosophical ones. Tarski’s concept was the subject of numerous analyses and formal generalizations. An interesting survey of this topic can be found in S. Krajewski’s essay „Prawda” in:
Logika Formalna. Zarys Encyklopedyczny z Zastosowaniami do informatyki i lingwistyki, edited by W. Marciszewski, PWN 1987, pp. 144-156.
11 Tarski’s contribution to the development of logic is discussed in J. Zygmunt’s „Alfred Tarski” in: Polska Filozofia Powojenna, vol. II, edited by W. Mackiewicz, Agencja Wydawnicza Witmark,
Warszawa, 2001, pp. 342-375.
12 It should be mentioned that after the war it was possible to employ more people in the departments of mathematics. Before World War II (as I was told by W. Marek) there were just three
professors of mathematics at Warsaw University. Karol Borsuk, despite his achievements and international reputation, was employed as an assistant. A. Mostowski had a position in the Institute of
13 The note „From the Editor” says that Studia Logica will publish papers devoted to all areas of logic, including formal logic, mathematical logic, inductive logic, the theory of definition and that of
classification etc., and that SL especially invites works on the history of Polish logic.
14 In its body there were: N. D. Belnap, Jr and J. M. Dunn (USA), B. I. Dahn (DDR), L. Esakia (USSR, Georgia), D. Follesdal (Norway), R. Gilles (Canada), J. Hintikka (Finland), L. Maksimowa and V. A.
Smirnow (USSR, Russia), R. Routley (Australia), I. Ruzsa (Hungary), P.Weingartner (Austria), and P. M. Williams (UK).
15 This „party vigilance” was not unjustified. Since the scientific periodicals were not censored, there was a danger that some of the papers might have been written by an „enemy of socialism”.
Studia Logica committed this kind of crime by publishing the review of A. A. Zinoviev's (one of the leading Soviet dissidents) book Logical Physics (SL 35, 1976). It was a book free from ideological
issues, but still the members of the Soviet Academy of Sciences protested.
16 According to the contract, the periodical remains one of the publications of the Institute of Philosophy and Sociology of the Polish Academy of Sciences, and the main Polish libraries receive it
at reduced prices.
17 The aim of this newsletter (founded by the section of logic of Inst. Phil. Soc. Pol. Ac. Sc. in 1973) was to extend international cooperation and indirectly to promote Studia Logica. Since 1991 it
has been published by Łódź University, and edited by Grzegorz Malinowski.
18 An important role is played by Studia Semiotyczne (founded by J. Pelc) the publication of Polskie Towarzystwo Semiotyczne.
19 It requires the competence to which the author of this survey cannot aspire, nor probably can the workers in this discipline, because of its size and scope. On the other hand it would be odd not
to mention the foundations of mathematics. The results of this discipline are strongly related to those of logic.
20 The work of Andrzej Mostowski is discussed in five papers written by: A. Grzegorczyk, W. Guzicki, W. Marek, L. Pacholski, C. Rauszer, P. Zbierski in: A. Mostowski, Foundational Studies. Selected
Works, vol I, PWN, Warszawa, North-Holland, Amsterdam 1979.
The monograph which summarizes his metamathematical research in set theory is: A. Mostowski, Constructible Sets with Applications, PWN, Warszawa, North-Holland, Amsterdam, 1969.
21 H. Rasiowa, R. Sikorski, The Mathematics of Metamathematics, PWN, Warszawa, 1963 (3rd edition, 1970).
22 If by laws of logic one means the laws of classical logic, then they can be characterized by the laws of Boolean algebra. Non-classical logics can be represented by „non-Boolean” algebras. E.g.
intuitionistic logic is characterized by pseudo-Boolean algebras.
23 H. Rasiowa, An Algebraic Approach to Non-Classical Logic, PWN, Warszawa, North-Holland, Amsterdam, 1974.
24 Studia Logica 2, 1955, pp. 151-212.
25 This notion was explored also by the Spanish logicians.
26 These two conceptions were discussed in section 2. To what extent they differ from each other can be illustrated by the following. The system of 3-valued Łukasiewicz logic in its inferential
meaning has two „non-trivial” extensions, while the same system in its sentential meaning has only one extension. This result (obtained by R. Wójcicki) has been later generalized (G. Malinowski, W.
Dziobiak). It makes clearer some of the „paradoxical” results obtained earlier by H. Hiż and Rasiowa, which were concerned with the sentential meaning of the logical system. The distinction between
logic understood as a set of tautologies and a logical consequence enables us to show (R. Wójcicki) that we cannot define classical logic by the use of the constants of intuitionistic logic in
the inferential case (which is possible in the sentential case).
27 The papers in question here are: J. Łoś, R. Suszko, „Remarks on Sentential Logic”, Indagationes Mathematicae 20, 1958, pp. 177-183, and R. Wójcicki „Some Remarks on the Consequence Operation in
Sentential Logics”, Fundamenta Mathematicae 68, 1970, 269-279.
28 The results mentioned here as well as the results of other logicians (W. Blok, H. Hiż, J. Łoś, D. Pigozzi, H. Rasiowa, W.A. Pogorzelski, and others) are discussed in: R. Wójcicki, Theory of
Logical Calculi; Basic Theory of Consequence Operations, Kluwer, 1988. Some of the results were applied to the theory of automatic theorem proving (cf. Z. Stachniak, Resolution Proof System, An
Algebraic Theory, Kluwer, 1996).
29 This part of research is discussed in the monograph: J. Czelakowski, Protoalgebraic Logics, Studia Logica Library, Kluwer, 2001.
30 A thorough discussion of the results with a comparison to other centers’ results can be found in: W. Buszkowski „Mathematical linguistics and proof theory”, Handbook of Logic and Language
(collective work edited by J. van Benthem and A. ter Meulen, Elsevier and MIT Press).
31 S. Jaśkowski On the rules of Suppositions in Formal Logic, Studia Logica. Wydawnictwo poświęcone logice i jej historji, nr 1, Seminarium Filozoficzne Wydz. Matematyczno-Przyrodniczego Uniwersytetu
Warszawskiego, Warszawa, 1934. This series was to be published under the editorship of J. Łukasiewicz. Unfortunately only one volume was published. Postwar Studia Logica (cf. section 12) referred to
it but was not its continuation.
32 In order to make S. Jaśkowski’s system better known J. Słupecki and L. Borkowski elaborated some version of it (and published it in the book Elementy Logiki i Teorii Mnogości, PWN, 1963;
translated as Elements of Mathematical Logic and Set Theory, Pergamon Press 1967) and presented the possibility of applying it to mathematical reasoning.
33 This project is supported by international grants (from EU and NATO, e.g. Ph. D. scholarships in Germany and Japan) and its main aim is to complete an encyclopaedia of mathematics „on-line” (a
collection of computer verified proofs of theorems). There is also a quarterly: Formalized Mathematics – A Computer Assisted Approach published there, edited by R. Matuszewski.
34 A good introduction to this subject is a book by G. Malinowski Many-valued Logics, Clarendon Press, Oxford 1993.
35 M. Urchs is mentioned here because of his strong connections with Polish logic. He came to Poland as a young man from the DDR and started to study in Toruń.
WINARSIH, SRI (2008) KAJIAN TRANSFORMASI SCHWARZ-CHRISTOFFEL. Other thesis, University of Muhammadiyah Malang.
Let a complex function be defined on a domain D in the Z-plane; then each point in D is put into correspondence with a point in the W-plane. Thereby a mapping from the Z-plane to the W-plane is
obtained. A mapping by an analytic function is conformal: it preserves the magnitude and direction of angles, except at points z with f′(z) = 0. From this conformal mapping a specific transformation
can be constructed, known as the Schwarz-Christoffel transformation. In this final assignment it is studied how to construct the Schwarz-Christoffel transformation, which is based on conformal
mapping, by choosing a function that maps the real axis in the Z-plane onto a polygon in the W-plane. The Schwarz-Christoffel transformation is of the form w = A ∫ (z − x_1)^(α_1 − 1) ⋯ (z − x_n)^(α_n − 1) dz + B,
where A and B are complex constants, the x_k are the points of the real axis mapped to the vertices of the polygon, and α_k π are its interior angles.
Biographies of Women Mathematicians
A Functional Inequality
Journal of the London Mathematical Society, Vol. 23 (1948), 202-209
Received and read 22 January, 1948
In a recent paper Wright discusses sufficient restrictions on the real function f(x) and its first N derivatives to ensure that f(x) ≤ sin(x) in the interval 0 ≤ x ≤ π/2. He defines
a_n = max_{0 ≤ x ≤ π/2} | f^(n)(x) |,
and proves among others the following theorem, where f(x) is real and 0 ≤ x ≤ π/2.
THEOREM 1. If (i) f(x) and all its derivatives exist and are continuous, (ii) f(0) ≤ 0 and (–1)^((n-1)/2) f^(n)(0) ≤ 1 for all odd n, (iii) for some δ > 0 there is a function λ(x) such that, for π/2 – δ < x < π/2,
0 < λ(x) < 1, f(x) ≤ 1, (–1)^(n/2) f^(n)(x) ≤ (π λ(x))^n / …
for all even n, and (iv) …,
then f(x) ≤ sin(x).
I shall prove a new theorem (Theorem III) of this type by means of a two-point expansion and also prove a theorem (Theorem II) in which some of the inequalities in the hypotheses are omitted and
others reversed.
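As a numerical illustration (my example, not from the paper): the sine series truncated after two terms, f(x) = x − x³/6, under-estimates sin(x) for x ≥ 0 by the alternating-series bound, so it satisfies the conclusion f(x) ≤ sin(x) on [0, π/2]:

```python
import numpy as np

# Hypothetical sample function: the alternating sine series truncated after
# two terms stays at or below sin(x) on [0, pi/2].
x = np.linspace(0.0, np.pi / 2, 1001)
f = x - x**3 / 6
print(bool(np.all(f <= np.sin(x))))   # True
```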
Full article available online at the
Journal of the London Mathematical Society
(subscription required).
|
{"url":"http://www.agnesscott.edu/Lriddle/women/abstracts/macintyre_abstract3.htm","timestamp":"2014-04-17T15:40:10Z","content_type":null,"content_length":"4579","record_id":"<urn:uuid:c2c8388e-36c4-4161-af85-79ca1ce6025f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Beginning in Physics
This page
is a syllabus from the International Physics Olympiad; it lists topics that typically take a full year of college physics to learn, so my guess is that the top olympiads have pretty advanced material.
That you will need to know trigonometry is beyond doubt. Master it, know it very well. I would learn that straight away.
As for whether you will need calculus, that depends on what level you want to reach. With calculus, you could reach that first year of college level like the IPhO requires and other olympiads may
require. This also means you would have access to more resources because the most comprehensive textbooks require one to learn calculus.
Without calculus, you may struggle to find all the topics you need to learn about. You could learn from school textbooks, but I've come to learn that to excel in olympiads, one must know more than
one's competitors. If you are serious about doing well in olympiads, I think the calculus-college-level route is necessary.
Luckily, there are great resources available. Trigonometry you can learn from any school textbook, and I'm sure there are tons of resources online. For calculus,
this course
is great. Personally, I would not learn multivariable calculus at all, even for electricity & magnetism, no olympiad questions will require it. For the physics itself, a comprehensive college book
like Young & Freedman, University Physics, might be all you need, along with the myriad of physics lecture videos online. And of course, you could read through any school physics textbooks while you are learning the math you will need, to
get you started.
For olympiads, you need to go straight to the top level. I wish you the best of luck.
|
{"url":"http://www.physicsforums.com/showthread.php?t=693735","timestamp":"2014-04-20T16:02:49Z","content_type":null,"content_length":"64952","record_id":"<urn:uuid:5598113e-2926-416a-a664-1d2d516547c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How does a video scaler work?
1. 22nd March 2005, 04:52 #1
Member level 3
Join Date
Nov 2004
3 / 3
video scaler algorithm
How does a video scaler work?
Are there any video scaler diagrams on the net?
2. 22nd March 2005, 05:02 #2
Advanced Member level 5
Join Date
Oct 2004
West Coast
2297 / 2297
videoscaler korean
A good explanation of video scalers can be found here:
So far I haven't come across any detailed diagram of a video scaler ..
1 members found this post helpful.
3. 22nd March 2005, 09:20 #3
Member level 3
Join Date
Nov 2004
3 / 3
superresolution scaler
A good explanation of video scalers can be found here:
So far I haven't come across any detailed diagram of a video scaler ..
Thank you for replying!
The information at the URL above is so little; I need more detailed info on video scalers!
A block diagram would be a plus for me! Anyone, please help!
1 members found this post helpful.
4. 24th March 2005, 10:07 #4
Full Member level 2
Join Date
Mar 2005
23 / 23
video scaler block diagram
You did not mention the platform on which you are working. But as I am working on video scaling and super resolution in PC- and DSP-based solutions, I recommend you first choose a proper video scaling filter and then proceed with further implementation.
Here are a few papers on video scaling:
The first paper gives you a good picture of video down-scaling during a transcoding process. You can get a good view of the overall architecture and the optimization flow of a video down-scaler.
The second paper is on Scaling Video Conferencing through Spatial Tiling.
The third one is a thesis, "Adaptive Content-Aware Scaling for Improved Video Streaming", and you will get great ideas from these three PDF files.
I recommend you read the basic introductory chapters of the thesis report first, and then we can have a discussion.
Once you have decided on your algorithm, we can talk about its implementation issues.
Take care,
1 members found this post helpful.
5. 29th March 2005, 07:17 #5
Member level 3
Join Date
Nov 2004
3 / 3
do video scalers work?
hi swahlah!
I am working on an ASIC! I already have the RTL code of a scaler, which doesn't work, but I don't have any documentation for the code on hand; there are very few comments in it. Right now I don't know the algorithm used by the code, or the block diagram needed to understand it. So I came here!
Thank you again!
6. 29th March 2005, 08:49 #6
Full Member level 1
Join Date
Mar 2005
16 / 16
how do video scalers work
The scaler operation is 2-D.
The normal architecture performs horizontal and vertical interpolation separately. Horizontal interpolation is just the normal up-sampling method described in DSP books. The differences are the filter type (sinc, Hamming window, etc.) and the number of filter taps you choose.
For vertical interpolation, the operation is the same. The difference is that you must keep some lines above and below the current line, according to the filter taps. This means you must use RAM as line buffers. For hardware, the number of RAMs is the cost; it must be a trade-off between cost and performance.
The filter can be non-adaptive or adaptive (changed according to the context, e.g. when an edge appears).
1 members found this post helpful.
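The separate horizontal and vertical passes described above can be prototyped in software before committing to RTL. A minimal sketch, assuming the simplest possible 2-tap (bilinear) filter; a real scaler would swap in longer or adaptive filters and line buffers as discussed:

```python
import numpy as np

def scale_axis(img, out_len, axis):
    """Resample one axis with linear interpolation (a 2-tap filter)."""
    in_len = img.shape[axis]
    src = np.linspace(0.0, in_len - 1, out_len)   # source sample positions
    img = np.moveaxis(img, axis, 0)
    lo = np.floor(src).astype(int)                # left neighbor index
    hi = np.minimum(lo + 1, in_len - 1)           # right neighbor, clamped
    frac = (src - lo).reshape((-1,) + (1,) * (img.ndim - 1))
    out = (1 - frac) * img[lo] + frac * img[hi]   # blend the two neighbors
    return np.moveaxis(out, 0, axis)

def scale_image(img, out_h, out_w):
    """Separable 2-D scaling: horizontal pass, then vertical pass."""
    tmp = scale_axis(img, out_w, axis=1)
    return scale_axis(tmp, out_h, axis=0)
```

The separable structure is the point: each pass is a 1-D filter, so the vertical pass is the same code operating on the other axis, which in hardware becomes the line-buffer stage mentioned above.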
7. 29th March 2005, 09:41 #7
Full Member level 2
Join Date
Mar 2005
23 / 23
scaler asic implementation
About your question: the papers that I have posted are very nice,
and in the case of an ASIC you first design the architecture as per your requirements.
In ASICs you have to make a more elaborate design. But you can simply make a video upscaler using a normal six-tap to eight-tap filter; you can design it yourself or just find one in a
research paper.
Any filter is just a multiplier-accumulator, so you have to make a very fast MAC unit for your upscaler; how many pixels of the image you filter at a time is your own choice.
I recommend you first design the equation of your filter in MATLAB and then implement it in hardware.
One example is a cubic spline interpolator.
Another is a simple six-tap filter (1 -5 20 20 -5 1)/32 with a rounding of 16
1 members found this post helpful.
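The six-tap filter mentioned above can be prototyped in a few lines. The edge-clamping here is my own assumption for the sketch; hardware designs may pad differently:

```python
import numpy as np

# Six-tap half-pixel filter from the post: (1, -5, 20, 20, -5, 1)/32,
# with a rounding offset of 16 before the divide (here a right shift by 5).
TAPS = np.array([1, -5, 20, 20, -5, 1])

def half_pel_1d(line):
    """Interpolate the half-sample positions of a 1-D signal.
    The value between line[i] and line[i+1] uses samples i-2 .. i+3;
    edges are clamped (an assumption of this sketch)."""
    pad = np.pad(np.asarray(line, dtype=int), (2, 3), mode='edge')
    out = []
    for i in range(len(line) - 1):
        acc = int(pad[i:i + 6] @ TAPS)   # pad[i] corresponds to sample i-2
        out.append((acc + 16) >> 5)      # rounding of 16, then divide by 32
    return out
```

On a linear ramp this filter reproduces the exact midpoints, which is why it is a popular choice for half-pixel motion interpolation.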
|
{"url":"http://www.edaboard.com/thread34844.html","timestamp":"2014-04-18T10:36:07Z","content_type":null,"content_length":"77970","record_id":"<urn:uuid:ecaf53ca-b8e9-4100-befd-3f146bf0ec48>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve the equation using the Zero-Product Property. (x-2)(x+7)=0 A. 2, -7 B. 2, 7 C. –1,1 D. –2, -7
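By the zero-product property, a product is zero exactly when one of its factors is zero, so x − 2 = 0 or x + 7 = 0, giving x = 2 or x = −7 (choice A). A quick check:

```python
# Zero-product property: (x - 2)(x + 7) = 0 exactly when x = 2 or x = -7.
for x in (2, -7):                      # choice A
    assert (x - 2) * (x + 7) == 0
assert (1 - 2) * (1 + 7) != 0          # a non-root does not satisfy it
```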
|
{"url":"http://openstudy.com/updates/50a98b51e4b06b5e4932ce42","timestamp":"2014-04-21T00:11:22Z","content_type":null,"content_length":"39586","record_id":"<urn:uuid:5bfb5369-7ef7-4009-b4d2-c942356d39a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Impact of a Planar Kinematic Chain with Granular Matter
The theoretical model of a kinematic chain impacting granular matter is studied. The force of the granular medium acting on the chain is a linear superposition of a static (depth-dependent)
resistance force and a dynamic (velocity-dependent) frictional force. This resistance force is opposed to the direction of the velocity of the immersed chain. We present two methods (one using
EventLocator and the other using FixedStep) for the problem. As examples, a single and a double pendulum are simulated using different initial impact velocity conditions. We analyze how rapidly the
kinematic chain impacting the granular medium slows upon collision. For the analyzed cases the kinematic chain under high impact force (higher initial velocity) comes to rest faster in the granular
matter than the same body under low impact force (lower initial velocity).
The physical behavior of granular matter has its own specific properties that are unlike solids, liquids, or gases. Impact with a granular medium is very interesting because granular materials have
characteristics similar to a solid, yet flow like a fluid. The study of impact into granular matter is, surprisingly, in its early stages.
The focus of recent work has been to develop a force law for granular impacts and to find a mathematical formula to measure the impact force of objects dropped into granular matter [1]. Tsimring and
Volfson [2] studied the penetration of large projectiles into dry granular media. For the resistance-force model, they proposed a drag force depending on the velocity and a friction force depending
on the depth. Depth-dependent friction-force models have been developed for horizontal motion in [3, 4, 5, 6] and for vertical motion in [7, 8]. Katsuragi and Durian [1] sparked new interest in the
field of impact with granular matter. They applied the resistance-force model proposed in [2] and verified the motion of a rigid sphere with a line-scan digital CCD camera. They analyzed an
interesting phenomenon: how rapidly a sphere impacting a granular medium slows upon collision. Analysis shows that the greater the speed at which the spheres impact the medium, the sooner they will
come to rest for the vertical impact.
In our study we focus on modeling and simulating a single and a double pendulum impacting a granular medium using the resistance-force model proposed by Tsimring and Volfson [2] and verified by
Katsuragi and Durian [1]. For the single pendulum we apply NDSolve with variable step size and the simulation is stopped when the velocity changes sign. This event is captured with the command
EventLocator. For the double pendulum the equations of motion are solved using NDSolve with the FixedStep method. In this way we capture the physics of the impact process and we can calculate the
final stopping time for the kinematic chain.
Resistance Forces during Impact
For impact with penetration, the most important interaction between a rigid body and the medium is the resistance force from the moment of impact until the body stops. The resistance forces are
composed of a drag force proportional to the square of the velocity and a force related to the plasticity of the medium [9, 10]. For impact of a moving rigid body into a granular medium, recent
research [1, 2] shows that the total resistance force acting on the body is the sum of a static resistance force characterized by depth-dependent friction force and a dynamic frictional force
characterized by velocity-dependent drag force, that is,
The dynamic frictional force is velocity dependent; experiments [1, 2] show that it is proportional to the square of the velocity.
The static resistance force is an internal resisting force and appears when an external force is applied. The horizontal static resistance force is depth dependent; experiments [3, 11] show that it is proportional to the normal component of the pressure acting at the contact point, which increases linearly with the depth.
The vertical static resistance force is defined as the internal impeding resistance acting on the vertical axis. This force is a nonlinear function of the immersion depth. Hill, Yeung, and Koehler [8] suggest an empirical equation for it, with coefficients calculated from experimental data.
Equations (1) to (4), representing the resistance forces, contain the sign function of the velocity of the application point of the resistance forces. It is not possible to solve the ordinary
differential equations of motion that contain the sign function with the variable step method of NDSolve, due to the discontinuity at the origin. We use the EventLocator method for NDSolve with a
variable step to solve the ODE for the single-pendulum impact model. We use the FixedStep method to solve the ODE for the double-pendulum impact model. This correctly captures the change in the sign
of the velocity.
Single-Pendulum Impact Model
A single-pendulum impact model is presented in Figure 1. The planar motion of the pendulum on impact can be described in a fixed Cartesian coordinate system. The
Figure 1. A single-pendulum impact model.
The pendulum has mass
The gravitational force acts at the center of mass
The resistance force of the granular media is assumed to act at the point
These are the position vector and the velocity of the point
This is the gravitational force acting at
We use NDSolve to solve the differential equations of motion. The expressions for the resistant forces given by equations (1) to (4) contain the signs of the vertical and horizontal velocities of the
application point E. We want to use a variable step size for the integration for this model, so we cannot use the Mathematica function Sign. (The function Sign can be used with NDSolve with the
FixedStep method; see the double-pendulum example.) Therefore for the case of the single pendulum we introduce the piecewise-constant variables
The list data1 is used for plunging and data2 for withdrawing; the coefficient values from [1, 3, 8] are
This defines the reference area
Here are the reference area
Here is the horizontal static resistance force.
Here are the immersed volume U of the pendulum and the vertical dynamic frictional force
This defines the resistance force.
The equation of motion of the pendulum is
The equation of motion, equation (5), is solved in the module single. The input variable dq is the initial impact angular velocity of the pendulum. The initial start time t0, the initial impact angle q0, and the initial impact angular velocity dq0 are defined as initial values. The interpolating function sol, which is the solution of the ODE, is defined as a local symbol. The solution data for the angle q and for the angular velocity are saved in the lists qresults and dqresults. The simulation is performed, inside a While statement, until the angular velocity vanishes. At the first loop the ODE uses the coefficients list data1 for the plunging motion. NDSolve solves this ODE until the event detected by the EventLocator method occurs, that is, when the angular velocity changes sign. When NDSolve stops solving the ODE, the stopping time is saved as tend. The last values of the angle q and the angular velocity at tend are saved as the initial values q0 and dq0 for the next loop. A Table saves the data of the angle q and the angular velocity in qresult and dqresult. After saving the data, the new initial start time t0 of the next loop is defined as the stopping time tend, and the motion index switches from the plunging motion to the withdrawing motion. After the end of the first loop, the
While statement checks the last value of the angular velocity and decides whether to do one more loop of calculation or to end the module.
The simulation of the model is performed for an initial impact angle
The simulation results of the model for the initial impact angle
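The event-driven loop described for the single pendulum can be mirrored outside Mathematica with SciPy's `solve_ivp`, whose terminal events play the role of EventLocator. This is a rough sketch, not the article's model: the geometry is simplified to a uniform rod and every coefficient (g, L, m, mu_static, c_drag) is an assumed placeholder, as is the jamming condition used to end the loop.

```python
import numpy as np
from scipy.integrate import solve_ivp

# All constants are illustrative placeholders, not the article's values.
g, L, m = 9.81, 0.3, 0.2           # gravity, rod length, rod mass
c_drag, mu_static = 0.8, 5.0       # velocity^2 drag and static-type terms
I = m * L * L / 3.0                # rod moment of inertia about the pivot

def rhs(t, y, d):
    """Pendulum ODE; d = sign of the angular velocity, held fixed between
    events (mirroring the data1/data2 coefficient switch in the article)."""
    q, w = y
    resist = d * (mu_static * max(np.sin(q), 0.0) + c_drag * w * w)
    return [w, (-0.5 * m * g * L * np.cos(q) - resist) / I]

def stop(t, y, d):
    return y[1]                    # event: angular velocity crosses zero
stop.terminal = True

def simulate(q0, w0, t_max=5.0):
    t0, y = 0.0, [q0, w0]
    while t0 < t_max and abs(y[1]) > 1e-10:
        d = np.sign(y[1])
        sol = solve_ivp(rhs, (t0, t_max), y, args=(d,), events=stop,
                        rtol=1e-8, atol=1e-10)
        t0 = float(sol.t[-1])
        y = [float(sol.y[0][-1]), float(sol.y[1][-1])]
        if sol.t_events[0].size == 0:
            break                  # reached t_max with no sign change
        # Jamming check (assumed): at rest the rod stays stuck if the
        # static resistance torque can balance the gravity torque.
        if mu_static * max(np.sin(y[0]), 0.0) >= abs(0.5 * m * g * L * np.cos(y[0])):
            y[1] = 0.0
            break
    return t0, y
```

As in the article's module, each terminal event restarts the integration with the resistance direction flipped, and the loop ends when the chain comes to rest.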
Double-Pendulum Impact Model
A double-pendulum impact model is presented in Figure 2. The planar motion of the double-pendulum impact can also be described in a fixed Cartesian coordinate system. The angles between the
Figure 2. A double-pendulum impact model.
The first link of the pendulum has mass
This is the position vector from the origin
This is the position vector from the origin
This is the position vector from the origin
Here is the acceleration vector of the position vector.
We calculate the immersed depth
Here are the position vector and the velocity of the force application point
These are the gravitational forces acting at
The function Sign can be used with NDSolve with the FixedStep method as mentioned previously. Therefore, we define the horizontal dynamic frictional force.
Here is the vertical dynamic frictional force.
Here is the horizontal static resistance force.
We express the vertical resistance force coefficients,
This is the vertical static resistance force.
This is the resistance force.
The ODE for the first link, where
The ODE of the second link, where
The ODEs are solved in the module double. The input variables dq1 and dq2 are the initial impact angular velocities of the pendulum. The initial impact angles, q10 and q20, and the initial impact angular
velocities, dq10 and dq20, are defined as initial values. The solution is obtained using the ExplicitRungeKutta method with FixedStep; the simulation is stopped by the EventLocator method when the motion ends.
The simulation of the model is performed for initial impact angles (
The simulation results of the model for the initial impact angles (
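The Sign-plus-FixedStep approach used for the double pendulum — keeping the sign function in the right-hand side and integrating with a fixed step — can be mimicked with a toy fixed-step RK4 integrator. This is my own single-degree-of-freedom sketch with assumed coefficients, not the article's double-pendulum model:

```python
import numpy as np

def rhs(y):
    """Toy pendulum with sign-dependent resistance; the discontinuous
    sign(w) term is tolerated because the step size is fixed."""
    q, w = y
    drag = np.sign(w) * (2.0 + 0.8 * w * w)   # assumed resistance law
    return np.array([w, -9.81 * np.sin(q) - drag])

def rk4_fixed(y0, h=1e-4, steps=5000):
    """Classical fixed-step RK4, mirroring ExplicitRungeKutta + FixedStep."""
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
```

With a fixed step the sign change merely makes the local error larger near the switching instants, instead of stalling an adaptive step-size controller, which is the trade-off the article exploits.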
This article considers the impact of a kinematic chain into granular matter. The solution techniques are based on the resistance forces as a sum of a static (depth-dependent) resistance force and a
dynamic (velocity-dependent) frictional force. We apply the NDSolve command to solve the equations of motion. For the single pendulum, NDSolve is used with EventLocator to identify when the velocity
changes sign. The simulation is then stopped and a new simulation is performed using as initial conditions the end values of the previous simulation. For a double pendulum, NDSolve is used with the
FixedStep method and the simulation is performed over the whole interval of time. We observe that the numerical solution leads to a seemingly paradoxical result: as the initial velocity increases,
the stopping time decreases.
For this work we used Mathematica 7.0.1 and an Intel Core Duo computer running Mac OS X with a 2.0 GHz CPU and 2 GB of RAM.
[1] H. Katsuragi and D. J. Durian, “Unified Force Law for Granular Impact Cratering,” Nature Physics, 3, 2007 pp. 420-423. www.nature.com/nphys/journal/v3/n6/full/nphys583.html.
[2] L. S. Tsimring and D. Volfson, “Modeling of Impact Cratering in Granular Media,” in Powders and Grains 2005, Proceedings of International Conference on Powders & Grains 2005 (R. García-Rojo, H. J. Herrmann, and S. McNamara, eds.), London: Taylor & Francis, 2005 pp. 1215-1218.
[3] R. Albert, M. A. Pfeifer, A.-L. Barabási, and P. Schiffer, “Slow Drag in a Granular Medium,” Physical Review Letters, 82(1), 1999 pp. 205-208.
[4] I. Albert, P. Tegzes, B. Kahng, R. Albert, J. G. Sample, M. Pfeifer, A.-L. Barabási, T. Vicsek, and P. Schiffer, “Jamming and Fluctuations in Granular Drag,” Physical Review Letters, 84, 2000
pp. 5122-5125. arxiv.org/abs/cond-mat/9912336.
[5] I. Albert, P. Tegzes, R. Albert, J. G. Sample, A.-L. Barabási, T. Vicsek, B. Kahng, and P. Schiffer, “Stick-Slip Fluctuations in Granular Drag,” Physical Review E, 64, 2001 031307(9). arxiv.org/
[6] I. Albert, J. G. Sample, A. J. Morss, S. Rajagopalan, A.-L. Barabási, and P. Schiffer, “Granular Drag on a Discrete Object: Shape Effects on Jamming,” Physical Review E, 64, 2001 061303(4).
[7] M. B. Stone, R. Barry, D. P. Bernstein, M. D. Pelc, Y. K. Tsui, and P. Schiffer, “Studies of Local Jamming via Penetration of a Granular Medium,” Physical Review E, 70, 2004 041301(10).
[8] G. Hill, S. Yeung, and S. A. Koehler, “Scaling Vertical Drag Forces in Granular Media,” Europhysics Letters, 72(1), 2005 pp. 137-142. www.iopscience.iop.org/0295-5075/72/1/137.
[9] A. L. Yarin, M. B. Rubin, and I. V. Roisman, “Penetration of a Rigid Projectile into an Elastic-Plastic Target of Finite Thickness,” International Journal of Impact Engineering, 16(5), 1995 pp.
801-831. www.sciencedirect.com/science/article/pii/0734743X95000197.
[10] G. Yossifon, A. L. Yarin, and M. B. Rubin, “Penetration of a Rigid Projectile into a Multi-Layered Target: Theory and Numerical Computations,” International Journal of Engineering Science, 40
(12), 2002 pp. 1381-1401. doi:10.1016/S0020-7225(02)00013-7.
[11] S. N. Coppersmith, C.-H. Liu, S. Majumdar, O. Narayan, and T. A. Witten, “A Model for Force Fluctuations in Bead Packs,” Physical Review E, 53(5), 1996 pp. 4673-4685. www.arxiv.org/abs/cond-mat/
S. Lee and D. B. Marghitu, “Impact of a Planar Kinematic Chain with Granular Matter,” The Mathematica Journal, 2011. dx.doi.org/doi:10.3888/tmj.13-2.
About the Authors
Seunghun Lee was a Ph.D. student in the mechanical engineering department at Auburn University, studying theoretical modeling of robotic systems. He is now serving in the Korean army.
Dan B. Marghitu is a professor of mechanical engineering at Auburn University. His research areas are impact dynamics, mechanisms, robots, and nonlinear dynamics.
Mechanical Engineering Dept.
Auburn University
270 Ross Hall, Auburn, AL 36849
|
{"url":"http://www.mathematica-journal.com/2011/01/impact-of-a-planar-kinematic-chain-with-granular-matter/","timestamp":"2014-04-20T08:26:14Z","content_type":null,"content_length":"60783","record_id":"<urn:uuid:c0fdac9b-532b-4caf-89a2-f21c6ee398a8>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Green's functions
January 25th 2009, 08:17 PM #1
Jan 2009
Green's functions
I am asked to find the Green's function for
u' + ku = f(x)
with u(0) = 0 and x >= 0.
I found the homogeneous solution and used the four conditions...
My homogeneous solution was u = constant * exp(-kx).
Is it possible that my G(x,s) = 0 for both x < s and x > s?
Also, how do I show that -u'' = f(x) is not self-adjoint with BCs
u(0) = 0 and u'(0) + u(1) = 0?
I have the Green's function, but the BCs don't hold for it.
Is that right?
My understanding is that if your equation is
$Lu(x) = f(x),$
then the Green's function is the solution to
$Lg(x;s) = \delta(x-s)$.
So when you solve this equation, in the first case you have
$g'(x;s) +kg(x;s) = \delta(x-s)$
Multiplying by the integrating factor $\exp(kx)$, you have
$\frac{d}{dx}\left[g(x;s)\exp(kx)\right] = \delta(x-s)\exp(kx)$,
and integrating
$g(x;s)\exp(kx) = H(x-s)\exp(ks) + C$
Enforcing the $g(0;s)=0$ boundary condition and assuming $x,s > 0$ we can deduce $C=0$ and therefore
$g(x;s) = H(x-s)\exp[-k(x-s)].$
Hope this helps. You might need to give more complete details to answer the second part.
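As a numeric sanity check of the result above (my own sketch, with an arbitrary sample forcing f), convolving f against g(x;s) = H(x−s) exp[−k(x−s)] should satisfy u' + ku = f with u(0) = 0:

```python
import numpy as np

k = 1.5
f = lambda s: np.cos(3.0 * s)        # arbitrary sample forcing (assumed)

def u(x, n=20001):
    """u(x) = integral_0^x exp(-k (x - s)) f(s) ds  (trapezoid rule),
    i.e. the convolution of f with the Green's function derived above."""
    s = np.linspace(0.0, x, n)
    vals = np.exp(-k * (x - s)) * f(s)
    return float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(s)))

# Check u' + k u = f at a sample point via a central difference.
x0, h = 0.8, 1e-5
du = (u(x0 + h) - u(x0 - h)) / (2 * h)
residual = du + k * u(x0) - f(x0)    # should be ~0
```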
January 26th 2009, 04:50 AM #2
Junior Member
Jan 2009
|
{"url":"http://mathhelpforum.com/advanced-applied-math/69917-greens-functions.html","timestamp":"2014-04-18T12:02:35Z","content_type":null,"content_length":"33452","record_id":"<urn:uuid:a180567f-6758-444c-9b1c-9d22f9207c03>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Frobenius monads and pseudomonoids
Results 1 - 10 of 16
"... known in module theory that any A-bimodule B is an A-ring if and only if the functor − ⊗A B: MA → MA is a monad (or triple). Similarly, an A-bimodule C is an A-coring provided the functor − ⊗A
C: MA → MA is a comonad (or cotriple). The related categories of modules (or algebras) of − ⊗A B and comodu ..."
Cited by 12 (10 self)
Add to MetaCart
known in module theory that any A-bimodule B is an A-ring if and only if the functor − ⊗A B: MA → MA is a monad (or triple). Similarly, an A-bimodule C is an A-coring provided the functor − ⊗A C: MA
→ MA is a comonad (or cotriple). The related categories of modules (or algebras) of − ⊗A B and comodules (or coalgebras) of − ⊗A C are well studied in the literature. On the other hand, the right
adjoint endofunctors HomA(B, −) and HomA(C, −) are a comonad and a monad, respectively, but the corresponding (co)module categories did not find
, 2006
"... In this paper we explain the relationship between Frobenius objects in monoidal categories and adjunctions in 2-categories. Specifically, we show that every Frobenius object in a monoidal
category M arises from an ambijunction (simultaneous left and right adjoints) in some 2-categoryDinto which M fu ..."
Cited by 12 (1 self)
Add to MetaCart
In this paper we explain the relationship between Frobenius objects in monoidal categories and adjunctions in 2-categories. Specifically, we show that every Frobenius object in a monoidal category M
arises from an ambijunction (simultaneous left and right adjoints) in some 2-categoryDinto which M fully and faithfully embeds. Since a 2D topological quantum field theory is equivalent to a
commutative Frobenius algebra, this result also shows that every 2D TQFT is obtained from an ambijunction in some 2-category. Our theorem is proved by extending the theory of adjoint monads to the
context of an arbitrary 2-category and utilizing the free completion under Eilenberg-Moore objects. We then categorify this theorem by replacing the monoidal category M with a semistrict monoidal
2-category M, and replacing the 2-categoryD into which it embeds by a semistrict 3-category. To state this more powerful result, we must first define the notion of a ‘Frobenius pseudomonoid’, which
categorifies that of a Frobenius object. We then define the notion of a ‘pseudo ambijunction’, categorifying that of an ambijunction. In each case, the idea is that all the usual axioms now hold only
up to coherent isomorphism. Finally, we show that every Frobenius pseudomonoid in a semistrict monoidal 2-category arises from a pseudo ambijunction in some semistrict 3-category.
"... In this paper, we give a novel abstract description of Szabo’s polycategories. We use the theory of double clubs – a generalisation of Kelly’s theory of clubs to ‘pseudo ’ (or ‘weak’) double
categories – to construct a pseudo-distributive law of the free symmetric strict monoidal category pseudocomo ..."
Cited by 7 (1 self)
Add to MetaCart
In this paper, we give a novel abstract description of Szabo’s polycategories. We use the theory of double clubs – a generalisation of Kelly’s theory of clubs to ‘pseudo ’ (or ‘weak’) double
categories – to construct a pseudo-distributive law of the free symmetric strict monoidal category pseudocomonad on Mod over itself qua pseudomonad, and show that monads in the ‘two-sided Kleisli
bicategory’ of this pseudo-distributive law are precisely symmetric polycategories. 1
- , 2004
"... Much Australian work on categories is part of, or relevant to, the development of higher categories and their theory. In this note, I hope to describe some of the origins and achievements of our
efforts that they might perchance serve as a guide to the development of aspects of higher-dimensional wo ..."
Cited by 6 (0 self)
Add to MetaCart
Much Australian work on categories is part of, or relevant to, the development of higher categories and their theory. In this note, I hope to describe some of the origins and achievements of our
efforts that they might perchance serve as a guide to the development of aspects of higher-dimensional work. I trust that the somewhat autobiographical style will add interest rather than be a
distraction. For so long I have felt rather apologetic when describing how categories might be helpful to other mathematicians; I have often felt even worse when mentioning enriched and higher
categories to category theorists. This is not to say that I have doubted the value of our work, rather that I have felt slowed down by the continual pressure to defend it. At last, at this meeting, I
feel justified in speaking freely amongst motivated researchers who know the need for the subject is well established. Australian Category Theory has its roots in homology theory: more precisely, in
the treatment of the cohomology ring and the Künneth formulas in the book by Hilton and Wylie [HW]. The first edition of the book had a mistake concerning the cohomology ring of a product. The
Künneth formulas arise from splittings of the natural short exact sequences
- Algebra Number Theory
"... Abstract. We develop the theory of weak bimonoids in braided monoidal categories and show them to be quantum categories in a certain sense. Weak Hopf monoids are shown to be quantum groupoids.
Each separable Frobenius monoid R leads to a weak Hopf monoid R ⊗ R. Contents ..."
Cited by 5 (0 self)
Add to MetaCart
Abstract. We develop the theory of weak bimonoids in braided monoidal categories and show them to be quantum categories in a certain sense. Weak Hopf monoids are shown to be quantum groupoids. Each
separable Frobenius monoid R leads to a weak Hopf monoid R ⊗ R. Contents
"... Abstract. This paper is a rather informal guide to some of the basic theory of 2-categories and bicategories, including notions of limit and colimit, 2-dimensional universal algebra, formal
category theory, and nerves of bicategories. 1. Overview and basic examples This paper is a rather informal gu ..."
Cited by 4 (0 self)
Add to MetaCart
Abstract. This paper is a rather informal guide to some of the basic theory of 2-categories and bicategories, including notions of limit and colimit, 2-dimensional universal algebra, formal category
theory, and nerves of bicategories. 1. Overview and basic examples This paper is a rather informal guide to some of the basic theory of 2-categories and bicategories, including notions of limit and
colimit, 2-dimensional universal algebra, formal category theory, and nerves of bicategories. As is the way of these things, the choice of topics is somewhat personal. No attempt is made at either
rigour or completeness. Nor is it completely introductory: you will not find a definition of bicategory; but then nor will you really need one to read it. In keeping with the philosophy of category
theory, the morphisms between bicategories play more of a role than the bicategories themselves. 1.1. The key players. There are bicategories, 2-categories, and Cat-categories. The latter two are
exactly the same (except that strictly speaking a Cat-category should have small hom-categories, but that need not concern us here). The first two are nominally different — the 2-categories are the
strict bicategories, and not every bicategory is strict — but every bicategory is biequivalent to a strict one, and biequivalence is the right general notion of equivalence for bicategories and for
2-categories. Nonetheless, the theories of bicategories, 2-categories, and Catcategories have rather different flavours.
, 2006
"... mathematical quantum theory. This trend was observed in [3], mainly in relation to Hopf algebroids, and continued in [8] with a general account of Frobenius monoids. Below we list some of the
∗-autonomous partially ordered sets A = (A, p, j, S) ..."
Cited by 1 (1 self)
Add to MetaCart
, 2008
"... We show that the equivalence between several possible characterizations of Frobenius algebras, and of symmetric Frobenius algebras, carries over from the category of vector spaces to more
general monoidal categories. For Frobenius algebras, the appropriate setting is the one of rigid monoidal catego ..."
Cited by 1 (0 self)
We show that the equivalence between several possible characterizations of Frobenius algebras, and of symmetric Frobenius algebras, carries over from the category of vector spaces to more general
monoidal categories. For Frobenius algebras, the appropriate setting is the one of rigid monoidal categories, and for symmetric Frobenius algebras it is the one of sovereign monoidal categories. We
also discuss some properties of Nakayama automorphisms.
, 2004
"... Recall the ordinary notion of Frobenius algebra over a field k. Step 2 Lift the concept from linear algebra to a general monoidal category and justify this with examples and theorems. Step 3
Lift the concept up a dimension so that monoidal categories themselves can be examples. 1 Frobenius algebras ..."
Recall the ordinary notion of Frobenius algebra over a field k. Step 2 Lift the concept from linear algebra to a general monoidal category and justify this with examples and theorems. Step 3 Lift the
concept up a dimension so that monoidal categories themselves can be examples. 1 Frobenius algebras An algebra A over a field k is called Frobenius when it is finite dimensional and equipped with a
linear function e : A → k such that: e(ab) = 0 for all a ∈ A implies b = 0.
, 710
"... Abstract. The purpose of this paper is to develop a theory of bimonads and Hopf monads on arbitrary categories thus providing the possibility to transfer the essentials of the theory of Hopf
algebras in vector spaces to more general settings. There are several extensions of this theory to monoidal c ..."
Abstract. The purpose of this paper is to develop a theory of bimonads and Hopf monads on arbitrary categories thus providing the possibility to transfer the essentials of the theory of Hopf algebras
in vector spaces to more general settings. There are several extensions of this theory to monoidal categories which in a certain sense follow the classical trace. Here we do not pose any conditions
on our base category but we do refer to the monoidal
|
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.91.2686","timestamp":"2014-04-23T10:22:36Z","content_type":null,"content_length":"35335","record_id":"<urn:uuid:ee2b1ad6-3f67-4671-8882-483d21c1a646>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cantor-Bendixson derivative
Cantor-Bendixson derivative
Let $A$ be a subset of a topological space $X$. Its Cantor-Bendixson derivative $A^{{\prime}}$ is defined as the set of accumulation points of $A$. In other words
$A^{{\prime}}=\{x\in X\mid x\in\overline{A\setminus\{x\}}\}.$
Through transfinite induction, the Cantor-Bendixson derivative can be defined to any order $\alpha$, where $\alpha$ is an arbitrary ordinal. Let $A^{{(0)}}=A$. If $\alpha$ is a successor ordinal,
then $A^{{(\alpha)}}=\left(A^{{(\alpha-1)}}\right)^{{\prime}}$. If $\lambda$ is a limit ordinal, then $A^{{(\lambda)}}=\bigcap_{{\alpha<\lambda}}A^{{(\alpha)}}$. The Cantor-Bendixson rank of the set
$A$ is the least ordinal $\alpha$ such that $A^{{(\alpha)}}=A^{{(\alpha+1)}}$. Note that $A^{{\prime}}=A$ implies that $A$ is a perfect set.
Some basic properties of the Cantor-Bendixson derivative include
1. $(A\cup B)^{{\prime}}=A^{{\prime}}\cup B^{{\prime}}$,
2. $(\bigcup_{{i\in I}}A_{i})^{{\prime}}\supseteq\bigcup_{{i\in I}}A_{i}^{{\prime}}$,
3. $(\bigcap_{{i\in I}}A_{i})^{{\prime}}\subseteq\bigcap_{{i\in I}}A_{i}^{{\prime}}$,
4. $(A\setminus B)^{{\prime}}\supseteq A^{{\prime}}\setminus B^{{\prime}}$,
5. $A\subseteq B\Rightarrow A^{{\prime}}\subseteq B^{{\prime}}$,
6. $\overline{A}=A\cup A^{{\prime}}$,
7. $\overline{A^{{\prime}}}=A^{{\prime}}$ (i.e., $A^{{\prime}}$ is closed).
The last property requires some justification. Obviously, $A^{{\prime}}\subseteq\overline{A^{{\prime}}}$. Suppose $a\in\overline{A^{{\prime}}}$, then every neighborhood of $a$contains some points of
$A^{{\prime}}$ distinct from $a$. But by definition of $A^{{\prime}}$, each such neighborhood must also contain some points of $A$. This implies that $a$ is an accumulation point of $A$, that is $a\
in A^{{\prime}}$. Therefore $\overline{A^{{\prime}}}\subseteq A^{{\prime}}$ and we have $\overline{A^{{\prime}}}=A^{{\prime}}$.
Finally, from the definition of the Cantor-Bendixson rank and the above properties, if $A$ has Cantor-Bendixson rank $\alpha$, the sets
$A^{{(1)}}\supset A^{{(2)}}\supset\cdots\supset A^{{(\alpha)}}$
form a strictly decreasing chain of closed sets.
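A quick worked example (added here for illustration, not part of the original entry): take $A=\{0\}\cup\{1/n\mid n\geq 1\}$ as a subset of $\mathbb{R}$. Each point $1/n$ is isolated and $0$ is the unique accumulation point, so the iterated derivatives are

```latex
A^{(0)} = A, \qquad
A^{(1)} = A' = \{0\}, \qquad
A^{(2)} = \{0\}' = \emptyset, \qquad
A^{(3)} = \emptyset' = \emptyset.
```

The chain stabilizes at $A^{{(2)}}=A^{{(3)}}=\emptyset$, so the Cantor-Bendixson rank of $A$ is $2$, the least $\alpha$ with $A^{{(\alpha)}}=A^{{(\alpha+1)}}$.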
Added: 2005-02-10 - 18:13
|
{"url":"http://planetmath.org/CantorBendixsonDerivative","timestamp":"2014-04-16T04:11:22Z","content_type":null,"content_length":"89843","record_id":"<urn:uuid:c0cef504-41d7-45b2-9e8f-4a66c8c933d5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brookhaven, PA Math Tutor
Find a Brookhaven, PA Math Tutor
...I am finding more and more as I get older that critical thinking is rarely taught and greatly needed. I feel that getting experience teaching students one on one is the best way for me to have
an immediate impact. This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship.
16 Subjects: including precalculus, algebra 1, algebra 2, calculus
...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain
information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...I can present the material in many different ways until we find an approach that works and he/she really starts to understand. Nothing gives me a greater thrill than the look of relief on a
student's face when he/she actually starts to get it and realizes that it isn't as difficult as was previo...
19 Subjects: including prealgebra, discrete math, econometrics, logic
...However, I am preparing to become a math teacher in the near future, so I'm also proficient in math and sciences. Currently, I am working as a substitute teacher throughout New Castle County.
I've had the pleasure of working with students from pre-school through high school.
23 Subjects: including SAT math, ACT Math, probability, writing
...Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects of math.
I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fu...
22 Subjects: including statistics, discrete math, differential equations, C++
|
{"url":"http://www.purplemath.com/Brookhaven_PA_Math_tutors.php","timestamp":"2014-04-16T04:14:07Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:5b94e40b-af22-48e8-bc94-6a0f0e56e9bc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interactive videos
A new way to watch videos where you will be quizzed on the content as you watch.
Balanced and unbalanced forces
Blood pressure
Blood vessel diseases
Comparing fractions
Equations for beginners
Equivalent fractions and simplified form
Heart introduction
Newton's laws and equilibrium
Slow sock on Lubricon VI
Understanding fractions
|
{"url":"http://www.khanacademy.org/labs/socrates","timestamp":"2014-04-18T20:57:38Z","content_type":null,"content_length":"8372","record_id":"<urn:uuid:29edfb4a-48d6-40c0-9e6f-f7faa9f0a10b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Browse By Person: Bevrani, Hassan
Number of items: 78.
Journal Article
Daneshfar, Fatheme & Bevrani, Hassan (2010) Load–frequency control : a GA-based multi-agent reinforcement learning. IET Generation, Transmission & Distribution, 4(1), pp. 13-26.
Bevrani, Hassan, Ghosh, Arindam, & Ledwich, Gerard (2010) Renewable energy sources and frequency regulation : survey and new perspectives. I.E.T. Renewable Power Generation, 4(5), pp. 438-457.
Bevrani, Hassan, Ledwich, Gerard, Ford, Jason J., & Dong, Zhao Yang (2009) On feasibility of regional frequency-based emergency control plans. Energy Conversion and Management, 50(7), pp. 1656-1663.
Ford, Jason J., Bevrani, Hassan, & Ledwich, Gerard (2009) Adaptive load shedding and regional protection. International Journal of Electrical Power & Energy Systems, 31(10), pp. 611-618.
Bevrani, Hassan & Hiyama, Takashi (2009) On load-frequency regulation with time delays: Design and real-time implementation. IEEE Transactions on Energy Conversion, 24(1), pp. 292-300.
Bevrani, Hassan, Ledwich, Gerard F., Dong, Zhao Yang, & Ford, Jason J. (2009) Regional frequency response analysis under normal and emergency conditions. Electric Power Systems Research, 79, pp.
Bevrani, Hassan, Hiyama, Takashi, & Mitani, Yasunori (2008) Power system dynamic stability and voltage regulation enhancement using an optimal gain vector. Control Engineering Practice, 16(9), pp.
Bevrani, Hassan & Hiyama, Takashi (2008) Robust decentralized PI based LFC design for time-delay power systems. Energy Conversion & Management, 49(2), pp. 193-204.
Bevrani, Hassan, Ledwich, Gerard F., Ford, Jason J., & Dong, Z. Y. (2008) On power system frequency control in emergency conditions. Journal of Electrical Engineering & Technology, 3(4), pp. 499-508.
Bevrani, Hassan & Hiyama, Takashi (2007) Robust load-frequency regulation: a real-time laboratory experiment. Optimal Control Applications and Methods, 28(6), pp. 419-433.
Bevrani, Hassan, Hiyama, Takashi, Mitani, Yasunori, & Tsuji, Kiichiro (2007) Automatic generation control: A decentralized robust approach. Intelligent Automation and Soft Computing, 13(3), pp.
Bevrani, Hassan & Hiyama, Takashi (2007) Multiobjective PI/PID Control Design Using an Iterative Linear Matrix Inequalities Algorithm. International Journal of Control, Automation, and Systems, 5(2),
pp. 117-127.
Bevrani, Hassan & Hiyama, Takashi (2007) Robust coordinated AVR-PSS design using H∞ static output feedback control. IEEJ Transactions on Power and Energy, 127(1), pp. 70-76.
Bevrani, Hassan, Hiyama, Takashi, Mitani, Yasunori, Tsuji, Kiichiro, & Teshnehlab, Mohammad (2006) Load-frequency regulation under a bilateral LFC scheme using flexible neural networks. Engineering
Intelligent Systems, 14(2), pp. 109-117.
Bevrani, Hassan, Mitani, Yasunori, Tsuji, Kiichiro, & Bevrani, Hossein (2005) Bilateral based robust load frequency control. Energy Conversion and Management, 46(7-8), pp. 1129-1146.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Robust decentralized AGC in a restructured power system. Energy Conversion and Management, 45(15-16), pp. 2297-2312.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Robust decentralised load-frequency control using an iterative linear matrix inequalities algorithm. IEE Proceedings - Generation,
Transmission and Distribution, 151(3), pp. 347-354.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Sequential design of decentralized load frequency controllers using mu synthesis and analysis. Energy Conversion & Management, 45(6), pp.
Conference Paper
Bevrani, Hassan & Tikdari, A. G. (2010) Power system stability analysis based on descriptive study of electrical indices. In The Abstract Book of ASIJ 5th Conference, ASIJ, Tokyo, p. 3.
Bevrani, Hassan, Ledwich, Gerard F., & Ford, Jason J. (2009) On the Use of df/dt in Power System Emergency Control. In 2009 IEEE Power Systems Conference & Exposition, 15 - 18 March, Seattle,
Washington, USA. (In Press)
Bevrani, Hassan & Hiyama, Takashi (2006) On robust control of fixed pattern power rectifiers. In The Nordic Workshop on Power and Industrial Electronics (NORPIE), June 2006, Lund, Sweden.
Bevrani, Hassan & Hiyama, Takashi (2006) Robust design of power system stabilizer: an LMI approach. In The IASTED International Conference on Energy and Power Systems (EPS), March 2006, Chiang Mai,
Bevrani, Hassan & Hiyama, Takashi (2006) Stability and voltage regulation enhancement using an optimal gain vector. In 2006 IEEE Power Engineering Society General Meeting, 18-22 June 2006, Canada.
Bevrani, Hassan & Hiyama, Takashi (2005) PI/PID based multi-objective control design: an ILMI approach. In 2005 IEEE Networking, Sensing and Control, 19-22 March 2005, USA.
Bevrani, Hassan & Hiyama, Takashi (2005) Robust tuning of PI/PID controllers using H∞ control technique. In 4th International Conference of System Identification and Control Problems (SICPRO), 25-28
January 2005, Moscow, Russia.
Bevrani, Hassan, Hiyama, Takashi, Mitani, Yasunori, & Tsuji, Kiichiro (2005) A bridge between robustness and simplicity: practical control design for complex systems. In 1st ASIJ Scientific Seminar,
February 2005, Tokyo, Japan.
Bevrani, Hassan & Hiyama, Takashi (2005) A control strategy for LFC design with communication delays. In The 7th International Power Engineering Conference, IPEC 2005, Nov. 29 2005-Dec. 2 2005,
Bevrani, Hassan & Hiyama, Takashi (2005) A robust solution for PI-based LFC problem with communication delays. In IEEJ Transactions on Power and Energy, The Institute of Electrical Engineers of Japan
(IEEJ), Osaka, Japan, pp. 15-20.
Bevrani, Hassan, Ise, Toshifumi, Mitani, Yasunori, & Tsuji, Kiichiro (2004) DC-DC Quasi-Resonant Converters: Linear Robust Control. In 2004 IEEE International Symposium on Industrial Electronics
(ISIE 2004), 2004, France.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Decentralized robust load-frequency control: A PI-based approach. In International Conference on Electrical Engineering (ICEE) 2004, 2004,
Sapporo, Japan.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) PI-based multi-objective load-frequency control in a restructured power system. In SICE 2004 Annual Conference, 4-6 August 2004, Japan.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Robust AGC in a competitive environment. In 39th Universities Power Engineering Conference-UPEC 2004, 6-8 Sept. 2004, UK.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Robust Decentralized LFC Design In a Deregulated Environment. In 2004 IEEE International Conference on Electric Utility Deregulation,
Restructuring and Power Technologies, 2004. (DRPT 2004), April 2004, Hong Kong.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2004) Robust LFC design using mixed H2/Hinf technique. In International Conference on Electrical Engineering (ICEE) 2004, 2004, Sapporo, Japan.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2003) Robust Load Frequency Regulation In a New Distributed Generation Environment. In IEEE Power Engineering Society General Meeting, 2003,
2003, Toronto, Canada.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2003) A scenario on load-frequency controller design in a deregulated power system. In SICE 2003 Annual Conference, 4-6 Aug. 2003, Fukui, Japan.
Bevrani, Hassan, Rezazadeh, Abdolbaghi, & Teshnehlab, Mohammad (2002) Comparison of existing LFC approaches in a deregulated environment. In Fifth International Conference on Power System Management
and Control, 17-19 April 2002, UK.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2002) Robust Control Design for a ZCS Converter. In 28th Annual Conference of the Industrial Electronics Society (IECON 02), 5-8 Nov. 2002,
Bevrani, Hassan, Ise, Toshifumi, Mitani, Yasunori, & Tsuji, Kiichiro (2002) Robust controller design for a DC-DC ZCS converter using µ-synthesis and analysis. In 2002 IEEJ Technical Meeting, 2002,
Nagona, Japan.
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2002) Robust low-order load frequency controller in a deregulated environment. In 5th Asia-Pacific Conference on Control and Measurement (APCCM),
2002, Dali, China.
Bevrani, Hassan (2002) A novel approach for power system load frequency controller design. In IEEE/PES Transmission and Distribution Conference and Exhibition 2002: Asia Pacific, 6-10 Oct. 2002,
Bevrani, Hassan, Abrishamchian, M., & Safari-shad, N. (1999) Linear robust control of switching power converters. In 7th Iranian Conference on Electrical Engineering, May 1999, Tehran, Iran.
Bevrani, Hassan, Abrishamchian, M., & Safari-Shad, Nader (1999) Nonlinear and linear robust control of switching power converters. In 1999 IEEE lntemational Conference on Control Applications, August
22-27, 1999, Hawaii, USA.
Bevrani, Hassan (1999) Robust load frequency controller in a deregulated environment: a mu-Synthesis approach. In 1999 IEEE lntemational Conference on Control Applications, August 22-27, 1999,
Hawaii, USA.
Conference Item
Bevrani, Hassan, Mitani, Yasunori, & Tsuji, Kiichiro (2002) Sequential decentralized design of robust load frequency controllers in a multi-area power system. In Universities Student Meeting,
November 2002, Awaji Island, Japan.
Bevrani, Hassan (2004) Decentralized robust load-frequency control synthesis in restructured power systems. Osaka University.
This list was generated on Sat Apr 19 08:00:10 2014 EST.
|
{"url":"http://eprints.qut.edu.au/view/person/Bevrani,_Hassan.html","timestamp":"2014-04-20T23:44:42Z","content_type":null,"content_length":"77196","record_id":"<urn:uuid:33c7924a-2c70-4637-b76b-49a1fd6ecccb>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A logic programming approach to implementing higher-order term rewriting
Results 1 - 10 of 12
, 1996
"... We present the linear type theory LLF as the formal basis for a conservative extension of the LF logical framework. (Appeared in the proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science --- LICS'96 (E. Clarke editor), pp. 264--275, New Brunswick, NJ, July 27--30 1996.) LLF c ..."
Cited by 217 (44 self)
We present the linear type theory LLF as the formal basis for a conservative extension of the LF logical framework. (Appeared in the proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science --- LICS'96 (E. Clarke editor), pp. 264--275, New Brunswick, NJ, July 27--30 1996.) LLF combines the expressive power of dependent types with linear logic to permit the natural and
concise representation of a whole new class of deductive systems, namely those dealing with state. As an example we encode a version of Mini-ML with references including its type system, its
operational semantics, and a proof of type preservation. Another example is the encoding of a sequent calculus for classical linear logic and its cut elimination theorem. LLF can also be given an
operational interpretation as a logic programming language under which the representations above can be used for type inference, evaluation and cut-elimination. 1 Introduction A logical framework is
a formal system desig...
- ACM Trans. Softw. Eng. Methodol , 2001
"... Term rewriting is an appealing technique for performing program analysis and program transformation. Tree (term) traversal is frequently used but is not supported by standard term rewriting.
Cited by 59 (8 self)
- 15th International Conference on Automated Deduction, volume 1421 of Lecture Notes in Artificial Intelligence , 1998
"... Introduction Proof planning [4] is an approach to theorem proving which encodes heuristics for constructing mathematical proofs in a meta-theory of methods. The Clam system, developed at
Edinburgh [3], has been used for several years to develop proof planning, in particular proof plans for induction ..."
Cited by 58 (8 self)
Introduction Proof planning [4] is an approach to theorem proving which encodes heuristics for constructing mathematical proofs in a meta-theory of methods. The Clam system, developed at Edinburgh
[3], has been used for several years to develop proof planning, in particular proof plans for induction. It has become clear that many of the theorem-proving tasks that we would like to perform are
naturally higher-order. For example, an important technique called middle-out reasoning [6] uses meta-variables to stand for some unknown objects in a proof, to be instantiated as the proof proceeds.
Domains such as the synthesis and verification of software and hardware systems, and techniques such as proof critics [7], benefit greatly from such middle-out reasoning. Since in these domains the
meta-variables often become instantiated with terms of function type, reasoning with them is naturally higher-order, and higher-order unification is a
, 1995
"... The terms of the simply-typed λ-calculus can be used to express the higher-order abstract syntax of objects such as logical formulas, proofs, and programs. Support for the manipulation of such
objects is provided in several programming languages (e.g. λProlog, Elf). Such languages also provide embed ..."
Cited by 41 (1 self)
The terms of the simply-typed λ-calculus can be used to express the higher-order abstract syntax of objects such as logical formulas, proofs, and programs. Support for the manipulation of such
objects is provided in several programming languages (e.g. λProlog, Elf). Such languages also provide embedded implication, a tool which is widely used for expressing hypothetical judgments in
natural deduction. In this paper, we show how a restricted form of second-order syntax and embedded implication can be used together with induction in the Coq Proof Development system. We specify
typing rules and evaluation for a simple functional language containing only function abstraction and application, and we fully formalize a proof of type soundness in the system. One difficulty we
encountered is that expressing the higher-order syntax of an object-language as an inductive type in Coq generates a class of terms that contains more than just those that directly represent objects
in the language. We ove...
- In Workshop on Reduction Strategies in Rewriting and Programming (WRS’01), volume 57 of Electronic Notes in Theoretical Computer Science , 2001
"... Program transformation is used in a wide range of applications including compiler construction, optimization, program synthesis, refactoring, software renovation, and reverse engineering.
Complex program transformations are achieved through a number of consecutive modifications of a program. Transfo ..."
Cited by 24 (1 self)
Program transformation is used in a wide range of applications including compiler construction, optimization, program synthesis, refactoring, software renovation, and reverse engineering. Complex
program transformations are achieved through a number of consecutive modifications of a program. Transformation rules define basic modifications. A transformation strategy is an algorithm for
choosing a path in the rewrite relation induced by a set of rules. This paper surveys the support for the definition of strategies in program transformation systems. After a discussion of kinds of
program transformation and choices in program representation, the basic elements of a strategy system are discussed and the choices in the design of a strategy language are considered. Several styles
of strategy systems as provided in existing languages are then analyzed.
, 2002
"... Path logic programming is a modest extension of Prolog for the specification of program transformations. We give an informal introduction to this extension, and we show how it can be used in
coding standard compiler optimisations, and also a number of obfuscating transformations. The object language ..."
Cited by 18 (6 self)
Path logic programming is a modest extension of Prolog for the specification of program transformations. We give an informal introduction to this extension, and we show how it can be used in coding
standard compiler optimisations, and also a number of obfuscating transformations. The object language is the Microsoft .NET intermediate language (IL).
- In editors Proc. 5th International Workshop on Extensions of Logic Programming ELP'96 , 1996
"... . We present the functional logic language Higher Order Babel which provides higher order unification for parameter passing and solving equations. When searching for a function which solves an
equation, not only "polynomial functions" but also defined functions are taken into account. In contrast to ..."
Cited by 11 (2 self)
. We present the functional logic language Higher Order Babel which provides higher order unification for parameter passing and solving equations. When searching for a function which solves an
equation, not only "polynomial functions" but also defined functions are taken into account. In contrast to all other programming languages which support higher order unification HO-Babel replaces
the expensive β-reduction by the much more efficient combinator reduction. Moreover, HO-Babel is more homogeneous since it does not distinguish functions which only represent data structures and
defined functions which are equipped with the full execution mechanism of the language. 1 Introduction In comparison to purely logic programming languages, integrated functional logic programming
languages allow a more efficient implementation since functions are deterministic and this determinism can be exploited to reduce the search space. On the other hand, functional logic languages have
more expressive po...
- In TPHOLs’01, volume 2152 of LNCS , 2001
"... This paper reports a case study in the use of proof planning in the context of higher order syntax. Rippling is a heuristic for guiding rewriting steps in induction that has been used
successfully in proof planning inductive proofs using first order representations. Ordinal arithmetic provides a nat ..."
Cited by 5 (0 self)
This paper reports a case study in the use of proof planning in the context of higher order syntax. Rippling is a heuristic for guiding rewriting steps in induction that has been used successfully in
proof planning inductive proofs using first order representations. Ordinal arithmetic provides a natural set of higher order examples on which transfinite induction may be attempted using rippling.
Previously Boyer-Moore style automation could not be applied to such domains. We demonstrate that a higher-order extension of the rippling heuristic is sufficient to plan such proofs automatically.
Accordingly, ordinal arithmetic has been implemented in Clam, a higher order proof planning system for induction, and standard undergraduate text book problems have been successfully planned. We show
the synthesis of a fixpoint for normal ordinal functions which demonstrates how our automation could be extended to produce more interesting results than the textbook examples tried so far.
, 2000
"... . We describe a system for the synthesis of logic programs from specifications based on higher-order logical descriptions of appropriate refinement operations. The system has been implemented within
the proof planning system Clam. The generality of the approach is such that its extension to allow sy ..."
Cited by 3 (1 self)
. We describe a system for the synthesis of logic programs from specifications based on higher-order logical descriptions of appropriate refinement operations. The system has been implemented within the
proof planning system Clam. The generality of the approach is such that its extension to allow synthesis of higher-order logic programs was straightforward. Some illustrative examples are given. The
approach is extensible to further classes of synthesis. 1 Introduction Earlier work on the synthesis of logic programs has taken the approach of constructing a program in the course of proving
equivalence to a specification, which is written in a richer logic than the resulting program. Typically, quantifiers and thus binding of variables are present in the specification, and have to be manipulated correctly. We extend earlier work using as far as possible a declarative reading in a higher-order logic. The higher-order proof planning framework which we employ provides a more
, 1998
"... We present an axiomatization of term rewriting systems in Forum, a presentation of linear logic in terms of uniform proofs, which allows us to relate provability and derivability in a natural
way. The resulting theory can be used to prove properties of the original system. Vice versa the structure o ..."
Cited by 1 (1 self)
We present an axiomatization of term rewriting systems in Forum, a presentation of linear logic in terms of uniform proofs, which allows us to relate provability and derivability in a natural way.
The resulting theory can be used to prove properties of the original system. Vice versa the structure of the formulas used in the encoding suggests us a possible operational interpretation of Forum.
The considered fragment turns out to be an extension of previously proposed multi-conclusion logics.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1774310","timestamp":"2014-04-17T23:14:57Z","content_type":null,"content_length":"38610","record_id":"<urn:uuid:56f1d6b3-2fd1-47aa-afab-6afec5420f53>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to correctly average together values from a logarithmic scale (dBm in this case).
November 7th 2012, 06:27 AM #1
Nov 2012
How to correctly average together values from a logarithmic scale (dBm in this case).
I'm not sure if I'm posting this in the correct category but here goes:
I have two sets of numbers, milli-Watt (mW) and dBm. The mW values are linear while dBm is a logarithmic scale. The formula to get from mW to dBm is dBm = Log10(mW)*10. This means, of course, that to get from dBm to mW the equation is mW = 10^(dBm/10).
Now, starting with the values in mW of 10 and 1000, their corresponding dBm values are 10 and 30. Averaging the mW values gives 505 and averaging the dBm values the same way gives 20 (using the arithmetic mean, if I'm understanding the term correctly). The problem is that when 505 mW is converted to dBm the result is 27.03, and converting 20 dBm to mW gives a value of 100.
Given that I am fairly sure the arithmetic mean always works for linear values, since that is what it seems made for, I am assuming that some other method must be used to get the mean of a set of
values that are on a logarithmic scale. I have read about a method called the Geometric Mean (which is sqrt(y*x) ) and when used on the mW values of 10 and 1000 the result is 100 but when it is
used on the dBm values the result is 17.32.
To summarize, I have a set of dBm values which change over time and I need to average together, and in another case average together two sets and then subtract one from the other to see if the
values are declining over time. So I need to be able to successfully average together the values in the logarithmic scale and at this point I'm not sure what the correct method is.
My questions:
1) How to correctly average together dBm values that are on a logarithmic scale.
2) How that average should correspond to mW values.
My current suspicion is that 20dBm and 100mW are the correct averages but I'm not sure if I'm right or not, please advise.
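The relationship between the two averages can be checked numerically. Below is a minimal Python sketch using the question's own numbers (the function names are just illustrative); it shows that the arithmetic mean in the dBm domain corresponds to the geometric mean in the mW domain:

```python
import math

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

mw_values = [10, 1000]
dbm_values = [mw_to_dbm(v) for v in mw_values]       # [10.0, 30.0]

# Arithmetic mean in the dBm (logarithmic) domain ...
dbm_mean = sum(dbm_values) / len(dbm_values)         # 20.0

# ... matches the geometric mean in the mW (linear) domain.
mw_geometric_mean = math.prod(mw_values) ** (1 / len(mw_values))  # 100.0

print(dbm_mean, dbm_to_mw(dbm_mean), mw_geometric_mean)
```

So 20 dBm and 100 mW are consistent with each other: averaging dBm values arithmetically is the same as taking the geometric mean of the corresponding mW values. Which average is the right one depends on whether the quantity being averaged is power (average in mW) or level in dB (average in dBm).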
|
{"url":"http://mathhelpforum.com/differential-geometry/206956-how-correctly-average-together-values-logarithmic-scale-dbm-case.html","timestamp":"2014-04-18T11:06:58Z","content_type":null,"content_length":"31809","record_id":"<urn:uuid:f9d80b16-89ed-4fd1-8ad9-3de6fb6f38e1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Academics_He Jifeng computer software expert
He Jifeng, computer software expert, was born in Shanghai in August 1943 and graduated from the mathematics department of Fudan University in 1965. He is the president of Hangzhou International Outsourcing College. Since 1965 he has been working at Huadong Normal University (HDNU), first as a teacher and, since 1986, as a professor. During 1980-1982 he studied at Stanford University and the University of San Francisco as a visiting scholar, and during 1984-1998 he was a visiting professor and senior researcher at Oxford University. He has been a supervisor of Ph.D. students at HDNU since 1995, and a part-time professor and Ph.D. supervisor at Shanghai Jiaotong University since 1996. Since 1998 he has been a senior researcher at the United Nations University International Institute for Software Technology (UNU-IIST). He is also a part-time professor and Ph.D. supervisor at Nanjing University. In 2001 he became the president of the Software School of Huadong Normal University, and in 2002 he was appointed professor for life at Huadong Normal University. Since 2003 he has been a part-time professor and Ph.D. supervisor at Zhejiang University. At present he is the president of the Software School of Huadong Normal University and director of the Shanghai Institute of Embedded Systems. In December 2005 he was elected an academician of the Chinese Academy of Sciences.
He has been engaged in research on programming theory and its applications since 1980. In 1986, he and C.A.R. Hoare co-proposed "the program decomposition operator", treating the specification language and the programming language as the same kind of mathematical object. He then proposed taking "relational algebra" as the unified mathematical model for specifying programs and software, so that relational algebra can be used to describe the process of program decomposition and composition, which directly supports software development. On data refinement, he worked out a complete method for handling data refinement of nondeterministic programming languages.
In 1995, after summarizing a series of programming-language semantic theories and methods, he and C.A.R. Hoare co-proposed the unifying theory of programming, a mathematical principle connecting the various kinds of programming theory. He also proposed using a formalized interface theory to connect several programming languages, as well as a mathematical model and algebraic laws for nondeterministic data flow. In recent years, his research on hardware/software co-design systems has provided a useful way to reduce the time and cost of system-on-chip design.
In 1985, he won the First Prize for software design awarded by the Ministry of Electronics Industry; in 1986, the First Prize for advancement of science and technology in Shanghai; in 1988, he was selected as an Outstanding Middle-aged Expert with Prominent Contributions to the country; he received the Queen's Award for Advanced Technology in 1989 and 1993 respectively; in 2000, the First Prize for advancement of science and technology in Shanghai; and in 2002, the Second Prize for National Natural Science.
May, 1960 ~ Feb., 1965 Study in math department of Fudan University
July, 1980 ~ July, 1981 visiting scholar in Stanford University and University of San Francisco
Mar., 1965 ~ July, 1985 teacher in Huadong Normal University
Dec., 1984 ~ July, 1998 visiting professor and senior researcher in Oxford University
Aug., 1986 up to present working as a professor in Huadong Normal University
1988 up to present State Outstanding Expert
Aug., 1995 up to present tutor of Ph.D. students in Huadong Normal University
July, 1998 up to present senior researcher of United Nations University International Institute for Software Technology (UNU-IIST)
Aug., 1996 up to present part-time professor and tutor of Ph.D. students of Shanghai Jiaotong University
Aug., 1998 up to present part-time professor and tutor of Ph.D. students of Nanjing University
Nov., 2001 up to present president of Software School of Huadong Normal University
Dec., 2002 up to present professor for life in Huadong Normal University
May, 2003 up to present part-time professor and tutor of Ph.D. students of Zhejiang University
Since December, 2005, academician of Chinese Academy of Sciences
First Prize for software design awarded by the Ministry of Electronics Industry in 1985
First Prize for advancement of science and technology in Shanghai in 1986
Queen's Award for Advanced Technology in 1989
Queen's Award for Advanced Technology in 1993
First Prize for advancement of science and technology in Shanghai in 2000
Second Prize for National Natural Science in 2002
Major Academic Research Papers and Achievements
Since 1985, his academic research achievement in the "Design of a Strict-Security Software Complete Mathematical Calculation System" has mainly provided two sorts of technology: (1) establishing a standardized programming and software calculation system, which can use mathematical calculation to support the preparation of technical documents and verification tasks related to software design at each important stage of software development; and (2) the design of complete calculation principles to guide development tasks, including the standardized description of all software system parts derived from users' requirements, and the functional description of low-level software modules calculated from the standardized description of all parts. His related academic research papers were published in well-known international journals such as "Communications of the ACM", "Formal Aspects of Computing", "Science of Computer Programming" and "Acta Informatica", or presented at important academic meetings. By the year 2002, his academic viewpoints had been cited in the SCI index 169 times.
1. J. He, C.A.R. Hoare, J.W. Sanders, Data Refinement Refined - Resume, Lecture Notes in Computer Science, 1986, 213: 187-196
2. C.A.R. Hoare, He Jifeng, J.W. Sanders, Prespecification in Data Refinement, Information Processing Letters, 1987, 25: 71-76
3. He Jifeng, Process Simulation and Refinement, Formal Aspects of Computing, 1989, 1: 229-241
4. He Jifeng, C.A.R. Hoare, From Algebra to Operational Semantics, Information Processing Letters, 1993, 45: 75-80
5. He Jifeng, From CSP to Hybrid Systems, A Classical Mind, 1994, 171-191
6. He Jifeng, C.A.R. Hoare, Provably Correct Systems, Lecture Notes in Computer Science, 1994, 863: 288-335
7. He Jifeng, Provably Correct Systems: Modelling of Communication Languages and Design of Optimized Compilers, McGraw-Hill, 1994.
8. He Jifeng, K. Seidel, A. McIver, Probabilistic Models for the Guarded Command Language, Science of Computer Programming, 1997, 28: 171-192
9. C.A.R. Hoare, He Jifeng, Unifying Theories of Programming, Prentice Hall International, 1998.
10. He Jifeng, A Common Framework for Mixed Hardware/Software Systems, Proceedings of IFM'99, 1999, 1-24.
Major achievements
1. Addressing the imperfection of data refinement methods and their limitation to deterministic programming languages, He Jifeng and his collaborators proposed a complete data refinement theory, and put forward a data refinement method for handling nondeterministic programming languages, in their research papers "Data Refinement Refined" and "Prespecification in Data Refinement", using the "up-down simulation" image set to obtain the function declaration of each process from every programming module, yielding a mature calculation principle. In 1986, He and his collaborators proposed the direct connection between data refinement theory and algebraic programming laws, so that the appropriate data refinement method can be chosen for different languages. The well-known European specification language B has adopted this method (see "The B Book" by J.-R. Abrial, pages 501 and 550, Cambridge University Press). The Theoretical Computer Science series published by Cambridge University Press once pointed out that this data refinement theory is a milestone of model-oriented software development.
2. In 1986, he and his collaborator proposed the process decomposition operator, and treated the specification language and the programming language together as the same mathematical object. One year later, in 1987, he proposed in his research paper "The Weakest Prespecification" the idea of using relational algebra as the unified mathematical model of programming and software specification, discovering that the program-analysis equations (X;Q > S and P;X > S) in this mathematical framework can be solved. Relational algebra can thus be used to describe the decomposition and composition of processes, so mathematical methods can be applied directly to develop software. Based on the above work, he proposed the theory of algebraic laws in programming, and a set of algebraic laws was concluded for programming languages so that program transformation can be accomplished directly by applying basic laws.
3. In 1998, he and C.A.R. Hoare, in their monograph "Unifying Theories of Programming", proposed a unifying mathematical model able to describe sequential, parallel, communicating, logic and functional programming languages; proved the consistency of three semantics, i.e. denotational semantics, algebraic semantics and operational semantics; and introduced a model of the probabilistic programming language by using "linking theory".
4. He used a formalized interface theory to connect several programming languages, and proposed mathematical models and algebraic laws for nondeterministic data flow. In 1989, in the European research program Eureka PROCOS, whose major task was to use formalized interface theory to connect several programming languages and to design a provably correct compiler and program transformation system, He Jifeng made an important contribution. He summarized the related work in his monograph "Provably Correct Systems: Modelling of Communication Languages and Design of Optimized Compilers". In 1990, he presented a mathematical model of nondeterministic data flow and its algebraic laws in his research paper "A Theory of Synchrony and Asynchrony", which was used to support Jackson development methods and the design of asynchronous communication processes.
5. Since 1992, he has worked on a hardware/software co-design framework for the industrial programming language VERILOG, discussing the consistency of the language's simulator semantics and synthesis semantics, and designed the mathematical model for an optimized hardware/software decomposer and synthesizer. The development of a hardware/software co-design platform named Poseidon has been completed and put into use.
Projects/programs completed in recent years
2001 - 2003, VERILOG simulator and synthesizer design, sponsored by Shanghai Information Commission (CX20010005)
2002 - 2004, formalized theory of UML software development process, key project sponsored by the Ministry of Education (02104)
2003 - 2005, theory of software security and hardware and software co-design, "211" project
2002 - 2007, formalized theory and method for the study of software network configuration, "973" project (study of middleware theory and methods based on the Internet environment, project code: 2002CB31200001)
|
{"url":"http://hise.hznu.edu.cn/enNewsShow.aspx?ID=158","timestamp":"2014-04-21T02:04:21Z","content_type":null,"content_length":"17084","record_id":"<urn:uuid:acfcbc7f-deb3-474e-9aaa-2dbb6b3c138b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
VarianceTest[data]
tests whether the variance of the data is one.
VarianceTest[{data1, data2}]
tests whether the variances of data1 and data2 are equal.
• VarianceTest[data, σ₀²] performs a hypothesis test on data with null hypothesis that the true population dispersion parameter σ² = σ₀², and alternative hypothesis that σ² ≠ σ₀².
• Given data and σ₀², VarianceTest tests the null hypothesis H₀: σ² = σ₀² against the alternative Hₐ: σ² ≠ σ₀².
• VarianceTest[dspec, Automatic] will choose the most powerful test that applies to dspec.
• VarianceTest[dspec, All] will choose all tests that apply to dspec.
• VarianceTest[dspec, "test"] reports the p-value according to "test".
• Most tests require normally distributed data. If a test is less sensitive to a normality assumption, it is called robust. Some tests assume that data is symmetric around its medians.
• "BrownForsythe"   robust             robust Levene test
  "Conover"         symmetry           based on squared ranks of data
  "FisherRatio"     normality          based on the χ² distribution
  "Levene"          robust, symmetry   compares individual and group variances
  "SiegelTukey"     symmetry           based on ranks of pooled data
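For orientation, the Fisher-ratio test listed above is the classical chi-square test of a variance. A rough illustration in plain Python rather than the Wolfram Language (only the test statistic is computed; under H₀ and normality it follows a χ² distribution with n-1 degrees of freedom, from which a p-value would be read):

```python
from statistics import variance

def fisher_ratio_statistic(data, sigma2_0=1.0):
    # Under H0 (population variance == sigma2_0, normal data),
    # (n - 1) * s^2 / sigma2_0 is chi-square distributed with
    # n - 1 degrees of freedom.
    n = len(data)
    return (n - 1) * variance(data) / sigma2_0

print(fisher_ratio_statistic([1, 2, 3, 4, 5]))  # sample variance 2.5 -> statistic 10.0
```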
• VarianceTest[data, σ₀², "HypothesisTestData"] returns a HypothesisTestData object htd that can be used to extract additional test results and properties using the form htd["property"].
• VarianceTest[data, σ₀², "property"] can be used to directly give the value of "property".
• "AllTests" list of all applicable tests
"AutomaticTest" test chosen if Automatic is used
"DegreesOfFreedom" the degrees of freedom used in a test
"PValue" list of p-values
"PValueTable" formatted table of p-values
"ShortTestConclusion" a short description of the conclusion of a test
"TestConclusion" a description of the conclusion of a test
"TestData" list of pairs of test statistics and p-values
"TestDataTable" formatted table of p-values and test statistics
"TestStatistic" list of test statistics
"TestStatisticTable" formatted table of test statistics
• AlternativeHypothesis "Unequal" the inequality for the alternative hypothesis
SignificanceLevel 0.05 cutoff for diagnostics and reporting
VerifyTestAssumptions Automatic set which diagnostic tests to run
• For tests of variance, a cutoff α is chosen such that H₀ is rejected only if p ≤ α. The value of α used for the "TestConclusion" and "ShortTestConclusion" properties is controlled by the SignificanceLevel option. This value is also used in diagnostic tests of assumptions, including tests for normality and symmetry. By default, α is set to 0.05.
• Named settings for VerifyTestAssumptions in VarianceTest include:
New in 8
|
{"url":"http://reference.wolfram.com/mathematica/ref/VarianceTest.html","timestamp":"2014-04-18T10:57:49Z","content_type":null,"content_length":"54527","record_id":"<urn:uuid:0f7426af-4818-4221-afec-59877fbd0711>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Demonstrating Equivalent Fractions
Date: 10/25/2000 at 10:24:54
From: Karen Harris
Subject: Equivalent fractions
My daughter is a 5th grader in a gifted class. She has been given an
assignment to teach other children in her class about equivalent
fractions. The demonstration must last at least 10 minutes, and she
must use visuals. She does not understand the concept well enough, she
feels, to explain and demonstrate to others.
I am trying to help her with a lesson plan that would be unique and
simple so that she can learn as well as teach others her age. I have
gone to various sites and am having a difficult time finding something
I can explain to her and that she can use.
Date: 10/25/2000 at 13:09:55
From: Doctor Peterson
Subject: Re: Equivalent fractions
Hi, Karen.
One thing to do first is to go to our search page and enter the phrase
"equivalent fractions" (don't use the quotes, and check the button for
'that exact phrase'). There your daughter will be able to look through
examples of how we have explained the concept to others, which may
give a number of different approaches to try - one of which may be
just what she needs in order to feel more confident about it.
Here's my favorite way. Take a sheet of paper and divide it into some
number of columns, and shade some of them in:
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
Now make several copies of this, by hand or with a copier. Label the
shaded part on the original "2/3" (in my example).
Now take one of the copies and draw a line across the middle
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
+---+---+-+
|XXX|XXX| |
|XXX|XXX| |
|XXX|XXX| |
Suddenly the 2/3 has changed to 4/6! Write "4/6" on the shaded part.
You could have fun with this and act like it's a magic trick, making
the change behind a scarf. If someone puts the act down, saying
nothing really changed, well, that's the whole point. Fractions aren't
magic, they're just common sense.
Now ask someone in the audience if (s)he can see how to change the
thirds (on another sheet) into ninths, or twelfths. After repeating
this a few times, you will have a set of equivalent fractions taped to
the board: 2/3, 4/6, 6/9, 8/12, and so on. These are all different
ways to name the same fraction.
Now here comes the real math. Write on the board:
 2     2     4
--- x --- = ---
 3     2     6
This explains what the first equivalent fraction means: we have twice
as many pieces in all (the denominator - which means "namer," since it
says that each piece is a "third"); and also twice as many pieces in
our fraction (the numerator - which means "numberer," since it tells
how many of those pieces we have.) In multiplying both by the same
number, we do not change the meaning of the fraction.
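The same fact can be checked mechanically; for instance, a short Python snippet (just an illustration, using the standard fractions module, which reduces fractions automatically):

```python
from fractions import Fraction

# Multiplying numerator and denominator by the same number
# leaves the value of the fraction unchanged:
assert Fraction(2, 3) == Fraction(4, 6) == Fraction(6, 9) == Fraction(8, 12)

# Fraction always stores the lowest-terms form:
print(Fraction(4, 6))  # prints 2/3
```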
If there is time, your daughter might like to think up a way to do the
same sort of thing, but starting with a fraction that is not in lowest
terms, such as 4/6. One way is to color in 4 of 6 columns (probably
best done with the paper in landscape position), then cut them all
apart and rearrange them from 1 by 6 into a 2 by 3 formation, with the
shaded pieces looking the way they do in my picture for 4/6. Proper
use of tape can change the fraction to 2/3. The fact that both
numerator and denominator can be evenly divided by 2 allows this to be
done; you couldn't make such a rearrangement for 5/6, though you could
for 3/6.
There's a lot to learn by playing like this!
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/58877.html","timestamp":"2014-04-17T13:45:33Z","content_type":null,"content_length":"8588","record_id":"<urn:uuid:c522dd7b-9c30-4f24-9da1-dd0fba959473>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
h2g2 - A Conversation for Bernoulli's Principle
Slower knots!
HollePolle Started conversation Mar 29, 2001
Hi Croz and purple!
This is a very nice entry! A "complicated" technical principle in easy to understand words!
However, I think you made a little mistake in transforming units or maybe missed the decimal point. 25 knots should be 12.5 meters per second, shouldn't they?
Good job!
Slower knots!
Zak T Duck Posted Mar 29, 2001
Yep you're right, the decimal point is missing. I'll ask Ashley to sort it out.
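For reference, one way to do the knots-to-metres-per-second conversion being discussed, using the exact definition of the international knot (1 knot = 1852 m per hour); a quick sketch:

```python
METRES_PER_SECOND_PER_KNOT = 1852 / 3600   # exact: one knot in m/s

def knots_to_mps(knots):
    return knots * METRES_PER_SECOND_PER_KNOT

# 12.86 with the exact factor; the thread's 12.5 uses the rough 0.5 m/s per knot.
print(round(knots_to_mps(25), 2))
```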
|
{"url":"http://www.h2g2.com/approved_entry/A517169/conversation/view/F63306/T104367","timestamp":"2014-04-17T19:32:31Z","content_type":null,"content_length":"14370","record_id":"<urn:uuid:cb8480c6-1841-4f42-9bbe-a884a1fa03c6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Key Biscayne Math Tutor
Find a Key Biscayne Math Tutor
...I pride myself on my flexibility and availability. I have even been known to respond to text messages about individual math problems when needed. Aside from tutoring, I have experience
teaching at the college level, having taught courses in American Government, undergraduate and graduate level Statistics, and Government and Business.
16 Subjects: including algebra 2, prealgebra, trigonometry, writing
...During my college years I led group and individual tutoring sessions for children of all ages at the Jackie Joyner-Kersee Foundation in East St. Louis. It was here I developed a life-long love
for helping children succeed.
26 Subjects: including algebra 1, geometry, English, prealgebra
...It was this course that I knew I wanted to also major in Chemistry. In addition to Biology, this subject fascinates with ease. It was also the course I truly discovered my talent for teaching.
30 Subjects: including prealgebra, GED, reading, algebra 1
...He is also a college professor with a background in Political Science and Social Work so it has rubbed off on me. I like to base my curriculum off of students' as well as parents' needs, and
am very flexible in helping your student identify their strengths and weaknesses not only in the subject ...
24 Subjects: including ACT Math, SAT math, chemistry, geometry
...I must have had a mental block about it. Now that I've been through college and grad school and am about to start Medical School I realize just how fun science can be IF it is taught right! I
think it's very important to work with grade school students and find ways to get them interested in what they are learning.
40 Subjects: including algebra 1, geometry, prealgebra, chemistry
Nearby Cities With Math Tutor
Coral Gables, FL Math Tutors
El Portal, FL Math Tutors
Maimi, OK Math Tutors
Medley, FL Math Tutors
Mia Shores, FL Math Tutors
Miami Math Tutors
Miami Beach Math Tutors
North Bay Village, FL Math Tutors
Palmetto Bay, FL Math Tutors
Pinecrest, FL Math Tutors
South Miami, FL Math Tutors
Sunny Isles Beach, FL Math Tutors
Sweetwater, FL Math Tutors
West Miami, FL Math Tutors
West Park, FL Math Tutors
|
{"url":"http://www.purplemath.com/key_biscayne_math_tutors.php","timestamp":"2014-04-18T21:36:05Z","content_type":null,"content_length":"23824","record_id":"<urn:uuid:b6e4cb3a-30ef-485b-acab-8816f031320b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parking Functions and Noncrossing Partitions
A parking function is a sequence $(a_1,\dots,a_n)$ of positive integers such that, if $b_1\leq b_2\leq \cdots\leq b_n$ is the increasing rearrangement of the sequence $(a_1,\dots, a_n),$ then $b_i\
leq i$. A noncrossing partition of the set $[n]=\{1,2,\dots,n\}$ is a partition $\pi$ of the set $[n]$ with the property that if $a < b < c < d$ and some block $B$ of $\pi$ contains both $a$ and $c$,
while some block $B'$ of $\pi$ contains both $b$ and $d$, then $B=B'$. We establish some connections between parking functions and noncrossing partitions. A generating function for the flag
$f$-vector of the lattice NC$_{n+1}$ of noncrossing partitions of $[{\scriptstyle n+1}]$ is shown to coincide (up to the involution $\omega$ on symmetric function) with Haiman's parking function
symmetric function. We construct an edge labeling of NC$_{n+1}$ whose chain labels are the set of all parking functions of length $n$. This leads to a local action of the symmetric group ${S}_n$ on the maximal chains of NC$_{n+1}$.
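The parking-function condition in the abstract is easy to test by brute force. A small Python sketch (it also recovers the classical count of parking functions of length $n$, $(n+1)^{n-1}$):

```python
from itertools import product

def is_parking_function(seq):
    # b_1 <= b_2 <= ... <= b_n is the increasing rearrangement;
    # the condition is b_i <= i for every i.
    return all(b <= i for i, b in enumerate(sorted(seq), start=1))

n = 3
count = sum(is_parking_function(seq)
            for seq in product(range(1, n + 1), repeat=n))
print(count)  # 16 == (n + 1) ** (n - 1)
```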
|
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v4i2r20","timestamp":"2014-04-20T08:14:33Z","content_type":null,"content_length":"14669","record_id":"<urn:uuid:ad669d6f-ec3a-44d0-bc7e-b610d69660ba>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Write a program that takes text as input and produces a program that reproduces that text
up vote 29 down vote favorite
Recently I came across a nice problem, which turned out to be as simple to understand as it is hard to solve. The problem is:
Write a program that reads a text from input and prints some other program on output. If we compile and run the printed program, it must output the original text.
The input text is supposed to be rather large (more than 10000 characters).
The only (and very strong) requirement is that the size of the archive (i.e. the program printed) must be strictly less than the size of the original text. This makes impossible obvious solutions
std::string s;
/* read the text into s */
std::cout << "#include <iostream>\nint main() { std::cout << \"" << s << "\"; }";
I believe some archiving techniques are to be used here.
3 What have you tried? – Lightness Races in Orbit Jun 29 '11 at 19:34
1 What do you mean by "text"? [A-Z][a-z][some-punctuation]*? – belisarius Jun 29 '11 at 19:38
4 So you could start with the Library of Congress, run it through this program many times, and wind up with a program of a few lines which could reproduce the entire library. Do you feel one of your
eyebrows moving? – Beta Jun 29 '11 at 19:51
3 If you define "Text" as "printable characters on the keyboard" then @Aasmund Eldhuset's solution works fine; however if "Text" means an array of bytes between the values 0 and 255, this is impossible, as @templatetypedef says, due to your requirement that the size of the archive (i.e. the program printed) must be strictly less than the size of the original text, as I can craft a 10000 byte file that is incompressible. – Scott Chamberlain Jun 29 '11 at 20:26
1 This might be a good candidate for code golf. – crazy2be Jun 30 '11 at 3:15
show 9 more comments
5 Answers
Unfortunately, such a program does not exist.
To see why this is so, we need to do a bit of math. First, let's count up how many binary strings there are of length n. Each of the bits can be either a 0 or 1, which gives us one of two
choices for each of those bits. Since there are two choices per bit and n bits, there are thus a total of 2^n binary strings of length n.
Now, let's suppose that we want to build a compression algorithm that always compresses a bitstring of length n into a bitstring of length less than n. In order for this to work, we need
to count up how many different strings of length less than n there are. Well, this is given by the number of bitstrings of length 0, plus the number of bitstrings of length 1, plus the
number of bitstrings of length 2, etc., all the way up to n - 1. This total is
2^0 + 2^1 + 2^2 + ... + 2^(n-1)
Using a bit of math, we can get that this number is equal to 2^n - 1. In other words, the total number of bitstrings of length less than n is one smaller than the number of bitstrings of
length n.
But this is a problem. In order for us to have a lossless compression algorithm that always maps a string of length n to a string of length at most n - 1, we would have to have some way of associating every bitstring of length n with some shorter bitstring such that no two bitstrings of length n are associated with the same shorter bitstring. This way, we can compress
the string by just mapping it to the associated shorter string, and we can decompress it by reversing the mapping. The restriction that no two bitstrings of length n map to the same
shorter string is what makes this lossless - if two length-n bitstrings were to map to the same shorter bitstring, then when it came time to decompress the string, there wouldn't be a way
to know which of the two original bitstrings we had compressed.
This is where we reach a problem. Since there are 2^n different bitstrings of length n and only 2^n - 1 shorter bitstrings, there is no possible way we can pair up each bitstring of length n with some shorter bitstring without assigning at least two length-n bitstrings to the same shorter string. This means that no matter how hard we try, no matter how clever we are, and no matter how creative we get with our compression algorithm, there is a hard mathematical limit that says that we can't always make the text shorter.
So how does this map to your original problem? Well, if we get a string of text of length at least 10000 and need to output a shorter program that prints it, then we would have to have
some way of mapping each of the 2^10000 strings of length 10000 onto the 2^10000 - 1 strings of length less than 10000. That mapping has some other properties, namely that we always have
to produce a valid program, but that's irrelevant here - there simply aren't enough shorter strings to go around. As a result, the problem you want to solve is impossible.
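The counting at the heart of this argument is easy to verify for small n; a trivial sketch:

```python
# For every n, the number of bitstrings strictly shorter than n
# (2^0 + 2^1 + ... + 2^(n-1)) is exactly 2^n - 1: one fewer than
# the 2^n bitstrings of length n, so no injective "always shorter"
# compression map can exist.
for n in range(1, 25):
    shorter = sum(2 ** k for k in range(n))
    assert shorter == 2 ** n - 1
print("checked n = 1 .. 24")
```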
That said, we might be able to get a program that can compress all but one of the strings of length 10000 to a shorter string. In fact, we might find a compression algorithm that does this, meaning that with probability 1 - 2^(-10000) any string of length 10000 could be compressed. This is such a high probability that if we kept picking strings for the lifetime of the universe, we'd almost certainly never guess the One Bad String.
For further reading, there is a concept from information theory called Kolmogorov complexity, which is the length of the smallest program necessary to produce a given string. Some strings
are easily compressed (for example, abababababababab), while others are not (for example, sdkjhdbvljkhwqe0235089). There exist strings that are called incompressible strings, for which
the string cannot possibly be compressed into any smaller space. This means that any program that would print that string would have to be at least as long as the given string. For a good
introduction to Kolmogorov Complexity, you may want to look at Chapter 6 of "Introduction to the Theory of Computation, Second Edition" by Michael Sipser, which has an excellent overview
of some of the cooler results. For a more rigorous and in-depth look, consider reading "Elements of Information Theory," chapter 14.
Hope this helps!
Whilst you couldn't guarantee that the program would absolutely always be smaller than the input, you can reasonably assume it for typical inputs... depending upon what those inputs are. That's how ZIPs work: in practice, have you ever seen a ZIP file whose contents are larger compressed than uncompressed? – Lightness Races in Orbit Jun 29 '11 at 19:42
@Jason- The OP does mention (in bold text and with the phrase "very strong requirement") that it has to work for all inputs. The existence of incompressible strings says that you can't possibly do this. I agree with you that in general you can compress most strings, but for the OP's question the existence of a single bad string rules out any possible program to do this. – templatetypedef Jun 29 '11 at 19:48
@templatetypedef: Can you prove (or give a source) that for any N there exists a string longer than N that is not compressible (supposing that the alphabet is less than N symbols)? Because the OP specified that the input is greater than 10000 symbols (whereas the alphabet is no more than 256 symbols) – Armen Tsirunyan Jun 29 '11 at 19:57
@Armen Tsirunyan- Sure! The above pigeonhole argument works for arbitrary N. So there are 2^10000 possible strings of length 10000, but "only" 2^10000 - 1 strings of length shorter than that (and thus there are at most 2^10000 - 1 programs shorter than 10000 characters). Consequently, there's no way that every string of length at least 10000 can be invertibly mapped to a smaller program, since if it could, at least two strings would have to map to the same program, which couldn't tell which of the two strings to print. – templatetypedef Jun 29 '11 at 20:03
@Armen, en.wikipedia.org/wiki/Pigeonhole_principle . Whatever the alphabet may be, there exists no complete invertible function from strings of N symbols to strings of less than N symbols. If you restricted the set of symbols allowed to something smaller than those allowed in C strings (i.e., mapped from a string of N symbols from an alphabet of X allowed symbols to a string of <N with >X allowed symbols), you might be able to pull it off, but that's going to be rather restrictive. – bdonlan Jun 29 '11 at 20:06
What you are describing is essentially a program for creating self-extracting zip archives, with the small difference that a regular self-extracting zip archive would write the original
data to a file rather than to stdout. If you want to make such a program yourself, there are plenty of implementations of compression algorithms, or you could implement e.g. DEFLATE (the
algorithm used by gzip) yourself. The "outer" program must compress the input data and output the code for the decompression, and embed the compressed data into that code.
string originalData;
cin >> originalData;
char * compressedData = compress(originalData);
cout << "#include<...> string decompress(char * compressedData) { ... }" << endl;
cout << "int main() { char compressedData[] = {";
(output the int values of the elements of the compressedData array)
cout << "}; cout << decompress(compressedData) << endl; return 0; }" << endl;
Indeed. Though this is not an answer... – Lightness Races in Orbit Jun 29 '11 at 19:36
@Tomalak Geret'kal: Now it is :-) – Aasmund Eldhuset Jun 29 '11 at 19:37
Mmm.. – Lightness Races in Orbit Jun 29 '11 at 19:38
Surely "use gzip" is the answer to the question? :) – Jack V. Jul 5 '11 at 15:15
If we are talking about ASCII text...
I think this actually could be done, and I think the restriction that the text will be larger than 10000 chars is there for a reason (to give you coding room).
People here are saying that the string cannot be compressed, yet it can.
Requirement: OUTPUT THE ORIGINAL TEXT
Text is not data. When you read input text you read ASCII chars (bytes). Which have both printable and non printable values inside.
Take this for example:
ASCII values    characters
0x00 .. 0x08    NUL, (other control codes)
0x09 .. 0x0D    (white-space control codes: '\t','\f','\v','\n','\r')
0x0E .. 0x1F    (other control codes)
...             rest of printable characters
Since you have to print text as output, you are not interested in the range (0x00-0x08, 0x0E-0x1F). You can compress the input bytes by using a different storing and retrieving mechanism (binary patterns), since you don't have to give back the original data but the original text. You can recalculate what the stored values mean and readjust them to bytes to print. You would effectively lose only data that was not text data anyway, and is therefore not printable or inputtable. If WinZip did that it would be a big fail, but for your stated requirements it simply does not matter.
Since the requirement states that the text is 10000 chars, and you can save 26 of 255 values, if your packing did not have any loss you are effectively saving around 10% space, which means that if you can code the 'decompression' in 1000 (10% of 10000) characters you can achieve that. You would have to treat groups of 10 bytes as 11 chars, and from there extrapolate the 11th, by some extrapolation method for your range of 229. If that can be done then the problem is solvable.
Nevertheless it requires clever thinking, and coding skills that can actually do that in 1 kilobyte.
Of course this is just a conceptual answer, not a functional one. I don't know if I could ever achieve this.
But I had the urge to give my 2 cents on this, since everybody felt it cannot be done, by being so sure about it.
The real problem in your problem is understanding the problem and the requirements.
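To make the byte-packing idea concrete, here is a base-conversion sketch (entirely our own; the names `ALPHABET`, `pack`, and `unpack` are hypothetical, and the exact set of usable byte values is an assumption). Note that packing a 229-symbol alphabet into 8-bit bytes actually saves only about 2% per byte (log2 229 ≈ 7.84 bits versus 8), not the ~10% estimated above, since the excluded control codes carry less than a fifth of a bit of information per byte:

```python
# Pack text over a reduced alphabet of 229 usable byte values into full
# 8-bit bytes via base conversion: whitespace controls 0x09-0x0D plus
# the values 0x20-0xFF (5 + 224 = 229 symbols).
ALPHABET = bytes(range(0x09, 0x0E)) + bytes(range(0x20, 0x100))
INDEX = {b: i for i, b in enumerate(ALPHABET)}

def pack(text: bytes) -> bytes:
    n = 0
    for b in text:                 # interpret the text as a base-229 number
        n = n * 229 + INDEX[b]
    out = bytearray()
    while n:
        out.append(n % 256)        # re-express it in base 256, LSB first
        n //= 256
    return bytes(out)

def unpack(data: bytes, length: int) -> bytes:
    n = 0
    for b in reversed(data):       # rebuild the integer from base-256 digits
        n = n * 256 + b
    out = bytearray()
    for _ in range(length):        # peel off base-229 digits, LSB first
        out.append(ALPHABET[n % 229])
        n //= 229
    return bytes(reversed(out))

msg = b"printable text only!"
assert unpack(pack(msg), len(msg)) == msg
```

The decoder needs the original length because leading "digit zero" symbols would otherwise be lost; a real scheme would prepend the length to the packed data.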
1. Assuming "character" means "byte", and assuming the input text may contain at least as many valid characters as the programming language, it's impossible to do this for all inputs since, as templatetypedef explained, for any given length of input text all "strictly smaller" programs are themselves possible inputs of smaller length, which means there are more possible inputs than there can ever be outputs. (It's possible to arrange for the output to be at most one bit longer than the input by using an encoding scheme that starts with an "if this is 1, the following is just the unencoded input because it couldn't be compressed further" bit.)
2. Assuming it's sufficient to have this work for most inputs (e.g. inputs that consist mainly of ASCII characters and not the full range of possible byte values), then the answer readily exists: use gzip. That's what it's good at. Nothing is going to be much better. You can either create self-extracting archives, or treat the gzip format as the "language" output. In some circumstances you may be more efficient by having a complete programming language or executable as your output, but often, reducing the overhead by having a format designed for this problem, i.e. gzip, will be more efficient.
It's called a file archiver producing self-extracting archives.
Radical Arithmetic
The word radical has a lot of interesting definitions, but radical arithmetic doesn't actually refer to arithmetic that favors drastic political, economic, or social reforms. We're talking about
doing arithmetic with expressions that contain radical symbols. Sorry, you can put those signs down.
We've seen two special types of expressions so far: polynomial expressions, and rational expressions. Now we'll add one more special type, because things are funnier in threes: A radical expression
is any expression with one or more radical signs in it. Another way to put this is that a radical expression has at least one radical term, or a term with at least one radical in it. Yet one more way
to put this, because things are funnier in threes, is that a radical expression radiates with radicalocity.
Hmm, not so much. So much for the "rule of threes."
Examples of Radical Expressions
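The example list that followed on the original page did not survive extraction; a few illustrative radical expressions (ours, not Shmoop's originals — each contains at least one radical sign) might look like:

```latex
\sqrt{2} + 5, \qquad 3\sqrt{x} - \frac{1}{2}, \qquad \frac{\sqrt{x+1} + \sqrt[3]{y}}{4}
```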
Wolfram Demonstrations Project
Using Rule 30 to Generate Pseudorandom Real Numbers
The rule 30 cellular automaton is the engine behind Mathematica's generation of pseudorandom numbers. This Demonstration looks at an oversimplified version of how the central column in the evolution
of this automaton can be used to generate pseudorandom real numbers between 0 and 1.
An initial integer seed is chosen (controlled here by a slider), converted to base 2, and then converted to a string of light and dark squares (here 1s get converted to dark squares and 0s to light
squares). These squares (shaded blue at the top) are padded on either side with a number of light squares, and rule 30 is evolved from this initial condition. The squares in the central column of
this evolution can be regarded as the base-2 decimal expansion of a number between 0 and 1 (again, dark squares correspond to 1s and light squares to 0s). As the initial seed varies over a wide
range of integers, the numbers produced by this algorithm appear to be distributed nearly uniformly over the unit interval. Among several differences between this demonstration and random number
generators used in practice is that in the latter, the recording of the bits of the number generated does not begin with the first step in the evolution of the cellular automaton, but only after the automaton has been allowed to evolve for a while.
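The procedure described above can be sketched in a few lines. This is an illustrative reimplementation, not the Demonstration's actual code; the padding width and step count are arbitrary choices, and, as noted above, real generators would discard the initial steps rather than record them from the start:

```python
# Seed -> base-2 cells, pad with zeros on both sides, evolve rule 30,
# and read the central column as the binary expansion of a real in [0, 1).
def rule30_real(seed: int, steps: int = 32, pad: int = 64) -> float:
    cells = [0] * pad + [int(b) for b in bin(seed)[2:]] + [0] * pad
    center = len(cells) // 2
    x, weight = 0.0, 0.5
    for _ in range(steps):
        x += cells[center] * weight          # record the central cell as the next bit
        weight /= 2
        # rule 30 update: new cell = left XOR (center OR right); cyclic
        # boundary, which the padding keeps out of the light cone here
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % len(cells)])
                 for i in range(len(cells))]
    return x

assert all(0.0 <= rule30_real(s) < 1.0 for s in range(1, 50))
```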
Content Emphases by Cluster (Grade 6)
Not all of the content in a given grade is emphasized equally in the standards. Some clusters require greater emphasis than the others based on the depth of the ideas, the time that they take to
master, and/or their importance to future mathematics or the demands of college and career readiness. In addition, an intense focus on the most critical material at each grade allows depth in
learning, which is carried out through the Standards for Mathematical Practice.
To say that some things have greater emphasis is not to say that anything in the standards can safely be neglected in instruction. Neglecting material will leave gaps in student skill and
understanding and may leave students unprepared for the challenges of a later grade. All standards figure in a mathematical education and will therefore be eligible for inclusion on the PARCC
assessment. However, the assessments will strongly focus where the standards strongly focus.
In addition to identifying the Major, Additional, and Supporting Clusters for each grade, suggestions are given following the table below for ways to connect the Supporting to the Major Clusters of
the grade. Thus, rather than suggesting even inadvertently that some material not be taught, there is direct advice for teaching it, in ways that foster greater focus and coherence.
Click on each cluster heading to see the standards.
Key: Major Clusters; Supporting Clusters; Additional Clusters
Ratios and Proportional Reasoning
Understand ratio concepts and use ratio reasoning to solve problems.
6.RP.1 Understand the concept of a ratio and use ratio language to describe a ratio relationship between two quantities. For example, "The ratio of wings to beaks in the bird house at the zoo was
2:1, because for every 2 wings there was 1 beak." "For every vote candidate A received, candidate C received nearly three votes."
6.RP.2 Understand the concept of a unit rate a/b associated with a ratio a:b with b ≠ 0, and use rate language in the context of a ratio relationship. For example, "This recipe has a ratio of 3
cups of flour to 4 cups of sugar, so there is 3/4 cup of flour for each cup of sugar." "We paid $75 for 15 hamburgers, which is a rate of $5 per hamburger." (Expectations for unit rates in this
grade are limited to non-complex fractions.)
6.RP.3 Use ratio and rate reasoning to solve real-world and mathematical problems, e.g., by reasoning about tables of equivalent ratios, tape diagrams, double number line diagrams, or equations.
Make tables of equivalent ratios relating quantities with whole-number measurements, find missing values in the tables, and plot the pairs of values on the coordinate plane. Use tables to compare ratios.
Solve unit rate problems including those involving unit pricing and constant speed. For example, If it took 7 hours to mow 4 lawns, then at that rate, how many lawns could be mowed in 35 hours? At
what rate were lawns being mowed?
Find a percent of a quantity as a rate per 100 (e.g., 30% of a quantity means 30/100 times the quantity); solve problems involving finding the whole given a part and the percent.
Use ratio reasoning to convert measurement units; manipulate and transform units appropriately when multiplying or dividing quantities.
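The worked numbers in 6.RP.3 can be verified with a short computation. This is an illustrative sketch, not part of the Framework; the quantity 200 in the percent check is our own choice:

```python
# Checking the worked numbers in 6.RP.3 with exact rational arithmetic.
from fractions import Fraction

# 6.RP.3b: 4 lawns in 7 hours -> at that rate, 35 hours yields 20 lawns.
rate = Fraction(4, 7)                 # lawns per hour
assert rate * 35 == 20

# 6.RP.3c: 30% of a quantity means 30/100 times the quantity.
assert Fraction(30, 100) * 200 == 60  # 30% of 200 is 60
```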
The Number System
Apply and extend previous understandings of multiplication and division to divide fractions by fractions.
6.NS.1 Interpret and compute quotients of fractions, and solve word problems involving division of fractions by fractions, e.g., by using visual fraction models and equations to represent the
problem. For example, create a story context for (2/3) ÷ (3/4) and use a visual fraction model to show the quotient; use the relationship between multiplication and division to explain that (2/3) ÷
(3/4) = 8/9 because 3/4 of 8/9 is 2/3. (In general, (a/b) ÷ (c/d) = ad/bc.) How much chocolate will each person get if 3 people share 1/2 lb of chocolate equally? How many 3/4-cup servings are in 2
/3 of a cup of yogurt? How wide is a rectangular strip of land with length 3/4 mi and area 1/2 square mi?
Compute fluently with multi-digit numbers and find common factors and multiples.
6.NS.2 Fluently divide multi-digit numbers using the standard algorithm.
6.NS.3 Fluently add, subtract, multiply, and divide multi-digit decimals using the standard algorithm for each operation.
6.NS.4 Find the greatest common factor of two whole numbers less than or equal to 100 and the least common multiple of two whole numbers less than or equal to 12. Use the distributive property to
express a sum of two whole numbers 1-100 with a common factor as a multiple of a sum of two whole numbers with no common factor. For example, express 36 + 8 as 4 (9 + 2).
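The numbers in the 6.NS.4 example can be checked directly (an illustrative sketch, not part of the Framework):

```python
# 6.NS.4: greatest common factor, least common multiple, and using the
# distributive property to factor a sum: 36 + 8 = 4(9 + 2).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

assert gcd(36, 8) == 4
assert lcm(4, 6) == 12
g = gcd(36, 8)
assert 36 + 8 == g * (36 // g + 8 // g)   # 4 * (9 + 2) == 44
```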
Apply and extend previous understandings of numbers to the system of rational numbers.
6.NS.5 Understand that positive and negative numbers are used together to describe quantities having opposite directions or values (e.g., temperature above/below zero, elevation above/below sea
level, debits/credits, positive/negative electric charge); use positive and negative numbers to represent quantities in real-world contexts, explaining the meaning of 0 in each situation.
6.NS.6 Understand a rational number as a point on the number line. Extend number line diagrams and coordinate axes familiar from previous grades to represent points on the line and in the plane
with negative number coordinates.
Recognize opposite signs of numbers as indicating locations on opposite sides of 0 on the number line; recognize that the opposite of the opposite of a number is the number itself, e.g., -(-3) = 3,
and that 0 is its own opposite.
Understand signs of numbers in ordered pairs as indicating locations in quadrants of the coordinate plane; recognize that when two ordered pairs differ only by signs, the locations of the points
are related by reflections across one or both axes.
Find and position integers and other rational numbers on a horizontal or vertical number line diagram; find and position pairs of integers and other rational numbers on a coordinate plane.
6.NS.7 Understand ordering and absolute value of rational numbers.
Interpret statements of inequality as statements about the relative position of two numbers on a number line diagram. For example, interpret -3 > -7 as a statement that -3 is located to the right
of -7 on a number line oriented from left to right.
Write, interpret, and explain statements of order for rational numbers in real-world contexts. For example, write -3°C > -7°C to express the fact that -3°C is warmer than -7°C.
Understand the absolute value of a rational number as its distance from 0 on the number line; interpret absolute value as magnitude for a positive or negative quantity in a real-world situation.
For example, for an account balance of -30 dollars, write |-30| = 30 to describe the size of the debt in dollars.
Distinguish comparisons of absolute value from statements about order. For example, recognize that an account balance less than -30 dollars represents a debt greater than 30 dollars.
6.NS.8 Solve real-world and mathematical problems by graphing points in all four quadrants of the coordinate plane. Include use of coordinates and absolute value to find distances between points
with the same first coordinate or the same second coordinate.
Expressions and Equations
Apply and extend previous understandings of arithmetic to algebraic expressions.
6.EE.1 Write and evaluate numerical expressions involving whole-number exponents.
6.EE.2 Write, read, and evaluate expressions in which letters stand for numbers.
Write expressions that record operations with numbers and with letters standing for numbers. For example, express the calculation "Subtract y from 5" as 5 - y.
Identify parts of an expression using mathematical terms (sum, term, product, factor, quotient, coefficient); view one or more parts of an expression as a single entity. For example, describe the
expression 2(8 + 7) as a product of two factors; view (8 + 7) as both a single entity and a sum of two terms.
Evaluate expressions at specific values for their variables. Include expressions that arise from formulas in real-world problems. Perform arithmetic operations, including those involving
whole-number exponents, in the conventional order when there are no parentheses to specify a particular order (Order of Operations). For example, use the formulas V = s^3 and A = 6 s^2 to find the
volume and surface area of a cube with sides of length s = 1/2.
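The formula evaluation in 6.EE.2c can be carried out exactly (an illustrative sketch, not part of the Framework):

```python
# 6.EE.2c: evaluate V = s^3 and A = 6 s^2 for a cube with side s = 1/2,
# using exact fractions so no rounding enters.
from fractions import Fraction

s = Fraction(1, 2)
V = s**3        # volume of the cube
A = 6 * s**2    # total surface area (6 square faces)
assert V == Fraction(1, 8)
assert A == Fraction(3, 2)
```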
6.EE.3 Apply the properties of operations to generate equivalent expressions. For example, apply the distributive property to the expression 3(2 + x) to produce the equivalent expression 6 + 3x;
apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6 (4x + 3y); apply properties of operations to y + y + y to produce the equivalent expression 3y.
6.EE.4 Identify when two expressions are equivalent (i.e., when the two expressions name the same number regardless of which value is substituted into them). For example, the expressions y + y + y
and 3y are equivalent because they name the same number regardless of which number y stands for.
Reason about and solve one-variable equations and inequalities.
6.EE.5 Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to
determine whether a given number in a specified set makes an equation or inequality true.
6.EE.6 Use variables to represent numbers and write expressions when solving a real-world or mathematical problem; understand that a variable can represent an unknown number, or, depending on the
purpose at hand, any number in a specified set.
6.EE.7 Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers.
6.EE.8 Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. Recognize that inequalities of the form x > c or x < c have
infinitely many solutions; represent solutions of such inequalities on number line diagrams.
Represent and analyze quantitative relationships between dependent and independent variables.
6.EE.9 Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the dependent variable,
in terms of the other quantity, thought of as the independent variable. Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the
equation. For example, in a problem involving motion at constant speed, list and graph ordered pairs of distances and times, and write the equation d = 65t to represent the relationship between
distance and time.
Solve real-world and mathematical problems involving area, surface area and volume.
6.G.1 Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in
the context of solving real-world and mathematical problems.
6.G.2 Find the volume of a right rectangular prism with fractional edge lengths by packing it with unit cubes of the appropriate unit fraction edge lengths, and show that the volume is the same as
would be found by multiplying the edge lengths of the prism. Apply the formulas V = l w h and V = b h to find volumes of right rectangular prisms with fractional edge lengths in the context of
solving real-world and mathematical problems.
6.G.3 Draw polygons in the coordinate plane given coordinates for the vertices; use coordinates to find the length of a side joining points with the same first coordinate or the same second
coordinate. Apply these techniques in the context of solving real-world and mathematical problems.
6.G.4 Represent three-dimensional figures using nets made up of rectangles and triangles, and use the nets to find the surface area of these figures. Apply these techniques in the context of
solving real-world and mathematical problems.
Statistics and Probability
Develop understanding of statistical variability.
6.SP.1 Recognize a statistical question as one that anticipates variability in the data related to the question and accounts for it in the answers. For example, "How old am I?"is not a statistical
question, but "How old are the students in my school?" is a statistical question because one anticipates variability in students´ ages.
6.SP.2 Understand that a set of data collected to answer a statistical question has a distribution which can be described by its center, spread, and overall shape.
6.SP.3 Recognize that a measure of center for a numerical data set summarizes all of its values with a single number, while a measure of variation describes how its values vary with a single
Summarize and describe distributions.
6.SP.4 Display numerical data in plots on a number line, including dot plots, histograms, and box plots.
6.SP.5 Summarize numerical data sets in relation to their context, such as by:
Reporting the number of observations.
Describing the nature of the attribute under investigation, including how it was measured and its units of measurement.
Giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation), as well as describing any overall pattern and any striking
deviations from the overall pattern with reference to the context in which the data were gathered.
Relating the choice of measures of center and variability to the shape of the data distribution and the context in which the data were gathered.
Examples of Linking Supporting Clusters to the Major Work of the Grade
• Solve real-world and mathematical problems involving area, surface area and volume: In this cluster, students work on problems with areas of triangles and volumes of right rectangular prisms,
which connects to work in the Expressions and Equations domain. In addition, another standard within this cluster asks students to draw polygons in the coordinate plane, which supports other work
with the coordinate plane in The Number System domain.
Harleysville Trigonometry Tutors
...Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely score 800/800 on practice tests. Taught high school math
and have extensive experience tutoring in SAT Math.
19 Subjects: including trigonometry, calculus, statistics, geometry
I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and
Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including trigonometry, geometry, algebra 2, GRE
...This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. I am especially personable and I know I have the ability to inspire
students to have success beyond their expectations especially with the creative method I use for teachin...
16 Subjects: including trigonometry, Spanish, calculus, physics
...My Experience I am currently an adjunct professor of Chemistry at Rowan University. While I don't have a formal teaching certificate, I love teaching and my students appreciate my approach to
teaching. Maybe I'll be your chemistry teacher one day!
14 Subjects: including trigonometry, chemistry, algebra 1, algebra 2
...I have tutored math and sciences in many volunteer and job opportunities. I have experience with after school tutoring from 2003-2006. I was an Enon Tabernacle after school ministry tutor for
elementary and high school students 2011-2012.
13 Subjects: including trigonometry, chemistry, geometry, biology
Posts about arxiv on The Gauge Connection
Much closer to the Standard Model
Today, the daily from arXiv yields a contribution from John Ellis and Tevong You analyzing new data presented at Aspen and Moriond in the last two weeks by CMS and ATLAS about the Higgs particle (see here). Their result can be summarized in a figure (shown in the original post) that is really impressive. It means that the updated data coming out from the LHC constrain even more the Higgs particle found so far to be the Standard Model one. Another impressive conclusion they are able to draw is that the couplings appear to be proportional to the masses, as should be expected from a well-behaved Higgs particle. But they emphasize that this is "a" Higgs particle and the
scenario is well consistent with supersymmetry. Citing them:
The data now impose severe constraints on composite alternatives to the elementary Higgs boson of the Standard Model. However, they do not yet challenge the predictions of supersymmetric models,
which typically make predictions much closer to the Standard Model values. We therefore infer that the Higgs coupling measurements, as well as its mass, provide circumstantial support to
supersymmetry as opposed to these minimal composite alternatives, though this inference is not conclusive.
They say that further progress on the understanding of this particle could be granted after the upgraded LHC will run and, indeed, nobody is expecting some dramatic change into this scenario from the
data at hand.
John Ellis & Tevong You (2013). Updated Global Analysis of Higgs Couplings. arXiv:1303.3879v1
From arxiv today
Today the daily from arxiv is particularly rich. A couple of papers are from my friend Marco Ruggieri, one in collaboration with Raul Gatto (see here) and the other is the contribution to Paris
Conference proceedings (see here). Marco is currently using a Nambu-Jona-Lasinio model to understand the behavior of hadronic matter at high temperatures and densities. These works imply the use of a
chiral chemical potential that has the benefit to permit an identification of the critical end-point on the lattice without the infamous sign problem, as Marco showed quite recently (see here). These
works are well founded as the Nambu-Jona-Lasinio model is the right low-energy limit of QCD (see here and here).
Another theoretical confirmation on my result on the beta function for Yang-Mills theory appeared today (see here). This author works out classical solutions to Yang-Mills theory and derives a “beta
function” of the form $4\alpha_s$, exactly the one I get (see here). This result marks evidence of an infrared trivial fixed point for this theory and the correctness of the mapping on a scalar
field. This paper gives just a clue as the treatment remains at a classical level.
Last but not least, the complete Coleman’s lectures appeared finally on arxiv (see here). We have to give a great thank to Bryan Chen and Yuan-Sen Ting for their excellent work. These lectures are a
reference today yet and I hope they will see the light in a book with some addenda from most of Coleman’s famous students.
Raoul Gatto & Marco Ruggieri (2011). Hot Quark Matter with an Axial Chemical Potential. arXiv:1110.4904v1
Marco Ruggieri (2011). Quark Matter with a Chiral Chemical Potential. arXiv:1110.4907v1
Marco Ruggieri (2011). The Critical End Point of Quantum Chromodynamics Detected by Chirally Imbalanced Quark Matter. Phys. Rev. D 84:014011, 2011. arXiv:1103.6186v2
Marco Frasca (2008). Infrared QCD. Int. J. Mod. Phys. E 18:693-703, 2009. arXiv:0803.0319v5
Kei-Ichi Kondo (2010). Toward a first-principle derivation of confinement and chiral-symmetry-breaking crossover transitions in QCD. Phys. Rev. D 82, 065024 (2010). arXiv:1005.0314v2
Ding-fang Zeng (2011). Confinings of QCD at purely classic levels. arXiv:1110.5054v1
Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory. PoS FacesQCD:039, 2010. arXiv:1011.3643v3
Sidney Coleman (2011). Notes from Sidney Coleman's Physics 253a. arXiv:1110.5013v1
It was twenty years ago today . . .
With these beautiful words starts a recollection paper by the founder of arXiv, Paul Ginsparg. This is worth the reading as this history spans a number of years exactly overlapping the computer
revolution that definitely changed our lives. What Paul also changed through these new information tools was the way researchers should approach scientific communication. It is a revolution that is
not stopped yet and all the journals I submit my papers have a link to arXiv for direct uploading of the preprint. This change has had also a great impact on the way these same journals should
present to authors, readers and referees as well at their website.
For my readers I would like just to point out how relevant all this was for our community with Grisha Perelman's case. I think all of you are well aware that Perelman never published his papers in a journal: you can find both of them on arXiv. Those preprints paid off as much as a Fields medal and a Millennium prize. Not bad, I should say, for a couple of unpublished papers. Indeed, it is common
matter to have a paper largely discussed well before its publication and often a preprint becomes a case in the community without not even seeing the light of a publication. It is quite common for us
doing research to console colleagues complaining about the harsh peer-review procedure by saying that today exists arXiv and that is enough to make your work widely known.
I was a submitter since 1994, almost at the very start, and I wish that the line of successes of this idea will never end.
Finally, to prove how useful arXiv is for our community, I would like to point out to you, for your summer readings, a couple of papers. The first one is this from R. Aouane, V. Bornyakov, E.-M. Ilgenfritz, V. Mitrjushkin, M. Müller-Preussker and A. Sternbeck. My readers should know that these researchers always do fine work and get important results from their lattice computations. The same happens here, where they study the gluon and ghost propagators at finite temperature in the Landau gauge. Their conclusion about Gribov copies is really striking, supporting my general view on this matter (see here) that Gribov copies are not essential, not even when one raises the temperature. Besides, they discuss the question of a proper order parameter to identify the phase transition that we know exists in this case.
The next paper is authored by Tereza Mendes, Axel Maas and Stefan Olejnik (see here). The idea in this work is to consider a gauge, the $\lambda$-gauge, with a free parameter interpolating between different gauges, to see the smoothness of the transition and the way the propagators change. They reach a volume of $70^4$, but Tereza told me that the errors are still too large for a neat comparison with smaller volumes. In any case, this is a route to be pursued, and I am curious about the way the interpolated propagator behaves in the deep infrared with larger lattices.
Discussions on Higgs identification are still very much alive (you can see here). Take a look and enjoy!
Paul Ginsparg (2011). It was twenty years ago today … arXiv arXiv: 1108.2700v1
R. Aouane, V. Bornyakov, E. -M. Ilgenfritz, V. Mitrjushkin, M. Müller-Preussker, & A. Sternbeck (2011). Landau gauge gluon and ghost propagators at finite temperature from
quenched lattice QCD arXiv arXiv: 1108.1735v1
Axel Maas, Tereza Mendes, & Stefan Olejnik (2011). Yang-Mills Theory in lambda-Gauges arXiv arXiv: 1108.2621v1
A physics software repository
Scientific publishing has undergone a significant revolution since Paul Ginsparg introduced arXiv. Before this great idea, people doing research used to send preprints of their work to some selected colleagues for comments. This kind of habit was costly, time consuming and reached very few people around the world until the paper eventually went through some archival journal. Ginsparg's idea was to use the web to accomplish this task, making papers widely known to the whole community well before publication. This changed the way we do research, as it is common practice to put a paper on arXiv before submission to journals. This has had the effect of downgrading the relevance of these journals for scientific communication. This is so true that Perelman's papers on the Poincaré conjecture never appeared in the literature, they are just on arXiv, but the results were nonetheless generally acknowledged by the scientific community. This represents an extraordinary achievement for arXiv and shows unequivocally the greatness of Ginsparg's idea.
Of course, research is not just writing articles and getting them published somewhere. An example is physics, where a lot of research activity relies on writing computer programs. This can happen on a lot of platforms such as Windows, Mac, Linux or machines performing parallel computations. Generally, these programs are relegated to limited use by a small group of researchers, while other people around the world with similar problems, who could be in need of them, are forced to reinvent the wheel. This happens again and again, and often one relies on the kindness of colleagues who in some cases may not be willing to give away their software. This situation is very similar to the one encountered before arXiv came into operation. So, my proposal is quite simple: people in the scientific community willing to share their software should be encouraged to do so through a repository that fits the bill. This could be easily obtained by extending arXiv itself, which already contains several papers presenting software written by colleagues who, aiming to share, just put a link there. With a proper repository, it would be easier to maintain versions, as already happens for papers, and there would be no need to create an ad hoc site that could be lost in the course of time.
I do not know if this proposal will meet with success, but it is my personal conviction that a lot of people around the world have this need, as witnessed by the popularity of certain links to download programs for doing computations in physics. This need is growing thanks to parallel computation made available to desktop computers, which today is a reality. I look forward to hearing news about this.
Today in arXiv
After the excitement over the findings at Tevatron, we turn back to routine. Of course, I have never forgotten to cast a glance at arXiv, where the vitality of the physics community is crystal clear. I want to put down these few lines to point a couple of papers appearing today on the preprint archive to your attention. Today, Nele Vandersickel uploaded her PhD thesis (see here). She got her PhD in March this year. Nele was one of the organizers of the beautiful and successful conference in Ghent (Belgium), where I was present last year (see here, here and here). But most important is her research work with the group of Silvio Sorella and David Dudal, which is the central theme of her thesis. Nele does an excellent job in presenting a lot of introductory material, difficult to find in the current literature, besides her original research. Sorella and Dudal have accomplished an interesting research endeavor by supporting the Gribov-Zwanziger scenario, in its initial formulation at odds with lattice data, with their view that condensates must be accounted for. In this way, the Gribov-Zwanziger scenario can be brought into agreement with lattice computations. These theoretical studies describe a consistent approach, and these authors were able to obtain the masses of the first glueball states. I would like to conclude with my compliments to Nele on her PhD and on the excellent work she and the other people in the group were able to accomplish.
The other fine paper I have found is a report by a group of authors, "Discovering Technicolor", giving a full account of the current situation for this theoretical approach to the way particles acquire their masses. As you know, the original formulation of the Higgs particle that entered the Standard Model has some drawbacks that motivated several people to look for better solutions. Technicolor is one of these. One assumes the existence of a set of fermions with a self-interaction. We know that models of this kind, such as the Nambu-Jona-Lasinio model, are able to break symmetries and give masses to massless particles. Indeed, one can formulate a consistent theory with respect to all the precision tests of the Standard Model, as also discussed in this report. This means in turn that in accelerator facilities one should look for some other fermions and their bound states, which can also mimic a standard Higgs scalar boson. It is important to note that in this way some drawbacks of the original Higgs mechanism are overcome. Of course, the relevance of this report cannot be overstated in view of the results coming out from LHC, and we could know very soon whether an idea like Technicolor is the right one or not. For sure, this is the time for answers.
Nele Vandersickel (2011). A study of the Gribov-Zwanziger action: from propagators to glueballs arXiv arXiv: 1104.1315v1
J. R. Andersen, O. Antipin, G. Azuelos, L. Del Debbio, E. Del Nobile, S. Di Chiara, T. Hapola, M. Jarvinen, P. J. Lowdon, Y. Maravin, I. Masina, M. Nardecchia, C. Pica, & F. Sannino (2011).
Discovering Technicolor arXiv arXiv: 1104.1255v1
Problems at arxiv
Today I received the following message from arxiv at Cornell:
Access Denied
Sadly, you do not currently appear to have permission to access http://arxiv.org/
If you believe this determination to be in error, see http://arxiv.org/denied.html for additional information.
All the mirrors seem to work well.
500,000 and more!
Cornell announced that arXiv has passed 500,000 submissions (see here). This is a key milestone. At this rate, the 1,000,000 threshold should be reached around 2015. This is undeniable proof of a great success and evidence of a revolution in scientific publishing. Articles are all freely accessible. The future will tell what other impact this will have, but already today we see the whole publishing community reshaping itself to account for the very existence of arXiv.
I think this is the moment for cheers.
Paul Ginsparg and arxiv
One of the greatest revolutions in scientific publishing we have witnessed in recent years has been the invention of arXiv by Paul Ginsparg.

A beautiful article he wrote for Physics World has appeared in the latest issue of the journal (see here). This paper was emotionally striking for me and turned back time to the pioneering era of personal computing. I still remember the PDP-11 machine I worked on at the University of Rome "La Sapienza" to measure the lifetime of the $K_s^0$, repeating an experiment performed years before, with two other classmates. As soon as I discovered arXiv I joined this adventure, submitting my preprints to it. Indeed, you can find papers of mine there starting from 1994.

Since then, arXiv has changed in a significant way, introducing a web interface, endorsement and registration requirements. We have also been part of Perelman's story: his preprints, never published but just put on arXiv, contain the proof of the Poincaré conjecture. And yet someone still thinks preprints are not worth a publication…
Foster City, CA Calculus Tutor
Find a Foster City, CA Calculus Tutor
...I also am talented at breaking down difficult material and explaining it in a way easy to understand, tailored to the level the student is at. As I've always said, "If you can't explain it to
an intelligent 12 year old, then you don't really understand it" I explain to my students because I u...
24 Subjects: including calculus, chemistry, physics, geometry
...I have worked with students from a wide variety of curricula and various textbooks, and in all sorts of school settings. Algebra 2 courses vary greatly in content and emphasis, and my
experience makes it possible for me to help students with any Algebra 2 course. For example, some Algebra 2 cou...
15 Subjects: including calculus, geometry, algebra 1, GED
...So I turned to something I had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING! I have been doing it professionally now for over ten years. I
love it when my student's understand a new concept.
10 Subjects: including calculus, geometry, precalculus, algebra 1
...I teach all levels of Chinese to people of all ages, and I can adjust my teaching speed and content according to your need and interest. I can teach the specific aspect of Chinese culture and
language you want to learn. I took biostatistics in college and through UC Berkeley Extension.
22 Subjects: including calculus, geometry, statistics, biology
I am a Naperville North High School and University of Illinois Urbana-Champaign graduate with a degree in Mathematics. I have extensive problem-solving experience including winning 1st place at
the ICTM State Math Contest in 2007 and placing in the top 500 in the national Putnam competition. My tu...
17 Subjects: including calculus, chemistry, physics, statistics
Roslyn Heights Algebra 2 Tutor
Find a Roslyn Heights Algebra 2 Tutor
...It is easy for me to get along with people and help them with any problems they have, in education and even personally if necessary. A little about me, I grew up in New York with great family
and friends. I am responsible, hardworking, caring, and a great listener.
29 Subjects: including algebra 2, English, chemistry, geometry
...The next levels of algebraic thinking- linear algebra and abstract algebra, were also mainstays in my studies. I believe that studying higher levels of the subjects gives you a real
understanding of them that you cannot get from only studying the basics. I think it's sort of akin to a writer wh...
22 Subjects: including algebra 2, calculus, geometry, trigonometry
...I will invest quality time, similar to how I did during studies for my Bachelor of Science degree in mechanical engineering at Hofstra University. I am determined to provide this kind of
service through listening to my students' needs and expectations. In addition, I will seek to achieve a common ground with all my future students.
20 Subjects: including algebra 2, physics, writing, algebra 1
...I have successfully completed coursework at Polytechnic Institute of NYU with an overall GPA of 4.00 out of 4.00. At Polytechnic Institute of NYU, the following coursework (pertaining to
Computer Programming) was covered in detail: Introduction to Programming (C++) Data Struct...
28 Subjects: including algebra 2, calculus, statistics, geometry
...Majoring in some type of engineering at a rigorous tech school means mastering single and multivariable calculus at a much higher level than a standard college or AP course. While mastering the
specifics is important, I have found many teachers are often unable to convey the important underlying...
22 Subjects: including algebra 2, chemistry, calculus, physics
Radiation Resistance
Derived statistic; also appears in Van Buren.

Modifies: the amount of radiation your character is exposed to is reduced by this amount.

"The amount of radiation you are exposed to is reduced by this percentage. Radiation Resistance can be modified by the type of armor worn, and anti-radiation chems."
— Fallout in-game description
Radiation Resistance is a derived statistic in Fallout, Fallout 2, Fallout 3, Fallout Tactics and Fallout: New Vegas.
It reduces radiation damage by a percentage.
Rad-X temporarily increases Radiation Resistance.
Fallout, Fallout 2, Fallout Tactics, Fallout: New Vegas
$\text{Initial level}=\text{Endurance}\times2$
• Average characters will have a 10% radiation resistance.
• The Fast Metabolism trait sets initial radiation resistance to 0.
• The Rad Resistance perk increases radiation resistance by 15% per rank. (10% per rank in FO1)
• The Vault City Inoculations perk increases radiation resistance by 10%.
• Rad-X temporarily increases radiation resistance by 50%. Half of it is lost after 24 hours, with resistance returning to its original value after another 24 hours.
• In Fallout, Radiation Resistance can go as high as 100% (Which is needed to survive The Glow). In Fallout 2, it is capped at 95%, in Fallout: New Vegas, however it is capped at 85%.
Fallout 3
$\text{Initial level}\,\%=(\text{Endurance}-1)\times2$
Example: a starting Endurance of 5 gives $(5-1)\times2=8\%$ Radiation Resistance.
Permanently increasing Radiation Resistance
Temporarily increasing Radiation Resistance
• In Fallout 3, Radiation Resistance is capped at 85%.
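The two formulas and the per-game caps above can be combined into a small calculator. This is an illustrative sketch, not game code; the game keys and cap values are simply transcribed from the notes above.

```python
# Illustrative sketch of the base Radiation Resistance formulas quoted above.
# Caps as stated in the article: Fallout 100%, Fallout 2 95%, Fallout 3 and
# Fallout: New Vegas 85%. The Fast Metabolism trait zeroes the initial value.
CAPS = {"fallout": 100, "fallout2": 95, "fallout3": 85, "new_vegas": 85}

def initial_rad_resistance(game, endurance, fast_metabolism=False):
    if fast_metabolism:
        return 0
    if game == "fallout3":
        base = (endurance - 1) * 2  # Fallout 3 formula
    else:
        base = endurance * 2        # Fallout / Fallout 2 / Tactics / New Vegas
    return min(base, CAPS[game])
```

This matches the worked example above: an average character (Endurance 5) gets 10% in Fallout 2 and 8% in Fallout 3.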
The Resource Description Framework (RDF) version 1.1 defines the concept of RDF datasets, a notion introduced first by the SPARQL specification [[RDF-SPARQL-QUERY]]. An RDF dataset is defined as a
collection of RDF graphs where all but one are named graphs associated with an IRI or blank node (the graph name), and the unnamed default graph [[RDF11-CONCEPTS]]. Given that RDF is a data model
equipped with a formal semantics [[RDF11-MT]], it is natural to try and define what the semantics of datasets should be.
The RDF Working Group was chartered to provide such semantics in its recommendation:
Required features
□ Standardize a model and semantics for multiple graphs and graphs stores [...]
However, discussions within the Working Group revealed that very different assumptions currently exist among practitioners, who are using RDF datasets with their own intuition of the meaning of
datasets. Defining the semantics of RDF datasets requires an understanding of the two following issues:
• what the graph names (IRI or blank node) denote, or what are the constraints on what the names can possibly denote;
• how the triples in the named graph influence the meaning of the dataset.
Possible choices for the denotation of graph names are:
• it denotes the RDF graph in the (name,graph) pair;
• it denotes the pair itself;
• it denotes a supergraph of the graph inside the pair;
• it denotes a container for the RDF graph, that is, a mutable element;
• it denotes the information resource that can be obtained by dereferencing the graph name, when it is an IRI and if such resource exists;
• it denotes an arbitrary resource that is constrained to be in a special relationship (for instance, ex:hasGraph) with the graph inside the pair;
• it denotes the deductive closure of the graph inside the pair;
• it denotes an arbitrary resource that is in a special relation with the deductive closure, or with a superset of the graph;
• it denotes an unconstrained resource;
• etc.
Even with an intuitive understanding of what the truth of an RDF dataset should be, the precise model-theoretic formalization can be subject to many variations.
Possible choices for the meaning of the triples in the named graphs include:
• all the triples in the named graphs and default graphs contribute to the truth of the dataset in the same way triples contribute to the truth of a single graph;
• the triples of the named graphs are considered part of the knowledge of the default graph;
• different named graphs indicate different “contexts”, or different “worlds”, and the triples inside a named graph are assumed to be true in the associated context only; in this case, the default
graph can be interpreted as yet another context, or be considered as a “global context” which must hold in all contexts, or again as metadata about the contexts;
• the named graphs are considered as “hypothetical graphs” which bear the same consequences as their RDF graphs, but they do not participate in the truth of the dataset; this is similar to the
“context” option above but it allows a graph to contain contradictions without making the dataset contradictory;
• the triples are merely quoted without any indication of what they mean; they do not participate in the truth of a dataset.
Depending on the assumptions taken with respect to these two issues, the formalization of the semantics of RDF datasets can vary very much.
In this Working Group Note, we examine the propositions that were given by Working Group members in the course of a one-year-and-a-half debate.
Existing Work
We first take a look at existing specifications that could shed a light on how the semantics of datasets should be defined. There are three important documents that closely relate to the issue:
• the RDF semantics, as standardized in 2004 [[RDF-MT]] and its revision in 2014 [[RDF11-MT]];
• the article Named Graphs by Carroll et al. [[CARROLL-05]], which first introduced the term “named graph” and contains a section on formal semantics;
• the SPARQL specification [[SPARQL11-QUERY]], which defines RDF datasets and how to query them.
The RDF semantics
As described in RDF 1.1 Semantics, a set of RDF graphs can be interpreted as either the union of the graphs or as their merge ([[RDF11-MT]], Technical note, Section 5.2).
So, a first intuition could be that an RDF dataset, being presented as a collection of graphs, should mean exactly what the set of its named graphs and default graph means. However, this completely
leaves out the potential meaning of graph names, which could be valuable indicators for the truth of a dataset.
Formally, the semantics of RDF defines a notion of interpretation for a set of triples (i.e., an RDF graph), which then can extend to a set of RDF graphs. A dataset is neither a set of triples nor a
set of RDF graphs. It is a set of pairs (name,graph) together with a distinguished RDF graph and the RDF semantics does not itself specify a meaning for these pairs.
Conceptually, it is problematic since one of the reasons for separating triples into distinct (named) graphs is to avoid propagating the knowledge of one graph to the entire triple base. Sometimes,
contradicting graphs need to coexist in a store. Sometimes named graphs are not endorsed by the system as a whole; they are merely quoted.
The Named Graphs paper
In Carroll et al. [[CARROLL-05]], a named graph is defined as a pair comprising an IRI and an RDF graph. The notion of RDF interpretation is extended to named graphs by saying that the graph IRI in
the pair must denote the pair itself. This unambiguously answers the question of what the graph IRI denotes. This can then be used to define proper dataset semantics, as shown in Section 3.3. Note
that it is deliberate that the graph IRI is forced to denote the pair rather than the RDF graph. This is done in order to differentiate two occurrences of the same RDF graph that could have been
published at different times, or authored by different people. A simple reference to the RDF graph would simply identify a mathematical set, which is the same wherever it occurs.
The SPARQL specification
RDF 1.1 borrows the notion of RDF dataset from the SPARQL specification [[SPARQL11-QUERY]], with the notable difference that RDF 1.1 allows graph names to be blank nodes. So, in order to understand the semantics of datasets, it is worthwhile looking at how SPARQL uses datasets. SPARQL defines what answers to queries posed against a dataset are, but it never defines the notions that are key to a
model-theoretic formal semantics: it neither presents interpretations nor entailment. Still, it is worth noticing that an ASK query that only contains a basic graph pattern without variables yields the same result as asking whether the RDF graph in the query is entailed by the default graph. Based on this observation, one may extrapolate that an ASK query containing no variables and only GRAPH graph patterns would yield the same result as dataset entailment.
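For ground data (no blank nodes), simple entailment of one graph by another reduces to set inclusion, so this extrapolated ASK behavior can be sketched in a few lines of Python. The representation (triples as tuples, a dataset as a default graph plus a name-to-graph mapping) and the function name are illustrative assumptions, not part of the SPARQL specification.

```python
# Hypothetical sketch: a variable-free ASK against a ground dataset succeeds iff
# the triples in the default pattern occur in the default graph and the triples
# in each GRAPH pattern occur in the correspondingly named graph.
def ask_ground(dataset, default_pattern, graph_patterns):
    default_graph, named_graphs = dataset
    if not default_pattern <= default_graph:
        return False
    return all(name in named_graphs and pattern <= named_graphs[name]
               for name, pattern in graph_patterns.items())
```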
This can be used as a guide for formalizing the semantics of datasets, as can be seen in Section 3.7.
Formal definitions
This section presents the different options proposed, together with their formal definitions. We include each time a discussion of the merits of the choice, and some properties.
Each subsection here describes the option informally, before presenting the formal definitions. As far as the formal part is concerned, one has to be familiar with the definitions given in RDF
Semantics. We rely a lot on the notion of interpretation and entailment, which are key in model theory.
All proposed options share some commonalities:
• they behave identically on datasets that do not contain named graphs; precisely, entailment between datasets having no named graph is carried out in the same way as entailment between RDF graphs;
• they define notions of interpretation and entailment in function of the corresponding notions in RDF Semantics.
The first item above reflects the indication given in [[RDF11-MT]] (Section "RDF Datasets") with respect to dataset semantics: “a dataset SHOULD be understood to have at least the same content as its
default graph”.
The dependency on RDF semantics is such that most of the dataset semantics below reuse RDF semantics as a black box. More precisely, it is not necessary to be specific about how truth of RDF graphs
is defined as long as there is a notion of interpretation that determines the truth of a set of triples. In fact, RDF Semantics does not define a single formal semantics, but multiple ones, depending
on what standard vocabularies are endorsed by an application (such as the RDF, RDFS, XSD vocabularies). Consequently, we parameterize most of the definitions below with an unspecified entailment
regime E. RDF 1.1 defines the following entailment regimes: simple entailment, D-entailment, RDF-entailment, RDFS-entailment. Additionally, OWL defines two other entailment regimes, based on the OWL
2 direct semantics [[OWL2-DIRECT-SEMANTICS]] and the OWL 2 RDF-based semantics [[OWL2-RDF-BASED-SEMANTICS]].
For an entailment regime E, we will say E-interpretation, E-entailment, E-equivalence, E-consistency to describe the notions of interpretations, entailment, equivalence and consistency associated
with the regime E. Similarly, we will use the terms dataset-interpretation, dataset-entailment, dataset-equivalence, dataset-consistency for the corresponding notions in dataset semantics.
This document provides examples in TriG [[TRIG]] and assumes that the following prefixes are defined:
Namespace prefixes and IRIs used in this document
Namespace prefix Namespace IRI
rdf http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs http://www.w3.org/2000/01/rdf-schema#
xsd http://www.w3.org/2001/XMLSchema#
ex http://example.org/voc#
Named graphs have no meaning
The simplest semantics defines an interpretation of a dataset as an RDF interpretation of the default graph. The dataset is true, according to the interpretation, if and only if the default graph is
true. In this case, any datasets that have equivalent default graphs are dataset-equivalent.
This means that the named graphs in a dataset are irrelevant to determining the truth of a dataset. Therefore, arbitrary modifications of the named graphs in a graph store always yield a logically
equivalent dataset, according to this semantics.
Considering an entailment regime E, a dataset-interpretation with respect to E is an E-interpretation. Given an interpretation I and a dataset D having default graph G and named graphs NG, I(D) is
true if and only if I(G) is true.
Examples of entailment and non-entailments
Consider the following dataset:
{ ex:s ex:p ex:o . }
ex:g1 { ex:a ex:b ex:c }
does not dataset-entail:
{ ex:s ex:p ex:o .
ex:a ex:b ex:c .}
but dataset-entails:
{} # empty default graph
ex:g2 { ex:x ex:y ex:z }
Since graph names are not particularly constrained, one can use them in triples, for instance:
{ ex:g1 ex:author ex:Bob .
ex:g1 ex:created "2013-09-17"^^xsd:date .}
ex:g1 { ex:a ex:b ex:c }
but, since named graphs are irrelevant in this semantics, it would dataset-entail:
{ ex:g1 ex:author ex:Bob .
ex:g1 ex:created "2013-09-17"^^xsd:date .}
ex:g1 { ex:x ex:y ex:z }
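Restricted to ground data, where simple entailment is just set inclusion, this semantics can be sketched in Python: only the default graphs matter. The data representation below (triples as tuples, a dataset as a pair of default graph and named-graph mapping) is an illustrative assumption.

```python
# Sketch of dataset-entailment under the "named graphs have no meaning"
# semantics, for ground (blank-node-free) datasets: D1 entails D2 iff the
# default graph of D2 is a subset of the default graph of D1.
def entails_ignoring_named(d1, d2):
    (default1, _named1), (default2, _named2) = d1, d2
    return default2 <= default1

d1 = ({("s", "p", "o")}, {"g1": {("a", "b", "c")}})
d2 = ({("s", "p", "o")}, {"g2": {("x", "y", "z")}})   # named graphs differ
d3 = ({("s", "p", "o"), ("a", "b", "c")}, {})          # named triple moved to default
```

As in the examples above, d1 dataset-entails d2 (the named graphs are ignored) but does not dataset-entail d3.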
Properties of this dataset semantics
Adopting this semantics is convenient since it merely ignores named graphs in a dataset for any reasoning task. As a result, datasets can simply be treated as regular RDF graphs by extracting the default graph. Named graphs can still be used to preserve useful information, but they bear no more meaning than a comment in program source code.
The obvious disadvantage is that, since named graphs are completely disregarded in terms of meaning, there is no guarantee that any information intended to be conveyed by the named graphs is
preserved by inference.
Default graph as union or as merge
It is sometimes assumed that named graphs are simply a convenient way of sorting the triples but all the triples participate in a united knowledge base that takes the place of the default graph. More
precisely, a dataset is considered to be true if all the triples in all the graphs, named or default, are true together. This description allows two formalizations of dataset semantics, depending on
how blank nodes spanning several named graphs are treated. Indeed, if one blank node appears in several named graphs, it may be intentional, to indicate the existence of only one thing across the
graphs, in which case union is appropriate. If the sharing of blank nodes is incidental, merge is also an applicable solution.
Formalization: first version
We define a dataset-interpretation with respect to an entailment regime E as an E-interpretation. Given a dataset-interpretation I and a dataset D having default graph G and named graphs NG, I(D) is true if and only if I(G) is true and, for every named graph (n,g) in NG, I(g) is true.

This is equivalent to saying that I(D) is true if and only if I(H) is true, where H is the merge of all the RDF graphs, named or default, appearing in D.
Formalization: second version
We define a dataset-interpretation with respect to an entailment regime E as an E-interpretation. Given a dataset-interpretation I and a dataset D having default graph G and named graphs NG, I(D) is
true if and only if I(H) is true where H is the union of all the RDF graphs, named or default, appearing in D.
An alternative presentation of this variant is the following: define I+A to be an extended interpretation which is like I except that it uses A to give the interpretation of blank nodes; define blank
(D) to be the set of blank nodes in D. Then I(D) is true if and only if [I+A](D) is true for some mapping A from blank(D) to the set of resources in I.
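The difference between the two versions comes down to how blank nodes shared across graphs are combined. The sketch below, with blank nodes represented as strings starting with "_:" (an illustrative convention), shows that union identifies shared blank nodes while merge standardizes them apart.

```python
from itertools import count

def graph_union(graphs):
    # Union keeps a blank node shared across graphs as one node.
    return set().union(*graphs)

def graph_merge(graphs):
    # Merge renames blank nodes apart: one fresh name per (graph, blank node).
    fresh = count()
    merged = set()
    for g in graphs:
        renaming = {}
        def rename(term):
            if isinstance(term, str) and term.startswith("_:"):
                if term not in renaming:
                    renaming[term] = "_:b%d" % next(fresh)
                return renaming[term]
            return term
        merged |= {tuple(rename(t) for t in triple) for triple in g}
    return merged

g1 = {("_:x", "ex:p", "ex:o")}
g2 = {("_:x", "ex:q", "ex:o")}
```

Here the union has a single subject carrying both properties, while the merge has two distinct subjects, which is exactly why the two formalizations can differ on datasets whose graphs share blank nodes.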
Consider the following dataset:
{ ex:s ex:p ex:o . } # default graph
ex:g1 { ex:a ex:b ex:c }
Under both versions of this semantics, it dataset-entails:

{ ex:s ex:p ex:o .
ex:a ex:b ex:c .}
If the entailment regime E is RDFS with the recognized datatype xsd:integer, then the following RDF dataset is RDFS-dataset-inconsistent:
{ } # empty default graph
ex:g1 { ex:age rdfs:range xsd:integer . }
ex:g2 { ex:bob ex:age "twenty" .}
Properties of this dataset semantics
This semantics allows one to partition the triples of an RDF graph into multiple named graphs for easier data management, while retaining the meaning of the overall RDF graph. Note that this choice of semantics does not impact the way graph names are interpreted: it is possible to further constrain the graph names, for example to denote the RDF graphs associated with them. The possible interpretations of graph names, and their consequences, are presented in the next sections.
This semantics is implicitly assumed by existing graph store implementations. The OWLIM RDF database management system implements reasoning techniques over RDF datasets that materialize inferred
statements into the database [[citation needed]]. This is done by taking the union of the graphs in the named graphs, applying standard entailment regimes over this RDF graph and putting the inferred
triples into the default graph.
This dataset semantics makes all triples in the named graphs contribute to a global knowledge, thus making the whole dataset inconsistent whenever two graphs are mutually contradictory. In situations
where named graphs are used to store RDF graphs obtained from various sources on the open Web, inconsistencies or contradictions can easily occur. Notably, Web crawlers of search engines harvest all
RDF documents, and it is a known fact that the Web contains documents serializing inconsistent RDF graphs as well as documents that are mutually contradictory yet consistent on their own. In this
case, this semantics can be seen as problematic.
The graph name denotes the named graph or the graph
It is common to use the graph name as a way to identify the RDF graph inside the named graphs, or rather, to identify a particular occurrence of the graph. This allows one to describe the graph or
the graph source in triples. For instance, one may want to say who the creator of a particular occurrence of a graph is. Assuming this semantics for graph names amounts to saying that each named graph
pair is an assertion that sets the referent of the graph name to be the associated graph or named graph pair.
Intuitively, this semantics can be seen as quoting the RDF graphs inside the named graphs. In this sense, ex:alice {ex:bob ex:is ex:smart} has to be understood as “Alice said: “Bob is smart”” which
does not entail “Alice said: “Bob is intelligent”” because Alice did not use the word “intelligent”, even though “smart” and “intelligent” can be understood as equivalent. Note, however, that this
analogy is only valid insofar as it can provide an intuition of this type of semantics, but the formalization does not actually refer to speech and the act of asserting.
In order to be consistent with RDF model theory, blank nodes used as graph names are treated like existential variables. Consequently, their semantics is formalized according to the same notation
presented in [[RDF11-MT]]:
Suppose I is an interpretation and A is a mapping from a set of blank nodes to the universe IR of I. Define the mapping [I+A] to be I on names, and A on blank nodes on the set: [I+A](x)=I(x) when
x is a name and [I+A](x)=A(x) when x is a blank node; and extend this mapping to triples and RDF graphs using the rules given above for ground graphs.
A dataset-interpretation I with respect to an entailment regime E is an E-interpretation extended to named graphs and datasets as follows:
• if (n,g) is a named graph where the graph name is an IRI, then I(n,g) is true if and only if I(n) = (n,g).
• if D is a dataset comprising default graph DG and named graphs NG, then I(D) is true if and only if there exists a mapping from blank nodes to the universe IR of I such that [I+A](DG) is true and
for all named graph (n,g) in NG, [I+A](n) = (n,g).
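For ground graphs, this truth condition yields a very strict notion of dataset entailment, which can be sketched as follows. The function name and data layout (a dataset as a pair of a default graph and a name-to-graph dict, with blank-node names prefixed by "_:") are illustrative assumptions.

```python
def pair_entails(dataset_a, dataset_b):
    """True when dataset_a dataset-entails dataset_b under the
    "name denotes the pair" semantics, for ground graphs only.
    A dataset is (default_graph, {graph_name: graph})."""
    default_a, named_a = dataset_a
    default_b, named_b = dataset_b
    # Ground simple entailment for the default graph is set inclusion.
    if not default_b <= default_a:
        return False
    for name, graph in named_b.items():
        if name.startswith("_:"):
            # Existential: some named graph of A must carry exactly this graph.
            if graph not in named_a.values():
                return False
        # An IRI name denotes exactly the pair (name, graph): the graphs
        # must be identical, not merely equivalent or entailed.
        elif named_a.get(name) != graph:
            return False
    return True
```

The exact-equality test on graphs is what makes named graphs behave like quotations here: replacing a triple by an entailed one breaks the match.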
Consider the following dataset:
{ } # empty default graph
ex:g1 { ex:a ex:b ex:c }
ex:g2 { ex:x ex:y ex:z }
This dataset dataset-entails:
{ }
_:b { ex:a ex:b ex:c }
ex:g2 { ex:x ex:y ex:z }
but does not dataset-entail:
{ }
ex:g1 { [] ex:b ex:c }
ex:g2 { ex:x ex:y ex:z }
nor does it dataset-entail:
{ }
ex:g1 { }
If the entailment regime E is RDFS with the recognized datatype xsd:integer, then the following RDF dataset is RDFS-dataset-inconsistent:
{ ex:age rdfs:range xsd:integer .
ex:me ex:age ex:g1 . } # default graph
ex:g1 { ex:s ex:p ex:o }
The graph name can be used in triples to attach metadata (here ex:hasNextVersion is a custom term that does not enforce a formal constraint, so it is up to the implementation to decide how to treat it):
{ ex:g1 ex:published "2013-08-26"^^xsd:date .
ex:g1 ex:hasNextVersion ex:g2 .}
ex:g1 { ex:s1 ex:p1 ex:o1 .
ex:s2 ex:p2 ex:o2 }
ex:g2 { ex:s1 ex:p1 ex:o1 }
Properties of this dataset semantics
There are important implications of this semantics. A named graph pair can only entail itself, or a structurally equivalent pair whose graph name is a blank node. Graph names
have to be handled almost like literals: unlike other IRIs or blank nodes, their denotation is strictly fixed, as it is for literals. This means that graph IRIs may clash with constraints on
datatypes, as in the example above.
A variant of this dataset semantics imposes that the graph name denotes the RDF graph itself, rather than the pair. This means that two occurrences of the same graph in different named graph pairs
actually identify the same thing. Thus, the graph names associated with the same RDF graphs are interchangeable in any triple in this case.
Each named graph defines its own context
Named graphs in RDF datasets are sometimes used to delimit a context in which the triples of the named graphs are true. From the truth of these triples according to the graph semantics, follows the
truth of the named graph pair. An example of such situation occurs when one wants to keep track of the evolution of facts with time. Another example is when one wants to allow different viewpoints to
be expressed and reasoned with, without creating a conflict or inconsistency. By having inferences done at the named graph level, one can prevent, for instance, triples coming from untrusted
parties from influencing trusted knowledge. Yet it does not disallow reasoning with, and drawing conclusions from, untrusted information.
Intuitively, this semantics can be seen as interpreting the RDF graphs inside the named graphs. In this sense, ex:alice {ex:bob ex:is ex:smart} has to be understood as “Alice said that Bob is smart”
which entails “Alice said that Bob is intelligent” because that is what Alice means, whether she used the term “smart”, “intelligent”, or “bright”. Neither sentence implies that Alice used these actual words.
There are several possible formalizations of this leading to similar entailments. One way is to interpret the graph name as denoting a graph, and a named graph pair is true if this graph entails the
graph inside the pair. In this case, a dataset-interpretation with respect to an entailment regime E is an E-interpretation such that:
• given a mapping A from blank nodes to the universe IR and a named graph pair ng = (n,G), [I+A](ng) is true if [I+A](n) is an RDF graph that E-entails G;
• for a dataset D = (DG,NG), I(D) is true if there exists a mapping A from blank nodes to the universe IR such that [I+A](DG) is true and for all named graph ng in NG, [I+A](ng) is true;
• I(D) is false otherwise.
Consider the following dataset:
{ } # empty default graph
ex:g1 { ex:YoutubeEmployee rdfs:subClassOf ex:GoogleEmployee .
ex:steveChen rdf:type ex:YoutubeEmployee . }
ex:g2 { ex:chadHurley rdf:type ex:YoutubeEmployee }
This dataset RDFS-dataset-entails:
{ }
ex:g1 { ex:steveChen rdf:type ex:GoogleEmployee }
but does not RDFS-dataset-entail:
{ }
ex:g2 { ex:chadHurley rdf:type ex:GoogleEmployee }
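Both entailment checks in the example above can be reproduced with a small sketch of per-graph reasoning. Only the rdf:type/rdfs:subClassOf rule is implemented, and all names below are illustrative assumptions, not part of any specification.

```python
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(graph):
    """Fixpoint of the single rule (s type c1), (c1 subClassOf c2) =>
    (s type c2); a stand-in for a full RDFS reasoner."""
    closure = set(graph)
    changed = True
    while changed:
        changed = False
        for (s, p, c1) in list(closure):
            if p != RDF_TYPE:
                continue
            for (x, q, c2) in list(closure):
                if q == SUBCLASS and x == c1 and (s, RDF_TYPE, c2) not in closure:
                    closure.add((s, RDF_TYPE, c2))
                    changed = True
    return closure

def context_entails(named_graphs, goal_graphs):
    """Each goal graph must be RDFS-entailed by the graph its name denotes;
    inferences never cross from one named graph into another."""
    return all(goal <= rdfs_closure(named_graphs.get(name, set()))
               for name, goal in goal_graphs.items())
```

Because the closure is computed per graph, the subclass axiom stored in ex:g1 never reaches ex:g2, which is precisely why the second entailment fails.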
Graph names used in triples that express metadata do not necessarily generate inconsistency:
{ ex:g1 ex:validAfter "2006"^^xsd:gYear .
ex:g1 ex:published "2013-08-26"^^xsd:date .
ex:g2 ex:validAt "2005"^^xsd:gYear .}
ex:g1 { ex:YoutubeEmployee rdfs:subClassOf ex:GoogleEmployee .
ex:steveChen rdf:type ex:YoutubeEmployee . }
ex:g2 { ex:chadHurley rdf:type ex:YoutubeEmployee }
(here, ex:validAfter and ex:validAt are custom terms that do not enforce a formal constraint, but may be used internally for, e.g., checking the temporal validity of triples in the named graph).
Properties of this dataset semantics
This semantics assumes that the truth of named graphs is preserved when the RDF graphs inside them are replaced with equivalent graphs. This means, in particular, that one can normalize literals
and still preserve the truth of a named graph. It also means that the standard RDF inferences drawn from the RDF graph inside a named graph can be added to the graph associated with the
graph name without affecting the truth of the RDF dataset.
While this semantics does not guarantee that reasoning with RDF datasets will preserve the exact triples of an original dataset, it is semantically valid to store both the original and any entailed triples.
An example implementation of such a context-based semantics is Sindice [[DELBRU-ET-AL-2008]].
Variants of this dataset semantics
There are several variants of this type of dataset semantics:
• The default graph is interpreted as universal truth, that is, for a named graph (n,G), I(n) E-entails the default graph.
• The graph name does not denote an RDF graph but a resource associated with an RDF graph.
• Each named graph could be associated with a distinct E-interpretation, requiring all interpretations to be true for their corresponding graphs in order for the dataset to be true.
Named graphs are in a particular relationship with what the graph name dereferences to
In accordance with linked data principles, IRIs may be assumed to reference the document that is obtained by dereferencing it. If the document contains an RDF graph it can be assumed that the graph
in the named graph is in a special relationship (such as, equals, entails) with this RDF graph.
In such case, the truth of an RDF dataset is dependent on the state of the Web, and the same dataset may entail different statements at different times.
Let d be the function that maps an IRI to an RDF graph that can be obtained from dereferencing the IRI. For an IRI u, d(u) is empty when dereferencing returns an error or a document that does not
encode an RDF graph.
A dataset-interpretation I with respect to an entailment regime E is an E-interpretation such that:
• for a named graph pair ng = (n,G), I(ng) is true if G equals (respectively, is a subgraph of, is entailed by) d(n);
• for a dataset D = (DG,NG), I(D) is true if I(DG) is true and for all named graph ng in NG, I(ng) is true;
• I(D) is false otherwise.
Entailments in this semantics depend not only on the content of a dataset but also on the content of the Web and the ability of a reasoner to accept this content. Moreover, the entailments vary
depending on whether the considered relation is “equals”, “subgraph of”, or “entailed by”.
For instance, if the reasoner is offline, then the dereferencing function d in the previous definition always returns an empty graph. In this case, if the relation is “equals” or “subgraph of”, only
empty named graphs can be true; if the relation is “entailed by”, then only named graphs containing axiomatic triples are true. In general, if the relation is “equals”, named graphs do not provide
extra entailments.
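The truth condition and its offline behaviour can be sketched as follows. The relation names, the ground-triples-as-tuples representation, and the simplification that ground simple entailment with no axiomatic triples collapses to set inclusion are all assumptions of this illustration.

```python
def named_graph_true(name, graph, deref, relation="equals"):
    """Truth of the pair (name, graph) against the Web graph d(name).
    `deref` returns a set of ground triples, empty on errors or non-RDF
    documents; `relation` selects one of the three readings above."""
    web_graph = deref(name)
    if relation == "equals":
        return graph == web_graph
    if relation == "subgraph":
        return graph <= web_graph
    if relation == "entailed-by":
        # Ground simple entailment with no axiomatic triples reduces to
        # set inclusion; a full reasoner would also accept axiomatic triples.
        return graph <= web_graph
    raise ValueError("unknown relation: " + relation)
```

Injecting `deref` as a parameter makes the offline case easy to simulate: a function that always returns the empty set satisfies only empty named graphs under "equals" and "subgraph".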
Properties of this dataset semantics
The distinguishing characteristic of this dataset semantics is the fact that a single RDF dataset can lead to different entailments, depending on the state of the Web. This can be seen as a feature
for systems that need to be in line with what is found online, but is a drawback for systems that must retain consistency even when they go offline.
Quad semantics
This approach consists in considering named graphs as sets of quadruples, having the subject, predicate and object of the triples as the first three components, and the graph IRI as the fourth element.
Each quadruple is interpreted similarly to a triple in RDF, except that the predicate denotes a ternary rather than a binary relation.
This semantics extends the semantics of RDF rather than simply reusing it.
A quad-interpretation is a tuple (IR, IP, IEXT, IS, IL, LV) where IR, IP, IS, IL and LV are defined as in RDF and IEXT is a mapping from IP into the powerset of (IR × IR) ∪ (IR × IR × IR).
Since this option modifies the notion of simple-interpretation, which is the basis for all E-interpretations in any entailment regime E, it is not clear how it can be extended to arbitrary entailment
regimes. For instance, does the following quad set:
ex:a rdf:type ex:c ex:x .
ex:c rdfs:subClassOf ex:d ex:x .
RDFS-entail the quad ex:a rdf:type ex:d ex:x ?
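The question has no normative answer. One possible design, sketched below as an assumption of this illustration, applies rules only among quads sharing the same fourth component, so inference stays within a single "graph dimension".

```python
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def quad_closure(quads):
    """Fixpoint of the rdf:type/rdfs:subClassOf rule restricted to quads
    whose fourth components are equal, so inference never crosses contexts."""
    closure = set(quads)
    changed = True
    while changed:
        changed = False
        for (s, p, c1, ctx) in list(closure):
            if p != RDF_TYPE:
                continue
            for (x, q, c2, ctx2) in list(closure):
                if q == SUBCLASS and x == c1 and ctx2 == ctx:
                    quad = (s, RDF_TYPE, c2, ctx)
                    if quad not in closure:
                        closure.add(quad)
                        changed = True
    return closure
```

Under this choice, the quad set above does entail ex:a rdf:type ex:d ex:x, but moving the subclass quad to a different fourth element blocks the inference.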
Properties of this dataset semantics
With this semantics, all inferences that are valid with normal RDF triples are preserved, but it is necessary to extend RDFS in order to accommodate ternary relations. There are several existing
proposals that extend this quad semantics by dealing with a specific “dimension”, such as time, uncertainty, provenance. For instance, temporal RDF [[TEMPORAL-RDF]] uses the fourth element to denote
a time frame and thus allow reasoning to be performed per time frame. Special semantic rules allow one to combine triples in overlapping time frames. Fuzzy RDF [[FUZZY-RDF]] extends the semantics to
deal with uncertainty. stRDF [[ST-RDF]] extends temporal RDF to deal with spatial information. Annotated RDF [[ANNOTATED-RDF]] generalizes the previous proposals.
Quoted graphs
Quoted graphs are a way to associate information to a specific RDF graph without constraining the relationship between a graph name and the graph associated with it in a dataset. An RDF graph is
“quoted” by using a literal having a lexical form that is a syntactic expression of the graph. For instance:
{ ex:g ex:quotes "ex:a ex:b []"^^ex:turtle . }
ex:g { ex:b rdf:type rdf:Property .
ex:a ex:b _:x . }
This technique allows one to assume a dataset semantics of contexts (as in Section 3.4) and still preserve an initial version of a graph. However, quoting big graphs may be cumbersome and would
require a custom datatype to be recognized.
Relationship with SPARQL entailment regime
There is a strong relationship between SPARQL ASK queries with an entailment regime [[SPARQL11-ENTAILMENT]] and inferences in the regime. If an ASK query does not contain variables and its WHERE
clause only contains a basic graph pattern, then the query can be seen as an RDF graph. If such a graph query Q returns true when issued against an RDF graph G with entailment regime E, then G
E-entails Q. If it returns false, then G does not E-entail Q.
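For ground queries under simple entailment, this correspondence reduces to a subset test, as the following illustrative sketch shows (graphs and query patterns are sets of triple tuples, an assumption of this example).

```python
def ask(graph, pattern):
    """A variable-free ASK over a single basic graph pattern: under simple
    entailment on ground graphs, the answer is a plain subset test, because
    the pattern is itself an RDF graph."""
    return pattern <= graph
```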
A dataset semantics can also be compared to what ASK queries return when they do not contain variables but may contain basic graph patterns or graph graph patterns. For instance, consider the dataset:
{ }
ex:g1 { ex:x rdf:type ex:c .
ex:c rdfs:subClassOf ex:d . }
ex:g2 { ex:y rdf:type ex:c . }
Then the query:
ASK WHERE {
GRAPH ex:g1 { ex:x rdf:type ex:d }
}
with RDFS entailment regime would answer true, but the query:
ASK WHERE {
GRAPH ex:g1 { ex:x rdf:type ex:d }
GRAPH ex:g2 { ex:y rdf:type ex:d }
}
would answer false.
This can lead to a classification of dataset semantics in terms of whether they are compatible with SPARQL ASK queries or not. It can be noted that a semantics where each named graph defines its own
context is “SPARQL-ASK-compatible”, while a semantics where the graph name denotes the graph or named graph is not compatible in this sense.
Declaring the intended semantics
The RDF Working Group did not define a formal semantics for a multiple-graph data model because none of the semantics presented above could obtain consensus. Choosing one of the
propositions above would have gone against some deployed implementations. Therefore, the Working Group discussed the possibility of defining several semantics, among which an implementation could
choose, and providing the means to declare which semantics is adopted.
This option was eventually not retained, because of the lack of implementation experience, so there is no definitive mechanism for this. Nonetheless, for completeness, we describe possible solutions here.
Using vocabularies
A dataset can be described in RDF using vocabularies like voiD [[VOID]] and the SPARQL service description vocabulary [[SPARQL11-SERVICE-DESCRIPTION]]. VoiD is used to describe how a collection of
RDF triples is organized in a web site or across web sites, giving information about the size of the datasets, the location of the dump files, the IRI of the query endpoints, and so on. The notion of
dataset in voiD is used as a more informal and broader concept than RDF dataset. However, an RDF dataset and the graphs in it can be described as voiD datasets, and the information can be completed
with a SPARQL service description:
@prefix er: <http://www.w3.org/ns/entailment/> .
@prefix sd: <http://www.w3.org/ns/sparql-service-description#> .
[] a sd:Dataset;
sd:defaultEntailmentRegime er:RDF;
sd:namedGraph [
sd:name <http://example.com/ng1>;
sd:entailmentRegime er:RDFS
] .
A vocabulary specifically tailored for describing the intended dataset semantics could be defined in a future specification.
Using other mechanisms
Communication of the intended semantics could be performed in various ways, from having the author tell the consumers directly, to inventing a protocol for this. Use of the HTTP protocol and content
negotiation could be a possible way too. Special syntactic markers in the concrete serialization of datasets could convey the intended meaning. All of those are solutions that do not follow current practice.