As misleading as comparing penetration by a rapist to penetration by photons but (mercifully) less offensive
UPDATE for the sake of traffic directed here by Krugman and DeLong
: I've written a couple things at this point so the summary is:
1. This post (you're here. you don't need a link)
2. Henry Hazlitt is often promoted as a place for people to start with economics.
I don't think that's wise
3. The estimable Gene Callahan has an older post on this same argument from Rothbard, but I think his criticism doesn't go far enough -
explanation here
4. There is a legitimate problem with the Keynesian cross, but New Keynesianism addresses it
5. One of my commenters makes my point much more succinctly -
I reproduce that with an explanation of how it relates to my point here
"Now, though I cannot seem to find a reference, I have a vague memory that it was Murray Rothbard who observed [<<< DPK: Well we're clearly off to a bad start] that the really neat thing about
this argument is that you can do exactly the same thing with any accounting identity. Let’s start with this one:
Y = L + E
Here Y is economy-wide income, L is Landsburg’s income, and E is everyone else’s income. No disputing that one.
Next we observe that everyone else’s share of the income tends to be about 99.999999% of the total. In symbols, we have:
E = .99999999 Y
Combine these two equations, do your algebra, and voila:
Y = 100,000,000 L
That 100,000,000 there is the soon-to-be-famous “Landsburg multiplier”. Our equation proves that if you send Landsburg a dollar, you’ll generate $100,000,000 worth of income for everyone else."
No, no, no, no.
The Keynesian multiplier has two sides: income and expenditure. The economy's income today is the economy's expenditure yesterday. If you had a lot of income today but there was not a lot of
expenditure today, then tomorrow you will have less income.
Landsburg drops the expenditure side of that equation. In effect, he swaps:
Y = C + I
for
Y = wL + rK
Except his income earners are himself and everybody else.
Wonder of wonders, when you set income equal to income you get a forty-five degree line.
So by botching the income/expenditure distinction Landsburg gives us a triviality.
Even Say did better than that. Say at least kept income and expenditure separate. His mistake (initially) was assuming they were always equal. But on top of that triviality, Landsburg adds a genuine error.
The slope of the consumption component of the
expenditure side
of the Keynesian cross (the side that Landsburg is missing) is not strictly speaking determined by the consumption share of output, it's determined by the marginal propensity to consume. When we
consider the economy as a whole, of course, we can think in terms of the consumption share of output (investment equals savings and all that). Normally we're
talking about the economy as a whole, so it doesn't matter all that much that we make that distinction (when I TAed an intro macro class I always taught it as MPC, though).
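To make the distinction concrete, here is the standard textbook derivation, sketched with the usual notation (C0, autonomous consumption, is a symbol not used elsewhere in this post):
Y = C + I
C = C0 + MPC * Y
Y(1 - MPC) = C0 + I
Y = (C0 + I) / (1 - MPC)
The multiplier 1/(1 - MPC) is finite precisely because the MPC is less than one, which is the behavioral restriction that an income-only identity never supplies.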
Now as I highlighted above, we can't even really call Landsburg's 0.99999999 figure an MPC because it's not expenditure - it's income. But Landsburg is treating
"L + E" like expenditure, so let's at least do it correctly for him. Either it's all consumed or part of it is not consumed. Landsburg seems to want E to consume all of its income (hence the
0.99999999 coefficient), and since he is furnishing that dollar out of L to E, presumably he is consuming all of his income, or 0.00000001 of total income. So actually the MPC for the economy is 1, and
not 0.99999999, implying an infinite multiplier. I buy something from you for one dollar, you spend that whole dollar buying something from someone else for one dollar, etc. etc., and the dollars keep
racking up. You can think of it as shifting the consumption curve (which is parallel to the 45-degree line) up such that there is no longer any stable equilibrium and consumption bounces off to infinity.
This brings me back to my initial point: Keynes made the multiplier go because he had income and
expenditure. When you have income and income you get infinite solutions because you just have a tautology. When Landsburg mixed up MPC and income share (and implicitly assumed an MPC of 1 even though
he mistakenly called it an MPC of 0.99999999, which gave him a very large equilibrium solution), he was essentially repeating the tautology under the guise of having performed the same exercise.
A very minor additional problem posed by Landsburg
So Landsburg confuses a couple things - he doesn't have an expenditure side so he has to smuggle it in later, and when he smuggles it in later he confuses MPC and income share which concealed the
fact that he was only dealing in a single identity (if he had realized his implicit MPC was 1 and not 0.99999999 he might have picked up on what was going wrong).
Econ 101 students ought to know better because of course the Keynesian cross is pretty uniformly taught with both income and expenditure from the get-go!
But this does raise a ([very] minor) additional problem: if people consume a lot of their income then we can get similar outrageously high multipliers! Wouldn't the implication be that policymakers
should encourage citizens to consume as much of their income as they can?
Of course not. Because every Econ 101 textbook I know of teaches aggregate supply before they teach aggregate demand. You can gussy that up with as much optimization as you want, but at the end of
the day an Econ 101 student can still understand how that pins the multiplier down with the imperative of investment.
One more point
Take the consumption share of income, use that as an MPC for convenience sake, and get a multiplier. What is it - four? five? That's still outrageously high.
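A quick arithmetic check, treating the share as an MPC as just described: a consumption share of 0.75 gives 1/(1 - 0.75) = 4, and a share of 0.8 gives 1/(1 - 0.8) = 5.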
I've seen people actually cite this as proof that the Keynesian model is wrong.
Guess what folks: you have to factor in crowding out. As Keynes wrote in chapter 10 of the
General Theory
: "
if the propensity to consume in various hypothetical circumstances is (together with certain other conditions) taken as given and we conceive the monetary or other public authority to take steps to
stimulate or retard investment, the change in the amount of employment will be a function of the net change in the amount of investment."
Another way of putting this is that when the government spends money you can't just multiply that by the multiplier from the Keynesian cross! Or, put another way, you have to multiply government
expenditure by the multiplier from the Keynesian cross, and subtract out the reduction in investment multiplied by the multiplier from the Keynesian cross (and of course also subtract out any
reduction in consumption through taxation, etc., multiplied by a here-unspecified multiplier that will be different from "the multiplier"). In a sense it's misleading to call the empirical government
spending multiplier "the multiplier" and to also call the Keynesian cross multiplier "the multiplier", but if you think about what the Keynesian cross is doing it's not all that hard to keep the two straight.
58 comments:
1. Thanks for that very detailed post. Rothbard's point had always struck me as ludicrous / unfair, but I never took the time to really try to think through in what way(s) exactly this is so. Now
all I have to do is study your post for a while to get the answer.
1. Assuming it's right... although I think it is. It took me a little while to figure out what was going wrong.
2. I'll wait till Bob Murphy has weighed in so that I will know what the correct position is.
3. You have no idea what kind of self restraint I am exercising right now ;-)
4. I've asked DeLong if I'm thinking about this right... see what he says too.
2. I think you're reading a bit much into what the example says. The very general approach is the same in both cases.
Here's all I take from it:
1. Write an identity, any identity: A=B+G-X^2
2. Posit some kind of plausible relationship (presumably based on empirical evidence): B=A/2
3. Combine 1 with 2: A=2*(B+G-X^2)
I have no idea what the letters stand for, it doesn't matter. The lesson? The equation produced in 3 doesn't tell us anything about the causal relationship between the RHS variables and A...
1. Whoops. #3 should be A=2*(G-X^2).
2. Ummm... right. Equations don't give you the causal story. Who says they do?
I think you're missing something edarniw. It's important that Landsburg misses the income/expenditure distinction and that he misses the MPC/income share distinction and that he completely
fails to discuss investment (it's an implicit MPC=1 economy) and therefore fails to bring in any supply considerations.
All these points I belabored matter precisely because when Landsburg tosses all that out he has only a few equations, and then he rebukes the Keynesian cross for not telling us anything
about the causal relationships when it is Landsburg that is throwing out (it seems without even realizing it) every element of the story that gives a causal structure to the equations.
If his only point is that a one-to-one mapping exhibits explosive growth when you introduce an exogenous shift, then he doesn't have much of a point.
3. re:Ummm... right. Equations don't give you the causal story. Who says they do?
Are you saying no one commits this mistake? I don't think it's prevalent, but the error is made, especially if you're not acquainted with the stuff.
re:I think you're missing something edarniw. It's important that Landsburg misses the income/expenditure distinction and that he misses the MPC/income share distinction and that he completely
fails to discuss investment (it's an implicit MPC=1 economy) and therefore fails to bring in any supply considerations.
It really seems like you're trying to reconcile the examples with each other in a very specific way.
I don't think you can conclude it's an MPC=1 economy. All that's assumed is that income = income of Steve + income of everyone else. 1. nothing is said about expenditure and 2. some of those other
individuals could be businesses or government employees (in which case their subsequent expenditure would not go under "C").
4. edarniw -
People make this mistake but I would think the solution is to introduce supply constraints and diminishing returns and depreciation, which make investment necessary, which pins down the
multiplier. I don't think the solution is to say - as Landsburg has said - that "the reasoning is invalid". It's not invalid at all. He is simply wrong when he says that.
re: "I don't think you can conclude it's an MPC=1 economy."
If the MPC is not 1 then he has derived his result inaccurately and his identification of consumption share with MPC is extremely confusing.
You resolve that for me and I'm happy to remove this part of the post.
re: "1. nothing is said about expenditure"
Right - that's the whole problem.
Rob Rawlings, June 25, 2013 at 5:09 PM
If you assume that the MPC in Steve's model is 0.99999999 then isn't the multiplier 100,000,000? If so then his result would hold. Maybe that is what he meant, but if so he should really have
stated that assumption.
1. Right - although it's not clear why his MPC is 0.99999999 by his logic. But that's what I get into in the "very minor problem" section. It's still not a real problem if you have an MPC that
big unless you ignore everything else we know in economics (and why would you do that?).
2. Rob Rawlings, June 25, 2013 at 5:51 PM
He states in the comments section:
"So the point is a serious one: Of *course* when you give me a dollar, there’s no reason to think equation E = .99999999Y still holds, which invalidates the reasoning. And equally of course,
when the govt spends an extra dollar, there’s no reason to think the equation C=.8Y still holds, which invalidates *that* reasoning."
3. That was in response to NickJ's argument which was completely different from mine (and IMO not convincing).
And the fact that it's a coefficient on Y does not make it an MPC.
4. Yes, this is a terrible argument.
5. Something that I have noted in economics blog posts is a lack of temporal subscripts. Yes, if we understand that we are talking about a particular interval of time, then we do not need
subscripts. But "if you send Landsburg $1" means going from the equation Y0 = 100,000,000 * L0 to Y1 = L1 + E1, where L1 = L0 + 1. Obviously we cannot solve for Y1 based only on this.
If this argument is meant as a refutation of Keynes, it is silly. Keynes was a better mathematician than to make such an error.
1. The subscripts are less important when you keep expenditures and income distinct because you're just solving for an equilibrium and you can effectively ignore the dynamics. When you do what
Landsburg did - and just use income and income - I agree it starts to get a little confusing without the subscripts.
Even then, we're just skipping "find where Y1 = Y2" and going right to "find Y-bar". It shouldn't be that confusing I hope.
2. It appears that it confused some people. ;)
3. To expand on that a bit, if you are assuming equilibrium, then sending $1 to Landsburg is a small perturbation, such that Y1 = Y0 + 1 and the relation that Y = 100,000,000 * L no longer
holds, or sending $1 to Landsburg stands for sending $1 to everybody, and Y1 = 100,000,000 * L1. (According to the pragmatics of language, the second case should not apply, because then there
is a reason of singling out Landsburg, and he is not a stand-in for everybody. But we are talking about math. :) )
6. Have you seen Landsburg's followup yet?
1. Yes. It's just stuff he's said in the comment section before. As far as I know, my criticism still holds.
7. Daniel, if you're going to bring in supply side and crowding out constraints, OK, but then I think that destroys the textbook case for using an MPC to figure out the multiplier on government
spending. That's the whole point. Neither Rothbard nor Landsburg was trying to endorse the typical Keynesian multiplier.
1. I can't think of how you would teach the textbook Keynesian cross without supply and crowding out. Have you ever taught it without that???
2. The textbook use of the MPC critically relies on exogenous investment spending. That relies on supply. When we talk about government spending multipliers we are assuming an exogenous G shock
that does not change I. That means an after-crowding-out-has-been-taken-into-effect shock (hence the Keynes quote). These are all part of the Keynesian cross instruction as far as I'm aware.
This is why empirically we don't get spending multipliers on the order of four or five.
If I am missing how we teach the Keynesian cross, please tell me where I'm wrong.
8. No, of course not Daniel, but that's because I'm an Austrian economist who thinks the typical Keynesian case for a government multiplier is stupid.
The question is, if I go and look up the original Samuelson textbook reference, will those caveats be in the textbook? Then, harder, if I go look up Krugman and Wells' textbook discussion, will
your points about time-subscripts and crowding out, be in there? I am confident they won't be in Samuelson, but not as sure about Krugman/Wells.
1. Time subscripts I don't think are relevant here. Undergrad textbooks ignore the dynamics. Who cares? You're solving for an equilibrium. How could that possibly matter?
Yes, I am positive Krugman will have supply and crowding out in his textbook and I've never even read the thing.
I feel like I'm taking crazy pills - of course kids get taught supply.
9. I just posted this at Landsburg. This is all "a lot about nothing":
Sigh. This is worthless
If Y = E + L and E= .999999Y then L = .000001Y
Y-E =L means
Y(1-.999999) =L which means that
Y = L/(.000001) which means that
Y = .000001Y/.000001
and all this means is that Y=Y. Hmmm.
1. Excellent - had to share in a new post.
2. Thank you!
10. It seems you proved that an accounting identity is indeed an identity! Can I play, too?
Y = C + I + G
C = 0.8Y
I + G = 0.2Y
Y - C = I + G
Y(1-0.8) = I + G
Y = (I + G)/(1-0.8) = 0.2Y / (0.2) = Y
1. Right, you can pull the identity back out if you want to. You put it in there, so nothing's stopping you from pulling it back out.
The question is, what is the theoretical content of your third equation? It has none.
That there is a difference between your second and your third equation is PRECISELY why it's wrong to talk in terms of shares of income and why you have to talk in terms of MPC. If we stick
to talking in terms of MPC, it would be easy to see that your second and third equations just restate your first equation, so you had nothing other than your first equation all along.
When it's an MPC relation, then your second equation adds something new - a behavioral relation.
I'm not sure if you're directing this at Landsburg or me. If you're directing it at Landsburg I assume you already know all this. If you're directing it at me and don't understand my
counter-argument let me know and I'll try to say it differently.
2. It was directed at malcolm, who seems to be saying that an accounting identity is worthless because it is an identity.
Landsburg's logic is perfectly clear. He is saying that if you combine an accounting identity with a bad assumed model (E=0.999999Y or C=0.8Y regardless of government policy), then you cannot
trust the derived results.
I don't follow your logic. I guess you are trying to say that C=0.8Y is not as bad an assumption as E=0.999999Y. If so, you miss Landburg's point. Unless you are saying that C=0.8Y is a very
good model even when government policy changes. If so, then you have a fundamental disagreement with Landsburg.
3. Your second and third equation essentially repeated your first equation. Of course you're just going to get your first equation out of it... that's elementary.
A model requires discriminating behavioral assumptions to do any more than that.
If we assume MPC = 0.99999999 it will work, it will get a huge multiplier, and that's fine... except that there's no good reason to expect an MPC like that based on what we know about supply,
diminishing returns, depreciation, population growth, etc. etc. See my section labeled "A very minor additional problem posed by Landsburg" for this point.
Now, on top of all this Landsburg (and Rothbard) are extremely confusing in discussing income vs. expenditure and MPC vs. income share. My contention is that this confusion on the very basic
building blocks of the model is what leads them to miss the point that I'm making.
An MPC of 0.99999999 with an enormous multiplier isn't illogical in the sense that we normally think of logic. The problem there is garbage-in-garbage-out which is why I called it a "very
minor" problem that is basically solved by noting what's garbage according to economic theory and what's not. That's why I prefer to focus on the other problems.
4. To see why MPC=0.99999999 leading to an enormous multiplier is not logically a problem, consider what it implies. It implies that exogenous investment is essentially zero. The only way that
could happen is if one way or another we were post-scarcity - if optimization was not constrained by the productive capacity of the economy. In that circumstance it's reasonable to think that
we would have huge multipliers because we could keep producing and producing and producing with no supply constraints or scarcity inducing exogenous investment.
All the components of the model - low exogenous investment and high multipliers - hang together just fine logically and in the context of economic theory in a given set of circumstances. The
reason why it seems so odd is that those circumstances don't apply to the world we live in.
5. We are going in circles.
malcolm's second and third equations repeated his first equation. That is what an identity is!
Take an identity. Plug in a simplistic model (i.e., a model that does not hold when government policy changes). Derive a multiplier.
Landsburg's point is that the simplistic model invalidates the derived multiplier.
As far as I can tell, all you are saying is that the C=0.8Y model makes more sense than the E=0.999999Y model.
But unless you are claiming that C=0.8Y is a very good model that holds even when government policy changes, then it is a poor choice to derive a multiplier. Which is Landsburg's point.
Accounting identity + poor model = untrustworthy multiplier derivation.
6. No, no, no.
Malcolm's second and third equation are behavioral laws. Landsburg established the consumption function for E. Malcolm adds that if Landsburg is spending his dollar clearly his personal MPC
is 1 so his contribution to the total MPC is 0.00000001, so the economy's MPC is 1.
That is NOT an accounting identity. That is a behavioral law added to an identity that together produce a trivial result.
Your coefficient on G+I is not an MPC by definition (I and G are not C). If you want to assume an MPI relation with Y, fine, but that's something different. You got them by repeating the
accounting identity (if C is 0.8 of the total then the rest has to be I and G).
This is exactly why it's important to keep MPC and income share separate. This is why I spent so much time on that point while Malcolm just condensed it.
7. No, no, no to you, too.
I don't see what any of that has to do with Landsburg's point.
1) Take an accounting identity. Landsburg's examples were Y = C + I + G and Y = L + E. Those are both identities.
2) Now make an assumption. Landsburg's examples were C = 0.8Y and E=0.999999Y. Those are both assumptions or models. They are both poor models at times when policy changes.
3) Combine (2) with (1) to derive a multiplier. But the multiplier is not credible since (2) is a poor model.
Rather than trying to confuse the issue by going off on a tangent, you should try to follow the logic there. If you think the logic has a problem, then why not specifically state where you
think that the logic is wrong?
8. 1. Strictly speaking, Y = L + E is not an accounting identity, it's a definition. Y = C+I+G is an accounting identity because it takes both sides of the ledger (income and expenditure) into
account. That's just semantics when it speaks to what we are calling it, but the exclusion of the expenditure side leads to other problems later (i.e. - not recognizing the point I'm about to
make in #2).
2. No, they are both good models. The latter just doesn't bear any particular resemblance to the current economy and also doesn't jibe with everything else we know from economic theory. So
it's garbage-in-garbage-out, but there's nothing at all wrong with the logic.
3. See my response to 2.
I am not confusing the issue at all.
9. So, aside from semantics about mathematical identities, you agree with Landsburg's point -- garbage-in-garbage-out. Good thing you did not confuse the issue at all.
10. As I've said before, if his only point is that a one-to-one mapping exhibits explosive growth when you introduce an exogenous shift, then he doesn't have much of a point.
I think you are missing the deeper problems.
If you think he is following the same logic as the Keynesian model, I think you are missing the deeper problem. He is dropping several of the key elements that ensure a sensible result. You
don't get to pick out parts of a theory, show that in isolation it's possible to put garbage in and get garbage out, and then say there are problems with the theory.
11. JohnW,
I don't know why you're not seeing this because it's been explained to you several times, but SL's equation is just an accounting identity that doesn't even have a multiplier! You derive the
Keynesian multiplier by creating a behavioral equation about consumption and then plug it into the identity to explain how it holds.
Behavioral vs. accounting identity. That's the mistake here and it's rather brow-raising that a professor with a Mathematics Ph.D. misses this.
12. RJ: Just where did JohnW say that he has a doctorate in mathematics?
13. Steve Landsburg was who I was referencing.
I can't say for certain if JohnW has one or not.
14. My mistake then.
11. "Because every Econ 101 textbook I know of teaches aggregate supply before they teach aggregate demand."
Maybe things have changed since I've read ECON 101 textbooks. I recall that McConnell and Brue teach the Keynesian Cross before getting to aggregate supply. Case & Fair too. Have they changed?
12. Landsburg is a contrarian troll. I have never read anything by him that wasn't easily debunked. If this is now appearing on his blog, Slate must have gotten tired of him.
13. mere mortal, June 26, 2013 at 8:06 PM
It is a little understandable that math people get monomaniacal about the equations, but economists shouldn't make that mistake. I think the person who made the point about time series, and the
author making the point about equilibrium got closest to discarding this truly silly discussion.
The mathematical model tries to approximate what is happening at a point in time. At this time a high multiplier implies, non-mathematically:
- A person with impending bills and extra time to do more work. He gets some money, most of it quickly goes out, and he still has some free time, and has more bills.
- If that money goes to another person in the same sad shape, much of it moves along again.
- If that money goes to someone in fine shape, the money stalls for a time, maybe a long time.
The more people in rough shape (bills and not enough work), the higher the chance the money keeps moving, and quickly at that. Of course this isn't an infinite cycle, hopefully in a pleasant,
livable society, more and more people get to be in the third group.
Some idiot playing gotcha with the equations trying to describe the economy assuming that everything is static (or already in equilibrium) isn't worth a minute of time, unless that minute helps
someone else understand how idiotic it is.
This isn't complicated, it isn't even math.
14. One quibble:
"The economy's income today is the economy's expenditure yesterday."
This is incorrect. Correct would be to say: "The economy's income today is the economy's expenditure today." It is just double entry bookkeeping. Each transaction is recorded twice as income
(seller) and expenditure (buyer). In each transaction income = expenditure, you just sum them up, there is no today-tomorrow. Economy's income today most certainly is not the economy's
expenditure yesterday, both income and expenditure vary from day to day.
1. Quibble accepted.
Presumably it's OK to say "expenditure today is a function of income yesterday", and that's the intertemporal link in the chain.
2. Yep!
15. Another way to make everyone rich.
Assume: I is exogenous
Government revenue = tY, where t is the tax rate. Then:
G = tY + Gd, where Gd is deficit spending that is assumed exogenous.
Also assume C = Co + CmYd – a pretty common Keynesian consumption function – where Cm = marginal propensity to consume, Co is subsistence consumption, and Yd = disposable income = Y - tY = Y(1-t)
Y = Co + CmY(1-t) + Gd + tY + I
do the math and:
Y = (Gd + Co + I) / ((1-Cm)(1-t))
Where the Keynesian multiplier is 1/((1-Cm)(1-t))
So all you need do is make the tax rate t = 99.9999999% and voila – infinite Y!
Note: this works even if the government runs a balanced budget (Gd=0) and Cm changes in response to a change in tax policy.
16. While this might be a "whole lot of nothing", to paraphrase Malcolm's words, I still have a question on modeling mathematically (even if it might be very simple maths) Keynesian theory.
Why does J.M. Keynes, particularly in Chapters 8 to 10 of The General Theory, use the formulation of MPC < 1?
If you read his book properly, one of the big things he's really talking about is actually the stabilization of investment.
Correct me if I'm wrong, but he also seems to leave it to the reader to attach his "Marginal Propensity" concept to other parts of the macro-economy (i.e., Marginal Propensity to Invest, Marginal
Propensity to Import, et cetera).
If Keynes wanted to make sure even the most careless of readers wouldn't take away the wrong message of mindless consumption spending at all costs, why didn't he explicitly spell out the other
marginal propensities?
Why didn't he put the following thing in his book, which might have been more helpful?
MPC + MPI = 1
17. Thanks!
"all of it's income" should be all of its income. Please delete this comment after typo corrected.
19. I think you are giving his argument way too much credit. When discussing spending/income in the context of the Multiplier Effect you look at how much was spent/earned over a specific period of
time. So, in effect the discussion is a conversation about rates. It is always preferable to discuss rates using calculus. However, this discussion can be expressed algebraically provided that
you recognize that you are dealing with degrees of change and provide a coefficient that demonstrates how changing one variable will change the rate of other variables. The only thing Landsburg
shows is that Landsburg makes .0000001Y. There is nothing to indicate that the initial relationship will hold if his income increases by $X.
20. How do you jump from these esoteric technical disputes to outrageous sweeping denunciations?
That guy is such a fool! What sort of IDIOT doesn't realise that MPC < 1, Y= (Gd+Co+I)/((1-Cm)(1-t)) ?!
GOD these non-Keynesians are SO stupid!
1. Who are you talking about Old Odd jobs? I never said any of that.
|
{"url":"http://factsandotherstubbornthings.blogspot.com/2013/06/as-misleading-as-comparing-penetration.html","timestamp":"2014-04-16T04:16:40Z","content_type":null,"content_length":"280323","record_id":"<urn:uuid:64c85a92-337b-4a53-b79d-12e6979093cd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Algebraic Expressions with Parentheses?
Date: 03/18/2003 at 10:27:34
From: Jason
Subject: Algebraic Expressions Without Parentheses
How would you write these algebraic expressions without parentheses?
I was told that you can, but I don't think you can without solving
the problem. The problem is, there is not enough information there to
solve the problem. So, if it is possible to write the expressions
without the parentheses, then how do you do it? Also if you can do
it, then why are the parentheses there in the first place?
Date: 03/18/2003 at 12:48:26
From: Doctor Ian
Subject: Re: Algebraic Expressions Without Parentheses
Hi Jason,
I think you mean that these are expressions, and not equations, so
there's no way to determine unique values for x and y. If so, you're
absolutely right.
One way to write the expressions without the parentheses is to rewrite each leading minus sign as multiplication by -1:
-(whatever) = -1 * (whatever)
since those are equivalent. Let's see what happens when we do that:
-(2x - 3y - 6)
= -1 * (2x - 3y - 6)
Now we can apply the distributive property:
= ((-1)2x - (-1)3y - (-1)6)
= (-2x + 3y + 6)
After you do this enough times, you'll notice that you can just flip
the signs, e.g.:
-(5x - 13y - 1)
= (-5x + 13y + 1)
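A quick numerical check of that sign-flipping rule, picking x = 1 and y = 2 arbitrarily:
-(5*1 - 13*2 - 1) = -(5 - 26 - 1) = -(-22) = 22
-5*1 + 13*2 + 1 = -5 + 26 + 1 = 22
Both forms give the same value, as they should.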
>Also if you can do
>it, then why are the parentheses there in the first place?
Sometimes the parentheses are there because the expression came from
somewhere else, and had to be substituted as a whole. For example, you
might start with something like
(number of zibbles) = (number of brizzles) - (number of wilmons)
= (3x + 4y - 4) - (5x - 13y + 1)
And now your life will be easier if you move the minus sign inside:
= (3x + 4y - 4) + (-5x + 13y - 1)
because now you can just drop all the parentheses:
= 3x + 4y - 4 + -5x + 13y - 1
But just because you had to substitute the expression using
parentheses, that doesn't mean you want to keep the parentheses around
any longer than you have to.
Does that make sense?
- Doctor Ian, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/62478.html","timestamp":"2014-04-20T10:53:58Z","content_type":null,"content_length":"7178","record_id":"<urn:uuid:724418c1-5993-46c9-9cb2-f8bf1bd9c466>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formula Operators and the Order of Operations
|
{"url":"http://www.teachmsoffice.com/tutorials/40/formulas-operators-order-of-operations-pemdas-parenthesis-math-mathematical","timestamp":"2014-04-20T08:27:17Z","content_type":null,"content_length":"69345","record_id":"<urn:uuid:764b9049-7c0b-4d70-a9b6-5be82f299086>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Specialist
melissac.taylor@vbschools.com www.vbschools.com www.alantones.vbschools.com
Melissa Taylor, Math Specialist
Alanton Elementary School
Virginia Beach City Public Schools uses a Virginia Beach Curriculum for math with Math Connects as its main resource.
Math Connects
Grade Levels K - 5
Macmillan/McGraw-Hill Math Connects provides opportunities for students to build their understanding of mathematical concepts and ample practice to master important skills. Most importantly, all
concepts are taught through and practiced within a strong problem-solving environment, ensuring that students become life-long problem solvers.
• It’s All Connected
Math Connects is intended for use in all elementary math classes as a balanced approach to teaching mathematics. This program is designed to excite your students about learning mathematics while
at the same time providing the teachers with all the tools and materials they need to teach the program.
Your students will be motivated as they solve real-world problems.
In Math you can expect to see...
Problem Solving
• Build new mathematical knowledge through problem solving
• Solve problems that arise in mathematics and in other contexts
• Apply and adapt a variety of appropriate strategies to solve problems
• Monitor and reflect on the process of mathematical problem solving
Reasoning and Proof
Instructional Links:
The Links on this page have been identified by Virginia Beach City Public Schools as having educational value. The school division does not control or guarantee the content of the site, nor does the
school division endorse the organization, its views, or services.
Homework assistance through the school division's “Homework Hotline” is available to students and parents from 5:00 p.m. to 7:30 p.m. on ...
|
{"url":"https://vbschools.schoolnet.com/outreach/aes1/learningspecialists/mathspecialist/","timestamp":"2014-04-16T10:40:22Z","content_type":null,"content_length":"41724","record_id":"<urn:uuid:94a0f900-d8a3-44cf-9203-b15a3157d6d4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spring City, PA Math Tutor
Find a Spring City, PA Math Tutor
...Most of the time people get hung up on the language or complex symbols used in math and science when really the key to understanding is to be able to look beyond those things and visualize
something physical. I promote using some imagination when looking at these topics, especially in physics. ...
16 Subjects: including precalculus, algebra 1, algebra 2, calculus
...Just recently (2012), I retired from the public school system after having taught for a total of 19 years. I taught for seven years in the archdiocese of Philadelphia at the middle school
level. I have a master's degree in educational leadership and a bachelor's degree in mathematics.
13 Subjects: including precalculus, algebra 1, algebra 2, geometry
...Someone who loves people. 2. Someone who shows up on time. 3. Someone who is prepared to teach the desired subject matter. 4.
16 Subjects: including calculus, chemistry, elementary (k-6th), physics
...I have experience tutoring in all types of basic math, from counting and basic arithmetic through telling time and word problems. I took the MCATs once and got a 40, putting me in the 99.5th
to 99.9th percentile, and I already have substantial experience with standardized test tutoring from my w...
35 Subjects: including algebra 1, algebra 2, linear algebra, calculus
Hi, I graduated from the College of William and Mary with a Ph.D. degree in Chemistry, and this is my 7th year teaching chemistry in college. I like to tutor chemistry as well as math, and I look
forward to working with you to improve your understanding of chemistry and/or math. I am an instructor in college teaching chemistry, and I have taught organic chemistry (both semesters) many
9 Subjects: including algebra 1, algebra 2, chemistry, geometry
|
{"url":"http://www.purplemath.com/Spring_City_PA_Math_tutors.php","timestamp":"2014-04-21T07:11:16Z","content_type":null,"content_length":"23930","record_id":"<urn:uuid:2b2638a0-250a-4baa-a850-8ac405396755>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum statistics in the classical limit
The preceding analysis regarding the quantum statistics of ideal gases is summarized in the following statements. The mean number of particles occupying quantum state $r$ is

$$\bar{n}_r = \frac{1}{e^{\,\alpha+\beta\epsilon_r}\pm 1},\tag{617}$$

where the upper sign corresponds to Fermi-Dirac statistics and the lower sign corresponds to Bose-Einstein statistics. The parameter $\alpha$ is determined by the requirement that the gas contain $N$ particles in total:

$$\sum_r \frac{1}{e^{\,\alpha+\beta\epsilon_r}\pm 1} = N.\tag{618}$$

Finally, the partition function of the gas is given by

$$\ln Z = \alpha N \pm \sum_r \ln\!\left(1\pm e^{-\alpha-\beta\epsilon_r}\right).\tag{619}$$

Let us investigate the magnitude of $e^{\alpha}$ in some important limiting cases. Consider, first, the case of a gas whose concentration is made sufficiently low. The constraint (618) can only be satisfied if each term in the sum over states is made sufficiently small; i.e., if $\bar{n}_r\ll 1$, or $e^{\,\alpha+\beta\epsilon_r}\gg 1$, for all states $r$.

Consider, next, the case of a gas made up of a fixed number of particles when its temperature is made sufficiently large: i.e., when $\beta$ is made sufficiently small. In the sum (618), the terms of appreciable magnitude are those for which $\beta\epsilon_r\lesssim 1$, so more and more terms contribute as $\beta$ falls. In order to prevent the sum from exceeding $N$, the parameter $\alpha$ must become large enough that each term remains small: i.e., it is again necessary that $\bar{n}_r\ll 1$, or $e^{\,\alpha+\beta\epsilon_r}\gg 1$, for all states $r$.

The above discussion suggests that if the concentration of an ideal gas is made sufficiently low, or the temperature is made sufficiently high, then

$$e^{\,\alpha+\beta\epsilon_r}\gg 1\tag{620}$$

for all states $r$, or, equivalently,

$$\bar{n}_r\ll 1\tag{621}$$

for all $r$. We refer to the limit in which Eq. (620) and Eqs. (621) are satisfied as the classical limit.

According to Eqs. (617) and (620), both the Fermi-Dirac and Bose-Einstein distributions reduce to

$$\bar{n}_r = e^{-\alpha-\beta\epsilon_r}$$

in the classical limit, whereas the constraint (618) yields

$$\sum_r e^{-\alpha-\beta\epsilon_r} = N.\tag{623}$$

The above expressions can be combined to give

$$\bar{n}_r = N\,\frac{e^{-\beta\epsilon_r}}{\sum_s e^{-\beta\epsilon_s}}.$$

It follows that in the classical limit of sufficiently low density, or sufficiently high temperature, the quantum distribution functions, whether Fermi-Dirac or Bose-Einstein, reduce to the Maxwell-Boltzmann distribution. It is easily demonstrated that the physical criterion for the validity of the classical approximation is that the mean separation between particles should be much greater than their mean de Broglie wavelengths.

Let us now consider the behaviour of the partition function (619) in the classical limit. We can expand the logarithm to give

$$\ln Z \simeq \alpha N + \sum_r e^{-\alpha-\beta\epsilon_r} = \alpha N + N.$$

However, according to Eq. (623),

$$\alpha = \ln\!\left(\sum_r e^{-\beta\epsilon_r}\right) - \ln N.$$

It follows that

$$\ln Z = N\ln\!\left(\sum_r e^{-\beta\epsilon_r}\right) - N\ln N + N.$$

Note that this does not equal the partition function $\ln Z_{\rm MB} = N\ln\!\left(\sum_r e^{-\beta\epsilon_r}\right)$ computed in Eq. (615) from Maxwell-Boltzmann statistics: i.e., $\ln Z \neq \ln Z_{\rm MB}$. In fact,

$$\ln Z = \ln Z_{\rm MB} - \ln N!\,,$$

where use has been made of Stirling's approximation, $\ln N! \simeq N\ln N - N$. The division by $N!$ is the same correction that was introduced, in an ad hoc fashion, in Sect. 7.7 in order to avoid the non-physical consequences of the Gibb's paradox. Clearly, there is no Gibb's paradox when an ideal gas is treated properly via quantum mechanics.

In the classical limit, a full quantum mechanical analysis of an ideal gas reproduces the results obtained in Sects. 7.6 and 7.7, except that the arbitrary parameter $h_0$ (the phase-space cell size) is replaced by Planck's constant $h$.
A gas in the classical limit, where the typical de Broglie wavelength of the constituent particles is much smaller than the typical inter-particle spacing, is said to be non-degenerate. In the
opposite limit, where the concentration and temperature are such that the typical de Broglie wavelength becomes comparable with the typical inter-particle spacing, and the actual Fermi-Dirac or
Bose-Einstein distributions must be employed, the gas is said to be degenerate.
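To sketch the criterion quoted above (assuming the standard single-particle partition function for an ideal monatomic gas derived in earlier sections), note that Eq. (623) gives

$$e^{-\alpha} = \frac{N}{\sum_r e^{-\beta\epsilon_r}} = \frac{N}{V}\left(\frac{h^2}{2\pi m k_B T}\right)^{3/2} = n\,\lambda_{\rm dB}^{\,3},$$

so the classical-limit condition $e^{-\alpha}\ll 1$ is equivalent to $\lambda_{\rm dB}\ll n^{-1/3}$: the mean de Broglie wavelength must be much smaller than the mean inter-particle spacing.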
Richard Fitzpatrick 2006-02-02
|
{"url":"http://farside.ph.utexas.edu/teaching/sm1/lectures/node82.html","timestamp":"2014-04-21T07:05:51Z","content_type":null,"content_length":"18914","record_id":"<urn:uuid:e3fc0bab-2030-4db6-8df6-cd1601211149>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Java3D World coordinate system vs. Object coordinate system
Joined: Jun 25, 2002
Posts: 1
I'm new to Java3D but have a fair to good amount of 3D application development experience. I am trying to write a Swing application that has an object centered around the origin. I want to be able to rotate this object around, but it seems to me that when I perform the rotations using the functions
transform3D_object.rotX(radians);
etc., it rotates the object around the WORLD coordinate system. I am trying to figure out how to get the rotations to apply to the OBJECT's coordinate system. That is to say, the object has an implicit x, y, and z axis of its own. I want the rotations to be performed about the object's coordinate system. Therefore, say if the object is rotated about its x axis, the object's y axis will rotate around, so if you then do a rotation about the object's y axis, that rotation would not be the same as rotating the object around the world's y axis.
I see in a Java3D book I have that you can use quaternions to rotate an object about an arbitrary axis. I guess what I really want to know is: a) is it true that objects being rotated using the rotX function are ALWAYS rotated around the world coordinate axis, regardless of the orientation of the object's coordinate system, and b) if so, what is the method for doing rotations about an object's coordinate system?
Thanks for the help, I much appreciate it.
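For what it's worth, here is a sketch of one common answer, under the assumption that the object hangs beneath a TransformGroup. rotX by itself only builds a rotation matrix; whether that rotation ends up acting about the world axes or the object's own axes depends on which side of the accumulated transform you multiply it onto (strictly, "world" here means the parent frame of the TransformGroup, and the group needs its transform read/write capabilities set if the scene graph is live):

import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;

public class RotationHelper {

    // Rotation about the object's OWN x axis: post-multiply, so the new
    // rotation is composed in the object's current (local) frame.
    public static void rotateLocalX(TransformGroup tg, double radians) {
        Transform3D current = new Transform3D();
        tg.getTransform(current);
        Transform3D rot = new Transform3D();
        rot.rotX(radians);
        current.mul(rot); // current = current * rot
        tg.setTransform(current);
    }

    // Rotation about the WORLD (parent-frame) x axis: pre-multiply instead.
    public static void rotateWorldX(TransformGroup tg, double radians) {
        Transform3D current = new Transform3D();
        tg.getTransform(current);
        Transform3D rot = new Transform3D();
        rot.rotX(radians);
        rot.mul(current); // rot = rot * current
        tg.setTransform(rot);
    }
}

So, to the two questions: a) rotX itself is neither world- nor object-relative; it only becomes one or the other through the order of composition, and b) post-multiplying the accumulated transform, as in rotateLocalX above, is one way to rotate about the object's own axes.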
Hi "Greg B", welcome to JavaRanch.
Sorry I can't answer your specific question as I don't know much about the Java3D API, although I have the Manning book and plan to go over it at some point.
In the meantime, please change your name to comply with the naming policy to which you agreed when you registered here. You need more than a single letter in your last name.
For your publicly displayed name,
use a first name, a space, and a last name.
You can change your name:
You can also find the naming policy:
Thank You!
[ June 28, 2002: Message edited by: Rob Ross ]
SCJP 1.4
|
{"url":"http://www.coderanch.com/t/271293/java/java/Java-World-coordinate-system-Object","timestamp":"2014-04-20T06:53:09Z","content_type":null,"content_length":"21146","record_id":"<urn:uuid:dbc0f01d-e443-454f-9bcf-bfb93c5a240f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Opponents of pi mark special day
28 June 2011 Last updated at 09:46
'Tau day' marked by opponents of maths constant pi
The mathematical constant pi is under threat from a group of detractors who will be marking "Tau Day" on Tuesday.
Tau Day revellers suggest a constant called tau should take its place: twice as large as pi, or about 6.28 - hence the 28 June celebration.
Tau proponents say that for many problems in maths, tau makes more sense and makes calculations easier.
Not all fans of maths agree, however, and pi's rich history means it will be a difficult number to unseat.
"I like to describe myself as the world's leading anti-pi propagandist," said Michael Hartl, an educator and former theoretical physicist.
"When I say pi is wrong, it doesn't have any flaws in its definition - it is what you think it is, a ratio of circumference to diameter. But circles are not about diameters, they're about radii;
circles are the set of all the points a given distance - a radius - from the centre," Dr Hartl explained to BBC News.
By defining pi in terms of diameter, he said, "what you're really doing is defining it as the ratio of the circumference to twice the radius, and that factor of two haunts you throughout".
The discrepancy is most noticeable when circles are defined not as a number of degrees, but as what are known as radians - of which there are two times pi in a full circle. With tau, half a circle is
one-half tau.
Dr Hartl reckons people still use degrees as a measure of angle because pi's involvement in radians makes them too unwieldy.
He credits Bob Palais of the University of Utah with first pointing out that "pi is wrong", in a 2001 article in the Mathematical Intelligencer.
But it is Dr Hartl who is responsible for the Tau Manifesto - calling tau the more convenient formulation and instituting Tau Day to celebrate it.
Kevin Houston, a mathematician from the University of Leeds, counts himself as a convert.
"It was one of the weirdest things I'd come across, but it makes sense," he told BBC News.
"It's surprising people haven't changed before. Almost anything you can do in maths with pi you can do with tau anyway, but when it comes to using pi versus tau, tau wins - it's much more natural."
Dr Hartl is passionate about the effort, but even he is surprised by the fervent nature of some tau adherents.
"What's amazing is the 'conversion experience': people find themselves almost violently angry at pi. They feel like they've been lied to their whole lives, so it's amazing how many people express
their displeasure with pi in the strongest possible terms - often involving profanity.
"I don't condone any actual violence - that would be really bizarre, wouldn't it?"
BBC News website readers have been sending in their thoughts on the pi versus tau debate; a selection of them appears below.
John R Jones from Lytham St Annes, UK writes:
As a mathematician I respect the value of pi and Dr Hartl's views are opinionated bias against the number - especially as he harps on about circles being to do with radii and not circumferences. We
all know the circumference is the length around the circle, so why doesn't it matter? All circles are similar in shape and pi is a convenient ratio used in many formulae connecting length, area and volume.
Alan Jones in Lee-on-the-Solent, UK
I teach maths to aircraft engineering apprentices and although I am no maths scholar, I use pi a lot of the time. Replacing pi with tau would be plain silly. Take the area of a circle: pi x radius x
radius. If we used tau it would be (Tau/2)x (diameter x diameter)/4. It is bad enough trying to get these very able apprentices to do simple engineering maths without making it more complicated.
We're trying to make science and engineering more attractive not more difficult.
Louie from Chicago, Illinois, US emails:
Not once did anyone mention area (pi*r^2). Pi makes that equation very clean. I do understand the arguments with radians, but most folk use degrees. Even if radians are needed, they just do a
conversion (theta*pi/180). It's just too late to make this change. It's like the qwerty keyboard. The layout was not developed because of typing efficiency, but the likeliness of alternating letters
between hands. This was all in hopes of preventing typewriter jams. Typewriters have long since gone and there are much more efficient keyboard layouts, but it's so much easier just to stick with
what people are already familiar - same goes with pi.
Darren in Bagshot, UK says:
Tau is nonsense. You cannot use the radius in the relationship to the circumference, simply because it is theoretical and cannot be measured. That is why in engineering you use the diameter for
calculations as this is physical and can be measured. The radius cannot be measured ie for the area of a circle engineers use pi multiplied by diameter squared over four and not pi multiplied by
radius squared. Pi rules.
Liam from Oxford, UK emails:
It makes sense. I sometimes need to write software for scientific applications and I often end up defining a constant "TWO_PI" to save constantly multiplying pi by two and thus make the algorithms
more efficient. I don't think it's worth making a fuss though - I just use whichever is most convenient for the current problem.
Alec Findlater in Reigate, UK writes:
The best thing about pi is the formula e^(i x pi) +1 = 0. This includes e, i, pi, 1 and 0, which are pretty well the most important numbers in mathematics. Changing to tau, and having to use tau/2 in
place of pi loses elegance. Hang on in there pi.
Ben S in New Orleans, US
While I basically agree with the more elegant concept of using tau for circle-related calculations, completely disposing of pi would yield a sloppier, less elegant, version of Euler's Identity (in
simplified notation: e^(i * pi) = -1). Euler's Identity brings together five constants: 0, 1, pi, e, and i in one place. Tau/2 just doesn't look as appealing, unless, of course you want to be able to
state that "Euler's Identity brings together six constants: 0, 1, 2, tau, e, and i in one place".
Jenny Bartle in Bristol, UK says:
The most important reason to use tau is that it will be easier to teach a lot of key concepts at GCSE and A-level maths, and physics and engineering too. These are subjects that are already
considered hard, and we don't want to inconvenience people more than we need to!
Gareth Boyd in Aberdeen, UK writes:
Dr Hartl's theoretical background would seem to be on show here. He has forgotten about the practical application of mathematics - engineering. Tau is already one of the most important symbols in
mechanical engineering as it denotes shear stress. Additionally the ratio of diameter to circumference is very important when we work with bars of material or pipes. We tend not to purchase these by
the radius. Perhaps a little more thought and debate are required in this matter before we start a revolution.
Simon in London writes:
Tau makes much more sense than pi. Pi is the equivalent of defining 1kg as the mass of two litres of water or defining 1 joule as the energy expended imparting a force of one newton over two metres.
Its definition contains an unnecessary factor of two which makes it inelegant and has to be compensated for in almost every situation in which it is used.
Emma Faulkner in Leicester emails:
I wholeheartedly agree with the use of tau rather than pi! Hadn't heard of it before today but when making calculations to do with circles (I am a programmer and work with mapping software) I always
have to calculate from a central point, which means I'm using the radius not the diameter. Having to then multiply by two is counter-intuitive. Circles are all about the central point and the
distance from there to the circumference!
Dana L Marek in Houston, Texas, US says:
I will continue to celebrate Pi Day, alongside Tau Day - anything that promotes mathematics and may encourage children to improve their maths skills. For years I have been telling them maths = $ -
the more you learn, the more you earn.
|
{"url":"http://www.bbc.co.uk/news/science-environment-13906169","timestamp":"2014-04-17T09:40:05Z","content_type":null,"content_length":"103691","record_id":"<urn:uuid:d5b3d5ee-9db8-43f0-9936-53b94c5cdf49>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Introduction to Bayesian Networks with Jayes
Submitted by Michael Kutschke on
At Eclipse Code Recommenders, most of our recommendation engines use Bayesian Networks, which are a compact representation of probability distributions. They thus serve to express relationships
between variables in a partially observable world. Our recommenders use these networks to predict what the developer wants to use next, based on what he has done previously.
When the Code Recommenders project first started, there was a need for a new open-source, pure-Java bayesian network library. As part of my bachelor thesis, I created such a library, called Jayes.
Jayes has since become the backend of most Code Recommenders’ recommendation engines. Its development continues and a new version of Jayes will be included in the upcoming Code Recommenders 2.0.
This post describes how to use Jayes for your own inference tasks.
Guest Blogger: Michael
Michael Kutschke is currently completing his Master of Science at the Computer Science department of Technische Universität Darmstadt. His areas of interest include large scale data analysis,
statistics and optimization. Michael contributes to the open source Code Recommenders project through his student work terms with Codetrails.
What Jayes is, and what it isn’t
Jayes is a library for Bayesian networks and the inference in such networks. At the moment, there is no learning component included. (We can, however, recommend the Apache Mahout library.)
Where can I get it?
There are two sources for getting your hands on Jayes’ source code:
The version in the Code Recommenders repository that we’re using for this post has not, at the time of writing, been merged into the Eclipse repository but will be soon. Also, it is still under
development and does not yet carry the version number 2.0 (as we are just about to start the 2.0 branch of Code Recommenders). That means for the most current version of Jayes, my Github repository
is the place to go. The Github repository also contains the classes used for evaluating and benchmarking Jayes, which will not move to Eclipse in the foreseeable future. These classes allow you to
assess the runtime performance of Jayes.
How do I use it?
Before we start, you’ll need to have a rough idea of what a Bayesian network is and what it looks like. Wikipedia has a pretty good introduction. So let’s get started. For the use case of inference,
there are only three classes you need to know:
• org.eclipse.recommenders.jayes.BayesNet
• org.eclipse.recommenders.jayes.BayesNode
• org.eclipse.recommenders.jayes.inference.junctionTree.JunctionTreeAlgorithm
The first two are used for setting up the model itself, while the third is the algorithm used for inference.
The BayesNet is a container for BayesNodes, which represent the random variables of the probability distribution you are modeling. The preferred way in Jayes 2.0 for creating BayesNodes is through
BayesNet.createNode(String name). This is most likely the only method of BayesNet you need to use.
A BayesNode has outcomes, parents, and a conditional probability table. It is important to set the probabilities last, after setting the outcomes of the parent nodes. [Diagram from the original post: the setup order is outcomes, then parents, then probabilities.]
The methods from BayesNode that perform these steps are
• BayesNode.addOutcomes(String...),
• BayesNode.setParents(List<BayesNode>), and
• BayesNode.setProbabilities(double...).
The probabilities in the conditional probability table need to be specified in a particular order. This is shown in the following code snippet:
BayesNet net = new BayesNet();
BayesNode a = net.createNode("a");
a.addOutcomes("true", "false");
a.setProbabilities(0.2, 0.8);

BayesNode b = net.createNode("b");
b.addOutcomes("one", "two", "three");
b.setParents(Arrays.asList(a));
b.setProbabilities(
    0.1, 0.4, 0.5, // a == true
    0.3, 0.4, 0.3  // a == false
);

BayesNode c = net.createNode("c");
c.addOutcomes("true", "false");
c.setParents(Arrays.asList(a, b));
c.setProbabilities(
    // a == true
    0.1, 0.9, // b == one
    0.0, 1.0, // b == two
    0.5, 0.5, // b == three
    // a == false
    0.2, 0.8, // b == one
    0.0, 1.0, // b == two
    0.7, 0.3  // b == three
);
We now have a network and want to perform inference. The class used for this task is JunctionTreeAlgorithm.
IBayesInferer inferer = new JunctionTreeAlgorithm();
inferer.setNetwork(net);

Map<BayesNode, String> evidence = new HashMap<BayesNode, String>();
evidence.put(a, "false");
evidence.put(b, "three");
inferer.setEvidence(evidence);

double[] beliefsC = inferer.getBeliefs(c); // {P(c="true"|evidence), P(c="false"|evidence)}
This gives us the probability distribution P(c | a = “false”, b =”three”).
Potential pitfalls
Inference algorithms use an internal representation of the network that will not be updated when you update the BayesNet. Should your use case require changes to the BayesNet, you need to call
IBayesInferer.setNetwork() again to update the internal representation.
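For example, here is a minimal sketch that reuses net and inferer from the snippets above (the added node d is purely illustrative):
BayesNode d = net.createNode("d");
d.addOutcomes("true", "false");
d.setProbabilities(0.5, 0.5); // d has no parents, so this is just a prior
inferer.setNetwork(net); // rebuild the internal representation for the changed network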
We discourage using a BayesNode from a different network as a parent. The inference algorithms will access all BayesNodes through the BayesNet and this mixing of BayesNodes from different networks is
very likely to lead to errors.
Advanced features
In Jayes 2.0, we added several advanced features that allowed us to trade-off between three major performance indicators for the inference engine: memory consumption, runtime performance, and
numerical stability. For example, in terms of numerical stability, one limitation of Jayes is the network size. With increasing network size, any observed event is so unlikely that it becomes
indistinguishable from an impossible event. This leads to an error because everything suddenly has zero probability – which is, of course, not true and Jayes consequently throws an exception to
inform the user about this situation. Some of the advanced features described below have an influence on when this problem appears, and therefore how large the networks can become.
Out-of-the-box, Jayes allows for fine-tuning in several areas:
• Floating point representation
• Logarithmic values
• Graph elimination algorithm
• Factor decomposition
Floating point representation
Jayes can compute with double precision as well as single precision. Using single precision consumes less memory, but networks with more than ~200 variables are likely to suffer from numerical
instability. Double precision can, on the other hand, easily support several thousand variables.
To set the floating point representation, use org.eclipse.recommenders.jayes.factor.FactorFactory.setFloatingPointType(Class). (Valid arguments are float.class and double.class). The default is
double precision, although using single precision reduces the memory consumption by approximately 50% and has no measurable impact on runtime performance.
Important: You need to set floating point precision before the network is set in the inference algorithm.
JunctionTreeAlgorithm algo = new JunctionTreeAlgorithm();
algo.getFactory().setFloatingPointType(float.class); // accessor name assumed; the factory is the FactorFactory described above
algo.setNetwork(net);
Logarithmic values
Jayes can also use logarithmic values internally. This drastically improves numerical stability, but approximately doubles the time needed for inference. The FactorFactory is again the class that
provides this option.
JunctionTreeAlgorithm algo = new JunctionTreeAlgorithm();
algo.getFactory().setUseLogScale(true); // setter name assumed; enables logarithmic factor values
algo.setNetwork(net);
Graph elimination algorithm
Jayes has also added the capability to choose the graph elimination algorithm used for the generation of the junction tree used internally by JunctionTreeAlgorithm. This has influence on the time
needed to set up the algorithm, as well as potentially the performance of the resulting inference engine, both in terms of memory consumption and runtime performance. There are two heuristic
algorithms available which can be set in the JunctionTreeAlgorithm. Both reside in the org.eclipse.recommenders.jayes.util.triangulation package.
• MinFillIn: the best available quality, but is not suited for big networks with several hundred variables, as loading will take too long. Thus best suited for small, complex networks. This is the default.
• MinDegree: suitable for any size of network, but with complex networks the quality may suffer a bit. This could lead to a higher memory footprint, increased inference times and eventually the
danger of numerical instability.
JunctionTreeAlgorithm algo = new JunctionTreeAlgorithm();
JunctionTreeBuilder builder = JunctionTreeBuilder.forHeuristic(new MinFillIn());
algo.setJunctionTreeBuilder(builder); // setter name assumed
algo.setNetwork(net);
Factor decomposition
For probability distributions learned from real data, many parameters are zero because of a lack of data. However, in order to be able to predict in previously unseen cases, the distributions are
smoothed, meaning some of the probability mass is distributed among the cases we did not see in our data.
Jayes is able to take advantage of sparse distributions. However, the smoothing intentionally leads to non-sparse distributions. Using linear algebra magic, Jayes provides algorithms to make the
smoothing an explicit part of the model, in the form of new variables. This allows the distributions to be sparse, which saves memory – for our models ~20-30%. The extra variables make the model more
complex, leading to increased inference times – for our models twice the time. So, this again is a memory/time trade-off.
The decomposition algorithms can be found in the org.eclipse.recommenders.jayes.transformation bundle. The algorithm to use for smoothed distributions is the SmoothedFactorDecomposition. This class
has one public method, decompose(BayesNet,BayesNode), which will decompose the given BayesNode and augment the network with the results.
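In code, this might look as follows (a sketch; decompose is assumed here to be an instance method, and node stands for a hypothetical high-arity node of the network net from above):
SmoothedFactorDecomposition decomposition = new SmoothedFactorDecomposition();
decomposition.decompose(net, node); // rewrites node's CPT and augments the network with helper variables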
Here are the most important things to think about when using this feature:
• Evaluate the use of this feature for every model you use. For some models there will be no memory benefit.
• It is not the best strategy to decompose all nodes – therefore you should choose the nodes to decompose. The more a distribution needs to be smoothed, the better the decomposition will perform.
Jayes and You
I hope this article has given you an overview of what Jayes does and how you can use it. If you have any questions regarding Jayes, please contact us on the Eclipse Code Recommenders mailing-list or
at info@codetrails.com
If you like what the Code Recommenders and Codetrails team is doing and want to keep up to date: Follow us on Google+.
Permalink Submitted by Ashwin Jayaprakash (not verified) on Tue, 08/27/2013 - 21:18.
Hi Michael, would you be able to tell/direct me how the Eclipse plugin is building this network? I imagine the network is very complex and built by analyzing the AST of all loaded classes.
I looked in the source code but couldn't figure out where the network was actually being built.
(Source links: #1 and #2)
Permalink Submitted by Michael Kutschke (not verified) on Thu, 08/29/2013 - 13:54.
|
{"url":"http://www.codetrails.com/blog/introduction-bayesian-networks-jayes","timestamp":"2014-04-21T02:21:22Z","content_type":null,"content_length":"54448","record_id":"<urn:uuid:90a0daad-83a4-402c-9efc-84840547300f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Surface Modeling
Implicit Surfaces
POVRAY isosurface renders an equipotential surface defined by an implicit equation and bound constraints
ZENO implicit surface raytracer (public domain)
Implicit Surface Polygonizer (in C)
Implicit Surfaces Bibliography
Implicit Surface Links (including upcoming conferences)
The Implicit Site
``A Repository for Information on the Use of Implicit Surfaces in Computer Graphics''
Implicit Solid Modeling Techniques for Reconstruction of 3-D Data Points
Mesh Generation
Mesh Generation & Grid Generation on the Web The list of public domain and commercial mesh generators
Mesh Generation: Theory, Algorithms, and Software (A survey by Stephen Vavasis)
Surface Interpolation and Approximation
Surfpack, multidimensional function approximation for sparse, irregularly-spaced data sets (C++ and Fortran 77)
VolPack Volume Rendering Library (free of charge for non-commercial use)
AI-GEOSTATS, Spatial data analysis and Geoinformatics
A very useful source for 2D interpolation and approximation
Surface Reconstruction from Unorganized Points (html, Ph.D. Thesis by Hugues Hoppe)
Links to 3D scanning and surface reconstruction
Oberflächen- und Volumen-Rekonstruktion komplexer Objekte
Voronoi Regions in Arbitrary Dimensions
Frequently Asked Questions in Polyhedral Computation
Tetrahedral and Triangle Methods Bibliography
Surfactor (public domain, SGI executables only)
Volume Visualization Resources (links to public domain software, web pages, etc.)
Image Analysis, Image Processing, and 3-D Reconstruction (including links to public domain software)
Three-dimensional surface reconstruction from multiple images
Annotated Computer Vision Bibliography: Three Dimensional Object Description and Computation Techniques (with many cross-links)
Three-Dimensional Surface Reconstruction Bibliography
Perspective Texture Mapping by Chris Hecker
Radial Basis Functions
Scattered Data Interpolation and Approximation using Radial Base Functions, multivariate (in Matlab)
Fast Fitting and Evaluation of Radial Basis Functions
Visualization Software
Ray Tracing
RayLab Raytracer (freeware)
POV-Ray, Persistence of Vision Ray Tracer (freeware)
Rayshade Raytracer (freeware, programmable?)
The Geometry Center Software Page and its free Geomview interactive 3D geometry viewing program
German mirror site
Data Analysis and Visualization Tools, a comprehensive archive
3-D Software and Imaging Sites
Visualization Software (University of Minnesota Supercomputing Institute)
NASA Annotated Scientific Visualization Bibliography
NIH Image (image processing software) (Macintosh based)
Tess, online 3D graphics newsletter
Visual Numerics (PV-wave, Stanford-Grapics; commercial)
Software for Graphics and Data Analysis
3d artists from all around the world (collected by Raphael Benedet)
Khoros, image processing and data visualization environment
Digital Image Processing Instructional Database (by Robert Bamberger and Michael Kuperstein)
graphics file formats (from Webopedia, the online encyclopedia dedicated to computer technology)
IGES - Initial Graphics Exchange Specification
Graphics, Modeling, and CAGD Conferences
4th AFA Conf. Curves and Surfaces Saint-Malo (France), July 1-7, 1999.
Eighth IMA Conference on THE MATHEMATICS OF SURFACES
University of Birmingham UK 31st August - 2nd September 1998
Confirmed invited speakers include:
P. Besl (Silicon Graphics), R. Farouki (University of Michigan), H. Hagen
(University of Kaiserlauten), J. Hoschek (Technische Hochschule Darmstadt),
G. Lukacs (Hungarian Academy of Sciences), D. Manocha (University of North
Carolina), and H.-P. Seidel (University of Erlangen).
Scientific Visualization Bibliography (over 350 annotated references)
Computational Geometry Database of the Max-Planck-Institut für Informatik, Saarbrücken, Germany
Directory of Computational Geometry Software
Mathematical Software
Statistics Links
Mathematics Links
Global Optimization
my home page (http://www.mat.univie.ac.at/~neum)
Arnold Neumaier (Arnold.Neumaier@univie.ac.at)
|
{"url":"http://www.mat.univie.ac.at/~neum/surface.html","timestamp":"2014-04-18T03:47:29Z","content_type":null,"content_length":"10403","record_id":"<urn:uuid:8a6db71e-a41d-4253-86d7-543575bccbc4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Khinchin constant: Introduction to the classical constants (subsection ClassicalConstants/02)
Classical constants and the imaginary unit include eight basic constants: the golden ratio φ, pi π, the number of radians in one degree 1°, the Euler number (or Euler constant or base of the natural logarithm) e, the Euler-Mascheroni constant (Euler gamma) γ, the Catalan number (Catalan's constant) C, the Glaisher constant (Glaisher-Kinkelin constant) A, the Khinchin constant (Khintchine's constant) K, and the imaginary unit i.
They are defined by the following formulas:
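(The formulas were rendered as images in the source; the standard definitions are reproduced here for reference.)
$\varphi=\frac{1+\sqrt{5}}{2},\qquad 1^\circ=\frac{\pi}{180}\ \text{rad},\qquad e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n},$
$\gamma=\lim_{n\to\infty}\left(\sum_{k=1}^{n}\frac{1}{k}-\ln n\right),\qquad C=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{2}},$
$A=\exp\!\left(\tfrac{1}{12}-\zeta'(-1)\right),\qquad K=\prod_{k=1}^{\infty}\left(1+\frac{1}{k(k+2)}\right)^{\log_{2}k},\qquad i=\sqrt{-1}.$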
The number π is the ratio of the circumference of a circle to its diameter.
|
{"url":"http://functions.wolfram.com/Constants/Khinchin/introductions/ClassicalConstants/02/","timestamp":"2014-04-16T22:26:30Z","content_type":null,"content_length":"39693","record_id":"<urn:uuid:acc4c79d-54fa-4dfd-8bda-4f84edc1d9fa>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I'm not understanding this limit problem...
March 29th 2009, 09:17 PM
I'm not understanding this limit problem...
lim of (n)sin(1/n) as n goes to infinity. I thought the limit would be 0 since sin[1/(really big number)] = 0, but the book says the limit is 1. how do you get the limit as 1? Thanks.
March 29th 2009, 09:32 PM
Chris L T521
Let $z=\frac{1}{n}\implies n=\frac{1}{z}$. Then as $n\to\infty$, $z\to0$. Thus, $\lim_{n\to\infty}n\sin\frac{1}{n}=\lim_{z\to0}\frac{\sin z}{z}=\dots$
March 29th 2009, 09:52 PM
Dont forget this
Its not just
sin[1/(really big number)]
But its
(really big number) x (sin[1/(really big number)])
So can't just do it that way
However put 1/n =t
So when n-> infinity , t-> 0
Hence it becomes
Lt_{t->0} (sin[t]/t) = 1
For the geometrical proof of this
Read this
--thanks to Chris for it
EDIT: Chris won!! :D
|
{"url":"http://mathhelpforum.com/calculus/81386-im-not-understanding-limit-problem-print.html","timestamp":"2014-04-17T22:23:03Z","content_type":null,"content_length":"6506","record_id":"<urn:uuid:02c5934d-8c57-4ce6-aef5-7bf36de17978>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HESI A2 for the entrance exam study guide, Gateway - pg.5
HESI A2 for the entrance exam study guide, Gateway - page 5
by CAwant2be
hi everyone, i need to take the hesi a2 for the entrance exam next month at gateway college.... i am not able to go to the book store to purchase it..... what is the name of the study guide? can i
buy the book on amazon.com? ... Read More
1. 0
Hesi A2
any one know free practice test for Hesi test
2. 0
I need help to passing HEsi Test I made it 60 I need atleast 75. I am looking free practice site ....... any one I realy appreciated
3. 0
Jun 30, '10 by
Madax ,
What I did, it might help you too, with reading comprehension, grammar,vocab... and even math. If you are in Arizona or whatever state you are in, go to the public library and make a card (it is
free) with that card you can get all kind of books written by Learning Express ("205 reading comprehension tests for nursing") is one of them
or even access with the library card the Learning Express website, where they have a bunch of tests, for each subject. Also if money is not a problem for you, you could go on their web site and
buy all the books thay have available, they have a lot of books with a lot of tests (AST,SAT,GMA,NLN, TOEFL..etc) you will get help in grammar, vocabulary, homonyms, math,reading plus it is
timed, so you will know how you did right after the test. I did all the tests, and I have seen results)
Good lack, I hope this info will do good to you.
4. 0
from rzyzzy
You get to use the windows calculator through the entire test. And at least @ gateway, we didn't even take the bio section of the test.
It sounds like you're well prepared, get some sleep - don't rush through the test, and pee before you go into actual test-taking room - we couldn't leave after we started the test!
Thanks! its seem like im prepared but im still scared. im not ready yet, im memorizing how to convert gallons, ounces, mL... ETC... also the roman numeral and military time, sound easy but when i
look at them i always forget..
I need more time to study, english is my second language & i dont usually read the newspaper because there are words that has deeper meanings. i suck at vocabulary.. But im trying to read them
nowadays, to prepare myself for the reading part.
5. 0
Hey guys..
those of you who already took the test thank you for your help
HELP PLEASE
I have a question and please please please some one reply back.. I did not take the AP 2 yet and I am taking the test soon.. what is the AP section like on the test? alot of AP 2 stuff?
6. 0
I am preparing to take the HESI exam around 10/1. I've been studying the HESI Evolve book. Others have said the math portion is the easiest. Though the book has roman numerals, and tempreture
conversions (0 C = 32 F, and 100 C = 212 F), it doesn't describe how to break down temps. I know there are 2 ways to convert from F - C and vice versa (e.g., C = (F - 32) / 9 x 5, etc.). Are these
type questions on the exam? I.e. convert 72 F to C? And is the calculator available for these?
I've also read know metric. Does that mean just know the conversions? i.e. 1 ton = 2,000 pounds, 1 quart = 2 pints?
Since there aren't any type questions for addition, multiplications, etc. for these, I'm assuming just know what is in the book by memory?
Appreciate all the help here!
7. 2
from LoSe
I am preparing to take the HESI exam around 10/1. I've been studying the HESI Evolve book. Others have said the math portion is the easiest. Though the book has roman numerals, and tempreture
conversions (0 C = 32 F, and 100 C = 212 F), it doesn't describe how to break down temps. I know there are 2 ways to convert from F - C and vice versa (e.g., C = (F - 32) / 9 x 5, etc.). Are
these type questions on the exam? I.e. convert 72 F to C? And is the calculator available for these?
I've also read know metric. Does that mean just know the conversions? i.e. 1 ton = 2,000 pounds, 1 quart = 2 pints?
Since there aren't any type questions for addition, multiplications, etc. for these, I'm assuming just know what is in the book by memory?
Appreciate all the help here!
I understand your fear - In fact, I stayed up very late the evening before the test learning and re-learning the temperature conversion formulas... and didn't get any of those questions.. my test isn't the same as your test...
As far as english/metric conversions they're fluid questions - i.e., how many cc's/ml's in a gallon and a half, or how many ounces in 1000ml. The single most important formula to remember is 30 ml's = 1 ounce. - If you know how many ounces in a cup/pint/quart/gallon, you can figure the answer in your head from there - and YES... there is the basic "windows calculator" available the entire time.
There were roman numeral questions - In my case, if you knew how to count to a hundred using roman numerals, you'ld be fine. Also, there were military time questions - those are easy points you can get with not a lot of studying investment.
Other than that, I remember there were a large number of fraction questions - you'll need to know how to flip, fold, and mutilate fractions -
The single biggest skill I can think of with fractions is being able to pull them into a decimal, and how to convert a decimal to a fraction - if you can do that, windows calc will do the heavy lifting for you.
It depends a little bit on how your mind works, but in my case, the possible answers were often far enough apart that I used the calculator very little. 1/2 of 1/4 isn't 4, or 196, or 27/32...
Alot of the incorrect answers were created by doing the math incorrectly, which means a quick "reality check" can lead you to an accurate guesstimate...
Knock out the obviously wrong answers and the correct ones stand up on their own... if you're not "rattled"..
Another big point, and it's probably the hardest to remember leading up to the test - don't let them "psych" you out. You'll have more time than you'll need, so don't be afraid to read a question two or three times before selecting an answer - and if the question seems super easy, it probably is, but read it again anyway.
Every point on the Hesi is the same value as another - so you can still pass if you're baffled by roman numerals, or you forget how many ounces are in a cup, as long as you're not "rattled" by it.
The Hesi really is a great test for new nurses, because it teaches you a little bit about "triage"...
"Save" the points that can be saved, and let the others go with a clean conscience..
Last edit by rzyzzy on Aug 17, '10
8. 0
Wow, thank you for such a detailed response. That makes me feel much better about things. I feel comfortable with all you mentioned above, so hopeful to get great scores when I take it in
October. In the interim, I will study. Thanks again for the response!
9. 0
from rzyzzy
I understand your fear - In fact, I stayed up very late the evening before the test learning and re-learning the temperature conversion formulas... and didn't get any of those questions.. my
test isn't the same as your test...
As far as english/metric conversions they're fluid questions - i.e., how many cc's/ml's in a gallon and a half, or how many ounces in 1000ml. The single most important formula to remember is
30 ml's = 1 ounce. - If you know how many ounces in a cup/pint/quart/gallon, you can figure the answer in your head from there - and YES... there is the basic "windows calculator" available
the entire time.
There were roman numeral questions - In my case, if you knew how to count to a hundred using roman numerals, you'ld be fine. Also, there were military time questions - those are easy points
you can get with not a lot of studying investment.
Other than that, I remember there were a large number of fraction questions - you'll need to know how to flip, fold, and mutilate fractions -
The single biggest skill I can think of with fractions is being able to pull them into a decimal, and how to convert a decimal to a fraction - if you can do that, windows calc will do the
heavy lifting for you.
It depends a little bit on how your mind works, but in my case, the possible answers were often far enough apart that I used the calculator very little. 1/2 of 1/4 isn't 4, or 196, or 27/
Alot of the incorrect answers were created by doing the math incorrectly, which means a quick "reality check" can lead you to an accurate guesstimate...
Knock out the obviously wrong answers and the correct ones stand up on their own... if you're not "rattled"..
Another big point, and it's probably the hardest to remember leading up to the test - don't let them "psych" you out. You'll have more time than you'll need, so don't be afraid to read a
question two or three times before selecting an answer - and if the question seems super easy, it probably is, but read it again anyway.
Every point on the Hesi is the same value as another - so you can still pass if you're baffled by roman numerals, or you forget how many ounces are in a cup, as long as you're not "rattled"
by it.
The Hesi really is a great test for new nurses, because it teaches you a little bit about "triage"...
"Save" the points that can be saved, and let the others go with a clean conscience..
I understand that this is an older thread, but thank you so, so much for your helpful advice, Rzyzzy! After studying everything you described religiously, I managed to score a 90 on the math
portion of the Hesi A2! I didn’t have any temperature conversion questions either, too weird!
10. 0
Mar 1, '13 by
here's what I did to study for anyone else looking for advice: I used flashcards to help memorize conversions and units, and used some math flashcards to get better/faster at mental math. You can
make your own, or there are some you can buy for this type of stuff; I think making your own is a good idea b/c you have to think about it as you make them, but either way is good. A lot of the
math isn't too advanced on the test, but you don't want to make silly errors and lose points that shouldn't be lost which is why I suggest practicing some of the basics. That's where most people
lose points is needless, minor errors. Khan Academy is also a good resource I used some for math (can you tell that Math was a focus? haha). Finally, I got a study guide (got the one by Trivium
Test Prep) which I choose mainly b/c it had a lot of practice questions, but I actually really liked it for the lessons too. It proved to be very helpful with the science sections, which I would
definitely suggest putting some focus on that. For me, math and the science section (chem, bio, and A&P specifically) were what I studied for the most.
|
{"url":"http://allnurses.com/hesi-entrance-exam/hesi-a2-entrance-476777-page5.html","timestamp":"2014-04-16T10:49:35Z","content_type":null,"content_length":"52372","record_id":"<urn:uuid:12487a93-f13d-468f-886e-79fc142e56c8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A transient solver for current density in thin conductors for magnetoquasistatic conditions
Abstract (Summary)
A computer simulation of transient current density distributions in thin conductors was developed using a time-stepped implementation of the integral equation method on a finite element mesh. A study
of current distributions in thin conductors was carried out using AC analysis methods. The study of the AC current density distributions was used to develop a circuit theory model for the thin
conductor which was then used to determine the nature of its transient response. This model was used to support the design and evaluation of the transient current density solver.
A circuit model for strip lines was made using the Partial Inductance Method to allow for simulations with the SPICE circuit solver. Magnetic probes were designed and tested that allow for physical
measurements of voltages induced by the magnetic field generated by the current distributions in the strip line. A comparison of the measured voltages to simulated values from SPICE was done to
validate the SPICE model. This model was used to validate the finite-integration model for the same strip line.
Formulation of the transient current density distribution problem is accomplished by the superposition of a source current and an eddy current distribution on the same space. The mathematical
derivation and implementation of the time-stepping algorithm to the finite element model is explicitly shown for a surface mesh with triangular elements. A C++ computer program was written to solve
for the total current density in a thin conductor by implementing the time-stepping integral formulation.
Evaluation of the finite element implementation was made regarding mesh size. Finite element meshes of increasing node density were simulated for the same structure until a smooth current density
distribution profile was observed. The transient current density solver was validated by comparing simulations with AC conduction and transient response simulations of the SPICE model. Transient
responses are compared for inputs at different frequencies and for varying time steps. This program is applied to thin conductors of irregular shape.
Bibliographical Information:
School:Kansas State University
School Location:USA - Kansas
Source Type:Master's Thesis
Keywords:current density transient solver finite element method integral equation engineering electronics and electrical 0544
Date of Publication:01/01/2009
|
{"url":"http://www.openthesis.org/documents/transient-solver-current-density-in-524732.html","timestamp":"2014-04-18T16:05:41Z","content_type":null,"content_length":"10111","record_id":"<urn:uuid:a57fb436-e59e-420d-96cf-cce3adf66464>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Conic Questions!
January 15th 2010, 03:22 AM #1
Jan 2010
Conic Questions!
a) The point (x,y) is equidistant from the circle x^2 +y^2 = 1 and the point (2,0). Show that (x,y) must lie on the curve (x-1)^2 +y^2 = 4(x-(5/4))^2. Show that this curve is a hyperbola.
b) If two tangents with slopes m1, m2 intersect at a point (X,Y) show that m1 and m2 must be the roots of the quadratic equation:
(a^2 - X^2)m^2 + 2XYm + (b^2 - Y^2) = 0
and deduce that if the tangents are perpendicular to each other, the point (X,Y) lies on a circle, centre the origin.
a) I am unsure how to show that the point MUST lie on the curve. All I can think of doing is finding a point that I know is equidistant between the curve and point, and then substituting these
values into the equation of the curve? i.e. (1.5,0). Is this the correct way to approach this part of the problem.
To show that the curve is a hyperbola, do I just need to rearrange the equation they have given me into the form of a hyperbola ie. (x^2/a^2 - y^2/b^2 = 1) If so I have done this, and I have an
equation that resembles a hyperbola, but I'm not sure this really SHOWS that the curve is a hyperbola. I think maybe I am missing something, any help would be great!
b) I don't know im totally stuck!
a) I am unsure how to show that the point MUST lie on the curve. All I can think of doing is finding a point that I know is equidistant between the curve and point, and then substituting these
values into the equation of the curve? i.e. (1.5,0). Is this the correct way to approach this part of the problem.
To show that the curve is a hyperbola, do I just need to rearrange the equation they have given me into the form of a hyperbola ie. (x^2/a^2 - y^2/b^2 = 1) If so I have done this, and I have an
equation that resembles a hyperbola, but I'm not sure this really SHOWS that the curve is a hyperbola. I think maybe I am missing something, any help would be great!
yep, just rearrange to show that it is a hyperbola.
as for the other questions, umm... I'll think about it
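For part b), here is a sketch, assuming the conic is the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ (the question doesn't restate it): a line $y=mx+c$ is tangent to the ellipse iff $c^2=a^2m^2+b^2$. If the tangent passes through $(X,Y)$ then $c=Y-mX$, so $(Y-mX)^2=a^2m^2+b^2$, which rearranges to $(a^2-X^2)m^2+2XYm+(b^2-Y^2)=0$, a quadratic whose roots are the slopes $m_1,m_2$ of the two tangents through $(X,Y)$. If the tangents are perpendicular, then $m_1m_2=\frac{b^2-Y^2}{a^2-X^2}=-1$, giving $X^2+Y^2=a^2+b^2$: a circle centred at the origin (the director circle).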
January 15th 2010, 03:38 AM #2
Dec 2008
|
{"url":"http://mathhelpforum.com/differential-geometry/123890-conic-questions.html","timestamp":"2014-04-23T20:59:53Z","content_type":null,"content_length":"31895","record_id":"<urn:uuid:b04452e6-2dd0-4127-bf37-d34d89172aae>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
|
lebesgue measurable set construction
January 6th 2010, 03:37 AM #1
lebesgue measurable set construction
Construct a Lebesgue measurable set $S$ such that for any nonempty interval $I$, $0< m(S\cap I)< m(I)$,
where $m$ is the Lebesgue measure on the real line.
Is it possible that S has finite lebesgue measure?
.>-<. Any1 help me please.
Open to any solution or idea.
January 16th 2010, 02:39 AM #2
|
{"url":"http://mathhelpforum.com/differential-geometry/122615-lebesgue-measurable-set-construction.html","timestamp":"2014-04-18T01:57:07Z","content_type":null,"content_length":"32432","record_id":"<urn:uuid:d33425d5-9451-4a7f-8576-1dda4be88374>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: New versions of -punaf-, -regpar-, -margprev- and -marglmean- on SSC
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: New versions of -punaf-, -regpar-, -margprev- and -marglmean- on SSC
From "Roger B. Newson" <r.newson@imperial.ac.uk>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: New versions of -punaf-, -regpar-, -margprev- and -marglmean- on SSC
Date Mon, 18 Jun 2012 12:28:31 +0100
Thanks yet again to Kit Baum, new versions of the packages -punaf-, -regpar-, -margprev- and -marglmean- are now available for download from SSC. In Stata, use the -ssc- command to do this, or
-adoupdate- if you already have old versions of these packages.
The -punaf-, -regpar-, -margprev- and -marglmean- packages are described as below on my website, and estimate population attributable fractions, population attributable risks, marginal prevalences
and marginal means, respectively, after estimation commands whose predicted values are conditional means or prevalences, using normalizing and variance-stabilizing transformations to derive the
confidence intervals. The new versions have an added -predict()- option, corresponding to the option of the same name for -margins- (which is used by these packages together with -nlcom-). This
-predict()- option allows the user to use the packages after multi-equation commands such as -mlogit-. For instance, in the -sysdsn1- data, the user might use -regpar- to estimate the decrease in
prevalence of uninsured status that might be expected in a fantasy scenario where all subjects were 50 years old but all other covariates stayed the same, as follows:
webuse sysdsn1, clear
mlogit insure age male nonwhite i.site
regpar, at(age==50) predict(outcome(3))
Note that the -punafcc- package, the other member of this suite of packages, has not been updated with a -predict()- option. This is because -punafcc- uses -margins- with the -expression()- option,
which is mutually exclusive with the -predict()- option. I cannot think of an instance where a -predict()- option would be useful with -punafcc-, which is designed for use with case-control or
survival data.
Best wishes
Roger B Newson BSc MSc DPhil
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton Campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: r.newson@imperial.ac.uk
Web page: http://www.imperial.ac.uk/nhli/r.newson/
Departmental Web page:
Opinions expressed are those of the author, not of the institution.
package punaf from http://fmwww.bc.edu/RePEc/bocode/p
'PUNAF': module to compute population attributable fractions for cohort studies
punaf calculates confidence intervals for population
attributable fractions, and also for scenario means and
their ratio, known as the population unattributable
fraction. punaf can be used after an estimation command
whose predicted values are interpreted as conditional
arithmetic means, such as logit, logistic, poisson, or glm.
It estimates the logs of two scenario means, the baseline
scenario ("Scenario 0") and a fantasy scenario ("Scenario
1"), in which one or more exposure variables are assumed to
be set to particular values (typically zero), and any other
predictor variables in the model are assumed to remain the
same. It also estimates the log of the ratio of the Scenario 1
mean to the Scenario 0 mean. This ratio is known as the
population unattributable fraction, and is subtracted from 1 to
derive the population attributable fraction, defined as the
proportion of the mean of the outcome variable attributable to
living in Scenario 0 instead of Scenario 1.
KW: confidence intervals
KW: population attributable fractions
Requires: Stata version 12
Distribution-Date: 20120618
Author: Roger Newson, National Heart and Lung Institute at Imperial College London
Support: email r.newson@imperial.ac.uk
INSTALLATION FILES (click here to install)
(click here to return to the previous screen)
package regpar from http://www.imperial.ac.uk/nhli/r.newson/stata12
regpar: Population attributable risks from binary regression models
regpar calculates confidence intervals for population attributable
risks, and also for scenario proportions. regpar can be used after
an estimation command whose predicted values are interpreted as
conditional proportions, such as logit, logistic, probit, or glm.
It estimates two scenario proportions, a baseline scenario
("Scenario 0") and a fantasy scenario ("Scenario 1"), in which one
or more exposure variables are assumed to be set to particular
values (typically zero), and any other predictor variables in the
model are assumed to remain the same. It also estimates the
difference between the Scenario 0 proportion and the Scenario 1
proportion. This difference is known as the population
attributable risk (PAR), and represents the amount of risk
attributable to living in Scenario 0 instead of Scenario 1.
Author: Roger Newson
Distribution-Date: 03june2012
Stata-Version: 12
INSTALLATION FILES (click here to install)
(click here to return to the previous screen)
package margprev from http://www.imperial.ac.uk/nhli/r.newson/stata12
margprev: Marginal prevalences from binary regression models
margprev calculates confidence intervals for marginal
prevalences, also known as scenario proportions. margprev can be
used after an estimation command whose predicted values are
interpreted as conditional proportions, such as logit, logistic,
probit, or glm. It estimates a marginal prevalence for a
scenario ("Scenario 1"), in which one or more predictor variables
may be assumed to be set to particular values, and any other
predictor variables in the model are assumed to remain the same.
Author: Roger Newson
Distribution-Date: 03june2012
Stata-Version: 12
INSTALLATION FILES (click here to install)
(click here to return to the previous screen)
package marglmean from http://www.imperial.ac.uk/nhli/r.newson/stata12
marglmean: Marginal log means from regression models
marglmean calculates symmetric confidence intervals for log
marginal means (also known as log scenario means), and
asymmetric confidence intervals for the marginal means
themselves. marglmean can be used after an estimation
command whose predicted values are interpreted as positive
conditional arithmetic means of non-negative-valued outcome
variables, such as logit, logistic, probit, poisson, or glm
with most non-Normal distributional families. It can
estimate a marginal mean for a scenario ("Scenario 1"), in
which one or more exposure variables may be assumed to be
set to particular values, and any other predictor variables
in the model are assumed to remain the same.
Author: Roger Newson
Distribution-Date: 03june2012
Stata-Version: 12
INSTALLATION FILES (click here to install)
(click here to return to the previous screen)
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-06/msg00839.html","timestamp":"2014-04-17T15:54:33Z","content_type":null,"content_length":"16853","record_id":"<urn:uuid:4a6582cc-e635-47f7-9a94-6c3f70053465>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: May 2008 [00741]
[Date Index] [Thread Index] [Author Index]
Re: How to plot a graph, using a distance matrix
• To: mathgroup at smc.vnet.net
• Subject: [mg89034] Re: [mg88993] How to plot a graph, using a distance matrix
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Sat, 24 May 2008 03:53:50 -0400 (EDT)
• References: <200805230707.DAA25775@smc.vnet.net>
Eric Heiman wrote:
> My dilemna is as such:
> I have a matrix (it happens to be 21x21, but don't worry about that) which contains distances between points.
> So column 1 has distances from point 1 to every other point, with row 1 being 0 (because the distance to itself is zero).
> What I am wondering is how I would be able to get mathematica to plot a graph of these points.
> Thanks in advance!
You can use ListPlot, once you have good placement of the points. To get
this you might proceed as follows.
Put your first point at the origin, and your second point on the
positive x axis, with coordinate specified by mat[[1,2]] (call the
coordinates {x2,0} for use below. That is, x2=mat[[1,2]]). Give your
third point the positive y3 solution to the two equations given by
Norm[{x3,y3}]^2 - mat[[1,3]]^2 == 0
Norm[{x3,y3}-{x2,0}]^2 - mat[[2,3]]^2 == 0
You now have located three points. For subsequent points 3<j<=21 you
would do similarly, in first solving pairs of quadratics
Norm[{xj,yj}]^2 - mat[[1,j]]^2 == 0
Norm[{xj,yj}-{x2,0}]^2 - mat[[2,j]]^2 == 0
Of the two solutions, take the one that comes closest to making the
distances to already placed points correct.
If you require really careful layout you might use the point coordinates
as found above as initial values for a least squares optimization. Once
you place those first two points, your other distances give an
overdetermined system of equations (171, if I am counting correctly) in
38 unknowns.
Daniel Lichtblau
Wolfram Research
• References:
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/May/msg00741.html","timestamp":"2014-04-18T08:30:56Z","content_type":null,"content_length":"26967","record_id":"<urn:uuid:e9fcca0e-4902-4019-9f18-408290b24d49>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Decimals are a method of writing fractional numbers without writing a fraction having a numerator and denominator.
The fraction 37/100 could be written as the decimal 0.37. The period or decimal point indicates that this is a decimal.
The decimal 0.37 could be pronounced as THIRTY-SEVEN HUNDREDTHS or as ZERO POINT THREE SEVEN or ZERO POINT THIRTY-SEVEN.
|
{"url":"http://www.aaamath.com/g2_37bx2.htm","timestamp":"2014-04-19T22:07:57Z","content_type":null,"content_length":"5714","record_id":"<urn:uuid:c6eabbd1-0ad1-4293-9cb7-545298d4ede4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Antenna Characteristics
Antenna Gain
Independent of the use of a given antenna for transmitting or receiving, an important characteristic of this antenna is the gain. Some antennas are highly directional; that is, more energy is
propagated in certain directions than in others. The ratio between the amount of energy propagated in these directions compared to the energy that would be propagated if the antenna were not
directional (Isotropic Radiation) is known as its gain. When a transmitting antenna with a certain gain is used as a receiving antenna, it will also have the same gain for receiving.
Figure 1: Antenna pattern in a polar-coordinate graph
Antenna Pattern
Most radiators emit (radiate) stronger radiation in one direction than in another. A radiator such as this is referred to as anisotropic. However, a standard method allows the positions around a
source to be marked so that one radiation pattern can easily be compared with another.
The energy radiated from an antenna forms a field having a definite radiation pattern. A radiation pattern is a way of plotting the radiated energy from an antenna. This energy is measured at various
angles at a constant distance from the antenna. The shape of this pattern depends on the type of antenna used.
To plot this pattern, two different types of graphs, rectangular-and polar-coordinate graphs are used. The polar-coordinated graph has proved to be of great use in studying radiation patterns. In the
polar-coordinate graph, points are located by projection along a rotating axis (radius) to an intersection with one of several concentric, equally-spaced circles. The polar-coordinate graph of the
measured radiation is shown in Figure 1.
The main beam (or main lobe) is the region around the direction of maximum radiation (usually the region that is within 3 dB of the peak of the main beam). The main beam in Figure 1 is northbound.
The sidelobes are smaller beams away from the main beam. These sidelobes usually represent radiation in undesired directions which can never be completely eliminated. The sidelobe level (or sidelobe ratio) is an important parameter used to characterize radiation patterns: it is the maximum value of the sidelobes away from the main beam and is expressed in decibels. One sidelobe is called the backlobe; this is the portion of the radiation pattern directed opposite the main beam.
Figure 2: The same antenna pattern in a rectangular-coordinate graph
The following graph shows the rectangular-coordinate plot for the same source. In the rectangular-coordinate graph, points are located by projection from a pair of stationary, perpendicular
axes. The horizontal axis on the rectangular-coordinate graph corresponds to the circles on the polar-coordinate graph. The vertical axis on the rectangular-coordinate graph corresponds to the
rotating axis (radius) on the polar-coordinate graph. The measurement scales in the graphs can have linear as well as logarithmic steps.
For the analysis of an antenna pattern the following simplifications are used:
Beam Width
The angular range of the antenna pattern in which at least half of the maximum power is still emitted is described as the "beam width". The bordering points of this major lobe are therefore the points at which the field strength has fallen by 3 dB relative to the maximum field strength. This angle is then described as the beam width, aperture angle, or half-power (−3 dB) angle, with notation Θ (also φ). The beamwidth Θ is exactly the angle between the two red-marked directions in the figures above. The angle Θ can be determined in the horizontal plane (notation Θ[AZ]) as well as in the vertical plane (notation Θ[EL]).
Major and Side Lobes (Minor Lobes)
The pattern shown in the upper figures has radiation concentrated in several lobes. The radiation intensity in one lobe is considerably stronger than in the other. The strongest lobe is called major
lobe; the others are (minor) side lobes. Since the complex radiation patterns associated with arrays frequently contain several lobes of varying intensity, you should learn to use appropriate
terminology. In general, major lobes are those in which the greatest amount of radiation occurs. Side or minor lobes are those in which the radiation intensity is least.
Front-to-Back Ratio
The front-to-back ratio of an antenna is the proportion of energy radiated in the principal direction of radiation to the energy radiated in the opposite direction. A high front-to-back ratio is
desirable because this means that a minimum amount of energy is radiated in the undesired direction.
Figure 3: The antenna aperture is a section of a spherical surface
The effective aperture of an antenna A[e] is the area presented to the radiated or received signal. It is a key parameter, which governs the performance of the antenna. The gain is related to the
effective area by the following relationship:
G = 4π · A[e] / λ²;   A[e] = K[a] · A        (1)

where: λ = wavelength
       A[e] = effective antenna aperture
       A = physical area of the antenna
       K[a] = antenna aperture efficiency
The aperture efficiency depends on the distribution of the illumination across the aperture. If this is linear then K[a]= 1. This high efficiency is offset by the relatively high level of sidelobes
obtained with linear illumination. Therefore, antennas with more practical levels of sidelobes have an antenna aperture efficiency less than one (A[e]< A).
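A quick numeric check of equation (1) (a sketch; the values of A, K[a], and λ below are illustrative assumptions, not taken from the text):
import math

A = 1.0      # physical aperture area in m^2 (assumed)
Ka = 0.6     # aperture efficiency for a practical illumination (assumed)
lam = 0.03   # wavelength in m, i.e. 10 GHz (assumed)

Ae = Ka * A                      # effective aperture, equation (1)
G = 4 * math.pi * Ae / lam**2    # linear gain
print(round(G), round(10 * math.log10(G), 1))  # about 8378, i.e. 39.2 dB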
|
{"url":"http://www.radartutorial.eu/06.antennas/an05.en.html","timestamp":"2014-04-18T05:29:48Z","content_type":null,"content_length":"27842","record_id":"<urn:uuid:6c845266-dc91-4442-bb5a-492d409a22eb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ndgrid (MATLAB Function Reference)
MATLAB Function Reference Search  Help Desk
Generate arrays for multidimensional functions and interpolation
[X1,X2,X3,...] = ndgrid(x1,x2,x3,...)
[X1,X2,...] = ndgrid(x)
[X1,X2,X3,...] = ndgrid(x1,x2,x3,...) transforms the domain specified by vectors x1,x2,x3... into arrays X1,X2,X3... that can be used for the evaluation of functions of multiple variables and
multidimensional interpolation. The ith dimension of the output array Xi are copies of elements of the vector xi. [X1,X2,...] = ndgrid(x) is the same as [X1,X2,...] = ndgrid(x,x,...).
Evaluate the function z = x1*exp(-x1^2 - x2^2) over the range -2 ≤ x1 ≤ 2, -2 ≤ x2 ≤ 2:
[X1,X2] = ndgrid(-2:.2:2, -2:.2:2);
Z = X1 .* exp(-X1.^2 - X2.^2);
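To view the resulting surface, one might add the following line (not part of the original example; assumes a graphics-capable MATLAB session):
mesh(X1,X2,Z)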
The ndgrid function is like meshgrid except that the order of the first two input arguments are switched. That is, the statement
[X1,X2,X3] = ndgrid(x1,x2,x3)
produces the same result as
[X2,X1,X3] = meshgrid(x2,x1,x3).
Because of this, ndgrid is better suited to multidimensional problems that aren't spatially based, while meshgrid is better suited to problems in two- or three-dimensional Cartesian space.
See Also
meshgrid, interpn
[ Previous | Help Desk | Next ]
|
{"url":"http://dali.feld.cvut.cz/ucebna/matlab/techdoc/ref/ndgrid.html","timestamp":"2014-04-21T02:01:36Z","content_type":null,"content_length":"4858","record_id":"<urn:uuid:a1e67b4c-963a-4113-b263-304c1d8e0eb7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
The entire function
$\phi(x)=\int_{-\infty}^{\infty}\exp\left(-w^{2m}+ixw\right)\,dw,\qquad m\in\mathbb{N},$
is studied. It appears in many areas: in Waring’s problem, as a solution of a special form of Turrittin’s differential equation, as a generalization of the Airy function, in questions about analytic hypoellipticity of the tangential Cauchy-Riemann operator, in the representation of the Bergman and Szegő kernels of weakly pseudoconvex domains, and in a connection between Brownian motion and a generalized heat equation. First the asymptotic behavior of $\phi$ at infinity is considered, then the asymptotic expansion of $\phi$ is computed. It is also shown that $\phi$ can be approximated by the Bessel function. In the final part of the paper the properties of the zeroes of $\phi$ are investigated. It is added as a note that meanwhile the conjecture that all zeroes of $\phi$ are simple has been verified.
30D10 Representations of entire functions by series and integrals
41A60 Asymptotic approximations, asymptotic expansions (steepest descent, etc.)
30E15 Asymptotic representations in the complex domain
34E20 Asymptotic singular perturbations, turning point theory, WKB methods (ODE)
32W05 $\overline{\partial }$ and $\overline{\partial }$-Neumann operators
30C40 Kernel functions and applications (one complex variable)
|
{"url":"http://zbmath.org/?q=an:0902.30022","timestamp":"2014-04-21T12:11:30Z","content_type":null,"content_length":"23136","record_id":"<urn:uuid:3c8b3c57-7419-4c7e-ae4d-628cc67e482f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
December 6th 2008, 09:08 PM #1
Junior Member
Sep 2008
1. In triangle ABC ,angle c=90 degree and ABPQ is a square on AB.If PN is perpendicular to AC from P, prove that PN=AC + CB.
Thank You.
There are assumptions on the orientation of the figure, because there are situations where it doesn't work.
Okay, look at the sketch. M is the perpendicular to BC from P.
CMPN is a rectangle. So in order to prove that PN=AC+CB, it is sufficient and necessary to prove that BM=AC.
And this is easy if you consider triangles BMP and ABC.
□ They both have a right angle.
□ Angle ABP is 90° since ABPQ is a square. Hence $\angle PBM=90-\angle ABC=\angle CAB$.
Therefore, the triangles BMP and ABC are similar.
But we know that measures BA and BP are equal since ABPQ is a square. Thus triangles BMP and ABC are congruent.
And we can conclude : BM=AC.
---------> PN=MC=BC+BM=BC+AC ...... $\square$
December 7th 2008, 02:00 AM #2
|
{"url":"http://mathhelpforum.com/geometry/63694-proof.html","timestamp":"2014-04-17T20:04:48Z","content_type":null,"content_length":"34111","record_id":"<urn:uuid:266275d4-8aa4-4c49-b1da-10c815a9f2fc>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hallandale Prealgebra Tutor
Find a Hallandale Prealgebra Tutor
...I also was a curriculum coach for the same public school district in Pittsburgh, Pennsylvania. In that position I demonstrated lessons for K-5 math teachers, wrote and presented professional
development for teachers, did workshops for parents,and wrote curriculum. Following my retirement from t...
7 Subjects: including prealgebra, reading, geometry, algebra 1
I am a tutor for math and computer programming. I am a professional computer programmer with hands on experience on software development and applications of mathematics in business and industry.
Part of my work is to explain very complex problems in very simple terms so that users can be productive in their work.I have an associates degree in Computer Programming.
14 Subjects: including prealgebra, Spanish, statistics, geometry
...It is my goal to help students master those foundational skills of the elementary curriculum so that that they can apply those key, basic principles to every new challenge they meet as they
progress as young scholars. I actively engage my students and encourage parents to be an active participan...
17 Subjects: including prealgebra, reading, writing, English
...However, while my specialization was in social science, (history, government, economics), I am certified to teach middle grades science (life science, Earth science, and physical science), as
well as being certified to teach high school biology. Additionally, I have a middle grades and secondary...
30 Subjects: including prealgebra, Spanish, English, elementary (k-6th)
My name is Louis, I am finishing my degree in education at FIU. I have a year of math tutoring experience thanks to City Year, where I served as an algebra I tutor and teacher assistant. Seven of
my students received a level 5 on their End Of Course exams and 60% received a passing score which was a double digit increase from the previous year.
3 Subjects: including prealgebra, algebra 1, elementary math
|
{"url":"http://www.purplemath.com/Hallandale_Prealgebra_tutors.php","timestamp":"2014-04-21T15:19:33Z","content_type":null,"content_length":"24339","record_id":"<urn:uuid:ecf49ad1-c0a2-483d-ac06-aa1a95269551>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When do the Cars Meet?
Date: 01/23/97 at 12:55:34
From: Jennifer
Subject: Math
Write an equation that describes this information:
Two cars, an Edsel and a Studebaker, are 635 kilometers apart. They
start at the same time and drive toward each other. The Edsel travels
at a rate of 70 kilometers per hour and the Studebaker travels 57
kilometers per hour. In how many hours will the two cars meet?
I'm stuck on the whole thing. I don't understand this kind of
Date: 01/23/97 at 14:52:10
From: Doctor Wilkinson
Subject: Re: Math
I suppose you know the basic formula for doing this kind of problem
(D is distance, R is rate, T is time):
D = RT
I'll give you two ways to look at this problem. You have two rates
given. The rate of the Edsel is 70 and the rate of the Studebaker
is 57. So the distance traveled by the Edsel in time T is 70T and
the distance traveled by the Studebaker in time T is 57T.
Now the question is, when do they meet? The other piece of
information is that they started 635 kilometers apart. So they're
going to meet when the distance traveled by the Edsel and the distance
traveled by the Studebaker adds up to 635. This gives us an equation
which you can solve for T:
70T + 57T = 635
Another way of looking at it is to just look at the distance between
the two cars. It's decreasing because the cars are driving towards
each other. How fast is it decreasing? The rate of decrease is the
sum of the speeds of the two cars. So we get the following equation
for the time it takes to reduce the distance from 635 to 0:
(57 + 70)T = 635
-Doctor Wilkinson, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 01/23/97 at 20:43:39
From: Doctor Wallace
Subject: Re: Math
Hi Jennifer!
I bet you're surprised to get two answers to this one. I originally
composed this one, but it got munched by my computer, and before I
could send you a new one, my colleague Dr. Wilkinson sent you his
thoughts on the problem. I hope my original answer helps you out too.
Here goes:
The first thing I do when faced with any math problem is to draw a
diagram. Here is the diagram I drew for this problem:
E -> 70km/h 57km/h <- S
635 km
The E stands for Edsel, and the S for Studebaker. I listed the speeds
of the cars, and their direction of travel. The line represents their
path, which is 635 km long We need to understand exactly what is
happening in this problem. Pretend we have a stopwatch which reads 0.
When we press the button to start the stopwatch, the cars will begin
moving toward each other. At some point, they have to meet, and we
will then press the button to stop the stopwatch. What we want to
find in this problem is what time the stopwatch reads when we stop it.
In other words, how long does it take for the two cars to meet?
A helpful relation for any problem of this type (where you have
something like a car moving at a constant speed) is:
D = R x T
Distance = Rate x Time
For example, if a car is traveling at 60 km per hour for 2 hours, it
will have traveled 60 x 2 or 120 km at the end of the 2 hours. (The
rate is 60 and the time is 2.)
Now how do we solve your problem? Well, there are two ways to go
about it. One way uses algebra, and the other doesn't. Since I don't
know how old you are, or what grade you're in, or whether you know
algebra, I'll show you both ways. If you're not in algebra, you can
save the other way until you get there.
Way No. 1: Make a table (without algebra)
Since your problem seems to hint that the answer will be a nice, round
number of hours, a table seems like a good way to solve the problem.
It also helps in understanding. We'll make a table of the distance
values of each car after various hours of time. We'll start with time
zero as our first entry. At time zero, the cars haven't gone
anywhere, so their distance is, of course, zero. Notice that,
throughout the whole table, the rate of each car stays the same.
This is because the cars do not change speed during the trip.
Here are the first three entries for our table:
                    Edsel             Studebaker
Time (hours)     Rate  Distance     Rate  Distance
     0            70       0         57       0
     1            70      70         57      57
     2            70     140         57     114
Do you see how we get each entry in the table? We just multiply the
rate by the time.
Now, the big question: How do we know when the two cars meet? Well,
let's take a look at our diagram again. (You can plot the cars'
progress on the line if it helps you.)
Pick a place on the line where you think the cars will meet. It
doesn't matter where. Call it point A. Now, the Edsel will have
traveled from the left to point A, and the Studebaker from the right
to point A. Notice that wherever you put point A, if we add up the
distance that both cars have traveled, we always get 635 km. This is
very important. When the cars finally do meet, together they must
have traveled the length of the whole path. Individually, each car
will only have traveled part of the whole path. But if we add them
up, it must equal the whole 635 km. Okay?
Now why is this important? Well, it means that all we have to do to
find out whether or not the cars have met is to add up their distances
in our table. Have they met after 1 hour? Well, after 1 hour, the
Edsel has gone 70 km and the Studebaker 57 km. 70 + 57 = 127 km, so
they have not met after 1 hour. How about after 2 hours? Well, after
2 hours, the Edsel has gone 140 km and the Studebaker 114 km.
140 + 114 = 254 km, so they have not met after 2 hours either. You
can finish the table and check each hour and you'll easily find out
when the two cars meet.
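A minimal sketch of that table-building process in Python (an added illustration, not part of the original answer):

    # Extend the table one hour at a time until the distances sum to 635 km
    t = 0
    while 70 * t + 57 * t < 635:
        t += 1
        print(t, 70 * t, 57 * t, 70 * t + 57 * t)
    # the loop stops at t = 5, when 350 + 285 = 635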
Way No. 2: Algebra
To solve any algebra word problem, you need three things. First, you
have to understand the relations between the elements in the problem.
Second, you have to be able to translate those relations into
equations using symbols. Third, you need to have the skills to
solve those equations. Let's look at how these apply to this problem.
1. Relations
We've already seen the two important relations for this problem.
The first is that Distance = Rate times Time. The second is that the
total distance traveled by both cars together once they meet is 635km.
2. Equations
This part is harder. We have to choose some variables. First, we'll
choose one for the quantity we're looking for. This is the time at
which the two cars meet. We'll call this t. What are some other
things we don't know? Well, when the two cars meet, we don't know how
far each has traveled individually. So we'll call the total distance
the Edsel travels x, and the total distance the Studebaker travels y.
So now we can use our relation D = R x T and write:
x = 70t and y = 57t
Why is this true? Well, after t hours, each car will have traveled a
distance equal to its rate times the time spent traveling. Now, we
have 3 variables here, and we can't solve our equations until we only
have 1 variable. So we have to rewrite our equations in terms of only
one variable, the one we want to solve for; in this case, t, the time.
We do that by using our other relation.
Our other relation is that the total distance traveled by both
cars is 635 km. Because x and y are the total distances traveled by
each car, the total distance traveled by both cars expressed in terms
of our variables is:
x + y = 635
Now notice that we already have two expressions for x and y. In other
words, we already know what x and y are. As above, x is 70t and y is
57t. So we can substitute these into x + y = 635 and we get:
70t + 57t = 635
Now we have an equation in one variable!
3. Solve
Now all we have to do is solve this equation and we'll get t, which
will be the time that passed before both cars met. I'll leave that
to you. It works out to the same answer as using the table.
After you become skilled at algebra, you'll want to use this method
instead of constructing a table. This is because your table entries
have to get more and more "fine-tuned" if the answer isn't a round
number. The cars could meet after, say 2.776 hours. It would take a
lot longer to "guess and check" using table entries to get that
precise. The algebra method would give you this precision with less work.
Well, that was probably a longer answer than you were expecting, but I
hope it helps you get an understanding of this and problems like it.
If you have any more questions, or need further help, please feel free
to write back!
-Doctor Wallace, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Math Education Researchers in Demand
When Ann Ryu Edwards entered the job market in 2006, she knew she could be picky. Many newly minted academics send dozens of applications with the hope of getting just one position, but Edwards's
field, mathematics education, isn't like that. In fact, she applied to just four universities and got four interviews. Two--including the University of Maryland, College Park, where she is now an
assistant professor--offered her a job.
The academic job market in mathematics education has been on fire for years, thanks in part to high retirement rates, says Robert Reys, a mathematics educator at the University of Missouri, Columbia.
According to his 2008 report in Notices of the American Mathematical Society, universities advertised 128 math-education positions in the fall of 2007, nearly all tenure-track. About 40% went
unfilled, often for lack of qualified candidates. Mathematics-education researchers also find an open, dynamic job market outside of universities--in government, nonprofits, and commercial outfits.
Mathematics-education research demands a rare combination of interests and skills, says Alan Schoenfeld, a mathematics education researcher at the University of California, Berkeley. Mathematics is
important, of course; but math-education researchers also use social-science tools to study teacher behavior, student reasoning, educational equity, technology, and other topics. "A lot of people who
are brought up in the mathematics and the sciences tend to think the social stuff is soft and therefore not as intellectually interesting or rigorous," says Schoenfeld, who holds a Ph.D. in
mathematics. But "math ed., properly done, is actually more challenging than mathematics, and that's because simple systems sit still and people don't."
"What we want in the long run is someone who blends two important things," Schoenfeld says. "One is a deep understanding of the mathematics, and a second is a deep understanding of thinking and
Two academic homes
That deep understanding can come in part from years of teaching experience. Dana Cox, 33, was a mathematics major at Hope College in Holland, Michigan, and then taught mathematics in Michigan for 7
years, primarily to seventh-graders, often collaborating with local university researchers.
In 2004, the brand-new Center for the Study of Mathematics Curriculum--one of several National Science Foundation–funded Centers for Teaching and Learning--invited her to apply as a doctoral student.
It was a difficult decision, because she had to give up her comfortable salary, tenure, and some retirement investments. Many teachers are unable to make that sacrifice, she says, which is one
possible reason for the dearth of math-education Ph.D.s.
She accepted the offer, ultimately earning a doctorate in mathematics education--with the equivalent of a master's degree in mathematics--from Western Michigan University in Kalamazoo, one of the
center's three collaborating universities. For her dissertation, she interviewed middle-school students as they solved geometry problems, in order to learn how teachers can build on children's
intuitive understanding.
Upon graduating, Cox had the option of applying for positions in mathematics or education departments. Reys's study shows that math-education positions are almost evenly split between them--but
mathematics-department positions often require advanced mathematics training, which Cox had. She also felt that "a culture of mathematics fit my personality better." She interviewed for two
mathematics-department positions, was offered both, and chose Miami University in Oxford, Ohio. "I just knew that this was home," she says.
Cox says there can be cultural differences between mathematics-education researchers and their research mathematician colleagues. For example, mathematicians are expected to publish their best
research alone and at a young age, whereas mathematics-education researchers tend to collaborate and get better with time. This can become a problem in tenure reviews, but so far, Cox says, she's not
worried. "Right now, all I can really focus on is making sure that I'm doing the best job in my field. And later on, I'll work on making the case to other people outside my field."
Jill Newton was Cox's graduate classmate at the Center for the Study of Mathematics Curriculum. She also looked for jobs last year. Unlike Cox, she had not studied graduate-level mathematics, and she
preferred to teach education courses, so she applied to education departments.
A mathematics major at Michigan State University in East Lansing, Newton started her career by teaching mathematics and science in Papua New Guinea in the Peace Corps. She followed that with a
master's degree in international education from George Washington University in Washington, D.C., graduating in 1995, and then spent the next several years teaching mathematics and science abroad and
in the United States.
By 2004, when she decided to move closer to her aging parents, Newton was ready for a change, and someone suggested that she consider mathematics-education research. "I never even imagined that you
could get a Ph.D. in mathematics education," she says. "It sounded perfect." She returned to Michigan State University, another of the center's three collaborating institutions. For her dissertation,
she focused on curriculum research: analyzing textbooks and observing how curricula play out in classrooms. Ultimately, she accepted a position at Purdue University in West Lafayette, Indiana, which
allows her to do top-level research, teach, and remain close to her parents.
Both Cox and Newton say that their work keeps them close to schools--for example, doing research in classrooms or running professional-development workshops for teachers. And although research can
seem abstract compared with the practical work of teaching, Schoenfeld says the theoretical foundations that academics lay eventually make it into classrooms and curricula. "The ideas of the basic
research of the 1970s and '80s played out in the curricula in the 1990s and the first decade of the 21st century," he says.
"Tremendous freedom"
Some say one way to make that happen faster is to work in the private sector. Outside of universities, mathematics-education researchers do work that runs the gamut from research to policy, textbook
writing, and product design. Daniel Scher, 41, a curriculum developer at KCP Technologies, says working in the private sector allows him to "put research into practice in a very tangible way." A
mathematics major with an English minor at the University of Pennsylvania, he found that mathematics education combined his interests. While completing his master's at Cornell University, which he
finished in 1993, he discovered a paper by a researcher at the Education Development Center, an independent nonprofit institute in Newton, Massachusetts. "It seemed like a really neat place to work,"
he says. In 1995, he got hired there to develop curricula.
Soon, he decided he would need a doctorate if he wanted to advance to a leadership position. He went to New York University in New York City, where he researched the effectiveness of The Geometer's
Sketchpad software, which helps students of all ages learn mathematics. After getting his degree in 2002, he worked for a private, three-person research group that brought Russian mathematics
curricula to the United States. Funded by a small group of investors, "it had all the excitement of a small start-up," he says. In 2004, he signed on with KCP Technologies, the developer of The
Geometer's Sketchpad, where his work involves writing grant applications and curricula, as well as "everything from doing professional development, to writing journal articles, to thinking about
updates to Sketchpad, to dealing with the Board of Education in New York," he says. Although KCP Technologies is in Emeryville, California, Scher works remotely from New York City.
Outside of universities, a Ph.D. is not always necessary. Teresa Lara-Meloy, 36, was a good mathematics student growing up in Tehuacán, Mexico, but she never considered a career in mathematics. She
went to Georgetown University in Washington, D.C., as an undergraduate to study international affairs. After graduating, she taught Spanish-speaking adults at a nonprofit in her free time. "Nobody
wanted to teach mathematics or science," she says, "so I ended up doing it and realized there was a dearth of resources" for Spanish speakers. She soon came to see mathematics as a human right, one
that many people are denied. "Calculus is this all-powerful tool to think with," she says, "and most people don't get that ... because the system doesn't help them."
She decided to go to Harvard University to get a master's degree in mathematics education and graduated in 2000. "I don't think I got mathematics until I got to graduate school," she says. She then
took a position at TERC, an education-research organization in Cambridge, Massachusetts. There, she collaborated with teachers and students to help create better mathematics-teaching practices and
tools. It "was an ideal world," she says. "That's when I realized that I could pursue a career" in research outside of universities.
After her project at TERC ended, she taught and researched in Mexico for 2 years before moving back to Massachusetts to the Education Development Center, where she did research on using technology in
education. In early 2007, she started at SRI International, a nonprofit research institute in Menlo Park, California, where she works on projects that include a mathematics curriculum for Girls Inc.
and an after-school program. Although other people in her group have doctorates, Lara-Meloy says, she doesn't feel that there's a limitation on her advancement at SRI. "The years of experience ... do
count for something," she says.
Jeremy Roschelle, a mathematics-education researcher and the director of SRI's Center for Technology in Learning, says his work is similar to that of a university professor except that he doesn't
have teaching responsibilities. His team applies for grants from the National Science Foundation and the Department of Education and attracts school districts and commercial product-developers as
clients. "If you're good at bringing in the funding, you can basically do anything you want," he says. "So it really offers tremendous freedom."
Mixed effect
The economic recession seems to be having a mixed effect on the market for mathematics-education researchers. Hiring freezes at universities mean that more positions will go unfilled--an effect that
young researchers are noticing, Newton says. On the other hand, some American Recovery and Reinvestment Act of 2009 money will go to mathematics-education research, says James Middleton, a
mathematics educator at Arizona State University, Tempe. "I suspect the private and think-tank arenas will probably hire more to write and conduct grants," he wrote in an e-mail.
In the long run, however, there are plenty of opportunities for people who want to address the challenges of educating future generations in mathematics. "If you are interested in policy, there's
room for you. And if you're interested in designing curriculum, there's room for you. And if you're interested in developing better teachers, there's questions there, too," Cox says. "It's such a
broad field right now. It's juicy."
Source: Robert Reys, Robert Glasgow, Dawn Teuscher, and Nevels Nevels, "Doctoral Programs in Mathematics Education in the United States: 2007 Status Report," Notices of the American Mathematical Society 55(10), 1291 (2008).
Chelsea Wald is a freelance writer in New York City.
Consider The Circuit Shown Below. The Switch Has ... | Chegg.com
Please answer the question below! Will rate..
Image text transcribed for accessibility: Consider the circuit shown below. The switch has been closed a long time when it is suddenly opened at t=0. Draw the equivalent circuit and find the initial
current i and the voltage across the capacitor (v) just before t=0. This is the initial condition for part b. Note that once the switch is opened, the 10V source is disconnected from the circuit.
Draw the equivalent circuit then find plot i(t) for t>0 from the solution to the 2nd order equation. Plot the current.
Electrical Engineering
Exams Page (Midterm and Final)
SOME GENERIC INFORMATION. Please read carefully.
• All our exams (Midterm and Final) are closed book, with 'cheat-sheets' as explained next.
□ For midterm, you are allowed to bring one 'cheat-sheet'. This is a 8" by 11" sheet (2-sided) of notes. There is no restriction on what you put on this sheet, or how you prepare it (printed or
hand-written, as small a font as you like).
□ For the final, you are allowed two cheat-sheets. Some students like to re-use the midterm cheat-sheet, and prepare a second cheat-sheet as complement. But you may also prepare two new ones.
• During exams, you will be given 'blue books' (standard university-supplied) for writing your answers. We want you to ONLY write on the RIGHT-HAND SIDE of each double-page on these blue
books. You are free to use the LEFT-HAND SIDE for your scratch work (this is strongly encouraged).
• In general, the exam will include all material until the last lecture before the exam. Emphasis will be put on contents of lectures, recitations and homework. But you will also be responsible for
material on the reading list, even if we did not cover them in lectures, recitations or homework.
• Please read the above Generic Information.
• This is an in-class exam, during recitation period.
• Note the new date: Thursday Oct 13.
• You are not allowed to use calculators.
• Sample midterms? Get last semester's midterm, plus an older one, both under a Subdirectory of Homework Directory.
• Extra Office Hours on week of Midterm:
Tuesday, 3-5. Wednesday 4-6.
• WHAT IS COVERED:
See the generic rules above. But for Lecture III, we only cover (a,b) trees up to the extent that I lectured in class (roughly, up to and including page 45 of Lecture III).
• SOME SPECIAL HINTS:
Study the homework questions and its solutions.
Be sure you know how to do hand simulation of the algorithms for BST, AVL trees.
• SOME TYPOS in sample midterms (note that there are 2 samples):
** Midterm Spring 2011. Q6, line 7:
"t(n) = theta(100^n * root(log n))" should be "t(n) = theta(100^n * root(n))"
** Midterm Spring 2011. Solution to Q7(c):
At line 5, "...by its predecessor, not predecessor" is clearly wrong. It should say "...by its predecessor or successor". In other words, whichever convention you pick, some of the keys in the
set {3,8,19} will not have two rebalancing acts.
• I expect the final exam to be the first Tuesday of Final Exam Week, i.e., Dec 20.
A. Existing coarse-graining techniques
B. The freely jointed chain model
A. The wavelet transform
B. Using wavelets to construct a coarse-grained model
C. “Convergence” of on-lattice and off-lattice coarse-grained computations
A. Hierarchical simulation approach
B. Coarse-grained simulation algorithm
C. Probability distributions for coarse-grained internal coordinates
1. Bond-length distribution
2. Bond-angle distribution
3. Torsion-angle distribution
heaps.pl -- heaps/priority queues
Heaps are data structures that return the entries inserted into them in an ordered fashion, based on a priority. This makes them the data structure of choice for implementing priority queues, a
central element of algorithms such as best-first/A* search and Kruskal's minimum-spanning-tree algorithm.
This module implements min-heaps, meaning that items are retrieved in ascending order of key/priority. It was designed to be compatible with the SICStus Prolog library module of the same name.
merge_heaps/3 and singleton_heap/3 are SWI-specific extension. The portray_heap/1 predicate is not implemented.
Although the data items can be arbitrary Prolog data, keys/priorities must be ordered by @=</2. Be careful when using variables as keys, since binding them in between heap operations may change the ordering.
The current version implements pairing heaps. All operations can be performed in at most O(lg n) amortized time, except for delete_from_heap/4, heap_to_list/2, is_heap/1 and list_to_heap/2.
(The actual time complexity of pairing heaps is complicated and not yet determined conclusively; see, e.g. S. Pettie (2005), Towards a final analysis of pairing heaps, Proc. FOCS'05.)
Author: Lars Buitinck
To be done: the "decrease key" operation is not implemented.
add_to_heap/4: Adds Key with priority Priority to Heap0, constructing a new heap in Heap.
delete_from_heap/4: Deletes Key from Heap0, leaving its priority in Priority and the resulting data structure in Heap. Fails if Key is not found in Heap0. (This predicate is extremely inefficient and exists only for SICStus compatibility.)
empty_heap/1: True if Heap is an empty heap.
singleton_heap/3: True if Heap is a heap with the single element Priority-Key.
get_from_heap/4: Retrieves the minimum-priority pair Priority-Key from Heap0. Heap is Heap0 with that pair removed.
heap_size/2: Determines the number of elements in Heap.
heap_to_list/2: Constructs a list List of Priority-Element terms, ordered by (ascending) priority.
is_heap/1: Returns true if X is a heap. (May return false positives.)
list_to_heap/2: If List is a list of Priority-Element terms, constructs a heap out of List.
min_of_heap/3: Unifies Key with the minimum-priority element of Heap and Priority with its priority value.
min_of_heap/5: Gets the two minimum-priority elements from Heap.
merge_heaps/3: Merge the two heaps Heap0 and Heap1 in Heap.
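For readers outside Prolog, the min-heap behaviour described above can be sketched with Python's standard-library heapq (only an analogue of the semantics, not the heaps.pl API itself, and it uses binary rather than pairing heaps):

    import heapq

    heap = []
    for priority, key in [(3, "c"), (1, "a"), (2, "b")]:
        heapq.heappush(heap, (priority, key))

    while heap:
        print(heapq.heappop(heap))  # (1, 'a'), then (2, 'b'), then (3, 'c')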
find the slope of the curve
June 21st 2009, 02:18 PM
The equation sin xy = y defines y implicitly as a function of x.
Find the slope y'(pi/3,1/2) at the point x=pi/3, y=1/2.
i know that y' = (ycos(xy))/(1-xcos(xy))
but i just dont know how to solve the problem when i plug in x and and y to find the slope (Blush)
please help
June 21st 2009, 02:32 PM
y=sin(xy) so $y'=\frac{y\cos{xy}}{1-x\cos{xy}}$ and plug in to get $y'=\frac{\frac{1}{2}\cos{\frac{\pi}{6}}}{1-\frac{\pi}{3}\cos{\frac{\pi}{6}}}$ which is the slope of your function
June 21st 2009, 03:02 PM
The derivative of a function evaluated at a given point is the slope of the line tangent to the graph of the function at that point.
So, You've got a derivative, and you've got a point. Evaluate the derivative at that point and that will give the slope.
i.e. Plug in $\frac{\pi}{3}$ wherever you see an x, and $\frac{1}{2}$ whereever you see a y.
Note* $\cos\frac{\pi}{6}=\frac{\sqrt{3}}{2}$.
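A quick numerical check of that value (added for illustration; the exact fraction above is the real answer):

    import math

    x, y = math.pi / 3, 0.5
    slope = (y * math.cos(x * y)) / (1 - x * math.cos(x * y))
    print(slope)  # approximately 4.65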
Sparse representations
We will now present an example of three basis expansions that yield different levels of sparsity for the same signal. A simple periodic signal is sampled and represented as a periodic train of
weighted impulses (see Figure 1). One can interpret sampling as a basis expansion where the elements of our basis are impulses placed at periodic points along the time axis. We know that in this
case, our dual basis consists of sinc functions used to reconstruct our signal from discrete-time samples. This representation contains many non-zero coefficients, and due to the signal's
periodicity, there are many redundant measurements. Representing the signal in the Fourier basis, on the other hand, requires only two non-zero basis vectors, scaled appropriately at the positive and
negative frequencies (see Figure 1). Driving the number of coefficients needed even lower, we may apply the discrete cosine transform (DCT) to our signal, thereby requiring only a single non-zero
coefficient in our expansion (see Figure 1). The DCT equation is $X_k=\sum_{n=0}^{N-1}x_n\cos\left(\frac{\pi}{N}\left(n+\frac{1}{2}\right)k\right)$ with $k=0,\cdots,N-1$, $x_n$ the input signal, and $N$ the length of the signal.
Figure 1: Cosine signal in three representations: (a) Train of impulses (b) Fourier basis (c) DCT basis
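The single-coefficient claim is easy to verify numerically. A minimal sketch using SciPy's DCT-II (SciPy's unnormalized DCT-II is exactly twice the formula above, which does not change which coefficients are non-zero; the signal length N and frequency index k0 are arbitrary choices for illustration):

    import numpy as np
    from scipy.fft import dct

    N, k0 = 64, 5                            # assumed values for illustration
    n = np.arange(N)
    x = np.cos(np.pi * (n + 0.5) * k0 / N)   # cosine aligned with one DCT-II basis vector

    X = dct(x, type=2)                       # unnormalized DCT-II
    print(np.sum(np.abs(X) > 1e-8))          # 1: a single non-zero coefficient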
Dirac, Paul Adrien Maurice, 1902-84, English physicist. He was educated at the Univ. of Bristol and St. John's College, Cambridge, and became professor of mathematics at Cambridge in 1932. In 1928, Dirac published a version of quantum mechanics that took into account the theory of relativity (see quantum theory). One consequence of his theory was the prediction of negative energy states for the electron, implying the existence of an antiparticle to the electron; this antiparticle, the positron, was discovered in 1932 by C. D. Anderson. Dirac's equation for the motion of a particle is a relativistic modification of the Schrödinger wave equation, the basic equation of quantum mechanics. For their work Dirac and Erwin Schrödinger shared the 1933 Nobel Prize in Physics. Dirac also received the Copley Medal of the Royal Society in 1952 for this and other contributions to the quantum theory, including his formulation (with Enrico Fermi) of the Fermi-Dirac statistics and his work on the quantum theory of electromagnetic radiation. He wrote The Principles of Quantum Mechanics (1930, 4th ed. 1958).
See biographies by H. Kragh (1990) and G. Farmelo (2009).
The Columbia Electronic Encyclopedia Copyright © 2004.
Licensed from Columbia University Press
joint normal distribution
A finite set of random variables $X_{1},\ldots,X_{n}$ are said to have a joint normal distribution or multivariate normal distribution if all real linear combinations
are normal. This implies, in particular, that the individual random variables $X_{i}$ are each normally distributed. However, the converse is not true, and sets of normally distributed random variables need not, in general, be jointly normal.
If $\boldsymbol{X}=(X_{1},X_{2},\ldots,X_{n})$ is joint normal, then its probability distribution is uniquely determined by the means $\boldsymbol{\mu}\in\mathbb{R}^{n}$ and the $n\times n$ positive semidefinite covariance matrix $\boldsymbol{\Sigma}$,
$\Sigma_{ij}=\operatorname{Cov}(X_{i},X_{j})=\mathbb{E}[X_{i}X_{j}]-\mathbb{E}[X_{i}]\mathbb{E}[X_{j}].$
Then, the joint normal distribution is commonly denoted as $\operatorname{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. Conversely, this distribution exists for any such $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$.
Figure 1: Density of joint normal variables $X,Y$ with $\operatorname{Var}(X)=2$, $\operatorname{Var}(Y)=1$ and $\operatorname{Cov}(X,Y)=-1$.
The joint normal distribution has the following properties:
1. If $\boldsymbol{X}$ has the $\operatorname{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ distribution for nonsingular $\boldsymbol{\Sigma}$ then it has the multidimensional Gaussian probability density function (see the numerical sketch after this list)
$f_{\boldsymbol{X}}(\boldsymbol{x})=\frac{1}{\sqrt{(2\pi)^{n}\det(\boldsymbol{\Sigma})}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^{\operatorname{T}}\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\right).$
2. If $\boldsymbol{X}$ has the $\operatorname{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ distribution and $\boldsymbol{\lambda}\in\mathbb{R}^{n}$ then
$\boldsymbol{\lambda}\cdot\boldsymbol{X}=\lambda_{1}X_{1}+\cdots+\lambda_{n}X_{n}\sim\operatorname{N}(\boldsymbol{\lambda}\cdot\boldsymbol{\mu},\boldsymbol{\lambda}^{\operatorname{T}}\boldsymbol{\Sigma}\boldsymbol{\lambda}).$
3. Sets of linear combinations of joint normals are themselves joint normal. In particular, if $\boldsymbol{X}\sim\operatorname{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ and $A$ is an $m\times n$ matrix, then $A\boldsymbol{X}$ has the joint normal distribution $\operatorname{N}(A\boldsymbol{\mu},A\boldsymbol{\Sigma}A^{\operatorname{T}})$.
4. The characteristic function is given by
$\varphi_{\boldsymbol{X}}(\boldsymbol{a})\equiv\mathbb{E}\left[\exp(i\boldsymbol{a}\cdot\boldsymbol{X})\right]=\exp\left(i\boldsymbol{a}\cdot\boldsymbol{\mu}-\frac{1}{2}\boldsymbol{a}^{\operatorname{T}}\boldsymbol{\Sigma}\boldsymbol{a}\right),$
for $\boldsymbol{X}\sim\operatorname{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ and any $\boldsymbol{a}\in\mathbb{C}^{n}$.
5. Let $\boldsymbol{X}$ be a random vector whose distribution is jointly normal. Suppose the coordinates of $\boldsymbol{X}$ are partitioned into two groups, forming random vectors $\boldsymbol{X_{1}}$ and $\boldsymbol{X_{2}}$; then the conditional distribution of $\boldsymbol{X_{1}}$ given $\boldsymbol{X_{2}}=\boldsymbol{c}$ is jointly normal.
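A numerical sketch of the properties above, using the covariance matrix from Figure 1 and assuming zero means (NumPy's sampler plus a direct transcription of the density in property 1):

    import numpy as np

    mu = np.zeros(2)
    Sigma = np.array([[2.0, -1.0],
                      [-1.0, 1.0]])   # Var(X)=2, Var(Y)=1, Cov(X,Y)=-1, as in Figure 1

    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(mu, Sigma, size=100_000)
    print(np.cov(samples.T))          # close to Sigma

    def mvn_pdf(x, mu, Sigma):
        # The density from property 1
        d = x - mu
        n = len(mu)
        norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
        return np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) / norm

    print(mvn_pdf(np.zeros(2), mu, Sigma))   # the peak of the density in Figure 1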
Synonyms: jointly normal; multivariate normal distribution; multivariate Gaussian distribution.
Yield to Maturity (YTM)
What it is:
Yield to maturity (YTM) measures the annual return an investor would receive if he or she held a particular bond until maturity.
How it works/Example:
To understand YTM, one must first understand that the price of a bond is equal to the present value of its future cash flows, as shown in the following formula:
P = [ sum from t = 1 to n of C/(1+r)^t ] + F/(1+r)^n
where:
P = price of the bond
n = number of periods
C = coupon payment
r = required rate of return on this investment
F = maturity value
t = time period when payment is to be received
To calculate the YTM, the investor then uses a financial calculator or software to find out what percentage rate (r) will make the present value of the bond's cash flows equal to today's selling
price. For example, let's assume you own a Company XYZ bond with a $1,000 par value and a 5% coupon that matures in three years. If this Company XYZ bond is selling for $980 today on the market,
using the formula above we can calculate that the YTM is 2.87%.
Note that because the coupon payments are semiannual, this is the YTM for six months. To annualize the rate while adjusting for the reinvestment of interest payments, we simply use this formula: annual YTM = (1 + semiannual YTM)^2 - 1.
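A root-finding sketch for the worked example (bisection on the pricing formula; the numbers are the Company XYZ values from the text, and bisection is one reasonable stand-in for "a financial calculator or software"):

    def price(r, C=25.0, F=1000.0, n=6):
        # Present value of semiannual coupons plus face value at per-period rate r
        return sum(C / (1 + r) ** t for t in range(1, n + 1)) + F / (1 + r) ** n

    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if price(mid) > 980.0:   # price too high means the rate must rise
            lo = mid
        else:
            hi = mid

    r = (lo + hi) / 2
    print(round(r, 4))                  # ~0.0287, the six-month YTM of 2.87%
    print(round((1 + r) ** 2 - 1, 4))   # ~0.0582, annualized with reinvestment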
Why it Matters:
YTM allows investors to compare a bond's expected return with those of other securities. Understanding how yields vary with market prices (that as bond prices fall, yields rise; and as bond prices
rise, yields fall) also helps investors anticipate the effects of market changes on their portfolios. Further, YTM helps investors answer questions such as whether a 10-year bond with a high yield is
better than a 5-year bond with a high coupon.
Although YTM considers the three sources of potential return from a bond (coupon payments, capital gains, and reinvestment returns), some analysts consider it inappropriate to assume that the
investor can reinvest the coupon payments at a rate equal to the YTM.
It is important to note that callable bonds should receive special consideration when it comes to YTM. Call provisions limit a bond's potential price appreciation because when interest rates fall,
the bond's price will not go any higher than its call price. Thus, a callable bond's true yield, called the yield to call, at any given price is usually lower than its yield to maturity. As a result,
investors usually consider the lower of the yield to call and the yield to maturity as the more realistic indication of the return on a callable bond.
Gauss-Jordan with complex numbers
July 9th 2009, 12:38 PM #1
I did a quick search on google but I did not find anything.
Is this the right approach:
1. divide the first row by 2
2. add the first row to row 2...
is this good so far?
now should I divide the 2nd row by i, and then subtract is by row 1?
July 9th 2009, 01:48 PM #2
Divide the first row by $2$ is OK
but then subtract $-1+i$ times the first row from the second.
(the reason you found nothing specific to complex matrices is that the process is identical to that for real matrices)
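Since the original matrix was an image and did not survive, here is a hypothetical 2x2 complex system whose first two row operations match the reply (A[0,0] = 2 and A[1,0] = -1+i are the only entries taken from the thread; everything else is made up for illustration):

    import numpy as np

    A = np.array([[ 2 + 0j, 1 + 1j, 4 + 0j],
                  [-1 + 1j, 3 + 0j, 2 - 1j]])

    A[0] = A[0] / 2                  # divide the first row by 2
    A[1] = A[1] - (-1 + 1j) * A[0]   # subtract (-1+i) times row 1 from row 2
    A[1] = A[1] / A[1, 1]            # normalize the second pivot
    A[0] = A[0] - A[0, 1] * A[1]     # clear the entry above it
    print(A)                         # reduced row-echelon form, complex arithmetic throughout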
Convolution Filter - Float Precision C Vs Java
I'm porting a library of image manipulation routines into C from Java and I'm getting some very small differences when I compare the results. Is it reasonable that these differences come from the two languages' handling of float values, or do I still have work to do?
The routine is convolution with a 3 x 3 kernel; it operates on a bitmap represented by a linear array of pixels, a width, and a height. You need not understand this code exactly to answer my question; it's just here for reference.
Java code;
for (int x = 0; x < width; x++){
    for (int y = 0; y < height; y++){
        int offset = (y*width)+x;
        if(x % (width-1) == 0 || y % (height-1) == 0){
            input.setPixel(x, y, 0xFF000000); // Alpha channel only for border
        } else {
            float r = 0;
            float g = 0;
            float b = 0;
            for(int kx = -1 ; kx <= 1; kx++ ){
                for(int ky = -1 ; ky <= 1; ky++ ){
                    int pixel = pix[offset+(width*ky)+kx];
                    int t1 = Color.red(pixel);   // t1..t3 are unused here
                    int t2 = Color.green(pixel);
                    int t3 = Color.blue(pixel);
                    float m = kernel[((ky+1)*3)+kx+1];
                    r += Color.red(pixel) * m;
                    g += Color.green(pixel) * m;
                    b += Color.blue(pixel) * m;
                }
            }
            input.setPixel(x, y, Color.rgb(clamp((int)r), clamp((int)g), clamp((int)b)));
        }
    }
}
return input;
Clamp restricts the bands' values to the range [0..255] and Color.red is equivalent to (pixel & 0x00FF0000) >> 16.
The C code goes like this;
for(y=1; y<height-1; y++){
    for(x=1; x<width-1; x++){             /* restored: loop over x, missing in the extract */
        offset = x + (y*width);
        rAcc = gAcc = bAcc = 0.0f;        /* restored: reset the accumulators per pixel */
        for(z=0; z<9; z++){               /* restored: loop over the 9 kernel elements */
            xk = x + xOffsets[z];
            yk = y + yOffsets[z];
            kOffset = xk + (yk * width);
            rAcc += kernel[z] * ((b1[kOffset] & rMask)>>16);
            gAcc += kernel[z] * ((b1[kOffset] & gMask)>>8);
            bAcc += kernel[z] * (b1[kOffset] & bMask);
        }
        // Clamp values
        rAcc = rAcc > 255 ? 255 : rAcc < 0 ? 0 : rAcc;
        gAcc = gAcc > 255 ? 255 : gAcc < 0 ? 0 : gAcc;
        bAcc = bAcc > 255 ? 255 : bAcc < 0 ? 0 : bAcc;
        // Round the floats
        r = (int)(rAcc + 0.5);
        g = (int)(gAcc + 0.5);
        b = (int)(bAcc + 0.5);
        output[offset] = (a|r<<16|g<<8|b);
    }
}
It's a little different xOffsets provides the xOffset for the kernel element for example.
The main point is that my results are out by at most one bit. The following are pixel values;
FF205448 expected
FF215449 returned
44 wrong
FF56977E expected
FF56977F returned
45 wrong
FF4A9A7D expected
FF4B9B7E returned
54 wrong
FF3F9478 expected
FF3F9578 returned
74 wrong
FF004A12 expected
FF004A13 returned
Do you believe this is a problem with my code or rather a difference in the language?
Kind regards,
The proper word for "clamp" is actually "saturate". If you are talking to someone they will know right away what "saturate" means but not necessarily "clamp". – Trevor Boyd Smith May 29 '09 at
@Trevor: That really depends on the person's background. – Jon Cage May 30 '09 at 10:51
5 Answers
After a quick look: do you realize that (int)r will floor the r value instead of rounding it normally? In the C code, you seem to use (int)(r + 0.5).
+1: I agree - I was about to post the same :-) – Jon Cage May 29 '09 at 12:06
Hi Fortega, I thought that (int)(rAcc + 0.5) was a cheap way of rounding floats to ints which you know are positive? You are quite correct, however: the legacy code truncates the floats whereas I was rounding them (trying to). Thank you for your help :) – gav May 29 '09 at 12:38
(int)(rAcc + 0.5) is indeed a (cheap? dirty? quick?) way of rounding floats to ints, both in Java and in C... – Fortega May 29 '09 at 12:48
@Gav: That's a neat little trick. I'll remember that! – Jon Cage May 30 '09 at 10:50
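The difference the accepted answer points at, in two lines (Python's int() truncates toward zero just like a Java (int) cast, so this illustrates both languages):

    r = 127.6
    print(int(r))         # 127 -- truncation, what the Java code was doing
    print(int(r + 0.5))   # 128 -- the +0.5 rounding trick (valid for non-negative r)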
Further to Fortega's answer, try the roundf() function from the C math library.
Sidenote: If you're using that, just remember that it'll get called at least 3 times per pixel. – Jasper Bekkers May 29 '09 at 12:12
@Jasper: If you're worrying it might take a while, I'd try it first and if it seems unacceptably slow then start worrying about it. No point in optimising prematurely. – Jon Cage May 29 '09 at 12:27
Java's floating point behaviour is quite precise. What I expect to be happening here is that the values are being kept in registers with extended precision. IIRC, Java requires that the precision is rounded to that of the appropriate type. This is to try to make sure you always get the same result (full details in the JLS). C compilers will tend to leave any extra precision there, until the result is stored into main memory.
That was my initial thought too--but the differences he shows are too big to be accounted for by "long double" precision. I think Fortega's got it. – Drew Hall May 29 '09 at 11:18
I would suggest you use double instead of float. Float is almost never the best choice.
This might be due to different default rounding in the two languages. I'm not saying they have (you need to read up to determine that), but it's an idea.
Prove that if f(x) = integral from 0 to x of f(t) dt then f = 0
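The page preserves only the question, so here is a standard sketch (assuming $f$ is continuous, so that the integral is defined and the Fundamental Theorem of Calculus applies): by the FTC, $f$ is differentiable with $f'(x) = f(x)$, and $f(0) = \int_0^0 f(t)\,dt = 0$. Let $g(x) = e^{-x} f(x)$; then $g'(x) = e^{-x}(f'(x) - f(x)) = 0$, so $g$ is constant, and $g(0) = f(0) = 0$ forces $f(x) = 0$ for all $x$.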
The naming of individual polyhedra is discussed on its own separate page.
antiprism - A semi-regular polyhedron constructed from two n-sided polygons and 2n triangles. See the prisms and antiprisms entry.
Archimedean - The 13 Archimedean solids are convex semi-regular polyhedra.
canonical form - A form of any given polyhedron distorted so every edge is tangent to the unit sphere and the center of gravity of the tangent points is the origin. See the canonical form page.
chiral - Having different left-handed and right-handed forms; not mirror symmetric; opposite of reflexible. The cube is not chiral; the snub cube is chiral, as its two mirror-image versions show.
compound - An assemblage of two or more polyhedra, usually interpenetrating and having a common center.
convex - A convex polygon or polyhedron contains no holes or indentations. If one constructs a line segment between any two points of a convex object, then every point on the line segment is part of the object. The pentagram is a non-convex polygon; the Kepler-Poinsot solids are non-convex polyhedra.
dihedral angle - The angle defined by two given faces meeting at an edge, e.g., all the dihedral angles of a cube are 90 degrees. An almost-spherical polyhedron (with many faces) has small
dihedral angles.
edge - A line segment where two faces meet. A cube has 12 edges.
enantiomorph - the mirror image of a given chiral polyhedron.
face - A polygon bounding a polyhedron. A cube has six square faces.
golden ratio - (1+sqrt(5))/2, approximately 1.61803, which happens to be the ratio of a diagonal of a pentagon to its side. This constant shows up in many metrical properties of the dodecahedron
and icosahedron just as the square root of 2 shows up in the metrical properties of the cube. A golden rectangle has sides in this ratio. A golden rhombus has diagonals in this ratio.
net - a drawing of a polyhedron unfolded along its edges, to lay flat in a plane. The earliest known examples of nets to represent polyhedra are by Albrecht Durer.
pentagram - five-pointed star.
Platonic - Five fundamental convex polyhedra. They have regular faces and identical vertices.
polygon - A connected two-dimensional figure bounded by line segments, called sides, with exactly two sides meeting at each vertex.
polyhedron - A three dimensional object bounded by polygons, with each edge shared by exactly two polygons. Various authors differ on the fine points of the definition, e.g., whether it is a
solid or just the surface, whether it can be infinite, and whether it can have two different vertices that happen to be at the same location.
prism - A semi-regular polyhedron constructed from two n-sided polygons and n squares. See the prisms and antiprisms entry.
quasi-regular - The edge-regular polyhedra within the uniform solids having special properties.
reflexible - Having a plane of mirror symmetry; opposite of chiral.
regular - A polygon is regular if its sides are equal and its angles are equal. A polyhedron is regular if every face is regular and if every vertex figure is regular. Standardly, there are nine
regular polyhedra: the five Platonic solids and the four Kepler-Poinsot solids, but others might be allowed, depending on the definition of polyhedron.
rhombus - A polygon consisting of four equal sides, e.g., in zonohedra.
self-intersecting - A polygon with edges which cross other edges; a polyhedron with faces which cross other faces.
semi-regular - Consisting of two or more types of regular polygons, with all vertices identical. This includes the Archimedean solids, the prisms and antiprisms, and the nonconvex uniform solids.
stellation - The process of constructing a new polyhedron by extending the face planes of a given polyhedron past their edges. See, e.g., the 59 stellations of the icosahedron.
truncate - To slice off a corner of a polyhedron around a vertex. The figure at the top of this page shows a cube with one vertex truncated.
uniform - A uniform polyhedron has regular faces, with each vertex equivalently arranged. This includes the Platonic solids, the Archimedean solids, the prisms and antiprisms, and the nonconvex
uniform solids.
vertex - A point at which edges meet. A cube has 8 vertices.
vertex figure - The polygon which appears if one truncates a polyhedron at a vertex. The figure at the top of this page shows that the vertex figure of the cube is the equilateral triangle. To be
sure to be consistent, one can truncate at the midpoints of the edges.
zonohedron - A polyhedron in which the faces are all parallelograms or other parallel-sided polygons.
Trying to sketch this graph on the complex plane
September 10th 2011, 06:17 AM #1
Sketch the graph of $|z+5| = 4$; that is, find all $z \in \mathbb{C}$ which satisfy this equation. What geometric shape is it?
$|x + iy +5| = 4$
$|(x+5) +iy|=4$
$\sqrt{(x+5)^2+y^2} = 4$
$x^2+10x+25 +y^2 = 16$
$x^2 + y^2 = -10x - 9$
I don't know what to do here. Is it a circle?
My notes have steps in polar form:
$(\sqrt{3})^4\left(\cos\left(\frac{8\pi}{3}\right)+i\sin\left(\frac{8\pi}{3}\right)\right)$
$= 9\left(-\frac{1}{2}+i\frac{\sqrt{3}}{2}\right)$
I'm not sure how these were derived.
Re: Trying to sketch this graph on the complex plane
It is a circle with radius 4 and centre at $(x,y) = (-5,0)$.
Re: Trying to sketch this graph on the complex plane
Because y is the imaginary part, not iy, you don't square i along with y.
It is a circle with radius 4 and center at (-5,0), as Sander pointed out.
Re: Trying to sketch this graph on the complex plane
Sander and andrew2322 are using the fact that |z- a| is the distance from the point z to the point a in the complex plane. |z+ 5|= |z- (-5)| so |z+ 5|= 4 is satisfied for all points whose
distanced from -5 is equal to 4- in other words, the circle with center -5 and radius 4.
Also, you arrived at $(x+ 5)^2+ y^2= 16$ and then proceeded to multiply out the square, etc. You should not have done that. You should have recognized immediately that this is of the form $(x- a)
^2+ (y- b)^2= r^2$, the equation of a circle with center (a, b) and radius r, with a= -5, b= 0, and r= 4.
Re: Trying to sketch this graph on the complex plane
That made a lot of sense, thanks! Yah, I didn't realise it was a circle. I only knew it in the form x^2 + y^2 = r^2
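A quick numerical confirmation of the circle (an added illustration; the parametrization $z = -5 + 4e^{i\theta}$ is just the circle with center $-5$ and radius $4$):

    import numpy as np

    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    z = -5 + 4 * np.exp(1j * theta)
    print(np.allclose(np.abs(z + 5), 4))   # True: every such z satisfies |z + 5| = 4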
Benjamin Peirce
First published Sat Feb 3, 2001; substantive revision Fri Aug 22, 2008
Benjamin Peirce (b. April 4, 1809, d. October 6, 1880) was a professor at Harvard with interests in celestial mechanics, applications of plane and spherical trigonometry to navigation, number theory
and algebra. In mechanics, he helped to establish the (effects of the) orbit of Neptune (in relation to Uranus). In number theory, he proved that there is no odd perfect number with fewer than four
distinct prime factors. In algebra, he published a comprehensive book on complex associative algebras. Peirce is also of interest to philosophers because of his remarks about the nature and necessity
of mathematics.
Born in 1809, Peirce became a major figure in mathematics and the physical sciences during a period when the U.S. was still a minor country in these areas (Hogan 1991). A student at Harvard College,
he was appointed tutor there in 1829. Two years later he became Professor of Mathematics in the University, a post which was changed in 1842 to cover astronomy also; he held it until his death in
1880. He played a prominent role in the development of the science curriculum of the university, and also acted as College librarian for a time. However, he was not a successful teacher, being
impatient with students lacking strong gifts; but he wrote some introductory textbooks in mathematics, and also a more advanced one in mechanics (Peirce 1855). Among his other appointments, the most
important one was Director of the U.S. Coast Survey from 1867 to 1874. Peirce also exercised influence through his children. By far the most prominent was Charles Sanders Peirce (1839–1914), who
became a remarkable though maverick polymath: mathematician, chemist, logician, historian, and much else. In addition, James Mills (1834–1906) became in turn professor of mathematics
at Harvard, Benjamin Mills (1844–1870) a mining engineer, and Herbert Henry Davis (1849–1916) a diplomat. Harvard professor Benjamin Osgood Peirce (1854–1914), mathematician and physicist was a
cousin. Benjamin Peirce did not think of himself as a philosopher in any academic sense, yet his work manifests interests of this kind, in two different ways. The first was related to his teaching.
To a degree unusually explicit in a mathematician of that time Peirce affirmed his Christianity, seeing mathematics as study of God's work by God's creatures. He rarely committed such sentiments to
print; but a short passage occurs in the textbook on mechanics previously mentioned, when considering the idea that the occurrence of perpetual motion in nature
would have proved destructive to human belief, in the spiritual origin of force and the necessity of a First Cause superior to matter, and would have subjected the grand plans of Divine
benevolence to the will and caprice of man (Peirce 1855, 31).
Peirce was more direct in a course of Lowell Lectures on ‘Ideality in the physical sciences’ delivered at Harvard in 1879, which James Peirce edited for posthumous publication (Peirce 1881b).
‘Ideality’ connoted ‘ideal-ism’ as evident in certain knowledge, ‘pre-eminently the foundation of the mathematics’. His detailed account concentrated almost entirely upon cosmology and cosmogony with
some geology (Petersen 1955). He did not argue for his stance beyond some claims for existence by design.
Peirce was primarily an algebraist in his mathematical style; for example, he was enthusiastic for the cause of quaternions in mechanics after their introduction by W. R. Hamilton in the mid 1840s,
and of the various traditions in mechanics he showed some favour for the ‘analytical’ approach, where this adjective refers to the links to algebra. His best remembered publication was a treatment of
‘linear associative algebras’, that is, all algebras in which the associative law x(yz)=(xy)z was upheld. ‘Linear’ did not carry the connotation of matrix theory, which was still being born in
others' hands, but referred to the form of linear combination, such as:
q = a + bi + cj + dk
in the case of a quaternion q. Peirce wrote an extensive survey (Peirce 1870), determining the numbers of all algebras with from two to six elements that also obey various other laws (Walsh 2000, ch.
2). To two of those he gave names which have become durable: ‘idempotent’, the law x^m = x (for m≥2) which George Boole had introduced in this form in his algebra of logic in 1847; and ‘nilpotent’,
when x^m = 0, for some m. The history of the publication of this work is very unusual (Grattan-Guinness 1997). Peirce had presented some of his results from 1867 onwards to the National Academy of
Sciences, of which he had been appointed a founder member four years earlier; but they could not afford to print it. Thus, in an initiative taken by Coast Survey staff, a lady without mathematical
training but possessing a fine hand was found who could both read his ghastly script and write out the entire text 12 pages at a time on lithograph stones. 100 copies were printed (Peirce 1870), and
distributed world-wide to major mathematicians and professional colleagues. Eleven years later Charles, then at Johns Hopkins University, had the lithograph reprinted posthumously, with some
additional notes of his own, as a long paper in American journal of mathematics, which J.J. Sylvester had recently launched (Peirce 1881a); it also came out in book form in the next year. This study
helped mathematicians to recognise an aspect of the wide variety of algebras which could be examined; it also played a role in the development of model theory in the U.S. in the early 1900s. Enough
work on it had been done by then for a book-length study to be written (Shaw 1907).
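The two durable coinages can be illustrated concretely (an example of ours, not one from Peirce's text) in the algebra of 2-by-2 real matrices: $E = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ satisfies $E^2 = E$ and is therefore idempotent, while $N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ satisfies $N^2 = 0$ with $N \neq 0$ and is therefore nilpotent.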
Peirce seems to have upheld his theological stance for all mathematics, and a little sign is evident in the dedication at its head:
To my friends This work has been the pleasantest mathematical effort of my life. In no other have I seemed to myself to have received so full a reward for my mental labor in the novelty and
breadth of the results. I presume that to the uninitiated the formulae will appear cold and cheerless. But let it be remembered that, like other mathematical formulae, they find their origin in
the divine source of all geometry. Whether I shall have the satisfaction of taking part in their exposition, or whether that will remain for some more profound expositer, will be seen in the
future (Peirce 1870, 1).
Peirce began with a philosophical statement of a different kind about mathematics, one which has become his best remembered single sentence: “Mathematics is the science that draws necessary conclusions” (Peirce 1870, 1). What does ‘necessary’ denote? Perhaps he was following a tradition in algebra, upheld especially by Britons such as George Peacock and Augustus De Morgan (a recipient of the
lithograph), of distinguishing the ‘form’ of an algebra from its ‘matter’ (that is, an interpretation or application to a given mathematical and/or physical situation) and claiming that its form
alone would deliver the consequences from the premises. In his first draft of his text he wrote the rather more comprehensible “Mathematics is the science that draws inferences”, and in the second
draft “Mathematics is the science that draws consequences”, though the last word was altered to yield the enigmatic form involving ‘necessary’ used in the book. The change is not just verbal; he must
have realised that the earlier forms were not sufficient (they are satisfied by other sciences, for example), and so added the crucial adjective. Certainly no whiff of modal logic was in his air. His
statement appears in the mathematical literature fairly often, but usually without explanation. One feature is clear, but often is not stressed. In all versions Peirce always used the active verb
‘draws’: mathematics was concerned with the act of drawing conclusions, not with the theory of so acting, which belonged in disciplines such as logic. He continued:
Mathematics, as here defined, belongs to every enquiry; moral as well as physical. Even the rules of logic, by which it is rigidly bound, could not be deduced without its aid (Peirce 1870, 3).
In a lecture of the late 1870s he described his definition as
wider than the ordinary definitions. It is subjective; they are objective. This will include knowledge in all lines of research. Under this definition mathematics applies to every mode of enquiry
(Peirce 1880, 377).
Thus Peirce maintained the position asserted by Boole that mathematics could be used to analyse logic, not the vice versa relationship between the two disciplines that Gottlob Frege was about to put
forward for arithmetic, and which Bertrand Russell was optimistically to claim for all mathematics during the 1900s. Curiously, the third draft of the lithograph contains this contrary stance in
“Mathematics, as here defined, belongs to every enquiry; it is even a portion of deductive logic, to the laws of which it is rigidly subject”; but by completion he had changed his mind. Peirce's son
Charles claimed to have influenced his father in forming his definitive position, and fiercely upheld it himself; thereby he helped to forge a wide division between the algebraic logic which he was
developing from the early 1870s with his father, Boole and de Morgan as chief formative influences, and the logicism (as it became called later) of Frege and Russell and also the ‘mathematical logic’
of Giuseppe Peano and his school in Turin (Grattan-Guinness 1988).
This list includes some valuable items not cited in the text.
Primary Sources
• Peirce Manuscripts: Houghton Library, Harvard University.
• 1855. Physical and celestial mathematics, Boston: Little, Brown.
• 1861. An elementary treatise on plane and spherical trigonometry, with their applications to navigation, surveying, heights, and distances, and spherical astronomy, and particularly adapted to
explaining the construction of Bowditch's navigator, and the nautical almanac, rev. ed., Boston: J. Munroe.
• 1870. Linear associative algebra, Washington (lithograph).
• 1880. ‘The impossible in mathematics’, in Mrs. J. T. Sargent (ed.), Sketches and reminiscences of the Radical Club of Chestnut St. Boston, Boston: James R. Osgood, 376–379.
• 1881a. ‘Linear associative algebra’, Amer. j. math., 4, 97–215. Also (C.S. Peirce, ed.) in book form, New York, 1882. [Printed version of Peirce 1870.]
• 1881b. Ideality in the physical sciences, (J. M. Peirce, ed.), Boston: Little, Brown.
• 1980. Benjamin Peirce: “Father of Pure Mathematics” in America, (I. Bernard Cohen, ed.), New York: Arno Press. [Photoreprints, including that of (Peirce 1881a).]
Secondary Sources
• Archibald, R.C. (ed.) 1925. ‘Benjamin Peirce’, American mathematical monthly, 32: 1–30; repr. Oberlin, Ohio: Mathematical Association of America.
• Archibald, R.C. 1927. ‘Benjamin Peirce's linear associative algebra and C.S. Peirce’, American mathematical monthly, 34: 525–527.
• Grattan-Guinness, I. 1988. ‘Living together and living apart: on the interactions between mathematics and logics from the French Revolution to the First World War’, South African journal of
philosophy, 7/2: 73–82.
• Grattan-Guinness, I. 1997. ‘Benjamin Peirce's Linear associative algebra (1870): new light on its preparation and “publication”’, Annals of science, 54: 597–606.
• Hogan, E. 1991. ‘ “A proper spirit is abroad”: Peirce, Sylvester, Ward, and American mathematics’, Historia mathematica, 18: 158–172.
• Hogan, E. 2008. Of the human heart. A biography of Benjamin Peirce, Bethlehem: Lehigh University Press.
• Kent, D. 2005. Benjamin Peirce and the promotion of research-level mathematics in America: 1830–1880. Doctoral Dissertation, University of Virginia.
• King, M. 1881. (Ed.), Benjamin Peirce. A memorial collection, Cambridge, Mass.: Rand, Avery. [Obituaries.]
• Novy, L. 1974. ‘Benjamin Peirce's concept of linear algebra’, Acta historiae rerum naturalium necnon technicarum (Special Issue), 7: 211–230.
• Peterson, S. R. 1955. ‘Benjamin Peirce: mathematician and philosopher’, Journal of the history of ideas, 16: 89–112.
• Pycior, H. 1979. ‘Benjamin Peirce's linear associative algebra’, Isis, 70: 537–551.
• Schlote, K.-H. 1983. ‘Zur Geschichte der Algebrentheorie in Peirces “Linear Associative Algebra”’, Schriftenreihe der Geschichte der Naturwissenschaften, Technik und Medizin, 20/1: 1–20.
• Shaw, J. B. 1907. Synopsis of linear associative algebra. A report on its natural development and results reached to the present time, Washington.
• Walsh, A. 2000. ‘Relationships between logic and mathematics in the works of Benjamin and Charles S. Peirce’, Ph. D. thesis, Middlesex University.
Matlab - selecting values
Suppose I have a matrix
A = [1,2,3,4,5 ; 1,1,1, 21, 43]
I want to select the entries from the first row that have a 1 in the row below them, basically ending up with [1,2,3] as a result. How do I do this? Thank you very much.
You can use logical indexing like this:
result = A(1, A(2,:) == 1)
This says: take the first row of A, and the columns for which the expression A(2,:) == 1 holds true.
A(2,:) == 1 checks for every column in row 2 whether the value is 1 and returns an array of true or false that acts as a selector as described above. In your example, it would
produce an array [1 1 1 0 0].
Introduction to Induction
An introduction to the art of proving by induction. A list of steps to follow in order to prove something using induction, and several famous examples of types of induction proofs.
• Lesson Slides - Slides that are written on from the lesson.
• Mathematical Induction - An explanation of how to prove a statement by induction.
• Mathematical Induction 2 - Another extensive explanation of induction.
• What is induction?
• How do you prove something by induction?
• What is an induction assumption?
• How do you use the induction assumption?
This lesson is wonderful for anyone who is just being introduced to the art of proofs. Induction is a great starter proof that helps a student who is new to proofs wrap his or her head around the
idea. Not only is the basic structure of a proof by induction explained, several examples are included for additional practice and understanding. A really great crash course in Induction, but
containing no additional resources.
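As a quick illustration of the recipe described above (an example of ours, not one taken from the video), here is the classic first induction proof, showing that $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$ for all $n \ge 1$:
Base case ($n = 1$): $1 = \frac{1 \cdot 2}{2}$, so the formula holds.
Induction step: assume $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$ (the induction assumption). Then
$1 + 2 + \cdots + n + (n+1) = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}$,
which is the original formula with $n+1$ in place of $n$. By induction, the statement holds for every $n \ge 1$.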
Oh this video is pretty legit. I really like the slides you use. You should b a math teacher love derek
Sinai's Billiards
Sinai's Billiards (Lorentz Gas) is a term referring to the study of the chaotic dynamical properties of hard elastic balls (read on for a less technical translation). One of the early and astounding
results of this study is that a gas of two hard balls is strongly ergodic (i.e. the gas obeys the Boltzmann hypothesis and becomes stochastic, allowing the basic laws of thermodynamics to hold true
and be applicable). This is a truly amazing result: you don't need millions of atoms to have a system that behaves stochastically: two atoms are enough.
What's a 'hard elastic ball', I hear you saying, and why is this interesting? A 'hard elastic ball' is just what it sounds like: think of steel ball bearings or BB's, something that bounces very well
and is very hard. The study of how they bounce is interesting at several levels. At a very basic level, this should remind you of the 'ideal gas' as studied in high-school physics class: you are
taught to envision a bunch of round, bouncy atoms all bouncing around (randomly) in a box. So it's kinda interesting to ask how things bounce around. But what really makes this an interesting problem is that it's a doorway to much, much deeper mysteries about the universe, as explained in the 'digression' below.
Below, we present a series of pictures that visually show this result, by ray-tracing a simple cubic crystalline lattice. We put 'atoms' or mirrored balls in the lattice, and then trace a ray of
light through the lattice. We then look at lattices of different sizes: only a few atoms, or hundreds, or zillions. Because of the periodic boundary conditions, this problem is identical to one where
we put a reflective ball in the center of a mirrored six-sided cube. We stop ray tracing when the ray has bounced some number of times off the walls. (The fact that reflective vs. toroidal boundary conditions make no qualitative difference is illustrated at the bottom of this page.)
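For readers who want to experiment, here is a minimal sketch of the bounce geometry in Java (our own illustration, and not the code used to render these images); it applies the standard specular reflection d' = d - 2(d.n)n at each hit, with n the outward unit normal of the sphere at the hit point:

// Minimal sketch of specular reflection off a mirrored sphere (illustrative only).
final class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    double dot(Vec3 o)   { return x * o.x + y * o.y + z * o.z; }
    Vec3 sub(Vec3 o)     { return new Vec3(x - o.x, y - o.y, z - o.z); }
    Vec3 scale(double s) { return new Vec3(s * x, s * y, s * z); }
    double norm()        { return Math.sqrt(dot(this)); }

    // Reflect this direction about the unit normal n:  d' = d - 2 (d . n) n
    Vec3 reflect(Vec3 n) { return sub(n.scale(2 * dot(n))); }

    // Outward unit normal of a sphere centered at c, evaluated at surface point p
    static Vec3 sphereNormal(Vec3 p, Vec3 c) {
        Vec3 r = p.sub(c);
        return r.scale(1.0 / r.norm());
    }
}

Two nearby rays hit slightly different points, pick up slightly different normals, and the reflection amplifies that difference on every bounce; this is the hyperbolic divergence that makes the trajectories chaotic.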
The chaotic nature of the billiards is even more strikingly apparent if we look at where a ray came from. In the picture below, we've colored the walls of the cube different colors. We can see these
walls reflected in the sphere in the center. You can clearly see the perspective fore-shortening of the walls of the cube, and you can see clearly how the sphere is reflecting the walls.
At this point in time, physicists are engaged in a fundamental debate pitting the philosophical ideas of 'emergent behavior' and 'reductionism' against each other. On the one hand, (according to the
dogma of reductionism), air is made out of atoms (quantum mechanical ones at that), which are made out of smaller pieces, and so on. On the other hand (according to the dogma of emergent behavior),
out of this gas of atoms appear the laws of statistical mechanics; and the laws of statistical mechanics don't seem to care at all that the atoms were quantum mechanical. Neither dogma explains very
well (ok, doesn't really explain at all) how we got from here to there. The problem is that we don't really understand how to get from deterministic, time-reversible equations of motion for point
particles to the smooth, density-based, time-irreversible equations of statistical mechanics. Where did the time symmetry go? How did point particles become a smooth continuum?
Most textbooks give a hand-waving explanation about how Avogadro's number is so big, and maybe a derivation of the Gaussian distribution as the large-number limit of the Poisson distribution. Tastes
great, less filling. The surprise of Sinai's billiards (as so amply illustrated in these pages) is that you don't need a million atoms to get stochastic behavior, you only need two. Furthermore, the
loss of determinism is not due to some 'random averaging', but is a fully deterministic, chaotic process: it looks like the chaos one sees in fractals.
Several mysteries remain. First, what happens when we replace the hard elastic balls with quantum-mechanical dispersive waves? After all, we do believe that atoms are quantum mechanical, and we do
know that if we put a single atom in a box, localize it to one spot, close the box and wait, then its wave function will slowly expand to fill the box. If we put two in, well, the same. Is this
problem completely unrelated to Sinai's billiards? It seems to be, but is there something more subtle going on?
A related mystery is brought up by Dean Driebe's derivation of time asymmetry from the Bernoulli map. The derivation is important, but the explanation seems shallow: one picks one representation (in
the sense of 'representation of a group') and gets reversible particle/point dynamics, and one picks a different representation, and gets time-asymmetric chaotic evolution. This works great in the
pure, mathematically abstract realm of the Bernoulli transform and Baker's map. But when applied to the physics of a hard gas, it's disconcerting: the physics depends on the representation. This disconcerting feeling is not new: quantum mechanics is rife with it: the time evolution of a quantum state depends on how it was prepared, which is a way of saying that it depends on the representation (e.g. 'singlet' and 'triplet' are different representations of the rotation group). We got used to this notion in quantum mechanics, but it's disturbing to bump into it once again when dealing with chaotic systems. It's kind of like state reduction in quantum measurement: one can talk about the chaotic evolution of the billiards system, but when one asks 'where's the billiard
ball right now?' one must leave the time-asymmetric representation behind, and hop back to the time-symmetric, deterministic point-particle representation. Very discontinuous. Makes you wonder what
reality really is. How can reality just change like that, on a whim, as it were?
The fourth mystery invokes the words 'quantum mechanics' again, but in a different way than above. The 'quantum measurement problem' to this day remains a 'problem', or is at least opaque.
Measurements seem to happen when a quantum particle interacts with a many-body system, whether (for example) silver on a film plate, or condensation in a cloud chamber. Well, Sinai's billiards are a
sort-of realistic model for crystals and gases; is there perchance any connection at all between quantum measurement, and the goings-on with billiards? Maybe there is none at all, but where else can
we start looking for a suitable foundation for the many-body nature of wave-function collapse?
Miscellaneous Technical Results
The mean free path of a ray in this lattice is presented here.
Below follows the entire atlas of images, for a variety of ball sizes and lattice sizes. We provide this atlas for all of the usual reasons: you, the reader, are no doubt curious as to what might
happen if a parameter or two was varied. The atlas provides a simple visual guide to these variations. In the first round of images, the ball radius is equal to 0.3 of the size of the fundamental
basic cubic cell. First, we start with the image of one ball:
Then a two-deep lattice:
Then a three-deep lattice:
Then a four-deep lattice: Notice the reflections of the other balls are showing in each of the balls.
And so on. Note how the reflected image in the sphere keeps getting more and more complicated. That is the other way to envision the transition to chaos: two rays of light, initially very close to
each other, hit the sphere, and bounce off in slightly different directions. But as we can see, the reflection of a reflections of a reflection gets more and more complicated and filigreed: even
close-by rays will hit on different reflections. Eventually, the pattern becomes so complicated, it just turns to mud.
Radius 0.4
We can see that larger spheres just mix things up a whole lot quicker. The images below get muddy, quickly.
Radius 0.2
Smaller spheres don't mix things up as quickly as the big ones. But with a big enough lattice, they get there eventually. Notice the diagonal lines appearing in the image. These lines correspond to the
iterated Bernoulli map.
Radius 0.1
Some really small spheres. Just like above, the key word is 'eventually'. With small spheres, the mean free path can be very very long. But the spheres, no matter how small, still introduce
hyperbolic trajectories, and as we know, anything hyperbolic is chaotic.
Toroidal Boundary Conditions
All of the above images were computed using reflective boundary conditions. What if we had used toroidal boundary conditions instead? That is, what if we ray-traced a lattice of 'real' spheres laid
out on a 'real' lattice, instead of a mirror-land of reflections? The pictures below show the progress of rays in a true, non-mirrored lattice. As you can see, it makes no qualitative difference.
Note how the background is a uniform color, not a checkerboard. The background still looks tiled, and it may be worth understanding why. When I carve out a finite sized lattice, what I carve out is
roughly spherical. Think of a sphere assembled from toy Lego blocks. What looks like 'tiles' are in fact just the side-walls of the more distant cells.
'Ahah', you might think, 'but what if we worked with a true cube?' That should make the tiling effect go away. And as the pictures below show, this does indeed seem to simplify things, at least at first. We can see that much of the apparent complexity of Sinai's billiards does indeed seem to be due to the boundary conditions. But in the end, it's not the boundary conditions that matter. It really is the hyperbolic effect of rays bouncing off spheres that makes classical trajectories through a lattice of atoms chaotic.
Closing Thoughts
Is it really remarkable that a ray bouncing through a lattice of balls has a chaotic trajectory? Well, maybe not, for if one thinks about it, how could it be otherwise? In that case, the surprise
should come when one thinks about passing a wave, rather than a point particle, through the lattice. As any high-school student (that didn't sleep through physics class) knows, waves traveling in a
lattice are not chaotic, but instead exhibit diffraction. You don't need x-rays shining on a crystal to get diffraction: simple water-wave tanks will show water waves diffracting off of pilings.
That, in a nutshell, summarizes what happens when one looks for chaos in the quantum world: one usually finds that the quantum analog is plain, simple and ordered, and not chaotic.
But this conclusion is misleading. The quantum version of Sinai's billiards is not textbook diffraction from a crystalline lattice. The traditional diffraction calculations make a simplifying
assumption: there is only one interaction, only one bounce, between the incoming and outgoing rays. This one bounce is what allows the waves to coherently superimpose. If one allows for multiple
bounces, there is a powerful mixing or decoherence that completely damps the wave. Indeed, looking at a wave tank, we can see that diffraction is a 'surface' effect. The waves penetrate some depth
into the regular lattice, but they do not penetrate arbitrarily deep. Diffraction is happening near the surface, where the waves can penetrate, bounce, and get back out relatively unscathed.
The way to see this by means of a ray-tracing numeric simulation is to use a Feynmann Path Integral. As a ray bounces through the lattice, the distance that it travels is kept track of. This distance
is used to compute the phase of the ray as it emerges from the lattice, using the traditional exp (ikx). Many (randomly generated) rays can be passed through the lattice, and their phases are summed
as they emerge. One quickly finds that the phases all cancel out and everything washes out to zero. The quantum analog of Sinai's billiards appears to be a kind of anechoic chamber, where all waves
are absorbed without reflection.
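A toy version of this phase summation can be sketched in a few lines of Java; this is an illustration of ours, with made-up path lengths standing in for the true bounce-by-bounce distances a ray tracer would accumulate:

// Toy Feynman-style phase sum: each ray of path length L contributes exp(i k L).
// When the path lengths are spread chaotically over many wavelengths, the sum
// washes out to (nearly) zero, as described in the text.
import java.util.Random;

public class PhaseSum {
    public static void main(String[] args) {
        double k = 50.0;              // wave number (an assumed value)
        int rays = 100_000;
        Random rng = new Random(1);
        double re = 0.0, im = 0.0;
        for (int n = 0; n < rays; n++) {
            // Stand-in for a ray-traced path length through the lattice.
            double pathLength = 10.0 + 5.0 * rng.nextDouble();
            re += Math.cos(k * pathLength);
            im += Math.sin(k * pathLength);
        }
        double meanAmplitude = Math.hypot(re, im) / rays;
        System.out.println("mean amplitude = " + meanAmplitude); // close to 0
    }
}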
Open Research Items
• Determine distribution of free paths
• Compare the above to the mean free path of x-rays in crystals for real-life systems.
• Find literature that discusses penetration depth of waves in regular lattices.
• Compute similar results for a two-ball hard gas.
Notes & Bibliography
Assorted notes & bibliography.
• Introduction (from a physicists/mathematicians viewpoint) to Sinai's Billiards.
• Chaos in Semiconductor and Optical Billiards introduces the quantum equivalent of Sinai's Billiards in a two-dimensional electron gas in Gallium Arsenide. Very curiously, the magneto-resistance
is a fractal that appears to be exactly self-similar!!
Copyright (c) 2001, 2002 Linas Vepstas
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1; with no Invariant Sections, with no Front-Cover Texts, and
with no Back-Cover Texts. A copy of the license is included at the URL http://www.linas.org/fdl.html, the web page titled "GNU Free Documentation License".
Dynamic Debugging
Mathematica 6 introduced a novel and unique technology for building user interfaces that are embedded in notebook documents. The new technology is based around Dynamic components that are embedded in
user interfaces and graphical elements. These work both as an event function, tracking changes in state, and also as an object model, allowing parts of a structure to be modified. It can be
contrasted with the more traditional approach offered by GUIKit. Information about the technology can be found at http://reference.wolfram.com/mathematica/guide/CustomInterfaceConstruction.html.
If you want to work with Dynamic in Mathematica 6, you can use the Workbench to help with debugging your implementation.
You should launch Mathematica in debugging mode.
Now you need some code that is called by a Dynamic computation. The code should reside in a Mathematica project. For example, consider code that draws a spiral attached to a locator: it makes a nice, simple interactive graphical example, since as you move the locator, the spiral moves around with it.
You should now set a breakpoint in the code.
When you move the locator in the notebook, this triggers the breakpoint, and the Debug view displays the suspended computation.
When you are at a breakpoint, the notebook front end is waiting for the result to the computation. Since it cannot proceed, it will become unresponsive until execution continues. This is typical for
debugging a user interface application.
Due to technical details, you cannot debug Dynamic when the Mathematica kernel is already busy. For many useful applications this is not a severe limitation, and one day it will be removed.
Finding Intersection of Parametric Equation and Axis
Hi, I've been trying to learn the C4 paper ahead of schedule and have gotten down to one last question I can't quite get.
Q5) A curve C has parametric equations $x = at^2$ , $y = 2at$. Show that the equation of the normal to C at the point P, whose parameter is $p$, is:
$px + y - 2ap - ap^3 = 0$
The normal to C at P meets the x-axis at Q. The perpendicular from P to the x-axis meets the x-axis at R. Find the length of QR.
This is my working so far:
$dx/dt = 2at$
$dy/dt = 2a$
$dy/dx = dy/dt * dt/dx = 1/t$ (Chain Rule)
Normal Gradient = $-1/m$ = $-p$
(Use p as parameter for normal equation)
$y - y1 = m(x - x1)$
$y - 2ap = -p(x - ap^2)$
$y - 2ap = -px + ap^3$
$px + y - 2ap - ap^3 = 0$ (As required)
And thats about as far as I got, I've been trying to let y = 0 and let q be the parameter, so:
$qx - 2aq - aq^3 = 0$
But do you then make them equal to each other and figure it out from there? I don't really know; it's a WJEC paper too, so I can't get any marking schemes without paying, so I thought this way would be better to get an understanding.
So you do the hard part and get stuck in an almost trivial part? Setting $y=0$ in the line, you get $x=2a+ap^2\Longrightarrow Q=(2a+ap^2,0)$, and the perpendicular from the point P to the x-axis meets the axis at $R=(ap^2,0)$, so...
Oh yeah, haha that put me in my place, thanks a lot, was a long day, spent 4-5 hours researching unknown topics
Thanks again
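For completeness, the remaining step that the reply leaves to the reader is a single subtraction:

$QR = |x_Q - x_R| = |(2a + ap^2) - ap^2| = 2a$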
Ideas to represent such a case
I am designing a flight simulation program and I am looking for ideas on how to properly implement such a requirement.
Kindly look at my picture below. The points represent locations.
The idea is this, I wanted to properly create a data structure to best represent such scenario in java such that
• when I am in Point 1, how far am I from the last point, which is Point 8?
□ Points 2, 3 and 5 are at the same distance from Point 8
□ From Point 1, I could traverse to Point 3, then Point 6, then 7, then 8; that would equate to 4 steps.
• when I am in Point 0
□ I could traverse to Point 4, then 5, then 7, then reach Point 8, which would equate to 4 steps also.
I just want to assist users in finding different routes.
Is this possible, and which Java data structure would best fit this requirement? Also, any design ideas on how to implement this?
Sorry if my question is vague; I am just trying to get as much information as I can to properly handle this requirement.
java design data-structures graph
3 en.wikipedia.org/wiki/Graph_theory I'd recommend read this first. – Alex Stybaev May 10 '12 at 7:52
And Travelling_salesman_problem would be the next step. – Marko Topolnik May 10 '12 at 8:05
arrggghh... as a Non-CS student who happens to land a job in programming, I happen to understand only basic data structures.. Also, some of my web project(s) doesn't need any graphs. Time to check
this further thanks everyone. – Mark Estrada May 11 '12 at 8:40
2 Answers
What you have is a weighted graph where the weights represent the distance between the nodes (which is very common). You could easily implement this yourself (and it's a great way to learn!), but there is lots of Java source code for this out there.
Of course, this is not a Java data structure. It's simply a data structure (or a concept), used by everyone, everywhere.
Calculating steps and distances is very easy once you've implemented a weighted graph.
There are massive amounts of documentation on all of this, especially here on Stack Overflow.
This is a Shortest Path problem, a common graph problem. The usual way to represent the data is an Adjacency List or Matrix:
An adjacency list keeps, for every node, all 1-step reachable destinations. This is often implemented as a Linked List. It's the proper choice if your graph is relatively sparse (i.e. few destinations per node).
An adjacency matrix is used for (very) dense graphs. Basically, you keep an NxN matrix of values (weights/costs, or yes/no booleans). Then, distances[i][j] represents the cost of travelling to j from i. An unavailable arc has a cost of INF (or some error value).
The problem itself is generally solved by Dijkstra's Algorithm.
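A minimal sketch of both suggestions in Java (illustrative only; the node ids and edge weights below are made up, not read off the questioner's picture):

import java.util.*;

public class Routes {
    // Weighted adjacency list: for each node, a list of {neighbor, distance} pairs.
    static List<List<int[]>> newGraph(int n) {
        List<List<int[]>> g = new ArrayList<>();
        for (int i = 0; i < n; i++) g.add(new ArrayList<>());
        return g;
    }

    static void addEdge(List<List<int[]>> g, int u, int v, int w) {
        g.get(u).add(new int[]{v, w});
        g.get(v).add(new int[]{u, w}); // routes are two-way
    }

    // Dijkstra's algorithm: shortest distance from src to every other node.
    // With all edge weights set to 1, dist[] simply counts steps.
    static int[] shortestDistances(List<List<int[]>> g, int src) {
        int[] dist = new int[g.size()];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(a -> a[1]));
        pq.add(new int[]{src, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            if (cur[1] > dist[cur[0]]) continue; // stale queue entry
            for (int[] edge : g.get(cur[0])) {
                int candidate = dist[cur[0]] + edge[1];
                if (candidate < dist[edge[0]]) {
                    dist[edge[0]] = candidate;
                    pq.add(new int[]{edge[0], candidate});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        List<List<int[]>> g = newGraph(9);        // points 0..8
        addEdge(g, 1, 3, 1); addEdge(g, 3, 6, 1); // made-up sample edges
        addEdge(g, 6, 7, 1); addEdge(g, 7, 8, 1);
        System.out.println("steps from 1 to 8: " + shortestDistances(g, 1)[8]); // 4
    }
}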
The Top-Dog Index: A New Measurement for the Demand Consistency of the Size Distribution in Pre-Pack Orders for a Fashion Discounter with Many Small Branches (2008)
Sascha Kurz, Jörg Rambau, Jörg Schlüchtermann, Rainer Wolf
We propose the new Top-Dog-Index, a measure for the branch-dependent historic deviation of the supply data of apparel sizes from the sales data of a fashion discounter. A common approach is to
estimate demand for sizes directly from the sales data. This approach may yield information on the demand for sizes if aggregated over all branches and products. However, as we will show in a real-world business case, this direct approach is in general not capable of providing information about each branch's individual demand for sizes: the supply per branch is so small that either the number of sales is statistically too small for a good estimate (early measurement) or there will be too much unsatisfied demand neglected in the sales data (late measurement). Moreover, in our
real-world data we could not verify any of the demand distribution assumptions suggested in the literature. Our approach cannot estimate the demand for sizes directly. It can, however,
individually measure for each branch the scarcest and the amplest sizes, aggregated over all products. This measurement can iteratively be used to adapt the size distributions in the pre-pack
orders for the future. A real-world blind study shows the potential of this distribution free heuristic optimization approach: The gross yield measured in percent of gross value was almost one
percentage point higher in the test-group branches than in the control-group branches.
On the minimum diameter of plane integral point sets (2007)
Sascha Kurz, Alfred Wassermann
Since ancient times mathematicians have considered geometrical objects with integral side lengths. We consider plane integral point sets P, which are sets of n points in the plane with pairwise integral distances where not all the points are collinear. The largest occurring distance is called the set's diameter. Naturally the question arises about the minimum possible diameter d(2,n) of a plane integral point set consisting of n points. We give some new exact values and describe state-of-the-art algorithms to obtain them. It turns out that plane integral point sets with minimum diameter very likely consist of subsets with many collinear points. For this special kind of point set we prove a lower bound for d(2,n) achieving the known upper bound n^{c_2 log log n} up to a constant in the exponent.
Integral point sets over Z_n^m (2007)
Axel Kohnert, Sascha Kurz
There are many papers studying properties of point sets in Euclidean space or on integer grids, with pairwise integral or rational distances. In this article we consider point sets whose distances or coordinates, instead of being integers, are elements of Z_n, and study the properties of the resulting combinatorial structures.
Bounds for the minimum oriented diameter (2008)
Sascha Kurz, Martin Lätsch
We consider the problem of finding an orientation with minimum diameter of a connected bridgeless graph. Fomin et al. discovered a relation between the minimum oriented diameter and the size of a minimal dominating set. We improve their upper bound.
An exact column-generation approach for the lot-type design problem (2012)
Sascha Kurz, Miriam Kießling, Jörg Rambau
We consider a fashion discounter distributing its many branches with integral multiples from a set of available lot-types. For the problem of approximating the branch- and size-dependent demand using those lots, we propose a tailored exact column generation approach assisted by fast algorithms for intrinsic subproblems, which turns out to be very efficient on our real-world instances.
The Integrated Size and Price Optimization problem (2012)
Miriam Kießling, Sascha Kurz, Jörg Rambau
We present the Integrated Size and Price Optimization Problem (ISPO) for a fashion discounter with many branches. Based on a two-stage stochastic programming model with recourse, we develop an
exact algorithm and a production-compliant heuristic that produces small optimality gaps. In a field study we show that a distribution of supply over branches and sizes based on ISPO solutions is
significantly better than a one-stage optimization of the distribution ignoring the possibility of optimal pricing.
9 Silly Venn Diagrams
Venn diagrams give a visual representation of sets and their logical relations and overlaps. The actual logic behind these nine diagrams varies according to the humor intended.
1. Animals Playing Music
This Venn diagram explains how such diagrams work with no words at all. It's a t-shirt from Tenso Graphics called Math, also available as a poster.
2. Muppet Names
There is a method to the madness of naming the Muppets. You could probably stick a few more of them in here if there were room.
3. Dating Expectations
This diagram illustrates how both men and women have expectations of meeting the opposite sex in more diverse places than they actually do. What's that bar at the bottom? Oh, that's a progress bar...
when men and women finally meet at a bar. See this diagram in motion at Top Cultured.
4. The Origin
What does it mean when alcohol overlaps Japanese culture? The invention of a new pastime!
5. Van Diagram
Adam Koford designed the Van Diagram t-shirt for Woot! You are warned not to wear it backwards because people will think it says "boxcar" and that doesn't make any sense.
6. Shakespeare in a Minute
Several diagrams illustrate some of Shakespeare's most enduring lines from various plays. Only those somewhat familiar with the Bard's works will find this entertaining.
7. Nerds and Geeks
Are you a geek or a nerd? How you answer that question (one or the other, or a long rant about the definitions of the two terms) determines where you fall on this diagram by Randall Munroe of xkcd.
If you are neither a nerd nor a geek, your opinion on the distinction is mild compared to those who have a stake in the controversy.
8. Real World Tables
These nesting Venn-inspired tables (there are two of them) can be reconfigured as you like so you can illustrate overlapping sets to people who visit your home. Whether they ever return again will
depend on how math-savvy and tolerant they are.
9. The Last Word
There are generators in which you can enter specific data to create your own Venn diagrams. Here's one I have used before. It's much easier if you just make up your data out of whole cloth; I made
this simple diagram myself with a paint program.
See also: Fun with Venn and Euler Diagrams and 10 Venn and Not-quite-Venn Diagrams.
Rockwall Prealgebra Tutor
Find a Rockwall Prealgebra Tutor
I am a certified secondary teacher with over 15 years experience teaching and tutoring a wide variety of subjects and ages. I have a degree in English from Southern Methodist University with a
specialization in Creative Writing. I also have a minor in education.
25 Subjects: including prealgebra, reading, English, geometry
...I was required to teach math essentials, reading essentials, and SAT/ACT prep, as well as homework support for students in grades K-12. I also had a handful of college students and adults that would come to the center for homework support. At Sylvan Learning Center I acquired the necessary skills t...
23 Subjects: including prealgebra, reading, chemistry, SAT math
...I was a tutor in college for students that needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years, where math is important to performing my job. I hold a Master's Degree in Education with an emphasis on instruction in math and science for grades 4 through 8.
11 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I am a Texas state certified teacher (math 4-12). I teach a complete Algebra 2 course to students, whether as acceleration (credit by exam at the end from their district) or as an extension of their school curriculum. The course includes units: system of equations and Inequalities, Property of Parent ...
20 Subjects: including prealgebra, physics, calculus, statistics
...As an economic analyst, I developed an advanced MS Excel tool to mine raw data and organize and analyze it, which saved our organization countless man-hours of work. I'm well versed in MS Excel
functions and the writing of macros in VBA. I have more than three years of paid economics tutoring experience at the undergraduate level.
7 Subjects: including prealgebra, algebra 1, economics, business
Nearby Cities With prealgebra Tutors
Allen, TX prealgebra Tutors
Balch Springs, TX prealgebra Tutors
Duncanville, TX prealgebra Tutors
Farmers Branch, TX prealgebra Tutors
Garland, TX prealgebra Tutors
Heath, TX prealgebra Tutors
Highland Park, TX prealgebra Tutors
Lancaster, TX prealgebra Tutors
Lucas, TX prealgebra Tutors
Mesquite, TX prealgebra Tutors
Murphy, TX prealgebra Tutors
Parker, TX prealgebra Tutors
Rowlett prealgebra Tutors
Sachse prealgebra Tutors
Wylie prealgebra Tutors
Patent US6697499 - Digital watermark inserting system and digital watermark characteristic parameter table generating method
The present application is a divisional of copending application Ser. No. 09/480,023 filed on Jan. 10, 2000.
1. Field of the Invention
The present invention relates to a digital watermark inserting system for inserting digital watermark information into an input image, a digital watermark characteristic parameter table generating
method, and a computer readable record medium on which a digital watermark characteristic parameter table generating program has been recorded.
2. Description of the Related Art
In recent years, digital watermarks are becoming attractive. However, there have been few studies that focus on attacks against digital watermarks. As a result, it is difficult to compare robustness
values against attacks in different digital watermark systems. As a related art reference, a framework of a robustness evaluation value calculating system for categorizing attacks that take place in
conventional image processing and so forth and calculating robustness evaluation values for individually categorized attacks has been disclosed by Ryoma Oami, Yoshihiro Miyamoto, and Mutsumi Ohta,
NEC C & C Media Research Laboratories in “Robustness Measure against attacks for digital watermarking and its application,” 1998 Image Media Processing Symposium (IMPS 98). The related art reference
describes a method for obtaining the optimum digital watermark strength.
In the related art reference, attacks against digital watermarks are largely categorized as (1) deterioration that takes place in an image processing or the like and (2) intentional forgery of
embedded information. Since attacks of the category (1) inevitably take place in conventional image processing, strong robustness is required against them. Attacks of the
category (2) are higher level attacks than those of the category (1). The attacks of the category (2) largely depend on the digital watermark inserting and detecting system. In the related art
reference, attacks of the category (1) are further categorized from a view point of a signal processing. Robustness evaluation values are calculated for individually categorized attacks. The
categorized attacks are for example coding loss in JPEG (LC), uniform and Gauss type noise (N), geometric transform such as scale and rotation (GT), pixel value conversion such as gray and binary
(PVC), and image processing such as sharpening and median filtering (IP).
In the system of the related art reference, a digital watermark is inserted into an image. Thus, a digital watermark inserted image is obtained. Thereafter, the digital watermark inserted image is
attacked in a predetermined manner and then the digital watermark is detected. By repeating the above procedures, varying a parameter for adjusting the attack strength, a detection ratio is obtained.
The detection ratio data is statistically processed. Thus, robustness evaluation values for individually categorized attacks are calculated. As the statistical process, weighted means method or
threshold value method is used, for example. In the evaluation value calculating system, (a) by properly setting a weighting function, evaluation values for applications having different attack
characteristics can be calculated; (b) robustness evaluation values can be compared among different systems regardless of the values of the attack strength x for really measuring detection ratios;
and (c) the accuracy of evaluation values can be improved progressively. According to the evaluation value calculating system, the evaluation values obtained (0 to 1) for watermark strength values 1 to 4 depend on the particular digital watermark system.
However, according to the digital watermark inserting system of the related art reference, it is difficult to properly set the strength of a digital watermark. The strength of the digital watermark
largely depends on the contents of the image. Even if the user designates the relation of the strength of digital watermarks, the deterioration of the image quality, and the robustness values against
attacks in a deterministic manner, the relation cannot be applied to all images. Thus, the optimum strength of digital watermarks cannot be obtained.
The inventor of the present invention has invented “digital watermark inserting system” that is currently being filed as Japanese Patent Application No. 10-150823 (hereinafter, this invention may be
referred to as second related art reference). This invention was made to embed (insert) copyright information and so forth into digital signals of audio data, image data, and so forth.
FIG. 1 is a block diagram showing the structure of the “digital watermark inserting system” as the second related art reference. In FIG. 1, a categorizing portion 103 calculates a feature amount of
an input image, obtains a category of the image with the calculated feature amount, and outputs the index indicating the category as a category index to a storing unit 2001. The storing unit 2001
selects a table corresponding to the category index that is received from the categorizing portion 103 and outputs an image quality deterioration ratio and a robustness evaluation value corresponding
to digital watermark strength that is received from a digital watermark strength calculating portion 100 to the digital watermark strength calculating portion 100.
The digital watermark strength calculating portion 100 outputs various digital watermark strength values to the storing unit 2001. The digital watermark strength calculating portion 100 decides the
optimum digital watermark strength based on the image quality deterioration ratio and the robustness evaluation value that are received from the storing unit 2001 and based on restriction information
of digital watermark strength that is input by the user and outputs the decided optimum digital watermark strength data to a digital watermark inserting portion 102.
The digital watermark inserting portion 102 converts embedding data into digital watermark information, inserts the digital watermark into the image with the optimum digital watermark strength
received from the digital watermark strength calculating portion 100, and outputs a digital watermark inserted image.
Next, the operation of the digital watermark inserting system shown in FIG. 1 will be described. First of all, several symbols used in the operation will be defined.
The number of categories of input images is denoted by K. K categories are distinguished with category index k (where k=1, . . . , K). The digital watermark strength with which a digital watermark is
inserted is denoted by s(m) (where m=1, . . . , M). The parameter used as the digital watermark strength depends on the digital watermark inserting algorithm for use. When the digital watermark
strength is successively varied, it is digitized into M different values and their values are denoted by s(m). When the category index is k and the digital watermark strength is s(m), the image
quality deterioration ratio and the robustness evaluation value against the attack are denoted by D(k,m) and V(k,m), respectively.
Next, with reference to FIG. 1, the operation of the digital watermark inserting system will be described. An input image is supplied to the categorizing portion 103. The categorizing portion 103
calculates a feature amount of the image, decides the category of the input image based on the obtained feature amount, and outputs a category index that represents the category. In reality, the
categorizing portion 103 stores feature amount values that represent boundaries of categories. The categorizing portion 103 compares the stored feature amount values with the calculated feature
amount value and categorizes the input image based on the compared result. The feature amount is, for example, an activity of the entire image (the activity is the mean value of the AC frequency components).
The category index that is output from the categorizing portion 103 is input to the storing unit 2001. The storing unit 2001 stores digital watermark feature tables for individual category indexes.
Each of the digital watermark feature tables represents the relation of the digital watermark strength values, the image quality deterioration ratios and the robustness evaluation values against
attacks. A digital watermark characteristic table for a category index k is shown in Table 1.
TABLE 1

Digital watermark   Image quality            Robustness evaluation
strength            deterioration amount     value against attack
------------------------------------------------------------------
s(1)                D(k, 1)                  V(k, 1)
s(2)                D(k, 2)                  V(k, 2)
...                 ...                      ...
s(M)                D(k, M)                  V(k, M)
In addition, when a digital watermark strength s(m) is input from the digital watermark strength calculating portion 100, the storing unit 2001 selects a digital watermark characteristic table
corresponding to the category index k that is received from the categorizing portion 103, and outputs the image quality deterioration amount D(k, m) and the robustness evaluation value V(k, m) to the
digital watermark strength calculating portion 100.
After the input image has been supplied to the system and the categorizing portion 103 has calculated the category index k, the digital watermark strength calculating portion 100 calculates the
optimum digital watermark strength based on digital watermark strength restriction information that is input by the user. Basically, the digital watermark strength that maximizes the following
objective function is defined as the optimum digital watermark strength.
Z(m) = (1 − a)(1 − D(k, m)) + aV(k, m)   (1)
where a satisfies the relation of 0≦a≦1. Z(m) is calculated for each digital watermark strength value. The digital watermark strength s(m) with the maximum value is calculated as the optimum digital
watermark strength.
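As a small illustration (not code from the patent; the array layout of the tables is an assumption), this selection step amounts to a single argmax loop:

// Choose the strength index m maximizing Z(m) = (1 - a)(1 - D[k][m]) + a * V[k][m].
// D and V are the stored characteristic tables; k is the image's category index.
static int optimumStrengthIndex(double[][] D, double[][] V, int k, double a) {
    int bestM = 0;
    double bestZ = Double.NEGATIVE_INFINITY;
    for (int m = 0; m < D[k].length; m++) {
        double z = (1 - a) * (1 - D[k][m]) + a * V[k][m];
        if (z > bestZ) {
            bestZ = z;
            bestM = m;
        }
    }
    return bestM; // index of the optimum strength s(m)
}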
The optimum digital watermark strength that is output from the digital watermark strength calculating portion 100 is input to the digital watermark inserting portion 102. The digital watermark
inserting portion 102 converts the input embedding data into digital watermark information and inserts the input embedding data into the image. The digital watermark strength used in inserting the
input embedding data into the image is the optimum digital watermark strength that is output from the digital watermark strength calculating portion 100. The resultant image is output as a digital
watermark inserted image.
According to the second related art reference, the digital watermark inserting algorithm is not limited as long as the user can designate the digital watermark strength or the like for the digital
watermark inserted into the image. For example, the watermarking algorithm disclosed in Japanese Patent Laid-Open Publication No. 9-191394 and periodical “IEEE Transactions on Image Processing,” Vol.
IP-6, pp. 1673-1687, No. 12, 1997 can be used.
In this algorithm, the entire image is processed by discrete cosine transform (DCT) method or discrete Fourier transform (DFT) method. The N largest transform coefficients are selected from the
obtained transform coefficients. Thereafter, digital watermark information is inserted. In reality, digital watermark information is inserted corresponding to one of the following formulas:

ν′ = ν + αx   (2)

ν′ = ν(1 + αx)   (3)

where x is a digital watermark signal; ν is a transform coefficient into which the watermark signal is embedded; α is the digital watermark strength; and ν′ is a digital watermark inserted transform coefficient. For the obtained digital watermark inserted transform coefficients, the inverse DCT or inverse DFT is performed. Thus, a digital watermark inserted image is generated and output. In this algorithm, the digital watermark strength is represented by the parameter α in formula (2) or formula (3).
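For illustration, here is a minimal sketch of the embedding step of formula (3); it is not the patent's own code, and it assumes the transform coefficients have already been computed and sorted by magnitude:

// Embed a watermark signal x into the N largest transform coefficients v
// using v' = v (1 + alpha * x), where alpha is the digital watermark strength.
static double[] embed(double[] coeffs, double[] watermark, double alpha) {
    double[] out = coeffs.clone();
    int n = Math.min(watermark.length, out.length);
    for (int i = 0; i < n; i++) {
        out[i] = coeffs[i] * (1 + alpha * watermark[i]);
    }
    return out; // the inverse DCT/DFT of these yields the watermarked image
}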
Next, a digital watermark characteristic table generating unit that generates a digital watermark characteristic table stored in the storing unit 2001 of the system shown in FIG. 1 will be described.
FIG. 2 is a block diagram showing the structure of a conventional digital watermark characteristic table generating unit. In FIG. 2, a digital watermark inserting portion 200 converts embedding data
into proper data, inserts digital watermark information with the input digital watermark strength into the input image, and outputs the resultant digital watermark inserted image to an attack
executing portion 201. The attack executing portion 201 attacks the digital watermark inserted image with a predetermined strength corresponding to an input attack parameter in a predetermined manner
and outputs the attacked image to a digital watermark detecting portion 202.
The digital watermark detecting portion 202 detects a digital watermark from the attacked image that is received from the attack executing portion 201 and outputs the detected result to a digital
watermark characteristic table generating portion 2201. An image quality deterioration amount calculating portion 203 calculates an image quality deterioration amount with both the digital watermark
inserted image that is received from the digital watermark inserting portion 200 and the input image and outputs the calculated image quality deterioration amount to the digital watermark
characteristic table generating portion 2201. A categorizing portion 204 categorizes the input image and outputs a category index corresponding to the categorized result to the digital watermark
characteristic table generating portion 2201.
The digital watermark characteristic table generating portion 2201 obtains a robustness evaluation value against the attack and an image quality deterioration ratio, based on the detected result that
is received from the digital watermark detecting portion 202, the digital watermark strength, the attack parameter, the image quality deterioration amount that is received from the image quality
deterioration amount calculating portion 203, and the category index that is received from the categorizing portion 204, and it outputs the relation of the robustness evaluation value, the image
quality deterioration ratio, and the digital watermark strength as a digital watermark characteristic table.
Next, the operation of the digital watermark characteristic table generating unit shown in FIG. 2 will be described. For easy understanding, several symbols necessary for explaining the operation of
the digital watermark characteristic table generating unit will be defined.
The number of input images is denoted by I. The I input images are distinguished by an index i (where i=1, . . . , I). The value of the attack parameter is denoted by x(j) (where j=1, . . . , J). The
attack parameter is a parameter for adjusting the attack strength. The category index k (where k=1, . . . , K), the digital watermark strength s(m) (where m=1, . . . , M), the image quality
deterioration ratio D(k, m), and the attack robustness evaluation value V(k, m) are defined as described above. A category index for an input image i is denoted by k(i). An image quality
deterioration amount for an input image i is denoted by d(i). When the category index is denoted by k, the digital watermark strength index is denoted by m, and the attack parameter index is denoted
by j, the detected result and the detection ratio are denoted by y(k, m, j) and r(k, m, j), respectively.
Next, with reference to FIG. 2, the operation of the digital watermark characteristic table generating unit will be described. An input image i is supplied to the digital watermark inserting portion
200. Input embedding data is converted into digital watermark information. With a parameter of input digital watermark strength s(m), the digital watermark is inserted into the image. The obtained
image is output as a digital watermark inserted image to the image quality deterioration amount calculating portion 203 and the attack executing portion 201.
The attack executing portion 201 attacks the digital watermark inserted image in a predetermined manner and outputs the attacked image to the digital watermark detecting portion 202. The attack
strength is adjusted by the input attack parameter x(j). When the digital watermark inserted image is attacked by a noise adding attack, the attack parameter is an amount of noise power, noise
amplitude, PSNR (Peak Signal to Noise Ratio), or the like. When the digital watermark inserted image is attacked by an enlarging attack or a shrinking attack, the attack parameter is the enlargement/shrinkage magnification or an equivalent amount.
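For illustration only, a noise-adding attack whose strength is controlled by a target PSNR (one possible reading of the attack parameter x(j)) can be sketched as follows; the peak value of 255 is an assumption.

```python
# Sketch of an attack executing step: Gaussian noise scaled so the attacked
# image has approximately the requested PSNR relative to its input.
import numpy as np

def noise_attack(image, target_psnr_db, peak=255.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # PSNR = 10 log10(peak^2 / MSE)  =>  noise variance = peak^2 / 10^(PSNR/10)
    noise_var = peak ** 2 / 10.0 ** (target_psnr_db / 10.0)
    noisy = image + rng.normal(0.0, np.sqrt(noise_var), image.shape)
    return np.clip(noisy, 0, peak)
```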
The attacked image that is output from the attack executing portion 201 is input to the digital watermark detecting portion 202. The digital watermark detecting portion 202 detects a digital
watermark from the attacked image. When the digital watermark detecting portion 202 has detected an embedded digital watermark, it outputs “1” as a detected result. When the digital watermark
detecting portion 202 has not detected an embedded digital watermark, it outputs “0” as a detected result. When the digital watermark detecting portion 202 has detected part of an embedded digital
watermark, it outputs a value between “0” and “1” (for example, “0.5”) as a detected result. The data that is output from the digital watermark detecting portion 202 is input to the digital watermark
characteristic table generating portion 2201.
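For illustration only, the 0-to-1 detected-result convention can be expressed as a bit-matching fraction; the detector below is a hypothetical stand-in for portion 202, not the reference's detector.

```python
def detected_result(extracted_bits, embedded_bits):
    # Fraction of embedded bits recovered: 1.0 = fully detected, 0.0 = not
    # detected, intermediate values (e.g. 0.5) = partially detected.
    matches = sum(a == b for a, b in zip(extracted_bits, embedded_bits))
    return matches / len(embedded_bits)

print(detected_result([1, 0, 1, 1], [1, 0, 0, 0]))   # 0.5
```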
Both the input image and the digital watermark inserted image that is output from the digital watermark inserting portion 200 are input to the image quality deterioration amount calculating portion
203. The image quality deterioration amount calculating portion 203 compares the input image with the digital watermark inserted image and calculates the image quality deterioration amount due to the
inserted digital watermark. As the image quality deterioration amount, a PSNR value of the digital watermark inserted image against the original image or a WSNR (Weighted Signal to Noise Ratio) value
in consideration of visual characteristics is used. Alternatively, the ratio of the deterioration to a JND (Just Noticeable Distortion) can be used; this ratio is derived by calculating JND values and then dividing the differences between the digital watermark inserted image and the original input image by those JND values. The calculated image quality deterioration amount is output to the digital
watermark characteristic table generating portion 2201.
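For illustration only, the PSNR variant of the deterioration amount can be computed as follows (standard definition; a peak value of 255 is assumed).

```python
# Sketch of the image quality deterioration amount as a PSNR value.
import numpy as np

def psnr(original, watermarked, peak=255.0):
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```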
In FIG. 2, the input image is also supplied to the categorizing portion 204. The operation of the categorizing portion 204 is the same as that of the categorizing portion 103 shown in FIG. 1. The
categorizing portion 204 calculates a feature amount of the input image and categorizes the input image based on the calculated feature amount. Thereafter, the categorizing portion 204 outputs a
category index that represents the category to the digital watermark characteristic table generating portion 2201.
The digital watermark characteristic table generating unit shown in FIG. 2 performs such a process for I input images i=1, . . . , I. For each input image i, the procedures described above are
performed with the M different digital watermark strength values s(m) (where m=1, . . . , M). For each digital watermark strength s(m), the procedures described above are performed with the J
different attack parameters x(j) (where j=1, . . . , J). The detected results y(k(i), m, j), the digital watermark strength s(m), the index m, the attack parameter x(j), the index j, the image
quality deterioration amount d(k(i), m), and the category index k(i) are supplied to the digital watermark characteristic table generating portion 2201. The digital watermark characteristic table
generating portion 2201 generates and outputs a digital watermark characteristic table describing the relation between these input factors.
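For illustration only, the I x M x J measurement loop can be sketched as below; all five helper routines are trivial stubs standing in for portions 200 to 204 of FIG. 2, not the reference's implementations.

```python
# Sketch of the loop that produces the rows fed to portion 2201.
import numpy as np

rng = np.random.default_rng(0)
images = [rng.random((8, 8)) for _ in range(3)]          # I = 3
strengths = [0.05, 0.10]                                 # s(m), M = 2
attack_params = [10.0, 20.0, 30.0]                       # x(j), J = 3

categorize = lambda img: int(img.mean() > 0.5)           # category index k(i)
insert = lambda img, s: img + s                          # stub for portion 200
attack = lambda img, x: img + rng.normal(0, x / 100, img.shape)
detect = lambda img: 1.0                                 # stub for portion 202
deteriorate = lambda a, b: float(np.mean((a - b) ** 2))  # d(i), stub for 203

records = []                                             # rows for portion 2201
for i, image in enumerate(images):
    k_i = categorize(image)
    for m, s in enumerate(strengths):
        marked = insert(image, s)
        d = deteriorate(image, marked)
        for j, x in enumerate(attack_params):
            y = detect(attack(marked, x))                # y(k(i), m, j)
            records.append((k_i, m, j, y, d))
```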
Next, the digital watermark characteristic table generating portion 2201(2303) will be described. FIG. 3 is a block diagram showing the structure of the digital watermark characteristic table
generating portion 2201(2303). A detected result totaling portion 300 totals the detected result of the digital watermark detecting portion 202 for each attack parameter, each digital watermark
strength, and each category index, calculates a detection ratio with the totaled result, and outputs the calculated detection ratio to a robustness evaluation value calculating portion 2301.
An image quality deterioration amount totaling portion 301 totals the image quality deterioration amount that is received from the image quality deterioration amount calculating portion 203 for
each category index and each digital watermark strength, calculates an image quality deterioration ratio with the totaled image quality deterioration amount, and outputs the calculated image quality
deterioration ratio to a data combining portion 2302.
Next, a robustness evaluation value calculating portion 2301 calculates an attack robustness evaluation value with the attack parameter and the detection ratio that is received from the detected
result totaling portion 300 and outputs the calculated attack robustness evaluation value to the data combining portion 2302. For each category index, the data combining portion 2302 generates a
table that describes the relation between the digital watermark strength, the image quality deterioration ratio, and the robustness evaluation value, and outputs the table as a digital watermark
characteristic table.
Next, the operation of the digital watermark characteristic table generating portion shown in FIG. 3 will be described. The digital watermark detected result y(k(i), m, j) is input to the detected
result totaling portion 300. The detected result totaling portion 300 has a storing means. The detected result totaling portion 300 totals the digital watermark detected result y(k(i), m, j) for each
category index k, each digital watermark strength index m, and each attack parameter index j and calculates a mean value r(k, m, j) as a detection ratio, and outputs the detection ratio r(k, m, j) to
the robustness evaluation value calculating portion 2301.
The robustness evaluation value calculating portion 2301 calculates an attack robustness evaluation value V(k, m) based on the detection ratio r(k, m, j) that is received from the detected result
totaling portion 300 and outputs the attack robustness evaluation value V(k, m) to the data combining portion 2302. The operation of the robustness evaluation value calculating portion 2301 will be
described later.
On the other hand, the image quality deterioration amount d(i) is input to the image quality deterioration amount totaling portion 301. The image quality deterioration amount totaling portion 301 has
a storing means. The image quality deterioration amount totaling portion 301 totals the image quality deterioration amount d(i) for each category index k and each digital watermark strength index m,
calculates the mean value D(k, m) as an image quality deterioration ratio, and outputs the calculated mean value D(k, m) to the data combining portion 2302.
The data combining portion 2302 combines the robustness evaluation value V(k, m) that is received from the robustness evaluation value calculating portion 2301, the image quality deterioration ratio
D(k, m) that is received from the image quality deterioration amount totaling portion 301, and the digital watermark strength s(m) and generates and outputs a digital watermark characteristic table
shown in FIG. 1 for each category index k. Next, the robustness evaluation value calculating method performed by the robustness evaluation value calculating portion 2301 shown in FIG. 3 will be described.
To calculate the robustness evaluation value, the variation of the detection ratio in the case that the attack parameter x is successively varied is considered. The robustness evaluation value V(k,
m) is given by the following formula:
V(k, m) = ∫_{−∞}^{+∞} w(x)r(k, m, x)dx   (4)
where k is a category index; m is a digital watermark strength index; x is an attack parameter; r(k, m, x) is a detection ratio that is a function of the attack parameter x when the digital watermark strength index is m; and w(x) is a weighting function.
The weighting function w(x) determines the degree of contribution of the detection ratio at each attack parameter x to the robustness evaluation value V(k, m). When the weighting function is properly set, the user's sensitivity to deterioration caused by an attack and information about attack frequency can be reflected in the evaluation value. In practice, the detection ratio is obtained for the discrete values x(j) rather than for an arbitrary attack parameter x. Thus, the robustness evaluation value V(k, m) is calculated by discretizing and approximating the formula (4).
Alternatively, the robustness evaluation value V(k, m) can be obtained corresponding to the following formula:
V(k, m) = (1/L)∫_{−∞}^{+∞} T(r(k, m, x), α)dx   (5)
where k is a category index; m is a digital watermark strength index; and L is a reference interval length of the attack parameter. T(x, α) is a thresholding function given by the following formula:
T(x, α) = 1 (x > α); T(x, α) = 0 (x ≤ α)   (6)
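For illustration only, discretized versions of the formulas (4) to (6) can be sketched as follows; the sample grid, weighting function, and detection ratio curve in the usage lines are made up.

```python
# Sketch: trapezoidal discretizations of formulas (4) and (5)/(6).
import numpy as np

def robustness_weighted(xs, ratios, w):
    """Formula (4), discretized: integral of w(x) r(x) dx over the samples."""
    vals = w(xs) * ratios
    return float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(xs)))

def robustness_threshold(xs, ratios, alpha, L):
    """Formulas (5)-(6): length of the region where r(x) > alpha, over L."""
    above = (ratios > alpha).astype(float)
    return float(np.sum(0.5 * (above[:-1] + above[1:]) * np.diff(xs))) / L

xs = np.linspace(0.0, 10.0, 11)              # sampled attack parameters x(j)
ratios = 1.0 / (1.0 + np.exp(xs - 5.0))      # a made-up detection ratio curve
print(robustness_weighted(xs, ratios, lambda x: np.exp(-0.1 * x)))
print(robustness_threshold(xs, ratios, alpha=0.5, L=10.0))
```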
However, the second related art reference has the following problems.
As a first problem, it is difficult for the user to customize the attack robustness evaluation value calculating method. Although it is preferable that the user can freely designate a weighting function and a threshold value for calculating a robustness evaluation value, the digital watermark inserting system of the second related art reference uses a pre-calculated robustness evaluation value as an attack robustness evaluation value. Consequently, it is difficult to tune the robustness evaluation value calculating method.
As a second problem, when a robustness evaluation value for a combination of a plurality of attacks is used for calculating the optimum digital watermark strength, the data amount to be stored
adversely increases. Thus, it is necessary to reduce the data amount. In other words, according to the second related art reference, since all robustness evaluation values corresponding to combinations of a plurality of attacks must be stored, a huge amount of storage capacity is required; it is difficult to provide such a large storage capacity.
An object of the present invention is to provide a digital watermark inserting system that allows the optimum digital watermark strength to be automatically calculated corresponding to a robustness
evaluation value against an attack and an image quality deterioration ratio.
A first aspect of the present invention is a digital watermark inserting system for inserting digital watermark information into an input image, comprising a means for calculating a feature amount of
the input image, categorizing the input image, and outputting a category index as the categorized result, a digital watermark characteristic calculating means for calculating an image quality deterioration ratio and a robustness evaluation value corresponding to a digital watermark strength based on a robustness evaluation value calculation parameter and the category index, the robustness evaluation value
calculation parameter being input by the user, a digital watermark strength calculating means for outputting the digital watermark strength to the digital watermark characteristic calculating means,
deciding the optimum digital watermark strength based on digital watermark strength restriction information that is input by the user, and outputting the optimum digital watermark strength, and a
digital watermark inserting means for converting input embedding data into digital watermark information, inserting the digital watermark information into the input image with an input parameter of
the optimum digital watermark strength, and outputting the resultant image as a digital watermark inserted image.
A second aspect of the present invention is a digital watermark characteristic parameter table generating method for inserting digital watermark information into an input image, comprising the steps
of calculating a feature amount of the input image, categorizing the input image with the calculated result, and outputting a category index as the categorized result, converting input embedding
information into the digital watermark information, inserting the digital watermark information into the input image with input digital watermark strength, and generating a digital watermark inserted
image as the inserted data, adjusting the strength of an attack with an input attack parameter, attacking the digital watermark inserted image with the adjusted attack strength, generating a
resultant attacked image, detecting a digital watermark from the attacked image, outputting the detected result, comparing the input image with the digital watermark inserted image, calculating an
image quality deterioration amount caused by the inserted digital watermark, and outputting the calculated image quality deterioration amount, and receiving the detected result of the
digital watermark, the digital watermark strength, the attack parameter, the image quality deterioration amount, and the category index, totaling the detected results for each of combinations of the
category index, the digital watermark strength, and the attack parameter, obtaining a detection ratio as the totaled result, totaling the image quality deterioration amount for each of combinations
of the category index and the digital watermark strength, obtaining an image quality deterioration ratio as the totaled result, and calculating a digital watermark characteristic parameter table by
using the detection ratio and the image quality deterioration ratio, and outputting the digital watermark characteristic parameter table.
A third aspect of the present invention is a record medium from which a computer reads a program that causes the computer to drive a digital watermark inserting system for inserting digital watermark
information into an input image, the system comprising a means for calculating a feature amount of the input image, categorizing the input image, and outputting a category index as the categorized
result, a digital watermark characteristic calculating means for calculating an image quality deterioration ratio and a robustness evaluation value corresponding to a digital watermark strength based on a
robustness evaluation value calculation parameter and the category index, the robustness evaluation value calculation parameter being input by the user, a digital watermark strength calculating means
for outputting the digital watermark strength to the digital watermark characteristic calculating means, deciding the optimum digital watermark strength based on digital watermark strength restriction
information that is input by the user, and outputting the optimum digital watermark strength, and a digital watermark inserting means for converting input embedding data into digital watermark
information, inserting the digital watermark information into the input image with an input parameter of the optimum digital watermark strength, and outputting the resultant image as a digital
watermark inserted image.
A fourth aspect of the present invention is a record medium from which a computer reads a program that causes the computer to perform a method for inserting digital watermark information into an
input image, the method comprising the steps of (a) calculating a feature amount of the input image, categorizing the input image, and outputting a category index as the categorized result, (b) calculating an image quality deterioration ratio and a robustness evaluation value corresponding to a digital watermark strength based on a robustness evaluation value calculation parameter and the category index, the robustness evaluation value calculation parameter being input by the user, (c) outputting the digital watermark strength to step (b), deciding the optimum digital watermark strength based on digital watermark strength restriction information that is input by the user, and outputting the optimum digital watermark strength, and (d) converting input embedding data into digital watermark information, inserting the digital watermark information into the input image with an input parameter of the optimum digital watermark strength, and outputting the resultant image as a digital watermark inserted image.
Next, with reference to the accompanying drawings, the present invention will be described. A digital watermark inserting system according to the present invention comprises a means (103, FIG. 4) for
calculating a feature amount of the input image, categorizing the input image and outputting a category index as the categorized result, a digital watermark characteristic calculating means (104,
FIG. 4) for calculating an image quality deterioration ratio and a robustness evaluation value corresponding to a digital watermark strength based on a robustness evaluation value calculation parameter and
the category index, the robustness evaluation value calculation parameter being input by the user, a digital watermark strength calculating means (100, FIG. 4) for outputting the digital watermark
strength to the digital watermark characteristic calculating means (104, FIG. 4), deciding the optimum digital watermark strength based on digital watermark strength restriction information that is
input by the user, and outputting the optimum digital watermark strength, and a digital watermark inserting means (102, FIG. 4) for converting input embedding data into digital watermark information,
inserting the digital watermark information into the input image with an input parameter of the optimum digital watermark strength, and outputting the resultant image as a digital watermark inserted image.
In the digital watermark inserting system according to the present invention, the digital watermark characteristic calculating means (104, FIG. 4) has a first storing means (101, FIG. 5) for storing
a digital watermark characteristic parameter table for each of various category indexes, the digital watermark characteristic parameter table describing the relation of a digital watermark strength, an image quality deterioration ratio, and a detection ratio characteristic parameter, the detection ratio characteristic parameter describing a detection ratio curve/curved surface that approximates the variation of the detection ratio of the digital watermark information against an attack parameter, selecting a digital watermark characteristic parameter table corresponding to the category index, and outputting the image quality deterioration ratio and the detection ratio characteristic parameter corresponding to the digital watermark strength that is output from the digital watermark strength calculating means (100, FIG. 4), and a robustness evaluation value calculating means (105, FIG. 5) for obtaining the detection ratio curve/curved surface with the detection ratio characteristic parameter, performing a statistical process based on the robustness evaluation value calculation parameter that is input by the user, calculating the robustness evaluation value, and outputting the robustness evaluation value. In the
digital watermark inserting system according to the present invention, the digital watermark characteristic calculating means (104, FIG. 4) has a second storing means (171, FIG. 6) for storing a
digital watermark characteristic parameter table describing the relation of a category index, an image quality deterioration ratio curve parameter describing an image quality deterioration ratio
curve that approximates the variation of the image quality deterioration ratio against a digital watermark strength, and a detection ratio characteristic general parameter that describes a detection
ratio characteristic parameter curve approximating the variation of a detection ratio characteristic parameter against the digital watermark strength and outputting an image quality deterioration
ratio curve parameter and a detection ratio characteristic general parameter corresponding to the category index, an image quality deterioration ratio calculating means (172, FIG. 6) for obtaining an
image quality deterioration ratio curve with the image quality deterioration ratio curve parameter, calculating the image quality deterioration ratio corresponding to the digital watermark strength
that is output from the digital watermark strength calculating portion, and outputting the calculated image quality deterioration ratio, and a robustness evaluation value calculating means (173, FIG.
6) for obtaining the detection ratio characteristic parameter curve with the detection ratio characteristic general parameter, calculating a detection ratio characteristic parameter corresponding to
the digital watermark strength that is output from the digital watermark strength calculating portion, obtaining the detection ratio curve/curved surface with the calculated detection ratio
characteristic parameter, performing a statistical process based on the robustness evaluation value calculation parameter that is input by the user, and outputting the calculated robustness evaluation
value as the processed result.
In the digital watermark inserting system according to the present invention, the robustness evaluation value calculating means (105, FIG. 5 or 173, FIG. 6) obtains an inner product of the detection
ratio curve and a weighting function so as to calculate the robustness evaluation value.
In the digital watermark inserting system according to the present invention, the robustness evaluation value calculating means (105, FIG. 5 or 173, FIG. 6) obtains a region of an attack parameter of
which a detection ratio exceeds a predetermined threshold value with the detection ratio curve and calculates the robustness evaluation value based on the length of the region.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic parameter is a detection ratio curve parameter that represents the detection ratio
curve for a single attack.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic parameter is composed of the detection ratio curve parameter for a single attack and
an attack correlation curved surface parameter that is a parameter that describes an attack correlation curved surface approximating an attack correlation value defined based on the ratio of the
product of detection ratios of single attacks and a detection ratio for a complex attack, and the robustness evaluation value calculating means (105, FIG. 5) obtains the detection ratio curve for a
single attack composing a complex attack with the detection ratio curve parameter for the single attack, obtains an attack correlation curved surface with the attack correlation curved surface
parameter, obtains the detection ratio curved surface for the complex attack based on the product of the detection ratio curve for the single attack and the attack correlation curved surface, and
calculates the robustness evaluation value.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic general parameter is a detection ratio curve general parameter that represents a curve approximating the variation of the detection ratio curve parameter against a digital watermark strength for a single attack. In the digital watermark inserting system according to the present invention, the
detection ratio characteristic general parameter is composed of a detection ratio curve general parameter for a single attack and an attack correlation curved surface general parameter that
represents a curve approximating the variation of an attack correlation curve parameter against the digital watermark strength. The robustness evaluation value calculating means (173, FIG. 6) obtains
the detection ratio curve for a single attack composing a complex attack with a detection ratio curve general parameter for the single attack, obtains an attack correlation curved surface with the
attack correlation curved surface general parameter, obtains the detection ratio curved surface for the complex attack based on the product of the detection ratio curve for the single attack and the
attack correlation curved surface, and calculates the robustness evaluation value.
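For illustration only, the product construction described above (single-attack detection ratio curves multiplied by an attack correlation surface) can be sketched as follows; all three functions are invented examples, not parameters from the invention.

```python
# Sketch: detection ratio surface for a two-process complex attack as the
# product of the single-attack curves r1, r2 and the correlation surface g.
import numpy as np

def complex_detection_ratio(r1, r2, g, x1, x2):
    """r(x1, x2) ~= r1(x1) * r2(x2) * g(x1, x2)."""
    return r1(x1) * r2(x2) * g(x1, x2)

r1 = lambda x: 1.0 / (1.0 + np.exp(2.0 * (x - 3.0)))   # single attack 1
r2 = lambda x: 1.0 / (1.0 + np.exp(1.5 * (x - 2.0)))   # single attack 2
g = lambda x1, x2: 1.0 + 0.05 * x1 * x2                # correlation surface
print(complex_detection_ratio(r1, r2, g, 1.0, 1.0))
```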
In the digital watermark inserting system according to the present invention, the attack correlation curved surface and the weighting function for a complex attack are each a linear sum of functions that are separable with respect to the attack parameter of each attack.
In the digital watermark inserting system according to the present invention, the restriction information of the digital watermark strength is an allowable limit value of the image quality
deterioration ratio. The digital watermark strength calculating means decides the optimum digital watermark strength within the allowable limit value of the image quality deterioration ratio and outputs
the decided optimum digital watermark strength.
In the digital watermark inserting system according to the present invention, the restriction information of the digital watermark strength is a limit value of a safety index against an attack. The
digital watermark strength calculating means decides the optimum digital watermark strength in a range in which the robustness evaluation value against the attack exceeds the limit value of the
safety index and outputs the decided optimum digital watermark strength.
In the digital watermark inserting system according to the present invention, the restriction information of the digital watermark is a weighting index that defines the balance of the image quality
deterioration amount and the safety index. The digital watermark strength calculating means decides the ratio of the contribution of the image quality deterioration amount and the safety index for
deciding the optimum digital watermark strength with the weighting index.
In the digital watermark inserting system according to the present invention, the digital watermark characteristic calculating means (131, FIG. 7) has a digital watermark characteristic parameter table
generating means (132, FIG. 7) for generating the digital watermark characteristic parameter table that is input to the digital watermark characteristic calculating means (131, FIG. 7).
In the digital watermark inserting system according to the present invention, the digital watermark characteristic parameter table generating means (132, FIG. 7) has a digital watermark inserting
means (200, FIG. 12) for converting input embedding information into digital watermark information, inserting the digital watermark information into the input image with the input digital watermark
strength, and generating the digital watermark inserted image, an attack image generating means (201, FIG. 12) for adjusting the strength of an attack with an input attack parameter against the
digital watermark inserted image, and generating an attacked image, a digital watermark detecting means (202, FIG. 12) for detecting a digital watermark from the attacked image and outputting the
detected result, an image quality deterioration amount calculating means (203, FIG. 12) for comparing the input image with the digital watermark inserted image, calculating an image quality
deterioration amount caused by the inserted digital watermark with the compared result, and outputting the calculated image quality deterioration amount, a categorizing means (204, FIG. 12) for
calculating a feature amount of the input image, categorizing the input image with the calculated feature amount, and outputting a category index corresponding to the categorized result, and a
digital watermark characteristic parameter table calculating means (205, FIG. 12) for receiving the detected result of the digital watermark, the digital watermark strength, the attack parameter, the
image quality deterioration amount, and the category index, totaling the detected results of each of combinations of the category index, the digital watermark strength, and the attack parameter,
obtaining a detection ratio as the totaled result, totaling an image quality deterioration amount of each of combinations of the category index and the digital watermark strength, obtaining an image
quality deterioration ratio as the totaled result, calculating a digital watermark characteristic parameter table using the detection ratio and the image quality deterioration ratio, and outputting
the calculated digital watermark characteristic parameter table.
In the digital watermark inserting system according to the present invention, the digital watermark characteristic parameter table calculating means has a detection ratio calculating means (300, FIG.
13) for totaling a detected result of the digital watermark information for each of the attack parameter, the digital watermark strength, and the category index, calculating detection ratio data with
the totaled result, and outputting the calculated detection ratio data, an image quality deterioration ratio calculating means (301, FIG. 13) for totaling an image quality deterioration amount for
each of the category index and the digital watermark strength and outputting the resultant statistic amount as an image quality deterioration ratio, a digital watermark characteristic extracting
means (302, FIG. 13) for calculating detection ratio descriptive information describing the variation of the detection ratio data against the digital watermark strength, the attack parameter, and the
category index and image quality deterioration ratio descriptive information describing the variation of the image quality deterioration ratio and outputting the detection ratio descriptive
information and the image quality deterioration ratio descriptive information, and a data combining means (303, FIG. 13) for combining the digital watermark strength, the category index, the image
quality deterioration ratio descriptive information, and the detection ratio descriptive information, generating a digital watermark characteristic parameter table as the combined result, and
outputting the generated digital watermark characteristic parameter table.
In the digital watermark inserting system according to the present invention, the digital watermark characteristic extracting means has a detection ratio characteristic extracting means (320, FIG.
14) for approximating a function representing the variation of the detection ratio data against the attack parameter for each of the category index and the digital watermark strength with a curve/
curved surface, calculating a detection ratio characteristic parameter describing the curve/curved surface, and outputting the calculated detection ratio characteristic parameter as the detection
ratio descriptive information. The image quality deterioration ratio is output as the image quality deterioration ratio descriptive information.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic parameter calculated by the detection ratio characteristic extracting means (320, FIG.
14) is a detection ratio curve parameter for a single attack.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic parameter calculated by the detection ratio characteristic extracting means (320, FIG.
14) is composed of a detection ratio curve parameter for a single attack and an attack correlation curved surface parameter describing the correlation of single attacks.
In the digital watermark inserting system according to the present invention, the digital watermark characteristic extracting means (302, FIG. 13) has a detection ratio characteristic calculating
means (340, FIG. 15) for approximating a function that represents the variation of the detection ratio data against the attack parameter for each of the category index and the digital watermark
strength with a curve/curved surface, calculating a detection ratio characteristic parameter that represents the curve/curved surface, approximating the variation of the detection ratio
characteristic parameter against the digital watermark strength with a curve, obtaining a detection ratio characteristic general parameter that describes the curve, and outputting the detection ratio
characteristic general parameter as the detection ratio descriptive information, and an image quality deterioration ratio characteristic extracting means (341, FIG. 15) for approximating the
variation of the image quality deterioration ratio against the digital watermark strength with a curve, calculating an image quality deterioration ratio curve parameter that describes the curve, and
outputting the image quality deterioration ratio curve parameter as the image quality deterioration ratio descriptive information.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic parameter and the detection ratio characteristic general parameter
calculated by the detection ratio characteristic calculating means (340, FIG. 15) are a detection ratio curve parameter for a single attack and a detection ratio curve general parameter for a single
attack, respectively.
In the digital watermark inserting system according to the present invention, the detection ratio characteristic parameter calculated by the detection ratio characteristic calculating means (340,
FIG. 15) is composed of a detection ratio curve parameter for a single attack and an attack correlation curved surface parameter that describes the correlation of single attacks. The detection ratio
characteristic general parameter calculated by the detection ratio characteristic calculating means (340, FIG. 15) is composed of a detection ratio curve general parameter for a single attack and an attack correlation curved surface general parameter.
These and other objects, features and advantages of the present invention will become more apparent in light of the following detailed description of a best mode embodiment thereof as illustrated in
the accompanying drawings.
FIG. 1 is a block diagram showing the structure of a conventional digital watermark inserting system;
FIG. 2 is a block diagram showing the structure of a conventional digital watermark characteristic table generating unit;
FIG. 3 is a block diagram showing the structure of a digital watermark characteristic table generating portion 2201;
FIG. 4 is a block diagram showing a first example of the structure of the system according to the present invention;
FIG. 5 is a block diagram showing a first example of the structure of a digital watermark characteristic calculating portion 104 according to the present invention;
FIG. 6 is a block diagram showing a second example of the structure of the digital watermark characteristic calculating portion 104 according to the present invention;
FIG. 7 is a block diagram showing the structure of a second embodiment of the present invention;
FIG. 8 is a block diagram showing the structure of a third embodiment of the present invention;
FIGS. 9A to 9D are graphs showing examples of a detection ratio approximating method;
FIGS. 10A to 10D are graphs showing examples of a digitizing process in a robustness evaluation value calculating method according to the present invention;
FIGS. 11A to 11D are graphs showing examples of an approximating method for an image quality deterioration ratio and a detection ratio curve parameter according to the present invention;
FIG. 12 is a block diagram showing the structure of a digital watermark characteristic parameter table generating unit 132 shown in FIG. 7 according to the present invention;
FIG. 13 is a block diagram showing the structure of a digital watermark characteristic parameter table calculating portion 205 shown in FIG. 12 according to the present invention;
FIG. 14 is a schematic diagram showing a first example of the structure of a digital watermark characteristic extracting portion 302 shown in FIG. 13 according to the present invention;
FIG. 15 is a schematic diagram showing a second example of the structure of the digital watermark characteristic extracting portion 302 shown in FIG. 13 according to the present invention;
FIGS. 16A to 16D are graphs showing real examples of a detection ratio curve calculated according to the present invention;
FIGS. 17A to 17C are graphs showing the relation of a detection ratio curve and a parameter according to the present invention;
FIGS. 18A to 18D are graphs showing real examples of a detection ratio curve calculated according to the present invention;
FIGS. 19A and 19B are graphs showing real examples of an image quality deterioration curve calculated according to the present invention;
FIGS. 20A and 20B are graphs showing real examples of an approximation curve of a detection ratio curve parameter calculated against a detection ratio curve parameter of the graph shown in FIG. 16B;
FIGS. 21A and 21B are graphs showing real examples of an approximation curve of a detection ratio curve parameter calculated against a detection ratio curve parameter of the graphs shown in FIG. 16C;
FIGS. 22A to 22C are graphs showing real examples of an approximation curve of a detection ratio curve parameter calculated against a detection ratio curve parameter of the graph shown in FIG. 16D;
FIG. 23 is a graph showing a real example of an attack correlation value calculated according to the present invention;
FIGS. 24A and 24B are graphs showing real examples of an attack correlation value and an attack correlation curved surface calculated according to the present invention; and
FIGS. 25A to 25C are graphs showing examples of an approximation curve of an attack correlation curved surface parameter calculated against an attack correlation curved surface parameter of the graph
shown in FIG. 24B.
Next, with reference to the accompanying drawings, embodiments of the present invention will be described.
[First Embodiment]
FIG. 4 is a block diagram showing the structure of a digital watermark inserting system according to a first embodiment of the present invention. The digital watermark inserting system according to
the first embodiment comprises a categorizing portion 103, a digital watermark characteristic calculating portion 104, a digital watermark strength calculating portion 100, and a digital watermark
inserting portion 102. The digital watermark characteristic calculating portion 104 uses a robustness evaluation value calculation parameter. The digital watermark strength calculating portion 100
calculates digital watermark strength with the digital watermark strength restriction information.
The categorizing portion 103 calculates a feature amount of an input image, categorizes the image based on the obtained feature amount, and outputs the categorized result as a category index to the
digital watermark characteristic calculating portion 104. The digital watermark characteristic calculating portion 104 calculates an image quality deterioration ratio and a robustness evaluation
value for the digital watermark strength that is output from the digital watermark strength calculating portion 100 based on the category index that is output from the categorizing portion 103 and a
robustness evaluation value calculation parameter that is input by the user and outputs the calculated image quality deterioration ratio and robustness evaluation value to the digital watermark
strength calculating portion 100.
In addition, the digital watermark strength calculating portion 100 outputs various values of the digital watermark strength based on the digital watermark strength restriction information to the
digital watermark characteristic calculating portion 104, decides the optimum digital watermark strength based on the image quality deterioration ratio and robustness evaluation value (against the
digital watermark strength) received from the digital watermark characteristic calculating portion 104 and based on the digital watermark strength restriction information that is input by the user,
and outputs the decided optimum digital watermark strength to a digital watermark inserting portion 102.
The digital watermark inserting portion 102 converts embedding data into a digital watermark, inserts the digital watermark into the image with the optimum digital watermark strength that is received
from the digital watermark strength calculating portion 100, and outputs the resultant image as a digital watermark inserted image. Next, the operation of the digital watermark inserting system shown
in FIG. 4 will be described.
An input image is supplied to the categorizing portion 103. The operation of the categorizing portion 103 is the same as that of the digital watermark inserting system shown in FIG. 1. The
categorizing portion 103 categorizes the input image and outputs the categorized result as a category index. The feature amount used in the categorizing portion 103 is for example an activity of an
image, the mean value of JND, the number of colors used, entropy, or the like. Alternatively, the category may be used to distinguish image types such as medical images, CG, animation, and so
forth. With the feature amount of the image, the image type may be automatically predicted. Moreover, the user may explicitly designate the category of an input image.
The category index that is output from the categorizing portion 103 is input to the digital watermark characteristic calculating portion 104. The digital watermark characteristic calculating portion
104 calculates an image quality deterioration ratio and a robustness evaluation value corresponding to the digital watermark strength that is output from the digital watermark strength calculating
portion 100 with the image quality deterioration ratio descriptive information and the detection ratio descriptive information stored in a storing unit shown in FIG. 5 (the storing unit will be
described later) and outputs the calculated image quality deterioration ratio and robustness evaluation value to the digital watermark strength calculating portion 100.
The image quality deterioration ratio descriptive information may be an image quality deterioration ratio or an image quality deterioration ratio curve parameter that is a parameter of a curve that
approximates the variation of an image quality deterioration ratio against a digital watermark strength (hereinafter, the curve is referred to as image quality deterioration ratio curve).
On the other hand, the detection ratio descriptive information may be a parameter that describes a curve/curved surface that approximates the variation of a detection ratio against an attack
parameter (hereinafter, the curve/curved surface is referred to as detection ratio curve/curved surface) or a parameter of a curve that approximates the variation of a detection ratio characteristic
parameter against a digital watermark strength (hereinafter, this curve and this parameter are referred to as detection ratio characteristic parameter curve and detection ratio characteristic general
parameter, respectively).
In the case of a single attack composed of a single process, the detection ratio characteristic parameter is a detection ratio curve parameter that is a curve parameter that approximates the
variation of a detection ratio against an attack parameter. In the case of a complex attack that is a combination of a plurality of processes, the detection ratio characteristic parameter is composed
of a detection ratio curve parameter against a single attack and an attack correlation curved surface parameter that approximates the correlation values of single attacks.
In the case of a single attack, the detection ratio characteristic general parameter is a detection ratio curve general parameter that is a curve parameter that approximates the variation of a
detection ratio curve parameter against a digital watermark strength. In the case of a complex attack, the detection ratio characteristic general parameter is composed of a detection ratio curve general parameter against a single attack and an attack correlation curved surface general parameter that is a curve parameter approximating the variation of an attack correlation curved surface parameter against a digital watermark strength.
The structure and operation of the digital watermark characteristic calculating portion 104 depend on whether the image quality deterioration ratio descriptive information is an image quality
deterioration ratio or an image quality deterioration ratio curve parameter. In addition, the structure and operation of the digital watermark characteristic calculation portion 104 depend on whether
the detection ratio descriptive information is a detection ratio characteristic parameter or a detection ratio characteristic general parameter. Moreover, the structure and operation of the digital
watermark characteristic calculation portion 104 depend on whether the considered attack is a single attack or a complex attack. The structure and operation of the digital watermark characteristic
calculating portion 104 in these cases will be described later.
A robustness evaluation value that is output from the digital watermark characteristic calculating portion 104 may be a robustness evaluation value for one attack or a statistic obtained by performing a statistical process, such as a weighted mean, on robustness evaluation values calculated for various single/complex attacks. When a mean value is calculated, weights for individual attacks may be input as a robustness evaluation value calculation parameter to the digital watermark characteristic calculating portion 104. The user may vary the weights.
The operation of the digital watermark strength calculating portion 100 is the same as that in the digital watermark inserting system shown in FIG. 1. An input image is supplied to the system. After the categorizing portion 103 calculates a category index of the input image, the digital watermark strength calculating portion 100 calculates the digital watermark strength that maximizes the value of the formula (1) as the optimum digital watermark strength.
The optimum digital watermark strength that is output from the digital watermark strength calculating portion 100 is input to the digital watermark inserting portion 102. The operation of the digital
watermark inserting portion 102 is the same as that of the conventional system. The digital watermark inserting portion 102 converts input embedding data into a digital watermark, inserts the digital
watermark into the input image, and outputs the resultant image as a digital watermark inserted image.
In the above-described digital watermark inserting system, an image quality deterioration limit value D0 may be changed as digital watermark strength restriction information. When the user does not
designate the limit value D0, a predetermined value is used as a default value. When the user designates the limit value D0, it is used. Thus, the user can adjust image quality deterioration caused
by an inserted digital watermark.
In the above-described digital watermark inserting system, a safety limit value V0 against an attack may be changed as digital watermark strength restriction information. When the user does not
designate the limit value V0, a predetermined value is used as a default value. When the user designates the limit value V0, it is used. Thus, the user can adjust the robustness of a digital
watermark against an attack.
In the above-described digital watermark inserting system, the parameter “a” of the formula (1) that is a weighting index that allows an image quality deterioration amount of an objective function
and a safety index to be balanced can be changed as digital watermark strength restriction information. When the user does not designate the parameter “a”, a predetermined value is used as a default
value. When the user designates the parameter “a”, it is used. Thus, the user can selectively emphasize the deterioration of the image quality or the robustness against an attack.
In addition, the image quality deterioration permission limit value D0, the attack safety index limit value V0, and the weighting index “a” for emphasizing either the image quality deterioration
amount or the safety index may be changed as digital watermark strength restriction information. The user can designate these values. When the user does not designate these values, predetermined
values are used as default values. Thus, the user can freely adjust an image quality deterioration permission limit value, an attack safety index limit value, and the balance between image quality
deterioration and an attack.
Next, with reference to FIG. 5, the structure and operation of the digital watermark characteristic calculating portion 104 in the case that the image quality deterioration ratio descriptive
information is an image quality deterioration ratio and the detection ratio descriptive information is a detection ratio characteristic parameter will be described.
FIG. 5 is a block diagram showing the structure of the digital watermark characteristic calculating portion 104. A storing unit 101 selects a digital watermark characteristic parameter table
corresponding to a category index that is received from the categorizing portion 103 shown in FIG. 4 and outputs an image quality deterioration ratio corresponding to a digital watermark strength
that is received from the digital watermark strength calculating portion 100 shown in FIG. 4 to the digital watermark strength calculating portion 100. In addition, the storing unit 101 outputs a
detection ratio characteristic parameter to a robustness evaluation value calculating portion 105. The robustness evaluation value calculating portion 105 obtains a detection ratio curve/curved
surface that approximates the relation of a detection ratio and an attack parameter with the detection ratio characteristic parameter that is received from the storing unit 101, calculates a
robustness evaluation value based on a robustness evaluation value calculation parameter that is input by the user, and outputs the calculated robustness evaluation value to the digital watermark
strength calculating portion 100 shown in FIG. 4.
Next, the operation of the digital watermark characteristic calculating portion shown in FIG. 5 will be described. First of all, the operation of the digital watermark characteristic calculating
portion in the case of a single attack will be described. In this case, the detection ratio characteristic parameter is a detection ratio curve parameter.
A category index that is output from the categorizing portion 103 shown in FIG. 4 is input to the storing unit 101. The storing unit 101 stores digital watermark characteristic parameter tables for
individual category indexes. Each of the digital watermark characteristic parameter tables describes the relation between a digital watermark strength, an image quality deterioration ratio, and a
detection ratio curve parameter. Table 2 shows a digital watermark characteristic parameter table for a category index k.
TABLE 2
Digital watermark strength | Image quality deterioration ratio | Detection ratio curve parameters
s(1) | D(k, 1) | c1(k, 1), c2(k, 1), . . .
s(2) | D(k, 2) | c1(k, 2), c2(k, 2), . . .
. . . | . . . | . . .
s(M) | D(k, M) | c1(k, M), c2(k, M), . . .
In this example, when a category index and a digital watermark strength index are denoted by k and m, respectively, a detection ratio curve parameter is denoted by c1(k, m), c2(k, m), . . .
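For illustration only, Table 2 can be held as a nested lookup keyed by the category index k and the strength index m; all numeric values below are made up.

```python
# Hypothetical Table 2 contents for category index k = 0; each row maps a
# strength index m to (D(k, m), (c1(k, m), c2(k, m))).
table = {
    0: {
        0: (0.01, (1.2, -0.8)),   # m = 0
        1: (0.03, (0.9, -0.6)),   # m = 1
    },
}
D_km, (c1, c2) = table[0][1]      # look up category k = 0, strength m = 1
```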
For example, in the case where the detection ratio varies with the attack parameter as shown in FIG. 9A (where x=0 corresponds to no attack), it is approximated with a logistic curve expressed by the following formula as shown in FIG. 9B:
r(x) = 1/(1 + exp(c1(x + c2)))   (7)
Alternatively, it is approximated with a graph of broken lines expressed by the following formula as shown in FIG. 9C:
r(x) = (x − c2)/(c1 − c2)   (8)
Alternatively, it is approximated with a graph of broken lines expressed by the following formula as shown in FIG. 9D:
r(x) = −((1 − c2)/c1)x + 1   (0 ≤ x < c1)
r(x) = −(c2/(c3 − c1))(x − c3)   (c1 ≤ x ≤ c3)   (9)
In the formula (7), c1 and c2 are detection ratio curve parameters. In the formula (8), c1 and c2 are detection ratio curve parameters. In the formula (9), c1, c2, and c3 are detection ratio curve parameters.
The quantities used as parameters are not limited as long as they have the same degree of freedom. For example, instead of c1 and c2 of the formula (8), the slope of a straight line and the value of
an intercept may be used as detection ratio curve parameters. In addition, other approximation curves may be used. For example, a fractional function expressed by the following formula may be used:
r(x) = 1/(1 + c1x + c2x²)   (10)
Alternatively, an exponential function expressed by the following formula may be used:
r(x) = exp(−c1(x − c2))   (11)
In the function expressed by the formula (10), c1 and c2 are detection ratio curve parameters. In the function expressed by the formula (11), c1 and c2 are detection ratio curve parameters. One of
these curves can be properly selected for each attack.
In the example, the case where the state of x=0 represents that there is no attack was described. Alternatively, the above-described curves can also be applied in other cases by shifting or inverting the curve.
The storing unit 101 has a digital watermark characteristic parameter table shown in Table 2. The storing unit 101 selects the digital watermark characteristic parameter table corresponding to the category index k that is received from the categorizing portion 103. When a digital watermark strength s(m) is input to the storing unit 101 from the digital watermark strength calculating portion 100, the storing unit 101 outputs an image quality deterioration ratio D(k, m) to the digital watermark strength calculating portion 100. On the other hand, the storing unit 101 outputs detection ratio characteristic parameters c1(k, m), c2(k, m), . . . to the robustness evaluation value calculating portion 105.
The robustness evaluation value calculating portion 105 obtains a curve that represents the relation of a detection ratio and an attack parameter with the detection ratio curve parameters c1(k, m), c2(k, m), . . . that are received from the storing unit 101 and calculates a robustness evaluation value. The robustness evaluation value calculating portion 105 has a designated type of curve for each attack. Thus, the robustness evaluation value calculating portion 105 decides the shape of the curve based on an input detection ratio curve parameter value. Alternatively, the robustness evaluation value calculating portion 105 may select one of several types of curves. In this case, an index that designates the type of a curve is contained in the detection ratio curve parameter. Hereinafter, the phrase “obtaining a curve/curved surface” represents that the shape of a curve is decided based on an input parameter.
With input detection ratio curve parameters, a detection ratio curve that represents the relation of an attack parameter and a detection ratio that are expressed by the formula (7) is obtained by the following formula: $r(k, m, x) = \frac{1}{1 + \exp\bigl(c_1(k, m)(x + c_2(k, m))\bigr)} \qquad (12)$
Next, with the formulas (4) and (5), a robustness evaluation value V(k, m) is calculated. The robustness evaluation value calculating portion 105 has a storing means that stores data of a weighting function w(x) and a threshold value α.
When a robustness evaluation value is actually calculated, as shown in FIGS. 10A to 10D, a detection ratio r(k, m, x) and a weighting function w(x) are digitized and calculated using the following formula: $V(k, m) = \sum_{h=1}^{H} w(x_0 + h\Delta x)\, r(k, m, x_0 + h\Delta x)\, \Delta x \qquad (13)$
or the following formula: $V(k, m) = \frac{1}{L} \sum_{h=1}^{H} T\bigl(r(k, m, x_0 + h\Delta x),\, \alpha\bigr)\, \Delta x \qquad (14)$
Alternatively, the function may be approximated and integrated using, for example, Simpson's rule or the trapezoidal rule.
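As a concrete illustration of the digitized evaluation, the following Python sketch computes the sums of the formulas (13) and (14). It assumes a grid x0 + hΔx, h = 1, . . . , H, and takes T(r, α) to be the indicator that r is at least α, which is one plausible reading of the threshold function of the formula (5) earlier in the specification; all sample values are illustrative:

```python
import numpy as np

def robustness_weighted(r, w, x0, dx, H):
    # Formula (13): V = sum_h w(x0 + h*dx) * r(x0 + h*dx) * dx
    xs = x0 + dx * np.arange(1, H + 1)
    return np.sum(w(xs) * r(xs)) * dx

def robustness_threshold(r, x0, dx, H, alpha, L):
    # Formula (14), with T(r, alpha) taken as the indicator r >= alpha.
    xs = x0 + dx * np.arange(1, H + 1)
    return np.sum((r(xs) >= alpha).astype(float)) * dx / L

# Example: a logistic detection ratio curve with a uniform weight.
r = lambda x: 1.0 / (1.0 + np.exp(0.6408 * (x - 10.46)))
w = lambda x: np.ones_like(x)
print(robustness_weighted(r, w, 0.0, 0.1, 300))
print(robustness_threshold(r, 0.0, 0.1, 300, alpha=0.5, L=30.0))
```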
In addition, before a digital watermark is inserted, the user can change the weighting function and the threshold value. In FIG. 4, the evaluation value calculation parameter is data that represents a weighting function or a threshold value. When the weighted mean of robustness evaluation values of a plurality of attacks is obtained and output as a robustness evaluation value to the digital watermark strength calculating portion 100, the evaluation value calculation parameter is a weighting coefficient. When the user designates these values, the designated values are used; otherwise, predetermined values stored in the storing unit are used as default values. With the changed weighting function and threshold value or weighting coefficients of individual attacks, the robustness evaluation value is calculated.
Next, an attack correlation curved surface will be described. Thereafter, the operation of the digital watermark characteristic calculating portion shown in FIG. 5 in the case of a complex attack will be described. In this case, the detection ratio characteristic parameter is composed of a detection ratio curve parameter of each single attack composing the complex attack and an attack correlation curved surface parameter.
The attack correlation curved surface is a curved surface that represents the relation of the detection ratios of the individual single attacks composing the complex attack and the detection ratio of the complex attack. In the following example, the operation of the digital watermark characteristic calculating portion in the case of a complex attack composed of two single attacks (hereinafter referred to as attack 1 and attack 2) will be described. However, it should be noted that the present invention can be applied to a complex attack composed of more than two single attacks. The detection ratio of the attack 1 for an attack parameter value x1 is denoted by r1(x1). The detection ratio of the attack 2 for an attack parameter value x2 is denoted by r2(x2). The detection ratio of the complex attack is denoted by r1,2(x1, x2).
When attacks are combined, if there is no synergism effect thereof, it is predicted that the detection ratio is expressed by the following formula: $r_{1,2}(x_1, x_2) = r_1(x_1)\, r_2(x_2) \qquad (15)$
On the other hand, if there is a synergism effect, the formula (15) is not satisfied. In this case, a function that represents the synergism effect of the combination of attacks is expressed by the following formula: $z_{1,2}(x_1, x_2) = \frac{r_{1,2}(x_1, x_2)}{r_1(x_1)\, r_2(x_2)} \qquad (16)$
The value given by the formula (16) is referred to as the attack correlation value. A curved surface that approximates the variation of the attack correlation value against the attack parameters is referred to as the attack correlation curved surface. If there is no synergism effect of a combination of attacks, z1,2(x1, x2) is always 1.
Assuming that the state in which both attack parameters x1 and x2 are 0 represents no attack, the following formula is satisfied: $\begin{cases} r_{1,2}(x_1, 0) = r_1(x_1) \\ r_{1,2}(0, x_2) = r_2(x_2) \end{cases} \qquad (17)$
In addition, basically, the following relation is satisfied: $r_1(0) = r_2(0) = 1 \qquad (18)$
Thus, the following relation is satisfied: $z_{1,2}(x_1, 0) = z_{1,2}(0, x_2) = 1 \qquad (19)$
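For illustration, the attack correlation value of the formula (16) can be computed directly from measured detection ratios. The following Python sketch assumes the detection ratios are sampled on a common grid of attack parameter values; it also checks the no-synergism case of the formula (15), in which z is identically 1:

```python
import numpy as np

def attack_correlation(r12, r1, r2):
    # Formula (16): z[i, j] = r12[i, j] / (r1[i] * r2[j])
    return r12 / np.outer(r1, r2)

# Without synergism, r12 = r1 * r2 (formula (15)) and z is all ones.
r1 = np.array([1.0, 0.9, 0.5])
r2 = np.array([1.0, 0.8, 0.4])
r12 = np.outer(r1, r2)
print(attack_correlation(r12, r1, r2))
```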
Next, the operation of the digital watermark characteristic calculating portion shown in FIG. 5 will be described.
A category index that is output from the categorizing portion 103 shown in FIG. 4 is input to the storing unit 101. For each category index k, in addition to Table 2, which describes the relation between a digital watermark strength, an image quality deterioration ratio, and a detection ratio curve parameter against a single attack, the storing unit 101 stores a digital watermark characteristic parameter table that describes the relation between a digital watermark strength and an attack correlation curved surface parameter. Table 3 shows the digital watermark characteristic parameter table.
TABLE 3
Digital watermark   Attack correlation curved
strength            surface parameters
s(1)                q1(k, 1), q2(k, 1), . . .
s(2)                q1(k, 2), q2(k, 2), . . .
. . .               . . .
s(M)                q1(k, M), q2(k, M), . . .
In Table 3, k is a category index; m is a digital watermark strength index; and q1(k, m), q2(k, m), . . . are attack correlation curved surface parameters.
For example, in each single attack composing a complex attack, when the state in which the value of the attack parameter is 0 represents that there is no attack, the attack correlation curved surface can be approximated by functions expressed by the following formulas: $z_{1,2}(x_1, x_2) = \frac{1}{1 + q_1 x_1^{q_2} x_2^{q_3}} \qquad (20)$
where q1, q2, and q3 are attack correlation curved surface parameters; or $z_{1,2}(x_1, x_2) = 1 - \frac{1}{\bigl(1 + \exp(q_1 + q_2 x_1)\bigr)\bigl(1 + \exp(q_3 + q_4 x_2)\bigr)} \qquad (21)$
where q1, q2, q3, and q4 are attack correlation curved surface parameters. The curve to be used depends on the combination of the types of attacks. Thus, a proper curve is selected for each complex attack.
The storing unit 101 stores the digital watermark characteristic parameter tables shown in Tables 2 and 3. In accordance with the category index k that is output from the categorizing portion 103, the storing unit 101 selects the proper digital watermark characteristic parameter tables. When a digital watermark strength s(m) is input to the storing unit 101 from the digital watermark strength calculating portion 100, the storing unit 101 outputs an image quality deterioration ratio D(k, m) to the digital watermark strength calculating portion 100. On the other hand, the storing unit 101 outputs detection ratio characteristic parameters c1(k, m), c2(k, m), . . . for each single attack and attack correlation curved surface parameters q1(k, m), q2(k, m), . . . to the robustness evaluation value calculating portion 105.
The robustness evaluation value calculating portion 105 obtains an attack correlation curved surface with the attack correlation curved surface parameters q1(k, m), q2(k, m), . . . that are input from the storing unit 101. In other words, when an approximation is performed with the formula (20), the attack correlation curved surface is obtained by the following formula: $z_{1,2}(k, m, x_1, x_2) = \frac{1}{1 + q_1(k, m)\, x_1^{q_2(k, m)}\, x_2^{q_3(k, m)}} \qquad (22)$
Next, with the detection ratio curve parameters c1(k, m), c2(k, m), . . . for single attacks, a detection ratio curve for each single attack composing the complex attack is obtained. With the obtained attack correlation curved surface and the detection ratio curve, the detection ratio curved surface for the complex attack is obtained by the following formula: $r_{1,2}(k, m, x_1, x_2) = r_1(k, m, x_1)\, r_2(k, m, x_2)\, z_{1,2}(x_1, x_2) \qquad (23)$
With the following formulas, which are equivalent to the formulas (4) and (5) for single attacks, a robustness evaluation value is calculated. When a robustness evaluation value is actually calculated using the following formulas, as in the case of a single attack, the detection ratio, the attack correlation curved surface, and the weighting function are digitized and calculated: $V(k, m) = \iint w_{1,2}(x_1, x_2)\, r_{1,2}(k, m, x_1, x_2)\, dx_1\, dx_2 = \iint w_{1,2}(x_1, x_2)\, r_1(k, m, x_1)\, r_2(k, m, x_2)\, z_{1,2}(x_1, x_2)\, dx_1\, dx_2 \qquad (24)$
$V(k, m) = \frac{1}{L} \iint T\bigl(r_{1,2}(k, m, x_1, x_2),\, \alpha\bigr)\, dx_1\, dx_2 = \frac{1}{L} \iint T\bigl(r_1(k, m, x_1)\, r_2(k, m, x_2)\, z_{1,2}(x_1, x_2),\, \alpha\bigr)\, dx_1\, dx_2 \qquad (25)$
The robustness evaluation value calculating portion 105 has a storing means that stores data of a weighting function w1,2(x1, x2) and a threshold value α.
When the weighting function w1,2(x1, x2) is separable in the attack parameters x1 and x2 and the attack correlation curved surface z1,2(x1, x2) is a linear sum of separable functions, as with the formula (21) (namely, the following formulas are satisfied):
$w_{1,2}(x_1, x_2) = w_1(x_1)\, w_2(x_2) \qquad (26)$
$z_{1,2}(x_1, x_2) = \sum_i z_1^{(i)}(x_1)\, z_2^{(i)}(x_2) \qquad (27)$
the right side of the formula (24) can be expressed as follows: $\sum_i \int w_1(x_1)\, r_1(k, m, x_1)\, z_1^{(i)}(x_1)\, dx_1 \int w_2(x_2)\, r_2(k, m, x_2)\, z_2^{(i)}(x_2)\, dx_2 \qquad (28)$
Thus, after an integrating calculation is performed for each single attack, the results are multiplied and added, and the robustness evaluation value V(k, m) can be calculated. Consequently, the calculation amount can be remarkably reduced. In particular, in the case of z1,2(x1, x2)=1, the robustness evaluation value of a complex attack can be derived simply by calculating the product of the robustness evaluation values of the single attacks.
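The following Python sketch illustrates this factorization: each term of the formula (28) is a product of two one-dimensional integrals (approximated here with the trapezoidal rule), so the double integral of the formula (24) never has to be evaluated on a two-dimensional grid. The curves and grids are illustrative assumptions:

```python
import numpy as np

def robustness_separable(x1, x2, w1, w2, r1, r2, z1_terms, z2_terms):
    # Formula (28): sum_i [ int w1*r1*z1_i dx1 ] * [ int w2*r2*z2_i dx2 ],
    # where z1_terms/z2_terms hold the separable factors of formula (27).
    total = 0.0
    for z1, z2 in zip(z1_terms, z2_terms):
        total += np.trapz(w1 * r1 * z1, x1) * np.trapz(w2 * r2 * z2, x2)
    return total

# Example: z = 1 (no synergism), so V is the product of the two
# single-attack robustness evaluation values.
x1 = np.linspace(0.0, 30.0, 301)
x2 = np.linspace(0.0, 30.0, 301)
r1 = 1.0 / (1.0 + np.exp(0.64 * (x1 - 10.5)))
r2 = 1.0 / (1.0 + np.exp(0.36 * (x2 - 14.6)))
w1 = np.ones_like(x1)
w2 = np.ones_like(x2)
V = robustness_separable(x1, x2, w1, w2, r1, r2,
                         [np.ones_like(x1)], [np.ones_like(x2)])
print(V, np.trapz(w1 * r1, x1) * np.trapz(w2 * r2, x2))  # the two agree
```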
Next, with reference to FIG. 6, the structure and operation of the digital watermark characteristic calculating portion 104 shown in FIG. 4 will be described in the case that the information that represents an image quality deterioration ratio is an image quality deterioration ratio curve parameter and the information that represents a detection ratio is a detection ratio characteristic general parameter.
FIG. 6 is a block diagram showing an example of the structure of the digital watermark characteristic calculating portion according to the first embodiment of the present invention.
A storing unit 171 outputs an image quality deterioration ratio curve parameter corresponding to the category index that is received from the categorizing portion 103 shown in FIG. 4 to an image quality deterioration ratio calculating portion 172. In addition, the storing unit 171 outputs a detection ratio characteristic general parameter to a robustness evaluation value calculating portion 173.
The image quality deterioration ratio calculating portion 172 obtains an image quality deterioration ratio curve that represents the variation of an image quality deterioration ratio against a digital watermark strength with the image quality deterioration ratio curve parameter that is received from the storing unit 171, calculates an image quality deterioration ratio corresponding to a digital watermark strength that is received from the digital watermark strength calculating portion 100, and outputs the calculated image quality deterioration ratio to the digital watermark strength calculating portion 100.
The robustness evaluation value calculating portion 173 obtains a detection ratio curve/curved surface with the detection ratio characteristic general parameter that is received from the storing unit
171 and the digital watermark strength that is received from the digital watermark strength calculating portion 100 shown in FIG. 4. In addition, the robustness evaluation value calculating portion
173 calculates a robustness evaluation value based on a robustness evaluation value calculation parameter that is input by the user, and outputs the calculated robustness evaluation value to the
digital watermark strength calculating portion 100 shown in FIG. 4.
Next, the operation of the digital watermark characteristic calculating portion shown in FIG. 6 will be described.
First of all, the operation of the digital watermark characteristic calculating portion in the case of a single attack will be described. In this case, the detection ratio characteristic general
parameter is a detection ratio curve general parameter. A category index that is output from the categorizing portion 103 shown in FIG. 4 is input to the storing unit 171. The storing unit 171 stores
a digital watermark characteristic parameter table that describes the relation between a category index, an image quality deterioration ratio curve parameter, and a detection ratio curve general
parameter as shown in Table 4.
TABLE 4
                    Image quality              Detection ratio
Category            deterioration ratio        curve general
index               curve parameters           parameters
1                   b1(1), b2(1), . . .        p1(1), p2(1), . . .
2                   b1(2), b2(2), . . .        p1(2), p2(2), . . .
. . .               . . .                      . . .
K                   b1(K), b2(K), . . .        p1(K), p2(K), . . .
where k is a category index; b1(k), b2(k), . . . are image quality deterioration ratio curve parameters; and p1(k), p2(k), . . . are detection ratio curve general parameters.
The image quality deterioration ratio is approximated by, for example, a graph of broken lines or a polynomial. When an image quality deterioration ratio D(s) varies against a digital watermark strength s as shown in FIG. 11A, the image quality deterioration ratio D(s) is approximated as shown in FIG. 11B. For example, when the image quality deterioration ratio D(s) is approximated with a quadratic function expressed by the following formula, b1, b2, and b3 are image quality deterioration ratio curve parameters: $D(s) = b_1 + b_2 s + b_3 s^2 \qquad (29)$
A detection ratio curve parameter is approximated with, for example, a graph of broken lines or a polynomial. For example, when a detection ratio curve parameter ci varies against a digital watermark strength as shown in FIG. 11C, the detection ratio curve parameter is approximated as shown in FIG. 11D. For example, when the detection ratio curve parameter is approximated with a quadratic function given by the following formula, p1, p2, and p3 are detection ratio curve general parameters: $c_i(k, s) = p_1 + p_2 s + p_3 s^2 \qquad (30)$
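For illustration, once the general parameters are known, the per-strength values are recovered simply by evaluating the quadratics of the formulas (29) and (30) at the desired digital watermark strength. A minimal Python sketch with hypothetical coefficient values:

```python
def quality_deterioration(s, b1, b2, b3):
    # Formula (29): D(s) = b1 + b2*s + b3*s^2
    return b1 + b2 * s + b3 * s**2

def curve_parameter(s, p1, p2, p3):
    # Formula (30): c_i(k, s) = p1 + p2*s + p3*s^2
    return p1 + p2 * s + p3 * s**2

# Hypothetical coefficients, evaluated at a non-tabulated strength.
s = 2.5
print(quality_deterioration(s, 0.75, -0.02, -0.015))
print(curve_parameter(s, 0.85, -0.25, 0.02))
```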
The storing unit 171 stores a digital watermark characteristic table shown in Table 4. When a category index is input to the storing unit 171 from the categorizing portion 103, the storing unit 171
outputs image quality deterioration ratio curve parameters b1(k), b2(k), . . . corresponding thereto to the image quality deterioration ratio calculating portion 172. In addition, the storing unit
171 outputs detection ratio curve general parameters p1(k), p2(k), . . . to the robustness evaluation value calculating portion 173.
When the image quality deterioration ratio curve parameters are input to the image quality deterioration ratio calculating portion 172 from the storing unit 171, the image quality deterioration ratio calculating portion 172 obtains an image quality deterioration ratio curve. The image quality deterioration ratio calculating portion 172 calculates an image quality deterioration ratio corresponding to a digital watermark strength that is received from the digital watermark strength calculating portion 100 shown in FIG. 4 with the obtained image quality deterioration ratio curve and outputs the calculated image quality deterioration ratio to the digital watermark strength calculating portion 100 shown in FIG. 4.
When the detection ratio curve general parameters are input to the robustness evaluation value calculating portion 173 from the storing unit 171, the robustness evaluation value calculating portion 173 obtains a curve that represents the variation of a detection ratio curve parameter against a digital watermark strength. Thereafter, the robustness evaluation value calculating portion 173 obtains a detection ratio curve parameter corresponding to the digital watermark strength that is received from the digital watermark strength calculating portion 100 shown in FIG. 4.
Next, the robustness evaluation value calculating portion 173 obtains a detection ratio curve with the obtained detection ratio curve parameter. With the formula (4) or (5), the robustness evaluation value calculating portion 173 calculates a robustness evaluation value V(k, m). The calculating method used by the robustness evaluation value calculating portion 173 is the same as that used by the robustness evaluation value calculating portion 105 shown in FIG. 5 in the case of a single attack. The robustness evaluation value calculating portion 173 outputs the obtained robustness evaluation value to the digital watermark strength calculating portion 100 shown in FIG. 4.
Next, the operation of the digital watermark characteristic calculating portion shown in FIG. 6 in the case of a complex attack will be described. In this case, the detection ratio characteristic
general parameters are composed of detection ratio curve general parameters for each single attack composing a complex attack and attack correlation curved surface general parameters.
The category index that is output from the categorizing portion 103 shown in FIG. 4 is input to the storing unit 171. In addition to the table that describes the relation between a category index, an image quality deterioration ratio curve parameter, and a detection ratio curve general parameter shown in Table 4, the storing unit 171 stores a digital watermark characteristic parameter table shown in Table 5. The digital watermark characteristic parameter table describes the relation between a category index and an attack correlation curved surface general parameter.
TABLE 5
Category            Attack correlation curved
index               surface general parameters
1                   t1(1), t2(1), . . .
2                   t1(2), t2(2), . . .
. . .               . . .
K                   t1(K), t2(K), . . .
where k is a category index; and t1(k), t2(k), . . . are attack correlation curved surface general parameters.
As with a detection ratio curve parameter, an attack correlation curved surface parameter qi is approximated with, for example, a graph of broken lines or a polynomial. For example, the attack correlation curved surface parameter qi is approximated with a quadratic function expressed by the following formula: $q_i(k, s) = t_1 + t_2 s + t_3 s^2 \qquad (31)$
where t1, t2, and t3 are attack correlation curved surface general parameters.
The storing unit 171 stores the digital watermark characteristic tables shown in Table 4 and Table 5. When a category index k is input to the storing unit 171 from the categorizing portion 103, the
storing unit 171 outputs image quality deterioration ratio curve parameters b1(k), b2(k), . . . corresponding to the category index k to the image quality deterioration ratio calculating portion 172.
On the other hand, the storing unit 171 outputs detection ratio curve general parameters p1(k), p2(k), . . . for single attacks and attack correlation curved surface general parameters t1(k), t2(k),
. . . to the robustness evaluation value calculating portion 173.
When the image quality deterioration ratio curve parameters are input to the image quality deterioration ratio calculating portion 172 from the storing unit 171, the image quality deterioration ratio
calculating portion 172 obtains an image quality deterioration ratio curve. The image quality deterioration ratio calculating portion 172 calculates an image quality deterioration ratio corresponding
to the digital watermark strength that is received from the digital watermark strength calculating portion 100 shown in FIG. 4 with the obtained image quality deterioration ratio curve and outputs
the calculated image quality deterioration ratio to the digital watermark strength calculating portion 100 shown in FIG. 4.
When the attack correlation curved surface general parameters are input to the robustness evaluation value calculating portion 173 from the storing unit 171, the robustness evaluation value calculating portion 173 obtains a curve that represents the variation of the attack correlation curved surface parameters against a digital watermark strength. Thereafter, the robustness evaluation value calculating portion 173 obtains attack correlation curved surface parameters corresponding to the digital watermark strength that is received from the digital watermark strength calculating portion 100 shown in FIG. 4. Next, the robustness evaluation value calculating portion 173 obtains an attack correlation curved surface with the obtained attack correlation curved surface parameters.
When detection ratio curve general parameters for a single attack are input to the robustness evaluation value calculating portion 173 from the storing unit 171, the robustness evaluation value
calculating portion 173 obtains a curve that represents the variation of the detection ratio curve parameters against the digital watermark strength. Next, the robustness evaluation value calculating
portion 173 obtains detection ratio curve parameters corresponding to the digital watermark strength that is received from the digital watermark strength calculating portion 100 shown in FIG. 4.
Thereafter, with the obtained detection ratio curve parameters, the robustness evaluation value calculating portion 173 obtains a detection ratio curve. With the obtained attack correlation curved
surface and detection ratio curve, as with the formula (23), the robustness evaluation value calculating portion 173 obtains a detection ratio curved surface for the complex attack.
Next, with the formulas (24) and (25), the robustness evaluation value calculating portion 173 calculates a robustness evaluation value V(k, m). This calculating method used in the robustness
evaluation value calculating portion 173 is the same as that used in the robustness evaluation value calculating portion 105 shown in FIG. 5 in the case of a complex attack. The obtained robustness
evaluation value is output to the digital watermark strength calculating portion 100 shown in FIG. 4.
[Second Embodiment]
Next, with reference to FIG. 7, a second embodiment of the present invention will be described.
FIG. 7 is a block diagram showing the structure of a digital watermark inserting system according to the second embodiment of the present invention. In the digital watermark inserting system shown in
FIG. 7, a digital watermark characteristic calculating portion 131 is used instead of the digital watermark characteristic calculating portion 104 of the digital watermark inserting system shown in
FIG. 4. In addition, a digital watermark characteristic parameter table generating unit 132 is connected to the digital watermark characteristic calculating portion 131. The other portions of the
digital watermark inserting system shown in FIG. 7 are the same as those of the digital watermark inserting system shown in FIG. 4.
In the system shown in FIG. 7, the digital watermark characteristic parameter table generating unit 132 generates a digital watermark characteristic parameter table. The digital watermark
characteristic parameter table is output to the digital watermark characteristic calculating portion 131 and stored in a storing unit thereof. The operation of the digital watermark characteristic
calculating portion 131 is the same as that of the digital watermark characteristic calculating portion 104 shown in FIG. 4. The digital watermark characteristic parameter table generating unit 132
will be described later.
[Third Embodiment]
Next, with reference to FIG. 8, a third embodiment of the present invention will be described.
FIG. 8 is a block diagram showing the structure of a digital watermark inserting system according to a third embodiment of the present invention. In the digital watermark inserting system shown in
FIG. 8, a digital watermark characteristic calculating portion 151 is used instead of the digital watermark characteristic calculating portion 104 of the digital watermark inserting system shown in
FIG. 4. In addition, an input unit 152 is connected to the digital watermark characteristic calculating portion 151. A record medium unit 153 is connected to the input unit 152. The other portions of
the digital watermark inserting system shown in FIG. 8 are the same as those of the digital watermark inserting system shown in FIG. 4.
In the system shown in FIG. 8, a unit equivalent to the digital watermark characteristic parameter table generating unit 132 shown in FIG. 7 generates a digital watermark characteristic parameter
table. The generated digital watermark characteristic parameter table is stored in the record medium unit 153. The digital watermark characteristic parameter table stored in the record medium unit
153 is input to the digital watermark characteristic calculating portion 151 through the input unit 152 and stored in a storing unit of the digital watermark characteristic calculating portion 151.
The operation of the digital watermark characteristic calculating portion 151 is the same as the operation of the digital watermark characteristic calculating portion 104 shown in FIG. 4.
[Fourth Embodiment]
Next, with reference to FIG. 12, a digital watermark characteristic parameter table generating unit according to a fourth embodiment of the present invention will be described.
FIG. 12 is a block diagram showing the structure of the digital watermark characteristic parameter table generating unit according to the fourth embodiment of the present invention. The structure of
the digital watermark characteristic parameter table generating unit according to the fourth embodiment is the same as that of the conventional digital watermark characteristic table generating unit
shown in FIG. 2 except that a digital watermark characteristic parameter table calculating portion 205 is used instead of the digital watermark characteristic table generating portion 2201. The
digital watermark characteristic parameter table calculating portion 205 obtains information that describes an image quality deterioration ratio and a detection ratio with a detected result that is
received from the digital watermark detecting portion 202, a digital watermark strength, an attack parameter, an image quality deterioration amount that is received from the image quality
deterioration amount calculating portion 203, and a category index that is received from the categorizing portion 204. The digital watermark characteristic parameter table calculating portion 205
outputs a table that describes the relation between these factors, a category index, and a digital watermark strength as a digital watermark characteristic parameter table.
Next, the operation of the digital watermark characteristic parameter table generating unit shown in FIG. 12 will be described. The operation of the digital watermark characteristic parameter table
generating unit shown in FIG. 12 is the same as that of the conventional digital watermark characteristic table generating unit shown in FIG. 2 except for a digital watermark characteristic parameter
table calculating portion 205. Next, with reference to FIG. 13, the digital watermark characteristic parameter table calculating portion 205 will be described in detail.
FIG. 13 is a block diagram showing the structure of a digital watermark characteristic parameter table calculating portion 205. A detected result totaling portion 300 totals detected results for each
attack parameter, each digital watermark strength, and each category index, calculates detection ratios with the totaled results, and outputs the calculated detection ratios to a digital watermark
characteristic extracting portion 302. An image quality deterioration amount totaling portion 301 totals image quality deterioration amounts for each category index and each digital watermark
strength, calculates image quality deterioration ratios with the totaled results, and outputs the calculated image quality deterioration ratios to a digital watermark characteristic extracting
portion 302.
The digital watermark characteristic extracting portion 302 obtains the relation of attack parameters and detection ratios that are received from the detected result totaling portion 300, calculates
detection ratio descriptive information that describes a curve that approximates the relation, and outputs the calculated result to a data combining portion 303. In addition, the digital watermark
characteristic extracting portion 302 calculates image quality deterioration ratio descriptive information that describes the image quality deterioration amounts that are received from the image
quality deterioration amount totaling portion 301 and outputs the calculated result to the data combining portion 303. The data combining portion 303 generates a table that describes the relation
between a category index, a digital watermark strength, and the detection ratio descriptive information and the image quality deterioration ratio descriptive information that are received from the
digital watermark characteristic extracting portion 302 and outputs the generated table as a digital watermark characteristic parameter table.
Next, the operation of the digital watermark characteristic parameter table calculating portion shown in FIG. 13 will be described. The operations of the detected result totaling portion 300 and the
image quality deterioration amount totaling portion 301 of the digital watermark characteristic parameter table calculating portion shown in FIG. 13 are the same as the operations of the detected
result totaling portion 300 and the image quality deterioration amount totaling portion 301 of the conventional digital watermark characteristic parameter table calculating portion. When a detection
ratio for a complex attack is calculated, detection ratios of attack parameters of individual single attacks composing the complex attack are calculated.
A detection ratio that is output from the detected result totaling portion 300 and an image quality deterioration ratio that is output from the image quality deterioration amount totaling portion 301 are input to the digital watermark characteristic extracting portion 302. The digital watermark characteristic extracting portion 302 obtains image quality deterioration ratio descriptive information and detection ratio descriptive information, and outputs them to the data combining portion 303.
The image quality deterioration ratio descriptive information may be an image quality deterioration ratio or an image quality deterioration ratio curve parameter.
On the other hand, the detection ratio descriptive information may be a detection ratio characteristic parameter or a detection ratio characteristic general parameter. In the case of a single attack
that is a single process, the detection ratio characteristic parameter is a detection ratio curve parameter. In the case of a complex attack, the detection ratio characteristic parameter is composed
of a detection ratio curve parameter and an attack correlation curved surface parameter. In the case of a single attack, the detection ratio characteristic general parameter is a detection ratio
curve general parameter. In the case of a complex attack, the detection ratio characteristic general parameter is composed of a detection ratio curve general parameter and an attack correlation
curved surface general parameter.
The structure and operation of the digital watermark characteristic extracting portion 302 depend on whether the image quality deterioration ratio descriptive information is an image quality
deterioration ratio or an image quality deterioration ratio curve parameter. In addition, the structure and operation of the digital watermark characteristic extracting portion 302 depend on whether
the detection ratio descriptive information is a detection ratio characteristic parameter or a detection ratio characteristic general parameter. Moreover, the structure and operation of the digital
watermark characteristic extracting portion 302 depend on whether a single attack or a complex attack is applied. These cases will be described later in detail.
The data combining portion 303 generates a digital watermark characteristic parameter table with the image quality deterioration ratio descriptive information and the detection ratio descriptive
information that are received from the digital watermark characteristic extracting portion 302 and outputs the generated digital watermark characteristic parameter table. The operation of the data
combining portion 303 will be described later along with the operation of the digital watermark characteristic extracting portion 302.
Next, with reference to FIG. 14, the structure and operation of the digital watermark characteristic extracting portion 302 shown in FIG. 13 in the case that the image quality deterioration ratio
descriptive information is an image quality deterioration ratio and that the detection ratio descriptive information is a detection ratio characteristic parameter will be described.
FIG. 14 is a schematic diagram showing an example of the structure of the digital watermark characteristic extracting portion 302 according to the present invention. The detection ratio characteristic extracting portion 320 obtains the relation of a detection ratio that is received from the detected result totaling portion 300 shown in FIG. 13, an attack parameter, and a digital watermark strength for each category index, calculates detection ratio characteristic parameters with the obtained results, and outputs the calculated results as detection ratio descriptive information. On the other hand, image quality deterioration ratios are input to the digital watermark characteristic extracting portion 302 from the image quality deterioration amount totaling portion 301, and the digital watermark characteristic extracting portion 302 outputs them as image quality deterioration ratio descriptive information.
Next, the operation of the detection ratio characteristic extracting portion 320 shown in FIG. 14 will be described. First of all, the operation of the detection ratio characteristic extracting
portion 320 in the case of a single attack will be described. In this case, the detection ratio characteristic parameter is a detection ratio curve parameter.
Next, the method for calculating a detection ratio curve parameter from the relation (x(1), r(x(1))), (x(2), r(x(2))), . . . , (x(N), r(x(N))) of an attack parameter x at N points and a detection ratio r(x) will be described on the assumption that the relation x(1) ≤ x(2) ≤ . . . ≤ x(N) is satisfied.
First of all, the detection ratio characteristic extracting portion 320 checks the variation of the detection ratio against the attack parameter and approximates the variation with a curve defined by several parameters. The type of curve that approximates the variation depends on the digital watermark system and the attack for use. The type of curve may be pre-designated for each attack. Alternatively, the detection ratio characteristic extracting portion 320 may calculate a detection ratio curve parameter for each curve that has been registered and select the curve that has the minimum approximation error.
When a logistic curve expressed by the formula (7) is used, the following relation is satisfied: $c_1(x + c_2) = \ln\left[\frac{1}{r(x)} - 1\right] \qquad (32)$
Thus, for the N points, the detection ratio characteristic extracting portion 320 obtains the relation of the amount of the right side of the formula (32) and the attack parameter x and approximates
the relation with a line, and obtains detection ratio curve parameters c1 and c2. To approximate the relation with a line, for example, the method of least squares can be used.
For a curve that satisfies the following formula, as with a logistic curve: $u = \frac{1}{1 + e^v} \qquad (33)$
when v is changed by a small amount Δv, the amount of change Δu of u is expressed by the following formula: $\Delta u = -\frac{e^v}{(1 + e^v)^2}\,\Delta v = -u(1 - u)\,\Delta v \qquad (34)$
Thus, the influence of the approximation error against the line is proportional to the following formula: $r(x)\{1 - r(x)\} \qquad (35)$
Thus, when the coefficients are calculated by the method of least squares, the approximation errors can be weighted with the value of the formula (35) or a value as a function thereof. In other words, they are weighted according to the following formula: $\sum_{n=1}^{N} r(x(n))\{1 - r(x(n))\}\left\{c_1(x(n) + c_2) - \ln\left(\frac{1}{r(x(n))} - 1\right)\right\}^2 \qquad (36)$
Parameters that minimize this weighted sum of squares are calculated. Thus, the total approximation error can be suppressed.
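The following Python sketch implements this weighted fit: the logistic curve is linearized with the formula (32) as y = ln(1/r − 1) ≈ c1·x + c1·c2, each point is weighted by r(1 − r) as in the formulas (35) and (36), and the weighted normal equations are solved. The sample points are illustrative, not measured data:

```python
import numpy as np

def fit_logistic(x, r, eps=1e-6):
    r = np.clip(r, eps, 1.0 - eps)      # keep the logit finite
    y = np.log(1.0 / r - 1.0)           # right side of formula (32)
    wt = r * (1.0 - r)                  # weight of formulas (35)/(36)
    A = np.vstack([x, np.ones_like(x)]).T
    # Weighted least squares: solve (A^T W A) theta = A^T W y.
    a, b = np.linalg.solve(A.T @ (A * wt[:, None]), A.T @ (wt * y))
    return a, b / a                     # y ~ c1*(x + c2): c1 = a, c2 = b/a

x = np.array([0.0, 5.0, 8.0, 10.0, 12.0, 15.0, 20.0])
r = np.array([0.999, 0.97, 0.80, 0.50, 0.20, 0.03, 0.001])
print(fit_logistic(x, r))               # detection ratio curve parameters
```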
When a graph of broken lines expressed by the formula (8) is used, points whose detection ratios satisfy the following relation are selected from the N points, and a line is fitted directly to the selected points, whereby the detection ratio curve parameters c1 and c2 can be obtained: $r_1 \le r(x) \le r_2 \qquad (37)$
where r1 and r2 may be any values as long as they satisfy the following relation: $0 \le r_1 < r_2 \le 1 \qquad (38)$
For example, r1=0.1 and r2=0.9.
When a graph of broken lines expressed by the formula (9) is used, the N points are separated into n points close to 0 and the remaining (N−n) points. A line is fitted to each of the two portions. With the point of intersection of these lines and the point of intersection of the line of the (N−n) points and the axis of the attack parameter, the detection ratio curve parameters c1, c2, and c3 can be obtained. In this case, the n points can be selected in various manners. For example, the integer n that satisfies the relation 1 ≤ n ≤ N and minimizes the approximation error can be used.
When a fractional function expressed by the formula (10) is used as a curve, the following relation is satisfied: $c_1 x + c_2 x^2 = \frac{1}{r(x)} - 1 \qquad (39)$
Thus, for the N points, the relation of the amount of the right side of the formula (39) and the attack parameter x is obtained and fitted to a quadratic function. Consequently, the detection ratio curve parameters c1 and c2 can be obtained. To fit the relation to a quadratic function, for example, the method of least squares can be used.
For a curve that satisfies the following formula: $u = \frac{1}{v} \qquad (40)$
when v is changed by a small amount Δv, the amount of change Δu of u can be expressed by the following formula: $\Delta u = -\frac{1}{v^2}\,\Delta v = -u^2\,\Delta v \qquad (41)$
Thus, the method of least squares can be used along with the weighting method. When an exponential function expressed by the formula (11) is used as a curve, the following relation is satisfied: $-c_1(x - c_2) = \ln r(x) \qquad (42)$
Thus, for the N points, the relation of the natural logarithm of the detection ratio and the attack parameter x is obtained and fitted to a line. Thus, the detection ratio curve parameters c1 and c2 are obtained. As a method for fitting the relation to a line, for example, the method of least squares can be used.
For a curve that is expressed by the following formula: $u = e^{-v} \qquad (43)$
when v is changed by a small amount Δv, the amount of change Δu of u is expressed by the following formula: $\Delta u = -e^{-v}\,\Delta v = -u\,\Delta v \qquad (44)$
The method of least squares may be used along with the weighting method. In such a manner, detection ratio curve parameters are calculated for each category index and each digital watermark strength. Along with the image quality deterioration ratio descriptive information, the detection ratio curve parameters are output as detection ratio descriptive information to the data combining portion 303.
The data combining portion 303 generates and outputs a digital watermark characteristic parameter table as shown in Table 2 for each category index.
Next, the operation of the digital watermark characteristic extracting portion shown in FIG. 14 in the case of a complex attack will be described. In this case, the detection ratio characteristic parameter is composed of a detection ratio curve parameter of each single attack composing the complex attack and an attack correlation curved surface parameter.
In the above-described manner, a detection ratio curve parameter for each single attack composing a complex attack is calculated.
Next, a method for calculating an attack correlation curved surface parameter with a detection ratio r1,2(x1, x2) at N points (x1(1), x2(1)), (x1(2), x2(2)), . . . , (x1(N), x2(N)) will be described.
First of all, for the N combinations of (x1, x2), an attack correlation value z1,2(x1, x2) expressed by the formula (16) is calculated with values r1(x1) and r2(x2) obtained from the detection ratio curves for the single attacks and the detection ratio r1,2(x1, x2) of the complex attack. The N pieces of data (x1, x2, z1,2(x1, x2)) are fitted to a curved surface. When an attack correlation value is calculated, actually measured detection ratios for the single attacks can be used instead of values obtained from the detection ratio curves. When a function expressed by the formula (20) is used as a curved surface, the following relation is satisfied: $\ln q_1 + q_2 \ln x_1 + q_3 \ln x_2 = \ln\left[\frac{1}{z_{1,2}(x_1, x_2)} - 1\right] \qquad (45)$
Thus, when the relation of the logarithmic values of the attack parameters and the amount of the right side of the formula (45) is obtained for the N points and fitted to a plane, ln q1, q2, and q3 can be obtained. Thus, the value of q1 can be obtained from ln q1. Consequently, the attack correlation curved surface parameters can be calculated. To fit the relation to a plane, for example, the method of least squares can be used.
In this case, the formula (20) can be expressed by the following formula: $z_{1,2}(x_1, x_2) = \frac{1}{1 + \exp(\ln q_1 + q_2 \ln x_1 + q_3 \ln x_2)} \qquad (46)$
In addition, for a curve that satisfies the formula (33), when v is changed by a small amount Δv, the amount of change Δu of u is expressed by the formula (34). Thus, as with the formula (36), the method of least squares can be used along with the weighting method. When the single attack detection ratios r1(x1) and r2(x2) are small, the formula (23) shows that the influence of the approximation error of z1,2(x1, x2) on the complex attack detection ratio r1,2(x1, x2) is small. Thus, accurate approximation of z1,2(x1, x2) is required only in the range in which the single attack detection ratios r1(x1) and r2(x2) are large.
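For illustration, the plane fit of the formula (45) can be written in a few lines of Python. The sketch below generates exact data from the formula (20) with hypothetical parameters q = (0.01, 2, 1.5) and recovers them; it assumes x1, x2 > 0 and 0 < z < 1 so that the logarithms exist:

```python
import numpy as np

def fit_correlation_surface(x1, x2, z, eps=1e-6):
    z = np.clip(z, eps, 1.0 - eps)
    y = np.log(1.0 / z - 1.0)           # right side of formula (45)
    A = np.vstack([np.ones_like(x1), np.log(x1), np.log(x2)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    ln_q1, q2, q3 = coef
    return np.exp(ln_q1), q2, q3        # parameters q1, q2, q3 of formula (20)

x1 = np.array([1.0, 2.0, 4.0, 1.0, 3.0, 5.0])
x2 = np.array([1.0, 1.0, 2.0, 3.0, 4.0, 5.0])
z = 1.0 / (1.0 + 0.01 * x1**2.0 * x2**1.5)   # formula (20), q = (0.01, 2, 1.5)
print(fit_correlation_surface(x1, x2, z))    # recovers roughly (0.01, 2, 1.5)
```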
When a function expressed by the formula (21) is used as a curved surface, it is difficult to obtain parameters that analytically minimize the approximation error. However, when a proper algorithm such as the steepest descent method is used, the attack correlation curved surface parameters can be calculated. In such a manner, attack correlation curved surface parameters are calculated for each category index and each digital watermark strength. In addition to the image quality deterioration ratio descriptive information, the calculated attack correlation curved surface parameters and the single attack detection ratio curve parameters are output as the detection ratio descriptive information to the data combining portion 303. The data combining portion 303 generates and outputs the digital watermark characteristic parameter tables shown in Tables 2 and 3 for individual category indexes.
Next, with reference to FIG. 15, the structure and operation of the digital watermark characteristic extracting portion 302 shown in FIG. 13 in the case that the image quality deterioration ratio
descriptive information is an image quality deterioration ratio curve parameter and that the detection ratio descriptive information is a detection ratio characteristic general parameter will be
described. FIG. 15 is a block diagram showing an example of the structure of the digital watermark characteristic extracting portion 302 according to the present invention.
For each category index, a detection ratio characteristic extracting portion 340 obtains the relation of a detection ratio that is received from the detected result totaling portion 300 shown in FIG. 13, an attack parameter, and a digital watermark strength, calculates detection ratio characteristic general parameters with the obtained relation, and outputs the calculated detection ratio characteristic general parameters as detection ratio descriptive information. An image quality deterioration ratio characteristic extracting portion 341 obtains the relation of an image quality deterioration ratio that is received from the image quality deterioration amount totaling portion 301 shown in FIG. 13 and a digital watermark strength, calculates image quality deterioration ratio curve parameters with the obtained relation, and outputs the calculated image quality deterioration ratio curve parameters as image quality deterioration ratio descriptive information.
Next, the operation of the digital watermark characteristic extracting portion shown in FIG. 15 will be described.
First of all, the operation of the digital watermark characteristic extracting portion in the case of a single attack will be described. In this case, the detection ratio characteristic general parameter is a detection ratio curve general parameter. The image quality deterioration ratio characteristic extracting portion 341 calculates parameters of an image quality deterioration ratio curve that approximates the variation of an image quality deterioration ratio against a digital watermark strength. For example, when the image quality deterioration ratio characteristic extracting portion 341 approximates the variation with a quadratic function expressed by the formula (29), the image quality deterioration ratio curve parameters b1, b2, and b3 can be calculated by fitting a quadratic curve to the variation. In this case, for example, the method of least squares can be used. The obtained image quality deterioration ratio curve parameters are output as image quality deterioration ratio descriptive information to the data combining portion 303 shown in FIG. 13.
The detection ratio characteristic extracting portion 340 calculates detection ratio characteristic general parameters. In the same manner as the detection ratio characteristic extracting portion 320 shown in FIG. 14, the detection ratio characteristic extracting portion 340 calculates detection ratio curve parameters for each category index and each digital watermark strength. Thereafter, the detection ratio characteristic extracting portion 340 obtains the variation of the detection ratio curve parameters against the digital watermark strength, fits a curve to the variation, and calculates detection ratio curve general parameters.
When the variation of a detection ratio curve parameter against the digital watermark strength is approximated with a quadratic function expressed by the formula (30), the detection ratio curve general parameters p1, p2, and p3 can be calculated by fitting a quadratic curve to the variation. In this case, for example, the method of least squares can be used. The obtained detection ratio curve general parameters are output as detection ratio descriptive information to the data combining portion 303 shown in FIG. 13. The data combining portion 303 generates and outputs a digital watermark characteristic parameter table as shown in Table 4.
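The two-stage procedure can be illustrated with the c1 column of Table 6 below: a quadratic in the digital watermark strength s is fitted by least squares, giving the general parameters p1, p2, and p3 of the formula (30). A minimal Python sketch:

```python
import numpy as np

# Strengths and the c1 detection ratio curve parameters of Table 6 below.
s = np.array([1.0, 2.0, 3.0, 4.0])
c1 = np.array([0.6408, 0.3556, 0.3207, 0.1422])

# Least-squares quadratic fit; np.polyfit returns the highest power first.
p3, p2, p1 = np.polyfit(s, c1, deg=2)
print(p1, p2, p3)                    # detection ratio curve general parameters
print(p1 + p2 * s + p3 * s**2)       # reconstructed c1(s), formula (30)
```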
Next, the operation of the digital watermark characteristic extracting portion shown in FIG. 15 in the case of a complex attack will be described. In this case, the detection ratio characteristic general parameter is composed of a detection ratio curve general parameter for each attack composing a complex attack and an attack correlation curved surface general parameter.
The operation of the image quality deterioration ratio characteristic extracting portion 341 in the case of a complex attack is the same as that in the case of a single attack. The image quality
deterioration ratio characteristic extracting portion 341 outputs image quality deterioration ratio curve parameters as image quality deterioration ratio descriptive information to the data combining
portion 303 shown in FIG. 13.
For each attack composing the complex attack, the detection ratio characteristic extracting portion 340 calculates detection ratio curve parameters and detection ratio curve general parameters in the same manner as in the case of a single attack. In addition, the detection ratio characteristic extracting portion 340 calculates attack correlation curved surface parameters in the same manner as the detection ratio characteristic extracting portion 320 shown in FIG. 14 does for a complex attack. The detection ratio characteristic extracting portion 340 obtains the variation of the attack correlation curved surface parameters against the digital watermark strength and calculates parameters of a curve that approximates the variation.
For example, when the variation of the attack correlation curved surface parameters against the digital watermark strength is approximated with a quadratic function expressed by the formula (31), the attack correlation curved surface general parameters t1, t2, and t3 can be calculated by fitting a quadratic curve to the variation. The obtained detection ratio curve general parameters and attack correlation curved surface general parameters are output as detection ratio descriptive information to the data combining portion 303 shown in FIG. 13. The data combining portion 303 generates and outputs the digital watermark characteristic parameter tables shown in Tables 4 and 5.
The digital watermark inserting system and the digital watermark characteristic parameter table generating unit have been described. Next, a record medium according to the present invention will be described. On the record medium, a program that allows the digital watermark inserting system and the digital watermark characteristic parameter table generating unit to be accomplished has been recorded.
The program for the digital watermark inserting system and the digital watermark characteristic parameter table generating unit is coded in a programming language that a computer can read.
The record medium is for example a CD-ROM or a floppy disk.
The record medium may be a record means such as a hard disk of a server unit. When the computer program is recorded in the storing means and read through a network, the record medium according to the present invention can be accomplished.
EXAMPLES
First Example
Next, examples of the embodiments of the present invention will be described.
FIG. 16A is an example of a graph showing the variation of a detection ratio against an attack for adding noise. As an attack parameter, the standard deviation of noise was used. The digital
watermark strength was varied in the range from 1 to 4 (1, 2, 3, and 4). The approximated results of the variation with a logistic curve expressed by the formula (7), with a graph of broken lines
expressed by the formula (8), and with a graph of broken lines expressed by the formula (9) are shown in FIGS. 16B, 16C, and 16D, respectively. The respective digital watermark characteristic
parameter tables are shown in Tables 6, 7, and 8.
TABLE 6
Digital watermark   Image quality          Detection ratio curve
strength            deterioration ratio    parameters (c1, c2)
1                   0.693                  0.6408, −10.46
2                   0.644                  0.3556, −14.61
3                   0.533                  0.3207, −18.36
4                   0.347                  0.1422, −30.27
TABLE 7
Digital watermark   Image quality          Detection ratio curve
strength            deterioration ratio    parameters (c1, c2)
1                   0.693                  5.59, 15.38
2                   0.644                  6.36, 23.35
3                   0.533                  9.08, 26.93
4                   0.347                  10.89, 52.91
TABLE 8
Digital watermark   Image quality          Detection ratio curve
strength            deterioration ratio    parameters (c1, c2, c3)
1                   0.693                  5.98, 0.9601, 15.38
2                   0.644                  6.56, 0.9158, 23.30
3                   0.533                  11.95, 0.9467, 25.37
4                   0.347                  16.78, 0.8779, 50.92
Image quality deterioration ratios D in the tables are calculated from SNR values according to the formula (47): $D = \begin{cases} 1 & (\mathrm{SNR} > 45) \\ (\mathrm{SNR} - 30)/15 & (30 \le \mathrm{SNR} \le 45) \\ 0 & (\mathrm{SNR} < 30) \end{cases} \qquad (47)$
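For illustration, the formula (47) reduces to a single clipped linear map. The following Python sketch reproduces, for example, the strength-1 entry of Table 6 from an SNR of about 40.4 (an illustrative input value):

```python
import numpy as np

def deterioration_ratio(snr):
    # Formula (47): 1 above SNR 45, 0 below SNR 30, linear in between.
    return np.clip((snr - 30.0) / 15.0, 0.0, 1.0)

print(deterioration_ratio(40.4))   # ~0.693, cf. Table 6, strength 1
```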
FIG. 18A is a graph showing the variation of a detection ratio against enlargement/shrinkage in the horizontal direction as another attack example. As an attack parameter, the magnification of the enlargement/shrinkage was used. The digital watermark strength was varied in the range from 1 to 4 (1, 2, 3, and 4).
The following curves were fitted to the variation. In this case, when the attack parameter x is 1, it represents that there is no attack. Thus, respective curves were fitted to the variation in the two cases x<1 and x>1. The parameters for each case are shown in FIGS. 17A, 17B, and 17C. The approximated results of the variation with a logistic curve expressed by the formula (7), with a graph of broken lines expressed by the formula (8), and with a graph of broken lines expressed by the formula (9) are shown in FIGS. 18B, 18C, and 18D, respectively. The respective digital watermark characteristic parameter tables are shown in Tables 9, 10, and 11.
TABLE 9
Digital watermark   Image quality          Detection ratio curve
strength            deterioration ratio    parameters (c1, c2, c3, c4)
1                   0.693                  −142.6, 136.6, 124.3, −130.4
2                   0.644                  −124.8, 119.0, 122.9, −129.6
3                   0.533                  −171.8, 163.6, 129.7, −137.6
4                   0.347                  −131.6, 124.9, 106.0, −112.9
TABLE 10
Digital watermark   Image quality          Detection ratio curve
strength            deterioration ratio    parameters (c1, c2, c3, c4)
1                   0.693                  0.942, 0.974, 1.031, 1.071
2                   0.644                  0.933, 0.974, 1.034, 1.075
3                   0.533                  0.933, 0.966, 1.042, 1.082
4                   0.347                  0.928, 0.969, 1.043, 1.093
TABLE 11
Digital watermark   Image quality          Detection ratio curve
strength            deterioration ratio    parameters (c1, c2, c3, c4, c5, c6)
1                   0.693                  0.941, 0.974, 0.989, 1.034, 0.918, 1.072
2                   0.644                  0.933, 0.973, 0.988, 1.036, 0.956, 1.075
3                   0.533                  0.937, 0.963, 0.955, 1.044, 0.943, 1.082
4                   0.347                  0.928, 0.967, 0.948, 1.046, 0.935, 1.093
Thus, in such a manner, detection ratio curve parameters can be calculated.
FIG. 19A shows an example in which the variation of the image quality deterioration ratio expressed by the formula (47) against a digital watermark strength was checked and an image quality deterioration ratio curve was obtained. In FIG. 19A, the image quality deterioration ratio was approximated with a quadratic function. FIG. 19B shows an example in which an image quality deterioration ratio curve was obtained in another digital watermark system. In FIG. 19B, the image quality deterioration ratio was approximated with a graph of broken lines. In such a manner, by fitting a curve to the variation, image quality deterioration ratio curve parameters can be calculated. Next, an example in which the variation of a detection ratio curve parameter against a digital watermark strength was checked and approximated with a quadratic curve is described.
FIGS. 20A and 20B are examples in which quadratic functions are fitted to the detection ratio curve parameters c1 and c2 shown in Table 6, respectively. FIGS. 21A and 21B are examples in which quadratic functions are fitted to the detection ratio curve parameters c1 and c2 shown in Table 7, respectively. FIGS. 22A, 22B, and 22C are examples in which quadratic functions are fitted to the detection ratio curve parameters c1, c2, and c3 shown in Table 8, respectively. In such a manner, by fitting a curve to the variation, detection ratio curve general parameters can be obtained.
Next, an example in which an attack correlation curved surface is calculated for a complex attack is described. FIG. 23 shows a calculated result of the attack correlation value, according to the formula (16), for a complex attack that is a combination of a chromatic saturation varying attack and a noise adding attack. For the variation of chromatic saturation, the ratio of the varied chromatic saturation to the original chromatic saturation is used as an attack parameter. FIG. 23 shows that the complex attack does not have a synergism effect of the combination of the single attacks.
When a weighting function that is separable in each variable is used, the robustness evaluation value against a complex attack combining a chromatic saturation variation and noise can be expressed as the product of the robustness evaluation value against the saturation variation and the robustness evaluation value against the noise. Thus, the amount of calculation can be reduced. When the robustness evaluation value is approximated with the following formula (48), the accuracy of the approximation is low in the region where the standard deviation of the noise is large. In that region, however, the detection ratio against noise is small, so an approximation error has little influence on the calculated robustness evaluation value.
Z[1,2](X[1], X[2]) = 1    (48)
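The practical consequence of the approximation Z[1,2] = 1 is that the complex-attack evaluation factorizes into a product of single-attack evaluations. A minimal sketch under that assumption follows; the two single-attack curves are illustrative stand-ins, since formulas (16) and (48) and the measured curves are not reproduced here.

import math

def robustness_saturation(ratio):
    # Stand-in robustness curve for the chromatic-saturation attack;
    # ratio = 1.0 means no attack.
    return math.exp(-8.0 * abs(ratio - 1.0))

def robustness_noise(sigma):
    # Stand-in robustness curve for the noise-adding attack.
    return 1.0 / (1.0 + (sigma / 10.0) ** 2)

def robustness_complex(ratio, sigma):
    # With a separable weighting function and Z[1,2](X[1], X[2]) = 1,
    # the complex-attack value is simply the product of the single-attack
    # values, so no two-dimensional surface needs to be stored.
    return robustness_saturation(ratio) * robustness_noise(sigma)

print(robustness_complex(0.9, 5.0))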
Next, an example is described in which an attack correlation curved surface is calculated for another complex attack. FIG. 24A shows the attack correlation value calculated according to formula (16) for a complex attack that combines an image cropping attack and a noise adding attack. For the image cropping attack, the area ratio of the cropped image to the original image is used as the attack parameter.
On the other hand, for the noise adding attack, the standard deviation is used as the attack parameter. FIG. 24B shows an example of an attack correlation curved surface approximated with the curved surface expressed by formula (20). However, for the image cropping attack, an attack parameter value of x = 1 represents no attack, so the curve expressed by formula (20) was horizontally shifted and inverted. In such a manner, a curved surface can be fitted to the variation and an attack correlation curved surface parameter can be obtained.
For a complex attack combining an image cropping attack and a noise adding attack, the attack correlation curved surface parameter was calculated while varying the digital watermark strength, and the variation was approximated with a quadratic function. The results are shown in FIGS. 25A, 25B, and 25C. Thus, in such a manner, by fitting a curve to the variation, attack correlation curved surface general parameters can be calculated.
According to the present invention, since data for obtaining a detection ratio is stored rather than a robustness evaluation value, the user can customize the method for calculating a digital watermark robustness evaluation value before inserting a digital watermark into an image.
In addition, since the detection ratio characteristic and the image quality deterioration characteristic are stored as parameters that approximate the detection ratio data and the image quality deterioration data, the amount of data to be stored can be remarkably reduced in comparison with the case in which the detection ratio data itself is stored. Thus, the required memory can be reduced.
Moreover, since the detection ratio data for each single attack that composes a complex attack and the data representing the synergistic effect of the attacks are stored separately, robustness evaluation values for single attacks and for a complex attack can be calculated efficiently. In addition, the robustness evaluation value, image quality deterioration ratio, and digital watermark strength can be analyzed for complex attacks and for a plurality of types of digital watermark information. Although the present invention has been shown and described with respect to a best mode embodiment thereof, it should be understood by those skilled in the art that the foregoing and various other changes, omissions, and additions in the form and detail thereof may be made therein without departing from the spirit and scope of the present invention.
Verbal Logic
For the symbolic extension of Verbal Logic, see First-Order Predicate Logic.^[1]
This outline is dedicated to Katherine Brandl, Ph.D. - certainly one of David Hilbert's most illustrious and beautiful mathematical descendants.
Verbal logic^[2] is the primary meta-language (i.e., "generative grammar") for defining all formal, symbolic, language systems.^[3]
The definitions used in this outline are designed to minimize the undefined vocabulary as much as practicable, resulting in a vocabulary where all terms are ultimately some combination of only three general, and essentially undefined, primitive terms:
1. Existence - or terms synonymous with existence, such as "to be," "is manifest," "is expressed," "occurs," etc.,
2. Negation - or terms synonymous with negation, such as "not" and the prefixes "non-," "un-," or "im-," or negations of previously defined terms, such as "neither" for the negation of "either,"
etc., and
3. Relation - in this case we select inclusive disjunction as our most primitive use of a relational term, or terms synonymous with disjunction, such as "or," "otherwise," "else," "either," or
"whether," etc.^[4]
The philosophy underlying this theory of logic is essentially an intersection of Frege's logicism and a non-Finitist version of Brouwer's intuitionism.
The indefinite articles ("a" or "an") are also undefined and constitute structural, syntactic elements in the English grammar, denoting only the existence of an indefinite object, and are thus classed with the terms that are purely existential. In addition, no term used in this outline will be defined in terms that have not been previously defined in the order of definition of terms.
Although the terms used in this outline are selected to be "intuitively obvious," they remain "terms of art" and their meanings are necessarily idiosyncratic to this outline in a manner designed to
generalize and eliminate as much nuance as possible and where reasonable. Of course, much grammar is assumed. Defining the nuanced and contextual meanings of grammatical terms as they might be used
in ordinary English is beyond the scope of the present outline.
For the purposes of this outline, after a term has been defined in its most general form, a later part of the outline may refine that definition and thereby make it less general^[8] by the use of
other previously defined terms.^[9]^[10]
Primitive Semiotics
Existence and Non-Existence^[11]
Identification, Differentiation, and Relationship
1. An other is not a self.^[29]^[30]^[31]
2. Identification with ^[32]^[33]^[34] an object (=, :=, ≡)^[35] is not an other.^[36]
3. An object identified with an other or its self is identical, duplicative, redundant, repetitive, synonymous, equivalent,^[37] such, congruent, in common with^[38] or like an other, similar,
self-similar, or the same.^[39]^[40]
4. An object not identified with its self or an other is different, separate, apart, or distinct (. , ; ≠, ¬≡).
5. Different objects are objects for/about/in/of/from/as/to^[41]/under^[42] which, where or whereby, when, if,^[43] or than^[44] an other object exists.
6. A relation or correspondence of/for/as to each or in regards to each^[45] is an object that is identical to or different from an other.^[46]^[47]
Uniqueness and Definition
1. An object is exactly one, precisely one, sole, single, mere, pure, the object, or unique (∃! ) where no other is identical.^[48]^[49]
2. A definition, identity introduction, specification, particularization, valuation, extension, projection, representation, or assignment is a unique identification of an object.
3. Defining, calling, saying, naming, locating, marking, particularizing, specifying, valuing, projecting, representing, assigning, identifying, or describing are acts of unique identification.
4. A defined object is a definiendum.^[50]
5. An expression or expressions that define, identify, call, say, name, locate, mark, particularize, specify, value, project, represent, assign, or describe a definiendum are definiens.
1. The separation, division, disjunction, differentiation, or disunion (A[xy], ∨, +, || ) of objects is a defined relation of an object to another or to its self. The related objects are disjuncts.
2. Or, otherwise, except, instead, alternatively, else, either, whether, but or but for,^[57] however, than, whereas, for/about/in/of/from/as/to^[58]/under^[59] which, where or whereby, when, or if^[60] represent a separation, division, disjunction, differentiation, or disunion.^[61]
3. An object that is a separation, division, disjunction, differentiation, or disunion of another object is a member, point,^[62] element, part, place, aspect, segment, portion, or constituent of
the other object.^[63]^[64]
4. An object that is separated, divided, disjoined, differentiated, or disunited into other objects is composed of, comprised of, made of, consists of, contains, possesses, owns, includes, adds,
appends, conjugates, conjoins, combines, or has those other objects as members, points, elements, parts, places, aspects, segments, portions, or constituents.
5. An elemental, atomic, or singular object, member, point, element, part, place, aspect, segment, portion, or constituent is an object that does not consist of other objects.^[65]^[66]
6. An object that is not atomic is plural or consists of more than one^[67] object.^[68]
7. A combination, collection, compound, association, group, molecule, class, set,^[69] space, integration, union, construction, product, conjugation, or conjunction (K[xy], ·, ∧, &, &&) is the
identification^[70] of the members, points, elements, parts, places, aspects, segments, portions, or constituents of an object. The objects related by conjunction are conjuncts.
8. And, yet, still, moreover, unless, nonetheless, also, together, as, so that, such that,^[71] depending on, lies within, is part of, by, or both represent a conjunction.
9. Analysis, determination, evaluation, computation, interpretation, or resolution is the combined or separate relations of integration and differentiation.
10. Any, every, all, or the totality or universe (∀) of objects is a collection of objects where no such other objects exist without the collection.^[72]
11. A combination that is identical with an other but for the presence of one or more than one different object or objects contained within the other, and where the combination otherwise possesses
all the same constituents as the other, has fewer or less constituent objects than the other.
12. A combination that is identical with an other but for the presence of one or more than one different object or objects contained within it, and which does not have fewer or less objects than the
other, has greater or more constituent objects than the other.
13. The amount or quantity of an object is a determination as to whether the object or set of objects is singular, plural, contains fewer or more objects than an other object,^[73] or is empty.
14. An object that consists of nothing is empty.^[74]
Order and Timing
1. A first, beginning, or prime object is stated if no other related objects are yet stated.^[75]
2. A subsequent, successive, future, next, or anticipated object is stated if all other related objects are stated first.
3. After or will be represent a subsequent object.
4. A previous, preceding, former, past, or prior object is stated if all other related objects are subsequently stated.^[76]
5. Already, before, was, or has been represent a previous object.
6. A last, ultimate, ending, terminating, or final object is stated if all other related objects have been stated.^[77]
7. An object of a combination that is first or last is called a terminal or end.
8. An object or collection of objects in combination with more than one terminal or end point is in the middle, between, over, through, or within^[78] those terminals or end points, and the
combination, including the terminal or end points, is called a segment.^[79]
9. An order, sequence, series, or time line of objects is a determination of whether an object or objects exist before or after another object.^[80]
10. Timing, ordering, or sequencing is a determination of an order, sequence, or series of events.^[81]
11. A determined order, sequence, or series of events is temporal, ordered, or sequenced.^[82]
12. A period is a segment of time.
13. A moment is an atomic instance of time.^[83]
14. The present is a moment of time that is neither past nor future.
15. An object that exists in the past, present, and future always or forever occurs.
16. An object that always or forever occurs is eternal.^[84]
The Objects of Mind and Language
1. An intention^[85] or purpose is an expression of self.^[86]^[87]^[88]
2. An awareness or consciousness is a present manifestation of intention or purpose.^[89]^[90]
3. Sentience, deliberation, intentionality, or thinking is an awareness of self and the relationship of self and an other.^[91]^[92]
4. A being is a conscious object; a thinker is a sentient being.^[93]
5. A mind, psyche, or intelligence is an objectification of a sentient being^[94] or thinker's consciousness.
6. A thought, idea, cognition, or concept is an object related by a mind, psyche, or intelligence.^[95]
7. A meaning or intensionality^[96] is a thought, idea, cognition, or concept put in relation to another or to one's self^[97] and is called conveyed or communicated.^[98]^[99]^[100]
8. A thought, idea, cognition, or concept that is meaningful is semantic.
9. A structural relation is a relation that is not semantic.^[101]
10. Information or an expression is a thought, idea, cognition, or concept when conveyed or communicated.^[102]
11. A symbol or signifier is an object or collection of objects that represent a thought, idea, cognition, or concept.^[103]
12. A language is a symbolic expression of meaning or information.^[104]
13. A grammar is a definition of a language.
14. A syntax is a defined collection of structural relations of grammatical combination.^[105]
15. Rhetoric is a conveyance or communication of information to another or one's self by a language.^[106]
16. A word or string is an element of grammar.^[107]^[108]
17. An alphabet is a collection of symbols, called letters or characters, of which words or strings are composed.^[109] A word, string, symbol, or signifier composed of only one letter is called an
atomic or elemental word, string, symbol, or signifier. A word, string, symbol, or signifier composed of no letters is empty.
18. A lexicon or vocabulary is a collection of words or strings that compose a language.^[110]
19. A sentence or formula is a grammatical and syntactic combination of words, strings, letters, characters, symbols, or other signifiers.^[111]^[112]^[113]
20. A phrase or sentence fragment is any part of a sentence.^[114]
21. A statement^[115] or message is a rhetorical sentence.^[116]^[117]
22. A syntactic and grammatical relation of words, strings, letters, characters, symbols, or other signifiers is well formed.^[118]^[119]
23. A proposition is meaning intended or information conveyed by a sentence, statement, formula, or message.^[120]
1. The substance, content, or material of an object is a definition of that object.^[121]
2. A condition, qualification, predicate, context, circumstance, characteristic, attribute, property, state, structure, quality, design, figure, pattern, or form is a definition of an object's
meaning apart from its substance and is called content neutral.^[122]^[123]^[124]
3. Objects share a condition or qualification, or have a common condition or qualification, where each of the objects possesses a same condition or qualification.
4. A difference in a condition or qualification represents a change or variable in that condition or qualification.
5. A condition or qualification that does not change is constant or remains the same.^[125]
6. A condition or qualification is true, verum, a truth, positive, affirmative, manifest, actual, evident, solved, satisfied, found, applicable, valid, correct, or holds true^[126] where that
condition exists in relation to an object.^[127] Where it does not so exist, the condition is false, falsum, a falsehood, negative, invalid, inconsistent, incorrect, inchoate, inert, fictitious,
unsolved, or inapplicable.^[128]
7. Whether a condition or qualification is true (T or 1) or false (F or 0)^[129] is the truth value, truth condition, logical value,^[130] truthfulness, veracity, or falsity of that condition.^[131]
8. The opposite, complementary, or contradictory ( ¬, ~,^[132] !, N, -, \, "co-" prefix, or a bar over the term) truth value of true is false, and the opposite, complementary, or contradictory truth
value of false is true.
9. A condition that exists so that a contradictory condition does not occur is determined, certain, absolute, particular or specific, exact, precise, consistent, or proved true or false.^[133]^[134]
10. A condition that is not certain may/might or may/might not occur.
11. A condition that exists so that contradictory conditions may/might or may/might not occur is possible ("sometimes").
12. A possibility that always may/might occur can occur.^[135]
13. A possibility that can occur is able to occur.^[136]
14. A possibility that always may/might not occur cannot occur.^[137]
15. A possibility that cannot occur is not able to occur.
16. A possibility that cannot not occur must, will, or shall occur.
17. An elemental, atomic, or singular object, member, point, element, part, place, aspect, segment, portion, or constituent is an object that cannot be separated, divided, disjoined, differentiated,
or disunited into other objects.
18. An impossible ("never") condition is a condition that cannot exist for truthfulness to occur.^[138]
19. A necessary, required, inherent, essential, or intrinsic ("always") condition is a condition that must exist for truthfulness to occur.^[139]
20. A sufficient ("enough") condition is a condition that is not necessary for truthfulness but for which, where it exists, truthfulness will always occur.^[140]
21. A whole, entire, or complete object is one that possesses all its necessary parts.
22. A well-formed sentence or formula is a sentence or formula that is whole, entire, or complete, as well as syntactic.^[141]^[142]^[143]
23. A condition that is or can be determined to exist by a sentient being is known or knowable.^[144]^[145]
24. Observation is a sentient being^[146] knowing an other.^[147]^[148]
25. An empirical truth is a determination of validity by observation.^[149]^[150]^[151]
26. An assumption or presumption^[152] ( | or : ) is a condition that is not empirically proved but exists for the purpose of making a proposition or other statement.
27. Reason^[153] is a determination of the truthfulness or falsity of a condition.^[154]^[155]
28. A condition, the truthfulness or falsity of which can be determined, is reasonable or within reason.^[156]
29. A belief is a reason to know the truthfulness or falsity of a condition.^[157]^[158]
30. A rational condition is a condition that is both reasonable and true.
31. A rigorous condition is one that is both reasonable and certain.^[159]
32. A condition is well-defined or well-founded where it is both rational and rigorous.^[160]^[161]
33. A memory or recollection is an awareness of a past event or events.
34. A prescience, prediction, or foretelling is a belief^[162] in the possibility or certainty of a future event or events.
1. A term is a symbol, word, phrase, sentence, formula or other expression for a well-defined condition or combination of conditions.^[163]
2. A term is proper when it is well formed.^[164]
3. A term that cannot be known without definition or proof is explicit or express. Such a term is called a posteriori.
4. A term that may be known without explicit definition or proof is intuitive or implicit. Such a term is called a priori.
5. Terms which can only be known intuitively^[165] are primitive.^[166]
6. Axiom: There are only three primitive terms in verbal logic: the collective existential terms,^[167] disjunction, and negation.^[168]
7. A subject term ( x )^[169] of a statement is a term about which a predicate ( P ) term^[170] conveys a meaning.^[171]^[172]^[173]
Example: "x is over the age of eighteen" is the same as stating "Over the age of eighteen is x" because, regardless of word order, x is the subject for which being over the age of eighteen is the
8. A copula is a relationship between the subject and predicate terms of a statement.^[177]
9. A particular or specific condition is a condition applicable to one or more terms.
10. To state there exists, there is, or for some (∃) is to state a particular or specific condition about a term or group of terms that is not necessarily unique.
11. To state there is exactly one, precisely one, or only one (∃!) is to state a particular or specific condition about a term or group of terms so that the term or group of terms is necessarily unique.
12. A general or universal condition is necessarily applicable to all terms of a sentence, statement, formula, or message.^[179]
13. To state for all, every, any, or each (∀) is to state a general or universal condition for a term or terms.
14. A simple term or statement is a phrase or sentence composed of only one term or statement.
15. A compound term or statement is a phrase or sentence composed of more than one term or statement.
16. Substitution or replacement ( / ) occurs when the same conditions are applicable to a term as to another term and the term stands in the same relation as the other term.
17. A term that is true under some possible interpretation is satisfiable.
18. A term that is true under every possible interpretation is validated.
19. A term that is false under every possible interpretation is unsatisfiable.
20. A term that is false under some possible interpretation is invalidated.
21. Axiom: A validated substitution of a validated term in any well-formed statement, sentence, or formula always results in a valid expression.
22. A validated substitution of a validated term is justifiable or truth preserving.^[180]
23. A term that must exist and result in a valid expression is bound.
24. A term that need not exist and result in a valid expression is free or unbound.
25. The reversal or switching of terms^[181] occurs when, for both terms, one term is substituted for or replaces the other term.
26. A term that stands in consistent and valid relation to another follows, derives, draws from, or depends on ( ⊢ or ∴ ) the other.
27. Therefore, hence, as such, thereby, or how represent a consistent and valid relation.
28. A statement has existential import (∃, ∃!, ∀), and the necessary and/or sufficient conditions contained within the statement are thereby bound, if the truth of the statement depends on the
existence of an object or relation of objects.
Antecedents and Consequences
1. An antecedent, protasis, condition precedent, or premise is a condition from which another condition follows that is called a consequence, consequent, condition subsequent,^[182] apodosis, result, yield, conclusion, or derivation.^[183]^[184]
2. Axiom: A condition is necessarily either an antecedent or a consequence.^[185]
3. A condition is true, verum, a truth, positive, affirmative, manifest, actual, evident, solved, satisfied, unqualified, found, applicable, valid, justified, correct, or holds true^[186] where it is impossible for the truth value of a conclusion to be different from the truth value of a sufficient premise;^[187]^[188] in this case, the premises and conclusions are said to be consistent or non-contradictory.
4. Can, could, is able, or is represent a truth.
5. A condition is false, falsum, a falsehood, negative, invalid, unjustified, inconsistent, incorrect, inchoate, inert, fictitious, unsolved, or inapplicable where it is possible for the truth value
of a conclusion to be different from the truth value of a sufficient premise;^[190] in this case, the premises and the conclusion are said to be inconsistent or contradictory.
6. Cannot, could not, is unable, or is not represent a falsity.
Claims, Arguments, and Conclusions
1. A claim, conjecture, hypothesis, question, problem, belief,^[191] or allegation is a condition or combination of conditions of unknown truthfulness.
2. To purport, allege, surmise, hypothesize, posit, propose, believe, or make a case is to state a claim.
3. To pose an argument or to debate is to relate premises and conclusions so as to state a claim about the truthfulness of a relation^[192] or condition.^[193]
4. A passage or pericope is one or more terms or statements that together may or may not contain an argument. A passage or pericope contains an argument where it purports to prove the truthfulness
of a relation or condition; otherwise it does not contain an argument.
5. A statement, proposition, or argument is unambiguous or unequivocal where it purports to prove one and only one possible conclusion.
6. A statement, proposition, or argument is ambiguous or equivocal where it purports to prove more than one possible conclusion.
7. An explanation or clarification is a statement or group of statements that state a proposition that has been previously proved.^[194]^[195]
8. The explanandum states the proposition to be explained; the explanans is the statement or group of statements that purport to explain the explanandum.
Sets and Their Members
1. A set or space ({ }) is an object that relates a well-defined^[196] combination or collection of objects.^[197]^[198]
2. A member of a set, element of a set, point of a set, instance of a set, or constituent of a set is an object that satisfies the conditions required for combination or collection (i.e., membership
) in a set.^[199]
3. A set contains, consists of, or is otherwise composed of, its members.
4. A member of a set is inside or contained within the set.^[200] An object that is not a member of a set is outside the set.
5. Axiom: A set is a kind of object and may therefore be a member of another set.
6. A class, category, type, family, or kind is a combination or collection of all objects that share a condition.^[201]^[202]
7. A case is a set of conditions.^[203]
8. A categorical proposition is a case made for a category of objects.
1. An inference is an application of reason to the relations of terms of a statement, proposition, or argument.^[204]
2. An inference is valid, justified, or truth-preserving^[205] if a conclusion is true whenever any sufficient premise, or all necessary premises, are also true.
3. An inferential rule, principle, or criterion is a statement of a generally valid inference.^[206]
4. A particular inference complies with,^[207] is permissible, consistent with, or obeys a rule or set of rules where any conclusion drawn from an application of the rule or set of rules is valid.
5. A cause is any antecedent that is necessary and/or sufficient^[208] for the truth of an inference.
6. An effect is any true result of a causal inference.
Logical Inference
1. Logic is a reasoned analysis, determination, evaluation, computation, interpretation, or resolution of the satisfiability or validity of conditions, terms, or inferences contained within a
statement, proposition, or argument.^[209]
2. A condition, term, or inference is logical if it is determined to be satisfiable or valid.
3. A logical statement, logical proposition, logical expression, or logical argument ("truth-bearer") is a statement, proposition, or argument constructed entirely from logical conditions, terms, or inferences.
4. A formal or abstract analysis is an analysis that occurs in regard to the meanings of predicate terms but without regard to the meanings of subject terms. Such an analysis is content neutral.
5. An informal, meta,^[212] concrete, or reified analysis is an analysis that occurs in regard to the meanings of both subject and predicate terms. Such an analysis is not content neutral.
6. A logical form, argument form, or test form is obtained by formal abstraction of the inferential relationships apart from the meanings of the subject terms.^[213]
7. Informal verbal logic is the logic of statements, propositions, expressions, or arguments and the terms they contain in the meta-language.^[214]
8. A logical object is a logical term, statement, proposition, expression, or argument, and any associated conditions or inferences, for which a purported truth value is claimed.^[215]
9. Any logical terms, statements, propositions, expressions, or arguments are logically equivalent (≡) if a term, statement, proposition, expression, or argument may be substituted for another term, statement, proposition, expression, or argument with no change in the logical conditions or truth value of the term, statement, proposition, expression, or argument in which the substitution occurs.
10. Objects are equal ( = ) if all the conditions of one object are identical with all the corresponding conditions of the other objects.^[217]^[218]
11. The complement, opposite, contrary, or contradiction of a logical object is a purported equivalent object evaluating to the opposite truth value.^[219]^[220]
12. A logical object is self-consistent or internally consistent where it is true by its own conditions, terms, and inferences.
13. A logical object is self-contradictory, internally contradictory, or internally inconsistent where it is false by its own conditions, terms, and inferences.
14. A logical object is self-evident where it requires no explicit proof for the determination of its truthfulness other than the knowledge^[221] of its conditions, terms, and inferences.^[222]
15. An axiom or postulate is a logical object that is both self-consistent and self-evident.
16. Axiom: A logical object is always a reasonable object.
17. An axiom schema is an axiom that contains a condition of general applicability to the members of a specific set of objects.^[223]
18. A corollary is an axiom that follows^[224] intuitively^[225] from another axiom or logical object.
19. Axiom: Truth of Axioms: All proper axioms are true.
20. Corollary: Logical Equivalence of Axioms: All proper axioms are logically equivalent.
21. Corollary: Logical Equivalence of Axioms and Corollaries: All corollaries of proper axioms are true.
22. Axiom: Logical Equivalence of Objects and Corollaries: A logical object and its corollary are logically equivalent.
23. Axiom: Law of Identity (an aspect of the Aristotelian Non-Included Mean):^[226] A logical object is always identical to itself.^[227]
24. Axiom: Law of the Excluded Middle (an aspect of the Aristotelian Non-Included Mean):^[228] A logical object has only two possible states: true or false.^[229]^[230]^[231]
25. Axiom: Law of Non-Contradiction (an aspect of the Aristotelian Non-Included Mean):^[232] A logical object is never both true and false or neither.^[233]
26. Corollary: Complementarity of Bivalent Truth Values: For every affirmation there corresponds exactly one negation, and every affirmation and its negation are necessarily 'opposed' such that one
and only one of them must be true, and the other false.^[234]^[235]
27. A logical possibility is any statement that may or may not be true, depending on the choice of premises.
Example: A canine may or may not be a dog.
28. A logical impossibility is any statement that cannot be true, regardless of the choice of premises.
Example: A dog is not a dog.
29. A logical necessity or logical truth is a statement that is true under any possible interpretation.^[236]
Example: Something that is true cannot be false.^[237]^[238]
30. A tautology or tautological truth ( ⊨ ) is a statement that is true regardless of the truth values of the premises.^[239]^[240]^[241]^[242]
Example: x ≡ x.
31. Axiom: The logical complement of a tautology is a contradiction.
32. A vacuous truth is a logically valid statement that is devoid of content because it asserts something about all members of a class or set that is empty.
Example: "All cell phones in the room are turned off. Therefore, we will not hear any telephones ringing during the performance." If there were, in fact, no cell phones in the room to turn on at
the time the argument was made then this argument states a vacuous truth.
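Vacuous truth has a direct computational analogue: a universal claim quantified over an empty collection evaluates to true. A one-line Python illustration (the variable names are mine):

# A universal claim about an empty set is vacuously true.
phones_in_room = []
print(all(phone == "off" for phone in phones_in_room))  # True, vacuously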
33. Theorem: The negation of a tautology can never be proved without a contradiction.
Assume that x ≡ y is a tautology.^[243] Because, by definition, this statement must be true regardless of whether x is true or y is true,^[244] it is impossible to choose any premise for which the statement will be false. Therefore, we can never state or prove the negation (i.e., the falsity) of a tautology. Since a logical tautology and a contradiction are logical complements of each other, the negation of a tautology can never be stated without a contradiction.
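Because a tautology must be true under every assignment of truth values, tautology-hood of a truth-functional formula is decidable by finite enumeration. A minimal sketch (the function names are mine, not the outline's):

from itertools import product

def is_tautology(formula, n_vars):
    # True iff the formula holds under every assignment to its variables.
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

print(is_tautology(lambda x: x == x, 1))   # True: x ≡ x is a tautology
print(is_tautology(lambda x: x != x, 1))   # False: its complement never holds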
34. Theorem: Every tautology is also a logical necessity.^[245]
Because a tautology is true regardless of our choice of premises, it must also be a logical necessity since it is always true.
35. Theorem: Not every logical necessity is also a tautology.
It is a logical necessity that the Set of All Sets must contain itself as a member. However, such a set must also contain the Set of All Sets That Do Not Contain Themselves as Members, resulting
in a contradiction. Therefore, although the premise is a logical necessity, it is not tautological because the conclusion is false.
36. A logical object is tautologically equivalent ( ≡^[246] or := ^[247]) to another logical object if both always result in the same truth values, given the truth values of all possible premises.
37. Theorem: Every tautological equivalence is also a logical necessity.
Because a tautology is true regardless of our choice of premises, it must also be a logical necessity since it is always true. Because two statements that are equivalent must also share the same
truth value, where the premises must be true then the equivalent statements must also be true.
38. Theorem: Not every logical necessity is also a tautological equivalence.
It is a logical necessity that the Set of All Sets must contain itself as a member. However, such a set must also contain the Set of All Sets That Do Not Contain Themselves as Members, resulting
in a contradiction. Therefore, although the premise is a logical necessity, it is not tautological because the conclusion is false. Since a logical necessity can result in a false conclusion, and
because a tautological equivalence can only have a true result, not every logical necessity is also a tautological equivalence.
39. A logical object that must be false if any necessary premise is also false, and must be true if any sufficient premise is also true, is called a logical consequence or logical implication ( → ).
40. A logical object that is always true because all possible antecedent premises are also always true is called a tautological consequence or tautological implication.
41. Theorem: Every tautological consequence is also a logical consequence.
Since all possible antecedents are true for a tautological consequence, any necessary or sufficient conditions must also be true, which means that the consequence must also be true. Therefore, any tautological consequence also satisfies the requirements of a logical consequence.
42. Theorem: Not every logical consequence is also a tautological consequence.
A logical consequence may be false whenever a necessary condition is also false. A tautological consequence, on the other hand, can never be false because no antecedent condition may ever be
false. Therefore, not every logical consequence is also a tautological consequence.
The Logical Operators
1. An operator is a term or symbol that represents an inference.^[250]
2. An operand is any antecedent to which an operator is applied.
3. An operation is a combination of an operator and the operands on which the operator is applied that results in some consequence, also known as a result or solution.
4. A definition that contains an additional term or symbol in the definiendum that is also contained in a definiens, and where the definiendum does not appear in the definiens, is a recursive definition.
5. An operation or inference that contains an additional term or symbol in the result that is also contained in one or more operands, and where the result does not appear as a term in any operand,
is a recursive operation or inference.^[252]
6. Theorem: The Logical Validity of Recursion - Whereas the evaluation of a term stated in a conclusion might logically depend on an evaluation of the same term stated in an antecedent (e.g., x → [x
→ y], where x is known or assumed and y is unknown and unassumed), in a recursive inference the term in the antecedent does not depend for its truth on a proof of the statement's conclusion. This
must be distinguished from a circular argument, which is not logically valid because, in such an argument, the evaluation of an antecedent term logically depends on an evaluation of the
argument's conclusion (e.g., [x → y] → y, where x is known or assumed and y is unknown and unassumed).^[253]^[254]
Example: "If a duck has wings then it is a duck that can fly." Because we have assumed the fact that a duck exists as one of the premises, there is nothing wrong with including the fact of the
duck within the conclusion - whether the duck can fly. This argument is not circular because it does not entirely assume the truth of the conclusion by assuming the truth of a premise but,
instead, assumes a premise as part of the conclusion that is proved to be true. However, the statement: "if a duck has wings then a duck exists" is circular and logically invalid because the
truth of the conclusion is entirely dependent on the truth of one of the statement's assumptions.
7. A negation ("not") operator ( ¬, ~, !, N, \, or a bar over the term ) states the opposite truth condition of an operand^[255] statement.
8. An inclusive (non-exclusive) disjunctive ("or") operator ( +, ∨, || ) is a compound construction^[256] where any operand antecedent must be true for the consequence also to be true.^[257]^[258]
9. An exclusive disjunctive ("either...or") operator ( ⊕ ) is a compound construction^[259] where one and only one antecedent must be true for the consequence also to be true.^[260]^[261]
10. A conjunctive ("and") operator ( &, &&, ∧, ·, K[xy] )^[262] is a compound construction^[263] where all antecedent operands must be true for the consequence also to be true.^[264]
Conditional Statements
1. A conditional operator ( → ), conditional statement ("if . . . then" statement, "because," "since"), or linguistic implication expresses an inference where a consequence follows from an
antecedent or antecedents.^[265]^[266]
Example: If the animal has feathers then it is a bird.^[267]
2. A conditional statement expresses a necessary inferential relationship or condition where the presence of the consequence implies^[268] the presence of the antecedent, but where the presence of
the antecedent does not imply that the consequence will occur, hence with some ambiguity in the result.^[269]^[270]
Example: If X is not legally an adult then X is not over the age of eighteen.
3. A conditional statement expresses a sufficient inferential relationship or condition where the presence of the antecedent implies the presence of the consequence, but where the presence of the
consequence does not imply the presence of the antecedent, hence with some ambiguity in the cause.^[271]^[272]
Example: If X is over the age of eighteen then X is legally an adult.^[273]
4. If the validity or invalidity of an antecedent is a necessary or sufficient condition for the occurrence of a consequence then the antecedent supports, implies, entails or is relevant to or
evidence of (i.e., "tends to prove or disprove") the consequence, otherwise it is irrelevant.^[274]
5. An inferential relationship is causal or material ( → ) if an unambiguous consequence or set of consequences^[275] necessarily results from the occurrence of a sufficient antecedent or set of antecedents.
6. An inferential relationship is non-causal or immaterial if a consequence or set of consequences is ambiguous or does not necessarily result from the occurrence of an antecedent or antecedents.
7. Theorem: Every term that is material is also relevant.
Proof: Since a material term must, by definition, be a sufficient antecedent for the truth of the consequent, it also satisfies the definition of a relevant antecedent, which can be either
sufficient or necessary. Therefore, every material term is also a relevant term.
8. Theorem: Not every relevant term is also material.
Proof: A relevant term may be a sufficient or necessary antecedent for the truth of the consequence. However, a material term, by definition, may only be a sufficient term, not merely a necessary term, for the truth of the consequence. Therefore, not every relevant term is also a material term, as is the case with relevant necessary terms.
9. A material implication, logical implication, or logical consequence ( → ) occurs where a conditional statement is false if and only if the antecedent is true and the consequence is false.^[277]
10. Where a consequence is true if and only if both the antecedent and consequence are true, and false if and only if both are false, and both results are unambiguous, then the truth of the
antecedent is a necessary and sufficient condition for the truth of the consequence, and vice-versa. This inferential relationship is also known as a bidirectional conditional statement ( ↔ ) and
it is an alternative definition of logical equivalence (≡).^[278]
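The truth-functional content of the last two items can be checked by enumeration: the material conditional is false only when the antecedent is true and the consequence false, and the biconditional is exactly implication in both directions. A short sketch (names are mine):

from itertools import product

def implies(a, b):
    # Material implication: false only when a is true and b is false.
    return (not a) or b

def iff(a, b):
    # Biconditional: true when a and b share a truth value.
    return a == b

# a <-> b is logically equivalent to (a -> b) AND (b -> a).
for a, b in product([True, False], repeat=2):
    assert iff(a, b) == (implies(a, b) and implies(b, a))
print("biconditional = implication in both directions, on all assignments")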
Logical Argument
1. A sound^[279] argument is an argument that contains both valid inferential claims^[280] and valid factual claims.^[281]^[282]
2. An unsound argument is an argument that has either an invalid inferential claim or an invalid factual claim, or both.
3. Axiom: The Truthfulness of Sound Arguments: A sound argument is always true.^[283]^[284]^[285]
4. Axiom: A Sound Argument May Contain Irrelevant False Claims: Irrelevant claims may be included in or omitted from an argument without affecting the argument's truthfulness.^[286]^[287]
5. A deduction is an argument where a conclusion necessarily follows from one or more premises.^[288]^[289]
6. An induction is a predictive argument where a conclusion is arrived at by reasoning from the parts to a whole, from particulars to generals, or from the individual to the universal.^[290]
7. A syllogism is a deduction that is composed of at least two premises (the major and minor premises) and one conclusion, where each premise is stated in relation to the other premises so as to
infer the conclusion.
Example: All birds have feathers. (the major premise)
An ostrich is a kind of bird. (the minor premise)
Therefore, an ostrich has feathers. (the conclusion)
8. The major term of a syllogism is shared by the major premise and the predicate of the conclusion.^[291]
9. The minor term of a syllogism is shared by the minor premise and the subject of the conclusion.^[292]
10. The middle term of a syllogism is shared by the major and minor premises.^[293]
11. A polysyllogism (also called multi-premise syllogism, climax, or gradatio) is a set of any number of syllogisms such that the conclusion of one is a premise for another. Each constituent
syllogism is called a prosyllogism except the very last, because the conclusion of the last syllogism is not a premise for another syllogism.
Example: If one argues that a given number of grains of sand do not make a heap, and that an additional grain does not either, then to conclude that no additional amount of sand will make a heap
is to construct a polysyllogism.
12. A sorites argument is a particular form of polysyllogism in which a set of incomplete syllogisms is so arranged that the predicate of each premise forms the subject of another until the subject
of the first is joined with the predicate of the last in the conclusion.^[294]
Example: If A then B; If B then C; If C then D; If D then E; Therefore, if A then E.
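A sorites chain is just iterated transitivity, so its validity can be confirmed by the same finite enumeration that works for any truth-functional form. A minimal sketch (names are mine):

from itertools import product

def implies(x, y):
    return (not x) or y

# Premises: A->B, B->C, C->D, D->E.  Conclusion: A->E.
valid = all(
    implies(implies(a, b) and implies(b, c) and implies(c, d) and implies(d, e),
            implies(a, e))
    for a, b, c, d, e in product([True, False], repeat=5)
)
print(valid)  # True: the chained conditionals entail "if A then E"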
13. A theorem is an argument where a conclusion is deduced or induced^[295] from a group of axioms and/or proven premises.^[296]
14. A theory or system is a class or set of theorems that prove an hypothesis.^[297]
15. A lemma^[298] is a theorem that exists as an element in the proof of another theorem.^[299]
16. A system or theory is completely/fully/entirely axiomatized when every aspect of the system or theory may ultimately be derived from a known set of axioms.
17. The scientific method is the use of empirical evidence and inductive logic to posit the existence of universal laws^[300] or theories, called models, and the use of empirical evidence and deductive logic to test a model and thereby prove or disprove the truthfulness of the posited universal laws.
Manipulation of Terms
1. The positive or original form of a term, statement, proposition, or argument is identical to the term, statement, proposition, or argument itself.
2. Conversion occurs when the subject and predicate terms of an inferential statement replace each other. The resulting statement is called the converse of the original statement. A statement and
its converse are not logically equivalent statements unless the inferential relationship is bidirectional.^[301]
Example: If X is over the age of eighteen then X is legally an adult. Positive Statement
If X is legally an adult then X is over the age of eighteen. Converse Statement
3. Contraposition occurs when the truth values of both the subject and predicate terms of an inferential statement are complemented and the terms are then exchanged for one another. The resulting statement is called the contrapositive of the original statement.^[302]^[303]
Example: If X is over the age of eighteen then X is legally an adult. Positive Statement
If X is not legally an adult then X is not over the age of eighteen. Contrapositive Statement
4. The inversion of an inferential statement occurs due to the complementation of the truth values of both the subject and predicate terms without substituting those terms for each other. The resulting statement is called the inverse of the original statement; the inverse is not logically equivalent to the original statement, though it is logically equivalent to the converse, as shown below.
Example: If X is over the age of eighteen then X is legally an adult. Positive Statement
If X is not over the age of eighteen then X is not legally an adult. Inverse Statement
5. The obversion of an inferential statement occurs due to the complementation of the truth value of the predicate term of the statement. The resulting statement is called the obverse of the original statement. Because of the complementation of the predicate term, the obverse is not logically equivalent to the original statement, since the predicate term of the obverse is always the logical complement of the predicate term of the original statement.
Example: If X is over the age of eighteen then X is legally an adult. Positive Statement
If X is over the age of eighteen then X is not legally an adult. Obverse Statement
6. Theorem: The Logical Equivalence of the Contraposition of Statements: The contraposition of the terms of an inferential statement creates a logically equivalent statement.^[304]
positive →
│ p │ q │ p → q │
│ T │ T │ T │
│ F │ T │ T │
│ T │ F │ F │
│ F │ F │ T │
contraposition →
│ p │ q │ ¬q → ¬p │
│ F │ F │ T │
│ F │ T │ T │
│ T │ F │ F │
│ T │ T │ T │
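Both truth tables above, and the equivalence they establish, can be generated by brute force; the same check also disposes of Theorem 7 below. A minimal sketch (names are mine):

from itertools import product

def implies(a, b):
    return (not a) or b

def equivalent(f, g):
    # Logically equivalent: identical truth values under every assignment.
    return all(f(p, q) == g(p, q) for p, q in product([True, False], repeat=2))

positive = lambda p, q: implies(p, q)
contrapositive = lambda p, q: implies(not q, not p)
converse = lambda p, q: implies(q, p)
inverse = lambda p, q: implies(not p, not q)

print(equivalent(positive, contrapositive))  # True (Theorem 6)
print(equivalent(converse, inverse))         # True (Theorem 7, below)
print(equivalent(positive, converse))        # False: conversion alone is not equivalent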
7. Theorem: The Logical Equivalence of the Conversion and Inversion of Statements: The conversion of the terms of an inferential statement creates a statement that is logically equivalent to the
inversion of the terms of that statement, and vice-versa.^[305]
conversion →
│ p │ q │ q → p │
│ T │ T │ T │
│ T │ F │ T │
│ F │ T │ F │
│ F │ F │ T │
inversion →
│ p │ q │ ¬p → ¬q │
│ F │ F │ T │
│ T │ F │ T │
│ F │ T │ F │
│ T │ T │ T │
8. Theorem: The Principle of Bidirectionality: Because the subject and predicate terms of a bidirectional conditional statement are both necessary and sufficient conditions of each other, the subject and predicate terms are logically synonymous, and the contraposition, conversion, and inversion of the terms of the statement are each logically equivalent to the original, but the obversion is not.
positive ↔
│ p │ q │ p ↔ q │
│ T │ T │ T │
│ F │ T │ F │
│ T │ F │ F │
│ F │ F │ T │
conversion ↔
│ p │ q │ q ↔ p │
│ T │ T │ T │
│ T │ F │ F │
│ F │ T │ F │
│ F │ F │ T │
contraposition ↔
│ p │ q │ ¬q ↔ ¬p │
│ F │ F │ T │
│ F │ T │ F │
│ T │ F │ F │
│ T │ T │ T │
inversion ↔
│ p │ q │ ¬p ↔ ¬q │
│ F │ F │ T │
│ T │ F │ F │
│ F │ T │ F │
│ T │ T │ T │
obversion ↔
│ p │ q │ p ↔ ¬q │
│ T │ F │ T │
│ F │ F │ F │
│ T │ T │ F │
│ F │ T │ T │
The Classical Inferences^[306]
1. Modus Ponens (Modus Ponendo Ponens^[307] or Implication Elimination) (A→B; A ∴ B)
This argument requires two premises. The first premise is the conditional ("if-then") claim, namely that A implies B. The second premise is that A, the antecedent of the conditional claim, is
true. From these two premises it can be logically deduced that B, the consequent of the conditional claim, must be true as well. Modus ponens is both self-consistent and intuitively obvious.
Therefore, we consider modus ponens to be axiomatic.^[308]^[309]
If a bird quacks then it is a duck.
A certain bird is quacking.
Therefore, the bird must be a duck.
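Validity in the sense used throughout this list, namely that no assignment makes every premise true while the conclusion is false, is mechanically checkable for truth-functional forms. A minimal sketch (names are mine), applied to modus ponens and, for contrast, to the fallacy of affirming the consequent:

from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion, n_vars):
    # Valid iff no assignment satisfies all premises yet falsifies the conclusion.
    return all(
        conclusion(*vals)
        for vals in product([True, False], repeat=n_vars)
        if all(p(*vals) for p in premises)
    )

# Modus ponens: A -> B, A, therefore B.
print(valid([lambda a, b: implies(a, b), lambda a, b: a],
            lambda a, b: b, 2))   # True

# Affirming the consequent: A -> B, B, therefore A -- invalid.
print(valid([lambda a, b: implies(a, b), lambda a, b: b],
            lambda a, b: a, 2))   # False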
2. Modus Tollens (Modus Tollendo Tollens^[310] or Denying the Consequent) (A→B; ¬B ∴ ¬A)
This argument requires two premises. The first premise is the conditional ("if-then") claim, namely that A implies B. The second premise is that B is false. From these two premises, it can be
logically deduced that A must be false.^[311]
If a bird quacks then it is a duck.
A certain bird is not a duck.
Therefore, the bird must not be quacking.
The proof for this proposition lies in the realization that modus tollens simply states the contraposition of the original statement which, as we have seen earlier in this outline, always preserves the truth of the original statement.
3. Modus Ponendo Tollens^[312] (¬[A∧B]; A ∴ ¬B)
If the negation of a conjunction holds true, and one of its conjuncts holds true, then the negation of its other conjunct also holds true.
We are told it is not true that a certain bird both quacks and is also a sparrow.
We then find that the bird does quack.
Therefore, the bird cannot be a sparrow.^[313]^[314]
The proof for this proposition can be seen from an evaluation of the negated conjunction: both conjuncts cannot be true at once. Therefore, where one of the conjuncts is true, the other must be false, since otherwise both would be true, which would contradict the premise that the conjunction is false.
4. Conjunction Introduction (A & B ∴ A∧B)
If A is true, and B is true, then the conjunction of A and B is true.
If the bird quacks,
and if the bird is also a duck,
Then it is true that the bird both quacks and is also a duck.^[315]
This proposition is essentially axiomatic. If we have the presence of both antecedent conditions then, by definition, we have their conjunction since both conditions are present.
5. Simplification (A∧B ∴ A)
If the conjunction of A and B is true, then A must be true.^[316]
The bird both quacks and is a duck.
Therefore, it is true that the bird must quack.
(It is also true that the bird must be a duck.)^[317]
Simplification states that, where the conjunction of two antecedent conditions is true, each of those conditions must itself be true. The utility of this proposition is readily observable from the symbolic statement of the rule: it allows us to eliminate either term, at our discretion, and thus simplify the statement by reducing the number of operative terms necessary to prove the truth of the proposition.
6. Disjunction Introduction (A ∴ A∨B)
This argument has one premise, A, and an unrelated proposition, B. From the assumed truth of the premise, A, it can be logically concluded that either A or B is true, since we at least know that A is true.
The bird is quacking.
Therefore, it is true that the bird quacks or it is a duck (or both), since we know that it does indeed quack.^[319]
Disjunction introduction is essentially the converse statement of simplification. Since either antecedent condition may be present for the sufficiency of the antecedent to be true, either or both terms may be present and the consequence will necessarily be true. The utility of this proposition lies in the fact that it permits us to add desired terms to a logical proposition without contradiction, so long as the relationship between the added terms is disjunctive (i.e., both antecedent terms need not be true in order for the consequence to be true so long as at least one of those antecedent terms is true).
7. Disjunction Elimination ([A∨B] & [A→C] & [B→C] ∴ C)
If A or B is true, and A entails C, and B entails C, then we may justifiably infer C.^[320]
For all birds, either it quacks or it is a duck, or both.
If a bird quacks then we know it is a kind of water fowl.
If a bird is a duck then it is also a kind of water fowl.
Since all birds either quack or are ducks then all birds must be water fowl.
Here, the truth of the proposition lies in the fact that each of the two disjuncts implies the same conclusion. Therefore, whichever disjunct is present (i.e., whichever side of the disjunction of the first two terms holds), the conclusion follows by modus ponens.
8. Disjunctive Syllogism ([A∨B] & ¬A ∴ B)
In this argument, the disjunction tells us that at least one of two statements (A or B) is true. Then we are told that A is not true. Therefore, we must infer that B must be true.^[321]^[322]
For all birds, either it quacks or it is a duck, or both.
The bird does not quack.
Therefore, the bird must at least be a duck.
This proposition is similar to disjunction elimination in that we eliminate the disjunction but this time by way of modus tollens, rather than by modus ponens. Since either (or both) of the two
antecedent conditions must be true for the consequence to be true, if one of those conditions is not true then the other must be true in order to preserve the truthfulness of the proposition.
9. Hypothetical Syllogism (The Theory of Consequences or Logical Transitivity) ([A→B] & [B→C] ∴ A→C)
This rule states the commonly known principle of transitivity. If A implies B, and B implies C, then A implies C.
If a bird quacks then it is a duck.
If a bird is a duck then it is water fowl.
Therefore, if the bird quacks then it is water fowl.
Besides modus ponens and modus tollens, this is probably the most important and useful of the classical inferences. It is transitivity that allows the causal linking of two antecedent conditions with a third so as to deductively prove a causal relationship between the first and last conditions in a series, which can be as long as the number of antecedent conditions used in the proposition. Its proof lies in the fact that each antecedent condition is the material cause of the next, so that the "chain" of cause and effect ultimately links the first and last conditions via the chain of the others.^[323]
10. Constructive Dilemma ([A→P] & [B→Q] & [A∨B] ∴ P∨Q)
If two conditionals are true, and at least one of their antecedents is in fact true, then at least one of their consequents must also be true.^[324]
If a bird sings then it is a song bird.
If a bird quacks then it is a duck.
Either the bird is singing or it is quacking.
Therefore, the bird is either a song bird or it is a duck.
Here we have a syllogistic linking of terms without creating a chain of causality as we did in the hypothetical syllogism. The disjunction of the antecedent conditions for each material
implication ensures that there is no "chaining" of all the antecedent conditions. However, the disjunction also ensures that, where either or both of the conditions that are antecedent to the two
material implications are true then one or both of the consequences must also be true.
11. Destructive Dilemma ([A→P] & [B→Q] & [¬P∨¬Q] ∴ ¬A∨¬B)
The destructive dilemma is the disjunctive version of modus tollens. The disjunctive version of modus ponens is the constructive dilemma.
If a bird sings then it is a song bird.
If a bird quacks then it is a duck.
Either the bird is not a song bird or it is not a duck.
Therefore either the bird is not singing or it is not quacking.
12. Biconditional Introduction ([A→B] & [B→A] ∴ A↔B)
If B follows from A, and A follows from B, then A if and only if B.
If the bird quacks then it is a duck.
If the bird is a duck then it quacks.
Therefore, the bird quacks if and only if it is a duck.
This is essentially the very definition of bidirectionality and is an example of proof by definition where the definition is itself well-founded.^[325]
13. Biconditional Elimination (A↔B ∴ A→B)
If ( A ↔ B ) is true then one may infer either direction of the biconditional - i.e., ( A → B ) and/or ( B → A ).
A bird quacks if and only if it is a duck.
Therefore, if a bird is quacking then it is a duck, and if a bird is a duck then it also quacks.
Every biconditional may be constructed from two material implications that state the inferential relationship of necessary and sufficient conditions in each direction because, in the
case of bidirectionality, each side is both a necessary and a sufficient condition for the other. Therefore, where bidirectionality exists, either or both of the component material implications
may be detached and used on their own.
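Each of the propositional rules of inference above can be checked mechanically: a rule is valid just in case no assignment of truth values makes all of its premises true and its conclusion false. The following is a minimal sketch of such a check in Python (the helper names implies and valid are ours, chosen for illustration); it verifies three of the rules above, and the others can be tested the same way.

    from itertools import product

    def implies(p, q):
        # Material implication: p -> q is false only when p is true and q is false.
        return (not p) or q

    def valid(premises, conclusion, n_vars):
        # Valid iff no row of the truth table makes all premises true and the conclusion false.
        return all(
            implies(all(f(*row) for f in premises), conclusion(*row))
            for row in product([True, False], repeat=n_vars)
        )

    # 8. Disjunctive syllogism: (A or B) and not-A, therefore B.
    print(valid([lambda a, b: a or b, lambda a, b: not a],
                lambda a, b: b, 2))                                   # True

    # 9. Hypothetical syllogism: (A -> B) and (B -> C), therefore A -> C.
    print(valid([lambda a, b, c: implies(a, b), lambda a, b, c: implies(b, c)],
                lambda a, b, c: implies(a, c), 3))                    # True

    # 10. Constructive dilemma: (A -> P), (B -> Q), (A or B), therefore P or Q.
    print(valid([lambda a, b, p, q: implies(a, p), lambda a, b, p, q: implies(b, q),
                 lambda a, b, p, q: a or b],
                lambda a, b, p, q: p or q, 4))                        # True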
Forms of Logical Argument^[326]
1. Mathematical Argument (deductive or strong inductive): An argument in which the exclusive means of inferring a conclusion occurs by way of theorizing the logical relationships of quantitative terms.
2. Deductive Argument: An argument where the conclusion necessarily follows from the premises.^[327]^[328]
3. Argument by Definition: An argument in which the exclusive means of inferring the conclusion occurs by way of the definition of some word or phrase.^[329]
4. Universal Instantiation: An argument wherein an inference is made from a truth about all members of a class of individuals to a truth about a particular individual of that class.^[330]
Example: All birds have feathers.
An ostrich is a bird.
An ostrich has feathers.
5. Universal Generalization: An argument wherein an inference is made from a truth about a particular member of a class of individuals to a truth about all members of that class.^[331]
6. Categorical Syllogism (Categorical Argument): A syllogism in which all the statements are categorical propositions.
Example: All birds have feathers.
All ostriches are birds.
All ostriches have feathers.
7. Hypothetical Syllogism (Hypothetical Argument): A syllogism having a conditional statement for at least one of its premises.
Example: All birds have feathers.
If an ostrich exists then it is a kind of bird.
All ostriches have feathers.
8. Disjunctive Syllogism: A syllogism having a disjunctive statement for at least one of its premises.
Example: Either a penguin can fly or it can swim.
A penguin cannot fly.
Therefore, a penguin can swim.
9. (Weak) Inductive Argument (Generalization): An argument where the conclusion is arrived at by reasoning from a part to a whole, from particulars to generals, or from the individual to the
universal. With the exception of mathematical induction, inductive arguments are generally weak.^[332]
10. Prediction: An argument where one or more premises state a known or knowable proposition occurring in the present or the past but where the conclusion states a proposition that is inferred to
occur in the future.
11. Argument Based on Signs: A kind of weak inductive prediction in which the stated signs (indications) are inferred to be predictive of the stated conclusion. A statistical syllogism occurs where
this prediction is based on a statistical, mathematical model.^[333]
12. Argument by Analogy: An argument that depends on the inferred similarity^[334] between two or more propositions.
13. Argument by Authority: An argument in which the conclusion is inferred from a statement by a presumed authority.^[335]
14. Causal Inference: An argument^[336] in which the knowledge of an effect is inferred from the knowledge of a cause, or vice versa.
Example 1: I left the ice cream cone on the hot pavement.
Therefore, it is probably melted.
Example 2: The ice cream cone is melted.
This is probably because I left it on the hot pavement.
15. Plausibility Argument (Educated Guess): An argument that hypothesizes a theory based on experience and similar results in analogous circumstances.^[337]
Categorical Propositions
1. The quality of a categorical proposition refers to whether the proposition affirms or denies the inclusion of an object to the class of the predicate. The two qualities of a categorical
proposition are therefore either affirmation or negation of the copula.^[338]
2. The quantity of a categorical proposition refers to the amount of objects in one class that are included in another class. The three possible quantities of a categorical proposition in informal
verbal logic are either all, some, or none.
3. The distribution of a categorical proposition refers to the logically permissible inferences that may be drawn from a particular combination of quality and quantity terms.^[339]
Types of Categorical Propositions
1. The Universal Affirmative (Latin mnemonic A)
General Statement: All S are P. ("All" is the quantifier; S is the subject; "are" is the copula; P is the predicate.)
Distribution: only the subject term is distributed (the predicate term is not distributed).
Example: All dogs are mammals, but not all mammals are dogs.
2. The Universal Negative (Latin mnemonic E)
General Statement: No S is P.
Distribution: all objects in the subject and predicate terms distribute bidirectionally.
Example: No beetles are dogs and no dogs are beetles.
3. The Particular Affirmative (Latin mnemonic I)
General Statement: Some S are P.
Distribution: neither term is entirely distributable in the other term.
Example: Some flowers have scent (it is not possible to say that all flowers have scent or that all scent comes from flowers).
4. The Particular Negative (Latin mnemonic O)
General Statement: Some S are not P.
Distribution: only the predicate term is distributed (the subject term is not distributed).
Example: Some mammals are not dogs, but all dogs are mammals.
The Square of Opposition
The modern Square of Opposition states that a universal affirmative (A) and the corresponding particular negative (O) always necessarily contradict each other, and that a universal negative (E)
and the corresponding particular affirmative (I) always necessarily contradict each other.^[340]
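These contradictory relationships can be verified by brute force over small set models. A minimal Python sketch (the variable names are ours): for every way of sorting three individuals into the subject class S and the predicate class P, the A proposition is true exactly when the O proposition is false, and the E proposition exactly when the I proposition is false.

    from itertools import product

    individuals = range(3)
    # Each individual is either in or out of S, and in or out of P.
    for membership in product([(False, False), (False, True),
                               (True, False), (True, True)], repeat=3):
        S = {i for i in individuals if membership[i][0]}
        P = {i for i in individuals if membership[i][1]}
        A = all(x in P for x in S)          # All S are P
        O = any(x not in P for x in S)      # Some S are not P
        E = all(x not in P for x in S)      # No S are P
        I = any(x in P for x in S)          # Some S are P
        assert A == (not O) and E == (not I)
    print("A/O and E/I are contradictories in every model.")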
Categorical Syllogisms
1. Classification of Categorical Syllogisms
Because, by the definition of a syllogism, S (the minor term) is the subject of the conclusion and appears in the minor premise, P (the major term) is the predicate of the conclusion and
appears in the major premise, and M is the middle term, the major premise links M with P, and the minor premise links M with S. However, the middle term can be either the subject or the
predicate of each premise in which it appears. The position of the middle term in each premise gives rise to the four basic types of syllogisms, known as The Four Figures:
Figure One
If M then P.
If S then M.
Therefore, if S then P.
Figure Two
If P then M.
If S then M.
Therefore, if S then P.
Figure Three
If M then P.
If M then S.
Therefore, if S then P.
Figure Four
If P then M.
If M then S.
Therefore, if S then P.
2. Identification of Categorical Syllogisms
Because each of the three categorical propositions within a syllogism can be any of the four types, and each such combination can occur in any of the four figures, there are four to the
fourth power (4 × 4 × 4 × 4 = 256) combinations of possible syllogisms. Each syllogism can be identified by combining the Latin mnemonic for each type (class) of categorical proposition,
in the order major premise, minor premise, conclusion, followed by the number of the figure of the syllogism structure.
Example: All dogs are mammals. (Universal Affirmative - A)
No beetles are dogs. (Universal Negative - E)
Therefore, no beetles are mammals. (Universal Negative - E)
Therefore, this syllogism can be identified as type AEE-1.
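Both the count and the identification scheme are easy to reproduce mechanically. A minimal Python sketch:

    from itertools import product

    # Three propositions, each of type A, E, I, or O, in one of four figures.
    codes = [maj + mino + concl + "-" + str(fig)
             for maj, mino, concl in product("AEIO", repeat=3)
             for fig in (1, 2, 3, 4)]
    print(len(codes))        # 256
    print("AEE-1" in codes)  # True: the example above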
Validity of Categorical Syllogisms
Although there are 256 possible combinations of terms in categorical syllogisms, only some of those syllogisms are logically valid. During medieval times, the valid combinations were given the
following designations:^[341]
1. Barbara (AAA-1) Example: All animals are mortal. All men are animals. Therefore, all men are mortal.
2. Celarent (EAE-1) Example: No reptiles have fur. All snakes are reptiles. Therefore, no snakes have fur.
3. Darii (AII-1) Example: All kittens are playful. Some pets are kittens. Therefore, some pets are playful.
4. Ferio (EIO-1) Example: No homework is fun. Some reading is homework. Therefore, some reading is not fun.
5. Cesare (EAE-2) Example: No healthy food is fattening. All cakes are fattening. Therefore, no cakes are healthy.
6. Camestres (AEE-2) Example: All horses have hooves. No humans have hooves. Therefore, no humans are horses.
7. Festino (EIO-2) Example: No lazy people pass exams. Some students pass exams. Therefore, some students are not lazy.
8. Baroco (AOO-2) Example: All informative things are useful. Some websites are not useful. Therefore, some websites are not informative.
9. Darapti (AAI-3) Example: All fruit is nutritious. All fruit is tasty. Therefore, some tasty things are nutritious.
10. Disamis (IAI-3) Example: Some mugs are beautiful. All mugs are useful. Therefore, some useful things are beautiful.
11. Datisi (AII-3) Example: All the industrious boys in this school have red hair. Some of the industrious boys in this school are boarders. Therefore, some boarders in this school have red hair.
12. Felapton (EAO-3) Example: No jug in this cupboard is new. All jugs in this cupboard are cracked. Therefore, some of the cracked items in this cupboard are not new.
13. Bocardo (OAO-3) Example: Some cats have no tails. All cats are mammals. Therefore, some mammals have no tails.
14. Ferison (EIO-3) Example: No tree is edible. Some trees are green. Therefore, some green things are not edible.
15. Bramantip (AAI-4) Example: All apples in my garden are wholesome. All wholesome fruit is ripe. Therefore, some ripe fruit are apples in my garden.
16. Camenes (AEE-4) Example: All coloured flowers are scented. No scented flowers are grown indoors. Therefore, no flowers grown indoors are coloured.
17. Dimaris (IAI-4) Example: Some small birds live on honey. All birds that live on honey are colourful. Therefore, some colourful birds are small.
18. Fesapo (EAO-4) Example: No humans are perfect. All perfect creatures are mythical. Therefore, some mythical creatures are not human.
19. Fresison (EIO-4) Example: No competent people are people who always make mistakes. Some people who always make mistakes are people who work here. Therefore, some people who work here are not
competent people.
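The validity of any such form can be confirmed by brute force: enumerate every way of sorting a small universe into the terms S, M, and P, and check that no model makes both premises true and the conclusion false. A minimal Python sketch (the helper names models and All are ours); it confirms Barbara and, by contrast, finds the form AAA-2 invalid (an instance of the undistributed middle discussed below).

    from itertools import product

    def models(n=3):
        # Every way of sorting an n-element universe into the terms S, M, and P.
        for bits in product([False, True], repeat=3 * n):
            yield ({i for i in range(n) if bits[3 * i]},      # S
                   {i for i in range(n) if bits[3 * i + 1]},  # M
                   {i for i in range(n) if bits[3 * i + 2]})  # P

    def All(X, Y):
        # The A proposition "All X are Y."
        return all(x in Y for x in X)

    # Barbara (AAA-1): All M are P; all S are M; therefore all S are P.
    print(all(All(S, P) for S, M, P in models() if All(M, P) and All(S, M)))  # True

    # AAA-2: All P are M; all S are M; therefore all S are P.
    print(all(All(S, P) for S, M, P in models() if All(P, M) and All(S, M)))  # False

Note that under these modern semantics the forms requiring existential import (Darapti, Felapton, Bramantip, Fesapo) only come out valid if the relevant classes are assumed non-empty, which echoes the existential fallacy discussed below.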
Syllogistic Fallacies
Occurring in Any Syllogism Type^[342]
1. Fallacy of Four Terms (Syllogistic Equivocation)
Also called quaternio terminorum, this fallacy occurs when a categorical syllogism contains four terms rather than the three terms that its two premises can validly relate.
Example (valid): All fish have fins. (major premise)
All goldfish are fish. (minor premise)
All goldfish have fins. (conclusion)
Here, the three terms are: "goldfish," "fish," and "fins," and the syllogism is valid.
Example (fallacious): All fish have fins. (major premise)
All goldfish are fish. (minor premise)
All humans have fins. (conclusion)
Here there is a fourth term, "humans." The premises don't connect "humans" with "fins," so the reasoning is invalid.^[343]^[344]^[345]
Occurring in Categorical Syllogisms
1. Affirmative Conclusion from a Negative Premise
This fallacy occurs when a categorical syllogism has a positive conclusion but one or more negative premises. For example: No fish are dogs, and no dogs can fly, therefore all fish can fly. This
is a fallacy because any valid form of categorical syllogism that asserts a negative premise must have a negative conclusion.
2. Existential Fallacy
In traditional Aristotelian logic, this fallacy occurs where a categorical syllogism has two universal premises and a particular conclusion. In other words, for the conclusion to be true, at
least one member of the class must exist, but the premises do not establish this condition. In modern logic, the existential fallacy is obviated by the use of conditional premises. Example #1:
All inhabitants of other planets are friendly, and all Martians are inhabitants of another planet; therefore, there are friendly Martians. The conclusion assumes the existence of Martians, the
factual invalidity of which can be rectified by using the premise if there are inhabitants of other planets then they are friendly. Example #2: All unicorns are animals; therefore, some animals
are unicorns. The conclusion assumes the existence of unicorns, the factual invalidity of which can be rectified by using the conditional premise if unicorns exist then they are animals.
3. Fallacy of Exclusive Premises
This fallacy occurs when both of the premises of a categorical syllogism are negative. Example: No mammals are fish. Some fish are not whales. Therefore, some whales are not mammals. This
syllogism is not valid because at least one premise of any given syllogism must be affirmative.
4. Fallacy of the Undistributed Middle
This fallacy occurs when the middle term in a categorical syllogism isn't distributed. Example: All Zs are Bs. Y is a B. Therefore, Y is a Z. It may or may not be the case that "all Zs are Bs,"
but in either case it is irrelevant to the conclusion. What is relevant to the conclusion is whether it is true that "all Bs are Zs," which is ignored in the argument. The fallacy is similar to
affirming the consequent and denying the antecedent in that, if the terms were swapped around in either the conclusion or the first premise, there would no longer be a fallacy.
5. Illicit Major Premise
This fallacy occurs in a categorical syllogism that is invalid because its major term is undistributed in the major premise but distributed in the conclusion. Example: All dogs are mammals. No
cats are dogs. Therefore, no cats are mammals. In this argument, the major term is "mammals." This term is distributed in the conclusion because we are making a claim about a property of all
mammals: that they are not cats. However, it is not distributed in the major premise (the first statement) where we are only talking about a property of some mammals: only some mammals are dogs.
This error occurs because we are assuming that the converse of the first statement (that all mammals are dogs) is also true, which is not a logically valid inference.
6. Illicit Minor Premise
This fallacy occurs in a categorical syllogism that is invalid because its minor term is undistributed in the minor premise but distributed in the conclusion. This fallacy has the following
argument form: All A are B. All A are C. Therefore, all C are B. Example: All cats are felines. All cats are mammals. Therefore, all mammals are felines. The minor term here is "mammal," which is
not distributed in the minor premise "All cats are mammals" because this premise is only defining a property of possibly some mammals (i.e., that they're cats.) However, in the conclusion "all
mammals are felines," mammal is distributed by stating that all mammals will be found to be felines. It is shown to be false by any mammal that is not a feline; for example, a dog.
7. Fallacy of Necessity
This fallacy occurs when a degree of unwarranted necessity is asserted in the conclusion. Example: Bachelors are necessarily unmarried. John is a bachelor. Therefore, John cannot marry. The major
premise (bachelors are necessarily unmarried) is a tautology and therefore valid on its face. The minor premise (John is a bachelor) is a statement of fact about John which makes him subject to the
major premise; that is, the minor premise declares John a bachelor, and the major premise states that all bachelors are unmarried. Because the conclusion presumes the minor premise will be valid
in every case, this presumption creates a fallacy of necessity. John, of course, is always free to stop being a bachelor, simply by getting married; if he does so, the minor premise is no longer
true and thus not subject to the tautology of the major premise. In this case, the conclusion has an unwarranted necessity by assuming, incorrectly, that John cannot stop being a bachelor.
Occurring in Disjunctive Syllogisms
1. Affirming a (non-exclusive) Disjunct
Also known as the fallacy of the alternative disjunct, this fallacy occurs when a deductive argument takes either of the two following forms:
A or B. A. Therefore, it is not the case that B.
A or B. B. Therefore, it is not the case that A.
The fallacy lies in concluding that one disjunct must be false because the other disjunct is true; in fact they may both be true (unless an exclusive disjunction forms the major premise). A
similar form that is valid has the second premise (rather than the conclusion) be a negation; the valid form is known as disjunctive syllogism. The following argument is a clear case of this
fallacy: It will rain somewhere tomorrow or the sun will shine somewhere tomorrow. It will rain somewhere tomorrow (it will rain here according to the weather forecast). Therefore, it is not
the case that the sun will shine somewhere tomorrow. This inference is obviously invalid: the sun is almost always shining somewhere on earth, so both of the premises are clearly true while
the conclusion is clearly false. The following example is trickier: Two is an even number or two is an odd number. Two is an even number. Therefore, it is not the case that two is an odd
number. This argument seems to be valid because there is another use of the word "or" in ordinary language that would seem more appropriate. If the disjunction is exclusive, that is to say,
if the "or" implies that only one of the disjuncts is true, then the argument is valid. However, the meaning of "or" used in ordinary language is different from its use in informal verbal
logic, where it is defined as an operator that avoids equivocation, and therefore this argument is invalid. In this case, the "or" is said to be inclusive, in that it stipulates that one or
both of the disjuncts is true. A similar argument that is in fact valid will have the implied assumption explicitly stated, as follows: Two is an even number or two is an odd number. Two is
an even number. No number can be both even and odd. Therefore, it is not the case that two is an odd number.
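The truth table makes the countermodel explicit. A minimal Python sketch: with an inclusive disjunction there is exactly one row where the premises of the first form are true while its conclusion is false.

    from itertools import product

    # Premises: (A or B) and A. Conclusion: not B. A countermodel is a row
    # where the premises are true and the conclusion is false (i.e., B is true).
    rows = [(a, b) for a, b in product([True, False], repeat=2)
            if (a or b) and a and b]
    print(rows)  # [(True, True)]: both disjuncts true, so the inference fails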
Occurring in Statistical Syllogisms^[346]
1. Accident
Also called destroying the exception or dicto simpliciter ad dictum secundum quid, this fallacy occurs in statistical syllogisms (a kind of argument based on a generalization) when an exception
to the generalization is ignored. It is one of the thirteen fallacies originally identified by Aristotle. The fallacy occurs when one attempts to apply a general rule to an irrelevant situation.
Related inductive fallacies include the overwhelming exception and the hasty generalization. For instance: Cutting people with a knife is a crime. Surgeons cut people with knives. Therefore,
surgeons are criminals. It is easy to construct fallacious arguments by applying general statements to specific incidents that are obviously exceptions. Generalizations that are weak generally
have more exceptions (the number of exceptions to the generalization need not be a minority of cases) and vice versa. This fallacy may occur when we confuse particular generalizations ("some")
for universal categorical statements ("always and everywhere"). It may be encouraged when no qualifying words like "some," "many," "rarely," etc., are used to mark the generalization. For
example: All Germans were Nazis. The premise above could be used in an argument concluding that all Germans, or current Germans, should be held responsible for the crimes of the Nazis.
Qualifying the first term corrects the argument: Some Germans were Nazis. This premise makes the weakness of the generalization more obvious, rather than appearing to be the statement of a categorical rule.
2. Converse Accident
Also called reverse accident, destroying the exception or dicto secundum quid ad dictum simpliciter, this fallacy is the deductive version of the hasty generalization, and occurs in a statistical
syllogism when a general rule is wrongly inferred from an exceptional or unrepresentative case. For example: Every swan I have seen is white, so it must be true that all swans are white. This fallacy is similar to the
slippery slope, where the opposition claims that if a restricted action under debate is allowed (i.e., allowing people with glaucoma to use medical marijuana) then the action will by stages
become acceptable in general (i.e., eventually everyone will be allowed to use marijuana). The two arguments imply there is no difference between the exception and the rule and, in fact,
fallacious slippery slope arguments often use the converse accident to the contrary as the basis for the argument. However, a key difference between the two is the point and position being
argued. The above argument using converse accident is an argument for full legal use of marijuana given that glaucoma patients use it. The argument based on the slippery slope argues against
medicinal use of marijuana because it will lead to full use. Whereas a slippery slope argument is not necessarily fallacious, a converse accident is always a formal fallacy.
Methods of Proof
Direct Proof (Proof of Consequence or Entailment)
Given any set of one or more propositions, statements, or arguments, the set Γ will entail another proposition, statement, or argument ψ if the conjunction of the elements of the set is
inconsistent with the negation of ψ [(Γ → ψ) ↔ ¬(Γ ∧ ¬ψ)].^[347]
Our theory tells us that all ducks have webbed feet.
We look for a duck without webbed feet.
However, every duck we observe has webbed feet.
Without a contradiction, our theory holds.
Indirect Proof (Proof of Non-Consequence or Disentailment)
Indirect Proof, Disentailment, Proof by Contradiction ("counter-example"), Reductio Ad Impossibile, or Reductio Ad Absurdum occurs when, by assuming the negation of a proposition, statement, or
argument, another proposition, statement, or argument that would otherwise logically follow is shown to be contradicted [¬P → (Q ∧ ¬Q)].
We assume, for the sake of argument, that if a bird quacks then it is not a duck.
We have found a duck and it is quacking.
However, according to our assumption, a quacking bird is not a duck.
The contradiction proves that the assumption does not hold in all cases.
Therefore, some ducks do quack.^[348]
Proof by Transposition (Proof by Contraposition)
Proof by transposition or proof by contraposition establishes the conclusion "if p then q" by proving the equivalent contrapositive statement "if not q then not p".
No birds quack except ducks.
Therefore, if the bird is not a duck then it does not quack.
Hence, if a bird quacks then it is a duck.
Non-Constructive Proof (Existence "Proof" or "Pure" Existence Theorem)^[349]
In a non-constructive proof, existence "proof," or "pure" existence theorem, we assume the non-existence of a thing whose existence is required to be proved and then deduce a logical contradiction
without producing an empirical example. The non-existence of the thing has therefore been shown to be logically impossible, and yet an actual example of the thing has not been determined.^[350]
Mathematical Existence Theorems^[351]
Proof by Construction
Proof by construction is the statement of the existence of a logical object by the construction of the formal symbolism that represents it.^[352]^[353]
Proof by Exhaustion
In proof by exhaustion, the conclusion is established by dividing it into a finite number of all possible cases and proving each one separately.^[354]
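For example, here is a minimal Python sketch of an exhaustive proof that every perfect square leaves a remainder of 0 or 1 on division by 4: since n*n mod 4 depends only on n mod 4, checking the four possible residues exhausts all integers.

    # The four cases r = 0, 1, 2, 3 cover every integer n with n mod 4 == r.
    print(all((r * r) % 4 in (0, 1) for r in range(4)))  # True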
Probabilistic Proof
A probabilistic proof is one in which an example is shown to exist, with certainty, by using methods of Probability Theory. This is not to be confused with an argument that a theorem is 'probably' true.^[355]
Combinatorial Proof
A combinatorial proof establishes the equivalence of different expressions by showing that they count the same object in different ways.^[356]
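A classic instance, illustrated numerically in a minimal Python sketch: the subsets of an n-element set can be counted all at once (each element is either in or out, giving 2^n) or grouped by size (giving the sum of the binomial coefficients); since both counts enumerate the same collection, the totals must agree.

    from math import comb

    n = 5
    # Counting all subsets at once vs. counting them grouped by size k.
    print(2 ** n == sum(comb(n, k) for k in range(n + 1)))  # True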
Mathematical Induction
In proof by mathematical induction, first a "base case" is proved, and then an "induction rule" is used to prove a (potentially infinite) series of other cases. Since the base case is true, the
infinity of other cases must also be true, even if all of them cannot be proved directly because of their infinite number. As such, mathematical induction, also known as strong induction,^[357] is in
fact an application of deductive reasoning. For proof of the logical validity of mathematical induction, see Set Theory.
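A standard worked example: to prove that 1 + 2 + ... + n = n(n + 1)/2 for every positive integer n, first prove the base case: for n = 1, the left side is 1 and the right side is 1(1 + 1)/2 = 1. Then prove the induction rule: if the formula holds for n = k, then adding (k + 1) to both sides gives 1 + 2 + ... + k + (k + 1) = k(k + 1)/2 + (k + 1) = (k + 1)(k + 2)/2, which is the formula for n = k + 1. The base case and the induction rule together establish the formula for every positive integer.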
Informal Fallacies
Fallacies of Relevance
Fallacies of relevance are attempts to prove a conclusion by offering considerations that simply don't bear on its truth. In order to prove that a conclusion is true, one must offer evidence that
supports it. Arguments that commit fallacies of relevance don't do this; the considerations that they offer in support of their conclusion are irrelevant to determining whether that conclusion is
true. The considerations offered by such arguments are usually psychologically powerful, however, even if they don't have any evidentiary value, making such arguments appear to be persuasive even if they are logically flawed.
1. Ad Hominem Attack
It is important to note that the label ad hominem is ambiguous, and that not every kind of ad hominem argument is fallacious. In one sense, an ad hominem argument is a valid argument in which the
arguer offers premises that the arguer doesn't accept, but which the arguer knows the listener does accept, in order to show that his position is incoherent (as in, for example, the Euthyphro
dilemma of Plato). There is nothing logically wrong with this type of ad hominem argument. The other type of ad hominem argument is a form of genetic fallacy. Arguments of this kind focus not on
the evidence for a view but on the character of the person advancing it; they seek to discredit positions by discrediting those who hold them. It is always important to attack arguments, rather
than arguers, and this is where ad hominems fall down. Example: William Dembski argues that modern biology supports the idea that there is an intelligent designer who created life. Dembski would
say that because he's religious. Therefore, modern biology doesn't support intelligent design. This argument rejects the view that intelligent design is supported by modern science based on a
remark about the person advancing the view, not by engaging with modern biology. It ignores the argument, focusing only on the arguer; it is therefore a fallacious argument ad hominem.
2. Bandwagon Fallacy
The bandwagon fallacy is committed by arguments that appeal to the growing popularity of an idea as a reason for accepting it as true. They take the mere fact that an idea is suddenly attracting
adherents as a reason for us to join in with the trend and become adherents of the idea ourselves. This is a fallacy because there are many features of ideas other than truth that can lead to a
rapid increase in popularity. Peer pressure, tangible benefits, or even mass stupidity could lead to a false idea being adopted by lots of people. A rise in the popularity of an idea, then, is no
guarantee of its truth. The bandwagon fallacy is closely related to the appeal to popularity; the difference between the two is that the bandwagon fallacy places an emphasis on current fads and
trends, on the growing support for an idea, whereas the appeal to popularity does not. Example: Increasingly, people are coming to believe that Eastern religions help us to get in touch with our
true inner being. Therefore, Eastern religions help us to get in touch with our true inner being. This argument commits the bandwagon fallacy because it appeals to the mere fact that an idea is
becoming more fashionable as evidence that the idea is true. Mere trends in thought are not reliable guides to truth, though; the fact that Eastern religions are becoming more fashionable does
not logically imply that they are true.
3. Fallacist's Fallacy
The fallacist's fallacy involves rejecting an idea as false simply because the argument offered for it is fallacious. Having examined the case for a particular point of view, and found it
wanting, it can be tempting to conclude that the point of view is false. This, however, would be to go beyond the evidence. It is possible to offer a fallacious argument for any proposition,
including those that are true. One could argue that 2+2=4 on the basis of an appeal to authority: "Simon Singh says that 2+2=4". Or one could argue that taking paracetamol relieves headaches
using a post hoc: "I took the paracetamol and then my headache went away; it worked!" Each of these bad arguments has a true conclusion. A proposition therefore should not be dismissed because
one argument offered in its favour is faulty.
4. Fallacy of Composition
The fallacy of composition is the fallacy of inferring from the fact that every part of a whole has a given property that the whole also has that property. This pattern of argument is the reverse
of that of the fallacy of division. It is not always fallacious, but we must be cautious in making inferences of this form. Examples: A clear case of the fallacy of composition is this: Every
song on the album lasts less than an hour. Therefore, the album lasts less than an hour. Obviously, an album consisting of many short tracks may itself be very long. Not all arguments of this
form are fallacious, however. Whether or not they are depends on what property is involved. Some properties, such as lasting less than an hour, may be possessed by every part of something but not
by the thing itself. Others, such as being bigger than a bus, must be possessed by the whole if possessed by each part. One case where it is difficult to decide whether the fallacy of composition
is committed concerns the cosmological argument for the existence of God. This argument takes the contingency of the Universe (i.e. the alleged fact that the universe might not have come into
being) as implying the existence of a God who brought it into being. The simplest way to argue for the contingency of the Universe is to argue from the contingency of each of its parts, as
follows: Everything in the Universe is contingent (i.e. could possibly have failed to exist). Therefore, the Universe as a whole is contingent (i.e. could possibly have failed to exist). It is
clear that this argument has the form of the fallacy of composition; what is less clear is whether it really is fallacious. Must something composed of contingent parts itself be contingent? Or
might it be that the universe is necessarily existent even though each of its parts is not? Another controversial example concerns materialistic explanations of consciousness. Is consciousness
just electrical activity in the brain, as mind-brain identity theory suggests, or something more? Opponents of mind-brain identity theory sometimes argue as follows: The brain is composed of
unconscious neurons. Therefore, the brain itself is not conscious. It is certainly difficult to see how consciousness can emerge from purely material processes, but the mere fact that each part
of the brain is unconscious does not entail that the whole brain is the same.
5. Fallacy of Division
The fallacy of division is the reverse of the fallacy of composition. It is committed by inferences from the fact that a whole has a property to the conclusion that a part of the whole also has
that property. Like the fallacy of composition, this is only a fallacy for some properties; for others, it is a legitimate form of inference. Example: An example of an inference that certainly
does commit the fallacy of division is this: Water is liquid. Therefore, H2O molecules are liquid. This argument, in attributing a macro property of water, liquidity, to its constituent parts,
commits the fallacy of division. Though water is liquid, individual molecules are not. Note, however, an argument inferring from the fact that a computer is smaller than a car that every part of
the computer is smaller than a car would not be fallacious; arguments with this logical form need not be problematic.
6. Gambler's Fallacy
The gambler's fallacy is the fallacy of assuming that short-term deviations from probability will be corrected in the short term. Faced with a series of events that are statistically unlikely,
say, a series of nine coin tosses that have landed heads up, it is very tempting to expect the next coin toss to land tails up. The past series of results, though, has no effect on the
probability of the various possible outcomes of the next coin toss. Example: This coin has landed heads up nine times in a row. Therefore, it will probably land tails up next time it is tossed.
This inference is an example of the gambler's fallacy. When a fair coin is tossed, the probability of it landing heads up is 50%, and the probability of it landing tails up is 50%. These
probabilities are unaffected by the results of previous tosses. The gambler's fallacy appears to be a reasonable way of thinking because we know that a coin tossed ten times is very unlikely to
land heads up every time. If we observe a tossed coin landing heads up nine times in a row we therefore infer that the unlikely sequence will not be continued, that next time the coin will land
tails up. In fact, though, the probability of the coin landing heads up on the tenth toss is exactly the same as it was on the first toss. Past results don't bear on what will happen next.
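The independence claim is easy to test empirically. A minimal Python sketch (the trial count and seed are arbitrary choices of ours): simulate fair coin flips and estimate the probability of heads on the tenth toss given nine heads in a row; the estimate comes out near 0.5, not near 0.

    import random

    random.seed(0)  # fixed seed so the run is repeatable
    tenth_tosses = []
    for _ in range(500_000):
        flips = [random.random() < 0.5 for _ in range(10)]
        if all(flips[:9]):                # nine heads in a row
            tenth_tosses.append(flips[9]) # record the tenth toss
    print(sum(tenth_tosses) / len(tenth_tosses))  # approximately 0.5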
7. Genetic Fallacy
The genetic fallacy is committed when an idea is either accepted or rejected because of its source, rather than its merit. Even from bad things, good may come; we therefore ought not to reject an
idea just because of where it comes from, as ad hominem arguments do. Equally, even good sources may sometimes produce bad results; accepting an idea because of the goodness of its source, as in
appeals to authority, is therefore no better than rejecting an idea because of the badness of its source. Both types of argument are fallacious. Examples: My mommy told me that the tooth fairy is
real. Therefore, the tooth fairy is real. Eugenics was pioneered in Germany during the war. Therefore, eugenics is a bad thing. Both of these arguments commit the genetic fallacy. Each judges an
idea by the goodness or badness of its source, rather than on its own merits.
8. Naturalistic Fallacy
Assume there are two fundamentally different types of statement: statements of fact which describe the way that the world is, and statements of value which describe the way that the world ought
to be. The naturalistic fallacy is the alleged fallacy of inferring a statement of the latter kind from a statement of the former kind. To understand how this is so, consider arguments that
introduce completely new terms in their conclusions. The argument, (1) All men are mortal, (2) Socrates is a man, therefore (3) Socrates is a philosopher, is clearly invalid; the conclusion
obviously doesn't follow from the premises. This is because the conclusion contains an idea (that of being a philosopher) that isn't contained in the premises; the premises say nothing about being
a philosopher, and so cannot establish a conclusion about being a philosopher. Arguments that commit the naturalistic fallacy are arguably flawed in exactly the same way. An argument whose
premises merely describe the way that the world is, but whose conclusion describes the way that the world ought to be, introduces a new term in the conclusion in just the same way as the above
example. If the premises merely describe the way that the world is then they say nothing about the way that the world ought to be. Such factual premises cannot establish any value judgment; you
cannot get an ought from an is. Examples: Feeling envy is only natural. Therefore, there's nothing wrong with feeling envy. This argument moves from a statement of fact to a value judgment, and
therefore commits the naturalistic fallacy. The argument's premise simply describes the way that the world is, asserting that it is natural to feel envious. To describe the way that the world is,
though, is to say nothing of the way that it ought to be. The argument's conclusion, then, which is a value judgment, cannot be supported by its premises. It is important to note that much
respectable moral argument commits the naturalistic fallacy. Whether arguments of the form described here are fallacious is controversial. If they are, then the vast majority of moral philosophy
commits a basic logical error.
9. Moralistic Fallacy
The moralistic fallacy is the opposite of the naturalistic fallacy. The naturalistic fallacy moves from descriptions of how things are to statements of how things ought to be; the moralistic
fallacy does the reverse. The moralistic fallacy moves from statements about how things ought to be to statements about how things are; it assumes that the world is as it should be. This, sadly,
is a fallacy; sometimes things aren't as they ought to be. Examples: Have you ever crossed a one-way street without looking in both directions? If you have, reasoning that people shouldn't be
driving the wrong way up a one-way street so there's no risk of being run over from that direction, then you've committed the moralistic fallacy. Sometimes things aren't as they ought to be.
Sometimes people drive in directions that they shouldn't. The rules of the road don't necessarily describe actual driving practices.
10. Red Herring Argument
The red herring is as much a debate tactic as it is a logical fallacy. It is a fallacy of distraction, and is committed when a listener attempts to divert an arguer from his argument by
introducing another topic. This can be one of the most frustrating, and effective, fallacies to observe. The fallacy gets its name from fox hunting, specifically from the practice of using smoked
herrings, which are red, to distract hounds from the scent of their quarry. Just as a hound may be prevented from catching a fox by distracting it with a red herring, so an arguer may be prevented
from proving his point by distracting him with a tangential issue. Example: Many of the fallacies of relevance can take red herring form. An appeal to pity, for example, can be used to distract
from the issue at hand: You may think that he cheated on the test, but look at the poor little thing! How would he feel if you made him sit it again?
11. Weak Analogy
Arguments by analogy rest on a comparison. Their logical structure is this: A and B are similar. A has a certain characteristic. Therefore, B must have that characteristic too. For example,
William Paley's argument from design suggests that a watch and the universe are similar (both display order and complexity), and therefore infers from the fact that watches are the product of
intelligent design that the universe must be a product of intelligent design too. An argument by analogy is only as strong as the comparison on which it rests. The weak analogy fallacy (or false
analogy, or questionable analogy) is committed when the comparison is not strong enough. The example of an argument by analogy given above is controversial, but is arguably an example of a weak
analogy. Are the similarities in the kind and degree of order exhibited by watches and the universe sufficient to support an inference to a similarity in their origins?
Irrelevant Appeals
Irrelevant appeals attempt to sway the listener with information that, though it may be generally relevant, is not specifically relevant to the matter at hand. There are many different types of
irrelevant appeal - i.e., many different ways of influencing what people think without using evidence. Each is a different type of fallacy of relevance.
1. Appeal to Antiquity
An appeal to antiquity is the opposite of an appeal to novelty. Appeals to antiquity assume that older ideas are better, that the fact that an idea has been around for a while implies that it is
true. This, of course, is not the case; old ideas can be bad ideas, and new ideas can be good ideas. We therefore can't learn anything about the truth of an idea just by considering how old it is.
Example: Religion dates back many thousands of years (whereas atheism is a relatively recent development). Therefore, some form of religion is true. This argument is an appeal to antiquity
because the only evidence that it offers in favor of religion is its age. There are many old ideas, of course, that are known to be false: e.g. that the Earth is flat, or that it is the still
center of the solar system. It therefore could be the case that the premise of this argument is true (that religion is older than atheism) but that its conclusion is nevertheless false (that no
religion is true). We need a lot more evidence about religion (or any other theory) than how old it is before we can be justified in accepting it as true. Appeals to antiquity are therefore fallacious.
2. Appeal to Authority
An appeal to authority is an argument from the fact that a person judged to be an authority affirms a proposition to the claim that the proposition is true. Appeals to authority are always
deductively fallacious; even a legitimate authority speaking on his area of expertise may affirm a falsehood, so no testimony of any authority is guaranteed to be true. However, the informal
fallacy by way of induction occurs only when the authority cited either (a) is not an authority, or (b) is not an authority on the subject on which he is being cited. If someone either isn't an
authority at all, or isn't an authority on the subject about which they are speaking, then that undermines the value of their testimony. Example: Marilyn vos Savant says that no philosopher has
ever successfully resolved the problem of evil. Therefore, no philosopher has ever successfully resolved the problem of evil. This argument is fallacious because Marilyn vos Savant, though
arguably an authority, is not an authority on the philosophy of religion. Her judgment that no philosopher has ever successfully resolved the problem of evil therefore carries little evidential
weight; if there were a philosopher somewhere that had successfully resolved the problem then there's a good chance that Marilyn vos Savant wouldn't know about it. Her testimony is therefore
insufficient to establish the conclusion of the argument.
3. Appeal to Consequences
An appeal to consequences is an attempt to motivate belief with an appeal either to the good consequences of believing or the bad consequences of disbelieving. This may or may not involve an
appeal to force. Such arguments are clearly fallacious. There is no guarantee, or even likelihood, that the world is the way that it would be best for us for it to be. Belief that the world is
the way that it is best for us for it to be, absent other evidence, is therefore just as likely to be false as true. Examples:
Appeal to Good Consequences:
If you believe in God then you'll find a kind of fulfillment in life that you've never felt before. Therefore, God exists. Appeal to Bad Consequences: If you don't believe in God then you'll be
miserable, thinking that life doesn't have any meaning. Therefore, God exists. Both of these arguments are fallacious because they provide no evidence for their conclusions; all they do is appeal
to the consequences of belief in God. In the case of the first argument, the positive consequences of belief in God are cited as evidence that God exists. In the case of the second argument, the
negative consequences of disbelief in God are cited as evidence that God exists. Neither argument, though, provides any logical evidence for the actual existence of God. The consequences of a
belief are rarely a good guide to its truth. Both arguments are therefore fallacious. Each of the arguments above features in real world discussions of God's existence. In fact, they have been
developed into an argument called Pascal's Wager, which openly advocates belief in God based on its good consequences, rather than on evidence that it is true. Example: People argue that there
must be an afterlife because they just can't accept that when we die that's it. This is an appeal to consequences; the unpleasantness of the thought that death is final is not evidence of an
afterlife. Another example occurs in the film The Matrix. There Neo is asked whether he believes in fate; he says that he doesn't. He is then asked why, and replies, I don't like the thought
that I'm not in control. This is not an appeal to evidence, but to the unpleasantness of believing in fate: fate would imply that the world is a way that I don't want it to be, therefore there is no such thing.
4. Appeal to Force
An appeal to force is an attempt to persuade using threats. Its Latin name, argumentum ad baculum, literally means argument with a cudgel. Disbelief, such arguments go, will be met with
sanctions, perhaps physical abuse; therefore, you'd better believe. Appeals to force are thus a particularly cynical type of appeal to consequences, where the unpleasant consequences of disbelief
are deliberately inflicted by the arguer. Of course, the mere fact that disbelief will be met with sanctions is only a pragmatic justification of belief; it is not evidence that the resultant
belief will be true. Appeals to force are therefore fallacious. Example: If you don't accept that the Sun orbits the Earth, rather than the other way around, then you'll be excommunicated from
the Church. Therefore, the Sun orbits the Earth, rather than the other way around. This argument, if it can properly be called an argument, makes no attempt to provide evidence for its
conclusion; whether or not you'll be excommunicated for disbelieving the geocentric model has no bearing on whether the geocentric model is true. The argument therefore commits the appeal to
force fallacy.
5. Appeal to Novelty
An appeal to novelty is the opposite of an appeal to antiquity. Appeals to novelty assume that the newness of an idea is evidence of its truth. They are thus also related to the bandwagon
fallacy. That an idea is new certainly doesn't entail that it is true. Many recent ideas have no merit whatsoever, as history has shown; every idea, including those that we now reject as absurd
beyond belief, was new at one time. Some ideas that are new now will surely go the same way. Examples: String theory is the most recent development in physics. Therefore, string theory is true.
Religion is old fashioned; atheism is a much more recent development. Therefore, atheism is true. Each of these arguments commits the appeal to novelty fallacy. The former takes the newness of
string theory to be evidence that string theory is true; the latter takes the newness of atheism to be evidence that atheism is true. Merely being a new idea, of course, is no guarantee of truth.
The newness of string theory and atheism alone, then, should not be taken to be evidence of the truth of these two positions.
6. Appeal to Pity
An appeal to pity attempts to persuade using emotion, specifically sympathy, rather than evidence. Playing on the pity that someone feels for an individual or group can certainly affect what that
person thinks about the group; this is a highly effective, and so quite common, fallacy. This type of argument is fallacious because our emotional responses are not always a good guide to truth;
emotions can cloud, rather than clarify, issues. We should base our beliefs upon reason, rather than on emotion, if we want our beliefs to be true. Examples: Pro-life campaigners have recently
adopted a strategy that capitalizes on the strength of appeals to pity. By showing images of aborted foetuses, anti-abortion materials seek to disgust people, and so turn them against the
practice of abortion. A BBC News article, Jurors shown graphic 9/11 images, gives another clear example of an appeal to pity: A US jury has been shown graphic images of people burned to death in
the 11 September 2001 attack on the Pentagon. The jurors will decide whether al Qaeda plotter Zacarias Moussaoui should be executed or jailed for life... Prosecutors hope such emotional evidence
will persuade the jury to opt for the death penalty.
7. Appeal to Popularity
Appeals to popularity suggest that an idea must be true simply because it is widely held. This is a fallacy because popular opinion can be, and quite often is, mistaken. Hindsight makes this
clear: there were times when the majority of the population believed that the Earth is the still center of the universe, and that diseases are caused by evil spirits; neither of these ideas was
true, despite its popularity. Example: Most people believe in a god or higher power. Therefore, God, or at least a higher power, must exist. This argument is an appeal to popularity because it
suggests that God must exist based solely on the popularity of belief in God. An atheist could, however, accept the premise of this argument (the claim that belief in God is widespread) but reject
its conclusion without inconsistency.
8. Appeal to Poverty
The appeal to poverty fallacy is committed when it is assumed that a position is correct because it is held by the poor. The opposite of the appeal to poverty is the appeal to wealth. There is
sometimes a temptation to contrast the excesses, greed, and immorality of the rich with the simplicity, virtue, and humility of the poor. This can give rise to arguments that commit the appeal to
poverty fallacy. The poverty of a person that holds a view, of course, does not establish that the view is true; even the poor can sometimes err in their beliefs. Example: The working classes
respect family and community ties. Therefore, respect for family and community ties is virtuous. This argument is an appeal to poverty because it takes the association between a position and
poverty as evidence of the goodness of that position. There is, however, no necessary connection between a position being associated with poverty and its being true, and so the argument is fallacious.
9. Appeal to Wealth
The appeal to wealth fallacy is committed by any argument that assumes that someone or something is better simply because they are wealthier or the thing is more expensive. It is the opposite of
the appeal to poverty. In a society in which we often aspire to wealth, where wealth is held up as that to which we all aspire, it is easy to slip into thinking that everything that is associated
with wealth is good. Rich people can be thought to deserve more respect than poorer people; more expensive goods can be thought to be better than less expensive goods solely because of their
price. This is a fallacy. Wealth need not be associated with all that is good, and all that is good need not be associated with wealth. Examples: My computer cost more than yours. Therefore, my
computer is better than yours. Warren is richer than Wayne. Therefore, Warren will make a better dinner guest than Wayne. Each of these arguments takes an association with money to be a sign of
superiority. They therefore both commit the appeal to wealth fallacy.
Fallacies of Ambiguity
Fallacies of ambiguity appear to support their conclusions only due to their imprecise use of language. Once terms are clarified, fallacies of ambiguity are exposed. It is to avoid fallacies of this
type that philosophers often carefully define their terms before launching into an argument.
1. Accent Fallacies/Equivocation
Accent fallacies are fallacies that depend on where the stress is placed in a word or sentence. The meaning of a set of words may be dramatically changed by the way they are spoken, without
changing any of the words themselves. Accent fallacies are a type of equivocation. Example: Suppose that two people are debating whether a rumor about the actions of a third person is true. The
first says, I can imagine him doing that; it's possible. The second replies, Yes, it's possible to imagine him doing that. This looks like agreement. If, however, the second person stresses the
word imagine, then this appearance vanishes; Yes, it's possible to imagine him doing that. This now sounds like a pointed comment meaning that though it may just about be possible to imagine him
doing that, there's no way that he would actually do it.
2. Equivocation
The fallacy of equivocation is committed when a term is used in two or more different senses within a single argument. For an argument to work, words must have the same meaning each time they
appear in its premises or conclusion. Arguments that switch between different meanings of words equivocate, and so don't work. This is because the change in meaning introduces a change in
subject. If the words in the premises and the conclusion mean different things, then the premises and the conclusion are about different things, and so the former cannot support the latter.
Example: The church would like to encourage theism. Theism is a medical condition resulting from the excessive consumption of tea. Therefore, the church ought to distribute tea more freely. This
argument is obviously fallacious because it equivocates on the word theism. The first premise of the argument is only true if theism is understood as belief in a particular kind of god; the
second premise of the argument is only true if theism is understood in a medical sense. Real World Examples: Christianity teaches that faith is necessary for salvation. Faith is irrational, it is
belief in the absence of or contrary to evidence. Therefore, Christianity teaches that irrationality is rewarded. This argument, which is a reasonably familiar one, switches between two different
meanings of faith. The kind of faith that Christianity holds is necessary for salvation is belief in God, and an appropriate response to that belief. It does not matter where the belief and the
response come from; someone who accepts the gospel based on evidence (e.g. Doubting Thomas) still gets to heaven, according to Christianity. For the kind of faith for which (1) is true, (2) is
therefore false. Similarly, for the kind of faith for which (2) is true, (1) is false. There is no one understanding of faith according to which both of the arguments premises are true, and the
argument therefore f ails to establish its conclusion.
Fallacies of Presumption
Fallacies of presumption are not errors of reasoning in the sense of logical errors, but are nevertheless commonly classed as fallacies. Fallacies of presumption begin with a false (or at least
unwarranted) assumption, and so fail to establish their conclusion.
1. Affirming the Consequent
The fallacy of affirming the consequent is committed by arguments that have the form: "if A then B; B, therefore A." The first premise of such arguments notes that if a state of affairs A
obtained then a consequence B would also obtain. The second premise asserts that this consequence B does obtain. The faulty step then follows: the inference that the state of affairs A obtains.
Examples: If Fred wanted to get me sacked then he'd go and have a word with the boss. There goes Fred to have a word with the boss. Therefore, Fred wants to get me sacked. This argument is
clearly fallacious; there are any number of reasons why Fred might be going to have a word with the boss that do not involve him wanting to get me sacked: e.g. to ask for a raise, to tell the
boss what a good job I'm doing, etc. Fred's going to see the boss therefore doesn't show that he's trying to get me fired. If Zeus was a real, historical figure, but the Catholic Church covered
up his existence, then we wouldn't have any evidence of a historical Zeus today. We don't have any evidence of a historical Zeus today. Therefore, Zeus was a real, historical figure, but the
Catholic Church covered up his existence.
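Again the truth table exposes the fallacy. A minimal Python sketch: there is a row where "if A then B" and B are both true while A is false, so the premises do not force the conclusion.

    from itertools import product

    # Premises: (A -> B) and B. Conclusion: A. A countermodel is a row where
    # the premises are true and the conclusion is false.
    counterexamples = [(a, b) for a, b in product([True, False], repeat=2)
                       if ((not a) or b) and b and not a]
    print(counterexamples)  # [(False, True)]: premises true, conclusion false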
2. Argument from Ignorance
Arguments from ignorance infer that a proposition is true from the fact that it is not known to be false. Not all arguments of this form are fallacious; if it is known that if the proposition
were not true then it would have been disproven, then a valid argument from ignorance may be constructed. In other cases, though, arguments from ignorance are fallacious. Example: No one has been
able to disprove the existence of God. Therefore, God exists. This argument is fallacious because the non-existence of God is perfectly consistent with no one having been able to prove God's non-existence.
3. Begging the Question (the "Circular" Argument)
An argument is circular if its conclusion is among its premises - i.e., if it assumes (either explicitly or not) what it is trying to prove. Such arguments are said to "beg the question." A
circular argument fails as a proof because it will only be judged to be sound by those who already accept its conclusion. This is because anyone who rejects a circular argument's conclusion
should also reject at least one of its premises (the one that is the same as its conclusion), and so should also reject the argument as a whole. Anyone who accepts all of the argument's
premises already accepts the argument's conclusion, so they can't be said to have been persuaded by the argument. In neither case, then, will the argument be successful.
Example: The Bible affirms that it is inerrant. Whatever the Bible says is true. Therefore, the Bible is inerrant. This argument is circular because its conclusion, that the Bible is inerrant, is
the same as its second premise - whatever the Bible says is true. Anyone who would reject the argument's conclusion should also reject its second premise and, along with it, the argument as a whole.
Real World Examples: The above argument is a straightforward, real world example of a circular argument. Other examples can be a little more subtle. Typical examples of circular arguments include
rights claims - e.g., I have a right to say what I want, therefore you shouldn't try to silence me; Women have a right to choose whether to have an abortion or not, therefore abortion should be
allowed; The unborn has a right to life, therefore abortion is immoral. Having a right to X is the same as other people having an obligation to allow you to have X, so each of these arguments
begs the question, assuming exactly what it is trying to prove.
However, it should be noted that, while a circular argument is not logically valid, a recursive argument is. An argument is recursive if one of its premises is contained within its conclusion. Such arguments are not circular because they do prove what they assume. This is because anyone who rejects a recursive argument's conclusion must also reject the premise that is contained in the conclusion and, thereby, may logically reject the argument as a whole. Anyone who accepts all of the argument's premises logically accepts the argument's conclusion and, since they accept the premise already stated in the conclusion, there is nothing logically incorrect in accepting the conclusion.
Example: If a duck has wings then it is a duck that can fly. Because we have assumed the existence of the duck as one of the premises, there is nothing wrong with including the fact of the duck within the conclusion. This argument is not circular because it does not assume the conclusion among its premises but, instead, assumes a premise as part of the conclusion. However, we cannot say "if a duck has wings then a duck exists," because there the truth of the conclusion must be assumed as a premise of the argument, which is an example of circular reasoning.
4. Complex Question
The complex question fallacy is committed when a question is asked (a) that rests on a questionable assumption, and (b) to which all answers appear to endorse that assumption. Examples: "Have you stopped beating your wife?" This is a complex question because it presupposes that you used to beat your wife, a presupposition that either answer to the question appears to endorse. "Are you going to admit that you're wrong?" Answering yes to this question is an admission of guilt. Answering no to the question implies that the accused accepts that he is in the wrong, but will not admit it. No room is left to protest one's innocence. This is therefore a complex question, and a subtle false dilemma.
5. Argument Cum Hoc
The cum hoc fallacy is committed when it is assumed that because two things occur together, they must be causally related. This, however, does not follow; correlation is possible without
causation. This fallacy is closely related to the post hoc fallacy. Real World Example: Nestle, the makers of the breakfast cereal Shredded Wheat, once ran an advertising campaign in which the
key phrase was this: People who eat Shredded Wheat tend to have healthy hearts. This is very carefully phrased. It does not explicitly state that there is any causal connection between eating
Shredded Wheat and having a healthy heart, but it invites viewers of the advertisements to make the connection; the implication is there. Whether or not there is any such connection, the mere
fact that the two things are correlated does not prove that there is such a connection. In tempting viewers to infer that eating Shredded Wheat is good for your heart, Nestle are tempting viewers
to commit a fallacy.
6. False Dilemma
The bifurcation fallacy is committed when a false dilemma is presented, i.e. when someone is asked to choose between two options when there is at least one other option available. Of course, arguments that allow more than two options but still fewer than actually exist are similarly fallacious. Examples: Either a Creator brought the universe into existence, or the universe came into existence out of nothing. The universe didn't come into existence out of nothing (because nothing comes from nothing). Therefore, a Creator brought the universe into existence. The first
premise of this argument presents a false dilemma; it might be thought that the universe neither was brought into existence by a Creator nor came into existence out of nothing, because it existed
from eternity. Another example emerged when George W Bush launched the war on terror, insisting that other nations were either for or against America in her campaign, excluding the quite real
possibility of neutrality. Complex questions are subtle forms of false dilemma. Questions such as: "Are you going to admit that you're wrong?" implicitly restrict the options to either being
wrong and admitting it or being wrong and not admitting it, thus excluding the option of not being wrong.
7. Hasty Generalization
A hasty generalization draws a general rule from a single, perhaps atypical, case. It is the reverse of a sweeping generalization. Example: My Christian / atheist neighbor is a real grouch.
Therefore, Christians / atheists are grouches. This argument takes an individual case of a Christian or atheist, and draws a general rule from it, assuming that all Christians or atheists are
like the neighbor. The conclusion that it reaches hasn't been demonstrated, because it may well be that the neighbor is not a typical Christian or atheist, and that the conclusion drawn is false.
8. No True Scotsman Argument
The no true Scotsman fallacy is a way of reinterpreting evidence in order to prevent the refutation of one's position. Proposed counter examples to a theory are dismissed as irrelevant solely
because they are counter examples, but purportedly because they are not what the theory is about. Example: If Angus, a Glaswegian, who puts sugar on his porridge, is proposed as a counter example
to the claim "no true Scotsman puts sugar on his porridge," the No True Scotsman fallacy would run as follows: Angus puts sugar on his porridge. No (true) Scotsman puts sugar on his porridge.
Therefore, Angus is not a (true) Scotsman. Therefore, Angus is not a counter example to the claim that no Scotsman puts sugar on his porridge. This fallacy is a form of circular argument, with an
existing belief being assumed to be true in order to dismiss any apparent counter examples to it. The existing belief thus becomes unfalsifiable. Real World Examples: An argument similar to this often arises when people attempt to define religious groups. In some Christian groups, for example, there is an idea that faith is permanent, that once one becomes a Christian one cannot fall away. Apparent counter examples to this idea, people who appear to have faith but subsequently lose it, are written off using the No True Scotsman fallacy: they didn't really have faith, they weren't true Christians. The claim that faith cannot be lost is thus preserved from refutation. Given such an approach, the claim is unfalsifiable; there is no possible refutation of it.
9. Argument Post Hoc
The Latin phrase post hoc ergo propter hoc means, literally, "after this therefore because of this." The post hoc fallacy is committed when it is assumed that because one thing occurred after
another, it must have occurred as a result of it. Mere temporal succession, however, does not entail causal succession. Just because one thing follows another does not mean that it was caused by
it. This fallacy is closely related to the cum hoc fallacy. Example: Most people who are read the last rites die shortly afterward. Therefore, priests are going around killing people with magic
words! This argument commits the post hoc fallacy because it infers a causal connection based solely on temporal order. Real World Examples: One example of the post hoc flaw is the evidence often
given for the efficacy of prayer. When someone reasons that as they prayed for something and it then happened, it therefore must have happened because they prayed for it, they commit the post hoc
fallacy. The correlation between the prayer and the event could result from coincidence, rather than cause, so does not prove that prayer works. Superstitions often arise from people committing
the post hoc fallacy. Consider, for example, a sportsman who adopts a pre-match ritual because one time he did something before a game he got a good result. The reasoning here is presumably that
on the first occasion the activity preceded the success, so the activity must have contributed to the success, so repeating the activity is likely to lead to a recurrence of the success. This is
a classic example of the post hoc fallacy in action.
10. Slippery Slope Argument
Slippery slope arguments falsely assume that one thing must lead to another. They begin by suggesting that if we do one thing then that will lead to another, and before we know it we'll be doing something that we don't want to do. They conclude that we therefore shouldn't do the first thing. The problem with these arguments is that it is possible to do the first thing that they mention without going on to do the other things; restraint is possible. Example: If you buy a Green Day album, then next you'll be buying Buzzcocks albums, and before you know it you'll be a punk with
green hair and everything. You don't want to become a punk. Therefore, you shouldn't buy a Green Day album. This argument commits the slippery slope fallacy because it is perfectly possible to
buy a Green Day album without going on to become a punk; we could buy the album and then stop there. The conclusion therefore hasn't been proven, because the argument's first premise is false.
11. Sweeping Generalization
A sweeping generalization applies a general statement too broadly. If one takes a general rule, and applies it to a case to which, due to the specific features of the case, the rule does not
apply, then one commits the sweeping generalization fallacy. This fallacy is the reverse of a hasty generalization, which infers a general rule from a specific case. Example: Children should be
seen and not heard. Little Wolfgang Amadeus is a child. Therefore, little Wolfgang Amadeus shouldn't be heard. No matter what you think of the general principle that children should be seen and
not heard, a child prodigy pianist about to perform is worth listening to; the general principle doesn't apply.
12. Overwhelming Exception
This is a logical fallacy similar to a hasty generalization. It is a generalization which is accurate, but which comes with one or more qualifications that eliminate so many cases that what
remains is much less impressive than the initial statement might have led one to assume. Examples: "All right, but apart from the sanitation, the medicine, education, wine, public order,
irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?" (The attempted implication, fallacious in this case, is that the Romans did nothing for us). This is a quotation from Monty Python's Life of Brian. "Our foreign policy has always helped other countries, except of course when it is against our National Interest..." (The false
implication is that our foreign policy always helps other countries). "All Americans are useless at foreign languages. Ok, I'll make an exception for those who live in multi-ethnic neighborhoods,
have parents who speak a foreign language, are naturally gifted in languages, have lived abroad or who went to a school with a good foreign language program, but the rest are absolutely useless
at foreign languages." All dogs are black, except for those which are not black. (This is also a tautology).
13. Argument Tu Quoque
The tu quoque fallacy is committed when it is assumed that because someone else has done a thing there is nothing wrong with doing it. This fallacy is classically committed by children who, when
told off, respond with "so and so" did it too, with the implied conclusion that there is nothing wrong with doing whatever it is that they have done. This is a fallacy because it could be that
both children are in the wrong, and because, as we were all taught, two wrongs don't make a right. Example: The Romans kept slaves. Therefore, we can keep slaves too. This argument commits the tu quoque fallacy because it assumes that if someone else does a thing then it's okay for us to do it too. It does not follow, however, from the simple fact that the Romans kept slaves that there is nothing wrong with keeping slaves. It is plausible to think that the Romans acted immorally in keeping slaves, and that we would act immorally if we followed their example. The conclusion
of the argument therefore does not follow from its premise. Examples of the tu quoque fallacy occur all the time. For instance, in an article entitled "Manchester United defend ticket price
rise," BBC Sport reported: "Manchester United have hit their fans with a 12.3% average rise in season ticket prices for the next campaign. A top price ticket will cost $38, and the cheapest $23
... But United have defended the price rises, saying they compare favorably with the rest of the Premiership. 'We do not know what most of our rivals will charge next year, but even a price
freeze across the rest of the Premiership would mean that next year only seven clubs will have a cheaper ticket than $23, and nine clubs will have a top price over $39, in some cases almost
double,' said Humby [Manchester United finance director]." Humby's argument was essentially this: Other Premiership clubs charge more, therefore our ticket prices are justified. This commits the
tu quoque fallacy because it is quite possible that all clubs, including Manchester United, "overcharge" for their tickets.
1. ↑ Although Leibniz originally called his predicate logic the "Propositional Calculus" (and some scholars continue to call it by this name), the term "propositional" has also become associated
with the "logic" of ordinary, spoken language, where the meanings of the subject terms may be considered, as opposed to limiting study to the meanings of the predicate terms (i.e., the
inferential relationships). In this outline, we refer to "verbal logic" exclusively as the relatively informal, meta-theoretical statement of logical principles and to "predicate logic" as the
more abstract, symbolic and formal statement of those principles.
2. ↑ Alternative designations might be "linguistic logic" or "verbal reasoning." The designation "Propositional Logic" is eschewed for the reasons given in the first footnote of this outline above.
3. ↑ Because the communication of all understanding must begin with a common language that is generally understood in an intuitive manner, the informal expression of logical statements (i.e., in
plain English or another language of common usage) may be considered the "meta-language" from which the formal, symbolic languages are derived.
4. ↑ It is also apparent that, no matter how one might attempt to reduce the meanings of linguistic terms to a combination of only three, this reduction can only represent a gross approximation of
the ordinary, semantic relationships between linguistic terms. Because the terms used in this Verbal Logic outline are taken from ordinary language, they inevitably connote their ordinary
linguistic meanings which cannot be avoided but which are generally beyond the scope of this outline. In this outline, we are only concerned with the essential, intuitive, inferential
relationships between these terms, as they are used in a manner that is peculiar to this outline.
5. ↑ Not the Objectivism of Ayn Rand.
6. ↑ In other words, although the idea of an infinite quantity may be illogical, there is still the logical possibility of a potentially infinite quantity, at least where a class (rather than a
well-defined set) is concerned.
7. ↑ A very brief description of the intersection of these philosophies could go as follows: One must assume the existence of an objective reality that is knowable. However, one must also assume
that we cannot know this reality without the analysis of our observations by a rational mind that can make an abstract and recursive, albeit ultimately artificial (i.e., not a part of objective
reality except as a product of the mind), distinction between form and substance for the purposes of analysis (and recursive because the form of some thing may also be analyzed itself as an
object subject to formal conditions, and so on, until we reach some fundamentally basic, a priori object that cannot be understood in terms more formal or simpler than itself). Finally, although
an objective reality is assumed to exist, one must assume that the purely logical and mathematical structures applied to an analysis or understanding of this reality are essentially products of
the mind and do not exist as a part of objective reality except in the mind itself, where the mind itself is an object of reality. Of course, a pure intuitionist would reject any sort of
linguistic meta-philosophical approach to an understanding of logic. In this regard, this outline is not purely intuitionist but represents an effort to bridge the gap between "ordinary,"
linguistic understanding and "pure" formalism.
8. ↑ And therefore more nuanced.
9. ↑ In this sense, all logical, rational thought, as defined in this outline, must ultimately be recursive in nature. See the Recursion Theory outline.
10. ↑ In formal Predicate Logic, and as explained in the Axioms of Further Definability section of the First-Order Predicate Logic outline, we have defined our system so that all terms may be
ultimately defined as alternative systems based on one or the other of the following combinations of certain pairs of inferences: (1) conjunction ("and") and negation ("not"); (2) disjunction
("or") and negation; or (3) material implication ("if-then") and negation. In this regard, the inference of conjunction is fundamentally analogous to the process of identification by combination;
the inference of disjunction is fundamentally analogous to the process of identification by differentiation; and the inference of material implication is fundamentally analogous to the process of
identification by implication. Although it is equally valid to build a system of logic on the basis of combinations of the operations of conjunction and negation or of implication and negation,
the author of this outline has chosen to proceed with disjunction and negation as the two most "primitive" or entirely intuitive operations of logic, upon which every other logical term or
inference may be constructed. This methodology was chosen primarily because the author believes that the operation of disjunction is the simplest operation that can be described in Boolean terms
and, therefore, has the greatest informational entropy of all logical terms. See the Information Theory outline.
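As a rough illustration of footnote 10's claim, here is a minimal Python sketch (an addition for clarity, not part of the original outline) that rebuilds conjunction and material implication from disjunction and negation alone, and verifies the reductions over every truth-value assignment:

```python
from itertools import product

def NOT(p):
    return not p

def OR(p, q):
    return p or q

def AND(p, q):
    # p and q == not (not p or not q), by De Morgan's law
    return NOT(OR(NOT(p), NOT(q)))

def IMPLIES(p, q):
    # p -> q == not p or q (material implication)
    return OR(NOT(p), q)

# Verify both reductions against Python's built-in connectives.
for p, q in product([True, False], repeat=2):
    assert AND(p, q) == (p and q)
    assert IMPLIES(p, q) == ((not p) or q)
```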
11. ↑ This principle of "opposites" was first stated in purely logical terms by Heracleitus of Ephesus at the beginning of the Fifth Century B.C.E.
12. ↑ Many items in this outline are presented as numbered lists. However, it should be noted that numeric quantity has not yet been defined (which occurs later in Set Theory). Therefore, the use of numbering in this outline is a meta-linguistic use of the numeric terms that are defined in this outline for the terms "more" or "fewer." Numeric ordering is not yet defined and should not be
implied by the use of any numeric, subscript terms or by any line item numbering.
13. ↑ Best represented linguistically by the interrogative "what".
14. ↑ Best represented linguistically by the interrogatives "who" or "whom," which includes the later-defined concept of subjectivity, and which is distinguished further from all other kinds of
objects in the next section of this outline.
15. ↑ Italicized, lower-case, Roman letters are often used to denote an unspecified object.
16. ↑ In this context, verbs such as "leaves," "departs," or "disappears" denote negation and therefore are not stated here as being existential.
17. ↑ Although this outline is not concerned primarily with symbolic logic, the symbols of symbolic logic will be introduced within parentheses, when they first become relevant.
18. ↑ In ordinary experience, an object is the manifestation of some entity. In such a case, we may say that the entity's existence is "true." However, this idea of truth, which accounts for all
possible characteristics of an object (i.e., the object's "identity"), is distinct from the logical "truth value" of a proposition - a part of the proposition's logical "equivalence" - which
accounts only for the logical values of "true" or "false" and the minimally sufficient conditions that must exist to cause those values, as explained more fully and later in this outline.
Therefore, "truth" is not included in the definition of "object" and its synonyms and is defined more particularly later in this outline. See the footnotes to "Empirical Truth," below.
19. ↑ In this definition, the predicate terms: "is manifest," "is expressed," "is created," "is referred to," "is signified," "is present," "is evident," "is constructed," "is built," "is made," "is
conveyed," "is moved," "is transferred," "is put or placed," "is stated," "exists," "occurs," "comes," "arrives," "appears," "acts," and "does" are equivalent and essentially undefined. We call
these predicate terms "existential" because they are all defined to be synonymous with the essentially undefined existential term "is" (albeit with the possibility of later-defined symantic
nuances for each particular term, such as will occur later in this outline with the definition of the subject term "self").
20. ↑ Although, at first glance, the definition of an object and its relation to the existential terms may appear indistinct or logically circular, in fact it is not. For the purposes of this
outline, the predicated existential terms: "is manifest," "is expressed," "is created," "is referred to," "is signified," "is present," "is evident," "is constructed," "is built," "is made," "is
conveyed," "is moved," "is transferred," "is put or placed," "is stated," "exists," "occurs," "comes," "arrives," "appears," "acts," and "does" are considered generally synonymous with each other
and essentially undefined. However, the subject terms of any proposition - i.e., "object," "entity", "unit," "individual," "body," "item," "referent," "occurrence," "appearance," "arrival,"
"instance," "event," "fact," "self," "expression," "thing," "manifestation," "actualization," "presence," "state," "action," and "construct" - although essentially synonymous with each other, are
not synonymous with the predicated existential terms because these subject terms are the objects upon which those existential terms operate; they are not the existential terms themselves.
Therefore, although the existential terms are considered wholly undefined as a priori objects that can only be understood intuitively, the subject (bolded) terms introduced in this definition are
considered generally synonymous with the subject term "object" and are therefore properly defined as already manifested into existence and not purely existential and a priori. This is the most
general statement of René Descartes' famous axiom: "I think, therefore I am" and, as such, it constitutes the most primitive possible abstraction of form from substance. In regard to any
objection that we have now defined "something" in terms of something else that is not itself expressly definable, all we can say is that we must first find some place to start, if we will reason
at all about anything. In so doing, we have achieved our first instance of abstraction - i.e., distinguishing the thing from its creation, the noun from the verb.
21. ↑ Unfortunately, the semantics of ordinary language cannot make the distinction between the act of existence and the thing that exists with complete effectiveness. Therefore, this entire outline
rests on three fundamental assumptions: (1) that an objective reality exists apart from the observer; (2) that an objective reality can be known by the observer; and (3) that the act of coming
into existence or knowing the truth of some thing's existence may be distinguished (i.e., abstracted) from the thing itself - e.g., that the act of giving birth can be distinguished from the fact
of the birth - which simply states the relationship of assumptions (1) and (2) with each other. In addition, it should be noted that an act or event in itself may be an object for examination.
Therefore, an object need not be material to qualify for this definition and, hence, the term "materialization" is not included with the other synonyms for the term "object." Therefore, although
one might believe that, ultimately, "all is an illusion," this outline assumes the existence of a provable truth and leaves the possibility of an ultimate "nothingness" to more transcendental
endeavors that are worthier of the question, such as theology.
22. ↑ It should be noted that, even though an event is manifest, this does not mean the event has become known. The characteristics of knowledge are defined later in this outline.
23. ↑ Although this outline assumes the existence of an objective reality, this definition does not denote an objective permanence; it only denotes that an object exists, even if only for a moment.
The concept of time, or a sequencing of events or objects, is treated later.
24. ↑ This definition relates the concept of objective existence to the use of the impersonal pronouns.
25. ↑ Note that this definition connotes the plural, whereas the previous definition connotes only the singular. The more rigorous definition of plural ("more than one") is given later in this
outline. However, it should be noted that Verbal Logic, or its extension First-Order Predicate Logic, has no more particular quantification of objects, except to say that an object is not unique
or that a predication is universal. Numeric quantification is the subject of the extensions of Second-Order Predicate Logic, such as Set Theory.
26. ↑ These terms include both the act of negation as well as the thing (i.e., referent) that is negated - both are "instances" of non-existence; either the "thing" is already naught or it is
eliminated. The more specifically relational versions of these terms are provided in the next definition.
27. ↑ As explained above, negation is a wholly undefined and primitive concept that must be accepted, intuitively, for the purposes of this outline, as a priori, and without further definition.
Furthermore, the state of non-existence can only "exist" as an ideal conceptualization since, by referring to it as something that exists (in this instance, a "state of non-existence"), it is
posited that something exists, even if the only thing that is described is simply the idea of the state of non-existence (which, itself, is simply a description of the absence of some thing).
28. ↑ Unlike the terms defined in the prior definition, the terms defined here are applied to an extant referent that is then described as standing in relation to the defined terms of non-existence.
Therefore, in this case, it is the referent that is described to be missing. Whether non-existence itself can be described as a referent that is "missing" forms a linguistic paradox which arises
from an attempt to realize the concrete expression of an ideal concept - the a priori concept of nothingness. It is a fundamental assumption of this outline that two paradoxes must exist that can
never be eliminated from a "complete" theory of logic: an ideal nothingness and an ideal identification of two objects. Ideal identification is defined in the next section of this outline.
29. ↑ Until now, the term "self" has been indistinguishable from other objects. Now it is distinguished by its essential relationship with the "other" that is uniquely characteristic of the self - an
object whose existence can only be understood in terms of its relation to others.
30. ↑ Note that, in each definition, the definiendum is only a sufficient condition for the definiens, whereas the definiens are necessary conditions for the definiendum. Therefore, an object is not
necessarily a self and a self is always an object. Here, the importance of the definition is to put "self" and "other" into a relation, but not necessarily as purely complementary states, as well
as to make the definition of "self" a more particular kind of object.
31. ↑ The processes of identification and differentiation are fundamental to the development of any understanding. Note also that this definition does not necessarily denote a state of consciousness,
which will be defined later in this outline; it merely denotes a distinction made between two objects (i.e., "a thing in itself" is not necessarily a conscious being).
32. ↑ "itself"
33. ↑ The preposition "with" is often omitted and implied.
34. ↑ Note that the definition of identity given in the Verbal Logic Outline is greatly refined in the Second Order Predicate Logic Outline as the concept of "equality" so as to distinguish it
entirely from the First-Order concept of equivalence.
35. ↑ As with the other terms in this outline, the meanings of these symbols become more refined and precise, and thereby become distinguished from each other, later in this outline.
36. ↑ In their a priori understanding, identification (in which two things are exactly the same in every way, including spatial-temporal location and energy) and negation (a perfect nothing which, by
definition, can never exist because we still need some concept or symbol to represent it) are ideal states. However, since ideal identification results in no distinction, its symbolic
representation can only be an approximation of the idea itself, as noted by Ludwig Wittgenstein during the early part of the Twentieth Century. An ideal "nothing" or "non-existence" is also an
ideal concept we can only approximate since, to give it any symbolic representation, is to create "something."
37. ↑ As with "identity" or "equality," this term obtains a much more precise meaning in the definition of "logical equivalence" below.
38. ↑ The definition of "common," as distinguished from "in common," is provided for later in the section on Conditions in Sentences.
39. ↑ This is the broadest possible definition for the synonyms of identification and will not necessarily comport with common usage, which is much more nuanced in its meanings. However, the meanings
of the particular definienda in this definition may be refined later with other previously defined terms so as to provide the desired nuance.
40. ↑ Of course, any two objects that are identical in all ways, including spatial-temporal location and energy, would not be distinguishable as two, different objects and, logically and in fact,
could only be one object. Therefore, any well-defined definition of identity must be qualified in some way so that the term "identity" only reflects those qualities actually possessed by an
object, not all possible qualities (see Russell's Paradox and the Problem of Universals).
41. ↑ There is also "as to."
42. ↑ The separation of these terms by the slash (/) symbol indicates that each term may substitute for another in combination with the term "which" to form the inference of differentiation. In cases
where these terms appear by themselves, use of the term "which" is implied.
43. ↑ Although this term is intimately associated with material implication ("if-then"), necessarily defined much later in this outline, the term "if" by itself may also denote the very general
concept of differentiation without the necessity of implication and therefore is presented also in this much more primitive definition.
44. ↑ Note that these terms are not prepositions in the spatial-temporal sense but have a meaning that is idiosyncratic to logic - for example, where a statement is valid "under every possible
interpretation." In this manner, "under" states an inferential relationship, rather than a spatial-temporal relationship.
45. ↑ "vis-à-vis"
46. ↑ The concept of a relationship is the most generalized understanding of the nature of the self and the other.
47. ↑ The reference to "each" is often omitted and implied.
48. ↑ See the footnotes to the definitions of "identification" and "identical" above.
49. ↑ The precise definition of numeric "one" depends on many antecedent concepts presented in this outline and the other logic outlines and remains to be defined in the Set Theory outline, where Set
Theory is an extension of Second-Order Predicate Logic.
50. ↑ In this outline, all definienda are bolded so as to distinguish them from the associated definiens.
51. ↑ Anytime a relation exists (even when it is a thing put in relation with itself as an identity), a differentiation occurs between the two things that are put in relation to each other. In the
case of an identification of objects, a differentiation occurs simply because perfect identity is an ideal state and any relation of identity that occurs must differentiate the operands, even if
only symbolically.
52. ↑ According to the theory of this outline, because every other logical operation can be reduced to a combination of disjunction and negation operations, every other logical operation can be
considered a special case of disjunction and negation. Furthermore, this definition also reflects the primitive, a priori nature of disjunction assumed in this outline (i.e., disjunction cannot
be defined in simpler terms), as well as the formal definition that any disjunction is true so long as at least one of its operands also exists.
53. ↑ The same may be said for conjunction - i.e., that any time a thing is put in relation to another thing a conjunction exists. This is also true because, as we will see, any disjunction can be
expressed as a combination of conjunction and negation operations and vice-versa. Therefore, whether we begin with disjunction or with conjunction as the most primitive relation in terms of which
all other relations are defined (except negation), we will achieve the same result.
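The interexpressibility mentioned in footnotes 52 and 53 is just De Morgan's laws. Stated in standard notation (an addition for clarity, not part of the original outline):

```latex
\neg(p \lor q) \equiv \neg p \land \neg q, \qquad
\neg(p \land q) \equiv \neg p \lor \neg q
```

Hence $p \land q \equiv \neg(\neg p \lor \neg q)$ and $p \lor q \equiv \neg(\neg p \land \neg q)$, so either connective, together with negation, suffices to define the other.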
54. ↑ A conditional statement ("if-then," "because," "since," etc.) also results in the differentiation or, alternatively, conjugation of a causal relation in regards to the components of cause and
effect; its specifically logical form is more particularly defined later in this outline.
55. ↑ When only one object is present, a self-referential "differentiation" or "combination" exists that is called a "singleton."
56. ↑ This is not a strictly Intuitionist understanding of the relations of disjunction and conjunction since, according to the Intuitionists, those terms cannot be expressed in completely equivalent
terms in regard to each other.
57. ↑ "But for" also connotes implication whereas "but" alone does not.
58. ↑ There is also "as to."
59. ↑ The separation of these terms by the slash (/) symbol indicates that each term may substitute for another in combination with the term "which" to form the inference of disjunction. In cases
where these terms appear by themselves, use of the term "which" is implied.
60. ↑ Although this term is intimately associated with material implication ("if-then"), necessarily defined much later in this outline, the term "if" by itself may also denote the very general
concept of disjunction without the necessity of implication and is therefore also stated as one of the definienda of this more primitive definition.
61. ↑ Note that many of these terms might be more properly categorized as forms of exclusivity or implication, both of which are more specific forms of disjunction. But here we are only interested in
any form of inference that might generally be described as disjunction without necessarily distinguishing any more specific inferential powers they might have. For this reason also, the general
terms of differentiation presented earlier are also stated here.
62. ↑ Best represented linguistically by the interrogative "where".
63. ↑ Therefore, possession and membership necessarily imply the complementary existence of each other or of an other; see DeMorgan's Law.
64. ↑ The possessive tense of a noun also expresses this concept.
65. ↑ This definition is refined later in this outline to state that an elemental object cannot be differentiated in any way; however, at this stage, we have not defined the term "cannot" and
therefore must satisfy ourselves with the present definition. In this sense, an object is deemed "elemental" until it can be shown to consist of more than one object.
66. ↑ It is critically important to note that a singleton may be considered a "collection" of one object but is still atomic because there are no other objects collected with it from which it may be
divided. However, as defined in a following definition, a singleton is never a plural object.
67. ↑ The simple term "more," as distinguished from "more than one," is defined more precisely later in this section of this outline.
68. ↑ Therefore, the terms "atomic" (and its synonyms) and "plural" are logically complementary to each other.
69. ↑ The definition of "set" is further refined and distinguished from the term "class" later in this outline.
70. ↑ Note that it is this process of identification that distinguishes conjunction from disjunction, which merely places the constituent objects in relation to each other but does not necessarily
identify them as parts of the whole.
71. ↑ Note that, in this outline, "such that" or "so that" denote the most general form of inclusive disjunction whereas, by themselves, "such" denotes identification and "so" denotes a conclusion.
72. ↑ This language is awkward but necessary to avoid using undefined terms where possible. Also, a "universe" of objects must be well-defined so as to contain only objects that are specified for
that universe and cannot contain all possible objects; see also, Russell's Paradox.
73. ↑ It should be noted that Verbal Logic, or its extension First-Order Predicate Logic, has no more particular quantification of objects, except to say that an object is not unique or that a
predication is universal. Numeric quantification is the subject of an extension of Second-Order Predicate Logic that we call Set Theory.
74. ↑ Of course, an object that consists entirely of nothing is ideal and is therefore only approximated by the symbol ∅. In addition, an object may be "empty" for a particular condition where some
specified aspect of the object contains nothing but where the object, as a whole, exists.
75. ↑ For use of the term "yet," see the definition of "conjunction," above.
76. ↑ It should be noted that, in this and the prior definitions, "object" may refer to a single object or to an individual set or other grouping of objects, where those objects are collectively
identified as an individual set or grouping.
77. ↑ See the previous definition for use of the terms "has been."
78. ↑ Note that these terms are not prepositions in the spatial-temporal sense but have a meaning that is idiosyncratic to logic, such as where a statement is valid "over every possible
interpretation." In this manner, these terms state an inferential relationship, rather than a spatial-temporal relationship.
79. ↑ Note that this definition of the term "segment" provides a more refined or specific meaning than provided by the earlier use of the term in the definition of the separation, division,
disjunction, differentiation, or disunion of another object, as stated in the above section on Combination.
80. ↑ It is important to note that the contents of a set, or any other grouping, are not assumed to be ordered unless expressly stated otherwise.
81. ↑ Of course, according to the theory of this outline, "event" is generally synonymous with object, referent, occurrence, instance, fact, self, expression, thing, manifestation, actualization,
presence, or construct. Therefore, this outline's use of terms associated with "time" includes a sequence or order of any of those objects as well.
82. ↑ Best represented linguistically by the interrogative "when".
83. ↑ Unless the moment referred to is the beginning or ending moment of time, a moment is always a segment of time.
84. ↑ The term "immortal" is not used since that denotes a living condition that is not defined as such in this outline, although it may be implied by the later definitions of consciousness and
85. ↑ Note that this term is completely distinct from the concept of "intentionality," which is part of the philosophy of Utilitarianism.
86. ↑ Although not explicit, this definition and the prior distinction drawn between the self and an other provide a contextual frame of reference for the term "self" that can be distinguished from
other objects.
87. ↑ We have chosen to define these terms according to their ordinary usage in the English language, which does not denote their usage as requiring any sentience for their expression but are
commonly used to describe the mechanical action of an inanimate object or process - e.g., "a falling rock will follow its intended path to its target." In the sense of this outline's definition,
a non-living object rolling down a hill does have an intention or purpose by the fact it is moving according to a thermodynamic tendency but without the necessity of consciousness or self
awareness. In other words, the "intention" or "purpose" of the rolling, non-living object is provided by some other motivating force. Therefore, in our system, intention, like consciousness (see
below), is not necessarily the possession of any particular object.
88. ↑ Unlike awareness or consciousness, this definition does not necessarily connote a spontaneous self-motivation, as opposed to purely programmed behavior. The definition of existence itself is
beyond the scope of this outline.
89. ↑ Let's assume that an "awareness" or "consciousness" is defined to be merely a temporal manifestation of intention or purpose. Furthermore, let's define an "intention" or "purpose" as
essentially a momentary expression of self that may connote a continuity of intention or purpose but which does not necessarily describe a self-awareness or sentience, and that self-awareness or
sentience are prerequisites for knowledge. Therefore, according to our definition, a non-living object rolling down a hill, or a chemical substance that is crystallizing, may express an awareness
or consciousness of its motion or self-ordering known to others who are self-aware or sentient and observing it but only insofar as this expression is not known to the rolling or crystallizing
objects and must therefore be knowable by others. In this sense, awareness or consciousness would not be the possession of any particular object but would be a universal phenomenon that is known
only by sentient beings - i.e., beings who are "self-aware"; whether or not it is known by some particular object is another matter entirely. Hence, such a definition of consciousness would be
the most general definition possible and would therefore be consistent with the possibility of what has been called a "cosmic" or "universal" consciousness but does not necessarily require the
existence of such a phenomenon. According to this definition, a log rolling down a hill or a chemical that is crystallizing might possess a temporal continuity that we have defined as
"consciousness" or "awareness" while it expresses its thermodynamic "motivation"; however, such motion is not spontaneously self-ordered or self-motivated, is not necessarily (and, by our
definition of "knowledge" stated later in this outline, is not at all) known by the log or crystal, and could therefore never be described as sentient or self-aware (i.e., by our definition of
knowledge stated later in this outline, there must be the existence of a sentient being for the log or crystal's own existence to be known). Therefore, although we might assume the existence of
an objective Universe, it does not exist for all practical purposes until we can observe it; see the "Anthropic Principle." Therefore, in our system, consciousness or awareness is not necessarily
the possession of any particular object. The fact that a particular self may believe that consciousness is its own possession does not rule out the possibility it is not. Our definition of
consciousness is merely the most general statement of this idea that is possible under all circumstances. It may be the possession of a particular object or it may be something global. We simply
choose the more general definition for more general purposes and to distinguish this very general idea from sentience as that form of consciousness to which we can ascribe a more particular
self-motivation or spontaneous self-ordering - that "something more" that makes us self-motivated, knowing beings.
90. ↑ This understanding is similar to that of John Locke, who defined the realization of "self" as the experience of a continuity of consciousness.
91. ↑ Sentience connotes the necessity of a spontaneous self-motivation as essential to an expressed intention or purpose that is more than mere computation. What exactly that something "more" is
that results in spontaneous self-motivation is beyond the scope of this outline; see "Fuzzy Logic."
92. ↑ According to our definition, both self-awareness and the awareness of others are necessary for a finding of "sentience."
93. ↑ In this context, "thinking" is a spontaneously self-motivated condition and therefore necessarily more than mere computation.
94. ↑ Hence, the mere application of logic does not, by itself, constitute "thinking"; something more is needed to constitute true intelligence.
95. ↑ As such, these objects may be purely structural (i.e., syntactic) and without semantic meaning, as well as meaningful and semantic. However, it should be considered that even syntax has some
meaning in regards to its purely structural/relational purposes.
96. ↑ Best represented linguistically by the interrogative "why".
97. ↑ By self-reflection.
98. ↑ Therefore, by our definition, the experience of a pure emotion is only "meaningful" once it has been put in relation to the self or another.
99. ↑ Coincidentally, the understanding of language, meaning, and structure occur in separate regions of the human brain.
100. ↑ In this sense, meaning can be expressed by a non-sentient being, even if that being cannot know its own communication. For example, an "idea" may be the electrical impulse of the brain of
someone existing in a persistent vegetative state that is communicated by an EEG or the apparently random utterance of the patient, even though the patient is most likely unable to comprehend the
meaning of this "message," the meaning of which is limited to the physician's diagnostic purposes. Another example might be an organism such as an echinoderm, which expresses an "idea" to a
biologist through the function of its primitive nervous system, even though the organism has an extremely limited self-awareness, if any. The same is probably not true for a rock or plant since
consciousness at this level probably does not exist for that individual.
101. ↑ As such, relations that are purely structural may have a kind of meaning vis-a-vis other structural elements but they do not have a meaning in regard to the observer outside of this structure.
102. ↑ According to our earlier definitions, information must be conveyed to be meaningful (i.e., put in relation to something), even if only to one's own self via self-reflection. Furthermore,
information that is expressed but not yet conveyed constitutes an inchoate meaning for the individual who may convey it, and meaning continues to exist for information that is conveyed but that
meaning becomes particular to the recipient's understanding and may not match the precise meaning that was intended by the individual who conveyed it.
103. ↑ As such, a symbol is a generalization or abstraction of the particular experience of thinking of an idea, cognition, or concept, whether or not that meaning is conveyed or communicated.
104. ↑ Because a purely formal and symbolic language, such as Predicate Logic, may state the meaning of an inference with only one symbol, and because this inference need not be communicated or
otherwise conveyed to be valid, this outline assumes that all meaning that is symbolically expressed fits within the definition of "language," even if this meaning is not communicated so that it
becomes "information."
105. ↑ As such, a grammar includes a syntax.
106. ↑ This definition of "rhetoric" is very broad and would encompass any meaningful use of language, including linguistic constructions not often considered in the same category as classical
rhetoric, such as poetry or self-reflective meditation.
107. ↑ An idea, thought, cognition, or concept may be essentially structural and not semantic other than by any meaning inherent in the structure itself. As such, a word may be an essentially
structural element of grammar and need not convey any meaning beyond the structure.
108. ↑ As such, a word or string is a kind of symbolic expression, as is a gesture, which can also be considered a kind of signifier.
109. ↑ As such, it is possible for a word or string to be composed of only one letter.
110. ↑ As such, a lexicon or vocabulary is composed of both structural and semantic words or strings.
111. ↑ Therefore, to be a sentence as defined in this outline, the combination of words must be intentional.
112. ↑ Later, these terms are refined to mean a "complete," well-formed expression.
113. ↑ As defined above, grammar includes syntax, so the mention of both in this definition is redundant, albeit explicit.
114. ↑ Although phrases are generally considered to be groups of more than one word, string, letter, character, symbol, or other signifier, individual words, strings, letters, characters, symbols, or
other signifiers are also phrases by this definition.
115. ↑ As this term is defined and used in the study of informal verbal logic. It should be noted that to "state a proposition" is entirely different from a "state of being," and therefore the term
"state" has an entirely different meaning in those two contexts.
116. ↑ As defined later in this outline, a logical statement concerns only the form of the statement, not its substance, and is generally declarative or descriptive in structure (as distinguished
from interrogatory, exclamatory, suggestive, or imperative statements).
117. ↑ For the purposes of this and later outlines, the term "sentence" or "formula" will be considered synonymous with the terms "statement" or "message."
118. ↑ By this definition, all sentences, statements, formulas, or messages are necessarily well formed.
119. ↑ As defined above, grammar includes syntax, so the mention of both in this definition is redundant, albeit explicit.
120. ↑ In traditional Aristotelian logic, proposition and statement were identical terms and not distinguished from each other, whereas in modern Predicate Logic this distinction is fundamental. In
this outline, however, whenever the term "statement" is used, it is assumed that a proposition is also present, unless stated otherwise.
121. ↑ An object, without more, simply exists. Therefore, a substance is something more abstract since it is a definition of an object. However, although a definition might ultimately describe the
meaning of an object in terms of its conditions, this present definition merely describes an object's substance as something different and a priori vis-à-vis other objects. Therefore,
identification in this context does not connote an understanding of meaning or conditions associated with that definition but merely that something may exist and be identified as different from
other objects.
122. ↑ In this sense, the conditions of an object are an expression of the "idea" of that object. It is this non-Aristotelian (in fact Stoic, although ironically Platonic) abstraction of form from
substance that is essential to the later development of the Predicate Calculus (symbolic logic) and the Principle of Extensionality. The Platonic connection is ironic because Aristotle, Plato's
student, did not effectively distinguish subjective meaning from an objectively predicated inference. The distinction of form from substance is later fully developed and rigorously stated by
Immanuel Kant in his seminal work, The Critique of Pure Reason. In that work, Kant states that what we consider a priori may also be some combination of purely a priori mental constructs with a
posteriori experience. An example that he gives of such a combination is causality (and here the author is inclined to agree since causality is considered by him to be an extension of relation,
which is a more primitive notion upon which causality must necessarily depend for its existence).
123. ↑ This definition addresses a distinction that is necessary for the abstraction of a logical inference, also called a predicate, from the subject of the predicate. Without this distinction,
there can be no separate, symbolic representation of an inference apart from its subject, and it is the study of the validity of the inferences, and not the subjects, which is the purpose of
Predicate Logic.
124. ↑ In this sense, the terms "substance" and "form" are relative and recursive terms; the particular form of an object is also the substance of an analysis of that form. In addition, and unless
the object is ideally and entirely undifferentiated (which, beyond the mere idea of existence or a location in space-time, is utterly meaningless), forms must also be contained within an object's
substance. A philosophical question exists as to whether the substance of an object can be known apart from its forms or conditions, or whether the "substance" of an object is nothing but forms
and conditions. In this outline, we assume that some substance exists apart from forms and conditions, even if this is merely a mental construct used to articulate the concept of form by the
creation of a distinction which is purely intellectual, as with the definitions of "identification" and "nothing." In this sense, although an ultimate substance is assumed or hypothecated to
objectively exist (at least for the sake of argument), it is not assumed that an "ultimate substance" is knowable or can be proved (even though it may be assumed to objectively exist). For
purposes of this outline, an "ultimate substance" is purely existential and is therefore an undefinable, a priori, conceptualization. This does not mean that such a conceptualization cannot exist
in objective reality since, at the very least, it exists as an object of the mind, which itself is defined as an object of reality.
125. ↑ We reserve the use of the term "consistent" for the definition of truth and distinguish the term "constant" for other purposes.
126. ↑ Often the word "true" is omitted and implied when a valid condition is said to "hold."
127. ↑ The terms "determined," "certain," "absolute," "particular" or "specific," "exact," "precise," "consistent," or "proved" have a more precise or rigorous meaning in this system and are defined
128. ↑ In regard to Proof Theory, this definition pertains to the "satisfiability" of a proposition, not its ultimate consistency or, in the language of Proof Theory, its "validity," despite the fact
that we use the term "valid" in this outline as synonymous with "satisfied," so as to comport with ordinary usage. Of course, "inconsistent" also means "not consistent with truth" and, therefore,
false, so we continue to use this term as a synonym for false, even though it is also used in regard to the certainty of an outcome, whether true or false. To distinguish the terms used in this
outline from their Proof Theory counterparts, we use the terms "satisfiable" and "validated" when referring to the Proof Theory concepts. To avoid conflict with the definitions of Proof Theory,
"consistent" is not one of the definienda for the definition of truth (although "inconsistent" is permissibly used for the definition of false without creating such a conflict).
129. ↑ The symbol ∅ and the numeral 0 are generally considered to be logically synonymous terms.
130. ↑ Also called the "logic value of the expression." We avoid the term "logical condition," which connotes a much broader meaning than the expression "logical value."
131. ↑ In informal verbal logic, only two truth values exist: true and false. A two-valued system of logic is called "bivalent."
132. ↑ In some systems, this symbol may represent bidirectionality of implication and therefore its use must be explicitly defined to avoid confusion.
133. ↑ Note that, unlike the definition of truth given above, this definition does not require that a condition must exist, only that it consistently exist or not exist. According to this definition,
although the condition may be either true or false, what matters is that it remains true or false. This corresponds to the principle of ultimate "validity" found in Proof Theory, whereas our use
of the term "valid" above pertains to the idea of "satisfiability," as this term is used in Proof Theory. In Proof Theory, the terms "validity" and "satisfiability" are terms of art and peculiar
to that doctrine, and their meanings in that regard do not comport with ordinary usage. To distinguish the terms used in this outline from their Proof Theory counterparts, we use the terms
"satisfiable" and "validated" when referring to the Proof Theory concepts.
134. ↑ Note that this definition does not require that a certain condition remain forever certain. A condition is certain only so long as it is not contradicted. Once contradicted, the certainty
evaporates and the condition becomes once again "uncertain." As such, one of the premises of this outline is that absolute certainty on any question, like an absolutely universal proposition, is
not provable. Because the validity of every question ultimately rests on the validity of its assumptions, any change in those assumptions may cause a contradiction to arise. Even the validity of
1 + 1 = 2 depends entirely on the assumptions upon which that hypothesis is based and a change in those assumptions may cause such a statement to become invalid (see Peano Arithmetic). Therefore,
according to Gödel's Incompleteness Theorem, the validity of "first assumptions" will always rest with the discipline of Philosophy, not Science.
135. ↑ Therefore, as defined in this outline, "can" connotes a certain universality of existence whereas "may" only connotes the possibility of existence. Therefore, "can" and "may" are not
essentially synonymous.
136. ↑ This definition is important for inclusion of the intransitive form of the verb tense for "can."
137. ↑ Here it is important to show that the verbs responsible for permissibility are also subject to negation.
138. ↑ The complementary truth value of possible is impossible. However, the complementary truth values of never/impossible and always are ambiguous. This is because the complement of "always" is
never or possible, while the complement of "never" is always or possible. This means that, philosophically speaking, we can only truly prove with logic alone that something does not happen or
does not exist; logic, by itself, can never prove the mere fact of existence - existence must be assumed at some point as a necessary antecedent to every logical proof. See below, "The Modern
Square of Opposition."
139. ↑ As seen later in this outline, necessity may be defined entirely in terms of the disjunction of all sufficient terms.
140. ↑ As seen later in this outline, sufficiency may be defined entirely in terms of the conjunction of all necessary terms.
141. ↑ We know from the definition of "sentence" or "formula" given in the preceding section that a well-formed sentence or formula must also be syntactic and grammatic.
142. ↑ Note that, by this definition and the definition of "phrase" given above (which must, by that definition, be only a portion of a sentence or formula), a phrase must now be any syntactic
expression that is not whole, entire, or complete but that cannot, by this definition, be "well-formed."
143. ↑ Although a word or string is not, in itself, a well-formed sentence, a word or string may be considered a well-formed word or string where its spelling obeys the rules of common usage.
144. ↑ In this outline, we have chosen to define knowledge in terms of sentience; i.e., knowledge is the special province of self-awareness. Therefore, although we may loosely say that the log
rolling down a hill comes to "know" a rock when it collides with that rock, causing its trajectory to be altered by the collision, and although the terms "consciousness" or "awareness" have been
very broadly defined as not identified with any particular entity but as merely the temporal expression of intention or purpose, it is the author's opinion that defining "knowledge" on such broad
terms would make its usage relatively meaningless for its purpose, which is to relate sentient entities (the ultimate arbiters of what it means for some condition to be "true") to truth finding.
145. ↑ Of course, the existence of an object "in itself" - i.e., apart from the conditions that describe it - can never be logically proved; it must be assumed.
146. ↑ By virtue of the definition of knowledge, the knowing "object" must be sentient.
147. ↑ For the purposes of this outline, the definition of "observation" is extremely broad and includes the act of knowing "objects of the mind" as well as tangible or material objects.
148. ↑ It should also be noted that, because this definition is stated in terms of knowledge, and because knowledge is stated in terms of sentience, observation on these terms is solely the province
of sentient beings. Therefore, although we can say that a camera might "observe" a subject, it is really not the camera that makes the observation but the person who is operating it that does so.
149. ↑ An empirical truth is a condition that is believed to exist as a consequence of the immediate observation of that condition by one's senses. Note that "immediate observation" may occur by
reading the measurement of an instrument that acts as an extension of the observer's senses where, without that instrument, ordinary and immediate observation of a particular property would not
be possible. Of course, one must reasonably believe, again through logical analysis and empirical observation, that the measuring instrument itself is capable of transmitting measurements or
other data that may be considered reasonably reliable for proving the truth of the property observed.
150. ↑ Whether known empirically or intuitively, an object that is the subject of a proposition is only "true" if we may know the truth of its predicate conditions.
151. ↑ An empirical truth is "proved" and "consistent," according to the terminology of Proof Theory.
152. ↑ In human law, a "presumption" has the additional property of shifting the burden of proof to the party that did not previously have the burden of proof wherever this condition is applicable.
153. ↑ Best represented linguistically by the interrogative "how".
154. ↑ Reason is more general than logic because logic, as defined later in this outline, relates more specifically to an analysis of the validity of inferences. Therefore, logic is a special case of reason.
155. ↑ Defining this term to comport with ordinary English language usage, our definition of "reason" does not require sentience for its meaning, unlike knowledge. Therefore, a computer's
calculations may be reasonable even if they cannot be known by the machine that is performing the process.
156. ↑ Note that the terms of a condition may not be certain by definition and the existence of uncertain terms may be the quality that is determined.
157. ↑ As such, a belief is an abstraction of the condition of determinability from the thing that is believed to be determined. Therefore, a belief can exist without actually knowing that a thing
does in fact exist. For this reason, belief and knowledge are fundamentally different concepts.
158. ↑ Because belief requires knowledge for its definition in this outline, according to our system only sentient beings possess the ability to "believe" a proposition may or may not be true.
159. ↑ i.e., a rigorous condition may be consistently false, as well as consistently true.
160. ↑ i.e. reasonable, valid, and certain.
161. ↑ Note that the same can be said of the condition's definition or proof.
162. ↑ This definition connotes no claims as to the knowledge of certainty or possibility.
163. ↑ As such, a term may be a complete proposition or just part of a proposition, so long as a well-defined meaning exists for the term.
164. ↑ And, as defined earlier, a term must also have well-defined conditions for its content. Therefore, all proper terms are also necessarily well-defined.
165. ↑ i.e., without definition or proof
166. ↑ Hence, to be truly primitive, such a term must be a priori.
167. ↑ See above, Semiotics
168. ↑ See the Introduction to this outline.
169. ↑ Italicized, lower-case Roman letters are often used to denote an unspecified object. In this definition, the subject term is an object of the sentence, as is the predicate term.
170. ↑ Predicates are often represented symbolically by italicized, upper-case roman letters.
171. ↑ In this sense, the predicate term is some attribute, circumstance or other condition regarding the subject term, and the predicate and subject terms must therefore necessarily stand in
relation to each other.
172. ↑ A premise, conclusion, or conditional statement all contain both subject and predicate terms.
173. ↑ See also the footnotes to the definition for condition or predicate above for an explanation of the distinction of form from substance that lies at the heart of the definition of "subject."
174. ↑ Although the predicate inferentially "follows" (see below) from the subject, subject and predicate are not necessarily stated in any particular word order, except as provided by the rules of
grammar for the language used.
175. ↑ In an English-language unconditional statement, the subject term usually (although not necessarily) precedes the copula (see below) in word order and the predicate term usually (although not
necessarily) follows the copula in word order.
176. ↑ In a conditional statement, a subject term is synonymous with an antecedent; a predicate term is synonymous with a consequent (see below).
177. ↑ In a logical statement, a copula is usually a linking verb with an accompanying subject complement or adverbial phrase. As such, the copula of a sentence is more properly considered a part of
the predicate, rather than the subject. Also, formal (symbolic) logic is only concerned with the validity of predicates and assumes the validity of subjects as presumed premises of the sentence.
178. ↑ Note that the term "one" is often omitted and is implied by the use of the terms "exactly," "precisely," and "only."
179. ↑ As such, a general statement may also operate as the statement of a class.
180. ↑ A purely intuitionist understanding of this concept would be that a valid substitution is "justifiable," not necessarily truth-preserving.
181. ↑ Sometimes expressed by the Latin term "vice-versa."
182. ↑ This is a slightly different meaning from the use of this term in law, where it more particularly means a condition the occurrence of which causes a duty that has previously arisen to be
extinguished (as opposed to a condition precedent that must occur before a duty will arise).
183. ↑ "Q.E.D.," "therefore," "as such," "wherefore," or "ergo"
184. ↑ Therefore, an antecedent and a consequence must necessarily stand in relation to each other.
185. ↑ As such, a condition may be particular or general, simple or compound, and conjunctive or disjunctive.
186. ↑ Often the word "true" is omitted and implied when a valid condition is said to "hold."
187. ↑ In other words, the consequence is a necessary result of the premise.
188. ↑ The logical inference of implication follows from the sufficiency, not the necessity, of any antecedent premise, since there may be more than one sufficient premise which may or may not also
be necessary to the occurrence of the consequence. However, it is important to remember that any sufficient premise might also be considered a conjunction of all its necessary parts and that, in
the absence of any necessary condition, there will be no sufficiency for the occurrence of the consequence. Therefore, a sufficient premise always implies the occurrence of all necessary conditions.
189. ↑ It is important to note that a valid expression can result where both the antecedent and the consequence are false or where the antecedent is false but the consequence is true so long as that
relation is consistent (and the latter relation may occur where the antecedent is sufficient but not necessary). However, an expression where a sufficient antecedent is true and the consequence
is false is always false by virtue of the definition of sufficient conditions.
190. ↑ In other words, the consequence is not a necessary result of a sufficient premise.
191. ↑ As defined earlier, a "belief" is a reason to know the truth or that the truth can be known but it is not, in itself, knowledge of the truth.
192. ↑ This refers to the truthfulness of an inference (see below).
193. ↑ The condition may be either a premise or a conclusion.
194. ↑ As distinguished from an argument, where the conclusion must be proved to be true before it will be accepted as factual.
195. ↑ Sometimes the distinction between an explanation and an argument (which are both examples of passages) must be determined from the context in which the passage exists. Due to this similarity,
arguments can be restated as explanations. In cases where the intention of the passage is truly ambiguous, the passage may be considered as either an argument or an explanation, depending on how
the audience or author chooses to view it.
196. ↑ Because an object is the most general form of entity, it may or may not be logically constructed. Therefore, an object may be well-defined (i.e., logically and internally consistent) or
ill-defined (i.e., illogical or not internally consistent). Sets, on the other hand, must be well-defined if we want them to be a logically sound building block upon which the rest of mathematics
may be constructed.
197. ↑ A set may be empty and a non-empty, un-ordered set (the most general kind of set) disregards any order or repetition of the objects contained within it. Whether a well-defined set may be
infinite is a subject addressed in the Set Theory outline.
198. ↑ The un-ordered set is considered the simplest, logical definition of the concept of a collection that remains useful to set theorists for reasons that are revealed as one studies Set Theory.
199. ↑ Note that this definition is distinguishable from the definition of "elemental object, member, or point" given in the Semiotics section of this outline. A "member of a set" may be another set,
whereas an "elemental object" is something that is essentially indivisible, although it may be a set by itself when existing in the form of a singleton.
200. ↑ And, therefore, this is an object for which the definition of membership in the set is true.
201. ↑ In NBG Set Theory, the term "class" has the more specific meaning of any combination or collection of objects which share a common condition but for which the definition of the class is not
necessarily well-defined, and therefore is not synonymous with set.
202. ↑ The expression "categorical statement" implies a categorical proposition.
203. ↑ As distinguished from a class, which treats of objects more generally.
204. ↑ The logical operators "and", "or", "not," and "if-then" are all examples of logical inferences.
205. ↑ A purely intuitionist understanding of this concept would be that a valid substitution is "justifiable," not necessarily truth-preserving.
206. ↑ A rule may be made more particular by qualifying specific conditions for its application.
207. ↑ The prepositions "by" and "with" used in this definition are generally implied and omitted.
208. ↑ In the law, a cause that is necessary but not sufficient is often called a "substantial factor."
209. ↑ Even without the explicit use of the term, "reason" (see above) is necessarily implied within the present definition.
210. ↑ Formal arguments generally use symbolic methods of analysis, such as we find in Predicate Logic, since metalinguistic context is no longer directly determinative.
211. ↑ Logical form should not be confused with the syntax used to represent it; there may be more than one set of symbols that represents the same logical form, depending on the language used.
212. ↑ Usually used as a descriptive prefix to another term, such as "meta-language."
213. ↑ i.e., by regarding the content terms as mere placeholders for any particular subject matter, like blanks on a form.
214. ↑ For purposes of this outline, "statement" and "proposition" are considered generally equivalent terms.
215. ↑ If no truth value is claimed then it is assumed that the claim for the terms, statement, proposition, expression, or argument is one for truthfulness.
216. ↑ Logical equivalence is concerned with only two circumstances: (1) the truth value of the proposition and (2) the minimal set of conditions that are sufficient to produce that truth value.
Therefore, logical equivalence is not the same thing as Second-Order logical identity, which is concerned with all possible conditions that define the object.
217. ↑ The meaning of a logical statement can be either its truth value or a meta-logical description of a condition, logical inference, or term. Likewise, the meaning of a mathematical statement can
be either its numeric value or the quantitative concept it expresses.
218. ↑ Of course, any two objects that are identical in all ways, including position in time, space, and energy, would not be distinguishable as two objects and, logically, could only be, in fact,
one object. Therefore, any well-defined definition of equality must be qualified in some way so that the relation only reflects specified qualities, not all possible qualities (see Russell's Paradox).
219. ↑ Where such a state exists.
220. ↑ The complement of an object is generally formed by the logical operation of negation.
221. ↑ By itself or another.
222. ↑ Knowledge is essential to this definition for, without it, an undefined object (i.e., one that is not identified in any manner) would be utterly without meaning.
223. ↑ As such, not every axiom of general applicability is an axiom schema. An axiom schema must be an axiom that relates to a series of specific objects, even if that series is stated in the most
general terms.
224. ↑ Stands in a consistent and valid relation with the other object.
225. ↑ Can be known without proof or definition.
226. ↑ A principle of Bivalence or "Two-Valued" Logic.
227. ↑ Applies only to informal verbal logic and bivalent First Order Predicate Logic, but not necessarily to other logical forms, such as Fuzzy Logic.
228. ↑ A principle of Bivalence or "Two-Valued" Logic.
229. ↑ Pursuant to the Law of Non-Contradiction stated below, one and only one of these states must be true at any particular moment.
230. ↑ This axiom applies only to informal, verbal logic and bivalent, First Order, Predicate Logic, but not necessarily to other, more Intuitionist, logical forms, such as Fuzzy Logic.
231. ↑ The Law of the Excluded Middle is not strictly Intuitionist since, where neither A nor not-A has been proved or disproved, we cannot assume the truth of this axiom.
232. ↑ A principle of Bivalence or "Two-Valued" Logic.
233. ↑ Applies only to informal verbal logic and bivalent First Order Predicate Logic, but not necessarily to other logical forms, such as Fuzzy Logic.
234. ↑ In other words, for every logical object there is exactly one complement, and each logical object is necessarily contradicted by its complement. Truth is always the complement of falsity, and
falsity is always the complement of truth. Therefore, truth and falsity necessarily contradict each other.
235. ↑ This axiom is only applicable to systems of bivalent logic.
236. ↑ Logical Necessity and Tautological Truth are logically complementary to a Logical Impossibility.
237. ↑ It is often disputed whether a particular claim can constitute a logical necessity under all circumstances. For instance, the proposition that "the set of all sets must contain itself as a
member" is contradicted by some non-standard arithmetics created under various applications of the Peano Axioms. This is also known as the logical Problem of Universals and relates to Russell's
238. ↑ The essential difference between a logical necessity and a tautological truth is that the latter is necessarily well-defined, whereas the former is not necessarily so.
239. ↑ Logical Necessity and Tautological Truth are logically complementary to a Logical Impossibility.
240. ↑ A tautology is an explicit identification of the same logical objects.
241. ↑ Although a pure tautology in informal verbal logic is both self-consistent and self evident (like an axiom), it is devoid of any real meaning (unlike an axiom or definition), and therefore
should be avoided. The same is not true of predicate logic, where the meanings of the inferential terms, and not the subject terms, are generally the subject of examination.
242. ↑ The essential difference between a logical necessity and a tautological truth is that the latter is necessarily well-defined, whereas the former is not necessarily so.
243. ↑ This is also called an "arbitrary tautology" and is represented by the same symbol for truth. An arbitrary contradiction is represented by F or, more commonly, by an inverted T, ⊥.
244. ↑ In formal logic, we do not determine the truthfulness of the subject terms; we only determine the truthfulness of the inference. Determining the truthfulness of the subject terms may be a
worthy endeavor (i.e., so that we construct a sound argument), however it is not necessary if we are analyzing the inference, in which case we can assume the truthfulness of the subject terms for
the "sake of argument."
245. ↑ The proof of this theorem is so obvious that it can be properly classified as an axiom or corollary.
246. ↑ It should be noted that bidirectional implication ("if and only if", symbolized by ↔) also expresses the relationship of equivalence.
247. ↑ The latter symbol is more particularly used to denote that a definition follows.
248. ↑ The term "equivalence," rather than the term "equal," is used to describe this relationship since the idea of equivalence does not necessarily connote the required condition in logic of
identical form, or the required condition in Set Theory or mathematics of identical meaning and/or quantity. Because identity is a concept of Second-Order Predicate Logic and is not definable in
first-order terms, the equals sign is not seen at all in First-Order Predicate Logic. As such, equivalence is only concerned with whether two expressions evaluate to the same truth value, which
is why every equivalence in logic is essentially nothing more than a tautology (although one we find useful and desirable). Equality (identity), on the other hand, is the equivalence relation
which every thing has to itself and to nothing else and which satisfies Leibniz's Law (a second-order expression): ∀x∀y[(x = y) ↔ ∀P(Px ↔ Py)] (entities x and y are identical if and only if any
predicate possessed by x is also possessed by y and vice versa).
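Footnote 248's second-order formulation can be checked mechanically; the following Lean 4 sketch (the theorem name and proof script are our own illustration, not part of the outline) proves both directions of Leibniz's Law:
```lean
-- Leibniz's Law, as in footnote 248: x and y are identical iff every
-- predicate P holds of x exactly when it holds of y. Illustrative only.
theorem leibniz_law {α : Type} (x y : α) :
    x = y ↔ ∀ P : α → Prop, P x ↔ P y := by
  constructor
  · intro h P
    rw [h]            -- substituting y for x makes the goal `P y ↔ P y`
  · intro h
    exact (h (fun z => x = z)).mp rfl  -- instantiate P with `x = ·`
```
The second direction shows why the law is genuinely second-order: the proof must quantify over, and instantiate, the predicate P itself.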
249. ↑ "if-then"
250. ↑ e.g., the operators of equivalence and implication are ≡ and →, respectively.
251. ↑ Distinguish "Begging the Question" in Fallacies of Presumption.
252. ↑ Distinguish "Begging the Question" in Fallacies of Presumption.
253. ↑ In the first example, we see that the value of a known x in the antecedent does not depend on the result of the conclusion for an unknown y (and the intermediate conclusion x is merely a
tautological implication by identity of the antecedent x). However, in the second example, we see that the value of an unknown y, stated in the antecedent x → y, depends on the value of the
conclusion y which, of course, is also unknown, being the same term. Because the proof of an unknown conclusion must be based on known antecedents, we cannot prove the truth of an hypothesis
where the truth of one of the antecedents must also be proved based on an evaluation of the conclusion.
254. ↑ Distinguish "Begging the Question" in Fallacies of Presumption.
255. ↑ Note that this does not necessarily mean an atomic, or even a simple, operand. Hence, a negation may operate on one operand that itself is composed of multiple terms, but it will negate the
value of the operand as a whole.
256. ↑ i.e., containing more than one antecedent.
257. ↑ Note that this logical use of the word "or" as an inclusive disjunctive is actually equivalent to the grammatical expression "and/or."
258. ↑ Hence, the operation of inclusive disjunction connotes existence.
259. ↑ i.e., containing more than one antecedent.
260. ↑ The remaining alternative premises must be complementary to the consequent.
261. ↑ Hence, the operation of exclusive disjunction connotes uniqueness.
262. ↑ The absence of any operator is also generally interpreted to be the operation of conjunction.
263. ↑ i.e., containing more than one antecedent.
264. ↑ Hence, the operation of conjunction connotes universality.
265. ↑ As explained further in this outline, this definition is not "truth-functional" or "truth-preserving" because an ambiguity may exist as to the use of necessary and/or sufficient conditions.
However, this ambiguity may be eliminated through a refinement of the definition (i.e., by considering a conditional statement false if and only if the antecedent is true and the consequence is
false, see below), in which case the unambiguous form is called material implication. In contradistinction, the ambiguous form of the present definition is sometimes called linguistic implication.
266. ↑ Aristotelian logic concerns itself with the meaning of the terms of a logical argument. Therefore, under the Aristotelian system, a conditional statement is true if and only if both the
antecedent is true and the consequence is true, or if and only if both the antecedent is false and the consequence is false, making the Aristotelian form of the conditional statement effectively
bidirectional (see below, "Bidirectional Conditional Statement"). However, under the Aristotelian system, and unlike a truly bidirectional conditional statement, the truth value of the antecedent
in relation to that of the consequence in a strictly unidirectional material implication is not without ambiguity. This ambiguity occurs because, in the Aristotelian system, the truthfulness of
the statement cannot be known with certainty where an antecedent is necessary but has unknown sufficiency, or where a consequence is sufficient but has unknown necessity. In a bidirectional
conditional statement, by contrast, there is no such ambiguity; a bidirectional conditional statement is only true if both the antecedent and consequence are true or if they are both false. Also,
unlike a bidirectional conditional statement, the converse of an Aristotelian unidirectional conditional statement is never necessarily also true.
TRUTH TABLE (where p is a necessary condition for q but with unknown sufficiency; U = unknown value):
│ p │ q │ p → q │
│ T │ T │ T │
│ F │ T │ F │
│ T │ F │ U │
│ F │ F │ T │
TRUTH TABLE (where p is a sufficient condition for q but with unknown necessity; U = unknown value):
│ p │ q │ p → q │
│ T │ T │ T │
│ F │ T │ U │
│ T │ F │ F │
│ F │ F │ T │
The Stoics eliminated the ambiguity in the Aristotelian system by disregarding the truthfulness of the meaning of the antecedent terms of a statement. Instead, they focused solely on the meanings
of the inferential relationships contained within the logical structure of a statement. Hence, under the Stoic system, we assume the truthfulness or falsity of the meaning of the terms of a
statement "for the sake of argument"; all that matters under this system is that the structure of a logical proposition is itself correct, and the actual truthfulness of the meanings of the
subject, antecedent terms may be determined later. Therefore, under the Stoic system, the following argument would be considered true:
Example: If the animal does not have feathers then it is a bird.
The animal has feathers.
Therefore, the animal is not a bird.
Although the results of such an argument may not seem reasonable according to our everyday knowledge about the world, the Stoic method has been adopted by modern logicians because it allows for a
completely unambiguous analysis of whether an argument is logically correct in form, as distinguished from whether or not the terms of an argument are empirically correct; the proof of an
empirical truth for any of the terms may be conducted separately, and a substitution of the new, empirically true terms does not change the logical validity of the propositions - it only changes
the meaning of the result. The more intuitive understanding, where both the subject and predicate terms must be true for the result to also be true, is distinguished as a "sound" argument.
However, if we limit our interest to the validity of the inference and not the truthfulness of the subject terms, as under the Stoic method, a unidirectional conditional is false if and only if
the antecedent is true and the consequence is false, and this results in the following, unambiguous, truth table:
│ p │ q │ p → q │
│ T │ T │ T │
│ F │ T │ T │
│ T │ F │ F │
│ F │ F │ T │
Therefore, under the Stoic system, in order for a consequence to be true, the antecedents must be sufficient - to wit, they are either sufficient, albeit unnecessary, or they are necessary and
sufficient - and the consequence must necessarily follow, even if insufficient by itself, for the occurrence of the antecedent.
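The Stoic table is exactly the material conditional of modern logic, and it can be reproduced by brute-force enumeration. A minimal Python sketch (our own illustration, not part of the outline):
```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: false only when the antecedent is true
    # and the consequence is false; true in every other case.
    return (not p) or q

# Prints the four rows of the unambiguous Stoic truth table above.
for p, q in product([True, False], repeat=2):
    print(f"{p!s:5} {q!s:5} {implies(p, q)}")
```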
267. ↑ In this example, there is no claim that the animal of which we speak actually has feathers (although this fact might be assumed for the sake of understanding the inference) and, for that
reason, there exists no statement that reaches the definite (or even probable) conclusion that the animal is actually a bird. Therefore, although the inference is sufficient to be logically true,
it does not necessarily constitute a sound argument.
268. ↑ States a condition where a consequence follows from an antecedent.
269. ↑ Only a conjunction of all necessary conditions will achieve sufficiency for the existence of a conclusion.
270. ↑ This occurs because another antecedent may also be required to cause the consequence to occur; therefore, the truth of the antecedent does not necessarily imply the truth of the consequence
unless all necessary conditions are present.
271. ↑ A disjunction of any sufficient conditions will always achieve necessity for the existence of a conclusion.
272. ↑ This occurs because another antecedent may alternatively cause the consequence to occur; therefore, the truth of the consequence does not necessarily imply the truth of the antecedent.
273. ↑ However, there may be some other antecedent, such as legal emancipation, that may permit a finding of adulthood even without reaching the age of legal majority. Therefore, the age of legal
majority is a sufficient, but not necessary, condition for a finding of adulthood.
274. ↑ This definition only pertains to a determination of relevance, not a certain statement of truth.
275. ↑ The set is an unambiguous consequence, as opposed to other possible sets of consequences.
276. ↑ The absence of ambiguity in this inference is the reason that the Stoics described the truth of the conditional statement as requiring a sufficient condition for the antecedent and called this
inference a "material implication"; see the definition for "Material Implication" below.
277. ↑ As defined earlier in this outline, a condition is true if and only if it is impossible for the truth value of a conclusion to be different from the truth value of a sufficient premise.
Because it is unambiguous, this definition of implication, as opposed to the definition of linguistic implication given earlier, is used for the definition of the conditional statement in
Predicate Logic.
278. ↑ TRUTH TABLE:
↔ or ≡
│ p │ q │ (p ↔ q) or (p ≡ q) │
│ T │ T │ T │
│ T │ F │ F │
│ F │ T │ F │
│ F │ F │ T │
279. ↑ In informal verbal logic, as opposed to formal Predicate Logic, for a passage to prove a sound conclusion, and therefore contain a sound argument, two conditions must be satisfied: (1) at
least one of the statements must claim to know the existence of facts or reasons to believe that certain evidence is true, also known as a factual claim; and (2) there must be a claim that the
facts or reasons to believe those facts support (or imply) the conclusion, also known as an inferential claim. A passage that does not satisfy both of these conditions, such as may typically
occur in warnings or advisory statements, statements of unqualified belief or opinion, loosely associated statements, unsubstantiated reports, expository passages, or illustrations, does not
contain an empirically and logically valid argument and cannot prove an empirically and logically valid conclusion. Note, however, that according to the Stoics (see above), it may still prove a
logically valid, albeit unempirical, conclusion. Hence, in Predicate Logic, as opposed to informal verbal logic, the empirical validity of the factual premises is generally unimportant. See
below, "Vacuous Truth." The best practice is to test the validity of the inferential claim by testing the validity of the logical relationships between premises and conclusion, by assuming that
all the premises are true, before testing the validity of the factual claims since, if the inferential claim is false - i.e., if the supposed logical argument is faulty in its inferential method
- the validity of any factual claim, although it may be interesting in itself, will have no importance to the empirical validity of the argument.
280. ↑ i.e., valid logical relationships.
281. ↑ i.e., all the factual premises are empirically true.
282. ↑ Whether an argument is "sound" is only important to informal verbal logic and not to formal Predicate Logic, where the only concern is the validity of the argument and not the empirical
truthfulness of the premises on which the conclusion is based. However, where science is concerned, all arguments must be sound.
283. ↑ We have called this an axiom to remain consistent with the style of this outline and, although such a statement qualifies for the definition of an axiom, it is more properly a "proof by definition."
284. ↑ A single conditional statement or other inferential claim may become an argument if the antecedent and/or consequence are restated to posit both factual and inferential claims. However, such
constructions usually result in wordy and cumbersome statements and are therefore generally avoided.
Example: If the animal has feathers, and in this instance it actually does have feathers, the animal is a bird.
285. ↑ A conditional statement or other inferential claim may serve as either (or both) a premise or conclusion of a statement, proposition, or argument.
Example: If the animal has feathers then it may be a bird. (inferential claim stated as a premise)
The animal has feathers. (premise)
The animal is a bird, but only if it can fly. (conditional conclusion)
286. ↑ Therefore, the inclusion or omission of a false but irrelevant argument will not affect the argument's validity.
287. ↑ As a matter of style and clarity, however, it is rhetorically ineffective to include irrelevant statements in a well-formed argument.
288. ↑ Any inference that is invariably truth preserving is an example of deductive reasoning. Material implication is a specific example of deductive logic, but the definition also applies to the
operation of any well-defined logical operator.
289. ↑ As such, deductive reasoning tends to move from the more general case to the more specific - i.e., arriving at a particular conclusion by inference from one or more premises.
290. ↑ Unlike deductive reasoning (which is the process of arriving at a particular conclusion by inference from one or more general or universal premises), a conclusion arrived at through inductive
reasoning in informal verbal logic generally does not necessarily follow from the premises. This is not the case for mathematical induction, which does necessarily prove one, and only one,
conclusion. The case of mathematical induction is proved during the study of Second-Order Predicate Calculus.
291. ↑ i.e., in the example above, the major term is "have/has feathers."
292. ↑ i.e., in the example above, the minor term is "ostrich," a kind or member of some class, in this case the class of all birds.
293. ↑ i.e., in the example above, the middle term is "bird."
294. ↑ A sorites argument is an example of the Principle of Transitivity.
295. ↑ i.e., proved.
296. ↑ Which themselves may be the conclusions of other theorems.
297. ↑ A theory is not an unproven argument, as often assumed by lay persons who are misusing this term to mean a proposition that is stated hypothetically.
298. ↑ A lemma (see below) is a theorem that is particularly useful because of its many uses for the proof of other theorems. There is no real, logical difference between theorems and lemmas and it
is only a matter of custom that they are distinguished based on a perceived "usefulness" by the community of logicians that use them.
299. ↑ Usually in the context of a particular theory.
300. ↑ i.e., universal truths. See generally, the Problem of Universals.
301. ↑ At first glance it appears that the statements are logically equivalent. However, this is not necessarily the case. If "legal adulthood" can be satisfied by a judicial finding of emancipation
then a child may be an adult and still be under the age of 18, which is why this example was chosen. If this is the case then being over the age of 18 is a sufficient, but not a necessary,
condition for adulthood. Thus, by replacing the subject and predicate terms with each other, we are not guaranteed a logically true statement since we could have someone who is under the age of
18 and still be an adult, which means that the converse is not necessarily true as stated. However, if legal emancipation is not an option and being over the age of 18 is both a necessary and
sufficient condition for adulthood then the inferential relationship is bidirectional and the statement is logically equivalent to its converse.
302. ↑ An inferential statement and its contraposition are always logically equivalent. This can be readily seen from the above example. For bidirectional inferences, if being over the age of
eighteen is a necessary and sufficient condition for legal adulthood then, as we saw with conversion above, the replacement of subject and predicate terms with each other will not invalidate the
truth of the statement due to the bidirectionality of the inferential relationship. Furthermore, inverting both the subject and predicate terms by taking their logical complements does not
invalidate a statement that is bidirectional since, by definition, a bidirectional statement is true if both its subject and predicate terms are true or if both its subject and predicate terms
are false.
303. ↑ For material implications (i.e., inferences that are not bidirectional), the truth is also preserved for contraposition. If we suppose that being over the age of 18 is a sufficient but not a
necessary condition for adulthood because of the option of legal emancipation then the original statement is true. If we replace the subject and predicate terms with each other and then invert
both terms, we still have a true statement since, by definition, a material implication is only false if the truth value of the predicate term is different than the truth value of a sufficient
subject term, and we don't care whether a necessary subject term is true or false since, in that case, the truth of the statement will be the same as the truth of the predicate term in any case.
However, with contraposition, the truth is preserved because, even if the subject term is a sufficient but not necessary condition for the truth of the predicate term, replacing and negating both
terms will preserve the truth value of the original statement. For a definitive proof of this proposition, see the truth tables given below.
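Footnote 303's claim that contraposition preserves truth can likewise be checked exhaustively; a short Python sketch (again our own, not the outline's):
```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material implication

# p -> q has the same truth value as (not q) -> (not p) under every
# assignment, confirming that contraposition is truth-preserving.
assert all(implies(p, q) == implies(not q, not p)
           for p, q in product([True, False], repeat=2))
```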
304. ↑ The subject of a sufficient conditional statement may be restated as the predicate of a necessary conditional statement by the contraposition of the original terms.
305. ↑ The subject of the converse of a sufficient conditional statement may be restated as the predicate of the inverse of a necessary conditional statement, or the subject of the converse of a
necessary conditional statement may be restated as the predicate of the inverse of a sufficient conditional statement, and the two statements will remain equivalent.
306. ↑ Proofs of the following propositions are stated in the First-Order Predicate Logic outline.
307. ↑ "The way that affirms by affirming."
308. ↑ This is probably the most fundamental inference in all of logic.
309. ↑ In artificial intelligence, modus ponens is called forward chaining.
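To illustrate footnote 309, here is a toy forward-chaining loop in Python; the rule representation (a set of premises paired with a conclusion) is our own invention for the sketch:
```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: whenever every premise of a rule
    is a known fact, add its conclusion, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example: the rule "quacks -> bird" plus the fact "quacks".
print(forward_chain({"quacks"}, [({"quacks"}, "bird")]))
# {'quacks', 'bird'}
```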
310. ↑ "The way that denies by denying."
311. ↑ This is essentially the contra-positive form of Modus Ponens.
312. ↑ "The way that affirms by denying."
313. ↑ However, we cannot conclusively deduce that the bird is a duck, based solely on the information given.
314. ↑ This is a good example of how the meaning of terms in informal verbal logic tends to obscure the actual logical inference that exists apart from the truth of the meaning of the terms. This
occurs because the reader will tend to use their actual experience to evaluate the terms because of the ordinary meanings normally associated with them. If we know absolutely nothing about ducks
or birds that quack, then the example given rings true in every sense. Hence the value of a purely formal, symbolic logic, where the meaning of the subject terms is not important (those can be
proved in separate empirical investigations) and what is really important are the meanings and values of the logical inferences.
315. ↑ Even though the two conjuncts are also material implications, it is the conjunction of the terms, and not the material implications themselves, that are at issue here. In this sense, it might
also be possible to write the problem as (A→B & A→C) ⊢ (A→B ∧ A→C), but this would not serve to illustrate the intended point as clearly.
316. ↑ And B is also true, which simply restates the initial assumption.
317. ↑ The second consequent (that the bird must be a duck) is not necessary to illustrate the inference in its most essential character, but it is included to make the point that the illustration
need not be limited to only the first conjunct.
318. ↑ Of course, it is not possible to assume that both are true, based on the information that is given.
319. ↑ But, based on the information given, we cannot say with any certainty that it is in fact a duck.
320. ↑ Since at least one of two statements (A or B) is true, and since either of them would be sufficient to entail C, C is certainly always true in these circumstances.
321. ↑ The reason this is called "disjunctive syllogism" is that, first, it is a syllogism--a three-step argument--and second, it contains a disjunction, which means simply an "or" statement.
322. ↑ Note that the disjunctive syllogism works whether 'or' is considered 'exclusive' or 'inclusive' disjunction.
323. ↑ In this sense, we have our first inkling of mathematical induction.
324. ↑ However, we simply don't know with any certainty which of the antecedents is in fact true.
325. ↑ In symbolic logic, we never use a definition unless it meets the criteria of eliminability and non-creativity, which are explained in the outlines for those subjects, or, in the case of set
theory, unless there is a proof of a justifying theorem that does not include the definition itself as a premise.
326. ↑ Note that a style of argument does not necessarily constitute a method of proof and might even be logically invalid.
327. ↑ i.e., if the premises are true then the conclusion cannot be false.
328. ↑ As such, a deductive argument tends to move from the more general case to the more specific - i.e., by arriving at a particular conclusion by inference from one or more general or universal premises.
329. ↑ As such, an argument by definition is essentially axiomatic in character, although a logically true axiom will not necessarily result; see above: Axiom.
330. ↑ Therefore, universal instantiation is essentially an example of deductive reasoning.
331. ↑ Without the use of valid inductive reasoning, universal generalizations are not generally valid.
332. ↑ There exists a proof in Second-Order Predicate Calculus for why mathematical induction is as strong a form of reasoning as deductive logic.
333. ↑ A statistical syllogism is a weak inductive argument and, with some very significant exceptions, generally does not qualify as true mathematical induction.
334. ↑ Real or supposed.
335. ↑ Or other expert witness.
336. ↑ Usually weak inductive and predictive.
337. ↑ This style of argument can be distinguished from an argument by analogy in that an educated guess emphasizes experience and past empirical observation.
338. ↑ i.e., "is/are" and "is/are not."
339. ↑ See below, "Types of Categorical Propositions."
340. ↑ In the modern square of opposition there are no contraries, sub-contraries, or sub-alterns (as required in the square of opposition of traditional Aristotelian logic).
341. ↑ The names of syllogism types that result in the existential fallacy are italicized.
342. ↑ Excepting polysyllogisms and disjunctive syllogisms.
343. ↑ Notice that there are four terms: "fish", "fins", "goldfish" and "humans." Two premises aren't enough to connect four different terms since there must be one term common to both premises to
establish a connection.
344. ↑ In everyday reasoning, the fallacy of four terms occurs most frequently by equivocation - i.e., using the same word or phrase in each statement but with a different meaning each time, creating
a fourth term even though only three apparently distinct words or phrases are used.
Example: Nothing is better than eternal happiness. (major premise)
A ham sandwich is better than nothing. (minor premise)
A ham sandwich is better than eternal happiness. (conclusion)
The word "nothing" in the example above has two meanings: "nothing is better" means the thing being named has the highest value possible; "better than nothing" means the thing being described has
only marginal value. Therefore, "nothing" acts as two different terms, creating the fallacy of four terms.
345. ↑ A more tricky example of syllogistic equivocation is as follows:
Example: The hand touches the pen. (major premise)
The pen touches the paper. (minor premise)
Therefore, the hand touches the paper. (conclusion)
The fallacy is more clear if one uses "is touching" instead of "touches." It then becomes clear that "touching the pen" is not the same as "the pen," thus creating four terms: "the hand";
"touching the pen"; "the pen"; and "touching the paper." A valid form of this argument would then be as follows:
Example: The hand touches the pen. (major premise)
All that touches the pen also touches the paper. (minor premise)
Therefore, the hand touches the paper. (conclusion)
Now the term "the pen" has been eliminated, leaving three terms and correcting the logic of the syllogism.
346. ↑ Dicto Simpliciter syllogisms.
347. ↑ The truth of this proposition is proved in the First-Order Predicate outline.
348. ↑ It should be noted that, unlike entailment, indirect proof does not prove the universality of any condition; it only proves the possibility of a contrary circumstance, thereby disproving the
hypothesis.
349. ↑ An existence theorem may be called "pure" if the statement given does not also indicate the construction of whatever kind of object for which existence is asserted. From a more rigorous point
of view, this is a problematic concept. This is because, in these instances, "existence theorem" is merely a tag applied to a statement for which the "proof" is never unqualified. Hence, the term
"pure" is used in a manner that violates the standard "proof irrelevance" rule of mathematical theorems. That is, these "theorems" are in fact unproven statements of truth, at least in the formal
sense of the term "proof." Thus, many constructivist mathematicians who work in extended, predicate logics (such as intuitionistic logic, where pure existence statements are considered to be
intrinsically weaker than their constructivist counterparts) generally do not utilize non-constructive proofs, except for meta-definitional purposes. Thus, the use of the term "proof" to describe
these statements amounts to an informal misnomer.
350. ↑ This is essentially the same thing as an indirect proof except that it generally finds its application in non-empirical, purely "philosophical" disciplines, such as Meta-Logic. The fundamental
difference between an indirect proof and a non-constructive proof is that, whereas the possibility of the existence of a material thing (such as a quacking duck) may be proved directly by the
observation of its existence, a non-constructive proof only results in the proof of the reality of an idea as a mental construct without the certainty of knowing that the idea actually exists in
objective reality beyond the reality of the mental construct (see below, "Constructive Proof"). For example, we may assert the existence of the Axiom of Infinity, the Axiom of Choice, or the Law
of the Excluded Middle, and we might "prove" the existence of all these ideas non-constructively or indirectly, but we cannot actually prove their existence as logical objects except in a purely
a priori manner. Therefore, in the disciplines of math or symbolic logic, such concepts are usually posited as axioms requiring no other proof except the formal statement of their existence,
usually by way of a non-semantic symbolism. Hence, their "proof" is provided merely by the construction of the symbolism that represents the concept to be utilized.
351. ↑ The proof of the validity of these algorithms is the subject of formal Predicate Logic.
352. ↑ Proof by construction, or proof by example, is the construction of a concrete example with a property to show that something having that property exists. Joseph Liouville, for instance, proved
the existence of transcendental numbers by constructing an explicit example.
353. ↑ See above, "Non-Constructive Proof."
354. ↑ The number of cases sometimes can become very large. For example, the first proof of the "four color theorem" was a proof by exhaustion with 1,936 cases. This proof was controversial because
the majority of the cases were checked by a computer program, not by hand. The shortest known proof of the "four color theorem" today still has over 600 cases.
355. ↑ The latter type of reasoning can be called a 'plausibility argument' and is not a proof; this is clearly seen in the case of the Collatz Conjecture. Probabilistic proof, like proof by
construction, is one of many ways to state an existence theorem. Likewise, a "statistical proof" does not prove any proposition with certainty but only "proves" the proposition within a certain
range of error or "certainty."
356. ↑ Often a bijection between two sets is used to show that the expressions for their two sizes are equal. Alternatively, a double counting argument provides two different expressions for the size
of a single set, again showing that the two expressions are equal.
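A standard instance of footnote 356's double-counting argument is the handshake lemma; this small Python sketch (our own example) counts edge endpoints of a graph in two ways:
```python
# Each edge {u, v} contributes one endpoint to u's degree and one to
# v's, so summing degrees counts every edge exactly twice.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
assert sum(degree.values()) == 2 * len(edges)
```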
357. ↑ So as to distinguish it from predictive, or "weak," induction.
Piercing Ice
The subject of this article was removed from World of Warcraft in patch 4.0.1.
• This includes items and quests that can no longer be obtained.
• The in-game information in this article is kept purely for historical purposes.
Piercing Ice
• Increases the damage done by your Frost spells by X%.
• Class: Mage
• Location: Frost, Tier 2
• Affects: Frost damage
• Ranks: 3
• Points required: 5
• Spec specific: Yes
Piercing Ice is a passive Mage talent that increases all damage done by Frost spells and effects by 2% per rank, up to 6% at rank 3. The percentage is a direct multiplier on total frost damage
done, increasing overall DPS. This talent is found in most frost builds that rely on Frost spells for primary damage.
This damage multiplier stacks with another talent in the Frost tree, Arctic Winds, which increases Frost damage by up to 5% at max rank. Combined, the two talents increase total Frost damage by 11.3%
(bonuses are multiplied, not added).
This talent will affect the damage numbers which show in the tooltips of all frost spells.
Rank table
Rank Damage Increase
1 2%
2 4%
3 6%
The damage multiplier applies to total damage, including the base damage, bonus from spell damage gear, and any other damage multipliers. The following formula gives the total damage of a frost
spell, E, where B is the spell's base damage, c is the spell damage coefficient, d is the amount of spell damage the mage has, and X is the number of points in Piercing Ice. Critical strikes are not included in this formula.
E = (1.0 + 0.02*X)(B + c*d)
Arctic Winds is a second multiplier applied in a similar way. The following formula includes Y, the number of points in Arctic Winds.
E = (1.0 + 0.01*Y)(1.0 + 0.02*X)(B + c*d)
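These formulas are easy to sanity-check numerically. In the following sketch the variable names follow the formulas above, but the helper function and the sample numbers are our own, not taken from the game:
```python
def frost_damage(B, c, d, X=0, Y=0):
    """Expected non-crit frost spell damage with X points in Piercing
    Ice and Y points in Arctic Winds, per the formulas above."""
    return (1.0 + 0.01 * Y) * (1.0 + 0.02 * X) * (B + c * d)

base = frost_damage(B=600.0, c=0.8, d=1000.0)              # untalented
talented = frost_damage(B=600.0, c=0.8, d=1000.0, X=3, Y=5)
print(round(talented / base - 1.0, 3))  # 0.113 -> the +11.3% combined bonus
```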
Risk Sensitive Path Integral Control
Bart Broek and Bert Kappen
In: BNAIC 2010, 25 Oct- 26 Oct 2010, Luxemburg.
1 Introduction
The objective in conventional stochastic optimal control is to minimize an expected cost-to-go. Risk sensitive optimal control generalizes this objective by minimizing an expected exponentiated cost-to-go. Depending on its risk parameter θ, the expected exponentiated cost-to-go puts more emphasis on the mode of the distribution of the cost-to-go, or on its tail, and in that way allows for a modelling of more risk seeking (θ < 0) or risk averse (θ > 0) behaviour. The conventional optimal control can be viewed as a special case of risk sensitive optimal control with a risk neutral parameter θ = 0. Risk sensitive control was first considered in continuous space in the LEQG problem [1], which is the risk sensitive analogue of the Linear Quadratic Gaussian (LQG) problem.
Relations with other fields such as differential games and robust control have initiated a lot of interest for risk sensitive control. The dynamic programming (DP) principle provides a well-known
approach to a global solution in stochastic optimal control. In the continuous time and state setting that we will consider, it follows from the DP principle that the solution to the control problem
satisfies the so-called Hamilton-Jacobi-Bellman (HJB) equation, which is a second order nonlinear partial differential equation. If the dynamics is linear and the cost is quadratic in both state and
control, the HJB equation can be solved exactly, both for LQG and LEQG. Recently, a path integral formalism has been developed to solve the HJB equation. This formalism is applicable if (1) both the
noise and the control are additive to the (nonlinear) dynamics, (2) the cost is quadratic in the control (but arbitrary in the state), and (3) the noise satisfies certain additional conditions. Under
these conditions the nonlinear HJB equation can be transformed into a linear one, which can be solved by forward integration of a diffusion process [2]. This formalism contains LQG control as a
special case. In our full paper [3] we show how path integral control generalizes to risk sensitive control problems. The required conditions to apply path integral control in the risk sensitive case
are the same as those in the risk neutral setting. As a consequence, characteristics of path integral control, such as superposition of controls, symmetry breaking and approximate inference, carry
over to the setting of risk sensitive control.
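(Editorial aside, not from the paper: the role of the risk parameter is easy to see numerically. The sketch below compares the risk-neutral expected cost with the risk-sensitive objective (1/θ) log E[exp(θC)] on illustrative gamma-distributed cost samples; all values are assumptions.)

import numpy as np

rng = np.random.default_rng(0)
costs = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # sample costs-to-go, mean 2

def risk_sensitive_cost(c, theta):
    # (1/theta) * log E[exp(theta * c)]; reduces to E[c] as theta -> 0.
    if theta == 0.0:
        return c.mean()
    return np.log(np.mean(np.exp(theta * c))) / theta

for theta in (-1.0, 0.0, 0.5):
    print(theta, round(risk_sensitive_cost(costs, theta), 3))
# theta < 0 emphasizes the low-cost mode (risk seeking, ~1.39); theta = 0 is
# risk neutral (2.0); theta > 0 emphasizes the tail (risk averse, ~2.77).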
|
{"url":"http://eprints.pascal-network.org/archive/00007040/","timestamp":"2014-04-18T13:06:34Z","content_type":null,"content_length":"10281","record_id":"<urn:uuid:2081fabc-016d-4a61-8e19-7be6b883bbbd>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
probability theory (mathematics) :: Brownian motion process
The most important stochastic process is the Brownian motion or Wiener process. It was first discussed by Louis Bachelier (1900), who was interested in modeling fluctuations in prices in financial
markets, and by Albert Einstein (1905), who gave a mathematical model for the irregular motion of colloidal particles first observed by the Scottish botanist Robert Brown in 1827. The first
mathematically rigorous treatment of this model was given by Wiener (1923). Einstein’s results led to an early, dramatic confirmation of the molecular theory of matter in the French physicist Jean
Perrin’s experiments to determine Avogadro’s number, for which Perrin was awarded a Nobel Prize in 1926. Today somewhat different models for physical Brownian motion are deemed more appropriate than
Einstein’s, but the original mathematical model continues to play a central role in the theory and application of stochastic processes.
Let B(t) denote the displacement (in one dimension for simplicity) of a colloidally suspended particle, which is buffeted by the numerous much smaller molecules of the medium in which it is
suspended. This displacement will be obtained as a limit of a random walk occurring in discrete time as the number of steps becomes infinitely large and the size of each individual step
infinitesimally small. Assume that at times kδ, k = 1, 2,…, the colloidal particle is displaced a distance hX[k], where X[1], X[2],… are +1 or −1 according as the outcomes of tossing a fair coin are
heads or tails. By time t the particle has taken m steps, where m is the largest integer ≤ t/δ, and its displacement from its original position is B[m](t) = h(X[1] +⋯+ X[m]). The expected value of B[
m](t) is 0, and its variance is h^2m, or approximately h^2t/δ. Now suppose that δ → 0, and at the same time h → 0 in such a way that the variance of B[m](1) converges to some positive constant, σ^2.
This means that m becomes infinitely large, and h is approximately σ(t/m)^1/2. It follows from the central limit theorem (equation (12)) that lim P{B[m](t) ≤ x} = G(x/σt^1/2), where G(x) is
the standard normal cumulative distribution function defined just below equation (12). The Brownian motion process B(t) can be defined to be the limit in a certain technical sense of the B[m](t) as δ
→ 0 and h → 0 with h^2/δ → σ^2.
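(Editorial aside, not part of the article: this limit is easy to see in a short simulation. The sketch below builds scaled coin-flip random walks with h = σδ^1/2, so that h^2/δ = σ^2; the step size and path count are illustrative.)

import numpy as np

sigma, t = 1.0, 2.0
delta = 1e-3                       # time step; m = t/delta steps
m = int(t / delta)
h = sigma * np.sqrt(delta)         # chosen so h^2/delta -> sigma^2

rng = np.random.default_rng(1)
steps = rng.choice([-1.0, 1.0], size=(5_000, m))  # fair-coin increments X[k]
B_t = h * steps.sum(axis=1)        # B_m(t) = h(X[1] + ... + X[m])

print(B_t.mean(), B_t.var())       # approx 0 and sigma^2 * t = 2.0, as the CLT predicts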
The process B(t) has many other properties, which in principle are all inherited from the approximating random walk B[m](t). For example, if (s[1], t[1]) and (s[2], t[2]) are disjoint intervals, the
increments B(t[1]) − B(s[1]) and B(t[2]) − B(s[2]) are independent random variables that are normally distributed with expectation 0 and variances equal to σ^2(t[1] − s[1]) and σ^2(t[2] − s[2]), respectively.
Einstein took a different approach and derived various properties of the process B(t) by showing that its probability density function, g(x, t), satisfies the diffusion equation ∂g/∂t = D∂^2g/∂x^2,
where D = σ^2/2. The important implication of Einstein’s theory for subsequent experimental research was that he identified the diffusion constant D in terms of certain measurable properties of the
particle (its radius) and of the medium (its viscosity and temperature), which allowed one to make predictions and hence to confirm or reject the hypothesized existence of the unseen molecules that
were assumed to be the cause of the irregular Brownian motion. Because of the beautiful blend of mathematical and physical reasoning involved, a brief summary of the successor to Einstein’s model is
given below.
Unlike the Poisson process, it is impossible to “draw” a picture of the path of a particle undergoing mathematical Brownian motion. Wiener (1923) showed that the functions B(t) are continuous, as one
expects, but nowhere differentiable. Thus, a particle undergoing mathematical Brownian motion does not have a well-defined velocity, and the curve y = B(t) does not have a well-defined tangent at any
value of t. To see why this might be so, recall that the derivative of B(t), if it exists, is the limit as h → 0 of the ratio [B(t + h) − B(t)]/h. Since B(t + h) − B(t) is normally distributed with
mean 0 and standard deviation h^1/2σ, in very rough terms B(t + h) − B(t) can be expected to equal some multiple (positive or negative) of h^1/2. But the limit as h → 0 of h^1/2/h = 1/h^1/2 is
infinite. A related fact that illustrates the extreme irregularity of B(t) is that in every interval of time, no matter how small, a particle undergoing mathematical Brownian motion travels an
infinite distance. Although these properties contradict the commonsense idea of a function—and indeed it is quite difficult to write down explicitly a single example of a continuous,
nowhere-differentiable function—they turn out to be typical of a large class of stochastic processes, called diffusion processes, of which Brownian motion is the most prominent member. Especially
notable contributions to the mathematical theory of Brownian motion and diffusion processes were made by Paul Lévy and William Feller during the years 1930–60.
A more sophisticated description of physical Brownian motion can be built on a simple application of Newton’s second law: F = ma. Let V(t) denote the velocity of a colloidal particle of mass m. It is
assumed that m dV(t) = −fV(t)dt + dA(t). (18)
The quantity f retarding the movement of the particle is due to friction caused by the surrounding medium. The term dA(t) is the contribution of the very frequent collisions of the particle with
unseen molecules of the medium. It is assumed that f can be determined by classical fluid mechanics, in which the molecules making up the surrounding medium are so many and so small that the medium
can be considered smooth and homogeneous. Then by Stokes’s law, for a spherical particle in a gas, f = 6πaη, where a is the radius of the particle and η the coefficient of viscosity of the medium.
Hypotheses concerning A(t) are less specific, because the molecules making up the surrounding medium cannot be observed directly. For example, it is assumed that, for t ≠ s, the infinitesimal random
increments dA(t) = A(t + dt) − A(t) and A(s + ds) − A(s) caused by collisions of the particle with molecules of the surrounding medium are independent random variables having distributions with mean
0 and unknown variances σ^2 dt and σ^2 ds and that dA(t) is independent of dV(s) for s < t.
The differential equation (18) has the solution V(t) = V(0)e^−βt + m^−1∫[0,t] e^−β(t − s) dA(s), (19)
where β = f/m. From this equation and the assumed properties of A(t), it follows that E[V^2(t)] → σ^2/(2mf) as t → ∞. Now assume that, in accordance with the principle of equipartition of energy, the
steady-state average kinetic energy of the particle, m lim[t → ∞]E[V^2(t)]/2, equals the average kinetic energy of the molecules of the medium. According to the kinetic theory of an ideal gas, this
is RT/2N, where R is the ideal gas constant, T is the temperature of the gas in kelvins, and N is Avogadro’s number, the number of molecules in one gram molecular weight of the gas. It follows that
the unknown value of σ^2 can be determined: σ^2 = 2RTf/N.
If one also assumes that the functions V(t) are continuous, which is certainly reasonable from physical considerations, it follows by mathematical analysis that A(t) is a Brownian motion process as
defined above. This conclusion poses questions about the meaning of the initial equation (18), because for mathematical Brownian motion the term dA(t) does not exist in the usual sense of a
derivative. Some additional mathematical analysis shows that the stochastic differential equation (18) and its solution equation (19) have a precise mathematical interpretation. The process V(t) is
called the Ornstein-Uhlenbeck process, after the physicists Leonard Salomon Ornstein and George Eugene Uhlenbeck. The logical outgrowth of these attempts to differentiate and integrate with respect
to a Brownian motion process is the Ito (named for the Japanese mathematician Itō Kiyosi) stochastic calculus, which plays an important role in the modern theory of stochastic processes.
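(Editorial aside, not part of the article: equation (18) can be simulated with a simple Euler-Maruyama discretization, confirming the steady-state value of E[V^2(t)] stated above; all parameter values are illustrative.)

import numpy as np

beta, sigma_sq, mass = 5.0, 2.0, 1.0   # beta = f/mass; illustrative values
dt, n_steps, n_paths = 2e-3, 5_000, 5_000

rng = np.random.default_rng(2)
V = np.zeros(n_paths)
for _ in range(n_steps):
    dA = rng.normal(0.0, np.sqrt(sigma_sq * dt), size=n_paths)  # increments of A(t)
    V += -beta * V * dt + dA / mass    # mass*dV = -f*V dt + dA, with f = beta*mass

f = beta * mass
print(V.var(), sigma_sq / (2 * mass * f))  # steady-state E[V^2] -> sigma^2/(2mf) = 0.2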
The displacement at time t of the particle whose velocity is given by equation (19) is X(t) − X(0) = β^−1V(0)(1 − e^−βt) + A(t)/f − f^−1∫[0,t] e^−β(t − s) dA(s). (20)
For t large compared with 1/β, the first and third terms in this expression are small compared with the second. Hence, X(t) − X(0) is approximately equal to A(t)/f, and the mean square displacement, E
{[X(t) − X(0)]^2}, is approximately σ^2t/f^2 = RTt/(3πaηN). These final conclusions are consistent with Einstein’s model, although here they arise as an approximation to the model obtained from
equation (19). Since it is primarily the conclusions that have observational consequences, there are essentially no new experimental implications. However, the analysis arising directly out of
Newton’s second law, which yields a process having a well-defined velocity at each point, seems more satisfactory theoretically than Einstein’s original model.
|
{"url":"http://www.britannica.com/EBchecked/topic/477530/probability-theory/32789/Brownian-motion-process","timestamp":"2014-04-20T01:19:27Z","content_type":null,"content_length":"104490","record_id":"<urn:uuid:b755cc08-4885-4275-87de-13ac1c65c59b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inertial/non-inertial reference frames
I'm a bit unsure about the last couple of bits of this question, and I'm hoping someone might be able to help.
1. The problem statement, all variables and given/known data
a) Let a reference frame with origin O & Cartesian axes (x, y, z) be fixed relative to the surface of the rotating earth at co-latitude θ (i.e. 0≤θ≤π, where θ = 0 corresponds to the north pole).
Increasing x is east, increasing y is north & increasing z is upwards (opposite direction to gravity g). The earth is assumed to rotate steadily with angular velocity ω. Find the components of ω in
this frame of reference. Ignoring the centrifugal force, show that the motion of a particle of mass m under gravity is governed by
[itex]\ddot{x} − 2ω\dot{y} cos θ + 2ω\dot{z} sin θ = 0 [/itex]
[itex]\ddot{y}+ 2ω\dot{x} cos θ = 0 [/itex]
[itex]\ddot{z}− 2ω\dot{x} sin θ = −g [/itex]
where ω = |ω| and g = |g|. Assuming θ is constant, by integrating the second and third of these equations with respect to time and substituting into the first equation, show that
[itex]\ddot{x}+ 4ω^{2}x= 2ω(v_{0}cosθ-w_{0}sinθ)+2gωtsinθ [/itex]
where v[0] and w[0] are constants. Hence find the general solution for x.
b) If a particle falls from rest at O, find x as a function of t. The particle falls only for a brief time before it hits the ground, so that ωt is small throughout its motion. Use a series expansion
of solution for x in ωt to show, to leading order,
[itex]x =\frac{1}{3}gωt^{3} sin θ [/itex]
c) Explain briefly how an inertial observer would account for this eastward deflection
of the falling particle.
2. Relevant equations
[itex]m\textbf{a}=\textbf{F}-m\dot{\textbf{ω}}\times\textbf{r}-2m\textbf{ω}\times\dot{\textbf{r}}-m\textbf{ω}\times(\textbf{ω}\times\textbf{r})-m\textbf{A} [/itex]
3. The attempt at a solution
For a) I get ω = ωsinθ ŷ + ωcosθ ẑ, and using the equation above, with the fact that ω is constant and ignoring the centrifugal force, I get the three equations as stated. Integrating then gives
[itex]\ddot{x}+ 4ω^{2}x= 2ω(v_{0}cosθ-w_{0}sinθ)+2gωtsinθ [/itex].
solving this as a 2nd order ODE, I get complementary solution [itex]x=αcos(2ωt)+βsin(2ωt) [/itex] and particular solution [itex]x=\frac{gsinθ}{2ω}t+\frac{v_{0}cosθ-w_{0}sinθ}{2ω^{2}} [/itex]
so the general solution for x is these added together.
For b), I plugged in the initial values, at t=0, x=0, [itex]\dot{x}[/itex]=0 to get
[itex]x=\frac{v_{0}cosθ-w_{0}sinθ}{2ω^{2}}(1-cos(2ωt))+\frac{gsinθ}{2ω}\left(t-\frac{sin(2ωt)}{2ω}\right)[/itex]
and using the series expansions for sin and cos, I get for small t
[itex]1-cos(2ωt)≈2ω^{2}t^{2}[/itex] and [itex]t-\frac{sin(2ωt)}{2ω}≈\frac{2}{3}ω^{2}t^{3}[/itex],
which simplifies to [itex]x=(v_{0}cosθ-w_{0}sinθ)t^{2}+\frac{1}{3}gωsinθt^{3}[/itex]
The answer I'm supposed to get here is just the second term, but I'm not entirely sure if I've done this right. Can I just cancel the first term here as t is small?
I'm also not entirely sure what answer part c) is looking for. Is it anything to do with Coriolis?
I'd be grateful if anyone could shed a bit of light on this. Thanks!
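(Editorial aside, not part of the thread: the stated leading-order result can be cross-checked by integrating the three equations of motion directly. The sketch below assumes SciPy's solve_ivp is available; the fall time and co-latitude are illustrative.)

import numpy as np
from scipy.integrate import solve_ivp

omega, theta, g = 7.292e-5, np.pi / 4, 9.81  # Earth's rotation rate; co-latitude 45 deg

def rhs(t, s):
    # State s = (x, y, z, vx, vy, vz); the three equations from part a).
    x, y, z, vx, vy, vz = s
    ax = 2 * omega * vy * np.cos(theta) - 2 * omega * vz * np.sin(theta)
    ay = -2 * omega * vx * np.cos(theta)
    az = -g + 2 * omega * vx * np.sin(theta)
    return [vx, vy, vz, ax, ay, az]

t_fall = 5.0                                  # fall time; omega*t stays small
sol = solve_ivp(rhs, (0, t_fall), [0, 0, 0, 0, 0, 0], rtol=1e-10, atol=1e-12)
x_numeric = sol.y[0, -1]
x_leading = g * omega * t_fall**3 * np.sin(theta) / 3
print(x_numeric, x_leading)                   # eastward deflection, ~0.021 m each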
|
{"url":"http://www.physicsforums.com/showthread.php?t=631560","timestamp":"2014-04-18T08:18:59Z","content_type":null,"content_length":"41856","record_id":"<urn:uuid:50fe87aa-d463-47c3-951e-e7f533da5595>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Acta Physica Slovaca 49 (1999)
• K. Banaszek, K. Wódkiewicz
Nonlocality of the Einstein-Podolsky-Rosen state in the phase space
Acta Physica Slovaca 49, 491 (1999)
Abstract: We discuss violation of Bell inequalities by the regularized Einstein-Podolsky-Rosen (EPR) state, which can be produced in a quantum optical parametric down-conversion process. We
propose an experimental photodetection scheme to probe nonlocal quantum correlations exhibited by this state. Furthermore, we show that the correlation functions measured in two versions of the
experiment are given directly by the Wigner function and the Q function of the EPR state. Thus, the measurement of these two quasidistribution functions yields a novel scheme for testing quantum nonlocality.
• J. Bergou, M. Jakob, Y. Abranyos
Resonance fluorescence of a trapped four-level atom with bichromatic driving
Acta Physica Slovaca 49, 501 (1999)
Abstract: The RF spectrum of a bichromatically driven four-level atom is polarization dependent. Very narrow lines occur in the incoherent parts of the spectrum for polarization directions which
are different from those of the driving fields. The degree of squeezing has a maximum which should make it easily observable. The second-order correlation function exhibits antibunching for
zero time delay and strong superbunching for certain values of the interaction parameter and time delay. For these parameters resonant two-photon emission takes place in the form of polarization
entangled photon pairs. The system can be a novel source of photons in the EPR and/or Bell states. Some experiments will be proposed which make use of this unique source.
• G. M. D'Ariano
Group theoretical quantum tomography
Acta Physica Slovaca 49, 513 (1999)
Abstract: A general method is presented for estimating the ensemble average of all operators of an arbitrary quantum system from a set of measurements of a quorum of observables. The quorum--i.
e. a ``complete'' set of noncommuting observables for determining the quantum state of the system--is generated from a maximal commuting set of observables--the ``seed observables''--under the
action of a dynamical group of the quantum system. A method for deconvolving noise of any kind in the measurement is given in terms of the completely positive (CP) map pertaining the noise. This
approach leads to a group theoretical classification of physically realizable quantum tomographic machines. These are made of two devices: 1) a measuring apparatus for the seed observables; 2) a
transformation apparatus that achieves the dynamical group. Examples of applications are given in different physical contexts.
• K. M. Gheri, P. Törmä, P. Zoller
Quantum state engineering with photonic qubits
Acta Physica Slovaca 49, 523 (1999)
Abstract: We outline a scheme for the generation of a train of entangled single-photon wavepackets using standard CQED-techniques. The generated photons are transferred to the continuum outside
the resonator through cavity loss in the form of wavepackets each of which may be regarded as a logical qubit. We show that undesired decoherence effects can be efficiently reduced in the
considered scheme.
• M. Hillery, V. Buzek
Secret sharing via quantum entanglement
Acta Physica Slovaca 49, 533 (1999)
Abstract: Secret sharing is a procedure for splitting a message into several parts so that no single part is sufficient to read the message, but the entire set is. This procedure can be
implemented using either GHZ states or two-particle entangled states. In the quantum case the presence of an eavesdropper will introduce errors so that her presence can be detected. We also
discuss how quantum information can be split into parts so that the message can be reconstructed from a sufficiently large subset of the parts.
• A. G. Kofman, G. Kurizki
Decay control in dissipative quantum systems
Acta Physica Slovaca 49, 541 (1999)
Abstract: We point out that the quantum Zeno effect, i.e., inhibition of spontaneous decay by frequent measurements, is observable only in spectrally finite reservoirs, i.e., in cavities and
waveguides, using a sequence of evolution-interrupting pulses or randomly-modulated CW fields. By contrast, such measurements can only accelerate decay in free space.
• N. Lütkenhaus
Security of quantum cryptography with realistic sources
Acta Physica Slovaca 49, 549 (1999)
Abstract: The interest in practical implementations of quantum key distribution (QKD) is steadily growing. However, there is still a need to give a precise security statement which adapts to
realistic implementation. In this paper I give the effective key rate we can obtain in a practical setting within the scenario of security against individual attacks by an eavesdropper. It
illustrates previous results that high losses together with detector dark counts can make secure QKD impossible.
• S. Pascazio, P. Facchi
Modifying the lifetime of an unstable system by an intense electromagnetic field
Acta Physica Slovaca 49, 557 (1999)
Abstract: We study the temporal behavior of a three-level system (such as an atom or a molecule), initially prepared in an excited state, bathed in a laser field tuned at the transition frequency
of the other level. We analyze the dependence of the lifetime of the initial state on the intensity of the laser field. The phenomenon we discuss is related to both electromagnetic induced
transparency and quantum Zeno effect.
• A. K. Pati
The issue of phases in quantum measurement theory
Acta Physica Slovaca 49, 567 (1999)
Abstract: The issue of phases is always very subtle in the quantum world, and many curious phenomena are due to the existence of the phase of the quantum mechanical wave function. We
investigate the issue of phases in quantum measurement theory and predict a new effect of fundamental importance. We say a quantum system undergoes quantum Zeno dynamics (QZD) when the
unitary evolution of the system is interrupted by a sequence of measurements. In particular, we investigate the effect of repeated measurements on the geometric phase and show that
quantum Zeno dynamics can inhibit its development under a large number of measurement pulses. It is interesting to see that neither the total phase nor the dynamical phase goes to zero under a
large number of measurements. We call this new effect the ``quantum Zeno Phase effect'' (QZPE), in analogy to the quantum Zeno effect (QZE), where repeated measurements inhibit the
transition probability. The quantum Zeno Phase effect can be proved within von Neumann's collapse mechanism as well as using a continuous measurement model, so the effect is really
independent of any particular measurement model. Since the geometric phase attributes a memory to a quantum system, our results also prove that the path-dependent memory of a system
can be erased by a sequence of measurements. The QZPE provides a way to control and manipulate the phase of a wave function in an interference setup. Finally, we stress that the quantum Zeno
Phase effect can be tested using neutron, photon and atom interference experiments with presently available technology.
• E. A. Power, T. Thirunamachandran
Quantum electromagnetic fields in the neighbourhood of an atom
Acta Physica Slovaca 49, 579 (1999)
Abstract: Non-relativistic quantum electrodynamics is used to find the Heisenberg electric and magnetic field operators when a single atom perturbs the vacuum. The expectation values of these
fields for particular quantum states are found, and the differences associated with the choice between minimal and multipolar coupling are discussed. The effect of these fields on nearby atoms is
shown to give Casimir potentials. Finally an extension of the theory to allow for Roentgen currents- the source of which being moving dipoles- is made.
• S. Scheel, L. Knöll, D.-G.Welsch
Spontaneous decay in the presence of absorbing dielectric bodies
Acta Physica Slovaca 49, 585 (1999)
Abstract: We present a formalism for studying the influence of dispersive and absorbing dielectric bodies on a radiating atom in the framework of quantization of the phenomenological Maxwell
equations for given complex permittivities of the bodies. In Markov approximation, the rate of spontaneous decay and the line shift associated with it can then be related to the complex
permittivities and geometries of the bodies via the dyadic Green function of the classical boundary value problem of electrodynamics - a result which is in agreement with second-order
calculations for microscopic model systems. The theory is applied to an atom near a planar interface as well as to an atom in a spherical cavity. The latter, also known as the real-cavity model
for spontaneous decay of an excited atom embedded in a dielectric, is compared with the virtual-cavity model. Connections with other approaches are mentioned and the results are compared.
• R. Tanas
Atoms in a narrow-bandwidth squeezed vacuum
Acta Physica Slovaca 49, 595 (1999)
Abstract: Two possible descriptions of evolution of a two-level atom driven by a strong laser field and subjected to a squeezed vacuum with finite bandwidth are discussed. One is the master
equation approach in which the squeezed vacuum is treated as a Markovian reservoir to the atom, and the other is the coupled-systems (or cascaded-systems) approach in which the degenerate
parametric oscillator (DPO) producing squeezed vacuum is a part of the system. Examples of optical spectra obtained using both approaches are given.
• P. Törmä, D. Jaksch
Pairing of fermions in optical lattices
Acta Physica Slovaca 49, 605 (1999)
Abstract: We consider weakly interacting fermionic atoms in optical lattices. We show that the system can be described by the Hubbard model, and solve the BCS gap equations. Cooper-pairing is
shown to take place for parameter values which are obtainable for alkali atoms in optical lattices.
• S. Weigert
Discrete phase-space calculus for quantum spins based on a reconstruction method using coherent states
Acta Physica Slovaca 49, 613 (1999)
Abstract: To reconstruct a mixed or pure quantum state of a spin s is possible through coherent states: its density matrix is fixed by the probabilities to measure the value s along 4s(s+1)
appropriately chosen directions in space. Thus, after inverting the experimental data, the statistical operator is parametrized entirely by expectation values. On this basis, a symbolic calculus
for quantum spins is developed, the ``expectation-value representation.'' It resembles the Moyal representation for SU(2) but two important differences exist. On the one hand, the symbols take
values on a discrete set of points in phase space only. On the other hand, no quasi-probabilities--that is, phase-space distributions with negative values--are encountered in this approach.
• M. Zukowski, D. Kaszlikowski
Entanglement swapping with PDC sources
Acta Physica Slovaca 49, 621 (1999)
Abstract: We show that the possibility of distinguishing between single and two photon detection events is not a necessary requirement for the proof that recent operational realization of
entanglement swapping cannot find a local realistic description. We propose a simple modification of the experiment, which gives a richer set of interesting phenomena.
• G. Ariunbold, J. Perina, Ts. Gantsog, F. A. A. El-Orany
Two-mode correlated states in cavity with injected atoms
Acta Physica Slovaca 49, 627 (1999)
Abstract: We study a model of a lossless micromaser with two-level atoms interacting with a two-mode cavity field via two-photon transitions. We show that when the atoms are initially prepared in
a superposition state then there is an operation regime of the micromaser when the cavity field evolves into a two-mode squeezed vacuum.
• K. Banaszek
Maximum-likelihood algorithm for quantum tomography
Acta Physica Slovaca 49, 639 (1999)
Abstract: Optical homodyne tomography is discussed in the context of classical image processing. Analogies between these two fields are traced and used to formulate an iterative numerical
algorithm for reconstructing the Wigner function from homodyne statistics.
• K. Banaszek, C. Radzewicz, K. Wódkiewicz, J. S. Krasinski
Determination of the Wigner function from photon statistics
Acta Physica Slovaca 49, 643 (1999)
Abstract: We present an experimental realisation of the direct scheme for measuring the Wigner function of a single quantized light mode. In this method, the Wigner function is determined as the
expectation value of the photon number parity operator for the phase space displaced quantum state.
• C. Brukner, A. Zeilinger
Malus' law and quantum information
Acta Physica Slovaca 49, 647 (1999)
Abstract: The information content of the most elementary quantum system is represented by one single proposition. Therefore such an elementary system can only give a definite result in one
specific experimental arrangement. A change of experimental parameters then necessarily implies probabilistic measurement results in the new experimental arrangement. Assumption of the invariance
of the information content of a system upon change of the representation of our knowledge of the system together with homogeneity of the experimental parametric axis leads to the Malus' law in
quantum mechanics, the familiar sinusoidal relation between the probabilities and the laboratory parameters.
• J. Clausen , M. Dakna, L. Knöll, D.-G. Welsch
Conditional quantum state engineering at beam splitter arrays
Acta Physica Slovaca 49, 653 (1999)
Abstract: The generation of arbitrary single-mode quantum states from the vacuum by alternate coherent displacement and photon adding as well as the measurement of the overlap of a signal with an
arbitrarily chosen quantum state are studied. With regard to implementations, the transformation of the quantum state of a traveling optical field at an array of beam splitters is considered,
using conditional measurement. Allowing for arbitrary quantum states of both the input reference modes and the output reference modes on which the measurements are performed, the setup is
described within the concept of two-port non-unitary transformation, and the overall non-unitary transformation operator is derived. It is shown to be a product of operators, where each operator
is assigned to one of the beam splitters and can be expressed in terms of an s-ordered operator product, with s being determined by the beam splitter transmittance or reflectance. As an example
we discuss the generation of and overlap measurement with Schrödinger-cat-like states.
• G. M. D'Ariano, L. Maccone, M. G. A. Paris, M. F. Sacchi
Generation and measurement of nonclassical states by quantum Fock filter
Acta Physica Slovaca 49, 659 (1999)
Abstract: We study a novel optical setup which selects a specific Fock component from a generic input state. The device allows to synthesize number states and superpositions of few number states,
and to measure the photon distribution and the density matrix of a generic signal.
• G. Drobný, B. Hladký, V. Buzek
Synthesis of operators: universal quantum gates for a trapped ion
Acta Physica Slovaca 49, 665 (1999)
Abstract: We investigate physical implementations of universal quantum gates which perform arbitrary unitary transformations of unknown inputs. In particular, two approaches for synthesis of
arbitrary unitary operators acting on vibrational states of a trapped ion are considered.
• P. Facchi, A. Mariano, S. Pascazio
Wigner function and coherence properties of cold and thermal neutrons
Acta Physica Slovaca 49, 671 (1999)
Abstract: We analyze the coherence properties of a cold or a thermal neutron by utilizing the Wigner quasidistribution function. We look in particular at a recent experiment performed by Badurek
et al., in which a polarized neutron crosses a magnetic field that is orthogonal to its spin, producing highly non-classical states. The quantal coherence is extremely sensitive to the field
fluctuation at high neutron momenta. A ``decoherence parameter" is introduced in order to get quantitative estimates of the losses of coherence.
• P. Facchi, S. Pascazio
Berry phase due to quantum measurements
Acta Physica Slovaca 49, 677 (1999)
Abstract: The usual, ``static'' version of the quantum Zeno effect consists in the hindrance of the evolution of a quantum systems due to repeated measurements. There is however a ``dynamic''
version of the same phenomenon, first discussed by von Neumann in 1932 and subsequently explored by Aharonov and Anandan, in which a system is forced to follow a given trajectory. A Berry phase
appears if such a trajectory is a closed loop in the projective Hilbert space. A specific example involving neutron spin is considered and a similar situation with photon polarization is discussed.
• R. Filip
On the bistability of parametric generation process
Acta Physica Slovaca 49, 683 (1999)
Abstract: Non-equilibrium steady state transitions in nonlinear parametric generation process are analyzed. When driven by external coherent light in signal and idler beams, the parametric
generator exhibits a strongly bistable behaviour. Under certain circumstances, the bistabilities in signal and idler beams mutually compete. From presented analysis, it follows, the competition
can in principle be controlled with the input light signals in a way to implement some model of measurement device. In particular, we suggest operation of the nonlinear parametric generator as an
``all optical comparator'', analogous to routinely used electronic devices.
• J. Fiurásek, J. Krepelka, J. Perina
Quantum phase properties of Kerr couplers
Acta Physica Slovaca 49, 689 (1999)
Abstract: We use the concept of the phase space and the Husimi quasidistribution to study quantum phase properties of the optical fields propagating in Kerr couplers. Fourier coefficients of the
phase distributions are introduced and utilized to examine their spatial development. The collapses and revivals of the mean photon number oscillations between the two waveguides are due to the
bifurcation of the phase-difference probability distribution, which has a two-fold symmetry in the interval of collapse.
• U. Herzog
Decoherence due to statistically distributed jump-like events
Acta Physica Slovaca 49, 695 (1999)
Abstract: We investigate an interacting quantum system which is additionally subjected to jump-like events occurring at time instants that are distributed according to a given statistics.
Assuming that the latter can be described by a stationary renewal process, we consider a Poissonian and a regular distribution as well as a super-Poissonian one. To apply our method we study a
two-level system being resonantly driven by a classical field and undergoing jump-like phase decoherence (e.g. caused by quantum-nondemolition measurements of the level population). We obtain
analytical results for the steady state and for the quantum Zeno dynamics that illustrate the influence of the statistics. It turns out that a Poissonian distribution of the dephasing events is
still half as effective as a regular one in increasing the lifetime of the initial state.
• H. Kiesel, F. Hasselbach, T. Tyc, M. Lenc
Electron antibunching
Acta Physica Slovaca 49, 701 (1999)
Abstract: Two-electron correlation function is introduced and the basic property of multiparticle electron correlations -- antibunching -- is derived from its form. Two-particle correlations of
photons and electrons are compared as well as the influence of a Wien filter on one- and two-electron coherence.
• M. Koniorczyk, J. Janszky, Z. Kis
Three-photon states for quantum teleportation
Acta Physica Slovaca 49, 707 (1999)
Abstract: A three-particle generalization of the quantum teleportation of polarization states is discussed. A possible nonlinear optical process is discussed, which can lead the required
EPR-states. The Bell-state analysis of our three-photon Bell-states applying a beam-splitter and polarization analyzers is discussed.
• W. Leonski, R. Tanas
Finite energy states for periodically kicked nonlinear oscillator
Acta Physica Slovaca 49, 713 (1999)
Abstract: We study a nonlinear oscillator interacting with a one-mode cavity field. We assume, that the cavity is periodically kicked by a series of ultra-short coherent pulses. We show that for
a special choice of parameters the system evolution is restricted to a finite set of n-photon states. In consequence, the mean energy of the cavity remains finite despite the fact that the cavity
is continuously pumped. We study the properties of the cavity field showing that the field exhibits nonclassical features.
• A. Luks, V. Perinová
From the continuous measurement theory back to operator-valued processes
Acta Physica Slovaca 49, 719 (1999)
Abstract: We show that a continuous-time Hermitian operator-valued process is measured in the continuous measurement. We illustrate the utility of the eigenkets of this quantum process for the
explicit solution of the quantum stochastic equation describing the interaction between a field and a reservoir.
• S. Mancini
Stochastic control of quantum dynamics for trapped systems
Acta Physica Slovaca 49, 725 (1999)
Abstract: A stochastic control of the vibrational motion for a single trapped ion/atom is proposed. It is based on the possibility to continuously monitor the motion through a light field meter.
The output from the measurement process should be then used to modify the system's dynamics.
• J. Herec
Quantum statistics of two coupled down-convertors. Part I
Acta Physica Slovaca 49, 731 (1999)
Abstract: The quantum-statistical properties of light beams in a directional symmetric nonlinear coupler composed of two nonlinear waveguides operating by the down-conversion processes are
examined. By means of short-length approximation non-classical behaviour of single and compound modes in such a device is analyzed. Linear and nonlinear mismatches are taken into account.
• D. Mogilevtsev
Quantum statistics of two coupled down-convertors. Part II
Acta Physica Slovaca 49, 743 (1999)
Abstract: The scheme is proposed to perform the reconstruction of a multi-mode quantum state of light with help of non-ideal detectors able to test only presence or absence of photons.
• J. Rehácek, Z. Hradil, J. Perina, M. Zawisky, H. Rauch, S. Pascazio
Testing of operational phase concepts
Acta Physica Slovaca 49, 749 (1999)
Abstract: Various phase concepts may be treated as special cases of maximum likelihood estimation. For example, the discrete operational phase of Noh, Fougères and Mandel is obtained for
continuous Gaussian signals with phase-modulated mean. Although the Gaussian estimation gives a satisfactory approximation for fitting the phase distribution of almost any state, the optimal phase
estimation offers in certain cases a measurably better performance. This has been demonstrated in a neutron-optical experiment.
• C. Simon, G. Weihs, A. Zeilinger
Quantum cloning and signaling
Acta Physica Slovaca 49, 755 (1999)
Abstract: We discuss the close connections between cloning of quantum states and superluminal signaling. We present an optimal universal cloning machine based on stimulated emission
recently proposed by us. As an instructive example, we show how a scheme for superluminal communication based on this cloning machine fails.
• M. Suda
On decoherence in neutron interferometry
Acta Physica Slovaca 49, 761 (1999)
Abstract: Consistency concerning decoherence in neutron interferometry is achieved by using stochastic differential equations. In interferometry inhomogeneities of the density and/or of the
surface roughness of a phase shifter are of great influence to coherent beam superposition. The interferometric process is described by Wigner's quasi-probability which is a solution of the
diffusion equation.
• J. Skvarcek, M. Hillery
Phase distribution of the micromaser field with injected atomic coherence
Acta Physica Slovaca 49, 765 (1999)
Abstract: We present the solution for the phase distribution of the steady state micromaser field for the case with injected atomic coherence in the semiclassical approximation.
• A. Wünsche
Realizations of SU(1,1) by boson operators with application to phase states
Acta Physica Slovaca 49, 771 (1999)
Abstract: A class of realizations of the abstract Lie algebra su(1,1) in the basis (K-,K0,K+) by one-mode boson operators is derived. It corresponds to the unitary irreps (irreducible
representations) of SU(1,1) with a state of lowest weight which are characterized by a number k>0. The SU(1,1) coherent states to these irreps are discussed and it is shown that they are
eigenstates of a non-Hermitean operator. For each k>0, there exists a countable number of subdivisions of the Fock space spanned by the basis vectors with fixed values and . The same is true for
the realizations of the Heisenberg-Weyl algebra in the Fock space by basis operators . The coherent phase states are discussed as an example of SU(1,1) coherent states. Some of their properties
are related to the unorthodox integer function for which the first 4 pairs of its complex conjugated zeros are determined. The phase-optimized states are discussed and it is found that they
hardly can be accepted as really ``phase-optimized''. The roots of the failure to find a Hermitean phase operator are found already in classical mechanics in a grave topological defect of the
transition from canonical coordinates to action-angle coordinates as a canonical transformation in the coordinate origin.
• A. Napoli, A. Messina
Quantum superpositions of two coherent states generation based on a single-atom conditional measurement
Acta Physica Slovaca 49, 783 (1999)
Abstract: A new and simple way of engineering quantum superpositions of two coherent states of a single-mode quantized electromagnetic field is presented. Our proposal, developed in the context
of micromaser theory, exploits the passage of one atom only through a high-Q bimodal cavity supporting two electromagnetic modes of different frequencies.
|
{"url":"http://www.physics.sk/aps/pub.php?y=1999&pub=aps-99-04","timestamp":"2014-04-19T01:46:38Z","content_type":null,"content_length":"32850","record_id":"<urn:uuid:76f3889d-dc3f-467c-be0b-b5a895954a0b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the actual meaning of a fractional derivative?
We're all used to seeing differential operators of the form $\left(\frac{d}{dx}\right)^n$ where $n\in\mathbb{Z}$. But it has come to my attention that this generalises to all complex numbers, forming a field
called fractional calculus which apparently even has applications in physics!
These derivatives are defined as fractional iterates. For example, $\left(\left(\frac{d}{dx}\right)^{1/2}\right)^2 = \frac{d}{dx}$, or $\left(\left(\frac{d}{dx}\right)^i\right)^i = \left(\frac{d}{dx}\right)^{-1}$.
But I can't seem to find a more meaningful definition or description. The derivative means something to me; these just have very abstract definitions. Any help?
Please read the FAQ. Regarding your question, this is standard undergraduate material, for example see: en.wikipedia.org/wiki/Fourier_transform and look up the equation for the Fourier transform
of an iterated derivative. – Ryan Budney Apr 20 '10 at 3:57
I understand that it must be frustrating to see a question that seems too low-level posted. Before posting this question, I tried to do due diligence by researching it and asking several math grad
students and a PhD in industry (who hadn't heard of it before!). Perhaps you could expand on what qualifies as a "research level math question"? Additionally, thinking about a fractional
derivative in the indirect manner you describe seems suboptimal, further defending the validity of asking for a more meaningful definition. (I hadn't heard of it this way beforehand, but..) –
Christopher Olah Apr 20 '10 at 4:54
Wikipedia has the heuristics of the definition, a more or less conventional definition and tons of references. Google finds quite a bit of information, too. – Mariano Suárez-Alvarez♦ Apr 20 '10 at
Wikipedia's explanation of the heuristics, while explaining the idea behind it (fractional iterate) and giving lots of useful information, doesn't provide a nice interpretation. Similarly with all
the other content I found... – Christopher Olah Apr 20 '10 at 5:25
There is a lovely little book on this subject whose entire thesis is to answer the question you've just asked. It's called "An Introduction to the Fractional Calculus and Fractional Differential
Equations" by Miller and Ross. I think it's fairly cheap on amazon – Dylan Wilson Aug 6 '10 at 7:33
2 Answers
I understand where Ryan's coming from, though I think the question of how to interpret fractional calculus is still a reasonable one. I found this paper to be pretty neat,
though I have no idea if there are any better interpretations out there.
http://people.tuke.sk/igor.podlubny/pspdf/pifcaa_r.pdf
Thank you. This looks good and I've started reading it. – Christopher Olah Apr 20 '10 at 5:26
If the original poster is satisfied, that everything should be ok. However, I find this approach of giving a 'physical interpretation' of a purely mathematical idea slightly misleading.
You can give a physical meaning to complex numbers, sure, but their mathematical meaning is far more interesting and compelling; I would rather speak of an application to physics.
As to fractional derivatives, they become quite easy to understand if you think that the Fourier transform takes the derivative of a function into multiplication by the variable: $\widehat{f'}=i\xi\cdot \hat f$.
So higher order derivatives can be defined as multiplication of $\hat f$ by powers of $\xi$, and it is no wonder that you can use this idea to define fractional
derivatives, or actually generic 'functions of $d/dx$'. This leads to pseudodifferential operators etc.
The main reason why this idea is not just a game but on the contrary is enormously useful, also in physics, is that using this kind of calculus you can give explicit (well, almost)
expressions to fundamental things such as solutions to differential equations, and manipulate or estimate them in a very effective way.
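(Editorial aside, not part of the answer: the Fourier-multiplier definition above is a few lines of NumPy. This is a minimal sketch assuming a periodic, zero-mean function sampled on a uniform grid; the function and order below are arbitrary examples.)

import numpy as np

def fractional_derivative(f_samples, alpha, L=2 * np.pi):
    # Spectral fractional derivative of order alpha for a periodic function
    # sampled at n uniform points on [0, L): multiply \hat f by (i*xi)^alpha.
    n = len(f_samples)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # Fourier variable
    multiplier = (1j * xi) ** alpha              # principal branch of (i*xi)^alpha
    multiplier[0] = 0.0                          # drop the zero mode (assumes zero mean)
    return np.fft.ifft(multiplier * np.fft.fft(f_samples)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(x)
half = fractional_derivative(f, 0.5)
# Applying the half-derivative twice should recover the ordinary derivative cos(x):
print(np.max(np.abs(fractional_derivative(half, 0.5) - np.cos(x))))  # ~1e-13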
The probably "physically convincing" part of differintegrals for me was when they were applied to simplify the PDEs that frequently crop up in diffusion problems. "Fractional
Differential Equations" by Podlubny (the same guy who wrote the paper cited above) shows how it's done. – J. M. Aug 6 '10 at 9:41
|
{"url":"http://mathoverflow.net/questions/21929/what-is-the-actual-meaning-of-a-fractional-derivative/21933","timestamp":"2014-04-18T21:55:40Z","content_type":null,"content_length":"63473","record_id":"<urn:uuid:ba4bd773-6452-4320-b770-b3146a18ad1d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Department of Astronomy and Astrophysics | Ph.D. Thesis Defenses: 2007
Ph.D. Thesis Defenses: 2007
Date Talk Title Speaker
May 30, 2007 Modified gravity as dark energy Ignacy Sawicki
July 5, 2007 The Climate Dynamics of Titan Jonathan Mitchell
July 27, 2007 Cosmic Microwave Background Analysis for CAPMAP and Future Experiments Kendrick Smith
August 24, 2007 A Catalog of Slow-Moving Objects Extracted from the SDSS: Compilation and Andrew Puckett
October 29, 2007 Scatter in the Galaxy Cluster Mass-Observable Relations Douglas Rudd
Modified gravity as dark energy
May 30, 2007 | Wayne Hu
Ignacy Sawicki
We study the effects of introducing modifications to general relativity ("GR") at large scales as an alternative to exotic forms of matter required to replicate the observed cosmic
acceleration. We survey the effects on cosmology and solar-system tests of Dvali-Gabadadze-Porrati ("DGP") gravity, f ( R ) gravity, and modified-source gravity ("MSG"). Beyond the changes to the background expansion history of the universe,
these modifications have substantial impact on structure formation and its observable predictions.
For DGP, we develop a scaling approximation for the behaviour of perturbations off the brane, for which the predicted integrated Sachs-Wolfe ("ISW") effect is much stronger than observed,
requiring new physics at around horizon scale to bring it into agreement with data. We develop a test based on cross-correlating galaxies and the ISW effect which is independent of the initial
power spectrum for perturbations and is a smoking-gun test for DGP gravity.
For f ( R ) models, we find that, for the expansion history to resemble that of Lambda-CDM, it is required that the second derivative of f with respect to R be non-negative. We then find the
conditions on f ( R ) which allow this subset of models to pass solar-system tests. Provided that gravity behave like GR in the galaxy, these constraints are weak. However, for a model to
allow large deviations from GR in the cosmology, the galactic halo must differ significantly from that predicted by structure evolution in GR. We then discuss the effect that these models have
on structure formation, and find that even in the most conservative of models, percent-level deviations in the matter power spectrum will exist and should be detectable in the future.
Finally, for MSG, we investigate the cosmology of a theory of gravity with a modified constraint structure. The acceleration era can be replicated in these models; however, linear
perturbations become unstable as the universe begins to accelerate. Once the perturbations become non-linear, the model reverts to GR, regaining stability. This leaves a significant imprint on
structure-formation probes, but one which we cannot calculate in the linear approximation.
The Climate Dynamics of Titan
July 5, 2007 | AAC 123 | 1:00 PM
Jonathan Mitchell
We study the climate dynamics of Titan by developing a hierarchy of planetary climate models and theories. We begin with a one-dimensional radiative- convective model of Titan's atmosphere
including the greenhouse and antigreenhouse effects and a generalized moist convection scheme. Our simulations indicate the thermodynamics of methane evaporation and condensation play
fundamental roles in establishing deep, precipitating convection while maintaining surface energy balance with the weak solar forcing at Titan's surface.
We then derive an extension to a steady, analytic theory for the large-scale circulation of an atmosphere and apply the theory to Titan. The theory predicts Titan's meridional overturning
circulation, or Hadley cell, spans the globe. Titan's Hadley cell tends to eliminate latitudinal temperature gradients, which is consistent with the observed weak equator-to-pole surface
temperature gradients. We expect Titan's Hadley cell to globally converge moisture into the large-scale updraft and suppress convection everywhere else; resulting cloud patterns should appear
sparse and isolated in latitude.
We then study the seasonal cycle in a zonally symmetric general circulation model of Titan's climate with an unlimited surface supply of methane. This model produces condensation consistent
with the position and timing of observed clouds, but only with the thermodynamic effect of methane condensation and evaporation included. The large-scale circulation in our simulations
latitudinally oscillates with season, which in the annual mean dries the low- latitude surface. However, self-consistent drying of the surface requires an accounting of the methane reservoir.
Finally, we present zonally symmetric general circulation model simulations with a soil model for the lower boundary and a finite reservoir of methane. Due to annual-mean moisture divergence
of the oscillating large-scale circulation, more than 50 m of liquid methane is removed from the low-latitude surface and deposited at mid and high latitudes. Simulations with total reservoir
depth below 50 m completely dry the low latitude surface. All simulations with the soil model produce condensation at positions and times consistent with observed clouds.
Cosmic Microwave Background Analysis for CAPMAP and Future Experiments
July 27, 2007 | LASR conference room | 11:00 AM | Wayne Hu
Kendrick Smith
A major frontier for cosmology in the coming decade will be making precision measurements of the cosmic microwave background (CMB) polarization, complementing existing measurements of the CMB
temperature anisotropies. The E- mode, or gradient-like component, of CMB polarization will break parameter degeneracies from CMB temperature alone and improve constraints on reionization
history and initial conditions in the standard cosmological model. The B-mode, or curl-like component will permit strong constraints on growth of structure from CMB lensing, and probe new
physics by measuring the gravity wave content of the early universe.
In the first half of this thesis, we describe design and implementation of the analysis pipeline for the 2005 observing season of CAPMAP, an experiment to measure CMB polarization on small
angular scales using coherent polarimeters and the Lucent 7 meter telescope in Crawford Hill, New Jersey. Although the results of the analysis are not completely finalized, we present partial
results obtained from the data, and full results for a full-season simulation, in order to illustrate the measurement that will be obtained.
The CAPMAP analysis pipeline uses a likelihood formalism which is computationally expensive, but results in measurement uncertainties which are provably optimal. An optimal analysis will be
computationally infeasible for upcoming generations of CMB polarization experiments, in which the problem size will be larger by several orders of magnitude. Therefore, fast approximate
methods have been proposed. In the second half of this thesis, we show that in their originally proposed form, these methods fail to preserve the E-B decomposition, and that this failure
ultimately acts as a limiting source of noise when measuring B-modes; we propose modifications which solve this problem.
A Catalog of Slow-Moving Objects Extracted from the SDSS: Compilation and
August 24, 2007 | AAC 123 | 10:00 AM | Richard G. Kron
Andrew Puckett
I have compiled the Slow-Moving Object Catalog of Known minor planets and comets ("the SMOCK") by comparing the predicted positions of known bodies with those of sources detected by the Sloan
Digital Sky Survey (SDSS) that lack positional counterparts at other survey epochs. For the ~50% of the SDSS footprint that has been imaged only once, I have used the Astrophysical Research
Consortium's 3.5-meter telescope to obtain reference images for confirmation of Solar System membership.
The SMOCK search effort includes all known objects with orbital semimajor axes a > 4.7 AU, as well as a comparison sample of inherently bright Main Belt asteroids. In fact, objects of all
proper motions are included, resulting in substantial overlap with the SDSS Moving Object Catalog (MOC) and providing an important check on the inclusion criteria of both catalogs. The MOC
does not contain any correctly-identified known objects with a > 12 AU, and also excludes a number of detections of Main Belt and Trojan asteroids that happen to be moving slowly as they enter
or leave retrograde motion.
The SMOCK catalog is a publicly-available product of this investigation. Having created this new database, I demonstrate some of its applications. The broad dispersion of color indices for
transneptunian objects (TNOs) and Centaurs is confirmed, and their tight correlation in ( g - r ) vs ( r - i ) is explored. Repeat observations for more than 30 of these objects allow me to
reject the collisional resurfacing scenario as the primary explanation for this broad variety of colors. Trojans with large orbital inclinations are found to have systematically redder colors
than their low-inclination counterparts, but an excess of reddish low-inclination objects at L5 is identified. Next, I confirm that non-Plutino TNOs are redder with increasing perihelion
distance, and that this effect is even more pronounced among the Classical TNOs. Finally, I take advantage of the byproducts of my search technique and attempt to recover objects with
poorly-known orbits. I have drastically improved the current and future ephemeris uncertainties of 3 Trojan asteroids, and have increased by 20%-450% the observed arcs of 10 additional bodies.
Scatter in the Galaxy Cluster Mass-Observable Relations
October 29, 2007 | AAC 123 | 3:00 PM | Andrey V. Kravtsov
Douglas Rudd
We use numerical simulations of cosmological structure formation to study the distribution and evolution of galaxy clusters. We employ simulations in both WMAP1 and WMAP3 cosmologies, and
simulated with and without the physics of galaxy formation, resulting in a sample of nearly 300 galaxy clusters spanning two decades in mass at the present epoch. We show that the mass
weighted temperature of the intracluster medium and the Sunyaev-Zel'dovich (SZ) Compton Y integrated within a radius enclosing 500 times the critical density of the universe each correlates
strongly with total cluster mass, and the mean relation is reasonably well described by a simple self-similar model. These relations exhibit remarkably little scatter (10-15%) independent of
cluster mass and redshift. We find that the distribution of these quantities about the best fit scaling relations is not well fit by a log-normal distribution, but instead exhibits significant
positive kurtosis. Additionally, we find that the residual from the best fit mass-temperature relation correlates with halo temperature, indicating a connection between halo merger history and
the properties of the cluster gas. These results have significant implication for the ability of future SZ galaxy cluster surveys to self-calibrate the mass- observable relations and thus
constrain cosmological parameters.
|
{"url":"http://astro.uchicago.edu/events/phd-thesis-defense_2007.php","timestamp":"2014-04-18T20:43:55Z","content_type":null,"content_length":"45783","record_id":"<urn:uuid:1a273112-871c-4af0-a816-2815ac1ade91>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Voltage Doubler
By Cliff Bockard and Ryan Sherry
The purpose of this experiment is to determine the relationship of an input to its corresponding output for an unknown circuit. The circuit to be analyzed is shown in Figure-1.
To determine what the circuit in Figure-1 does, we applied a 4 volt sinusoidal input and obtained a trace of the output on an oscilloscope. To build the circuit we used 1N4007 p-n diodes. The
capacitor values for the circuit are C1 = C2 = 10 pF. The output of the circuit is shown in Figure-2.
The circuit puts out a DC voltage equal to almost twice the peak value of the AC input voltage. The reason the output is DC and not AC is because of the capacitor C2. Once diode D2 turns on, the
voltage across the capacitor is just a constant DC voltage. Notice in Figure-2 that the DC output is not quite the 8 volts expected from the 4 volt input. The explanation for this is the turn-on
voltage of D2. The turn-on voltage for the Si 1N4007 is about 0.7 V, and notice that the output voltage is about 7.2 volts, very close to 8 V − 0.7 V. From this, a better name for this circuit would be a
voltage doubler.
Figure-3 is a plot of the transfer function (Vout vs. Vin) of Circuit 3. The Y-axis represents the output voltage while the X-axis is Vin. We see that as the AC input voltage sweeps from -4V to +4V
the circuit produces an output of +8V DC. While experimenting with variations of the input voltage, we found that the transfer function shifted upwards on the Y-axis and widened along the X-axis as
we increased the amplitude of the input AC signal.
From the experimental data that we have collected, we determined that Circuit 3 of Lab 5 is a voltage doubler. The doubled voltage is, however, a DC voltage produced from an AC input, as explained above. It is important to realize that this circuit does not solve the energy crisis: since we are doubling the voltage, we reduce the available current by about one half, as given by P = VI.
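As a quick sanity check on the measured output, here is a minimal Python sketch of the single-diode-drop model used above; the 0.7 V figure is the assumed silicon turn-on voltage, not a measured constant.

# Ideal voltage-doubler estimate: Vout ~ 2*Vpeak - Vdiode
V_PEAK = 4.0    # peak of the AC input, in volts
V_DIODE = 0.7   # assumed Si diode turn-on voltage, in volts

v_out = 2 * V_PEAK - V_DIODE
print(v_out)    # 7.3 V, close to the 7.2 V seen on the oscilloscope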
Click here to mail any questions or comments about this circuit.
|
{"url":"http://www.eg.bucknell.edu/~ee222/lab5/group05/","timestamp":"2014-04-17T16:10:48Z","content_type":null,"content_length":"3331","record_id":"<urn:uuid:46ab4296-dcb0-4585-9653-4d1aadbac5d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
could someone explain what equinumerous means in mathematical Theoretical course?!
the definition states " two sets S and T are called equinumerous, and we write S ~ T, if there exists a bijective function from S onto T". Bijective definition is "a function is bijective if it
is surjective and injective". I'm asked to "Prove that if (S \ T) is equinumerous to (T \ S), then S is equinumerous to T" how do i do that?
What exactly do you mean by the S\T and T\S? Are those set minuses?
set S without T
So there are the same number of elements in S that aren't in T as there are elements in T that aren't in S. Let \(U=S\cap T\). So \(S\setminus T=S\setminus (S\cap T)\). Then \(S=S\setminus T +U
\). Now let \(f:S\setminus T\to T\setminus S\) such that \(f\) is a bijection. Now define\[g:S\setminus T+U\longrightarrow T\setminus S+U\]by\[ g(s)=\begin{cases} f(s)\qquad s\in S\setminus T \\
s\qquad \;\;\;\;\,s\in U \end{cases}\]
Note that if we restrict \(g\) to only the domain \(U\), we get a bijective function since \(g(g(s))=s\). I.e., \(g\) has an inverse. Similarly, if we restrict \(g\) to only the domain \(S\
setminus T\), then we have a bijective function since \(f\) is bijective. So \(g\) is bijective over its whole domain, and is therefore a bijective function. Finally, since \(S\setminus T+U=S\)
and \(T\setminus S+U=T\), \(S\) and \(T\) are equinumerous.
Did that all make sense?
oh yes thank you!!
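For finite sets, the construction in the answer above can be checked concretely. Here is a small Python sketch (the example sets are made up) that builds g by pasting a bijection f : S\T -> T\S together with the identity on U = S ∩ T, then verifies that g is a bijection from S onto T.

S = {1, 2, 3, 4}
T = {3, 4, 5, 6}

U = S & T                                    # common part, mapped by the identity
f = dict(zip(sorted(S - T), sorted(T - S)))  # any bijection S\T -> T\S

def g(s):
    return f[s] if s in f else s             # f on S\T, identity on U

image = {g(s) for s in S}
assert image == T and len(image) == len(S)   # g is onto T and injective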
|
{"url":"http://openstudy.com/updates/511b0369e4b06821731a8777","timestamp":"2014-04-16T19:33:57Z","content_type":null,"content_length":"45500","record_id":"<urn:uuid:a77e54eb-e7f4-483e-a273-dd3f25d39c20>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MASS - Colloquia 2004
Lectures are given in 122 Pond building.
Thursday, September 16
Professor Mark Levi (Penn State)
2:30 p.m.
Riemann Mapping Theorem via Steepest Descent
ABSTRACT : Conformal mappings of the plane are the ones which map infinitesimal squares to infinitesimal squares. The Riemann Mapping Theorem is a fundamental fact which states, roughly speaking, that any simply connected domain in the plane can be conformally mapped to a disk. We give an (apparently new) proof of the Riemann Mapping Theorem using the idea of steepest descent.
Thursday, September 23
Professor Robert Connelly (Cornell University)
2:30 p.m.
Volume and area formulas.
ABSTRACT : Heron's formula gives the area of a triangle in terms of the lengths of its edges, and there is a similar formula for the volume of a tetrahedron in terms of the lengths of its edges. In 1995 I. Sabitov showed that for any triangulated surface in three-space, there is a polynomial that is satisfied by the volume bounded by the surface, and its coefficients are themselves polynomials in the lengths of the edges of the triangulated surface. This is related to another polynomial satisfied by the area enclosed by a polygon whose vertices lie on a circle in the plane, which was studied by David Robbins before he died. These polynomials are interesting, but they can be very complicated, with a degree that is exponential in the number of vertices.
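As a concrete instance of the simplest case mentioned above, here is a short Python sketch of Heron's formula; the example edge lengths are arbitrary.

import math

def heron_area(a, b, c):
    """Area of a triangle from its edge lengths (Heron's formula)."""
    s = (a + b + c) / 2          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))      # right triangle: 6.0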
Thursday, September 30
Professor Andrew Belmonte (W.G. Pritchard Laboratories, The Pennsylvania State University)
2:30 p.m.
The dynamics of thin flexible things: shaking and breaking
ABSTRACT : The dynamics of a deformable continuum—say a solid or a fluid—can be considerably simplified mathematically by considering situations in which one of the dimensions is much smaller than the others. However, this restriction can introduce its own complications. I will discuss some surprising experimental observations which we have made in the lab, which have led us from classical mechanics of strings and rods to fundamental questions involving knots and fragmentation.
Thursday, October 7
Professor Andre Toom (Universidade Federal de Pernambuco, Brazil)
2:30 p.m.
Spontaneous symmetry breaking in a 1-D process with variable length.
ABSTRACT : For a long time it was a common opinion among physicists that phase transitions are impossible in one-dimensional systems. For example, Section 152 of Landau and Lifshitz's “Statistical Physics” was called “The impossibility of the existence of phases in one-dimensional systems”, and an argument of physical nature was presented in support of this impossibility. However, mathematical objects are very general and may violate physical intuition. This is one of several attempts to show the possibility of phases in 1-D systems. We present a 1-D random particle process with uniform local interaction, which displays some form of spontaneous symmetry breaking, that is, a non-symmetric distribution under symmetric rules. Particles, enumerated by integer numbers, interact at every step of the discrete time only with their nearest neighbors. Every particle has two possible states, called minus and plus. At every time step two transformations occur. The first one turns every minus into plus with probability $\beta$ and every plus into minus with probability $\gamma$, independently from what happens at other places, where $\beta + \gamma \le 1$. Under the action of the second one, whenever a plus is a left neighbor of a minus, both disappear with probability $\alpha$, independently from the fate of other places. If $\beta$ is small enough by comparison with $\alpha$ and we start with “all minuses”, the minuses remain a majority forever. If $\gamma$ is small enough by comparison with $\alpha$ and we start with “all pluses”, the pluses remain a majority forever. Therefore, if $\beta = \gamma$ are small enough by comparison with $\alpha$, we have spontaneous symmetry breaking. If, in addition, $\alpha < 1/8$, we have at least two different invariant measures.
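A minimal Python sketch of one time step of this process (variable length, since annihilated pairs are removed) might look like the following; the parameter values and the finite starting configuration are illustrative assumptions, not material from the talk.

import random

ALPHA, BETA, GAMMA = 0.5, 0.01, 0.01   # illustrative values only

def step(state):
    # First transformation: independent flips minus->plus (prob BETA)
    # and plus->minus (prob GAMMA).
    flipped = ['+' if (s == '-' and random.random() < BETA) else
               '-' if (s == '+' and random.random() < GAMMA) else s
               for s in state]
    # Second transformation: each plus that is the left neighbor of a
    # minus disappears together with that minus, with probability ALPHA.
    out, i = [], 0
    while i < len(flipped):
        if (i + 1 < len(flipped) and flipped[i] == '+'
                and flipped[i + 1] == '-' and random.random() < ALPHA):
            i += 2              # the pair annihilates
        else:
            out.append(flipped[i])
            i += 1
    return out

state = ['-'] * 200             # start from "all minuses"
for _ in range(100):
    state = step(state)
print(state.count('-'), state.count('+'))   # minuses should remain a majority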
Thursday, October 21
Professor Anatoly Vershik (St. Petersburg State University and Steklov Institute)
2:30 p.m.
What does the limit shape mean in geometry and combinatorics?
ABSTRACT : Consider a configuration in the plane or in the 2-D lattice that grows in time following certain rules. The problem is to describe the limit shape of the configuration after a very long time. Another question of this type: what is a typical shape of a convex lattice polygon?
Thursday, October 28
Professor Vladimir Retakh (Rutgers University)
10:10 a.m.
How many roots does a matrix polynomial equation have?
ABSTRACT : Everybody knows how many roots a quadratic equation x² + px + q = 0 has over the field of complex numbers. Not everybody knows how many roots this equation may have over the ring of complex matrices. In fact, the number of roots may be equal to 0, 1, …, ∞. I am going to discuss this and other related results in my talk.
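To see how a matrix quadratic can have infinitely many roots, take p = 0 and q = -1, i.e. X² = I: every 2x2 matrix [[a, b], [c, -a]] with a² + bc = 1 is a root. A quick numerical check in Python (the values of a below are arbitrary; this is an illustration, not material from the talk):

import numpy as np

def root_of_x_squared_equals_identity(a, b=1.0):
    """Return a 2x2 matrix X with X @ X = I, one for every choice of a."""
    c = (1 - a ** 2) / b            # enforce a^2 + b*c = 1
    return np.array([[a, b], [c, -a]])

for a in (0.0, 2.0, -3.5):          # a one-parameter family of distinct roots
    X = root_of_x_squared_equals_identity(a)
    assert np.allclose(X @ X, np.eye(2))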
Thursday, October 28
Professor Arek Goetz (San Francisco State University)
2:30 p.m.
The dynamics and geometry of microscopic structures in piecewise rotations.
ABSTRACT : Let f(x): [0,1] -> [0,1] be the fractional part of (x + t). Iteration of f(x) results in dynamics we understand: the sequence x, f(x), f(f(x)), ... takes only finitely many values if t is a rational number, and it is infinite and uniformly distributed if t is irrational. The map f(x) is discontinuous at x = 1 - t; it exchanges two intervals, [0, 1-t) and [1-t, 1). This is an example of the simplest interval exchange transformation. Such maps, when the number of exchanged intervals is greater than two, have been extensively studied, partially due to their connection with rational billiards. Piecewise rotations are two-dimensional generalizations of interval exchanges. In this multimedia talk, we will invite the audience to take a tour of fractal structures arising from the action of piecewise rotations. These structures are produced on a computer using rigorous algorithms with roots in basic algebraic number theory. We propose open questions and make available a rigorous computer package for later use in the exciting process of discovery. Examples of piecewise rotations include exchanges of two triangles, or the pizza map. The pizza map T rearranges a finite number of cones (pizza slices) and then T acts as a translation on all pieces. The resulting orbit behavior includes familiar behavior from dimension one, as well as a rich and tantalizing structure of polygons, sets whose iteration never breaks into smaller pieces. A computer zoom on this structure unravels a new landscape of dynamical and geometric phenomena. Unlike in dimension one, here we often observe many periodic domains. The key to begin understanding the dynamics of piecewise rotations is to investigate return actions to smaller domains. However, unlike in the one-dimensional case, where the number of pieces that come back to an interval is finite, in two-dimensional dynamics such a number may be infinite. If, for example, the return action looks like the original map, just smaller, then we are very lucky and can conclude that the map gives rise to a fractal. Often the return actions are very complicated, and in order to keep track of details we introduce numbers in cyclotomic fields, that is, sets of rational polynomial expressions in roots of unity. Using such a tool allows us to prove new rigorous results.
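A few lines of Python are enough to see the one-dimensional picture described at the start of the abstract; the choices of t below are just examples (t = 1/4 is used for the rational case because it is exact in binary floating point).

from math import sqrt

def orbit(x, t, n):
    """Iterate f(x) = fractional part of (x + t), n times."""
    points = []
    for _ in range(n):
        x = (x + t) % 1.0
        points.append(x)
    return points

print(len(set(orbit(0.0, 0.25, 1000))))          # rational t = 1/4: 4 distinct points
print(len(set(orbit(0.0, sqrt(2) - 1, 1000))))   # irrational t: 1000, all distinct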
Thursday, November 4
Professor Richard Schwartz (University of Maryland)
2:30 p.m.
Experiments with triangular billiards.
ABSTRACT : A billiard path on a triangle describes the trajectory taken by a frictionless and infinitely small billiard ball as it rolls around on a billiard table shaped like the triangle. A
periodic billiard path is one which endlessly repeats itself. Amazingly, it is not known whether every triangle has a periodic billiard path. For acute triangles the affirmative result has been known since the late 1770's; for right triangles the affirmative result was established in the 1990's independently by Holt and by Galperin-Stepin-Vorobets. Not much is known about the obtuse case. In my talk I will
demonstrate a computer program I wrote, which searches for periodic billiard paths in triangles. I will demonstrate, at least experimentally, how every triangle with angles less than 100 degrees has
a periodic billiard path and I will discuss how one converts the numerical evidence from the plots into a rigorous mathematical proof. If you want to see the program in advance of the talk, check out
the "billiards" link on my website:
Thursday, November 11
Professor Walter Neumann (Columbia University)
2:30 p.m.
Polynomials and Knots.
ABSTRACT : There are deep unsolved problems relating to polynomials, already for polynomials in two variables. The most famous such problem is the Jacobian Conjecture, giving a conjectural characterization of polynomial changes of coordinates. The talk will describe some of these questions, as well as connections with knot theory.
Thursday, November 18
Professor Mariusz Lemanczyk (Torun, Poland)
2:30 p.m.
On the filtering problem of stationary processes.
ABSTRACT : A signal (which is meant to be a stationary stochastic process $(X_n)$, $n \in \mathbb{Z}$) is sent through a communication channel. We assume that a noise (another stationary process $(Y_n)$) is present, so, as an output, we obtain a process $(Z_n)$ which is a function of the processes $(X_n)$ and $(Y_n)$. Can we reconstruct the process $(X_n)$? By that we mean a procedure allowing us to get $(X_n)$ back from $(Z_n)$. Assuming joint stationarity of all the processes under consideration, we will show how this problem leads to some pure ergodic theory questions. To have a chance for a positive solution of the filtering problem we need to assume that the two processes $(X_n)$ and $(Y_n)$ are different; they have to be at least independent. We will present some partial solutions (due to Furstenberg) of the filtering problem, under some integrability assumptions on the processes, in the case $Z_n = X_n + Y_n$. However, even in this simple case the non-integrable case remains open. At the end of the lecture I will present a full (positive) solution of the filtering problem when the time takes values in the group $\mathbb{Z}^2$, i.e., in the case of random $\mathbb{Z}^2$-actions.
{"url":"http://www.math.psu.edu/mass/colloquia/2004/","timestamp":"2014-04-20T15:55:52Z","content_type":null,"content_length":"19398","record_id":"<urn:uuid:fff6589c-744a-4e01-b8f4-22146d81e8f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is a term?
April 13th 2011, 07:49 AM
What is a term?
a term is either a single number or variable, or the product of several numbers or variables separated from another term by a + or - sign in an overall expression. For example, in 3 + 4x + 5yzw
3, 4x, and 5yzw are all terms. -wikipedia
But it says either a single number...or the product of the several numbers.
So does that mean that 4 is a term, x is a term, and y,z,and w are also terms? In that case how does it affect things like the associative property, does it not allow for an anarchic rewriting of
the original expression?
April 13th 2011, 08:15 AM
The 5yzw etc. are all terms, as is the 3.
The individual components of each term are either constants, such as the 5, or variables, like the x.
If you have studied polynomials then you are familiar with the expression "terms of a polynomial".
However, as can be seen here (Arithmetic and Geometric Sequences), a term can be a combination of elements...
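A hedged illustration: if you have Python with the sympy library available, you can ask it to split an expression into its top-level terms, which matches the definition quoted above (the variable names are the ones from the example).

from sympy import symbols

x, y, z, w = symbols('x y z w')
expr = 3 + 4*x + 5*y*z*w

print(expr.args)   # (3, 4*x, 5*w*y*z): the three terms, order may vary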
April 30th 2012, 07:13 AM
Re: What is a term?
But what would make the elements of a term not terms too? Or are they terms?
|
{"url":"http://mathhelpforum.com/algebra/177714-what-term-print.html","timestamp":"2014-04-20T12:05:10Z","content_type":null,"content_length":"4616","record_id":"<urn:uuid:200e10a8-7c3c-43a4-8735-19750de820cc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Date Subject Author
4/20/04 cube root of a given number vsvasan
4/20/04 Re: cube root of a given number A N Niel
4/20/04 Re: cube root of a given number Richard Mathar
7/14/07 Re: cube root of a given number Sheila
7/14/07 Re: cube root of a given number amzoti
7/14/07 Re: cube root of a given number quasi
7/14/07 Re: cube root of a given number arithmeticae
7/16/07 Re: cube root of a given number Gottfried Helms
7/16/07 Re: cube root of a given number Iain Davidson
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number Iain Davidson
7/21/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number Iain Davidson
7/22/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number Iain Davidson
7/23/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number Iain Davidson
7/24/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number gwh
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
8/6/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number semiopen
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number semiopen
7/26/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number semiopen
7/26/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/28/07 Re: cube root of a given number arithmetic
7/28/07 Re: cube root of a given number Iain Davidson
8/5/07 Re: cube root of a given number arithmeticae
8/5/07 Re: cube root of a given number Iain Davidson
8/6/07 Re: cube root of a given number arithmetic
8/6/07 Re: cube root of a given number Iain Davidson
8/6/07 Re: cube root of a given number arithmeticae
8/7/07 Re: cube root of a given number Iain Davidson
8/7/07 Re: cube root of a given number mike3
8/10/07 Re: cube root of a given number arithmetic
8/10/07 Re: cube root of a given number Iain Davidson
8/11/07 Re: cube root of a given number r3769@aol.com
8/11/07 Re: cube root of a given number Iain Davidson
8/11/07 Re: cube root of a given number r3769@aol.com
8/11/07 Re: cube root of a given number Iain Davidson
8/11/07 Re: cube root of a given number r3769@aol.com
8/12/07 Re: cube root of a given number Iain Davidson
8/17/07 Re: cube root of a given number r3769@aol.com
8/12/07 Re: cube root of a given number arithmetic
8/13/07 Re: cube root of a given number Iain Davidson
8/24/07 Re: cube root of a given number arithmetic
8/28/07 Re: cube root of a given number narasimham
1/10/13 Re: cube root of a given number ... Milo Gardner
8/28/07 Re: cube root of a given number arithmetic
8/28/07 Re: cube root of a given number Iain Davidson
8/7/07 Re: cube root of a given number mike3
8/7/07 Re: cube root of a given number Iain Davidson
8/10/07 Re: cube root of a given number arithmetic
8/10/07 Re: cube root of a given number arithmetic
7/28/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/16/07 Re: cube root of a given number Proginoskes
7/21/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number Proginoskes
7/22/07 Re: cube root of a given number Virgil
7/22/07 Re: cube root of a given number Proginoskes
7/23/07 Re: cube root of a given number arithmetic
7/23/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number Proginoskes
7/16/07 Re: cube root of a given number gwh
7/17/07 Re: cube root of a given number Iain Davidson
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number pomerado@hotmail.com
7/25/07 Re: cube root of a given number orangatang1@googlemail.com
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=5878642","timestamp":"2014-04-17T07:49:00Z","content_type":null,"content_length":"152820","record_id":"<urn:uuid:c70f9938-23e7-4e68-8bf2-28cf73316acf>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
16 search hits
Particle ratios from AGS to RHIC in an interacting hadronic model (2004)
Detlef Zschiesche Gebhard Zeeb Kerstin Paech Horst Stöcker Stefan Schramm
Abstract: The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) σ-ω approach. The commonly adopted
non-interacting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, but without taking into account any interactions. In contrast, the
chiral SU(3) model predicts temperature and density dependent effective hadron masses and effective chemical potentials in the medium and a transition to a chirally restored phase at high
temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase transition behaviour, are investigated. We show that if a chiral phase
transition occured in those collisions, freezing of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur
in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those
obtained in simple non-interacting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free gas approximation. The in-medium masses turn out to differ by up to 150 MeV from their vacuum values.
Impact of baryon resonances on the chiral phase transition at finite temperature and density (2004)
Detlef Zschiesche Gebhard Zeeb Stefan Schramm Horst Stöcker
We study the phase diagram of a generalized chiral SU(3)-flavor model in mean-field approximation. In particular, the influence of the baryon resonances, and their couplings to the scalar and
vector fields, on the characteristics of the chiral phase transition as a function of temperature and baryon-chemical potential is investigated. Present and future finite-density lattice
calculations might constrain the couplings of the fields to the baryons. The results are compared to recent lattice QCD calculations and it is shown that it is non-trivial to obtain,
simultaneously, stable cold nuclear matter.
In-medium vector meson masses in a chiral SU(3) model (2003)
Detlef Zschiesche Amruta Mishra Stefan Schramm Horst Stöcker Walter Greiner
A significant drop of the vector meson masses in nuclear matter is observed in a chiral SU(3) model due to the effects of the baryon Dirac sea. This is taken into account through the summation of baryonic tadpole diagrams in the relativistic Hartree approximation. The appreciable decrease of the in-medium vector meson masses is due to the vacuum polarisation effects from the nucleon sector and is not observed in the mean field approximation.
Effects of Dirac sea polarization on hadronic properties : a Chiral SU(3) approach (2003)
Amruta Mishra K. Balazs Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner
Abstract: The effect of vacuum fluctuations on the in-medium hadronic properties is investigated using a chiral SU(3) model in the nonlinear realization. The effect of the baryon Dirac sea is seen to modify hadronic properties and, in contrast to a calculation in mean field approximation, it is seen to give rise to a significant drop of the vector meson masses in hot and dense matter. This effect is taken into account through the summation of baryonic tadpole diagrams in the relativistic Hartree approximation (RHA), where the baryon self energy is modified due to interactions with both the non-strange (σ) and the strange (ζ) scalar fields.
Space-time evolution and HBT analysis of relativistic heavy ion collisions in a chiral SU(3) x SU(3) model (2002)
Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner
The space-time dynamics and pion-HBT radii in central heavy ion collisions at CERN-SPS and BNL-RHIC are investigated within a hydrodynamic simulation. The dependence of the dynamics and the HBT parameters on the EoS is studied with different parametrizations of a chiral SU(3) sigma omega model. The self-consistent collective expansion includes the effects of effective hadron masses, generated by the nonstrange and strange scalar condensates. Different chiral EoS show different types of phase transitions and even a crossover. The influence of the order of the phase transition and of the latent heat on the space-time dynamics and pion-HBT radii is studied. A small latent heat, i.e. a weak first-order chiral phase transition, or a smooth crossover lead to distinctly different HBT predictions than a strong first-order phase transition. A quantitative description of the data, both at SPS energies as well as at RHIC energies, appears difficult to achieve within the ideal hydrodynamic approach using the SU(3) chiral EoS. A strong first-order quasi-adiabatic chiral phase transition seems to be disfavored by the pion-HBT data from CERN-SPS and BNL-RHIC.
Particle ratios at RHIC : effective hadron masses and chemical freeze-out (2002)
Detlef Zschiesche Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) σ-ω approach. The commonly adopted noninteracting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, but without taking into account any interactions. In contrast, the chiral SU(3) model predicts temperature and density dependent effective hadron masses and effective chemical potentials in the medium and a transition to a chirally restored phase at high temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase transition behaviour, are investigated. We show that if a chiral phase transition occurred in those collisions, freezing of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those obtained in simple noninteracting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free gas approximation. The in-medium masses turn out to differ by up to 150 MeV from their vacuum values.
Nuclei, superheavy nuclei, and hypermatter in a chiral SU(3) model (2001)
Christian Beckmann Panajotis Papazoglou Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner
A model based on chiral SU(3)-symmetry in nonlinear realisation is used for the investigation of nuclei, superheavy nuclei, hypernuclei and multistrange nuclear objects (so called MEMOs). The
model works very well in the case of nuclei and hypernuclei with one Lambda-particle and rules out MEMOs. Basic observables which are known for nuclei and hypernuclei are reproduced
satisfactorily. The model predicts Z=120 and N=172, 184 and 198 as the next shell closures in the region of superheavy nuclei. The calculations have been performed in self-consistent relativistic
mean field approximation assuming spherical symmetry. The parameters were adapted to known nuclei.
Superheavy nuclei in a chiral hadronic model (2000)
Christian Beckmann Panajotis Papazoglou Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner
Superheavy nuclei are investigated in a nonlinear chiral SU(3)-model. The proton number Z=120 and neutron numbers of N=172, 184 and 198 are predicted to be magic. The charge distributions and
alpha-decay chains hint towards a hollow structure.
Hadrons in dense resonance matter: a chiral SU(3) approach (2000)
Detlef Zschiesche Panajotis Papazoglou Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
A nonlinear chiral SU(3) approach including the spin-3/2 decuplet is developed to describe dense matter. The coupling constants of the baryon resonances to the scalar mesons are determined from the decuplet vacuum masses and SU(3) symmetry relations. Different methods of mass generation show significant differences in the properties of the spin-3/2 particles and in the nuclear equation of state.
Critical review of quark gluon plasma signals (2000)
Detlef Zschiesche Lars Gerland Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
Compelling evidence for a new form of matter has been claimed to be formed in Pb+Pb collisions at SPS. We critically review two suggested signatures for this new state of matter: first, the suppression of the J/psi, which should be strongly suppressed in the QGP by two different mechanisms, the color-screening [1] and the QCD-photoeffect [2]; secondly, the measured particle ratios, in particular strange hadronic ones, which might signal the freeze-out from a quark-gluon phase.
|
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Stefan+Schramm%22/start/0/rows/10/author_facetfq/Detlef+Zschiesche/sortfield/year/sortorder/desc","timestamp":"2014-04-18T13:27:39Z","content_type":null,"content_length":"50561","record_id":"<urn:uuid:a8469a2f-e5ef-4f8f-8681-681ac4eddf32>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The relationship between the shape of a parachute and its drop velocity
1. For this science fair project, the independent variable is the shape of the parachutes' canopies. The dependent variable is the drop speed of the parachutes: how long does it take for the parachutes to reach the ground? Measure this using a stopwatch. The constants (control variables) are the surface area of the canopies, the weight of the nails, and the height from which the parachutes are dropped.
2. Cut a triangle, a square, a rectangle and a circle from 4 plastic bags. Each shape should have an area of 500 square centimeters. Calculate how long the sides and diameters
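The text breaks off mid-calculation here, but a small Python sketch like the following gives the side lengths and diameter for 500 square-centimeter canopies. The equilateral triangle and the 2:1 rectangle are assumptions, since the proportions are not specified.

import math

AREA = 500.0   # cm^2, the common canopy area

square_side   = math.sqrt(AREA)                      # ~22.4 cm
circle_diam   = 2 * math.sqrt(AREA / math.pi)        # ~25.2 cm
triangle_side = math.sqrt(4 * AREA / math.sqrt(3))   # equilateral: ~34.0 cm
rect_short    = math.sqrt(AREA / 2)                  # 2:1 rectangle: ~15.8 x 31.6 cm

print(square_side, circle_diam, triangle_side, rect_short)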
|
{"url":"http://www.all-science-fair-projects.com/project1312_57.html","timestamp":"2014-04-17T15:34:25Z","content_type":null,"content_length":"18121","record_id":"<urn:uuid:29b1f4be-e814-4747-83af-f2648ba1a167>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Machine rate problem
Machine rate problem [#permalink] 07 Jun 2011, 13:00
teebumble:
Hi All,
Can you please help me solve this problem?
Machine A @ constant rate 9,000/H
Machine B @ constant rate 7,000/H
If both together they produce 100,000 in 8 hours, then what is the minimum number of hours machine B needs to run?
The answer was at least 4 hours.
This is the equation I think is correct: 9000/A + 7000/B = 100,000/8. However, I don't know how to get another equation since I have 2 unknowns.
Re: Machine rate problem [#permalink] 07 Jun 2011, 15:17
walker:
It seems that something is wrong with your question. A&B together have a 16,000/h rate and over 8h it will be 128,000 (not 100,000). Or am I missing something?
Re: Machine rate problem [#permalink] 07 Jun 2011, 17:27
maverick04:
We have to find out the minimum number of hours that machine B HAS to run. We have the output for 8 hrs for both machines (nothing is given about the number of hours each runs). Let's assume machine A runs for the full 8 hrs: it produces 72,000 units. Now the remaining 28,000 have to be produced by B, which it would do in 28,000/7,000 = 4 hrs. I would slightly tweak the wording in the problem to say "if both of them operate simultaneously or otherwise for 8 hrs..." (but I guess it would make the question far too
Re: Machine rate problem [#permalink] 08 Jun 2011, 00:51
l0rrie:
So you're saying B HAS to run for at least 4 hours? But what if A runs less than 8 hrs? Is it given that it has to run for at least 7 hours? I think then the answer would change, unless the result has to be an integer multiple of both rates. I was actually thinking the same thing as walker about the combined rate. Anyone?
Re: Machine rate problem [#permalink] 08 Jun 2011, 01:26
maverick04:
The question asks the minimum number of hours B should run. We have a given output and number of hours. A can run for a maximum of 8 hrs, and the shortfall in the output would be covered by B.
I agree the language of the question can be improved, but this is what I think given the problem at hand...
Posted from my mobile device
Re: Machine rate problem [#permalink] 08 Jun 2011, 08:59
teebumble:
I think maverick04 is correct. I see how you solved the problem now. Thanks so much. Sorry for wording the question so badly. I got it off an online problem-generating website and I am unable to regenerate the same problem. I wrote down the problem in short-hand style, so I don't remember the exact wording. Again, thanks so much for all your help.
Re: Machine rate problem [#permalink] 08 Jun 2011, 17:15
maverick04:
Anytime mate
Re: Machine rate problem [#permalink] 10 Jun 2011, 20:40
puneetj:
16,000*x + 9,000*(8 - x) = 100,000
so x will be 4 hrs
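A quick Python check of puneetj's equation (here x is the number of hours both machines run together, and A alone covers the remaining 8 - x hours):

# 16000*x + 9000*(8 - x) = 100000  =>  7000*x = 28000
x = (100000 - 9000 * 8) / (16000 - 9000)
print(x)   # 4.0: machine B must run for at least 4 hours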
Re: Machine rate problem [#permalink] 11 Aug 2011, 05:20
A+B in 1 hr produce 9 + 7 = 16k
in 8 hrs = 128k
but we need only 100k in 8 hrs, so we have 28k extra. Let's give B some rest.
28k/7k = 4 hrs
so B will rest for 4 hours and, thus, will work for 4 hrs (8 - 4)
Re: Machine rate problem [#permalink] 11 Aug 2011, 23:25
100 - 9*8 = 28 (in thousands)
thus 7*4 = 28
hence 4 hrs
|
{"url":"http://gmatclub.com/forum/machine-rate-problem-114850.html?fl=similar","timestamp":"2014-04-17T01:14:49Z","content_type":null,"content_length":"186692","record_id":"<urn:uuid:8f2b88a8-a9db-45b3-bf90-beffb9c33c2f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EvenOdd in Agda, Idris, Haskell, Scala
— 2014-01-23
A while ago I blogged about using Agda to prove the parity of added numbers. I've recently been doing some work on Idris and wondered how easy it would be to translate my Agda proof to Idris.
The original Agda code looked something like this:
module EvenOdd where
open import Data.Nat
data Even : ℕ → Set where
evenZero : Even 0
evenSuc : {n : ℕ} → Even n → Even (suc (suc n))
_e+e_ : {n m : ℕ} → Even n → Even m → Even (n + m)
evenZero e+e b = b
evenSuc a e+e b = evenSuc (a e+e b)
The direct Idris translation looks like:
module EvenOdd
data Even : Nat -> Type where
evenZ : Even Z
evenS : Even n -> Even (S (S n))
ee : Even n -> Even m -> Even (n + m)
ee evenZ m = m
ee (evenS n) m = evenS (ee n m)
The few differences:
• We don't have to import a Nat type
• Totality is not the default (there is a flag to make it so, though)
• We can't define mixed letter and symbol operators
Pretty easy! Time for something trickier. Now, I haven't done very much type level Haskell but I wanted to see how easy it would be to translate to the recent GHC 7.8 release.
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}
module EvenOdd where
data Nat = Z | S Nat
data Even :: Nat -> * where
EvenZ :: Even Z
EvenS :: Even n -> Even (S (S n))
type family Plus (n :: Nat) (m :: Nat) :: Nat
type instance Plus Z m = m
type instance Plus (S n) m = S (Plus n m)
ee :: Even n -> Even m -> Even (Plus n m)
ee EvenZ m = m
ee (EvenS n) m = EvenS (ee n m)
Getting a bit trickier. We've had to do the following:
• Enable data type promotion, so that data types can be kinds
• Enable type families, so we can write a type level functions
• Define our own Nat and use that as a kind
• Define our own Plus type level function (the type family)
• Get totality results as warnings (or errors via -Wall)
The few problems are caused by Haskell's distinction between values, types and kinds. Everything else looks extremely similar - we've been lucky to fall into an area where the GHC data kinds
extension works really well and we can promote our simple Nat type to a kind.
Let's step it right up. Now let's encode this in Scala. Are you ready?
package org.brianmckenna.evenodd

sealed trait Nat
trait Z extends Nat
case object Z extends Z
case class S[N <: Nat](n: N) extends Nat

sealed trait Even[N <: Nat]
trait EvenZ extends Even[Z]
case object EvenZ extends EvenZ
case class EvenS[N <: Nat](n: Even[N]) extends Even[S[S[N]]]

object Even {
  implicit val evenZ = EvenZ
  implicit def evenS[N <: Nat](implicit even: Even[N]) = EvenS[N](even)
}

sealed trait Plus[N <: Nat, M <: Nat] {
  type Result <: Nat
}

object Plus {
  type Aux[N <: Nat, M <: Nat, R <: Nat] = Plus[N, M] {
    type Result = R
  }

  implicit def plusZ[M <: Nat] = new Plus[Z, M] {
    type Result = M
  }

  implicit def plusS[N <: Nat, M <: Nat](implicit plus: Plus[N, S[M]]) = new Plus[S[N], M] {
    type Result = plus.Result
  }
}

object ee {
  def apply[N <: Nat, M <: Nat, R <: Nat](n: Even[N], m: Even[M])(implicit sum: Plus.Aux[N, M, R], re: Even[R]) = re
}
Now, this probably looks pretty verbose since we have to define our own type level Nat and Plus function. The type level Plus uses Scala's path-dependent types.
What's interesting is that our theorem is expressed as a constraint: given Even[N] and Even[M], we can construct an Even[N + M] from an implicit. What we've given up is a constructive proof that every combination of two even numbers always results in an even number (but we know from our previous proofs that it's true) - specifically, we can't tell if the implicit re can be found for all values. We could fix this by splitting the implicit up into each case but we can't get totality checking.
Hopefully this gives a very quick feel for programming at the type level in Agda, Idris, Haskell and Scala. Type level programming in Agda and Idris is just as easy as programming at the value level.
Type level programming in Haskell and Scala is a bit annoying, since we have to write functions very differently at the type level and value level, but it's impressive that we can achieve our goal in much
more widely used languages.
Thanks to Miles Sabin for help with simplifying the Scala version.
|
{"url":"http://brianmckenna.org/blog/evenodd_agda_idris_haskell_scala","timestamp":"2014-04-20T08:50:39Z","content_type":null,"content_length":"7730","record_id":"<urn:uuid:27b8d445-6d79-432e-ac98-649f5ff8ecd1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fixed digits
We are accustomed to writing numbers using the number of digits needed and no more. However we can accept that adding zeroes to the front of a number has no effect on the value: when a car odometer
shows "001,234" we can agree that this is identical to the number 1,234. This is relevant to our discussion of binary because we are concerned with digital electronics and microprocessor systems,
where every digit requires some supporting hardware. So we generally are not just talking about a binary number, but a binary number with a particular number of digits.
In computers we very frequently deal with "bytes," which are groups of exactly eight binary digits. When we write a binary value represented by a byte, then, we always write eight digits no matter
how big the value actually is. So our example of 42 (decimal) would be written...
00101010
...while a number like 1,234 (decimal) doesn't fit and cannot be represented in a single byte. Just like an odometer, there is a point of overflow when the number you see is smaller than the number actually counted.
A byte can represent any number from 0 to 255 (decimal).
Negative values
So far I have only talked about representing positive integer numbers in binary. But what if you want to represent a negative number? It would be reasonable to express -42 (decimal) as -101010
(binary), and to represent that in electronics with an extra digital state to indicate the sign of the number. However when dealing with a fixed number of binary digits, the more common
representation is called "two's complement."
I already likened a fixed number of digits to a car odometer. What would happen if you could step the odometer backwards? You might count down to 000,002, then 000,001, then 000,000... and you know
what would come next: the counter would wrap around ("underflow") to 999,999. The same thing happens in binary with a fixed number of digits: if you counted down in byte values you would see
00000010, 00000001, 00000000... and then wrap around to 11111111.
So for instances where we want to represent a signed value, we make a rule that says that any set of digits where the highest digit is "1" actually represents a negative number. By this rule, a
signed byte can only represent positive values from 0 (00000000) through 127 (01111111). When the sign "digit" is set, the value is an underflow; take the positive integer represented if the value
were not signed, and subtract the next power of 2 beyond the number of digits. That is, to interpret the signed eight-digit value "11111111," you first treat it as a positive integer (255 decimal),
then subtract the next power of 2 (256) to yield -1. This makes sense; we got 11111111 by taking one away from zero, and you would hope that would mean -1!
There is another way to convert to and from signed values. A binary "complement" is when, for a given number, you swap all the zeroes for ones and all the ones for zeroes. "Two's complement" is when
you complement all the digits and then add one. So, going from 1 to -1, you start with the binary value (00000001), then complement it (11111110), then add one (11111111). Going the other way, you
start with the binary value (11111111), then complement it (00000000), then add one (00000001).
When we start using one place in a byte to represent a sign rather than a numeric symbol, it starts to seem silly to call all the places "digits." Actually, the more common term is "bit," short for
"binary digit." We say that eight bits make a "byte," and the bits can represent digits or other logical states as needed. The leftmost ("high") bit has a different meaning for signed bytes than for
unsigned bytes.
A signed byte can represent any number from -128 to 127.
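Here is a small Python sketch of the two's-complement rules described above; Python integers are unbounded, so the eight-bit wrap-around has to be done by hand with a mask.

def to_signed_byte(value):
    """Interpret an 8-bit pattern (0..255) as a signed byte."""
    return value - 256 if value >= 128 else value

def twos_complement(value):
    """Negate an 8-bit value: complement all bits, then add one."""
    return (~value + 1) & 0xFF

print(to_signed_byte(0b11111111))    # -1
print(twos_complement(0b00000001))   # 255, the bit pattern 11111111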
One problem with binary as a number representation is that it is so long! It takes lots of digits to express anything. When writing or entering these numbers it is easy to make mistakes. So a more
convenient representation often used in digital electronics and computers is hexadecimal, or base 16.
In hexadecimal we simply take every group of four bits (half a byte, called a "nibble;" cute, huh?) and assign a symbol to the sixteen possible values from 0 through 15 (decimal). What symbols? 0
through 9 are simply 0 through 9, while 10 through 15 are the letters A through F. Thus the number 13 (decimal) is D in hexadecimal.
If the math tables for binary are much easier than decimal, the math tables for hexadecimal are much harder than decimal, because you would have to learn all combinations from 0 through F (256
combinations each for addition and multiplication, rather than the 100 combinations in decimal). Here's a simple suggestion: don't bother. You use hexadecimal simply as a shorthand way of talking
about binary numbers, not for doing arithmetic. When talking about lots of bits, it's just easier to say "12345678" in hexadecimal than it is to say "00010010001101000101011001111000."
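The nibble-grouping is easy to demonstrate in Python:

n = 0b00010010001101000101011001111000   # the 32-bit example above
print(hex(n))        # 0x12345678: one hex digit per group of four bits
print(f"{n:032b}")   # back to the long binary form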
Copyright ©2003-2006, Mark Bereit. All rights reserved.
|
{"url":"http://www.markbereit.com/rsrc/ccdig_binary.html","timestamp":"2014-04-17T21:24:31Z","content_type":null,"content_length":"13658","record_id":"<urn:uuid:bf3f92f1-48cd-4034-87c0-d88af32946d2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MS4 - Multi-Scale Selector of Sequence Signatures: An alignment-free method for classification of biological sequences
While multiple alignment is the first step of usual classification schemes for biological sequences, alignment-free methods are being increasingly used as alternatives when multiple alignments fail.
Subword-based combinatorial methods are popular for their low algorithmic complexity (suffix trees ...) or exhaustivity (motif search), in general with fixed length word and/or number of mismatches.
We developed previously a method to detect local similarities (the N-local decoding) based on the occurrences of repeated subwords of fixed length, which does not impose a fixed number of mismatches.
The resulting similarities are, for some "good" values of N, sufficiently relevant to form the basis of a reliable alignment-free classification. The aim of this paper is to develop a method that
uses the similarities detected by N-local decoding while not imposing a fixed value of N. We present a procedure that selects for every position in the sequences an adaptive value of N, and we
implement it as the MS4 classification tool.
Among the equivalence classes produced by the N-local decodings for all N, we select a (relatively) small number of "relevant" classes corresponding to variable length subwords that carry enough
information to perform the classification. The parameter N, for which correct values are data-dependent and thus hard to guess, is here replaced by the average repetitivity κ of the sequences. We
show that our approach yields classifications of several sets of HIV/SIV sequences that agree with the accepted taxonomy, even on usually discarded repetitive regions (like the non-coding part of the LTR).
The method MS4 satisfactorily classifies a set of sequences that are notoriously hard to align. This suggests that our approach forms the basis of a reliable alignment-free classification tool. The
only parameter κ of MS4 seems to give reasonable results even for its default value, which can be a great advantage for sequence sets for which little information is available.
The classification of biological sequences is one of the fundamental tasks of bioinformatics, and faces special challenges in the genomic and post-genomic era. While it is a classical paradigm to
base it on an initial multiple alignment of the sequences, a current trend is to provide alignment-free classification methods (subword-based [1], kernel-based [2], composition vector-based [3,4
]...), in order to tackle datasets that cannot be amenable to multiple sequence alignment (MSA) methods. Approaches based on k-mers have also been used for more than a decade to detect anchoring
zones for whole genome alignments [5-8].
In this paper, we describe a method for the alignment-free classification of families of nucleic or protein sequences (composed of a few hundreds of members). Our aim is to rapidly detect similarity
segments shared by these sequences without having to consider the order in which they occur inside the sequences. Our approach allows us to take into account shuffled domains as well as repeated
The local similarity detection uses a previously described method called N-local decoding [9]. The basic principle of the N-local decoding is to rely on the occurrences of similar substrings in
sequences to cluster together positions in the sequences. More precisely, two positions in the considered sequences (that we will call "sites" for short) are directly related when they occur at the
same position in two equal substrings of fixed length N. The N-local decoding clusters together all indirectly related sites, that is, sites related by a chain of direct relations. This results in a
partition of the set of sites. For each subset of clustered sites (an equivalence class or simply class), the segments of length 2N - 1 which are centered on the sites exhibit local similarities.
Although it is based on exact matches, the indirect relation scheme results in the inclusion of an a priori unknown number of mismatches.
We have previously used successfully this k-mer based method for alignment-free classification [10], without being able to solve the delicate problem of tuning the parameter N. In the present paper,
we tackle this problem by developing a procedure to select among all the segments of similarity detected by N-local decoding for all N, a subset on which to base the classification. We call this
alignment-free classification method MS4, for Multi-Scale Selector of Sequence Signatures.
The N-local decoding has been efficiently implemented using suffix trees. Like in any k-mer based approach, there is no sensible criterion to fix a value of the parameter N. Here, we follow how the
partition of sites varies with the parameter N. When N increases, site classes tend to split into several subclasses, while for too low values of N, classes tend to group sites that do not share any
detectable similarity. MS4 attempts to select among all these classes of sites those that correspond to relevant homologous segments. More precisely, MS4 selects for a given site the smallest N such
that the average number of occurrences per sequence of the equivalence class of this site is smaller than a given threshold κ. The resulting values of N are different for different sites, and adapt
to the context of appearance of the site among the studied set of sequences. The parameter κ, unlike N, has a sensible global interpretation, and can be tuned to a value reflecting the maximum number of repetitions in the sequences. Finally, the classes selected by MS4 are used to compute a dissimilarity matrix on which the classification is based (using the NeighborNet option of SplitsTree [11,12]).
In this paper, we describe the implementation of the MS4 classification tool, which is accessible via a Web-based interface. We also give a validation on some real biological data that are not so
easy to classify: MS4 is illustrated on several families of HIV/SIV sequences. These sets have already been classified by us with the help of the N-local decoding method [13], and it was shown that the N
-local decoding classes correspond to segments of homology for these sequences [10]. The results obtained in [10] were in good agreement with the accepted classification [14,15], for several values
of N. These "good" values are however data-dependent and hard to guess. The approach described in this paper replaces this parameter with the more intuitive parameter κ.
Our present results show that MS4 gives correct classifications on coding and non-coding regions of HIV/SIV. Moreover the results are robust with respect to the variations of the parameter κ. In
fact, even on sequences containing repetitions (like the non-coding regions of the HIV/SIV LTR), the choice of κ = 1 gives satisfying results. Therefore, MS4 may be expected to give reasonable
results for this default value for κ when no other information on the sequences is available.
As mentioned in the Background section, we use the N-local decoding (NLD) in order to produce partitions of the set of all sites in the sequences under study [9]. A short recapitulation of NLD is
found here. The central part of this paper is the introduction of an object that describes the embedding of successive partitions as N increases. It turns out that this object is a tree. The tree
structure is essential, because it provides a criterion for choosing "relevant" partitions of sites, which may occur at several values of N. We use the chosen classes to construct a dissimilarity
matrix between sequences (taxa). This matrix becomes then the input for standard tree construction methods (SplitsTree4 [11,12] in our case).
N-Local Decoding
We consider a collection S of sequences s over a finite alphabet. The site space ∑ consists of all pairs σ = (s, p), where s is a sequence and p a position in it. This set is
∑ = {(s, p) : s ∈ S, 1 ≤ p ≤ ℓ(s)},
where ℓ(s) is the length of sequence s. The NLD procedure starts with a collection of sequences and with an integer N ≥ 1. It consists of two steps:
1. To every site σ in ∑, associate a neighborhood of length 2N - 1, consisting of σ and of N - 1 sites on each side of σ (neighborhoods that are too near the beginning or the end of a sequence are accordingly truncated, but this case will not be considered, for simplicity's sake, in the rest of the description). This neighborhood carries a word W of length 2N - 1. We consider all subwords w of length N of this word W. They can be "identified" by their position relative to σ, i.e. the index of the beginning of w inside W. The subword of W at relative position i will be denoted by w[i]. Given two sites σ and σ' carrying the neighborhood words W and W', we say that they are directly related if there exists an i such that the subword w[i] of W is identical to the subword w'[i] of W'. If two sites σ, σ' are directly related, we write σ ≃[N] σ'.
2. We define the equivalence relation ~[N] as the transitive closure of ≃[N]. In other words, we say that σ[1] ~[N] σ[2] if there is a chain of directly related sites connecting σ[1] and σ[2].
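To fix ideas, the two steps above can be mirrored by a brute-force computation. The implementation described in this paper relies on suffix trees and is far more efficient; the Python sketch below, with names of our own choosing, only follows the definition literally (subwords are indexed here by their offset relative to the site, which is equivalent to indexing them by their position inside W):

from collections import defaultdict

def nld_classes(sequences, N):
    """Brute-force N-local decoding: sites are directly related when their
    (2N-1)-neighborhoods share a length-N word at the same relative position;
    classes are the transitive closure, computed with union-find."""
    buckets = defaultdict(list)  # (relative offset, word) -> list of sites
    for si, s in enumerate(sequences):
        for p in range(len(s)):
            # Length-N subwords of the neighborhood of site (si, p); the
            # subword starting at offset i (relative to p) contains p for
            # i in [-(N-1), 0]; subwords falling off the sequence are
            # skipped, which amounts to the truncation mentioned above.
            for i in range(-(N - 1), 1):
                if 0 <= p + i and p + i + N <= len(s):
                    buckets[(i, s[p + i:p + i + N])].append((si, p))
    parent = {}
    def find(x):                        # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for sites in buckets.values():      # all sites sharing a bucket are directly related
        for site in sites[1:]:
            parent[find(site)] = find(sites[0])
    classes = defaultdict(set)
    for si, s in enumerate(sequences):
        for p in range(len(s)):
            classes[find((si, p))].add((si, p))
    return list(classes.values())

Each bucket collects the sites whose neighborhoods carry the same length-N word at the same relative position; the union-find pass then takes the transitive closure required by step 2.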
We illustrate this on an example (Fig. 1). We consider here a set of protein sequences, and examine one of the equivalence classes obtained by N-local decoding with N = 7. This class consists of 6
sites. The first site is described by the pair (0,571): this means that it lies at position 571 of the sequence number "0", and similarly for the other five sites. Since N = 7, the neighborhoods
around these sites are of length 2N - 1 = 13. The words in these neighborhoods are shown on the picture, with the central letter displayed in red.
Figure 1. NLD illustration. Graphical representation of relatedness within an NLD class, with N = 7. For each one of the six sites, the word occupying its neighborhood is shown on the right-hand side of the picture. Directly related sites are connected by solid lines: each color corresponds to (at least) one word of length 7 shared by two neighborhoods. Broken lines connect sites that are related but not directly related.
Directly related sites are connected by solid lines. For instance, the sites (0, 571); (3, 630) and (8, 614) share the word LREIDED starting at the third position of their environment. The sites that
are related (but not directly related) are connected by broken lines. For instance, the sites (1, 580) and (5, 528) are connected by the chain (1, 580) → (0, 571) → (3, 630) → (5, 528). The fact that
every site is connected to every other site means that this set of sites is a class.
The Partition Tree
A recurring problem of N-mer-based methods is that there does not seem to be a good criterion to tune this parameter N to an acceptable value. There is moreover no real reason to believe that a
single "optimal" value will always be meaningful, since the similarity between sequences can depend very much on the position of neighborhoods in sequences.
In the case of N-local decoding, we combine the different equivalence classes for various values of N by introducing a new construction, the partition tree, which encodes the way in which equivalence
classes for successive values of N are related. This tree will allow us to choose a set of "relevant" NLD-classes. Let ℰ^N be the partition of ∑ induced by ~[N].
Lemma 1. For all N ≥ 0, the partitions satisfy ℰ^(N+1) ⊂ ℰ^N.
Proof. Compare the partition of ∑ produced by ~[N+1] with the partition produced by ~[N]. If any two sites σ[1] and σ[2] are ~[N+1]-equivalent, we have to show that they are ~[N]-equivalent. Notice that ~[N+1]-equivalence reduces to a chain of direct ≃[N+1] relations, and that σ[1] ≃[N+1] σ[2] trivially implies σ[1] ≃[N] σ[2]: if two neighborhoods share a word of length N + 1 at a given relative position, they also share words of length N at the same relative positions.
This simple lemma is crucial, and corresponds to the intuitive idea that it is harder to lump together big words than small words. We are now ready to define the partition tree.
Definition 1. For N > 0, denote by ℰ^N the set of equivalence classes defined by the relation ~[N]. Letting ℰ^0 = {∑} (which will correspond to the root of the tree), we can encode the set V = ∪[i ≥ 0] ℰ^i of equivalence classes for different values of N into the partition tree P = (V, E^P), defined by
E^P = {(C, C') : C ∈ ℰ^N, C' ∈ ℰ^(N+1), C' ⊆ C, N ≥ 0}.
In other words: the vertices of P are all the equivalence classes that correspond to ~[N ]for all values of N. The edges are drawn between pairs of classes that correspond to successive values of N
and such that one is a subset of the other. By the above lemma, any two sites that are (N + 1)-equivalent are also N-equivalent. On the other hand two sites that are N-equivalent are not necessarily
(N + 1)-equivalent. In other words, the N-classes split as N increases. The edges are drawn precisely between any N-class C and all the (N + 1)-classes into which C splits. From this definition, it
is clear that any vertex of P has at most one ancestor, i.e. that P is a tree. Finally, for memory saving purposes, all valency 2 nodes are suppressed from P (resulting in the compacted partition
tree). Examples of partition trees are given in Fig. 2 and Fig. 3.
Figure 2. Selection of relevant classes. Selection of relevant classes in a partition tree. On the right, the green nodes satisfy κ = 1 while the red ones do not. On the left, only the relevant
classes are shown.
Figure 3. Didactic example. Toy example of Multi-Scale Selector of Sequence Signatures (MS4 selection of classes). On the first row (top) we see the input sequences and the output of eligible classes
(MS4 classes). The second row shows the NLD re-writing from N = 1 to 5. The partition tree constructed on the basis of the re-writing is shown on the lower part of the picture. The leaves correspond
to classes that contain a single site (singletons). The dotted nodes should normally disappear from the compacted tree, and are only shown for clarity's sake. The eligible classes are colored in
green. Nodes are labelled with identifiers like C0_3 where C0 is an arbitrary class identifier and 3 the value of N.
A choice of classes
When we examine the N-equivalence classes for all possible N, we face a deluge of largely redundant information. We shall now use the tree of partitions to alleviate this problem. Given any set C of sites, we can define the size of C as the number of sites in C and the spread of C as the number of sequences which contain at least one element of C. Define κ(C) as the ratio between the size and the spread of C:
κ(C) = size(C) / spread(C).
For a given value κ ≥ 1, the condition κ(C) ≤ κ means that the average number of occurrences of class C per sequence where it occurs is at most κ. In particular, κ(C) = 1 means that no sequence contains more than one element of C (of course we take here C to be an NLD-class). We call the parameter κ the maximum average repetitivity. We use this parameter to select nodes in the partition tree that satisfy κ(C) ≤ κ.
This condition is not sufficient to make these classes relevant (see an example in Fig. 2). Indeed, the bottom of the partition tree is occupied by classes corresponding to large N, which occur in
only one sequence. Such classes are of no interest. In order to find relevant classes, we have to "climb upward" (towards smaller values of N). Since any vertex of a tree has only one ancestor, the
following definition does make sense.
Definition 2. An NLD class C will be called κ-relevant if it satisfies κ(C) ≤ κ, while its ancestor does not.
The MS4 method consists in choosing all relevant classes in a set of sequences, and ignoring the others. The algorithm describing the implementation of MS4 is given in section Appendix. An explicit
toy example on which we can see both the N-local decoding and the selection of relevant classes at work for κ = 1 is shown in Fig. 3.
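Under the same caveat as before, the selection of κ-relevant classes can be sketched in a few lines of Python (illustrative only; it recomputes the partitions with the brute-force nld_classes above instead of walking the compacted partition tree of the Appendix):

def spread(cls):
    # number of sequences containing at least one site of the class
    return len({si for si, _ in cls})

def kappa(cls):
    return len(cls) / spread(cls)

def relevant_classes(sequences, n_min, n_max, kappa_max=1.0):
    """Definition 2: keep a class C with kappa(C) <= kappa_max whose
    ancestor (the containing class at N - 1) violates the threshold."""
    relevant = []
    prev = None  # partition at N - 1
    for N in range(n_min, n_max + 1):
        part = [frozenset(c) for c in nld_classes(sequences, N)]
        if prev is None:
            # at n_min the ancestor is taken to be the root class of all
            # sites, which fails the threshold on any non-trivial data
            relevant.extend(c for c in part if kappa(c) <= kappa_max)
        else:
            for c in part:
                ancestor = next(p for p in prev if c <= p)  # unique, by Lemma 1
                if kappa(c) <= kappa_max < kappa(ancestor):
                    relevant.append(c)
        prev = part
    return relevant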
The Dissimilarity matrix
At the end of the MS4 procedure, each sequence can be rewritten, by replacing the letter originally found at a given site by the identifier of the relevant MS4-class to which the site belongs (e.g.
Fig. 4). We use the number of MS4 classes shared by two sequences to define a similarity index, in a way similar to that described in [10]. This measure is closely related to the percentage of identity
classically used for sequence comparison.
Figure 4. Example of similarity blocks found by MS4 in the non-coding LTR sequences. Part of the alignment from 29 out of the 43 non-coding LTR sequences, centered on the NFκB binding site. The complete alignment of the 43 sequences is shown in Additional Files 8 and 9. The alignment is focused on the transcription factor NFκB binding site (GGGACTTTCC[A|G]) and its flanking regions. The names of the sequences are indicated with their accession numbers in the Los Alamos HIV sequence database. The sequences are grouped according to their phylogeny. The position of the first letter of the displayed region is given on the left. The letters are rewritten by applying the MS4 method to the whole non-coding LTR sequences. As seen in Additional File 8, the complete MS4 identifier is constructed as follows: e.g. C24_8 (class C24 for an N value of 8). Identical recoded letters that lie in the same column are displayed in the same colour. Here the MS4 identifier has been simplified: we indicate just the letter and the value of N. It can therefore happen that two different MS4 classes lying in the same column, with the same letter and the same N value, are only distinguished by their colour (e.g. A18, and also T18 in HIV-1-M/G, which are red or green). Repeated segments of the same sequence are put one under the other; the sequences are therefore often written on several lines, to highlight similarities between sequences and within sequences. Most often the similarity blocks are aligned, and the great majority of identically recoded letters lie in a single column. Some colored letters appear unique because only 29 of the 43 sequences are displayed in this figure.
Given any two sequences seq[i] and seq[j], we compute a number d[ij] as follows. For a class c, let n[i](c) be the number of occurrences of c in seq[i]. Denote by C[ij] the set of relevant classes that have representatives both in seq[i] and seq[j]. Since the two sequences can contain a different number of occurrences of a class, we put n[ij] = Σ[c ∈ C[ij]] min(n[i](c), n[j](c)). We then define the dissimilarity d[ij] from n[ij], normalized by analogy with the percentage of identity (Eq. 3). In fact, n[ij] is the sum of the local similarities shared by the two sequences. Any exact common word of length M corresponds to M common MS4 classes (e.g. Fig. 4). When κ = 1, n[ij] is simply the number of relevant classes having representatives in both seq[i] and seq[j]. This dissimilarity matrix is used as input in NeighborNet of SplitsTree4 [11,12] to
produce the split networks displayed in Fig. 5 and Fig. 6.
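For completeness, here is a sketch of how such a matrix could be assembled from the selected classes. The min in n[ij] follows the definition above; the normalization in the last step is only our placeholder, since Eq. 3 of the paper is the authoritative definition of d[ij]:

from collections import Counter

def dissimilarity_matrix(sequences, classes):
    """n_ij sums, over the classes shared by sequences i and j, the smaller
    of the two occurrence counts; d_ij decreases as n_ij grows."""
    m = len(sequences)
    counts = [Counter() for _ in range(m)]   # n_i(c) for each sequence
    for ci, cls in enumerate(classes):
        for si, _ in cls:
            counts[si][ci] += 1
    d = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            n_ij = sum(min(counts[i][c], counts[j][c])
                       for c in counts[i].keys() & counts[j].keys())
            # placeholder normalization by the mean length; the paper's
            # Eq. 3 is the authoritative definition of d_ij
            denom = (len(sequences[i]) + len(sequences[j])) / 2
            d[i][j] = d[j][i] = 1.0 - n_ij / denom
    return d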
Figure 5. Network from the HIV/SIV genomes. The split-network obtained from 70 HIV/SIV genome sequences (dissimilarity matrix calculated by MS4). The sequences names are written as follows: their
GenBank accession numbers, followed by their nomenclature names [15].
Figure 6. Network from the non-coding LTR sequences. The split-network obtained from 43 HIV/SIV non-coding parts of LTR nucleotide sequences (distance matrix calculated by MS4 for κ = 1 and N varying
from 2 to 100). M15390 corresponds to the HIV-2-A ROD isolate, just as X05291 does in Fig. 5. Sequence names follow the same rule as in Fig. 5.
Results and Discussion
MS4-classification of complete HIV/SIV genomes
We have applied the MS4-method, followed by a computation of the dissimilarity matrix (see section Methods), and the construction of a split-network (with the option NeighborNet [12] of SplitsTree4 [
11]) to a family of 70 HIV/SIV genomes. The input for the calculation of the dissimilarity matrix consists of the classes selected by MS4 with κ = 1, for values of N between 2 and 60. We use here the
same 70 non-recombinant HIV (Human immunodeficiency virus)/SIV (Simian immunodeficiency virus) nucleotide sequences that we studied previously in [10] by using the N-local decoding method. These
sequences include four incomplete (gag) sequences (HIV-2 subtype C, D, E, F). These short sequences are subtyped in the sequence databases, so they appear to have kept subtyping signals that are in
the complete genome sequences. The 66 complete sequences range in length from 8555 to 11443 nucleotides. All these sequences can be retrieved from the Los Alamos HIV sequence database [16] (their
accession numbers are given in Fig. 5). The accepted groups are as follows:
1. HIV-1 group M (subtypes A-D, F-H, J, K; A is split into A1 and A2, and F is divided into F1 and F2),
2. HIV-1 group N,
3. HIV-1 group O,
4. HIV-2 groups A, B, G,
5. SIV-CPZ (chimpanzee)
6. SIV-SMM (sooty mangabey)
We produce a network by application of SplitsTree4 on the basis of a dissimilarity matrix given by the MS4 method. Fig. 5 shows the network obtained by our calculation. The network is quite
tree-like. The two types of HIV are clearly distinguished: HIV-1 is closer to SIV-CPZ and HIV-2 is closer to SIV-SMM. The HIV-1 group M, on the left, is clearly separated from the rest. The nine
subtypes of HIV-1 group M (major) cluster distinctly, with sub-subtypes significantly more closely related to each other (A1 and A2, F1 and F2, B and D that should be regarded as sub-subtypes [14,15
]). Subtype K is more distant from sub-subtypes F1 and F2 than these are from each other, but closer to them than to the other subtypes. The HIV-1 group N intercalates between HIV-1-M and SIV-CPZ (-CAM3, -CAM5, -GAB, and -US). The HIV-1 group O is intercalated between these SIV-CPZ and SIV-CPZ-ANT, which marks the borderline of the HIV-1/SIV-CPZ lineages. The HIV-2 groups also form clear clusters, including the groups C, D, E, and F, whose sequences cover only about half of the gag region.
Within the HIV-2 viruses, notice that the HIV-2 area, with the exception of the groups A and G, is less tree-like than the rest. From the aspect of the network, it seems that HIV-2-C tends to cluster
both with HIV-2-B and with SIV-SMM. Another example is SIV-SMM-MAC, which tends to group with both HIV-2-F and HIV-2-D. Notice that the sequences HIV-2-C, HIV-2-D and HIV-2-F are short.
These groupings, which were obtained without alignments and without parameters, agree with accepted classifications.
In our previous paper, we varied the parameter N and we selected values of N that agree with existing knowledge; it turned out that correct tree topologies were found for N in the range from 13 to
35. The fact that the same groupings were found by the MS4 method with no other input than the sequences themselves gives us some confidence in the validity of this approach.
HIV/SIV sequences from the Compendium 2000
We have also calculated a split network from the 46 HIV/SIV complete nucleotide sequences of the Compendium 2000 (HIV-1/HIV-2/SIV Complete Genomes), and compared it with a tree available at [17]. The
result of our calculation is tree-like, and agrees with the topology of the Compendium tree (Additional File 1).
Additional file 1. Network for Compendium2000 sequences. Network for the 46 Compendium2000 sequences computed by SplitsTree4 on our MS4 dissimilarity matrix with κ = 1 (from N = 2 to N = 60).
Format: PNG Size: 64KB Download file
Major genes of HIV/SIV
The major genes (gag, pol, env) of the HIV/SIV sequences (see above) were also tested.
1. For gag we have 70 sequences: 66 complete sequences (1473 to 1569 nucleotides in length) and 4 partial sequences covering about half the gag regions (771-781 nt).
2. For pol: we have 66 complete sequences (2993-3360 nt).
3. For env: we have 66 complete sequences (2499-2658 nt).
The regions pol and env were unavailable for the 4 HIV-2 groups C-F. The trees obtained for gag, pol and env give a good classification, and the same observations as those detailed above for the 70 complete sequences apply to them (Additional Files 2, 3, 4).
Additional file 2. Network for gag sequences. Network for the 70 gag sequences computed by SplitsTree4 on MS4 dissimilarity matrix with κ = 1 (N[max ]= 510).
Format: PNG Size: 56KB Download file
Additional file 3. Network for the pol sequences. Network for the 66 pol sequences computed by SplitsTree4 on MS4 dissimilarity matrix with κ = 1 (N[max ]= 962).
Format: PNG Size: 59KB Download file
Additional file 4. Network for env sequences. Network for the 66 env sequences computed by SplitsTree4 on MS4 dissimilarity matrix with κ = 1 (N[max ]= 794).
Format: PNG Size: 72KB Download file
MS4-classification of short sequences: nef and non-coding LTR sequences
Non-coding LTR
In order to test our method, we have also looked at parts of the HIV/SIV genomes that are notoriously hard to align due to inner repetitions in the sequences. One of them (retrieved from 43 of the 70 sequences) covers the non-coding part of the long terminal repeat (the complete non-coding LTR region, or at least its portion including the polyadenylation signal AATAAA). The lengths of this part range from 211 to 328 nt in the HIV-1/SIV-CPZ subset, and from 433 to 508 nt in the HIV-2/SIV-SMM subset. These short non-coding segments contain many duplications/insertions/deletions that make them difficult for traditional alignment-based phylogenetic studies.
The network obtained (Fig. 6) shows again a clear separation between HIV-1 and HIV-2, even though it was constructed with short and "difficult" subsequences. It is less treelike than the network
obtained from the complete sequences, which is not surprising. The comparison between Fig. 5 and Fig. 6 shows several features which may require further investigation: while the complete genomes produce a very strong grouping of the HIV-1-M subtypes, the non-coding LTR shows several discrepancies for these subtypes. The clustering of HIV-2 (and their groups), SIV-SMM, HIV-1-O, SIV-CPZ and
HIV-1-M is correct. The network (Fig. 6) is similar to the tree in our previous paper [10].
It is interesting to notice that the two HIV-1-N are not very clearly grouped together. The sequence AJ271370_HIV-1-N is grouped both with the chimpanzee group (SIV-CPZ) and with AJ006022_HIV-1-N. On
the other hand, AJ006022_HIV-1-N tends to group both with the other HIV-1-N and with AF061640_HIV-1-M-G (but less clearly). In the Neighbor Joining tree of [10], the two HIV-1-N are grouped together
with a bootstrap value of 95% and connected with the group SIV-CPZ with bootstrap value of only 55%.
Even though our results show the difficulties of treating the non-coding part of the LTR, it should be stressed that our method does say something about these sequences, whereas they are not tractable by standard alignment-based methods [10].
The featured sequences are reputedly hard to align, because they exhibit several repeated segments. MS4, used together with SplitsTree4, gives relevant results on these data that are usually set
aside for the typing and subtyping of HIV-SIV, for lack of sufficient phylogenetic signal. This observation was already present in our previous study which used only the N-local decoding method. In
this previous study, we proceeded to a careful (and tedious) scrutiny of several trees resulting from the NLD method for various values of the parameter N. We showed that, for the non-coding LTR
sequences, the best tree (best fitting the reference classification) was obtained for the value N = 11. The splits networks that are obtained by MS4, or by NLD for N = 11 (Additional File 5), are
similar and yield correct groupings of the non-coding LTR. One only notes a discrepancy inside group M, NLD giving a better clustering of the A subtypes, while MS4 groups H subtypes better.
Additional file 5. Network for LTR sequences obtained with NLD. The SplitsTree4 network for the non-coding LTR sequences computed with the NLD method for a fixed word length of N = 11. The NLD method is described in [10]; it uses a similar similarity index, but with a fixed word length. In [10] we used Neighbor Joining instead of split networks.
Format: PNG Size: 67KB Download file
It should be noticed that when we varied the maximum average repetitivity κ from 1.0 to 10.0 (in steps of 0.5), the obtained classifications turned out to be remarkably robust to this
variation (e.g. Additional Files 6 and 7).
Additional file 6. SplitsTree network for κ = 5 for LTR sequences. Network for the 43 non-coding parts of HIV LTR sequences computed by SplitsTree4 on the MS4 dissimilarity matrix for the value κ = 5 (N from 2 to 100).
Format: PNG Size: 17KB Download file
Additional file 7. SplitsTree network for κ = 10 for LTR sequences. Network for the 43 non-coding parts of HIV LTR sequences computed by SplitsTree4 on the MS4 dissimilarity matrix for the value κ = 10 (N from 2 to 100).
Format: PNG Size: 17KB Download file
NFkB region
We focus now on the non-coding region of the LTR, to show how MS4 deals with repetitions in the sequences. Fig. 4 and the figures in Additional Files 8 and 9 show the binding site of the
transcription factor NFκB and its flanking regions [10]. This site is characterised by the signature GGGACTTTCC[A|G], which is present one or two times in the non-coding region of the LTR of HIV/SIV
genomes (one or two additional imperfect copies may exist).
Additional file 8. Similarity blocks found by MS4 in non-coding LTR sequences. Superposition of MS4 classes on an expert-curated alignment of the non-coding part of 43 HIV-SIV LTR sequences, focused on the NFκB region. This is a nucleotide sequence alignment of the 43 non-coding LTR sequences. Apart from minor modifications, the alignment is the same as that in Fig. 5 in [10]. The alignment is focused on the transcription factor NFκB binding site (GGGACTTTCC[A|G]) and its flanking regions. The names of the sequences are indicated with their accession numbers in the Los Alamos HIV sequence database. The sequences are grouped according to their phylogeny. The letters are rewritten by applying the MS4 method to the whole non-coding LTR sequences. The MS4 identifier is constructed as follows: e.g. C24_8 (class C24 for an N value of 8). Identical recoded letters that lie in the same column are displayed in the same colour. When they are not all aligned in the same column, no colour is used (likewise when they are unique in this part of the alignment). The repeated motifs inside one sequence are put one under the other; the sequences are therefore often written on several lines, to highlight similarities between sequences and within sequences. Most often the similarity blocks are aligned, and the great majority of identically recoded letters lie in a single column.
Format: XLS Size: 54KB Download file
This file can be viewed with: Microsoft Excel Viewer
Additional file 9. Region of the NFκB binding site. The complete alignment, part of which is featured in Fig. 4. This figure corresponds to the figure in Additional File 8. The colours are the same as in the figure in Additional File 8, but here the MS4 identifier has been simplified as follows: we indicate just the letter and the value of N. It can therefore happen that two different MS4 classes that lie in the same column, with the same letter and the same N value, are only distinguished by their colour (e.g. A18, and also T18 in HIV-1-M/G, which are red or green).
Format: PDF Size: 32KB Download file
This file can be viewed with: Adobe Acrobat Reader
It clearly appears that, although the parameter κ is here set to 1, this zone contains relevant classes over the whole repeated region. Each repeated motif of the NFκB pattern is identified by a different set of MS4-classes corresponding to N larger than the length of the repeated motif. Fig. 4 illustrates how the MS4-classes on this repetitive region participate in the overall MS4 classification. We clearly distinguish the HIV-1-N group, which has some similarity with SIV-CPZ, the group HIV-1-O, and the group HIV-1-M, in which we can distinguish e.g. the subtypes HIV-1-M/G, C and J. The HIV-2 sequences are clearly separated into three groups A, B and G, which show similarities with SIV-SMM. This example illustrates the facts that (a) repeated segments are taken into account by the MS4 method, even for κ = 1 (which corresponds to at most one occurrence of a class per sequence), and (b) each repeated segment participates in the classification of our set of sequences. Fig. 4 also illustrates the way that the rewriting of sequences in terms of MS4-classes defines the dissimilarity between sequences (see Eq. 3). For instance, in the sequences HIV-1-M/J, a class such as 'A49' corresponds to an exact word of length 49 shared by the two sequences. These classes correspond to the value N = 49 when the similarity concerns only 2 sequences (a straightforward exact match), but to a smaller N when it is shared by more than 2 sequences (most often N = 18 for the binding site of NFκB).
The nef sequences
We have also studied the 66 nef sequences (292-783 nt). The classification by MS4 is correct except for a few discrepancies (already described in [10]): in the group HIV-1-M, sub-subtypes F1 and F2 mix together, and the position of subtype K is uncertain between F1/F2 and J (Additional File 10). In both cases that we just saw (non-coding LTR and nef), it is obvious that a full classification is not possible due to conflicting signals, and it is necessary to find homologous sites on a multiple alignment (as we did for the LTR with N-local decoding in [18]).
Additional file 10. Network for the nef sequences. Network for the 66 nef nucleic sequences computed by SplitsTree4 on MS4 dissimilarity matrix with κ = 1 (for N[max ]= 543).
Format: PNG Size: 62KB Download file
Here we examine nef more precisely, a sequence which is important for the virulence of the virus. We show a multiple alignment of the 66 sequences (Fig. 7). The Dialign [19] multiple alignment has been manually edited by putting in the same column the sites corresponding to one MS4 class (see section Methods). The results have been visualized with the help of Jalview [20], a multiple alignment editor which allows the user to define, for each color, the set of sites that carry that color. Fig. 7 shows an unambiguous sector of this alignment. The identifiers of the classes are not shown on the figure, but Jalview allows the user to click on a letter and recover this information. Identical letters (A, C, G or T) that are in the same column and have the same colour belong to the same class. We clearly see in Fig. 7 that there are classes that appear only in HIV-1, classes that appear only in HIV-2, and classes that appear in both. The fact that the sequences can be correctly classified by MS4 suggests that the majority of sites regrouped in one class correspond to blocks of homology between sequences.
Figure 7. Screenshot of a local nef alignment. Jalview screenshot of positions 402 to 510 of the alignment of the 66 nef sequences. Identical letters (A, C, G or T) that are of the same color and in the same column come from the same MS4 class. (It can happen that two neighboring colors are hard to distinguish.) In the left column, the sequences are identified by their accession number, the type of virus (HIV-1 or HIV-2, SIV-CPZ or SIV-SMM), the group (for example HIV-1-M, N or O, or HIV-2-A), the subtype, and, in the case of HIV-1-M for example, the sub-subtype (e.g. C). The sites that are not colored belong to classes with only one element.
Conclusions
This paper gives a description of the MultiScale Selector of Sequence Signatures (MS4) method and uses it for an alignment-free (and virtually parameter-free) classification of a family of sequences. The
core of the method consists in the selection of "relevant" classes of segments, which are assumed to carry similarity information, although the criterion for grouping them together is purely
combinatorial (classification by context [9]). The point of our method is that it does not require the specification of a word length parameter and it does not consider only exact words.
The user may choose a parameter κ which reflects the average repetitivity of the set of sequences under consideration. The default value κ = 1 yields satisfying results in the examples we have
considered so far. MS4 automatically sets a local length parameter N, which depends on the starting set of sequences and on local similarities between sequences.
In this paper, we test the method on a set of well-studied HIV/SIV sequences [10,14,16] on which one of us is an expert [10,18]. The results obtained are in excellent agreement with the accepted
knowledge. The MS4 method has also been applied to other data (not shown here). It should be noted that it is not accurate on too-small datasets. In our experience, this program can be applied in its present state to sets composed of a dozen to a few hundred sequences (datasets of a few Mb). Note also that MS4 works for protein data as well as genes (e.g. Additional File 11).
Additional file 11. Network for the Nef protein sequences. Network for the 66 Nef protein sequences on MS4 dissimilarity matrix with κ = 1 (for N = 2 to N = 100).
Format: PNG Size: 452KB Download file
As N decreases, the N-local decoding method detects weaker similarities, before being flooded by spurious ones [13]. Concerning the selection of equivalence classes, our aim is to select as many
non-redundant homologous segments as possible, while keeping the background noise at a low level. Our default criterion for "relevant" classes locally sets N above this level, at the cost of losing
some occurrences of repeated similar segments. By tuning the parameter κ, it is possible to accept a maximal average quantity of repetitions below a given threshold. When κ is set too high, the
result of the classification can degenerate, and tends towards the mere letter-composition criterion as κ tends to infinity. By default, we exclude repetitions of any given class in the same
sequence. However, even for this value, the repeated segments are not lost altogether. When the value of N becomes larger than the size of the repetition, the MS4 classes change (as subsets of sites) until the different repetitions are assigned different MS4 classes. This can indeed result in a clearer identification of the distinct homologous repetitions. This phenomenon is
illustrated on the well known repetitive NFκB binding regions of non-coding LTR (see Fig. 4 and Section Results sub-section NFκB region). Although our current criterion can be tuned to take
repetitivity into account, the classifications of the HIV/SIV sequences turn out to be remarkably robust to the variations of the parameter κ (for example see in additional files 6 and 7 the
resulting split networks from the non-coding part of the LTR sequences for κ = 5 and 10). Nevertheless, it seems desirable to obtain a more significant, statistics-based criterion to prune the tree formed by the whole set of embedded partitions (see section Methods, subsections The Partition Tree and A choice of classes). The last step concerns the computation of the similarity matrix. Our similarity is
straightforward: it consists in counting the number of MS4 classes that are shared by 2 sequences. This corresponds to a usual basic scheme for the comparison of two nucleic sequences (% identity).
We group together similar sites (according to MS4) into equivalence classes. As a result, a segment of identity of length N between sequences will result in N MS4 classes (Additional Files 8 and 9).
Each MS4 class has an equal weight in our dissimilarity computation (See Eq.3). In the case of an exact repeated subword of length N between two sequences, the contribution of this subword to the
dissimilarity is exactly N.
However, it could be also possible in the future to obtain a SplitsTree by constructing directly the splits themselves on the basis of the selected segment classes, and to avoid the computation of
the matrix. The presence of incompatible signals (resulting in parallelograms) in the networks constructed by SplitsTree4 [11] from MS4 similarity matrices for short sequences shows, as expected, that this method must usually be complemented by visual inspection. This can be achieved by coupling MS4 with a multiple alignment editor like Jalview [22] (see Fig. 6 and Fig. 7). The classes detected by MS4 can therefore be used to help the manual editing of a multiple alignment. We also use them to determine anchor points for multiple alignment programs [21].
A user-friendly Web interface is available at http://stat.genopole.cnrs.fr/ms4/. It takes as input a file with sequences in FASTA format and returns the dissimilarity matrix in NEXUS format, ready to be run through the NeighborNet option of SplitsTree4. The allowed parameters are κ (default value 1) and the range of N for computing the partition tree (default values: from 2 to N[max], which is the size of the maximal repeated word shared by two sequences in the dataset). The Python code is available in Additional File 12 and upon request from the corresponding author (for some implementation details see the algorithm in section Appendix).
Additional file 12. Python source code. Python implementation of the MS4 algorithm for Linux systems. See the INSTALL and README files for usage.
Format: GZ Size: 75KB Download file
Authors' contributions
EC and FP conceived the method and wrote part of the code, GG made the code available and implemented the Web interface, IL gave the original idea for the biological application and provided expert assessment of the results, GD wrote part of the code, CD and EC drafted the manuscript, and CD produced the results, assessed them, and supervised this work. All authors read and approved the final manuscript.
Acknowledgements
We thank M. Pupin, M. Nadal and A. Grossmann for helpful discussions, M. Baudry for assistance with the code, and B. Prum and J.L. Risler for useful suggestions about this manuscript. EC was supported by Genopole and by the Deutsche Forschungsgemeinschaft under reference DFG Project MO 1048/6-1. We thank the anonymous referees for their comments.
Appendix
Algorithm 1. Main steps to select the relevant classes in the partition tree.
Input: all equivalence classes E ∈ ℰ^n, for n ∈ {n[min], ..., n[max]}
Input: ∑, the set of all sites
// Initialize the list of selected (MS4) equivalence classes
RelevantECList ← ∅
// Initialize the partition tree P: add the leaves (singleton classes) to P
for each site σ ∈ ∑ do
    // Add a new node to the partition tree P and initialize κ
    addNode(P, σ)
    κ(σ) ← 1
end for
// Main loop, from the finest partition (n[max]) to the coarsest (n[min])
n ← n[max]
while n ≥ n[min] do
    for each equivalence class E ∈ ℰ^n do
        // Build 𝒜, the set of highest ancestors in P of the sites of E
        𝒜 ← ∅
        for each site σ ∈ E do
            𝒜 ← 𝒜 ∪ {highestAncestor(P, σ)}
        end for
        // Compact the partition tree: skip E if only one ancestor is found
        if card(𝒜) > 1 then
            // Add a new node to the partition tree P
            addNode(P, E)
            // Compute κ(E); spread(E) is the number of sequences where E appears
            κ(E) ← card(E)/spread(E)
            for each equivalence class A ∈ 𝒜 do
                // Record the inclusion relation A ⊂ E in the partition tree P
                addEdge(P, (E, A))
                if κ(A) ≤ κ and κ(E) > κ then
                    // A is κ-relevant (Definition 2)
                    RelevantECList ← RelevantECList ∪ {A}
                end if
            end for
        end if
    end for
    n ← n - 1
end while
// Create the root node (i.e. ℰ^0) and connect it to the remaining highest
// ancestors in P, selecting relevant classes as above
return RelevantECList
Fault detection and model quality estimation using mixed integer linear programming
Abstract (Summary)
Robustness is a necessary property of a control system in an industrial environment, due to changes of the process such as changes of material quality, aging of equipment, replacement of instruments, manual operation (e.g. a valve that is opened or closed), etc. The uncertainties associated with the nominal process model are a concern in most approaches to robust control. The question is how to achieve a tight bound or shape of the uncertainty by using a set of measurement data. This active research area is known as model quality estimation. Change detection is a quite active field, both in
research and applications. Faults occur in almost all systems, and change detection often has the aim to locate the fault occurrence in time and to raise an alarm. Examples of faults in an industry
are leakage of a valve, clogging of a valve, or faults in measurement instruments. A time-varying linear system is a realistic description of many industrial processes, and nonlinear behavior can then
also be accounted for. Then, we can consider a linear system with time-varying parameters as the model uncertainty, e.g. an affine input-output approximation. Many time-varying changes or faults of industrial processes can be described as abrupt changes in parameters. The approach is to model them as piecewise constant parameters. The parameters of the linear time-varying system are thus approximated for two purposes: 1) as uncertainty bounds for use in robust control; 2) for fault detection and isolation. We present a method based on the assumption of piecewise constant parameters, which results in a sparse structure of their derivative. A MILP (Mixed Integer Linear Programming) algorithm to maximize the sparsity of a matrix is introduced in this thesis. We use the method to
estimate the time-varying parameters of a blender's hinged-outflow valve. This process is included in the pelletization process of Luossavaara-Kiirunavaara AB (LKAB), where the quality of iron ore pellets
depends on many factors. One important issue is the mixing of binding material and slurry. The level of the blender is controlled by regulating a hinged-outflow valve. Then, the modelling of the
valve is important, and the essential idea is to find a method to use the process model and the available measured data to detect two detrimental conditions and warn the operators. These two
conditions are: 1) The hinged valve is coated with slurry and therefore has to be cleaned to maintain its function. 2) Slurry is improperly distributed so that it does not cover the outflow valve,
which then loses its authority over outflow. The valve behaviour is nonlinear and depends on the viscosity of the materials in the tank. Therefore, we use the method to estimate the time-varying
parameters of the valve. Simulation with measurement data from the LKAB facility at Malmberget, Sweden, shows the viability of the algorithm. We then apply the method to the change-in-the-mean model
and compare it with four other change detection algorithms. Two applications, fuel monitoring and airbag control, are treated with good results. In another example, we consider a time-varying, time-delay, first-order process model. The gain, time constant and time delay are considered as uncertainties in this example. An estimate of the perturbations is produced based on the MILP method. The Padé approximation and the orthogonal collocation method are used to approximate the delay. An overhead crane is used as an illustrative example, where the length of the pendulum, the friction coefficient and the proportionality factor converting the control signal into the speed of the suspension point are time-varying and are therefore considered as uncertainties; we try to estimate the bounds of these uncertainties.
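As a rough illustration of the kind of formulation described above, consider a scalar model y_t ≈ φ_t θ_t with a piecewise-constant θ. A small mixed-integer program (sketched here with the PuLP modelling library; the big-M constant, the penalty λ and all names are our illustrative assumptions, not the thesis's exact formulation) penalizes the number of parameter jumps, which enforces a sparse derivative:

import pulp

def fit_piecewise_constant(y, phi, lam=1.0, big_m=100.0):
    """Fit y_t ~ phi_t * theta_t with piecewise-constant theta:
    minimize sum of absolute residuals + lam * (number of jumps of theta),
    where each jump is flagged by a binary variable (big-M linking)."""
    T = len(y)
    prob = pulp.LpProblem("sparse_parameter_changes", pulp.LpMinimize)
    theta = [pulp.LpVariable(f"theta_{t}", -big_m, big_m) for t in range(T)]
    e = [pulp.LpVariable(f"e_{t}", lowBound=0) for t in range(T)]
    z = [pulp.LpVariable(f"z_{t}", cat="Binary") for t in range(T - 1)]
    prob += pulp.lpSum(e) + lam * pulp.lpSum(z)   # objective
    for t in range(T):
        # e_t >= |y_t - phi_t * theta_t|
        prob += y[t] - phi[t] * theta[t] <= e[t]
        prob += phi[t] * theta[t] - y[t] <= e[t]
    for t in range(T - 1):
        # theta may change only where z_t = 1, so sum(z) counts the jumps
        prob += theta[t + 1] - theta[t] <= big_m * z[t]
        prob += theta[t] - theta[t + 1] <= big_m * z[t]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in theta], [v.value() for v in z]

Increasing lam trades fit accuracy for fewer detected changes; the recovered jump locations (z_t = 1) are the change-detection alarms.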
Bibliographical Information:
School:Luleå tekniska universitet
School Location:Sweden
Source Type:Master's Thesis
Date of Publication:01/01/2009
5.5.2.8 Example: linked lists and the fast accumulation of results
For many applications, one needs to be able to build up a list of some intermediate results obtained in some computation. The easiest way to set up such a list is to use Append or Prepend (or
perhaps, AppendTo or PrependTo). However, for large lists this method is quite inefficient. The reason is that lists in Mathematica are implemented as arrays, and thus every time we add an element,
the entire list is copied.
We can use FoldList to illustrate the creation of a list in this manner:
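For instance, building a list by repeated appending, with every intermediate stage retained:

FoldList[Append, {}, Range[5]]

(* {{}, {1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}, {1, 2, 3, 4, 5}} *)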
Now, let us do some performance tests :
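A sketch of such a test (the helper name appendTest is ours):

appendTest[n_] := Module[{res = {}},
   Do[res = Append[res, i], {i, n}]; res];

Timing[appendTest[#];][[1]] & /@ {5000, 10000, 20000}

(* the measured times roughly quadruple each time the size doubles *)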
We see that the time used by this operation is quadratic in the size of the list. We of course would like a linear time. One way to achieve this which is available starting with the Mathematica
version 5.0 is to use the Reap-Sow technique (to be described in Part II). Another (perhaps, slightly less efficient) way to get a linear time is to use linked lists. We will follow the discussion in
the book of David Wagner [7].
A linked list in Mathematica is a structure of the type
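For instance, with four elements it looks like this:

{1, {2, {3, {4, {}}}}}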
The advantage of this representation is that on every level we have a list containing just 2 elements, which is easy to copy. It will not work in this way for elements that are lists themselves, but then one can replace the list by an arbitrary head h.
To avoid a possible conflict with some h already defined, we can use Module[{h}, ...] to make it local.
Using Fold is the most natural way to create such structures :
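For instance (the name toLinkedList is ours; the Reverse makes the resulting linked list read in the original order):

toLinkedList[l_List] := Fold[{#2, #1} &, {}, Reverse[l]];

toLinkedList[Range[4]]

(* {1, {2, {3, {4, {}}}}} *)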
Converting them back to a normal list is just as easy with Flatten :
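For instance:

Flatten[{1, {2, {3, {4, {}}}}}]

(* {1, 2, 3, 4} *)

Module[{h}, Flatten[h[1, h[2, h[3, h[]]]], Infinity, h]]

(* h[1, 2, 3] *)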
Notice that in the second case we used the fact that Flatten takes as an optional third argument the head which has to be flattened, and then only flattens subexpressions with this head. In any case, this is another linear-time operation.
We can now write a function:
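A possible version (the name linkedListTest is ours; it accumulates with the idiom res = {i, res} and flattens once at the end):

linkedListTest[n_] := Module[{res = {}},
   Do[res = {i, res}, {i, n}];
   Reverse[Flatten[res]]];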
Let us do some performance tests:
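For instance:

Timing[linkedListTest[#];][[1]] & /@ {5000, 10000, 20000}

(* the times now grow roughly linearly with the size *)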
We see that the time is roughly linear in the list size; for example, for a list of 20000 elements we already get a speed-up of the order of 100 times! Flattening is even faster:
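For instance (the symbol nested is ours):

nested = Nest[{RandomInteger[100], #} &, {}, 100000];
Timing[Flatten[nested];][[1]]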
Here we assumed that the list of results is accumulated immediately, just to separate this topic from the other problem - specific part of a program. If the list is accumulated not immediately but
some other operations are performed in between (which is what usually happens), one just has to use the idiom list = {newelement, list}, to achieve the same result.
Computations, residuals and the power of indeterminacy
Results 1 - 10 of 18
, 1994
"... This paper addresses the problem of defining a formal tool to compare the expressive power of different concurrent constraint languages. We refine the notion of embedding by adding some
"reasonable" conditions, suitable for concurrent frameworks. The new notion, called modular embedding, is used to ..."
Cited by 32 (5 self)
Add to MetaCart
This paper addresses the problem of defining a formal tool to compare the expressive power of different concurrent constraint languages. We refine the notion of embedding by adding some "reasonable"
conditions, suitable for concurrent frameworks. The new notion, called modular embedding, is used to define a preorder among these languages, representing different degrees of expressiveness. We show
that this preorder is not trivial (i.e. it does not collapse into one equivalence class) by proving that Flat CP cannot be embedded into Flat GHC, and that Flat GHC cannot be embedded into a language
without communication primitives in the guards, while the converses hold.
- In CONCUR'98, volume 1466 of LNCS , 1998
"... . We recast dataflow in a modern categorical light using profunctors as a generalisation of relations. The well known causal anomalies associated with relational semantics of indeterminate
dataflow are avoided, but still we preserve much of the intuitions of a relational model. The development fits ..."
Cited by 28 (13 self)
Add to MetaCart
. We recast dataflow in a modern categorical light using profunctors as a generalisation of relations. The well known causal anomalies associated with relational semantics of indeterminate dataflow
are avoided, but still we preserve much of the intuitions of a relational model. The development fits with the view of categories of models for concurrency and the general treatment of bisimulation
they provide. In particular it fits with the recent categorical formulation of feedback using traced monoidal categories. The payoffs are: (1) explicit relations to existing models and semantics,
especially the usual axioms of monotone IO automata are read off from the definition of profunctors, (2) a new definition of bisimulation for dataflow, the proof of the congruence of which benefits
from the preservation properties associated with open maps and (3) a treatment of higherorder dataflow as a biproduct, essentially by following the geometry of interaction programme. 1 Introduction A
, 1989
"... Given suitable categories T; C and functor F : T ! C, if X; Y are objects of T, then we define an (X; Y )-relation in C to be a triple (R; r; ¯ r), where R is an object of C and r : R ! FX and ¯
r : R ! FY are morphisms of C. We define an algebra of relations in C, including operations of "relabeli ..."
Cited by 17 (6 self)
Add to MetaCart
Given suitable categories T, C and a functor F : T → C, if X, Y are objects of T, then we define an (X, Y)-relation in C to be a triple (R, r, r̄), where R is an object of C and r : R → FX and r̄ : R → FY are morphisms of C. We define an algebra of relations in C, including operations of "relabeling," "sequential composition," "parallel composition," and "feedback," which correspond intuitively to ways in which processes can be composed into networks. Each of these operations is defined in terms of composition and limits in C, and we observe that any operations defined in this way are preserved under the mapping from relations in C to relations in C′ induced by a continuous functor G : C → C′. To apply the theory, we define a category Auto of concurrent automata, and we give an operational semantics of dataflow-like networks of processes with indeterminate behaviors, in which a network is modeled as a relation in Auto. We then define a category EvDom of "event domains," a
- Information and Computation , 1992
"... We analyze the relative expressive power of variants of the indeterminate fair merge operator in the context of static dataflow. We establish that there are three different, provably
inequivalent, forms of unbounded indeterminacy. In particular, we show that the well-known fair merge primitive canno ..."
Cited by 17 (7 self)
Add to MetaCart
We analyze the relative expressive power of variants of the indeterminate fair merge operator in the context of static dataflow. We establish that there are three different, provably inequivalent,
forms of unbounded indeterminacy. In particular, we show that the well-known fair merge primitive cannot be expressed with just unbounded indeterminacy. Our proofs are based on a simple trace
semantics and on identifying properties of the behaviors of networks that are invariant under network composition. The properties we consider in this paper are all generalizations of monotonicity. 1
, 1997
"... We recast dataflow in a modern categorical light using profunctors as a generalization of relations. The well known causal anomalies associated with relational semantics of indeterminate
dataflow are avoided, but still we preserve much of the intuitions of a relational model. The development fit ..."
Cited by 12 (5 self)
Add to MetaCart
We recast dataflow in a modern categorical light using profunctors as a generalization of relations. The well known causal anomalies associated with relational semantics of indeterminate dataflow are
avoided, but still we preserve much of the intuitions of a relational model. The development fits with the view of categories of models for concurrency and the general treatment of bisimulation they
provide. In particular it fits with the recent categorical formulation of feedback using traced monoidal categories. The payoffs are: (1) explicit relations to existing models and semantics,
especially the usual axioms of monotone IO automata are read off from the definition of profunctors, (2) a new definition of bisimulation for dataflow, the proof of the congruence of which benefits
from the preservation properties associated with open maps and (3) a treatment of higher-order dataflow as a biproduct, essentially by following the geometry of interaction programme.
- FORMAL ASPECTS OF COMPUTING , 1990
"... A deterministic message-communicating process can be characterized by a "continuous" function f which describes the relationship between the inputs and the outputs of the process. The
operational behavior of a network of deterministic processes can be deduced from the least fixpoint of a function g, ..."
Cited by 11 (2 self)
Add to MetaCart
A deterministic message-communicating process can be characterized by a "continuous" function f which describes the relationship between the inputs and the outputs of the process. The operational
behavior of a network of deterministic processes can be deduced from the least fixpoint of a function g, where g is obtained from the functions that characterize the component processes of the
network. We show in this paper that a nondeterministic process can be characterized by a "description" consisting of a pair of functions. The behavior of a network consisting of such processes can be
obtained from the "smooth" solutions of the descriptions characterizing its component processes. The notion of smooth solution is a generalization of least fixpoint. Descriptions enjoy the crucial
property that a variable may be replaced by its definition.
- Semantics for Concurrency, Leicester , 1990
"... Kahn's principle states that if each process in a dataflow network computes a continuous input/output function, then so does the entire network. Moreover, in that case the function computed by
the network is the least fixed point of a continuous functional determined by the structure of the network ..."
Cited by 8 (2 self)
Add to MetaCart
Kahn's principle states that if each process in a dataflow network computes a continuous input/output function, then so does the entire network. Moreover, in that case the function computed by the
network is the least fixed point of a continuous functional determined by the structure of the network and the functions computed by the individual processes. Previous attempts to generalize this
principle in a straightforward way to "indeterminate" networks, in which processes need not compute functions, have been either too complex or have failed to give results consistent with operational
semantics. In this paper, we give a simple, direct generalization of Kahn's fixed-point principle to a large class of indeterminate dataflow networks, and we prove that results obtained by the
generalized principle are in agreement with a natural operational semantics. 1 Introduction Dataflow networks are a parallel programming paradigm in which a collection of concurrently and
asynchronously executing s...
- In Fifth Conference on the Mathematical Foundations of Programming Semantics, Springer-Verlag. Lecture Notes in Computer Science , 1989
"... Abstract We define a concrete operational model of concurrent systems, called trace automata. For such automata, there is a natural notion of permutation equivalence of computation sequences,
which holds between two computation sequences precisely when they represent two interleaved views of the &qu ..."
Cited by 7 (4 self)
Add to MetaCart
Abstract We define a concrete operational model of concurrent systems, called trace automata. For such automata, there is a natural notion of permutation equivalence of computation sequences, which
holds between two computation sequences precisely when they represent two interleaved views of the "same concurrent computation." Alternatively, permutation equivalence can be
characterized in terms of a residual operation on transitions of the automaton, and many interesting properties of concurrent computations can be expressed with the help of this operation. In
particular, concurrent computations, ordered by "prefix," form a Scott domain whose structure we characterize up to isomorphism. By axiomatizing the properties of the residual operation,
we obtain a more abstract formulation of automata, which we call concurrent transition systems (CTS's). By exploiting a correspondence between concurrent alphabets and certain CTS's, we are able to
use the rich algebraic structure of CTS's to obtain results in trace theory. Finally, we connect CTS's and trace automata by obtaining a characterization of those CTS's that correspond in a natural
way to trace automata, and we show how the correspondence suggests an interesting notion of morphism of trace automata.
- THEORETICAL COMP. SCIENCE , 1997
"... An automaton with concurrency relations A is a labeled transition system with a collection of binary relations indicating when two actions in a given state of the automaton can occur
independently of each other. The concurrency relations induce a natural equivalence relation for finite computatio ..."
Cited by 7 (2 self)
Add to MetaCart
An automaton with concurrency relations A is a labeled transition system with a collection of binary relations indicating when two actions in a given state of the automaton can occur independently of
each other. The concurrency relations induce a natural equivalence relation for finite computation sequences. We investigate two graph-theoretic representations of the equivalence classes of
computation sequences and obtain that under suitable assumptions on A they are isomorphic. Furthermore, the graphs are shown to carry a monoid operation reflecting precisely the composition of
computations. This generalizes fundamental graph-theoretical representation results due to Mazurkiewicz in trace theory.
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 730941, 18 pages
Research Article
Master-Slave Synchronization of Stochastic Neural Networks with Mixed Time-Varying Delays
^1School of Resources and Safety Engineering, China University of Mining and Technology, Beijing 100083, China
^2College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
^3Key Laboratory of Measurement and Control of CSE School of Automation, Southeast University, Ministry of Education, Nanjing 210096, China
Received 11 April 2011; Revised 29 July 2011; Accepted 4 August 2011
Academic Editor: Xue-Jun Xie
Copyright © 2012 Yongyong Ge et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper investigates the problem of master-slave synchronization for stochastic neural networks with both time-varying and distributed time-varying delays. Using the drive-response concept, the LMI approach, and a generalized convex combination, one novel synchronization criterion is obtained in terms of LMIs; the condition depends heavily on the upper and lower bounds of the state delay and of the distributed one. Moreover, the addressed systems include some famous network models as special cases, which means that our methods extend the present ones. Finally, two numerical examples are given to demonstrate the effectiveness of the presented scheme.
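To indicate what a criterion "in terms of LMIs" means computationally: such conditions are feasibility problems over symmetric matrices and can be checked numerically with standard solvers. A toy Lyapunov-type example in Python (the matrix A and the CVXPY modelling are illustrative assumptions, not the criterion derived in this paper):

import numpy as np
import cvxpy as cp

# A toy stable matrix; the paper's delay-dependent criterion has the
# same general shape: find a positive definite P satisfying matrix
# inequalities that are linear in P.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()  # requires an SDP-capable solver, e.g. SCS
print("LMI feasible:", prob.status == "optimal")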
1. Introduction
In the past decade, synchronization of chaotic systems has attracted considerable attention since the pioneering works of Pecora and Carroll [1], in which it was shown that, when some conditions are satisfied,
a chaotic system (the slave/response system) may become synchronized to another identical chaotic system (the master/drive system) if the master system sends some driving signals to the slave one.
Now, it is widely known that there exist many benefits of having synchronization or chaos synchronization in various engineering fields, such as secure communication [2], image processing [3], and
harmonic oscillation generation. Meanwhile, there exists synchronization in language development, which comes up with a common vocabulary, while agents' synchronization in organization management
will improve their work efficiency. Recently, chaos synchronization has been widely investigated due to its great potential applications. Especially, since artificial neural network model can exhibit
the chaotic behaviors [4, 5], the synchronization has become an important area of study, see [6–23] and references therein. As special complex networks, delayed neural networks have been also found
to exhibit some complex and unpredictable behaviors including stable equilibria, periodic oscillations, bifurcation, and chaotic attractors [24–27]. Presently, many literatures dealing with chaos
synchronization phenomena in delayed neural networks have appeared. Together with various techniques such as LMI tool, -matrix, and Jensen's inequalities, some elegant results have been derived for
global synchronization of various delayed neural networks including discrete-time ones in [6–14]. Moreover, some authors have considered the problems on adaptive synchronization and synchronization
in [15, 16].
Meanwhile, it is worth noting that, like time delays and parameter uncertainties, noises are ubiquitous in both natural and man-made systems, and the stochastic effects on neural networks have drawn particular attention. Thus a large number of elegant results concerning the dynamics of stochastic neural networks have already been presented in [17-23, 28, 29]. Since noise can induce stability and instability oscillations in a system, and by virtue of the stability theory for stochastic differential equations, there has been increasing interest in the study of synchronization for delayed neural networks with stochastic perturbations [17-23]. Based on the LMI technique, some novel results on global synchronization were derived in [17-19] for networks involving distributed delays or of neutral type. The works [20-23] considered adaptive synchronization and lag synchronization for stochastic delayed neural networks. However, the control schemes in [17-19] cannot tackle cases in which the upper bound on the delay's derivative is not less than 1, and the results in [20-23] are not formulated in terms of LMIs, which makes them inconvenient to check with recently developed algorithms. Meanwhile, to better reflect practical situations, distributed delays should be taken into consideration, and some researchers have begun preliminary discussions in [9-11, 19]. It is worth pointing out that the range of time delays considered in [17-23] is from 0 to an upper bound; in practice, the delay may vary within an interval whose lower bound is not restricted to be 0. Thus the criteria in the above literature can be conservative, because they do not use information on the lower bound of the delay. Meanwhile, it has been verified that the convex combination idea is more efficient than some previous techniques for tackling time-varying delays, although the idea still needs improvement since it does not take distributed delays into consideration [30]. Yet few authors have employed an improved convex combination to study stochastic neural networks with both time-varying and distributed time-varying delays and to propose a less conservative, easy-to-test control scheme for exponential synchronization; this constitutes the main focus of the present work.
Motivated by the above discussion, this paper focuses on exponential synchronization for a broad class of stochastic neural networks with mixed time-varying delays, in which the two involved delays belong to given intervals. The form of the addressed networks includes several well-known neural network models as special cases. Together with the drive-response concept and the Lyapunov stability theorem, a memory control law is proposed which guarantees the exponential synchronization of the drive and response systems. Finally, two illustrative examples are given to show that the obtained results improve upon some earlier reported works.
Notation 1. For a symmetric matrix $X$, $X > 0$ (resp., $X \ge 0$) means that $X$ is a positive-definite (resp., positive-semidefinite) matrix; $A^{T}$ and $B^{T}$ represent the transposes of matrices $A$ and $B$, respectively. For $\tau > 0$, $C([-\tau, 0]; \mathbb{R}^{n})$ denotes the family of continuous functions $\varphi$ from $[-\tau, 0]$ to $\mathbb{R}^{n}$ with the norm $\|\varphi\| = \sup_{-\tau \le s \le 0} |\varphi(s)|$. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t \ge 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_{t}\}_{t \ge 0}$ satisfying the usual conditions; $L^{2}_{\mathcal{F}_{0}}([-\tau, 0]; \mathbb{R}^{n})$ is the family of all $\mathcal{F}_{0}$-measurable $C([-\tau, 0]; \mathbb{R}^{n})$-valued random variables $\xi$ such that $\sup_{-\tau \le s \le 0} \mathbb{E}|\xi(s)|^{2} < \infty$, where $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $P$; $I$ denotes the identity matrix with an appropriate dimension; and $*$ denotes the symmetric term in a symmetric matrix.
2. Problem Formulations
Consider the following stochastic neural networks with time-varying delays described by where is the neuron state vector, represents the neuron activation function, is a constant external input
vector, and are the connection weight matrix, the delayed weight matrix, and the distributively delayed connection weight one, respectively.
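For orientation, a typical stochastic delayed neural network of this class, written here with hypothetical symbols $x(t)$, $f(\cdot)$, $J$, and $C$, $A$, $B$, $D$ for the quantities named above (and not necessarily the paper's exact equation), takes the form
$$\mathrm{d}x(t) = \Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{t-h(t)}^{t} f(x(s))\,\mathrm{d}s + J\Big]\mathrm{d}t.$$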
In this paper, we consider system (2.1) as the master system, and the slave system is given as follows, where the constant matrices are similar to the relevant ones in (2.1) and the control input is to be designed in order to achieve a certain control objective. In practical situations, the output signals of the drive system (2.1) can be received by the response one (2.2).
The following assumptions are imposed on systems (2.1) and (2.2) throughout the paper. (A1) Here and denote the time-varying delay and the distributed one satisfying and we introduce , and . (A2)
Each function is locally Lipschitz, and there exist positive scalars and such that for all . Here, we denote and . (A3) For the constants , the neuron activation functions in (2.1) are bounded and
satisfy (A4) In system (2.2), the function is locally Lipschitz continuous and satisfies the linear growth condition as well. Moreover, satisfies the following condition: where are the known constant
matrices of appropriate dimensions.
Let be the error state and subtract (2.1) from (2.2); it yields the synchronization error dynamical systems as follows: where . One can check that the function satisfies , and Moreover, we denote , ,
and In the paper, we adopt the following definition.
Definition 2.1 (see [18]). For the system (2.6) and every initial condition , the trivial solution is globally exponentially stable in the mean square, if there exist two positive scalars such that
where stands for the mathematical expectation and are the initial conditions of systems (2.1) and (2.2), respectively.
In many real applications, we are interested in designing a memoryless state-feedback controller with a constant gain matrix. In this paper, for the special case in which information on the size of the delay is available, we consider a delayed feedback controller of the form (2.10); substituting it into system (2.6) yields the closed-loop error system (2.11). The purpose of the paper is then to design a controller in (2.10) such that the slave system (2.2) synchronizes with the master one (2.1).
3. Main Results
In this section, some lemmas are introduced first.
Lemma 3.1 (see [18]). For any symmetric matrix , scalar , vector function such that the integrations concerned are well defined, then .
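(Lemma 3.1 is, in the standard form found in the literature it is cited from, Jensen's integral inequality: for any symmetric matrix $M \ge 0$, scalar $r > 0$, and vector function $\omega : [0, r] \to \mathbb{R}^{n}$ such that the integrations concerned are well defined,
$$\left(\int_{0}^{r} \omega(s)\,ds\right)^{T} M \left(\int_{0}^{r} \omega(s)\,ds\right) \le r \int_{0}^{r} \omega(s)^{T} M \omega(s)\,ds.)$$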
Lemma 3.2 (see [19]). Given constant matrices , where , then the linear matrix inequality (LMI) is equivalent to the condition: ,.
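(Lemma 3.2 is the Schur complement lemma; its standard statement is: given constant matrices $\Omega_{1} = \Omega_{1}^{T}$, $\Omega_{2} = \Omega_{2}^{T} > 0$, and $\Omega_{3}$ of appropriate dimensions, the LMI
$$\begin{pmatrix} \Omega_{1} & \Omega_{3}^{T} \\ \Omega_{3} & -\Omega_{2} \end{pmatrix} < 0$$
is equivalent to the condition $\Omega_{1} + \Omega_{3}^{T}\Omega_{2}^{-1}\Omega_{3} < 0$.)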
Lemma 3.3 (see [31]). Suppose that are the constant matrices of the appropriate dimensions, , and , then the inequality holds, if the four inequalities hold simultaneously.
Then, a novel criterion is presented for the exponential stability of system (2.11), which guarantees that the master system (2.1) synchronizes with the slave one (2.2).
Theorem 3.4. Suppose that assumptions (A1)-(A4) hold. Then system (2.11) has one equilibrium point and is globally exponentially stable in the mean square if there exist appropriate matrices, diagonal matrices, and a scalar such that the matrix inequalities (3.1)-(3.2) hold.
Proof. Denoting , we represent system (2.11) as the following equivalent form: Now, together with assumptions (A1) and (A2), we construct the following Lyapunov-Krasovskii functional: where with
setting , and . In the following, the weak infinitesimal operator of the stochastic process is given in [32].
By employing (A1) and (A2) and directly computing , it follows from any matrices that where and .
Now adding the terms on the right side of (3.8)–(3.11) to and employing (2.5), (3.1), it is easy to obtain Based on methods in [33] and (2.7), for any diagonal matrices , the following inequality can
be achieved: From (A1), for any diagonal matrix , one can yield Furthermore, for any constant matrices , we can obtain where Then together with the methods in [28, 29], combining (3.12)–(3.15) yields
where are presented in (3.2) and Together with Lemmas 3.2 and 3.3, the nonlinear matrix inequalities in (3.2) can guarantee to be true. Therefore, there must exist a negative scalar such that Taking
the mathematical expectation of (3.19), we can deduce , which indicates that the dynamics of the system (2.11) is globally asymptotically stable in the mean square. Based on in (3.6) and directly
computing, there must exist three positive scalars such that Letting , we can deduce By changing the integration sequence, it can be deduced that Substituting the terms (3.22) into the relevant ones
in (3.21), it is easy to have where . Choose one sufficiently small scalar such that . Then, . Through directly computing, there must exist a positive scalar such that Meanwhile, . Thus with (3.24),
one can obtain which indicates that system (2.11) is globally exponentially stable in the mean square, and the proof is completed.
Remark 3.5. For systems (2.1) and (2.2), many existing studies have paid much attention to the case of a positive-definite diagonal matrix, which can be checked as one special case of assumption (A3). Also, in Theorem 3.4, it can be checked that the relevant term in (3.17) was not simply enlarged but was equivalently guaranteed by utilizing the two matrix inequalities (3.2) and Lemma 3.3, which can be more effective than the techniques employed in [18, 28, 29]. Moreover, we compute and estimate the term in (3.11) more efficiently than existing works, owing to the fact that some previously ignored terms have been taken into consideration.
In order to show the design of the estimator gain matrices, a simple transformation is made to obtain the following theorem.
Theorem 3.6. Suppose that assumptions (A1)-(A4) hold. Then, with a suitable setting, system (2.1) and system (2.2) can exponentially achieve master-slave synchronization in the mean square if there exist appropriate matrices, diagonal matrices, and a scalar such that the LMIs in (3.26)-(3.27) hold, where the terms are similar to the relevant ones in (3.2). Moreover, the estimation gains can be recovered from the feasible solution.
Proof. Making the corresponding substitution in (3.2) of Theorem 3.4, it is easy to derive the result; the detailed proof is omitted here.
Remark 3.7. Theorem 3.6 presents one novel delay-dependent criterion guaranteeing that systems (2.1) and (2.2) achieve master-slave synchronization in an exponential way. The method is presented in terms of LMIs; therefore, by using the MATLAB LMI Toolbox, it is straightforward and convenient to check the feasibility of the proposed results without tuning any parameters. Moreover, the systems addressed in this paper can include some famous networks in [17, 19-21, 23] as special cases, even when the delay is not differentiable.
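As a concrete illustration of the LMI-feasibility workflow that Remark 3.7 describes, here is a minimal sketch in Python with cvxpy rather than the paper's MATLAB LMI Toolbox. It checks a simple Lyapunov-type LMI (find $P > 0$ with $A^{T}P + PA < 0$); the matrix A is made up for the example, and the paper's LMIs (3.26)-(3.27) have many more block variables, but the check-feasibility pattern is the same.

```python
import numpy as np
import cvxpy as cp

# Hypothetical 2x2 system matrix, for illustration only.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                  # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),   # Lyapunov LMI
]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)  # "optimal" here means the LMI system is feasible
```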
Remark 3.8. Through an appropriate setting in (3.6) and employing similar methods, Theorems 3.4 and 3.6 can be applied without taking the upper bound on the derivative of the delay into consideration, which means that Theorems 3.4 and 3.6 remain true even when that bound is unknown.
Remark 3.9. As is well known, most of the free-weighting matrices in Theorems 3.4 and 3.6 cannot help reduce the conservatism and only result in computational complexity. Thus we can choose the simplified slack matrices as follows, with appropriate matrices. Though the number of matrix variables in (3.30) is much smaller than in (3.2) and (3.27), the numerical examples given in the paper still demonstrate that the simplified criteria can reduce the conservatism as effectively as Theorems 3.4 and 3.6 do.
4. Numerical Examples
In this section, two numerical examples will be given to illustrate the effectiveness of the proposed results.
Example 4.1. Consider the drive system (2.1) and the response one (2.2) of delayed neural networks with the given data. It is then easy to check the required conditions, and by utilizing Theorem 3.6 the estimator gain matrices in (2.10) can be worked out. Furthermore, with the alternative setting, we can obtain the estimator gain matrices by using Theorem 3.6 and Remark 3.8, which means that the obtained results still hold when the time delay is not differentiable. However, the methods proposed in [17-19] fail to solve the synchronization problem even without the distributed delay.
Example 4.2. As a special case, we consider the master system (2.1) of delayed stochastic neural networks with the given data, for which the assumptions can be verified and the activation functions can be taken as indicated; the corresponding slave system follows. Then, together with Theorem 3.6, we can obtain a feasible solution to the LMIs in (3.26) and (3.27) by resorting to the MATLAB LMI Toolbox, from which the estimator gain matrices can be deduced. It follows from Theorem 3.6 that the drive system, with its given initial condition, synchronizes with the response system under the response system's initial condition. The phase and state trajectories of the drive and response systems and the state trajectories of the error system are shown in Figure 1; from Figure 1, we can see that the master system synchronizes with the slave system.
5. Conclusions
In this paper, we consider the synchronization control of stochastic neural networks with both time-varying and distributed time-varying delays. By using a Lyapunov functional and the LMI technique, one sufficient condition has been derived to ensure the global exponential stability of the error system, so that the slave system synchronizes with the master one, and the estimation gains can then be obtained. The obtained results are novel since the addressed networks take more general forms and some useful mathematical techniques are employed. Finally, we give two numerical examples to verify the theoretical results.
This work is supported by the National Natural Science Foundation of China (nos. 60835001, 60875035, 60904020, 61004032, and 61004046) and the China Postdoctoral Science Foundation Funded Special Project (no. 201003546).
1. L. M. Pecora and T. L. Carroll, “Synchronization in chaotic systems,” Physical Review Letters, vol. 64, no. 8, pp. 821–824, 1990.
2. T. L. Liao and S. H. Tsai, “Adaptive synchronization of chaotic systems and its application to secure communications,” Chaos, Solitons & Fractals, vol. 11, no. 9, pp. 1387–1396, 2000.
3. V. Perez-Munuzuri, V. Perez-Villar, and L. O. Chua, “Autowaves for image processing on a two-dimensional CNN array of excitable nonlinear circuits: flat and wrinkled labyrinths,” IEEE Transactions on Circuits and Systems I, vol. 40, no. 3, pp. 174–181, 1993.
4. F. Zou and J. A. Nossek, “Bifurcation and chaos in cellular neural networks,” IEEE Transactions on Circuits and Systems I, vol. 40, no. 3, pp. 166–173, 1993.
5. M. Gilli, “Strange attractors in delayed cellular neural networks,” IEEE Transactions on Circuits and Systems I, vol. 40, no. 11, pp. 849–853, 1993.
6. C.-J. Cheng, T.-L. Liao, and C.-C. Hwang, “Exponential synchronization of a class of chaotic neural networks,” Chaos, Solitons & Fractals, vol. 24, no. 1, pp. 197–206, 2005.
7. J.-J. Yan, J.-S. Lin, M.-L. Hung, and T.-L. Liao, “On the synchronization of neural networks containing time-varying delays and sector nonlinearity,” Physics Letters A, vol. 361, no. 1-2, pp. 70–77, 2007.
8. H. Huang and G. Feng, “Synchronization of nonidentical chaotic neural networks with time delays,” Neural Networks, vol. 22, no. 7, pp. 869–874, 2009.
9. Q. Song, “Design of controller on synchronization of chaotic neural networks with mixed time-varying delays,” Neurocomputing, vol. 72, no. 13–15, pp. 3288–3295, 2009.
10. T. Li, S. M. Fei, Q. Zhu, and S. Cong, “Exponential synchronization of chaotic neural networks with mixed delays,” Neurocomputing, vol. 71, no. 13–15, pp. 3005–3019, 2008.
11. T. Li, A. G. Song, S. M. Fei, and Y. Q. Guo, “Synchronization control of chaotic neural networks with time-varying and distributed delays,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 5-6, pp. 2372–2384, 2009.
12. J. H. Park, “Synchronization of cellular neural networks of neutral type via dynamic feedback controller,” Chaos, Solitons & Fractals, vol. 42, no. 3, pp. 1299–1304, 2009.
13. H. Li and D. Yue, “Synchronization stability of general complex dynamical networks with time-varying delays: a piecewise analysis method,” Journal of Computational and Applied Mathematics, vol. 232, no. 2, pp. 149–158, 2009.
14. Y. Liu, Z. Wang, J. Liang, and X. Liu, “Synchronization and state estimation for discrete-time complex networks with distributed delays,” IEEE Transactions on Systems, Man, and Cybernetics Part B, vol. 38, no. 5, pp. 1314–1325, 2008.
15. H. Zhang, Y. Xie, Z. Wang, and C. Zheng, “Adaptive synchronization between two different chaotic neural networks with time delay,” IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1841–1845, 2007.
16. H. R. Karimi and P. Maass, “Delay-range-dependent exponential H∞ synchronization of a class of delayed neural networks,” Chaos, Solitons & Fractals, vol. 41, no. 3, pp. 1125–1135, 2009.
17. W. Yu and J. Cao, “Synchronization control of stochastic delayed neural networks,” Physica A, vol. 373, pp. 252–260, 2007.
18. Y. Tang, J. A. Fang, and Q. Miao, “On the exponential synchronization of stochastic jumping chaotic neural networks with mixed delays and sector-bounded non-linearities,” Neurocomputing, vol. 72, no. 7-9, pp. 1694–1701, 2009.
19. J. H. Park and O. M. Kwon, “Synchronization of neural networks of neutral type with stochastic perturbation,” Modern Physics Letters B, vol. 23, no. 14, pp. 1743–1751, 2009.
20. X. Li and J. Cao, “Adaptive synchronization for delayed neural networks with stochastic perturbation,” Journal of the Franklin Institute, vol. 345, no. 7, pp. 779–791, 2008.
21. Y. Tang, R. Qiu, J. A. Fang, Q. Miao, and M. Xia, “Adaptive lag synchronization in unknown stochastic chaotic neural networks with discrete and distributed time-varying delays,” Physics Letters A, vol. 372, no. 24, pp. 4425–4433, 2008.
22. Y. Xia, Z. Yang, and M. Han, “Lag synchronization of unknown chaotic delayed Yang-Yang-type fuzzy neural networks with noise perturbation based on adaptive control and parameter identification,” IEEE Transactions on Neural Networks, vol. 20, no. 7, pp. 1165–1180, 2009.
23. Z. X. Liu, S. L. Liu, S. M. Zhong, and M. Ye, “pth moment exponential synchronization analysis for a class of stochastic neural networks with mixed delays,” Communications in Nonlinear Science and Numerical Simulation, vol. 15, pp. 1899–1909, 2010.
24. O. M. Kwon and J. H. Park, “Delay-dependent stability for uncertain cellular neural networks with discrete and distributed time-varying delays,” Journal of the Franklin Institute, vol. 345, no. 7, pp. 766–778, 2008.
25. R. Samidurai, S. Marshal Anthoni, and K. Balachandran, “Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays,” Nonlinear Analysis: Hybrid Systems, vol. 4, no. 1, pp. 103–112, 2010.
26. Y. Horikawa and H. Kitajima, “Bifurcation and stabilization of oscillations in ring neural networks with inertia,” Physica D, vol. 238, no. 23-24, pp. 2409–2418, 2009.
27. H. Lu, “Chaotic attractors in delayed neural networks,” Physics Letters A, vol. 298, no. 2-3, pp. 109–116, 2002.
28. W. H. Chen and X. Lu, “Mean square exponential stability of uncertain stochastic delayed neural networks,” Physics Letters A, vol. 372, no. 7, pp. 1061–1069, 2008.
29. H. Huang and G. Feng, “Delay-dependent stability for uncertain stochastic neural networks with time-varying delay,” Physica A, vol. 381, no. 1-2, pp. 93–103, 2007.
30. T. Li, A. Song, S. Fei, and T. Wang, “Global synchronization in arrays of coupled Lurie systems with both time-delay and hybrid coupling,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 1, pp. 10–20, 2011.
31. D. Yue, E. Tian, Y. Zhang, and C. Peng, “Delay-distribution-dependent stability and stabilization of T-S fuzzy systems with probabilistic interval delay,” IEEE Transactions on Systems, Man, and Cybernetics Part B, vol. 39, no. 2, pp. 503–516, 2009.
32. X. Mao, Stochastic Differential Equations and Their Applications, Horwood, Chichester, UK, 1997.
33. Z. Wang, H. Shu, Y. Liu, D. W. C. Ho, and X. Liu, “Robust stability analysis of generalized neural networks with discrete and distributed time delays,” Chaos, Solitons & Fractals, vol. 30, no. 4, pp. 886–896, 2006.
The Scientific World Journal
Volume 2013 (2013), Article ID 573014, 8 pages
Research Article
A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory
School of Management, Xi'an University of Architecture and Technology, Xi'an Shanxi 710055, China
Received 8 January 2013; Accepted 28 January 2013
Academic Editors: D. Choudhury and Z. Guan
Copyright © 2013 Xiao-ping Bai and Xi-wei Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be weighed to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index; uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer a valuable reference for risk computing of building construction projects.
1. Introduction
The evaluation decision among building construction schemes is a complex multiobjective and multifactor problem; selecting a reasonable goal structure system for the evaluation and choosing the optimal scheme are very important in the building construction decision process [1].
Until now, many references have studied the optimization decision problem of building construction schemes, and many concepts about it have been established. Some of these research results aim only at small, specialized fields; for example, some researchers study the pit bracing construction scheme decision problem [2]. The Analytic Hierarchy Process (AHP) is a common method; however, its conclusions are not strongly convincing because it lacks quantitative data analysis [3-5]. In [6], the authors integrated the value engineering principle with technical and economic factors to evaluate and decide among construction schemes; the advantage is that more of the evaluation factors included in construction schemes are considered, but the selection of evaluation values is fuzzy [6]. In addition, grey correlation [7], minimum variance [8], fuzzy decision, projection pursuit, and other methods are also used in the decision-making of project schemes [9, 10].
The optimization decision-making evaluation of construction schemes is a multiobjective process, and many targets should be analyzed. Some references select cost, progress, quality, and reliability as evaluation targets; for example, [11, 12] present an analysis of time-cost-quality tradeoff optimization in construction project management. On the whole, however, there is a shortage of systematic, deep discussion, and qualitative study is dominant.
The reliability method is rarely used in the decision making of construction schemes; it is usually only briefly mentioned alongside the three elements of the project in management references. In [13], the reliability method is applied to the evaluation of the construction procedure.
In multiobjective optimization decision-making evaluation of projects, the relative importance of each evaluation index usually should be considered. The most direct and simple way of expressing the importance of each evaluation target is to give each target a relevant weight. Entropy is a very suitable criterion for evaluating different decision-making processes, and applying the entropy principle to determine the weights of evaluation indexes is both scientific and accurate. In 1991, two Chinese scholars, GU Changyao and QIU Wanhua, first defined complex entropy and applied it in decision analysis. In 1994, QIU Wanhua also presented a group decision-making complex entropy model [14].
In [15], the authors presented the evolution of concepts, an overview of research and applications pertaining to reliability in construction production, and the use of reserves, robust itineraries, and contingency of time and cost; they also describe areas of management advisory systems in relation to the cycle of risk analysis [15].
In [16], a biobjective genetic algorithm was employed to solve a multiperiod network optimization problem, and a numerical example shows that optimal coordination saves more than 50% of the waste in system costs, compared to the worst-case scenario [16].
Making use of many existing research results, this paper integrates engineering economics, risk and reliability theories, and information entropy theory to present a set of detailed engineering management decision methods for building construction projects, combined with a concrete example. The presented methods and steps can offer a reference for engineering management decisions on building construction projects.
On the basis of summarizing and absorbing existing references, this paper selects engineering cost, progress, quality, and safety as the first-order criterion indexes, as shown in Figure 1. For every first-order criterion index, a further extended analysis and calculation is carried out.
2. Calculating Cost of Building Construction Schemes
Combined with practical engineering experience, the building construction cost of an engineering project includes direct cost and indirect cost. The direct cost includes direct labor cost, direct material cost, direct mechanical cost, and direct measure cost; the indirect cost includes the building construction stipulated expense and the enterprise administration expense. The detailed composition of building construction cost is shown in Table 1.
The total building construction cost is calculated by summing these components (formula (1)).
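Written out with hypothetical symbols (the paper's own symbols are not reproduced above), formula (1) is a plain sum of the components in Table 1:
$$C_{\text{total}} = \underbrace{C_{\text{labor}} + C_{\text{material}} + C_{\text{mech}} + C_{\text{measure}}}_{\text{direct cost}} + C_{\text{indirect}}.$$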
3. Calculating the Progress Score Value of Building Construction Schemes Combined with Reliability Theory
Combined with practical engineering experience, this paper divides the whole building construction project into 10 first-order progress segments; moreover, each first-order progress segment is divided into detailed second-order progress segments, as shown in Table 2. The progress score value of every second-order progress segment can be given directly by domain experts.
The total progress score value is calculated by summing the score values of the first-order progress segments (formula (2)).
To calculate each first-order score, the authors make use of reliability theory. The progress relations of the various second-order progress segments within the goods transportation segment are parallel, so its score is calculated by the parallel-structure formula (3). For every other first-order segment, the progress relations of the second-order segments are in series, so its score is calculated by the series-structure formula (4), where the product runs over the second-order progress segments within that first-order segment.
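The parallel and series aggregations described above are the standard reliability block formulas. A minimal Python sketch, assuming each second-order score is a number in [0, 1] (the paper's exact formulas (3)-(4) are not reproduced above):

```python
from math import prod

def series_score(scores):
    # Series structure: the segment performs only if every
    # sub-segment does, so the sub-scores multiply.
    return prod(scores)

def parallel_score(scores):
    # Parallel structure: the segment fails only if every
    # sub-segment fails.
    return 1 - prod(1 - s for s in scores)

print(series_score([0.95, 0.98, 0.97]))  # ~0.903
print(parallel_score([0.90, 0.85]))      # 0.985
```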
4. Calculating Quality Score Value of Building Construction Schemes Combined with Reliability Theory
The authors divide the whole building construction project into 7 first-order quality-influence factor segments. Every first-order segment is divided into detailed second-order segments, as shown in Table 3.
For calculating the quality score value of every second-order quality-influence factor segment, this paper divides them into two types. One type can be calculated by related reliability methods, including collecting failure data, putting forward hypotheses from the frequency histogram, estimating parameters, and testing the hypotheses; the other type can be calculated by the expert evaluation method.
Combined with practical engineering knowledge, the quality relations of the various second-order quality segments included in every first-order quality segment are in series, so the quality score value of every first-order quality segment is calculated by the series-structure formula (5), where the product runs over the second-order quality segments within that first-order segment.
The total quality score value can be calculated by formula (6).
5. Calculating Safety Score Value of Building Construction Schemes
Factors affecting building construction safety mainly include direct factors and an indirect factor. The direct factors are the human factor, the matter factor, and the environment factor; the indirect factor is the management factor, which is driven by the three direct factors. This paper further analyzes the three direct factors; the detailed composition of the building construction safety influence factors is shown in Table 4.
In this paper, the safety score value is calculated by formula (7) from three indexes: one expresses the possibility risk degree caused by unsafe factors within a safety factor, one expresses the probability risk degree caused by the unsafe factor, and one expresses the risk degree of the consequences after an accident. The values of the three indexes can be obtained from experts according to Table 5.
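The three indexes described here match the classic LEC risk-scoring scheme (likelihood, exposure/probability, consequence). Assuming formula (7) follows that scheme, which is an assumption since the formula itself is not reproduced above, the risk score of a safety factor is the product
$$D = L \times E \times C,$$
with a larger $D$ indicating higher risk.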
The total safety score value can be calculated by formula (8).
6. Case Studies and Synthesis Computational Method Based on Integrated Information Entropy with Reliability Theory
6.1. Case Analysis
Take a building construction engineering project, for example: it is located in the east section of the third ring road on the outskirts of a city and has a convenient traffic environment. There are residential buildings on the east, west, and north sides of the project; a greenbelt park is located on the south side. The total land area is 15 acres, the plot ratio is 2.1, and the project is planned to be completed in one stage. Building construction will begin on March 1, 2013; the planned construction period is 12 months.
Construction scheme 1 is described as follows. The expected period of engineering construction is 12 months. The monthly construction completion rates of this scheme are, respectively, 8%, 10%, 11%, 10%, 8%, 8%, 10%, 11%, 8%, 6%, 6%, and 4%. The monthly construction progress in winter and summer is slower than in other months because of the effects of the natural environment.
In construction scheme 2, the expected period of engineering construction is 11 months. The monthly construction completion rates are, respectively, 9%, 11%, 11%, 11%, 9%, 9%, 12%, 11%, 8%, 8%, and 7%.
In construction scheme 3, the expected period of engineering construction is 11 months. The monthly construction completion rates are, respectively, 9%, 10%, 11%, 10%, 9%, 8%, 10%, 10%, 8%, 7%, 6%, and 2%.
In construction scheme 4, the expected period of engineering construction is 12 months. The monthly construction completion rates are, respectively, 7%, 9%, 9%, 9%, 8%, 8%, 10%, 10%, 9%, 8%, 7%, and 6%.
Using Table 1 and formula (1) to calculate the cost of each of the 4 construction schemes, the results are shown in Table 6.
Using Table 2 and formulas (2), (3), and (4) to calculate the progress score values of the 4 building construction schemes, the results are shown in Table 7.
Using Table 3 and formulas (5) and (6) to calculate the quality score values of the 4 building construction schemes, the results are shown in Table 8.
Using Tables 4 and 5 and formulas (7) and (8) to calculate the safety score values of the 4 building construction schemes, the results are shown in Table 9.
Following the detailed calculating methods and steps above, the total values of the 4 indexes (cost, progress, quality, and safety) for the 4 building construction schemes are obtained, as shown in Table 10.
6.2. Detailed Computing Steps of Entropy Weight
Regarding a multiobjective decision-making problem with $m$ selected schemes and $n$ evaluation indexes, the detailed computing steps of the entropy weight method are as follows.
(1) Establish the evaluation index matrix containing each evaluation index and the corresponding evaluation value (formula (9)).
(2) Standardize the evaluation index matrix. For an index that is “the bigger, the better,” the standardized value is calculated by formula (10); for an index that is “the smaller, the better,” by formula (11). In this way the standardized decision-making matrix is obtained.
(3) Calculate the entropy value of each evaluation index (formula (12)) and the corresponding entropy weight (formula (13)).
(4) Calculate the comprehensive membership degree of each evaluation object (formula (14)).
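Formulas (9)-(14) are not reproduced above, but the steps match the textbook entropy-weight recipe: min-max standardization, column-wise entropy, weights proportional to $1 - e_j$, and a weighted composite score. A minimal Python sketch under that assumption:

```python
import numpy as np

def entropy_weights(X, benefit):
    """X: m x n matrix (m schemes, n indexes); benefit[j] is True when
    index j is 'the bigger, the better'. Returns (weights, composite scores)."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    R = np.empty_like(X)
    for j in range(n):                       # min-max standardization
        lo, hi = X[:, j].min(), X[:, j].max()
        R[:, j] = (X[:, j] - lo) / (hi - lo) if benefit[j] else (hi - X[:, j]) / (hi - lo)
    P = (R + 1e-12) / (R + 1e-12).sum(axis=0)      # column distributions; avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each index
    w = (1 - e) / (1 - e).sum()                    # entropy weights
    return w, R @ w                                # composite score of each scheme
```

Calling `entropy_weights` on the 4x4 matrix of Table 10 with `benefit = [False, True, True, False]` (cost and safety smaller-better, progress and quality bigger-better) reproduces the paper's workflow.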
6.3. Calculations Combined with Case
Standardize the evaluation index matrix composed of the values in Table 10. Cost and safety are “the smaller, the better” indexes, calculated using formula (11); progress and quality are “the bigger, the better” indexes, calculated using formula (10). The standardization results are shown in Table 11.
Using formulas (12) and (13) to calculate the entropy value and entropy weight of each index, the results are shown in Table 12.
The comprehensive membership degrees of the evaluation schemes, calculated by formula (14), are as follows:
7. Conclusions
Based on the above calculation results, the 4 building construction project schemes are ranked in the following order: Scheme 3 > Scheme 2 > Scheme 1 > Scheme 4. Generally, the optimization decision among building construction schemes is a multiobjective optimization decision-making problem affected by many factors. This paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, carries out further extended analyses of these indexes, and integrates engineering economics, risk and reliability theories, and information entropy theory to present a new evaluation optimization method for building construction projects based on integrating information entropy with reliability theory, combined with a case study. The presented detailed methods and steps can offer a reference for engineering management decisions on building construction projects.
This work was supported in part by NSFC (59874019), Shanxi Province Education Department Research Project (12JK0803), Shanxi Province Key Discipline Construction Special Fund Subsidized Project
(E08001), Shanxi Province Higher Education Philosophical Social Science Key Research Base Construction Special Fund Subsidized Project (DA08046), and Shanxi Province Higher Education Philosophical
Social Science Characteristic Discipline Construction Special Fund Subsidized Project (E08003, E08005).
1. Y. Xu, Y. Wang, and B. Yao, “Construction project stakeholder collaboration group decision making based on entropy theory,” Chinese Journal of Management Science, vol. 16, pp. 117–121, 2008.
2. Y. Feng and K. Shi, “Optimum decision-making of deep foundation pit construction project based on the least variance priority method,” Building Science, vol. 25, no. 1, pp. 12–15, 2009.
3. B. Tian, Management Science in Engineering Project, Southwest Jiaotong University Press, 2009.
4. Y. Chen and X. Peng, “Method of analytical hierarchy process making for decision on construction scheme,” Journal of Zhengzhou University of Light Industry (Natural Science), vol. 22, pp.
198–200, 2007.
5. J. Chen, “On construction scheme selected based on value engineering,” Shanxi Architecture, vol. 12, no. 36, p. 202, 2010.
6. S. Gao and H. Du, “Study on comprehensive evaluation method about engineering construction scheme based on Grey Correlation Degree,” Coal Mine Engineering, no. 1, pp. 37–39, 2003.
7. Y. Feng, “Optimum decision-making of construction project based on the least variance priority method,” Mathematics in Practice and Theory, vol. 36, no. 3, pp. 171–173, 2006.
8. W. Qiu, Management Decision and Application Entropy, Mechanical Industry Press, 2001.
9. J. Wang and E. Liu, “Analysis of time-cost-quality tradeoff optimization in construction project management,” Journal of Systems Engineering, vol. 19, no. 2, pp. 148–150, 2004.
10. J. Touboul, “Projection pursuit through relative entropy minimization,” Communications in Statistics, vol. 40, no. 6, pp. 854–878, 2011.
11. Q. Liu and Q. Yang, “The control of cost, duration, ‘quality and safety in project management of construction’,” Journal of Ningxia Institute of Technology (Natural Science), vol. 9, no. 1, pp.
31–33, 1997.
12. N. Lu, Y. Shi, X. Gao, W. Li, and X. Liao, “Calculation method of construction working procedure,” Journal of Xi'an University of Architecture & Technology (Natural Science Edition), vol. 38, no.
3, pp. 311–315, 2006.
13. W. Qiu, “An entropy model on group decision system,” Control and Decision, vol. 10, no. 1, pp. 51–53, 1995.
14. L. Ma and Q. Gao, “Analysis of organizational structure for human resource management department based on structure-entropy model,” Industrial Engineering Journal, no. 4, pp. 86–90, 2010.
15. Z. Turskis, M. Gajzler, and A. Dziadosz, “Reliability, risk management, and contingency of construction processes and projects,” Journal of Civil Engineering and Management, vol. 18, no. 2, pp.
290–298, 2012.
16. J. Oh, H. Kim, and D. Park, “Bi-objective network optimization for spatial and temporal coordination of multiple highway construction projects,” KSCE Journal of Civil Engineering, vol. 15, no. 8,
pp. 1449–1455, 2011.
Archimedean Principle
From ProofWiki
Let $x$ be a real number.
Then there exists a natural number greater than $x$.
$\forall x \in \R: \exists n \in \N: n > x$
That is, the set of natural numbers is unbounded above.
Let $x \in \R$.
Let $S$ be the set of all natural numbers less than or equal to $x$:
$S = \left\{{a \in \N: a \le x}\right\}$
It is possible that $S = \varnothing$.
Suppose this is the case and, aiming for a contradiction, suppose also that $0 \le x$.
Then by definition, $0 \in S$.
But $S = \varnothing$, so this is a contradiction.
From the Trichotomy Law for Real Numbers it follows that $0 > x$.
Thus we have the element $0 \in \N$ such that $0 > x$.
Now suppose $S \ne \varnothing$.
Then $S$ is bounded above (by $x$, for example).
Thus by the Continuum Property of $\R$, $S$ has a supremum in $\R$.
Let $s = \sup \left({S}\right)$.
Now consider the number $s - 1$.
Since $s$ is the supremum of $S$, $s - 1$ cannot be an upper bound of $S$ by definition.
So $\exists m \in S: m > s - 1 \implies m + 1 > s$.
But as $m \in \N$, it follows that $m + 1 \in \N$.
Because $m + 1 > s$, it follows that $m + 1 \notin S$ and so $m + 1 > x$.
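For readers who like machine-checked statements: this theorem exists in Lean 4's Mathlib under the name `exists_nat_gt`. A minimal sketch (assuming a recent Mathlib, where the blanket `import Mathlib` pulls in the Archimedean instance for $\R$):

```lean
import Mathlib

-- Mathlib's `exists_nat_gt` is exactly the Archimedean Principle:
-- in an Archimedean ordered semiring (ℝ in particular), some
-- natural number exceeds any given element.
example (x : ℝ) : ∃ n : ℕ, (n : ℝ) > x :=
  exists_nat_gt x
```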
Also known as
This result is also known as the Archimedean law, or the Archimedean Property of Natural Numbers, or the axiom of Archimedes.
Also see
Not to be confused with the better-known (outside the field of mathematics) Archimedes' Principle.
Source of Name
This entry was named for Archimedes.
It appears as Axiom V of Archimedes' On The Sphere and the Cylinder.
The name axiom of Archimedes was given by Otto Stolz in his 1882 work: Zur Geometrie der Alten, insbesondere über ein Axiom des Archimedes.
Mental Math
The only way to excel at mental math is to constantly practice it. Math Blaster has a large collection of worksheets and fun math activities that parents and teachers can use to help kids practice
their mental math skills.
Time to Read is an activity that combines healthy reading habits with mathematics! In fact, each student gets a cool bookmark in which (s)he notes down the amount of time (s)he has spent reading each
day. See more.
Counting change is almost an everyday activity, but it requires a fair amount of addition and subtraction. In Making Changes, students learn that different coin combinations can add up to the same
amount of money. See more.
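(For example, one quarter, two dimes plus a nickel, and five nickels are three different combinations that each make 25¢.)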
Have you ever tried playing baseball during math class? Well, there’s a first time for everything! Call the class’s best players and start the game. See more.
This is a fun subtraction game based on luck. Players roll dice and subtract the sum of the numbers from 100. The first player to reach 1 or 0 wins the game. See more.
Multiple Mania is a simple printable activity that teachers can use to revise the multiplication tables of numbers up to 5. See more.
Students race to create their own robots in this fun printable multiplication game. Students can only add a body part to their robot when they win a round. See more.
Bug Capture is an addition game based on luck. Players roll two dice and add up the total, trying to get the right sum in each round. This game is perfect for practicing mental math. See more.
Target 50 is a challenging math game that can be used to help kids develop their problem solving skills, addition and subtraction skills and mental math skills. See more.
In Chain of Clues, players must read the clues on each other’s shirts and figure out the answer to each one, collecting signatures as they solve the clues. There’s a lot of addition and subtraction involved, making this game great for practicing mental math. See more.
Even or Odd is a fun printable card game, great for practicing mental math, addition and/or subtraction. See more.
Mental Math
Mental math is the solving of mathematical problems using nothing but the human brain. A person who is good at mental math can solve simple math problems more quickly without paper, pencils or calculators than they could while using a calculating device! There are many reasons parents should introduce their children to mental math.
The Many Benefits of Learning Mental Math
Since mental math relies on nothing but the human brain, it is a great way of keeping the brain young and active. This has many benefits as one grows older. A person who learns to use mental math
effectively is very unlikely to stop using it, as it is extremely convenient in everyday situations. People who are good at mental math find it very satisfying to solve mathematical problems with
great ease and speed. This fosters a love for mathematics that is beneficial throughout a child’s education. Mental math improves one’s concentration, and the benefits of this are seen across all
subjects! Further, it improves one’s problem solving and reasoning abilities. Initially, learning to do mental math may seem difficult, but what’s a little hard work when it pays off so well?
Mental Math Secrets and Strategies
The best thing about mental math is that anyone can learn it! Mental math tricks are what all practitioners of mental math rely on. Luckily, many mental math secrets are easily available online! Some
mental math tricks are very simple, and can be put to practice immediately. More advanced mental math tricks require greater practice to get good at. The only way to excel at mental math is to
constantly practice these tricks and secrets. After all, even magic requires practice.
Practicing Mental Math
People who are really serious about mental math recommend practicing it for a fixed amount of time everyday. Parents can give their children easily available math worksheets to practice their mental
math skills. There are no special problems one must use to practice mental math. The mental math secrets that one learns can be put to use on any math problem anywhere.
How many mls is 18.7 cl?
187.0 millilitres
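Since 1 cl = 10 ml, the conversion is 18.7 × 10 = 187, that is, 187.0 millilitres.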
Posts by
Total # Posts: 651
Statistics Emergency please
Thanks for your help I really appreciate it.
Statistics Emergency please
People end up tossing 12% of what they buy at the grocery store (Reader's Digest, March 2009). Assume this is the true population proportion and that you plan to take a sample survey of 540 grocery
shoppers to further investigate their behavior. a)What is the probability t...
Statistics Emergency please
People end up tossing 12% of what they buy at the grocery store (Reader's Digest, March 2009). Assume this is the true population proportion and that you plan to take a sample survey of 540 grocery
shoppers to further investigate their behavior. a.Show the sampling distrib...
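For reference, the sampling-distribution part works out as follows: with $p = 0.12$ and $n = 540$, the sample proportion has mean $0.12$ and standard error
$$\sigma_{\bar p} = \sqrt{\frac{0.12 \times 0.88}{540}} \approx 0.0140,$$
and by the central limit theorem its distribution is approximately normal.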
Math emergency I have 10 minutes to submit it.
Evaluate the definite integral. (e^x)/(7 + e^x)dx between (0,2)
Math emergency I have 10 minutes to submit it.
Evaluate the definite integral. 9x e^(x^2)dx between (0,3)
Math emergency I have 10 minutes to submit it.
Evaluate the definite integral. x sqrt(13 x^2 + 36)dx between (0,1)
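For reference, each of the three integrals above yields to a single substitution:
$$\int_0^2 \frac{e^x}{7+e^x}\,dx = \ln\frac{7+e^2}{8} \approx 0.587, \qquad \int_0^3 9x\,e^{x^2}\,dx = \frac{9}{2}\left(e^9-1\right),$$
$$\int_0^1 x\sqrt{13x^2+36}\,dx = \frac{1}{39}\Big[(13x^2+36)^{3/2}\Big]_0^1 = \frac{127}{39} \approx 3.256.$$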
The College Board reported the following mean scores for the three parts of the Scholastic Aptitude Test (SAT) (The World Almanac, 2009): Assume that the population standard deviation on each part of
the test is σ = 100. a. What is the probability a sample of 90 test takers will...
People end up tossing 12% of what they buy at the grocery store (Reader's Digest, March 2009). Assume this is the true population proportion and that you plan to take a sample survey of 540 grocery
shoppers to further investigate their behavior. a.Show the sampling distrib...
People end up tossing 12% of what they buy at the grocery store (Reader's Digest, March 2009). Assume this is the true population proportion and that you plan to take a sample survey of 540 grocery
shoppers to further investigate their behavior. a.Show the sampling distrib...
The College Board reported the following mean scores for the three parts of the Scholastic Aptitude Test (SAT) (The World Almanac, 2009): Assume that the population standard deviation on each part of
the test is σ = 100. a. What is the probability a sample of 90 test takers will...
Grade 12 Chemistry
When 50.0 mL of 1.0 mol/L hydrochloric acid is neutralized completely by 75.0 mL of 1.0 mol/L sodium hydroxide in a coffee-cup calorimeter, the temperature of the total solution changes from 20.2°C to 25.6°C. Determine the quantity of energy transferred, q, and state whether the reaction is exothermic or endothermic.
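A worked sketch under the usual textbook assumptions (solution density 1.00 g/mL, specific heat of water 4.18 J/g·°C): the total mass is 50.0 + 75.0 = 125 g and ΔT = 25.6 − 20.2 = 5.4 °C, so q = (125)(4.18)(5.4) ≈ 2.8 kJ is absorbed by the solution; since the temperature rose, the neutralization is exothermic.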
People end up tossing 12% of what they buy at the grocery store (Reader's Digest, March 2009). Assume this is the true population proportion and that you plan to take a sample survey of 540 grocery
shoppers to further investigate their behavior. a.Show the sampling distrib...
The College Board reported the following mean scores for the three parts of the Scholastic Aptitude Test (SAT) (The World Almanac, 2009): Assume that the population standard deviation on each part of
the test is σ = 100. a. What is the probability a sample of 90 test takers will...
Suppose a random sample of size 50 is selected from a population with σ = 10. Find the value of the standard error of the mean in each of the following cases (use the finite population correction
factor if appropriate). a. The population size is infinite (to 2 decimals). ...
The American Association of Individual Investors (AAII) polls its subscribers on a weekly basis to determine the number who are bullish, bearish, or neutral on the short-term prospects for the stock
market. Their findings for the week ending March 2, 2006, are consistent with ...
It still says it is wrong. =/
Find the average value of the function f over the interval [-1, 2]. f(x)=1-x^2
The thing is that I did get 7/8(log9-log1) I got 7/8(log8-log1). but thanks for your help.
Find the average value of the function f over the indicated interval [0, 8]. f(x) = 7/(x + 1) I keep getting the wrong answer.
Find the average value of the function f over the interval [-1, 2]. f(x) = 1-x^2
Find the average value of the function f over the interval [-1, 2]. f(x) = 1-x^2
Find the average value of the function f over the indicated interval [0, 8]. f(x) = 7/(x + 1) I keep getting the wrong answer.
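For reference: $\int_0^8 \frac{7}{x+1}\,dx = 7\ln 9$, so the average value is $\frac{7}{8}\ln 9 \approx 1.92$; the $\log 1$ term vanishes since $\ln 1 = 0$, and the logs here must be natural.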
People end up tossing 12% of what they buy at the grocery store (Reader's Digest, March 2009). Assume this is the true population proportion and that you plan to take a sample survey of 540 grocery
shoppers to further investigate their behavior. a. Show the sampling distri...
Assume that the population proportion is .55. Compute the standard error of the proportion for sample sizes of 100, 200, 500, and 1000 (to 4 decimals). a. 100? b. 200? c. 500? d. 1000?
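A quick check of all four values (the formula is $\sqrt{p(1-p)/n}$):

```python
from math import sqrt

p = 0.55
for n in (100, 200, 500, 1000):
    print(n, round(sqrt(p * (1 - p) / n), 4))
# -> 100 0.0497, 200 0.0352, 500 0.0222, 1000 0.0157
```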
The College Board reported the following mean scores for the three parts of the Scholastic Aptitude Test (SAT) (The World Almanac, 2009): Assume that the population standard deviation on each part of
the test is σ = 100. a. What is the probability a sample of 90 test takers will...
Suppose a random sample of size 50 is selected from a population with σ = 10. Find the value of the standard error of the mean in each of the following cases (use the finite population correction
factor if appropriate). a. The population size is infinite (to 2 decimals). ...
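For part (a): with an infinite population, $\sigma_{\bar x} = \sigma/\sqrt{n} = 10/\sqrt{50} \approx 1.41$. For the finite cases (whose population sizes are not repeated above), multiply by the correction factor $\sqrt{(N-n)/(N-1)}$, which differs noticeably from 1 only when $n$ is a sizable fraction of $N$.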
Many drugs used to treat cancer are expensive. BusinessWeek reported on the cost per treatment of Herceptin, a drug used to treat breast cancer (BusinessWeek, January 30, 2006). Typical treatment
costs (in dollars) for Herceptin are provided by a simple random sample of 10 pat...
A car moves along a straight road in such a way that its velocity (in feet per second) at any time t (in seconds) is given by v(t) = 3t sqrt(64−t^2) (0 ≤ t ≤ 8). Find the distance traveled by the car
in the 8 sec from t = 0 to t = 8.
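Worked: with $u = 64 - t^2$, $\int_0^8 3t\sqrt{64-t^2}\,dt = \big[-(64-t^2)^{3/2}\big]_0^8 = 64^{3/2} = 512$ feet.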
Based on a preliminary report by a geological survey team, it is estimated that a newly discovered oil field can be expected to produce oil at the rate of R(t) = 300t^2/(t^3 + 32)+5 (0 ≤ t ≤ 20)
thousand barrels/year, t years after production begins. Find the amoun...
Find the average value of the function f over the interval [-1, 2]. f(x) = 1 - x^2
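Worked: the average value is $\frac{1}{3}\int_{-1}^{2}(1-x^2)\,dx = \frac{1}{3}\Big[x - \frac{x^3}{3}\Big]_{-1}^{2} = \frac{1}{3}\left(-\frac{2}{3} - \left(-\frac{2}{3}\right)\right) = 0$.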
Find the average value of the function f over the indicated interval [0, 8]. f(x) = 7/(x + 1)
Find the average value of the function f over the interval [0, 8]. f(x) = 5e^-x
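For reference: the average value is $\frac{1}{8}\int_0^8 5e^{-x}\,dx = \frac{5}{8}\left(1 - e^{-8}\right) \approx 0.625$.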
Find the indefinite integral. (e^(9 x))/(2 + e^(9 x)) dx
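For reference: $\int \frac{e^{9x}}{2+e^{9x}}\,dx = \frac{1}{9}\ln\left(2+e^{9x}\right) + C$.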
Evaluate the following definite integral. 5(1 + 1/u + 1/u^2)du between, (5,7)
Annual sales (in millions of units) of pocket computers are expected to grow in accordance with the following function where t is measured in years, with t = 0 corresponding to 1997. f(t) = 0.18t^2 +
0.16t + 2.64 How many pocket computers will be sold over the 2 year period be...
Evaluate the following definite integral. 5(1 + 1/u + 1/u^2)du between (5,7)
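For reference: $\int_5^7 5\left(1+\frac{1}{u}+\frac{1}{u^2}\right)du = 5\Big[u + \ln u - \frac{1}{u}\Big]_5^7 = 10 + 5\ln\frac{7}{5} + \frac{2}{7} \approx 11.97$.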
Annual sales (in millions of units) of pocket computers are expected to grow in accordance with the following function where t is measured in years, with t = 0 corresponding to 1997. f(t) = 0.18t^2 +
0.16t + 2.64 (0<=t<=2) How many pocket computers will be sold over the ...
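Worked: $\int_0^2 (0.18t^2 + 0.16t + 2.64)\,dt = \big[0.06t^3 + 0.08t^2 + 2.64t\big]_0^2 = 0.48 + 0.32 + 5.28 = 6.08$ million units.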
The management of Ditton Industries has determined that the daily marginal revenue function associated with selling x units of their deluxe toaster ovens is given by the following where R '(x) is
measured in dollars/unit. R'(x)= -0.1x + 40 (a) Find the daily total reve...
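A sketch for part (a), assuming the usual convention $R(0) = 0$: integrating the marginal revenue gives $R(x) = \int_0^x (-0.1s + 40)\,ds = 40x - 0.05x^2$ dollars.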
Evaluate the following definite integral. sqrt(11 x)(sqrt(x) + sqrt(11)) dx between(0,1)
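For reference: $\sqrt{11x}\left(\sqrt{x}+\sqrt{11}\right) = \sqrt{11}\,x + 11\sqrt{x}$, so the integral equals $\frac{\sqrt{11}}{2} + \frac{22}{3} \approx 8.99$.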
statistics; emergency please help me
Assume a binomial probability distribution has p = .60 and n = 200. a. What is the probability of 100 to 110 successes (to 4 decimals)? b. What is the probability of 130 or more successes (to 4
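A sketch of the normal approximation these questions expect (whether to apply a continuity correction depends on the course, so both are shown):

```python
from math import sqrt
from scipy.stats import norm

n, p = 200, 0.60
mu, sigma = n * p, sqrt(n * p * (1 - p))  # mu = 120, sigma ~ 6.928

# P(100 <= X <= 110), without and with continuity correction:
print(norm.cdf(110, mu, sigma) - norm.cdf(100, mu, sigma))     # ~ 0.073
print(norm.cdf(110.5, mu, sigma) - norm.cdf(99.5, mu, sigma))  # ~ 0.084
# P(X >= 130):
print(1 - norm.cdf(130, mu, sigma))                            # ~ 0.074
```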
statistics; emergency please help me
Given that z is a standard normal random variable, compute the following probabilities (to 4 decimals). a. P(z -1.5) b. P(z -2.5)
If there are 5 cats and 4 dogs how would I create a ratio bar? Help!
Find the indefinite integral. (e^(4 x) + e^(-5 x)) dx
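For reference: $\int\left(e^{4x}+e^{-5x}\right)dx = \frac{1}{4}e^{4x} - \frac{1}{5}e^{-5x} + C$.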
Thanks Reiny for all your help, I really appreciate it.
Find the indefinite integral. x^2(5 x^3 + 9)^3 dx
The average student enrolled in the 20-wk Court Reporting I course at the American Institute of Court Reporting progresses according to the rule below where 0 ≤ t ≤ 20, and N'(t) measures the rate of
change in the number of words/minute dictation the student takes in machine s...
Find the function f, given that the slope of the tangent line at any point (x,f(x)) is f '(x) and that the graph of f passes through the given point. f '(x)=6(2x-7)^5 at (4, 3/2)
Carlota Music Company estimates that the marginal cost of manufacturing its Professional Services guitars in dollars/month by the following. This model assumes that the level of production is x
guitars/month. C '(x) = 0.007x + 130 The fixed costs incurred by Carlota are $9...
Collina s Italian Café in Houston, Texas, advertises that carryout orders take about 25 minutes (Collina s website, February 27, 2008). Assume that the time required for a carryout order to be ready
for customer pickup has an exponential distribution with a me...
statistics; emergency please help me
This is what I got too but the web where I do my homework says it is wrong.
I agree it quadruples.
Thanks for the help.
This is what I got too, but it says that my answer is wrong.
The percentage of a certain brand of computer chips that will fail after t years of use is estimated to be P(t) = 100(1 − e^(−0.12t)). What percentage of this brand of computer chips are expected to be
usable after 3 years? (Round your answer to one decimal place.)
ok I got now. Thank you so much for your help, I appreciate it.
I got t=70 and it still says it is wrong.
This is what I got and it says it is wrong. Can you tell me what I'm doing wrong please? Thanks Q(t) = 200(.5)^(.0704t) Q '(t) = (200/14.2) (.5)^(t/14.2) so when t = 21.5 Q ' (t) = (200/14.2) (.5)^
(21.5/14.2) = 4.931 g/day
Phosphorus-32 (P-32) has a half-life of 14.2 days. If 200 g of this substance are present initially, find the amount Q(t) present after t days. (Round your growth constant to four decimal places.)
How fast is the P-32 decaying when t = 21.5? (Round your answer to three decimal...
The length (in centimeters) of a typical Pacific halibut t years old is approximately f(t) = 190(1 − 0.955e^(−0.25t)). (b) How fast is the length of a typical 9-year-old Pacific halibut increasing? cm/
yr (c) What is the maximum length a typical Pacific halibut can att...
Given that z is a standard normal random variable, compute the following probabilities (to 4 decimals). P(z -1.0) P(z -1.0) P(z -1.5) P(z -2.5) P(-3 < z 0)
The growth rate of Escherichia coli, a common bacterium found in the human intestine, is proportional to its size. Under ideal laboratory conditions, when this bacterium is grown in a nutrient broth
medium, the number of cells in a culture doubles approximately every 30 min. (...
A radioactive substance decays according to the formula Q(t) = Q0 e^(−kt) where Q(t) denotes the amount of the substance present at time t (measured in years), Q0 denotes the amount of the substance
present initially, and k (a positive constant) is the decay constant. (a) Fi...
During a flu epidemic, the number of children in the Woodbridge Community School System who contracted influenza after t days was given by the following. Q(t) = 7000/(1 + 249e^(−0.6t)) (a) How many children were stricken by the flu after the first day? (b) How many children had the flu after 10 days?
Assume a binomial probability distribution has p = .60 and n = 200. a)What are the mean and standard deviation (to 2 decimals)? b)Why can the normal probability distribution be used to approximate
this binomial distribution? c)What is the probability of 100 to 110 successes (t...
Skeletal remains had lost 84% of the C-14 they originally contained. Determine the approximate age of the bones. (Assume the half life of carbon-14 is 5730 years. Round your answer to the nearest
whole number.)
Global Studies
I was also thinking c?
Global Studies
Which of the following minerals generates the most income for British Columbia? A. lead B. zinc C. Gold D. Copper <<< One reason many British Columbians feel a link to the countries of the Pacific
Rim is because more than 15% have ____ ancestors. a. Hawaiian <<<...
How can I solve this by using the elimination method? 5y+2x=5x+1 3x-2y=3+3y
please help someone fast !!!
Describe how the Triassic period was a transitional time period during the Mesozoic era. Explain why the boundary between the Triassic and Jurassic periods is similar to the boundary between the
Permian and Triassic periods.
2. Describe the two most important events in the history of animal life that occurred at the beginning and at the end of the Paleozoic era. List and briefly outline the six different periods into
which the Paleozoic era is divided. so i cannot find it in my notes, and im strug...
Write a unit rate 250 miles for 20 miles
write the equations of two different parabolas whose vertices are at (3,2).
how do you solve x+y=2?
Measuring the conductivity of an aqueous solution in which 0.0200 mol CH3COOH has been dissolved in 1.00 L of solution shows that 2.96 % of acetic acid molecules have ionized to CH3COO- ions and H3O+
ions. Calculate the equilibrium constant for ionization of acetic acid and c...
help desk - 2nd grade mode question
Tuesday, October 9, 2012
24 comments:
As far as I can tell, the "mode" is the least-used of all descriptive statistics. Many data sets don't even have a mode although they will have a min, max, median, and mean. But mode is the
easiest, of the statistics with a special name, to define; thus it is taught pointlessly to 7th graders around the land. And now 2nd graders?
The mode is 4, the value you have the most of--the one that's most common. It's a good term to know. Occasionally, you'll hear a warning that some data set is "bimodal," like a Bactrian camel,
and you'll be reminded to not assume that the "average" is "average."
Glen - you could sample a bi-modal distribution and end up with a set of data that has no mode. There might be no repeated values at all.
You could sample ANY distribution and have that happen. In fact, you are almost guaranteed to have no mode if you sample a continuous distribution with enough precision. In continuous cases, you
often group things into ranges (10-12, 12-14, 14-16, etc.) so you can get some repeat values.
Is this 2,2,3,3,3,3,4,4,4,4,4,5,5,6 OR 2,4,5,2,1?
That's my question.
It is 2, 2, 3, 3, 3, etc.
What you are looking at is a stripped-down bar graph, in which each X in the column over a numeral represents the number of times that numeral occurs. The stack of X's over the 4 is tallest.
Therefore, 4 is the most commonly occurring value, a.k.a., the mode.
Yes, Anonymous, it's the former. Each X is a data point. The stacks are bins into which you throw the data points for counting purposes, and the counts showing how many Xs are in each bin aren't
data points themselves, just part of the analysis. If one of the bins ends up with a taller stack of data points than any other bin, that bin is the mode. The count of how many Xs end up in the
mode bin isn't called anything that I know of. As long as it's the biggest, its bin becomes the mode.
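For what it's worth, the bin-counting described here is mechanical enough to script. A quick sketch in Python, using the data set quoted earlier in the thread (2,2,3,3,3,3,4,4,4,4,4,5,5,6):

    # Count the stack height over each bin and pick the tallest;
    # the data points are the X's from the worksheet's plot.
    from collections import Counter

    data = [2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 6]
    counts = Counter(data)              # bin -> stack height
    mode = max(counts, key=counts.get)  # the bin with the tallest stack
    print(counts)  # Counter({4: 5, 3: 4, 2: 2, 5: 2, 6: 1})
    print(mode)    # 4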
Thank you. That's what I thought, but the teacher said that the correct answer was 2. I spoke with her today again. She double checked the teacher's manual and the correct answer is 4.
gasstationwithoutpumps said...
I missed the point of this post entirely. That the mode was 4 was obvious. I thought it was being highlighted because they called the histogram a line graph, which it clearly is not.
SteveH said...
"She double checked the teacher's manual and the correct answer is 4."
This is the scary part. She had to check the manual. Teaching mode means that they can say that they are teaching statistics.
What curriculum is this from?
Cassandra Turner said...
According to most state standards, the image is of a line plot:
"A graphical display of a set of data where each separate piece of data is shown as a dot or mark above a number line."
Cassandra Turner said...
Mode is not included in the CCSSM at 2nd grade, so what will happen to this textbook? Will the teacher be instructed to skip the lesson? Will the textbook be updated, and then no one can afford
the new books? Will they just keep teaching mode because it's in their materials? Just sayin'
Cassandra Turner said...
For your elementary student memorizing enjoyment, there is a card/poster put out by a company in Texas that uses the visual:
The median graphic is:
Catherine Johnson said...
She double checked the teacher's manual and the correct answer is 4.
Well at least that's something.
I was thinking the manual had it wrong, too!
I'm guessing you were typing meDIan, not meDIAn, and lingered a little too long on the shift key. But maybe what it should really be is adEImn. ;-)
(And that's funny--the captcha I now have to type is "nmedian 4". The machine gods are watching.)
This is interesting, because if this is considered a "line plot" that definition does not match current use. Or at least, my current use and I assume also that of anyone who programs much in
Matlab at a minimum. A line plot is a plot of a line (or curve), while this is clearly a histogram.
Cassandra Turner said...
"Glen said...
I'm guessing you were typing meDIan, not meDIAn"
meDIan is correct.
The curse of the ipad and fat fingers.
And you can find these math cards here: Lone Star Learning Math Vocab Cards
Cassandra Turner said...
kcab- This definition of Line Plot seems to be from the (elementary) education world. Google "line plot" +images -> most hits are from math coaching or education sites. Google histogram, and
the hits are, well, less from the ed world.
Line plots are identified in the Common Core State Standards for Mathematics in grades 2, 4 & 5:
2.MD.9 Generate measurement data by measuring lengths of several objects to the nearest whole unit, or by making repeated measurements of the same object. Show the measurements by making a line
plot, where the horizontal scale is marked off in whole-number units
4.MD.4 Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented
in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.
5.MD2 Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Use operations on fractions for this grade to solve problems involving information presented
in line plots. For example, given different measurements of liquid in identical beakers, find the amount of liquid each beaker would contain if the total amount in all the beakers were
redistributed equally.
And the CCSSM Glossary defines a line plot as:
Line plot. A method of visually displaying a distribution of data values where
each data value is shown as a dot or mark above a number line. Also known as a dot plot. (Adapted from Wisconsin Department of Public Instruction, op. cit.)
Yes, I noticed, but I think that perhaps the ed world is making a mistake. Histogram is the accepted term for this type of graph. If schools are teaching terms like mode in the first place, they
might as well get the name of the type of graph correct.
histogram - see Wikipedia
Line plot - this page in old Matlab documentation is an example of typical usage: http://www.mathworks.com/help/matlab/creating_plots/line-plots-of-matrix-data.html
or, look at your chart types in Excel. (Excel doesn't use the term line plot, but does use line graph or line chart.)
Sigh, I just don't like things like this because they remind me of an awful early elementary teacher who told my daughter lies like 2-3 = 0.
While we're at it, the word is datum.
"This is the scary part. She had to check the manual"
Unfortunately there is more. She also said that next year she will accept both answers (2 and 4) as valid because "kids get very confused with this concept".
I didn't even know how to respond to that but looking back I should have said that if a concept is too difficult for a second grader to grasp then it should be taught when he/she is older instead
of teaching it wrong and accepting a wrong answer. So sad... :(
If a concept is too difficult for a teacher to grasp, she shouldn't be teaching that subject.
The book looks bad enough, but if the teacher doesn't even understand the math so poorly expressed, it's downhill from there.
Lsquared said...
I hate the term "line plot", because I get it mixed up with "line graph". I can't figure out why a line plot is called a line plot, since there aren't any lines in it, but oh well. I prefer
pictogram (just with very simple pictures), which, of course, is closely related to histogram (only with discrete values for the bins).
Coming in a little late here, but this is too timely and interesting for me to pass-up.
About two weeks ago I downloaded Ohio's 3rd grade math assessment test to give to my son as a check on how he's coming along at school. (BTW, we're not from Ohio, it just happened to be one of
the top hits from my search term.) One of the questions was this very one. I was following along while he worked on it, and when I saw this question my first thought was what a ridiculous
question. I couldn't remember mode myself! No one uses mode, except as mentioned already, in the use of bi-modal to make a distinction with normal or Gaussian, which unfortunately didn't occur to
me at the time.
So, I used it as an opportunity to teach him test-taking skills. I said I have no idea what mode is either, but we can figure it out. All these tests have to have a single right answer in order
for the computers to be able to grade them. We can figure out the right answer by ruling out all the answers that are not unique. Two is unique, because it is the smallest, but we already have a
word for smallest: minimum. Of 3, 4, and 5, which one is unique? 4 is both because it has the most and because it is in the middle. So, 4 is the right answer.
Then I told him, incidentally, I do know the word for the middle number is median; so, that tells us mode must mean the number that has the most.
Finally, I told him the important thing about taking these tests is to never panic and make a wild guess. If you aren't sure, always look at the answers and try and pick the one that is unique.
For good or for bad, test taking is an important skill in the 21st century.
Page View
Chambers, Ephraim, 1680 (ca.)-1740 / Cyclopædia, or, An universal dictionary of arts and sciences : containing the definitions of the terms, and accounts of the things signify'd thereby, in the
several arts, both liberal and mechanical, and the several sciences, human and divine : the figures, kinds, properties, productions, preparations, and uses, of things natural and artificial : the
rise, progress, and state of things ecclesiastical, civil, military, and commercial : with the several systems, sects, opinions, &c : among philosophers, divines, mathematicians, physicians,
antiquaries, criticks, &c : the whole intended as a course of antient and modern learning
Epacts - equinox, pp. 319-338
EPACTS, in Chronology, the Excesses of the Solar
Month above the Lunar Synodical Month; and of the Solar
Year above the Lunar Year, or 12 Synodical Months: Or
of several Solar Months, above as many Synodical Months;
and several Solar Years above as many Dozens of Synodical Months.
Whence, the Epacts are either Annual or Menstrual.
Menstrual EPACTS are the Excesses of the Civil, or Calendar Month, above the Lunar Month. See MONTH.
Suppose, e. gr. it were New Moon on the first Day of January: Since the Lunar Month is 29 Days 12 h 44' 3"; and the Month of January contains 31 Days: The Menstrual Epact is 1 Day 11 h 15' 57".
Annual EPACTS are the Excesses of the Solar Year above the Lunar. See YEAR.
Hence, as the Julian Year is 365 Days 6 Hours, and the Julian Lunar Year 354 Days 8 h 48' 38"; the annual Epact will be 10 Days 21 h 11' 22"; that is, nearly, 11 Days. Consequently, the Epact of 2 Years is 22 Days; of 3 Years, 33 Days; or rather 3, since 30 Days make an Embolismic, or Intercalary Month. See EMBOLISMIC.
Thus, the Epact of 4 Years is 14 Days, and so of the rest:
And thus, every 19th Year, the Epact becomes 30 or 0; consequently the 20th Year the Epact is 11 again: And so the Cycle of Epacts expires with the Golden Number, or Lunar Cycle of 19 Years, and begins again with the same, as in the following Table.
Gold. Numb.  Epact     Gold. Numb.  Epact     Gold. Numb.  Epact
     1        XI            7        XVII         13        XXIII
     2        XXII          8        XXVIII       14        IV
     3        III           9        IX           15        XV
     4        XIV          10        XX           16        XXVI
     5        XXV          11        I            17        VIII
     6        VI           12        XII          18        XIX
                                                  19        XXX
Again, as the New Moons are the same, that is, fall on the same Day every 19 Years, so the Difference between the Lunar and Solar Year is the same every 19 Years. And because the said Difference is always to be added to the Lunar Year, in Order to adjust, or make it equal to the Solar Year; hence the said Difference respectively belonging to each Year of the Moon's Cycle is called the Epact of the said Year, that is, the Number to be added to the said Year to make it equal to the Solar Year; The Word being form'd from the Greek ἐπάγω, induco, intercalo.
Upon this mutual Respect, between the Cycle of the Moon and the Cycle of the Epacts, is founded this Rule for finding the Epact belonging to any Year of the Moon's Cycle. Multiply the Year given of the Moon's Cycle into 11; if the Product be less than 30, it is the Epact sought; if the Product be greater than 30, divide it by 30, and the Remainder of the Dividend is the Epact: For Instance, I would know the Epact for the Year 1712, which is the third Year of the Moon's Cycle. Wherefore 3 is the Epact for 1712: For 11 × 3 = 33, and 33 being divided by 30, there is left 3 of the Dividend for the Epact. See CYCLE.
By Help of the Epact may be found what Day of any Month in any Year the New Moon falls on, thus: To the Number of the Month, from March inclusively, add the Epact of the Year given; if the Sum be less than 30, subtract it out of 30; if greater, subtract it out of 60; and the Remainder will be the Day whereon the New Moon will fall.
If the New Moon be sought for in the Month of January or March, then nothing is to be added to the Epact; if for February or April, then only 1 is to be added. For Example: I would know what Day of December the New Moon was on A. D. 1711, the Epact whereof is 22. By the aforesaid Rule, I find it will be December the 28th; for 22 + 10 = 32, and 60 − 32 = 28. See MOON.
The Day whereon the New Moon falls, being thus found, it is easy to infer from thence what the Age of the Moon is on any Day given.
However, there is a peculiar Rule commonly made use of to this Purpose, which is this: Add the Epact of the Year, the Number of the Month from March inclusively, and the given Day of the Month all into one Sum; which, if it be less than 30, shews the Age of the Moon; if it be greater than 30, divide it by 30, and the Remainder of the Dividend shews the Age of the Moon, or how many Days it is from the last New Moon; This Method will never err a whole Day.
For Instance: What was the Age of the Moon on December 31st, A. D. 1711? By this Rule, I find that the Moon will then be three Days old; that is, it will then be three Days from the last new Moon. For 22 + 10 + 31 = 63, and 63 being divided by 30, there will remain of the Dividend, 3. And this exactly agrees to the other foregoing Rule, whereby it was found that the New Moon was on December 28, 1711.
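Chambers' three rules are mechanical enough to restate in modern form. A sketch (Julian-calendar conventions exactly as the entry states them; months are counted from March = 1, and the table's special-casing of the 19th year is noted rather than guessed at):

    # Chambers' Epact rules, transcribed. Months count from March
    # inclusively (March = 1, ..., December = 10, February = 12).
    def epact(year_of_moons_cycle):
        # "Multiply the Year given of the Moon's Cycle into 11" and keep
        # the remainder mod 30. (The entry's table makes the 19th year 30.)
        return (year_of_moons_cycle * 11) % 30

    def new_moon_day(epact_value, month_from_march):
        s = epact_value + month_from_march
        return (30 - s) if s < 30 else (60 - s)

    def moon_age(epact_value, month_from_march, day):
        return (epact_value + month_from_march + day) % 30

    # The entry's worked example: 1711 has Epact 22, and December is
    # the 10th month counted from March.
    print(new_moon_day(22, 10))  # 28 -> December 28, 1711
    print(moon_age(22, 10, 31))  # 3  -> three days old on December 31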
It must be observed, that as the Cycle of 19 Years anticipates the new Moons by one Day in 312 Years, the same Cycle of Epacts will not always hold: The Moon's Anticipation lessening the several Epacts by one, every 312 Years.
To have the Epacts, therefore, point out the New Moons perpetually, the Epact given in the Calendar is not sufficient; but all the 30 Epacts should be bestowed throughout the whole Year, that the Calendar may exhibit all the Cycles of Epacts. See CALENDAR.
And, again, that as in 300 Gregorian Years there is one Bissextile Year dropp'd, the New Moons are thus thrown on the following Day. Consequently, by the Moon's post-position there is one added to every Epact. See GREGORIAN.
EPANORTHOSIS, in Rhetoric, a Figure whereby the Orator revokes and corrects something before alledg'd as too weak, and adds something stronger, and more conformable to the Passion he is agitated by. See CORRECTION.
Such, e. gr. is that of Cicero for Caelius: O stultitia! Stultitiam ne dicam, an Impudentiam Singularem. Oh Folly! Folly shall I call it, or rather intolerable Impudence? And in the first Catilinarian: Quamquam quid loquor! Te ut ulla res frangat? Tu ut unquam te corrigas? Tu ut ullam fugam meditere? Tu ut ullum exilium cogites? Utinam tibi istam Mentem Dii Immortales donarent.
Thus also Terence, in the Heautontimorumenos, introduces his old Man Menedemus, saying,
Filium unicum adolescentulum
Habeo. Ah! quid dixi habere me? Imo habui, Chreme,
Nunc habeam nec ne, incertum est.
The Word is Greek, ἐπανόρθωσις, form'd of ὀρθός, Right, Straight, whence ὀρθόω, I straighten, and ἐπανορθόω, I redress, straighten, correct.
The Latins call it Correctio, and Emendatio.
EPAULE, or ESPAULE, in Fortification, the Shoulder of the Bastion; or the Angle made by the Face and Flank; whence that Angle is often called the Angle of the Epaule. See BASTION and SHOULDER.
The Word is pure French, and literally signifies Shoulder.
EPAULEMENT, in Fortification, a Side-Work hastily thrown up, to cover the Canon, or the Men.
It is made either of Earth thrown up, of Bags of Earth, Gabions, or of Fascines and Earth; of which latter make, the Epaulements of the Places of Arms for the Cavalry, behind the Trenches, are.
EPAULEMENT is also used for a Demi-Bastion, consisting of a Face and Flank, placed at the Point of a Horn- or Crown-Work. Also, for a little Flank added to the Sides of a Horn-Work, to defend them when too long. Also for the Redoubts made on a right Line, to fortify it. And, lastly, for a Square Orillon, which is a Mass of Earth almost square, faced and lined with a Wall, and designed to cover the Canon of a Casement.
EPENTHESIS, in Grammar, the Addition, or Insertion of a Letter, whether a Vowel or Consonant, in the Middle of a Word; as Relligio for Religio. See FIGURE.
The Word is Greek, ἐπένθεσις, form'd of ἐπί, ἐν, and τίθημι, q. d. impono, insero, immitto.
EPHA, a dry Measure in Use among the Hebrews. See MEASURE.
The Epha was the most ordinary Measure they used, and that whereby the rest were regulated. 'Tis commonly supposed that the Epha, reduced to Roman Modii, contain'd four Modii and a half. Now the Roman Modius of Grains, or Flower, contain'd 20 Librae, or Pounds; consequently the Epha weigh'd 90 Pounds. Dr. Arbuthnot reduces the Epha to 3 Pecks, 3 Pints, English.
The Hospitality of Gideon is prais'd for baking an Epha of Flower for a single Angel; which might have served 45 Men a whole Day, the usual Portion allow'd the Workmen being two Pound of Bread per Diem.
EPHEMERA, in Medicine, an Epithet applied to something that only lasts one Day; particularly to a Fever which terminates in the Compass of a Day, i. e. to an Access of a Fever which returns no more; called by Galen ἐφήμερος, Febris Ephemera, and also Diaria. See FEVER.
How Do You Go About Learning Mathematics?
I really like mathematics, but I am not good at learning it. I find it takes me a long time to absorb new material by reading on my own and I haven't found a formula that works for me. I am hoping a
few people out there will tell me how they go about learning math so I can try out their systems.
I need to know basic things. Should I use one book at a time or should I be reading many books on the same topic at once? Do you stop reading when you hit on a fact that you don't understand or do
you keep reading?
Do you read all in one go or do you do a little bit and for how long (1 hr, 2 hr or more?)
Do you read all the chapters or do you do all the exercises before moving on from a chapter?
Do you adjust your technique in Calculus(calculation heavy) vs. Analysis (proof heavy)? If so, how?
When you make notes, what do you make notes about? Do you make notes while you read or after?
Is there some note writing system (eg. Cornell system) that you find superior for taking mathematics?
If you think these decisions all depend, can you say what they depend on?
I am really lost here. I would appreciate any input.
Full Disclosure: I have asked this question on Math Stack Exchange.
I am looking for a diversity of approaches. I hope this question is on-topic here.
As there is already one vote to close, I have started a discussion on meta at tea.mathoverflow.net/discussion/650 – Theo Johnson-Freyd Sep 5 '10 at 18:15
I vote to close, the question is offtopic. Besides, everyone must find his individual technique. For example, I tend to read proofs sometimes in a non-specific order and let my intuition fill in
the rest of the proof. But of course, not everyone should do this. – Martin Brandenburg Sep 5 '10 at 19:03
And yes, all these questions depend on your mathematical experience, on your situation, how many time you want to spend, etc. pp. – Martin Brandenburg Sep 5 '10 at 19:06
In Theo's meta thread I've posted links to some of the more relevant related threads. Browsing the "soft question" tag is a slightly less efficient alternative way to find these threads. – Ryan
Budney Sep 5 '10 at 19:32
@Martin: I expect the answers to be personal. However, my hope is if I get several answers, then among them, one might be useful for me. I can also mix and match variations of what other people are
doing. For example, music tastes are highly personal but if a large enough number of people shared their favorite song then perhaps I would find one I really liked also. (Looking at other people's
top ten music lists has been a way I've found new favorites in the past.) – user9028 Sep 5 '10 at 20:13
closed as off topic by Andrew Stacey, Martin Brandenburg, Ryan Budney, Daniel Moskovich, Felipe Voloch Sep 6 '10 at 13:26
4 Answers
I doubt there is a universal formula for this, but here's my view (expressed before on MathOverflow):
1. As much as possible, learn things together with others. A working seminar is a wonderful way to learn things.
2. Read as little as possible and try to work out as much as possible on your own. Read only enough to get the idea of what's going on and then try to work out the details yourself. Consult the book only when you get stuck or lost in what you're doing. Avoid letting the book do any work that you are able to do yourself.
+1 for the second point. However, usually it may take some time for beginners to realize that this is very useful. – Somnath Basu Sep 5 '10 at 21:15
@Deane I dunno if that really works. I fully agree that serious math students have to force themselves to produce as many proofs as they can without looking them up. But I don't know if this kind of brute, "info only need to know" minimalism produces the kind of deep insight working through several treatments of the same material does. I know many very talented math students at top programs who do this. The global result, to me, is somewhat less than stellar. They usually have huge gaps in their knowledge--usually in the most basic of concepts. – Andrew L Sep 6 '10 at 6:37
@Deane continued: For example, I had a friend who was researching nonassociative algebras at Stanford by the age of 20. He had no clue what filters or nets were. He also had never heard of the classification of compact surfaces--which was particularly shocking given his area. I'm not saying you're wrong. I understand the need to force yourself to create math. I just don't know if you have to go to that extreme to accomplish this. I read everything actively, but I don't STUDY like that--when I'm studying, the only thing I don't produce from whole cloth are definitions and some examples. – Andrew L Sep 6 '10 at 6:42
Andrew, to clarify, I'm not suggesting that a student try to create his own math. I'm just saying that, given the logical rigor of math, you can often work out at least some of the more obvious rigorous details of a typical proof yourself from just knowing the intuitive idea of what's going on and that doing this is worth the effort. I certainly don't think it's realistic for a student to work out entire statements and proofs of theorems without consulting references carefully. And I don't believe I said anything that implies an overly narrow focus in what you study. – Deane Yang Sep 6 '10 at 12:46
My suggestion is, first, don't look for the optimal way to learn mathematics -how not to quote here Menaechmus' famous reply to Alexander: "there is no royal road to geometry".
Second, speak with other people -here it may be interesting to discuss with somebody else following a different book.
In any case, remember that maths books are very dense; no surprise if reading is slow! But on the other hand, each new single book that you read may enrich you greatly. So, just go at your speed, don't worry about the time it takes, and enjoy what you are learning.
"What is the most effective way to learn mathematics?"
I have been trying to answer this question for myself, and one measure I've taken towards this goal is to record all of my mathematical reading, work, and random thoughts in a journal. I
highly recommend the practice as it has been very illuminating to me since I started a few months ago. Reviewing my previous readings allows me to ascertain how much math I actually end up
retaining from my study sessions, and keeping all of my work in one place (as opposed to throwaway scrap paper) allows me to spot any particularly common mistakes.
So far, I've found that my memory is far more tenuous than I had previously assumed. I'd look at last month's entries and realize that I'd only retained 20% of what I had learned; fine
details being especially prone to slippage. Yet from analyzing my mistakes, I've also found that those very details are much more crucial than I had thought.
The result of all of this is that I've started to shift my focus from "learning new math rapidly" (which has been my focus since I am still an undergraduate) to "winning the uphill battle against memory loss." From this new perspective, the old adage: "the only way to learn mathematics is through doing" begins to make a lot more sense. While active learning is far from any
cure to forgetfulness, given my own mnemonic capabilities I have come to see that it would probably be a better long-term investment to spend a month on fully working and understanding a
chapter, than to spend the same time blazing through several chapters but skipping the exercises (having done both.)
I emphasize again that this is my own conclusion based on my own characteristics, and that is precisely why I recommend everyone to find their own answer to this question by keeping their
own math notebook.
I've found that the best books are the ones that make me pause when I read a paragraph or point because I suddenly feel that I've understood something well or that I've suddenly slipped
gears and am bogged down. At that point, I tend to walk away from the book towards a stack of blank pages and work out the problem as well as I understand it at that point. Either I conquer
my mistake and return to the book, or I find another book or example which helps me. This is particularly true of my Differential Equations book from my undergraduate course, and I remember
stopping at the catenary problem and marveling at the simple and elegant way of looking at it provided by differential equations.
The other direction which helps me is in having (i) a problem to solve or (ii) a question to answer already in mind. I ended up rereading a book on Computation theory and Finite Automata
because I wanted to refine the state space of a particular algorithm. Having a question in mind helped me re-state the examples in the book in terms of what I found myself interested in.
Summary:
• learn the tools,
• handle and use the tools (you won't break anything, except a sweat),
• work out the math and proofs on your own, convince yourself of the validity of a proof,
• look for another source if the book you're using seems opaque and too difficult to understand. Do you need more examples to understand a mathematical point? Look for a book designed for
applications, e.g. computer science or engineering oriented books about math.
• the answer depends on the book and on your mental state. On some days, it is worth the trouble to keep on attacking a problem until you can solve it. On other days, if you cannot
concentrate, it makes sense to walk away from the problem and the book, read a different topic or read a different book on the same topic, and sleep on it. Try again the next day.
How Do Companies Calculate Their Cost of Capital?
On Monday, the Association for Financial Professionals issued its “2013 AFP Estimating and Applying Cost of Capital Survey Report.” The survey of 424 finance professionals found that although
financial planning and analysis (FP&A) might be the department that makes the most use of weighted average cost of capital (WACC) figures, it is generally not the department responsible for coming up
with WACC estimates. In 46 percent of respondents’ companies, treasury is responsible for WACC calculations. This responsibility falls to finance in 32 percent of companies and to FP&A in only 18 percent.
More than half of respondents think their organization’s WACC estimate is within 50 basis points (bps) of its actual cost of capital, but 11 percent believe the WACC is off by more than 100 bps.
Large companies are three times as likely to think their WACC calculations are more than 1 percentage point away from actual cost of capital (see Figure 1).
The survey delved into the calculations companies use to reach their WACC estimates. It found that the vast majority of companies (85 percent) use discounted cash flow (DCF) techniques to value their
projects and investment opportunities; most of the remaining 15 percent of respondents use either projected return on investment (ROI) or a more generic means of cost-benefit analysis. Most DCF users
(51 percent) discount explicit forecasted cash flows over the project’s first five years. Twenty-six percent use a 10-year explicit cash flow forecast, only 3 percent use a 15-year forecast, and 12
percent use a three-year forecast.
Following their explicit discounted cash flow forecasting, respondents most commonly estimate the terminal or continuing value of the project or investment opportunity using a perpetuity growth model
(42 percent). Thirty-seven percent use a long explicit cash flow forecast (most popular among smaller and privately held companies), and 15 percent use the value driver model.
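In outline, the perpetuity-growth approach discounts the explicit forecast and then capitalizes the final year's cash flow at the spread between the discount rate and the growth rate. A minimal sketch with made-up numbers (the cash flows, 9 percent discount rate, and 2 percent growth rate below are illustrative assumptions, not survey figures):

    # Hypothetical DCF: present value of a 5-year explicit forecast plus
    # a perpetuity-growth terminal value, discounted at rate r (with r > g).
    def dcf_value(cash_flows, r, g):
        pv_explicit = sum(cf / (1 + r) ** t
                          for t, cf in enumerate(cash_flows, start=1))
        terminal = cash_flows[-1] * (1 + g) / (r - g)  # value at the horizon
        pv_terminal = terminal / (1 + r) ** len(cash_flows)
        return pv_explicit + pv_terminal

    print(round(dcf_value([100, 110, 120, 130, 140], r=0.09, g=0.02), 1))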
The survey also found that nearly three-quarters of respondents consider multiple scenarios (e.g., best-case, worst-case, and expected-case scenarios) when modeling cash flows for projects and
investment opportunities (see Figure 2).
The survey also explored how companies estimate their cost of equity. Eighty-five percent of respondents use the Capital Asset Pricing Model (CAPM); only 4 percent use the Dividend Discount Model
(DDM), and 2 percent use the Arbitrage Pricing Model (APM).
Among those that use the CAPM, the most popular instrument used in estimating risk-free rates is the 10-year Treasury (39 percent), followed by 90-day Treasuries (17 percent), 5-year Treasuries (14
percent), and 52-week Treasuries (12 percent). An increasing number of companies are imposing a floor, a cap, or both on the risk-free rate they use in evaluating projects and investments (see Figure
3). Among these, the average floor for the risk-free interest rate is 4 percent. The average rate cap is 8 percent, although the average among companies with less than $1 billion in annual revenue is
10 percent and the average among larger companies is 7 percent.
To determine the beta factor in estimating their cost of equity, nearly two-thirds of companies use Bloomberg data. Half use a raw beta factor, while the other half use an adjusted beta factor. This
is a notable shift toward using raw beta; in 2010, considerably more respondents used adjusted beta (57 percent).
When asked about the market risk premium they use to determine expectations for return on an equity portfolio, roughly equal numbers use a premium of less than 3 percent, a premium of 6 percent or
higher, and each full percentage in between the two (see Figure 4). The most common frequency of re-evaluating the market risk premium is once a year (36 percent of respondents), although 19 percent
re-evaluate on a quarterly basis and 23 percent reconsider their market risk premium every time they estimate their cost of equity.
In figuring the cost-of-debt component of their WACC, 43 percent of respondents' companies use the current rate on their outstanding debt, 21 percent use the forecasted rate for new debt issuance, 21
percent use an average rate on outstanding debt over a defined period of time, and 12 percent use the historical rate on outstanding debt. The survey also inquired about which tax rate respondents
use when calculating the after-tax cost of debt. Almost two-thirds (60 percent) use the effective tax rate, while more than a quarter (27 percent) use the marginal tax rate.
How do companies pull together all these calculations to determine their WACC? In establishing the relative weighting of debt and equity, 36 percent use their current book debt-to-equity ratio.
Seventeen percent use their current book debt but use market values for the equity portion of the ratio. Twenty-four percent use their current market debt-to-equity ratio, while 18 percent use their
target debt-to-equity ratio.
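Taken together, the survey's components map onto the textbook formula: a CAPM cost of equity and an after-tax cost of debt, weighted by the chosen capital structure. A sketch with hypothetical inputs (every number below is an illustration, not survey data):

    # Illustrative WACC: cost of equity via CAPM, cost of debt after tax.
    def wacc(risk_free, beta, market_premium, cost_of_debt, tax_rate,
             debt, equity):
        cost_of_equity = risk_free + beta * market_premium  # CAPM
        after_tax_debt = cost_of_debt * (1 - tax_rate)
        total = debt + equity
        return ((equity / total) * cost_of_equity
                + (debt / total) * after_tax_debt)

    # e.g. a 3% 10-year Treasury, beta 1.2, 5% market premium, 6% debt
    # cost, 25% effective tax rate, and a 40/60 debt-to-equity mix:
    print(wacc(0.03, 1.2, 0.05, 0.06, 0.25, debt=40, equity=60))  # about 0.072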
Ultimately, when determining whether to fund a specific project, 48 percent of large companies and 32 percent of smaller businesses use a standard hurdle rate above the WACC. Then many adjust the
hurdle rate when characteristics of the project indicate that doing so is appropriate (see Figure 5).
Integer cohomology of the Grassman manifold of n planes in $R^\infty$
I can't seem to find a reference on the web that gives the $\mathbb{Z}$ cohomology of the Grassmann manifold of real n-planes in infinite dimensional Euclidean space and also the Bockstein maps
associated with the coefficient sequence
$$0 \to \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0.$$
The real question is which products of Stiefel-Whitney classes are really $\mathbb{Z}$ classes.
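(For orientation, a standard fact rather than anything specific to this question: the coefficient sequence above induces a long exact sequence
$$\cdots \to H^n(X;\mathbb{Z}) \xrightarrow{2} H^n(X;\mathbb{Z}) \xrightarrow{\rho} H^n(X;\mathbb{Z}/2\mathbb{Z}) \xrightarrow{\tilde\beta} H^{n+1}(X;\mathbb{Z}) \to \cdots$$
whose connecting map $\tilde\beta$ is the integral Bockstein. By exactness, a mod 2 class is the reduction of an integral class exactly when $\tilde\beta$ annihilates it, and the mod 2 reduction $\rho\circ\tilde\beta$ is $Sq^1$.)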
See also: mathoverflow.net/questions/16632/… – Mark Grant Jul 16 '12 at 8:18
1 Answer
I don't know if these have everything that you want, but see the following:
Brown, Edgar H., Jr. The cohomology of BSOn and BOn with integer coefficients. Proc. Amer. Math. Soc. 85 (1982), no. 2, 283–288.
Feshbach, Mark. The integral cohomology rings of the classifying spaces of O(n) and SO(n). Indiana Univ. Math. J. 32 (1983), no. 4, 511–516.
Thanks for the reference. I gather from a cursory reading that the products of Stiefel-Whitney classes that are mod 2 reductions of integer classes are generated by mod 2 reductions of the Chern classes of the universal n-plane bundle and $Sq^1$ of the even Stiefel-Whitney classes, that is, the polynomials $w_1 w_{2i} + w_{2i+1}$. – marc gordon Jul 17 '12 at 18:21
[Numpy-discussion] subclassing ndarray
Chris.Barker Chris.Barker@noaa....
Mon Nov 21 19:18:36 CST 2011
Hi folks,
I'm working on a "ragged array" class -- an array that can store and
work with what can be considered tabular data, with the rows of
different lengths:
A "ragged" array class -- build on numpy
The idea is to be able to store data that is essentially 2-d, but each
row is
an arbitrary length, like:
At the moment, my implementation (see enclosed) stores the data in a 1-d
numpy array as an attribute, and also an index array that stores the
indexes into the rows. This is working fine.
However, I'd like to have it support any of the usual numpy operations
that make sense for a ragged array:
arr *= a_scalar
arr * a_scalar
etc, etc, etc.
So I thought maybe I'd do a subclass, instead of having the data array
an attribute of the class. But I can't figure out how to solve the
indexing problem:
I want to re-map indexing, so that:
arr[i] returns the ith "row":
In [2]: ra = ragged_array([(1,2), (3,4,5), (6,7)])
In [4]: print ra
ragged array:
[1 2]
[3 4 5]
[6 7]
In [5]: ra[1]
Out[5]: array([3, 4, 5])
I'm currently doing (error checking removed):
def __getitem__(self, index):
    """Return a numpy array of one row."""
    row = self._data_array[self._index_array[index]:self._index_array[index+1]]
    return row
But if I subclass ndarray, then self._data_array becomes just plain
"self", and I've overloaded indexing (and slicing), so I don't know how
I could index into the "flat" array to get the subset of the array I need.
any ideas?
Other comments about the class would be great, too.
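One possibility, sticking with composition rather than subclassing: forward the elementwise arithmetic to the flat data array and leave the index array alone. A sketch (the attribute names mirror the post, but the constructor and arithmetic plumbing here are my assumptions, not the attached code):

    import numpy as np

    class RaggedArray(object):
        """Rows of differing lengths stored flat; elementwise ops touch
        only _data_array, so the row structure is preserved."""
        def __init__(self, rows):
            self._data_array = np.concatenate([np.asarray(r) for r in rows])
            self._index_array = np.concatenate(
                ([0], np.cumsum([len(r) for r in rows])))

        def __getitem__(self, index):
            return self._data_array[self._index_array[index]:
                                    self._index_array[index + 1]]

        def __imul__(self, scalar):            # arr *= a_scalar
            self._data_array *= scalar
            return self

        def __mul__(self, scalar):             # arr * a_scalar
            out = object.__new__(RaggedArray)  # share the row index
            out._data_array = self._data_array * scalar
            out._index_array = self._index_array
            return out

    ra = RaggedArray([(1, 2), (3, 4, 5), (6, 7)])
    print(ra[1])        # [3 4 5]
    print((ra * 2)[1])  # [ 6  8 10]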
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
[Two non-text attachments accompanied this message: ragged_array.py (3727 bytes) and test_ragged_array.py (4208 bytes).]
Mark Gritter's Journal
Saturday, November 30th, 2013
1:00 am Court Dress and Diplomatic Uniforms
One of the things I love about reading the Economist is little historical tidbits that get brought to my attention. For example, a few weeks ago I learned about the British
Honors Forfeiture Committee. And, of course, Wikipedia also has a category for persons stripped of their honors.
Today's gem is that in 1853, the United States asked its diplomats not to wear court dress any longer.
Wikipedia's explanation is a bit more involved:
...In 1853, Secretary of State William L. Marcy issued a circular recommending that U.S. diplomats wear “the simple dress of an American citizen.”
In response to what was perceived as the excessive ostentatiousness of some of these individualized uniforms, Congress banned diplomatic uniforms altogether in 1867, by passing a
resolution forbidding diplomatic officials to wear "any uniform or official costume not previously authorized by Congress". This caused some discomfort to American diplomats, who now
had to appear "underdressed", in evening dress, to official functions. In 1910, Theodore Roosevelt attracted considerable attention when he was the only foreign official at the
funeral of King Edward VII who was not in uniform.
It goes on to state that modified Navy uniforms were in use for a while, but the practice was stopped by FDR in 1937, and codified in law in 1946.
Now... it's pretty clear the intent of the law is being followed. But a quick search of pictures of diplomatic staff suggests "black suit and tie" is a de facto uniform for males. (A few
grey suits, I admit.) How much uniformity is too much? Are members of other departments also forbidden to wear uniforms unless authorized by Congress? Who would have standing to sue if
the State Department violated this law?
Friday, November 29th, 2013
6:57 pm Starfleet makes no sense (again)
In our Deep Space Nine viewing we have made it through
Sacrifice of Angels
and the big
One of my big annoyances is that Captain Sisko is running the show. Why does Starfleet even *have* admirals and commodores, if not to coordinate major fleet movements? (I guess they're
occasionally antagonists and scene-setters.) Would it be too much to have Admiral Ross, who's been a recurring character, direct the fight? Is this
"The Main Characters Do Everything"
or were they just too cheap to build a flagship set?
Anyway, that leads to my Star Trek trivia question: which officer (in canon) is responsible for the greatest loss, by number of starships, in Starfleet history?
I think it must be Sisko, given that the engagement was described as having about 600 ships and there were significant losses (though at least 200 survived.) Probably not all are capital
ships. Wolf 359 was described as just 40 ships vs. the Borg Cube, so Admiral Hanson is probably off the hook.
2:20 pm Boomtown
Kev introduced me to Boomtown at Thanksgiving, and we played it with my godkids. (We learned that Rob has an unconventional bidding strategy.)
In each round you bid to get first choice of the available mines. Each mine has a production value (1 through 6 gold) and a number that specifies when it triggers, based on the roll of
two dice each turn. Further, collecting a set of mines from the same town gives you the mayorship, which lets you collect additional gold from players who build mines in that town (or
are forced to by the available options.)
Position is important because choice of mines proceeds clockwise from the winner, and bid payments are paid counterclockwise. (The player to the winner's right gets half, the next player
gets 1/4, etc., with any remainder going back to the bank.)
So, each mine has value in a variety of different ways:
* Its "equity" value, the production value translates to points at the end of the game
* The expected stream of future payments from production (more important in a small game than a big one). Usually the mines with rarer numbers (2 and 12) do not have big enough
production value to compensate for the low probability. This can be calculated exactly.
* Its contribution towards winning a mayor, or increasing the value of the mayor's office
The mayor has the same components:
* A 5-point value at the end of the game
* The expected stream of future payments from mines in the matching city. This is somewhat dependent on future bidding but the number of undiscovered mines in that city is known.
Both these need to be risk-weighted against the possibility of losing the mine (to a special card) or the mayor's office (due to somebody outbidding.) So valuation is complex enough to
be interesting. I wonder if tools from conventional finance are worth using here: does it make sense to apply a discounting rate to future returns? I think not, because the game offers no
risk-free reward to discount against.
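For the "calculated exactly" bit, here is a minimal Python sketch (the function names are mine, and it ignores the horizon length and the mayor bonus): the chance that two dice sum to the mine's trigger number, times its production value, gives the expected gold per turn.

def trigger_prob(n):
    # probability that two six-sided dice sum to n
    return max(0, 6 - abs(n - 7)) / 36.0

def expected_gold_per_turn(production, trigger):
    return production * trigger_prob(trigger)

# A production-1 mine on 7 earns the same expected gold per turn
# as a production-6 mine on 12:
print(expected_gold_per_turn(1, 7))   # 0.1666...
print(expected_gold_per_turn(6, 12))  # 0.1666...

Since production tops out around 6, a mine triggering on 2 or 12 can never beat a mid-range number on expected production alone, which is the point made above.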
However, the bid payout mechanism (and the fact that you get one card every round no matter what) makes bidding nontrivial as well. Let's ignore coalitions for a moment--- they're hard
anyway--- and just look at two players, A and B. Let's also ignore any + or - value to position (the winning bidder bids first the next round.)
mine X: value 3 to A, value 0 to B (a production-3 mine in A's city, assuming A has three mines in that city)
mine Y: value 3 to A, value 6 to B (a production-6 mine in B's city, mirrored assumption)
The global optimum is that B gets Y and A gets X. But because this is a competitive game, A prefers (AY,BX). So we can recast this in terms of A's utility:
X to B: +3
X to A: -3
B's utility is the opposite (in games with more players we can't make this simplification). But A can't bid 3, because 2 of those gold would go to B. Writing things out:
A bids 3 and wins: +3 -3 -2 = -2
A bids 2 and wins: +3 -2 -1 = 0
A bids 1 and wins: +3 -1 -1 = +1
Is A's win enough? Well, from B's perspective (more negative is better) he can bid 2 and get an improved result--- the payoffs are all reversed:
B bids 3 and wins: -3 +3 +2 = +2
B bids 2 and wins: -3 +2 +1 = 0
B bids 1 and wins: -3 +1 +1 = -1
So if A goes first he should bid 1, forcing B to bid 2--- or he can bid 2 himself and achieve the same payoff (but B might make a mistake either way.)
In a two-player game, is there always a way to force no net gain? No, because the mine values may be fractional due to the future revenue stream, and only whole-value bids are accepted.
In that case, the winning strategy is to immediately bid the amount which produces a small (<1) positive result for the first bidder; the second player cannot improve his bid without
going negative, since zero is not possible.
But this strategy suggests there is an advantage to bidding first, equal to the fractional payouts in future rounds. So, confusingly, it might be worth overpaying in round 1 if you could
go first on all subsequent rounds. But the other player could compensate by overbidding in round 2. I don't know what the end effect of this line of reasoning would be (it might not even
be feasible with limited bankrolls)--- it might make an interesting toy game to study all by itself.
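Here is a toy enumeration of the two-player analysis above, from A's zero-sum perspective. The opponent's cut of each bid (1, 1, and 2 gold for bids of 1, 2, and 3) is read straight off the worked numbers, so the exact rounding rule is an assumption baked into that table rather than derived.

OPPONENT_CUT = {1: 1, 2: 1, 3: 2}
SWING = 3  # utility swing from winning the contested mine, per the example

def a_net(bid):
    # A's net utility if A wins at this bid
    return SWING - bid - OPPONENT_CUT[bid]

def b_net(bid):
    # A's net utility if B wins at this bid (zero-sum: B's gain is A's loss)
    return -SWING + bid + OPPONENT_CUT[bid]

for bid in (1, 2, 3):
    print(bid, a_net(bid), b_net(bid))
# bid 1: +1/-1, bid 2: 0/0, bid 3: -2/+2, matching the tables above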
Saturday, November 23rd, 2013
12:18 am Build Me This Game
So, I was reading the reviews for
Rise of Venice
, since it was on sale on Steam, but decided not to buy it at its current price. The campaign sounds like it's a lot of drudgery. But I like trading and economic games---- I'm looking
forward to Elite: Dangerous coming out in March 2014.
I have also been poking at a shopkeeping game on Kongregate (more grindy than I'm happy with but there are some time-and-motion optimizations!), and thinking of either getting back into
Dwarf Fortress (also big on time-and-motion) or finally learning how Towns works.
Not that I have time. Especially this month. However!
What I'd really like to see is a mashup of the two genres. Instead of playing the dwarves tunneling into the earth below, play the nearby human town they trade with. When the dwarves
discover a good seam of iron, you can trade it with visiting caravans, or build a forge to sell the iron back to the dwarves as pickaxes and armor. But if you don't take the latter
route, the dwarves may decide to invest in their own ironworks, reducing the supply of raw materials--- but potentially opening the door to higher-value goods. I'm imagining both
tactical decisions on what facilities to build and what goods to stock--- but also a strategic game about developing trade and competitive advantage, exploring a tree of different
decision points.
Most shopkeeper-type games tend to be more about collecting everything rather than specializing, alas. And most trading games are mostly about established patterns rather than industrial
development. I'd like something a little closer to SimCity or Civ, with multiple paths to success.
Monday, November 11th, 2013
1:54 am Things I hate in adventure games
Trying a new point-and-click adventure game that came in a Humble Bundle: "Broken Sword: Shadow of the Templars." OK so far except for some voice acting that sounds like it was recorded
in a cardboard box.
But: say you're a modern-era journalist who's planning on doing some exploring. Even pre-cellphone, you'd bring a flashlight, right? Or your camera? I'm willing to put up with Sam and
Max scraping by on used chewing gum they pick up off the sidewalk, but anybody with a steady job has at least *some* resources.
Or, say you need to pry open a stuck door. What's more logical: go home and get a crowbar, or use the secret sliding door to smash the end of a shell casing you found lying around? What
the heck?!?
The Batman (Arkham) games did somewhat better at making puzzles that admitted more than one solution (among the gadgets Batman had equipped), although the tool upgrade mechanism annoyed
me there too as something not really in character. But I'm wondering if there's any "game" left if you gave adventurers access to real resources.
1:26 am More stunning ignorance in the Strib opinion section
I shouldn't even be surprised by this point, but
The middle class: An American tragedy, in numbers
contained this blatantly racist passage:
America, uniquely, has always been a middle-class country — no aristocracy, no peasantry, and later, under industrialization, to the great frustration of Marxists, no proletariat.
Americans were always of the middling social orders. This was the thesis of Harvard Prof. Louis Hartz as he explained in the middle of the last century why we had always had centrist politics.
The Puritans of New England and the Quakers of Pennsylvania came from the rising middle class and early capitalists of England, Scotland and Wales. The more socially exclusive
Cavaliers, who led the founding of Virginia, Georgia and the Carolinas, were not aristocrats in English terms but merely squires. Succeeding waves of immigrants — even down to
today’s Vietnamese and Hispanics — have been thoroughly middle-class in their aspirations.
In 1860, the U.S. Census recorded 3,953,760 slaves out of a total U.S. population of 31,443,321. One out of every eight people living in slavery doesn't count as a "middle-class country"?
The Chinese laborers who built the transcontinental railroad and worked the California gold mines? Whose wives were turned back by anti-Chinese immigration laws? Totally not allowed into
the middle class. But they never amounted to even a single percent of the U.S. population until this century, so their experience must not count?
While that's the article's greatest sin, the author does himself no favors by bringing up the increase of tattoos as an example of how the middle class is abandoning its values.
Wednesday, October 30th, 2013
10:07 pm Mark's Autumnal Salsa
Peel two tomatoes. (If you abrade the skin of the tomato with the dull side of a knife first, the skin will come off more easily. Various other methods are discussed on the internet--- I
don't remember where I learned this one.)
Peel three pearl onions.
Remove the tops from all the jalapeno peppers remaining from the garden. (In my case about 15 small-sized peppers, in a mixture of red and green, that had been sitting in the fridge for
a week or two. Most recipes recommend two or three peppers.)
Dice everything to an even consistency. You're aiming for something like a relish, not gazpacho. Mix with 1 tbsp lemon juice and a few shakes of salt.
After warning onlookers to stand clear of the resulting fireball, sample and adjust to taste.
Tuesday, October 15th, 2013
9:09 pm More Tintri News and Quotes
Tintri announced that Ken Klein, who was previously our independent board member, will take over as Chairman and CEO. Kieran Harty, my co-founder, will take the CTO role.
Press release here.
We also have a great collection of quotes from analysts, bloggers, and partners about our recent product launches, collected on the Tintri blog:
Rave Reviews for Tintri VMstore T600 Series and Tintri Global Center
Monday, September 23rd, 2013
5:46 pm Error Messages Are Hard, a continuing series
Poor little compiler.
[javac] Compiling 95 source files to /hg/tc/out/classes
[javac] /hg/tc/src/java/com/tintri/platform/YYY.java:64: '(' expected
[javac] } else if if ( result == 1 ) {
[javac] ^
[javac] /hg/tc/src/java/com/tintri/platform/YYY:64: illegal start of expression
[javac] } else if if ( result == 1 ) {
[javac] ^
[javac] /hg/tc/src/java/com/tintri/platform/YYY.java:64: ')' expected
[javac] } else if if ( result == 1 ) {
[javac] ^
[javac] /hg/tc/src/java/com/tintri/platform/YYY.java:64: not a statement
[javac] } else if if ( result == 1 ) {
[javac] ^
[javac] /hg/tc/src/java/com/tintri/platform/YYY.java:64: ';' expected
[javac] } else if if ( result == 1 ) {
[javac] ^
[javac] 5 errors
However, I think it could do better. That's only 5 errors out of 8 tokens remaining on the line. Why not one error message per token?
Friday, September 13th, 2013
9:03 am Lance Fortnow's "The Golden Ticket"
I'm being driven kind of nuts by this book because it confuses perfect machine learning with near-perfect predictive ability. The examples given for the consequences of P=NP are simply
outlandish. That's not to say they're all wrong--- machine-generated proofs really could demolish long-held problems if that were the situation. But in other cases he vastly
overestimates our ability to predict even with perfect knowledge. There is no way that a year of weather or an entire baseball season can be accurately predicted no matter how much "big
data" you have to learn from; that's just a limitation of chaotic systems.
I am also a bit less than impressed that he doesn't much acknowledge that there are problems harder than NP, where even verifying an answer takes non-polynomial time, or the answer is
uncomputable. (This does get a mention later in the book.) But this is not a rigorous introduction, it's a popular science book.
It's a good introduction for all that (including some interesting history even for those already familiar with the topic). But even a complexity theorist should understand the difference
between modelling and prediction.
Thursday, September 12th, 2013
2:34 am I can't find the earlier version of this rant, though I'm sure I made it
Nearly every kid (and, heck, probably adults learning mathematics for the first time) who is introduced to the concept of infinity asks questions like "what's zero times infinity?" or "what's the square root of infinity?" or "can you divide one by infinity?"
Which is the most helpful response?
1. Snotty lectures about how infinity isn't really a number, so you can't even ask that sort of question.
2. Reasonable explanations about how these terms aren't always well-defined, and the problems you run into trying to give them a definition.
3. Here are some ways in which mathematicians have tried to answer those questions in ways that still make sense!
Unfortunately, as the links above show, #1 is by far the most popular approach. #2 is not bad, but if you actually want somebody to stay interested in math, I think #3 is far superior.
Mathematics is a game. It's not a set of rules, it's about making up rules and seeing where they lead. And concepts like surreal numbers, projective geometry, and Hilbert spaces all use
infinities in ways that are mathematically consistent but "bizarre". Surreal numbers in particular are a great example of taking seriously such ideas as infinitesimals and "infinity plus
one" and giving them a concrete meaning instead of blowing such obvious ideas off as stupid. For that matter, the whole field of complex analysis results from taking seriously something
that was previously just a "hack" to make the cubic function come out right: "what is the square root of -1?"
Some games don't work out, but that's probably just because nobody's been clever enough yet. Or in some cases, the game is provably not very interesting (which is sort of
meta-interesting). But the next time somebody tries to get pseudo-sophisticated with you by explaining how your math question can't even be asked, treat them as you would any other
Saturday, September 7th, 2013
1:17 pm Beef Kebab Marinade
Here's what I threw together for kebabs yesterday:
1/3 cup soy sauce
1/3 cup red wine
1/4 cup honey
1 tsp ground ginger
4 cloves garlic, smashed
(mainly for future reference.)
Thursday, September 5th, 2013
10:24 pm Transpositions are Easy
Building on the toy games introduced earlier, I figured the easiest tile-game move to analyze is transpositions. Only two tiles are affected by any move, and they have to be opposite colors in any minimal solution. My earlier code
showed, for example, that it takes 8 tile swaps (orthogonal moves only) to convert one arrangement into another (the board diagrams did not survive formatting), and it's pretty easy to find such a sequence of moves. To do so you sort of eyeball where the "gaps" are and which other pieces are closest to those gaps. Can this idea be formalized
into a (non-brute-force) algorithm for finding minimal move counts? I think so.
Let's start with the algorithm, then argue that its results make sense. Transform the problem into a
minimum-cost bipartite matching problem
. The two sets are the goal positions and the pieces, and the edge costs are the distance (in moves) between each pair. So the matrix representation of the transformed problem looks like
                 pieces -->
goals     0  1  3  2  3  3  3  5
  |       1  0  2  1  2  2  4  4
  V       2  1  1  2  1  3  5  3
(only the first rows of the full cost matrix survived formatting)
The minimum selection of 8 elements from this matrix, which share no row or column in common, gives the number of moves in the solution. It's obvious that no fewer number of moves will
work. But can we show that this number is achievable?
Now, alas, you can't just read off the matching to get the sequence of moves. For example, in one case (diagram omitted) moving "piece 8 to position 3" has cost 2, as does "piece 6 to position 3 and piece 8 to position 6", so both are solutions to the matching, but only the latter is feasible. If we tried
to move piece 8 twice we'd need an additional move to put 6 back into place. We need to show that any infeasible plan can be transformed into one which is feasible with the same number
of moves.
Fortunately, this is pretty easy. Suppose the plan calls for moving piece A onto the square currently occupied by piece B. This move is a no-op since the pieces are identical in this
puzzle. We can remove that transposition from the move sequence by "swapping the identities" of A and B instead of making the no-op move. That is, whenever we would move A to a non-empty
space occupied by B, instead swap the labels and continue moving the new "A-prime" (previously "B"). This will leave "B-prime" one space away from its initial starting point, so after
A-prime has moved on, use one move to put B-prime back into its starting point (or its goal point). This exactly balances out the move we removed because it was a no-op.
Puzzle: * *
A 0 0 B 0
Plan: A --> --> --> --> (4 transpositions)
B (0 transpositions)
Execution: A --> --> rename
A'--> (3 transpositions)
B'--> (1 transposition)
The transformation can be applied multiple times to move through multiple pieces as well, just move the renamed pieces back in reverse order (first-in-last-out). As a check, I ran the
minimum-matching algorithm across all 4x4 positions and it agreed with the previously computed values.
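For anyone who wants to reproduce the matching step, here's a minimal sketch using SciPy's Hungarian-algorithm routine. The assumption, matching the setup above, is that the cost between a piece and a goal cell is their Manhattan (orthogonal-move) distance.

import numpy as np
from scipy.optimize import linear_sum_assignment

def min_moves(pieces, goals):
    # pieces, goals: equal-length lists of (row, col) cells
    cost = np.array([[abs(pr - gr) + abs(pc - gc) for (pr, pc) in pieces]
                     for (gr, gc) in goals])
    rows, cols = linear_sum_assignment(cost)  # minimum-cost bipartite matching
    return cost[rows, cols].sum()

# Toy example: two pieces on a 4x4 grid moving to the opposite corners.
print(min_moves([(0, 0), (3, 3)], [(0, 3), (3, 0)]))  # -> 6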
Using this algorithm, we can both solve individual large instances and perform parallel counts of move distances without using a large amount of memory. The
best minimal matching algorithms are O(N^3)
, though, so this is a significantly more compute-intensive method of finding all minimal move distances than the brute-force approach.
This example of a random 12x12 matrix
requires 267 moves to restore to the sorted state, according to this algorithm, with one possible plan
(goal, piece) = (0, 0), (1, 5), (2, 62), (3, 1), (4, 2), (5, 67), (6, 60), (7, 3), (8, 4), (9, 56), (10, 8), (11, 9), (12, 70), (13, 71), (14, 6), (15, 7), (16, 65), (17, 12), (18, 49),
(19, 13), (20, 14), (21, 15), (22, 50), (23, 51), (24, 10), (25, 63), (26, 11), (27, 64), (28, 20), (29, 66), (30, 21), (31, 69), (32, 28), (33, 61), (34, 16), (35, 17), (36, 18), (37,
58), (38, 19), (39, 48), (40, 59), (41, 54), (42, 55), (43, 22), (44, 23), (45, 24), (46, 25), (47, 26), (48, 57), (49, 46), (50, 52), (51, 31), (52, 32), (53, 27), (54, 42), (55, 68),
(56, 37), (57, 29), (58, 30), (59, 44), (60, 45), (61, 40), (62, 47), (63, 35), (64, 53), (65, 41), (66, 36), (67, 43), (68, 33), (69, 38), (70, 34), (71, 39)
. That is, piece 0 moves left twice into position 0, piece 5 moves left seven times into position 1, position 2 is filled from below, position 3 is filled with the piece "1" already in
it. But following the procedure above we can see that piece "5" and "1" will be renamed if we carry out this sequence. (I haven't written the code to actually simulate playing all 267
moves to verify, so there may be an error lurking here but I'm pretty confident in the proof above.)
There might be some way to tighten up the matching to prefer non-crossing moves, for example by increasing "1" to "1.001", "2" to "2.003", "3" to "3.006", etc., to introduce a bias for
plans which perform two "real" moves instead of a longer one which will get broken up.
Unfortunately this practical algorithm doesn't seem to provide any hints toward enumerating the distribution of minimal distances for various sizes of board.
Thursday, August 22nd, 2013
9:03 pm Federation Ethics Make No Sense
We're up to
Doctor Bashir, I Presume?
in Deep Space 9. The episode is about how Doctor Bashir's parents had him genetically enhanced as a child, which is illegal within the Federation. As a result, Bashir may have to leave
Starfleet and lose his medical license?!?
This is bogus on multiple levels, and kind of troubling if you take it seriously.
What would be interesting is to have some Star Trek in which this weird hang-up actually mattered. Are other civilizations which practice genetic engineering unwelcome within the
Federation? It could be viewed, within the wider galaxy, as a pretty extreme political stance, a result of a particular historical accident within Human history, whose lessons are not
more broadly applicable.
Or which abnormalities count as "serious"? You don't see anybody in Star Trek wandering around with glasses. It's obviously easier to wave away cancer-causing genes once you've got a
cure for cancer.
In what ethical system is it OK to blame the child for this, anyway? I can't think of any better way to encourage potential Khans to antisocial behavior than to close the door to all
professional and political success. (What is with the obsession with Asians as genetic supermen?) This bit of boneheaded worldbuilding is profoundly pessimistic, suggesting the
Federation couldn't find a place for a non-expansionary Borg (too inhuman) and wouldn't have let Data into Starfleet either (too artificial.) Heaven help them if they encountered a
machine race that wanted to team up.
This season of Deep Space Nine has really been bothering me by taking a reactionary approach to any politics that come up; the most painfully earnest TNG or TOS episode is far
preferable. Chemical warfare, evidently, is just fine with the Federation. And the agreement bringing Bajor into the Federation is to be signed by admirals rather than diplomats (look at
who's at the table in *that* episode.)
Sunday, August 18th, 2013
8:19 pm Scaling
The size of the search space in the
toy problem I described in the last post
is basically (n^2) choose (n^2)/2 for an n x n board. My initial implementation is in-memory only which only works for 3x3 and 4x4. It ought to work for 5x5 too but for some reason
Python (on Windows) just stopped computation rather than giving me an out-of-memory error or finishing.
9C5 = 126
16C8 = 12,870
25C13 = 5,200,300
36C18 = 9,075,135,300
49C25 =~ 6.3 * 10^13
64C32 =~ 1.8 * 10^18
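(A quick check of these numbers with Python's exact binomial; note comb(9, 4) == comb(9, 5), so the floor division below is harmless:)

from math import comb
for n in range(3, 9):
    cells = n * n
    print(cells, "choose", cells // 2, "=", comb(cells, cells // 2))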
What you need for the brute-force algorithm is basically a bit per tile arrangement (permutation) to tell whether you've visited it already, plus a list of the "frontier" of
newly-generated arrangements. In runnable Python (with names assumed for the move set):
next_frontier = []
for x in frontier:
    for move in moves:              # each legal tile-permuting operation
        y = move(x)                 # apply the move to arrangement x
        if y not in visited:        # visited: a set, one entry per arrangement
            visited.add(y)
            next_frontier.append(y)
frontier = next_frontier
So you only need sequential access to the queue of positions at distance D, in order to calculate D+1; that queue can safely be placed in a file.
9 billion bits is just over a gigabyte, thus a "streaming" algorithm where frontier/next are files will scale at least that far. And could even be written in pure Python! The files for
each distance can be stored as a list of numbers representing the positions (no more than 64 GB for this size problem) or we can include some additional metadata if we want to remember
the actual word.
How much further could we go using the same basic algorithm? Well, 49C25 bits is about 7.2 terabytes, so in-memory on a single computer is out. But, we could certainly use an array of
flash drives to store the visited set. (A Tintri T540 has 2.4 TB of raw flash, but you could get 1 TB SSDs if you really want them.) Assuming sufficient parallelism to keep the drives
busy, >=100000 IOPS is feasible, but even if each position required only a single I/O we're talking 20 years. (We might be somewhat smarter about I/Os, but on the other hand the
brute-force approach will visit the same position many times. Even a million IOPS is still 2 years.)
On the other hand, 7.2TB of memory is only about 31 Amazon EC2 cr1.8xlarge instances, so storing a distributed hash table would be feasible and solve the problem in a much shorter period
of time. You would do this in a sort of map/reduce style, distributing the multi-TB list of positions in generation D to each node and have each independently produce its slice of
generation D+1. Each node would categorize each P.X as "visited", "not visited", or "some other node's problem", and then merge all the "not visited" to form the list for the next
iteration. Of course, you'd need at least half a petabyte to store the results, which doesn't come cheaply either. But you could throw money at the problem and it would still be
doable within a reasonable amount of time. We're talking about 4 days to stream all the data at 10Gbps, though, so "reasonable" is probably at least a couple weeks.
So what about 8x8? Well, now the hash table itself is 200 petabytes. Even SETI-at-home style resources aren't adequate for this, as you'd need almost 900,000 computers with 244GB of
memory to implement the straightforward approach. 200 petabytes is about as large as a storage cluster gets today, or larger, I think--- most examples I can find are no more than half
that. (Internet Archive is at about 10 petabytes, Megaupload held about 30 petabytes, Facebook's Hadoop cluster is 100PB physical.) Instead--- assuming you can build out a multimillion
dollar exabyte storage solution for the intermediate steps of the solution--- you'd need to find ways to make CPU-vs-space tradeoffs. The algorithm would have to change at this point,
given today's technology, and would probably consume a nontrivial portion of the Internet's communication capability. (Partitioning a second time by giving each computer only part of D
and part of the visited set doesn't scale because you require N^2 passes to make sure you found all the previously visited positions in D+1.)
I asked earlier this year whether it's still true that computers can get just one "size" larger than humans--- in this example, I think computers can get 3 sizes larger, assuming the 4x4
grid is human-tractable and the 5x5 is not.
Saturday, August 17th, 2013
12:03 am Some preliminary goofing around on tile games
My vacation project has been putting together some Python code to systematically study various tile-permuting operations found in match-3 or sliding puzzle games. What I'm interested in
is: given a particular goal on a square grid (like making a group of N tiles, or arriving at some other in a set of arrangements), how many moves does it take to reach the goal? Are
certain types of moves better suited to particular types of goals?
Monday, July 29th, 2013
5:35 pm Why are middle-aged monarchs preferable to old ones?
I am trying to understand the point of this Economist leader
about how George Alexander Louis will have to wait until 2070 (or later) to become king
, and perhaps the British monarchy should take a page from Queen Beatrix and King Albert, and abdicate into retirement.
What is the benefit of having a monarch who is merely middle-aged instead of one who is elderly? I mean, I don't see the point of a monarch at all. (The Netherlands' royal family is, in
my opinion, a foreign imposition--- the Netherlands have been a monarchy for less time than a republic.) So why does it matter that the younger members of the family will be crown
princes or princesses for a long time? They are hardly likely to stage a coup in order to ascend to the throne.
If you want a young leader, elect one. If you insist on some hereditary ceremonial head of state,
don't complain about the random and arbitrary nature of the succession
. It's like complaining that the kings should have smaller ears or more hair or something.
Thursday, July 25th, 2013
2:02 am Solving stupid integer sequence problems
Problems of the form "what is the next number in the sequence A1, A2, A3, A4, ..." are dumb. They don't rely upon mathematical ingenuity. They depend on some social construct about what
the "right" answer ought to be, in terms of simplicity or "things you might be expected to know". And yet they are pretty popular puzzles.
And, of course, mathematical pseudo-sophisticates make appeals to Kolmogorov complexity, which is unhelpfully uncomputable.
What I would like instead is a collection of mechanisms for proving such questions meaningless--- that is, tools for constructing an arbitrary fit for an integer sequence. Here are a few
to kick things off:
1. Fit the K integers as successive values x=1, x=2, x=3, ..., x=k of a polynomial of order N. There are N+1 coefficients, so when N+1 >= K we should be able to find an exact fit (except
maybe in some degenerate cases not immediately obvious to me) and for N+1 > K we can find an arbitrary number of polynomials.
2. Take N=ceil(log2(max(Ai))). Then we can view the sequence as the operation of N separate N-bit binary functions. Since there are 2^(2^N) possible N-to-1-bit functions and only K
examples, the problem is not particularly constrained, even if we restrict ourselves to "simple" ones.
3. Define A_K as a (K-1)-ary function of its previous arguments. Backfill A_0, A_-1, A_-2 to make the sequence look non-arbitrary.
4. Take differences between terms enough times, until the resulting sequence is short enough to be matched to something trivial.
5. Split the sequence into two or more unrelated sequences that have been interleaved (which can be done in a variety of ways.)
6. Construct a rational number such that A1 A2 A3 ... AK (suitably zero padded) are the repeating digits (or initial digits, or a combination) of its decimal expansion.
7. Construct a number such that A1, A2, A3, ... AK are the first terms in its continued fraction representation.
Any other suggestions?
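As a quick illustration of mechanism 1 (an editorial sketch using NumPy; a least-squares fit of degree K-1 through K points is exact): the infamous 1, 2, 4, 8, 16 continues as 31, not 32, under its interpolating polynomial.

import numpy as np

seq = [1, 2, 4, 8, 16]
coeffs = np.polyfit(range(1, len(seq) + 1), seq, deg=len(seq) - 1)
print(round(np.polyval(coeffs, 6)))  # -> 31, not 32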
Monday, July 22nd, 2013
12:25 pm Capital Raising Through Arm Twisting
Today's "On Business" column by Neal St. Anthony discusses
a plan to create a local fund-of-funds for early stage venture capital
Local venture capitalists and fund managers have been quietly urging Gov. Mark Dayton’s administration to embrace a privately financed and operated equity fund....
The idea would be for flagship Minnesota institutions, such as U.S. Bancorp, Thrivent Financial and Ameriprise, to perhaps pledge several million dollars each, followed by
foundations and perhaps affluent individuals. A fund-of-fund manager, operating under agreed-upon guidelines, would disburse the money to Minnesota and other private equity and
venture capital managers.
There's nothing wrong with a little cheerleading for local investment. That sort of boosterism is certainly the governor's job. But showing up at U.S. Bancorp's door and asking them to
invest more money with local VCs? That goes a bit too far for me. Are the VCs going to give up their 2-and-20 on the deal, too? The article doesn't say, and hints at some desire for
public-sector involvement.
It would be one thing if Minnesota didn't have venture capitalists to provide early-stage funding--- but we do. Capital crosses borders (especially state borders!) easily, and if local
VCs can't raise enough money for their investment funds, they're failing at their job. Why bail them out with cash from Minnesota's successful large companies?
Now, don't get me wrong--- I'd much rather have had a $348m state venture capital fund than a new Vikings stadium. But it's not clear to me that this sort of deal is on the correct side
of the line between "the governor provides a little bit of social lubricant for a good cause" and back-room cronyism. Maybe I'm hypocritical, but I'd rather have the state pony up some
money if this is a social good, than have VCs propped up by private investments that otherwise wouldn't get made.
Time & Distance Aptitude Questions With Answers
1) A train covers a distance in 50 min if it runs at an average speed of 48 kmph. The speed at which the train must run to reduce the time of journey to 40 min will be:
1. Solution::
Time = 50/60 hr = 5/6 hr
Distance = 48 * 5/6 = 40 km
New time = 40 min = 2/3 hr
New speed = 40 * 3/2 kmph = 60 kmph
2) Vikas can cover a distance in 1 hr 24 min by covering 2/3 of the distance at 4 kmph and the rest at 5 kmph. The total distance is?
2. Solution::
Let the total distance be S; total time = 1 hr 24 min = 7/5 hr
(2/3)S / 4 + (1/3)S / 5 = 7/5
S/6 + S/15 = 7/5
7S/30 = 7/5
S = 6 km
3) Walking at 3/4 of his usual speed, a man is late by 2 1/2 hr. The usual time is?
3. Solution::
Usual speed = S, usual time = T, distance = D
New speed is (3/4)S, so new time is (4/3)T
(4/3)T - T = 5/2
T/3 = 5/2
T = 15/2 = 7 1/2 hr
4) A man covers a distance on a scooter. Had he moved 3 kmph faster he would have taken 40 min less. If he had moved 2 kmph slower he would have taken 40 min more. The distance is?
4. Solution::
Let distance = x km and usual speed = y kmph
x/y - x/(y+3) = 40/60 hr, so 2y(y+3) = 9x ----- (1)
x/(y-2) - x/y = 40/60 hr, so y(y-2) = 3x ----- (2)
Dividing (1) by (2): 2(y+3)/(y-2) = 3, giving y = 12
Then 3x = 12 * 10 = 120, so x = 40 km
5) Excluding stoppages, the speed of the bus is 54 kmph; including stoppages, it is 45 kmph. For how many minutes does the bus stop per hour?
5. Solution::
Due to stoppages, it covers 9 km less per hour.
Time taken to cover 9 km is (9/54 * 60) min = 10 min
6) Two boys starting from the same place walk at a rate of 5 kmph and 5.5 kmph respectively. What time will they take to be 8.5 km apart, if they walk in the same direction?
6. Solution::
The relative speed of the boys = 5.5 kmph - 5 kmph = 0.5 kmph
Distance between them is 8.5 km
Time = 8.5 km / 0.5 kmph = 17 hrs
7) Two trains starting at the same time from two stations 200 km apart and going in opposite directions cross each other at a distance of 110 km from one of the stations. What is the ratio of their speeds?
7. Solution::
In the same time, they cover 110 km and 90 km respectively,
so the ratio of their speeds = 110:90 = 11:9
8) Two trains start from A and B and travel towards each other at speeds of 50 kmph and 60 kmph respectively. At the time of their meeting, the second train has traveled 120 km more than the first. The distance between them is?
8. Solution::
Let the distance traveled by the first train be x km;
then the distance covered by the second train is x + 120 km.
x/50 = (x + 120)/60
x = 600
So the distance between A and B is x + x + 120 = 1320 km
9) A thief steals a car at 2.30 pm and drives it at 60 kmph. The theft is discovered at 3 pm and the owner sets off in another car at 75 kmph. When will he overtake the thief?
9. Solution::
Let the thief be overtaken x hrs after 2.30 pm.
Distance covered by the thief in x hrs = distance covered by the owner in (x - 1/2) hr
60x = 75(x - 1/2)
x = 5/2 hr
The thief is overtaken at 2.30 pm + 2 1/2 hr = 5 pm
10) In covering a distance, the speeds of A and B are in the ratio 3:4. A takes 30 min more than B to reach the destination. The time taken by A to reach the destination is?
10. Solution::
Ratio of speeds = 3:4, so ratio of times = 4:3
Let A take 4x hrs and B take 3x hrs.
4x - 3x = 30/60 hr
x = 1/2 hr
Time taken by A = 4x = 4 * 1/2 = 2 hr
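(A quick cross-check of problem 4 with SymPy; this snippet is an editorial addition, not part of the original worksheet.)

from sympy import symbols, Eq, Rational, solve

x, y = symbols("x y", positive=True)
eq1 = Eq(x / y - x / (y + 3), Rational(2, 3))  # 40 min saved at +3 kmph
eq2 = Eq(x / (y - 2) - x / y, Rational(2, 3))  # 40 min lost at -2 kmph
print(solve([eq1, eq2], [x, y]))  # [(40, 12)]: 40 km at a usual 12 kmph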
maths stats - conditional density function
I'm just very confused about conditional density function. like the question is to show that Ey[E(X|Y)] = E(X)
        x=0    x=1    x=2    x=3
y=0    0.04   0.05   0.06   0.07
y=1    0.06   0.08   0.07   0.09
y=2    0.10   0.11   0.13   0.14
well i think that E(X) is
0(0.04) + 0(0.06) + 0(0.1) + 1(0.05) + 1(0.08) + 1(0.11) + 2(0.06) + 2(0.07) + 2(0.13) + 3(0.07) + 3(0.09) + 3(0.14) = 1.66
and that E(X|Y) = P(x,y)/P(y)
but the thing is that i dont know how to calculate P(x,y). Can someone explain it to me please?
Quoting the original post, "and that E(X|Y) = P(x,y)/P(y)": this is p(x|y), not E(X|Y).
I don't understand your table.
Is that p(x,y)?
yeahh it is, but the thing is i really dont understand what my p(x,y) is supposed to be.. like is it just one number? like the row totals or something?
ohhh and umm im really dumb, so what is the difference between p(x|y) and E(X|Y)? what is it meant to be? thanks
YOUR table is hard to read, but it looks like all the numbers add to one.
SO, that is p(x,y). For example P(X=0,Y=0)=.04....
Y can be 0,1,2 so there are 3 values for E(X|Y=y), depending on y
$E(X|Y=0)=\sum_{x=0}^3 xP(X=x|Y=0)$
ahhh icic thank you. :]. but how do i show that Ey[E(X|Y)] = E(X)?
do i just do:
$E(X|Y=0)=\sum_{x=0}^3 xP(X=x|Y=0)$
$E(X|Y=1)=\sum_{x=0}^3 xP(X=x|Y=1)$
$E(X|Y=2)=\sum_{x=0}^3 xP(X=x|Y=2)$
and then add them up?
$E(X)=\sum_x\sum_y xp(x,y)$
YOU need to be careful with that y in
$E(E(X|Y))=\sum_y E(X|Y=y))P(Y=y)$
Just consider E(X|Y=y)) as g(y) and
$E(g(Y))=\sum_y g(y) P(Y=y)$
ahhh okies, thank you!!
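(A numerical check of E[E(X|Y)] = E(X) for the table in this thread, added for reference:)

import numpy as np

p = np.array([[0.04, 0.05, 0.06, 0.07],   # rows: y = 0, 1, 2
              [0.06, 0.08, 0.07, 0.09],   # cols: x = 0, 1, 2, 3
              [0.10, 0.11, 0.13, 0.14]])
x = np.arange(4)
p_y = p.sum(axis=1)                        # P(Y = y)
e_x_given_y = (p * x).sum(axis=1) / p_y    # E(X | Y = y)
print((e_x_given_y * p_y).sum())           # 1.66 = E[E(X|Y)]
print((p.sum(axis=0) * x).sum())           # 1.66 = E(X)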
Is there an angle measuring more than 360 degrees? What do you call it? My professor said there is no such thing. But I doubt it.
If you notice, more than 360 degrees would start another cycle.
361 degrees would be 1 degree.. 362 = 2, 363 = 3, and so on
ok thanks! :)
And wait... that's where modular arithmetic comes in
I asked this because I am seeing angles more than 360 degrees sometimes :)
If you want to find an angle measure more than 360, it'd be \(\mod{(x,360)}\)
What about modular arithmetic?
Yeah, 720 degrees is again 360 degrees or 0 degrees :)
For example, 480 degrees is actually \(\mod({480,360}) = 120^{\circ}\)
ohh I don't know that thing
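(In code, the reduction described above is just the modulo operation; a tiny illustrative snippet:)

for angle in (361, 480, 720):
    print(angle, "->", angle % 360)  # 361 -> 1, 480 -> 120, 720 -> 0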
Hey did you use photoshop to make that thing blue?
I don't use Photoshop :)
But I think it is fun knowing things like that
How did you make that? :O
the modular arithmetic
I mean the background is supposed to be grey
what that?
Your display pic... how is the background blue? Photoshop?
ohh my pic. I copied it here. Is it not allowed?
No, it is but how did you make the background blue?
i'll remove it if it is not allowed :)
No no.. it is completely allowed
Just wanted to know how you made the background blue :)
I copied it from here (openstudy.com) when openstudy was not yet like this. I copied it from the openstudy feedback section. I think that time is when openstudy is just starting and the homepage is showing already the sections. The homepage has different kinds of pictures from the different sections then I saw this picture from the openstudy feedback sections and I copied it. I think Physics section has a atomic model pic and chemistry has a graduated cylinder.
Nice stuff man
Withdrawal from a fluid of finite depth through a line sink, including surface-tension effects
Hocking, G.C. and Forbes, L.K. (2000) Withdrawal from a fluid of finite depth through a line sink, including surface-tension effects. Journal of Engineering Mathematics, 38 (1). pp. 91-100.
The steady withdrawal of an inviscid fluid of finite depth into a line sink is considered for the case in which surface tension is acting on the free surface. The problem is solved numerically by use of a boundary-integral-equation method. It is shown that the flow depends on the Froude number, F_B = m(gH_B^3)^{-1/2}, the nondimensional sink depth H_S/H_B (where m is the sink strength, g the acceleration of gravity, H_B the total depth upstream, and H_S the height of the sink), and the surface tension, T. Solutions are obtained in which the free surface has a stagnation point above the sink, and it is found that these exist for almost all Froude numbers less than unity. A train of steady waves is found on the free surface for very small values of the surface tension, while for larger values of surface tension the waves disappear, leaving a waveless free surface. If the sink is a long way off the bottom, the solutions break down at a Froude number which appears to be bounded by a region containing solutions with a cusp in the surface. For certain values of the parameters, two solutions can be obtained.
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 346.10027
Author: Cohen, S.D.; Erdös, Paul; Nathanson, M.B.
Title: Prime polynomial sequences. (In English)
Source: J. London Math. Soc., II. Ser. 14, 559-562 (1976).
Review: Let F(x) be a polynomial of degree d \geq 2 with integral coefficients and such that F(n) \geq 1 for all n \geq 1. Let G_F = {F(n)}_{n=1}^\infty. Then F(n) is called composite in G_F if F(n)
is the product of strictly smaller terms of G_F. Otherwise F(n) is prime in G_F. It is proved that, if F(x) is not of the form a(bx+c)^d, then almost all members of G_F are prime in G_F. More
precisely, if C(x) denotes the number of composite F(n) in G_F with n \leq x, then, for any \epsilon > 0, it is shown that C(x) << x^{1-(1/d^2)+\epsilon}. For monic quadratics an identity implies
that C(x) >> x^{1/2}, so that in this case x^{1/2} << C(x) << x^{3/4+\epsilon}. On the other hand, it is easy to construct polynomials for which C(x) = 0 for all x. In general, the exact order
of C(x) is unknown.
Classif.: * 11N13 Primes in progressions
11B83 Special sequences of integers and polynomials
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
Sediment transport models
The hydrological, geomorphological, environmental and ecological state of streams and rivers occur over a range of spatial and temporal scales and are good indicators of the health of the system.
Healthy riparian and animal communities depend on the change in flows, shifting channels, and moving sediments to provide inputs of organic and mineral materials. These same drivers also are key to
physically shaping the stream or river system; they are what form, maintain, and alter the channels. All surface water resource projects impose some changes on this dynamic system whether it be water
velocity and depth; the concentration and size of sediment particles moving with the water; or the width, depth, slope, hydraulic roughness, planform and lateral movement of the stream channel (USAC,
1989). Understanding this is an important, but sometimes overlooked, part of stream restoration, which is where sediment transport modeling comes in.
Channels are formed, maintained, and altered by two things: flows and sediment loads. Equilibrium is achieved through a balance of four factors: sediment discharge, sediment particle size, streamflow, and stream slope (Lane, 1954). This balance is shown in Equation 1, reconstructed here from the variables defined below:

$$Q_s \, D_{50} \propto Q_w \, S \qquad (1)$$
This equation qualitatively states that the sediment load, which is the first half of the equation, is proportional to the stream power which is represented by the second half of the equation.
Equilibrium occurs when the streamflow power is constant over the length of the stream resulting in zero change in the shape. By changing any term on either side of the equation, the balance is
shifted and one or more of the other variables must compensate for this, as shown in Figure 1. Reaching equilibrium usually involves erosion (Loucks, 2005). An example is a stream below a dam - the
effluent from the dam is going to be sediment starved, so the initial $Q_s$ is low. $Q_w$, the streamflow, can't be naturally adjusted, so equilibrium is reached through changes in channel slope, $S$, and the mean sediment particle size, $D_{50}$, and by picking up sediment from the channel bed immediately below the spillway. Bed armoring can result.
Figure 2 - Types of sediment loads in rivers (Loucks, 2005)
An equilibrating stream tends to erode more sediment and larger particle sizes, resulting in erosion and downcutting in some areas and aggradation in others. The
channel evolution model
helps explain this. The transported sediment can be dissolved, suspended and pushed along by saltation and traction. The suspended load is usually the fine particles that make a stream look muddy,
like silt and clay, and can make up as much as 95% of the sediment carried by the stream (Loucks, 2005). Saltation and traction are the two processes that form the bedload (see Figure 2, Loucks, 2005).
For saltation and sliding or traction to occur the flow must reach a critical velocity that is dependent on the particle size and material. This corresponds to shear stress and is fundamental in
sediment transport
and the modelling process.
Bed Shear Stress
For a particle to become entrained, the bed or boundary shear stress caused by the water flowing parallel to the stream bed must overcome a critical shear stress. It can be thought of as a force
balance - if the applied force of the water (primarily hydrodynamic drag, $F_D$, but also hydrodynamic lift, $F_L$) overcomes the resistive force of the submerged weight of the particle, $F_W$, the particle will become entrained, as shown in Figure 3. The threshold when the two forces are equivalent is the critical condition at which the applied forces are just balancing the resisting
forces (Chang, 1988).
Figure 3: Forces acting on a particle (van Rijn, 1984)
The following is a breakdown of the different forces, with the equations reconstructed in their standard form (cf. Chang, 1988) since the originals did not survive formatting. The drag force is

$$F_D = c_1 \tau_0 d^2 \qquad (2a)$$

where $c_1$ is a constant found experimentally, $\tau_0$ is the bed shear stress or tractive force, and $d$ is the grain diameter. The effective surface area that the shear stress is exerted upon is proportional to $d^2$. This force acts at the center of gravity of the particle. The submerged weight is

$$F_W = c_2 (\gamma_s - \gamma) d^3 \qquad (2b)$$

Here $c_2$ is also a constant found experimentally, $\gamma_s$ is the specific weight of the sediment, and $\gamma$ is the specific weight of water. If the bed of the channel is sloped, the angle formed with the horizontal is designated as $\alpha$, and $\theta$ is the angle of repose or friction angle between the submerged particles. Right before the particle starts to move, the resultant of these forces is in the direction of the friction angle, so the ratio of forces acting parallel to the bed versus those normal to the bed is equal to $\tan\theta$. This simplifies to Equation 2c:

$$\frac{F_D}{F_W} = \tan\theta \qquad (2c)$$

Combining Equations 2a, 2b, and 2c results in the critical shear stress, Equation 3:

$$\tau_c = \frac{c_2}{c_1} (\gamma_s - \gamma)\, d \cos\alpha \,(\tan\theta - \tan\alpha) \qquad (3)$$

And for a horizontal bed ($\alpha = 0$):

$$\frac{\tau_c}{(\gamma_s - \gamma)\, d} = \frac{c_2}{c_1} \tan\theta \qquad (4)$$

where the left hand side represents the ratio of the hydrodynamic force versus the submerged weight (Chang, 1988).
Figure 4: Shield's Diagram (Cao, 2006)
Shield's Diagram
Dimensional analysis on a particle leads to two dimensionless numbers: the Shields stress,

$$\tau_* = \frac{\tau_c}{(\gamma_s - \gamma)\, d}$$

which is essentially Equation 4 re-written, and a dimensionless viscosity, or grain Reynolds number,

$$Re_* = \frac{u_* d}{\nu}$$

where $u_* = \sqrt{\tau_0/\rho}$ is the shear velocity and $\nu$ is the kinematic viscosity, making the number a function of the particle diameter and density.
Shields obtained a relationship between the dimensionless critical stress and the dimensionless viscosity using experimental data. This is known as the Shields Diagram (see Figure 4). The variation demonstrates the effect of fluid viscosity on grain movement. Since $\tau_c$ can't be measured directly, this empirical method gives a criterion for incipient motion while only needing to solve for a specific version of the Reynolds number, usually through trial and error with a known set of sediment and fluid values. The Shields diagram holds for sand, gravel, and other non-cohesive particles, but is not as effective for clays and fines that clump together, because fines are more poorly sorted, have electrostatic forces, and are regulated more by turbulent movement (Chang, 1988 and Wilcock, 2004). The Bureau of Reclamation has a great manual with more information on the differences between cohesive and non-cohesive sediment transport.
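As a practical aside (an editorial sketch, not from the sources above): rather than reading the Shields curve by eye, one widely used explicit fit is the Soulsby-Whitehouse formula, which gives the critical Shields stress directly from a dimensionless grain size and so avoids the trial-and-error loop.

import math

def critical_shear_stress(d, rho_s=2650.0, rho=1000.0, nu=1.0e-6, g=9.81):
    # Critical bed shear stress (Pa) for grain diameter d (m), via the
    # Soulsby-Whitehouse fit to the Shields curve (non-cohesive grains only).
    s = rho_s / rho
    d_star = d * (g * (s - 1) / nu**2) ** (1.0 / 3.0)
    theta_cr = 0.30 / (1 + 1.2 * d_star) + 0.055 * (1 - math.exp(-0.020 * d_star))
    return theta_cr * (rho_s - rho) * g * d

print(critical_shear_stress(0.001))  # roughly 0.5 Pa for 1 mm quartz sand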
Sediment Transport Capacity
Figure 5: Grain sizes associated with bed load, bed-material load, suspended load, and wash load. (Wilcock, 2009)
Transport capacity is the rate at which the stream or river moves sediment at a given flow. As mentioned previously, the two main mechanisms of sediment transport are: 1) bed load, where the grains
move along the bed by sliding, rolling, or hopping; and 2) suspended load, where grains are picked up off the bed and move along a more turbulent path. In many streams, grains smaller than 1/8 mm are
always suspended while grains greater than 8 mm travel as bed load. The strength of flow determines the transport mechanism of grains in between these two sizes. Sediment transport can also be
categorized based on the source of the grains: 1) bed material load, which is grains found in the stream bed; and 2) wash load, which is finer grains found as less than a percent or two of the total
amount in the bed (Wilcock, 2009). Figure 5 provides a good visual of the different grain sizes associated with each transport mechanism. For this paper, bed load and suspended load will be the main
transport mechanisms considered. It is also important to keep in mind that the boundary between the two is not absolute; it really depends on the flow strength. Formulas used to describe the two for
steady uniform flow have been developed based on field calibrations and flume data. The most commonly used follow.
Bedload Formulas
Several bedload equations and their assumptions follow.
TABLE 1 (contents modified from Chang, 1988; the formulas, lost in formatting, are reconstructed here in their standard forms)

DuBoys Formula:
$$q_b = C_d \, \tau_0 \,(\tau_0 - \tau_c)$$
Concept: relates bed-load discharge per unit channel width, $q_b$, to the excess shear stress, $\tau_0 - \tau_c$. $C_d$ and $\tau_c$ were obtained from experiments in small lab flumes.
Assumptions: 1. Uniform sediment grains move as superimposed layers of equal thickness. 2. At the threshold for incipient motion only one layer moves (n = 1).

Shields Formula:
$$\frac{q_b \gamma_s}{q \gamma S} = 10\, \frac{\tau_0 - \tau_c}{(\gamma_s - \gamma)\, d}$$
Concept: also based on the excess shear stress. The left hand side of this equation represents the dimensionless bed-load discharge while the right hand side lumps together the excess shear stress and the submerged weight of the sediment particle.
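(A toy reading of the DuBoys relation, with a made-up calibration constant; real values of $C_d$ come from flume experiments keyed to grain size:)

def duboys_qb(tau0, tau_c, c_d=0.17e-3):
    # bed-load discharge per unit width; zero below the entrainment threshold
    return c_d * tau0 * max(0.0, tau0 - tau_c)

print(duboys_qb(2.0, 0.5))  # 5.1e-04 in the units implied by c_d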
Suspended Load Formulas
Einstein developed the method most commonly used to evaluate the suspended load. For more information, see the
Sediment Transport
page. It is important to have a general understanding of the different equations (and there are many!) and what assumptions are made for each as well as what conditions they are best suited for when
doing sediment transport modelling. Most modelling programs have different transport methods that can be chosen based on whether the channel is sand or gravel, the grain size distribution, or how
well sorted the bed is.
Numerical Models
The Shields Diagram gives an empirical way of approximating the threshold of sediment motion, which is good for getting an estimate or basic understanding of the system with limited data. For a more in-depth
understanding, numerical models are used with increasing frequency. The selection and application of the model is strongly dependent on the type and scale of the problem being studied. There are
initial or sediment transport models that compute the sediment transport rate and bed level changes for one time step, resulting in a short-term prediction, and there are dynamic morphological models
that compute the flow velocities, wave heights, sediment transport rates, bed level changes and velocities as a continuous loop. There are models that look specifically at bed deformation, some look
at channel evolution/
bank stability
, and then there are others that combine the two. There are also one, two and three dimensional simulation methods (van Rijn, 1993).
1-D Models
One dimensional models are commonly used in situations where the flow field shows little variation over the cross-section, like flow in some river systems. This is the most commonly used method for
sediment transport studies.
CCHE1D is one of the commonly used, free 1-D transport models available. One really nice thing about CCHE1D is that it can be integrated with GIS to process topographic data and generate model input data.
The governing equations are the 1-D St. Venant equations (Equations 6 and 7), a sediment continuity equation (Equation 8), and a channel bed deformation equation (Equation 9), shown below in schematic standard form (see Wu, 2002 for the exact multi-fraction versions):

$$\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q \qquad (6)$$

$$\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\beta \frac{Q^2}{A}\right) + gA\left(\frac{\partial h}{\partial x} - S_0 + S_f\right) = 0 \qquad (7)$$

$$\frac{\partial (A C)}{\partial t} + \frac{\partial Q_s}{\partial x} = q_s \qquad (8)$$

$$(1 - p')\,\frac{\partial A_b}{\partial t} = D_s - E_s \qquad (9)$$

Here $S_f$ is the friction slope, $C$ is the sediment concentration, $Q_s$ is the sediment discharge, $q_s$ is the lateral sediment input, $p'$ is the bed-material porosity, $A_b$ is the bed area, and $D_s$ and $E_s$ are the deposition and entrainment rates.
Figure 6: Computational Cell (Wu, 2002)
Figure 7: Extracted Channel Network, Goodwin Creek (Wu, 2002) Figure 8: Refined Computational Grid for Goodwin Creek (Wu, 2002)
For these equations, x and t are the spatial and temporal axes; A is the flow area; Q is the flow discharge; h is the flow depth; $S_0$ is the bed slope; $\beta$ is a correction coefficient for the momentum due to the nonuniform velocity distribution at the cross section; g is the gravitational acceleration; and q is the side discharge per unit channel length. To actually model this, the continuity equation and the momentum equation (Equation 7) are both discretized, or broken from continuous equations into a series of discrete "nodes" as shown in Figure 6, using the Preissmann four-point scheme, and then solved iteratively.
Basically, it takes input as shown in Figure 7 and then assigns nodes along the stream. To solve these equations, upstream boundary conditions at the inlets and downstream conditions at the outlet
of the channel network as well as internal conditions at confluences and hydraulic structure locations are necessary. The inflow boundary is defined by either a hypothetical hydrograph or a given
time series of discharge. For the outflow, a stage-discharge curve or a time series of stage is imposed. Flows through structures are complicated and difficult to simulate using 1-D models so
simplifications are made. An example of the output is shown below in Figure 9 (Stone et al., 2007 and Wu, 2002).
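To make the discretization concrete, here is a schematic sketch of the Preissmann four-point weighting (the function names and the theta value are illustrative assumptions, not CCHE1D's actual code):

def preissmann_dt(f_i_n, f_ip1_n, f_i_np1, f_ip1_np1, dt):
    # time derivative averaged over the box between nodes i and i+1
    return ((f_i_np1 + f_ip1_np1) - (f_i_n + f_ip1_n)) / (2.0 * dt)

def preissmann_dx(f_i_n, f_ip1_n, f_i_np1, f_ip1_np1, dx, theta=0.6):
    # space derivative, theta-weighted between time levels n and n+1
    return (theta * (f_ip1_np1 - f_i_np1)
            + (1 - theta) * (f_ip1_n - f_i_n)) / dx

Each term of Equations 6 and 7 is replaced by one of these box averages, which couples the unknowns at time level n+1 and yields a banded system solved iteratively.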
Figure 9: Measured and computed annual sediment yield for Goodwin Creek (Stone, 2007)
HEC-RAS (SIAM)
also uses St. Venant equations and the Preissmann four-point scheme to describe flow. Transport potential is computed by grain size fraction which allows simulation of non-uniform sediment movement
and bed material size change. This is especially applicable to get an understanding of the long-term effects of scour and deposition. It is best suited to steady, equilibrium sediment transport. This
model works well for making a sediment budget analysis (Stone et al., 2007).
SAM is another commonly used tool developed for the Army Corps of Engineers available through Owen Ayers & Associates, Inc. The main purpose of SAM is to calculate stable channel dimensions that will
pass a prescribed sediment load without deposition or erosion. It uses a package of three different design modules, SAM.hyd; SAM.sed; and SAM.yld that each build on one another, starting with
SAM.hyd. SAM.hyd can solve for any of the variables in the uniform flow equation depending on what the user specifies as the dependent variable. The default channel method (alpha) assumes steep
banks are vertical and have no influence so calculations are performed for the channel bed only which can cause a huge variation in results for channels that are narrow and steep. SAM.sed uses
hydraulic input that is either from SAM.hyd or user specified along with bed gradation to calculate a sediment discharge rating curve. The sediment transport function is applied at a point which does
not allow for variability in sediment distribution with time or space so it's possible that the calculated transport rates are inaccurate. It is important to choose the proper sediment transport
equations based on the bed gradation (i.e, gravel vs. sand, well sorted vs. poorly). SAM.aid, a module for use with the SAM package, is helpful in determining what transport function to use based on
the stream conditions. SAM.aid is especially useful for low-budget projects because it can be used with limited field data. SAM.yld calculates the sediment that passes through the cross section for
some time period, be it a flood event/single storm or an entire year. The sediment discharge rating curve created by SAM.sed is used in conjunction with the flow duration curve or hydrograph to get a
representative value of sediment discharge
(Thomas, 2002).
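The SAM.yld step can be pictured as integrating a sediment discharge rating curve over a flow-duration curve. A minimal sketch, assuming a power-law rating Qs = a*Q**b in tons/day (the coefficients and flow bands here are illustrative assumptions, not SAM's internals):

import numpy as np

def annual_sediment_yield(discharges, time_fractions, a=0.01, b=2.0):
    # Weight the rating curve by the fraction of the year each flow band occurs.
    q = np.asarray(discharges, dtype=float)      # representative discharge of each band
    w = np.asarray(time_fractions, dtype=float)  # fraction of the year in each band (sums to 1)
    qs_tons_per_day = a * q ** b
    return float(np.sum(qs_tons_per_day * w) * 365.0)

# Example: three flow bands covering 70%, 25%, and 5% of the year:
# annual_sediment_yield([10.0, 50.0, 200.0], [0.70, 0.25, 0.05])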
The main difference between SAM and CCHE1D is that SAM represents the system as an average. This simplification makes it easy and quick to use, but limits its usefulness. It is mostly used as a planning tool that can help determine the slope, channel design, rip-rap, etc. for a stable channel.
Other available 1-D models include Mike 11.
2-D Models
Two-dimensional models can be either depth-averaged (2DH) or laterally averaged, i.e., resolved in the vertical plane (2DV). Depth-averaged simulations are useful when the flow field has no significant variation in the vertical direction and the fluid density is constant. For stream modelling these are useful because they allow properties associated with non-uniform, meandering flow and flow near hydraulic structures to be incorporated. There is also no need for a momentum correction coefficient, unlike in most 1-D models. 2-D flow in the vertical plane is useful when the flow is uniform in one lateral direction but has significant variations in the vertical direction, such as flow across trenches or long-crested dunes (van Rijn, 1993; Stone et al., 2007).
CCHE2D uses the depth-integrated, two-dimensional flow momentum equation for turbulent flow in Cartesian coordinates.
Equation 10 is the depth-integrated continuity equation, which is used to calculate the free surface elevation of the flow. For Equations 10-12, U and V are the depth-integrated velocity components in the x and y directions, respectively; t is the time; g is gravitational acceleration; h is the local water depth; ρ is the water density; fcor is the Coriolis parameter; τxx, τxy, τyx, τyy are the depth-integrated Reynolds stresses; and τbx, τby are the shear stresses on the bed surface. The stress closure is represented by Equation 13, shown below.
Here vt is the eddy viscosity coefficient. The stresses are approximated using the assumption that they are related to the mean rate of strain of the depth-averaged flow field through a coefficient of eddy viscosity.
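In code, this Boussinesq-type closure amounts to multiplying the mean strain rates of the depth-averaged field by the eddy viscosity. A sketch in Python/NumPy (kinematic stresses, i.e., divided by the density; the grid spacing and eddy viscosity values are placeholders):

import numpy as np

def depth_averaged_stresses(U, V, dx, dy, vt):
    # Kinematic turbulent stresses from the depth-averaged velocities U(y, x)
    # and V(y, x) via the eddy-viscosity closure.
    dUdy, dUdx = np.gradient(U, dy, dx)   # axis 0 varies in y, axis 1 in x
    dVdy, dVdx = np.gradient(V, dy, dx)
    txx = 2.0 * vt * dUdx
    tyy = 2.0 * vt * dVdy
    txy = vt * (dUdy + dVdx)              # equals tyx for this symmetric closure
    return txx, txy, tyy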
Equations 14-16 are used for the sediment transport processes, where Ck is the suspended sediment concentration in the kth size fraction and the remaining coefficient is the turbulence (eddy) diffusivity. These equations are solved by discretization, as explained for the 1-D models. The method used by CCHE2D is called the Efficient Element Method.
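To show what "solved by discretization" means here, below is a deliberately simple explicit finite-difference step for the depth-averaged advection-diffusion of one concentration field. This is a toy scheme for illustration only (stable only for small time steps) and is not the Efficient Element Method that CCHE2D actually uses:

import numpy as np

def advect_diffuse_step(C, U, V, eps, dx, dy, dt):
    # One explicit update of dC/dt = -U dC/dx - V dC/dy + eps * laplacian(C),
    # with C, U, V given on a regular grid indexed as (y, x).
    Cn = C.copy()
    dCdx = (Cn[1:-1, 2:] - Cn[1:-1, :-2]) / (2.0 * dx)
    dCdy = (Cn[2:, 1:-1] - Cn[:-2, 1:-1]) / (2.0 * dy)
    lap = ((Cn[1:-1, 2:] - 2.0 * Cn[1:-1, 1:-1] + Cn[1:-1, :-2]) / dx**2
           + (Cn[2:, 1:-1] - 2.0 * Cn[1:-1, 1:-1] + Cn[:-2, 1:-1]) / dy**2)
    C[1:-1, 1:-1] = Cn[1:-1, 1:-1] + dt * (-U[1:-1, 1:-1] * dCdx
                                           - V[1:-1, 1:-1] * dCdy
                                           + eps * lap)
    return C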
Examples of output from CCHE2D are shown below in Figure 10 and Figure 11.
Figure 10: CCHE2D simulation of bed change in meandering channel (Stone et al., 2007)
Figure 11: CCHE2D simulation of flow field in East Fork River (Stone et al., 2007)
Other available 2-D models include Mike 21C and SED2D.
3-D Models
Three dimensional models are of particular interest when there is a lot of variation in the vertical direction; structures in a channel are a good example of this. The most general hydrodynamic model
to describe the flow in a specific control volume is a three-dimensional, time-dependent model. The different processes can be described in terms of balances, i.e., mass balance, momentum balance, etc.
Other aspects of the fluid's behavior can be described using empirical equations such as those connecting the fluid density to the temperature and salinity of the fluid. The effects of small-scale
turbulent motions on the time-averaged flow are also represented by empirical equations connecting shear stresses to velocity gradients (eddy viscosity) (van Rijn, 1993).
CCHE3D solves the Navier-Stokes equations. Equation 18 includes the velocity components ui; Fi, the gravity force per unit volume; the fluid density ρ; and the pressure p. The turbulent stresses
are calculated using the turbulent kinetic energy and its dissipation rate. The sediment model equations include 3D sediment advection-diffusion and bed deformation. The equations are discretized and
solved using the Efficient Element Method. Like many other 3D models, it is especially useful for determining scour around hydraulic structures and sediment transport in areas that have strong
spatial variability (i.e., flow in no one direction can be assumed negligible). Figures 12 and 13 show examples of CCHE3D simulations of scour around a cylinder (Stone et al., 2007).
Figure 12: Flow field and scour around cylinder modeled with CCHE3D (Stone et al., 2007)
Figure 13: 3D topography of scour hole using CCHE3D (Stone et al., 2007).
Delft3D is another 3D modelling suite that can be used to model flow, sediment transport and morphology, waves, water quality and ecology, as well as their interactions. It is used mostly for
modelling coastal and estuarine areas where it can be used to understand storm surges, density driven flows, salt intrusions, transport of dissolved material and pollutants, and sediment transport
and morphology, among other things. The ability to model water quality, including the adsorption and desorption of contaminants and the deposition and suspension of adsorbed substances to and from
the bed add a novel and more complicated element to 3D modelling, which is already complicated to begin with. A technical manual is not available for the software since licences are only available
for purchase.
3D sediment transport modelling is still in its infancy and is not common because of how expensive it is, both for data collection and for computation. Other available 3-D models include Mike 3 and EFDC.
Summary of Numerical Models
1-D
Governing equations: St. Venant equations
Assumptions: Average the 3-D equations over the cross-section, so only longitudinal flow is simulated
Pros: 1. Less expensive 2. Data collection is also less costly
Cons: 1. Doesn't give the big picture
Best use: Flow in rivers where there is little variation over the cross section; narrow but shallow streams
Available software: see the 1-D models discussed above (CCHE1D, HEC-RAS, SAM, Mike 11)

2-D
Governing equations: 2-dimensional momentum equations
Assumptions: Average the 3-D governing equations in the vertical or transverse direction (i.e., the depth is at a much smaller scale than the reach, laterally and longitudinally)
Pros: 1. Not as computationally expensive as 3-D models
Cons: 1. Some accuracy sacrificed 2. Still requires a lot of data, which can be costly
Best use: Flow in channels where planform variation is important or flow is unsteady
Available software: 1. Mike 21C 2. SED2D

3-D
Governing equations: 3-dimensional momentum equations or Navier-Stokes equations
Pros: 1. Most accurate
Cons: 1. Involves intensive computation and requires a lot of data
Best use: Local scour problems; lake, estuary, and reservoir environments
Available software: 1. EFDC 2. Mike 3 3. Delft3D
When selecting a model to use, it is important to have a clear idea of what the goal of the model is and what data are available; this helps determine whether the model should be 1-D, 2-D, or 3-D, and which software to use. Quality of data is also important: low-quality data will do nothing to improve the model, and might instead cause unforeseen errors. Duan et al. (2008) prepared an evaluation of the Rillito River for the Pima County Regional Flood Control District using four 1-D models (IALLUVIAL 2, HEC-RAS 4.0, HEC-6, and SRH-1D) and compared the computational results to observed data. They found that IALLUVIAL 2 produced the stage hydrograph with the smallest root mean square error (RMSE); however, HEC-6 and HEC-RAS 4.0 had more accurate averaged bed elevation changes.
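The RMSE criterion used in that comparison is easy to reproduce; a minimal sketch, with obs and sim as matched series of observed and computed stages:

import numpy as np

def rmse(obs, sim):
    # Root mean square error between observed and simulated values.
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))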
Applications/Case Study - Why is Sediment Transport Modelling So Important?
Elwha Dam Removal
Why is sediment transport modelling so important? We know that healthy riparian and animal communities depend on the dynamics of the channel to provide nutrients, and it is these same driving forces - the flows, shifting channels, and moving sediments - that physically shape a river or stream system. Understanding this interaction before starting a restoration project can help the designer to optimize the design for a desired outcome and to understand what results might be expected.
The Elwha Dam removal is a classic example of this. Before the dam was removed, Draut (2010) studied the existing system and the changes in channel evolution caused by the dam by looking at the geology and hydrology of the system both upstream and downstream of the dam over time. Below the dam the stream was incised and narrowed, and there was bed armoring. Since removing the dam releases all the trapped sediment, aggradation and bar formation on the lower Elwha can be expected. This is significant in several ways. The formation of bars could in the long term improve salmon habitat, but it could raise the 100-year flood stage by more than a meter (Draut, 2010). Konrad (2009) found that while the sediment released from the dam removal will initially decrease the salmon habitat in the short term, the long-term effect will be positive.
This project was successful because of the thorough understanding of the system. It included:
1. Project Planning
2. Site Analysis (This is demonstrated by both papers - there was a lot of effort to understand the system before the removal. The dam was removed in 2011 and these papers were published in 2010 and
2009, respectively.)
3. Selection of Design Procedure (Use of notched removal?)
4. Implementation
5. Monitoring (This is occurring now)
There is a nice collection of papers on the Elwha Dam removal from NOAA that looks at conditions both before and after the removal. The Dams and Impoundments page goes into more detail, giving a clear definition and outlining the environmental effects.
References
Delft3D-FLOW: Simulation of multi-dimensional hydrodynamic flows and transport phenomena, including sediments. October 2007. Deltares Systems.
Engineering and design: Sedimentation investigations of rivers and reservoirs. 1989. Washington, D.C.: U.S. Army Corps of Engineers.
Stream Restoration Design (National Engineering Handbook 654). 2008. United States Department of Agriculture: Natural Resources Conservation Service.
Cao Z., Pender G., Meng J. 2006. Explicit formulation of the Shields diagram for incipient motion of sediment. Journal of Hydraulic Engineering 132(10).
Chang H.H. 1992. Fluvial processes in river engineering. Malabar, Florida: Krieger Publishing Company.
Draut A.E., Logan J.B., Mastin M.C. 2010. Channel evolution on the dammed Elwha River, Washington, USA. Geomorphology 127.
Duan J.G., Acharya A., Yaeger M., Zhang S., Salguero M. July 5, 2008. Evaluation of flow and sediment models for the Rillito River.
Jia Y., Wang S.Y. CCHE2D: Two-dimensional hydrodynamic and sediment transport model for unsteady open channel flows over loose bed.
Konrad C.P. 2009. Simulating the recovery of suspended sediment transport and river-bed stability in response to dam removal on the Elwha River, Washington. Ecological Engineering 35.
Lane E.W. December 1954. The importance of fluvial morphology in hydraulic engineering. Denver, Colorado: U.S. Department of the Interior, Bureau of Reclamation.
Loucks D.P., van Beek E., Stedinger J.R., Dijkman J.P.M., Villars M.T. 2005. Appendix A: Natural system processes and interactions. In: Water resources systems planning and management. Italy: United Nations Educational, Scientific and Cultural Organization.
Stone M., Chen L., Scott S. September 2007. Guidance for modeling of sedimentation in stream restoration projects. Desert Research Institute, Nevada System of Higher Education. Prepared for U.S. Army Corps of Engineers, Engineer Research and Development Center.
Thomas W., Copeland R., McComas D. September 2002. SAM hydraulic package for channels. Vicksburg, Mississippi: Coastal and Hydraulics Laboratory, U.S. Army Engineer Research and Development Center.
van Rijn L.C. 1984. Sediment transport, Part I: Bed load transport. Journal of Hydraulic Engineering 110(10).
van Rijn L.C. 1993. Principles of sediment transport in rivers, estuaries and coastal seas. Amsterdam: Aqua Publications.
Wilcock P. Sediment transport in gravel-bed rivers with implications for channel change. Lecture notes, guest lecture at Univ. California Berkeley, January 26-28, 2004.
Wilcock P., Pitlick J., Cui Y. 2009. Sediment transport primer: Estimating bed-material transport in gravel-bed rivers. Rocky Mountain Research Station, U.S. Department of Agriculture: General Technical Report RMRS-GTR-22.
Wu W. January 2002. One-dimensional channel network model CCHE1D - Technical Manual. National Center for Computational Hydroscience and Engineering. Technical Report No. NCCHE-TR-2002-1.
by A. Symonds
Incarnations of a theorem of Eilenberg
Let $R$ be any ring, let $\text{Mod}_R$ be the category of right $R$-modules and let $\text{Ab}$ be the category of abelian groups. There is a classical theorem of Eilenberg (I think) which says that
for any right exact functor $F:\text{Mod}_R \to \text{Ab}$ which preserves direct sums, there exists a left module structure on $F(R)$ making $F$ naturally isomorphic to the functor $- \otimes_R F(R)$.
Does anyone know any nice "incarnations" of this theorem? By this I mean "nice", simple and concrete right exact functors from $\text{Mod}_R$ to $\text{Ab}$ (for some $R$) preserving direct sums, for
which it is not immediately clear that they are given by a tensor product (= isomorphic to a tensor functor) with a left $R$-module, but for which this left $R$-module can still be constructed in a
concrete way.
ac.commutative-algebra abelian-categories
MacLane attributes that theorem to Watts. – Mariano Suárez-Alvarez♦ Oct 30 '11 at 22:42
...also to Watts, I mean. – Mariano Suárez-Alvarez♦ Oct 30 '11 at 22:49
It does not quite fit this theorem of Watts, but Grothendieck's base change module $\mathscr Q$ (EGA III, 7.7.6) comes to mind. – user2035 Oct 31 '11 at 6:12
2 Answers
More generally, the Theorem of Eilenberg-Watts says the following: The category of cocontinuous functors $\mathrm{Mod}(R) \to \mathrm{Mod}(S)$ is equivalent to the category of $(R,S)$-bimodules. A bimodule ${}_R M_S$ corresponds to the functor $- \otimes_R M$. There is a recent paper by A. Nyman which deals more generally with cocontinuous functors $\mathrm{Qcoh}(X) \to \mathrm{Qcoh}(Y)$ for nice schemes $X,Y$: The Eilenberg-Watts theorem over schemes, J. Pure Appl. Algebra, 214 (2010), 1922-1954.

As for your question, take a finitely generated projective module $P$. Then $\mathrm{Hom}(P,-) : \mathrm{Mod}(R) \to \mathrm{Mod}(R)$ is right exact (since $P$ is projective) and preserves infinite direct sums (since $P$ is finitely generated), thus cocontinuous. In this case the Theorem shows that $\mathrm{Hom}(P,-) \cong (-) \otimes P^*$, where $P^* = \mathrm{Hom}(P,R)$ is the dual of $P$. More generally, for every vector bundle $V$ on some scheme/manifold we have $\underline{\mathrm{Hom}}(V,-) \cong (-) \otimes V^*$.
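A concrete footnote on the projective case (a standard construction, spelled out here for the record): the isomorphism is induced by the evaluation map $M \otimes_R P^* \to \mathrm{Hom}(P,M)$, $m \otimes f \mapsto (p \mapsto m \cdot f(p))$. This map is natural in $M$, is obviously an isomorphism for $P = R$, and both sides commute with finite direct sums and retracts in the variable $P$, which gives the claim for all finitely generated projective $P$.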
There is a paper by Mark Hovey which discusses incarnations of the Eilenberg-Watts theorem in homotopy theory. First he reviews the algebraic version and provides an equivalent formulation wherein the theorem says that the category of abelian groups is left self-contained (note that Eilenberg also proved Ab is right self-contained, though it seems Watts did not prove this). Hovey then finds the correct definition of this concept in homotopy theory and proves a very general theorem about when a model category has this property. His theorem holds for topological spaces, simplicial sets, chain complexes, and all the models of spectra. Note, there is a more recent version of this paper on Hovey's website, but it does not compare the new results to the classical ones as fully.

On page 2 of Hovey's paper above is an open problem which you might be interested in. It asks for conditions such that a closed symmetric monoidal category is self-contained (rather than homotopically self-contained, as Hovey's paper answers).
Addition In Base 6
I've got a column of numbers that represent the number of overs bowled in games of cricket. While these are whole numbers (e.g. 34 overs + 34 overs) the addition isn't a problem, but when they are incomplete overs (e.g. 34.4 overs + 34.5 overs) the addition is out of kilter, as it sums them in base 10 and not in base 6. (There are six balls in an over, not ten, for anyone who doesn't follow cricket.)
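One hedged approach (cell references are placeholders): convert each entry to balls, total the balls, then convert back. With overs in A2:A10, a helper column =INT(A2)*6+ROUND((A2-INT(A2))*10,0) in B2 gives the ball count of each entry; then =INT(SUM(B2:B10)/6)+MOD(SUM(B2:B10),6)/10 returns the total in the same overs.balls format (e.g. 34.4 + 34.5 gives 69.3).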
Addition Of Cells
I want the weeks of the year down one column (52 weeks), and down the next column I have different amounts of money for each week. Some months have 4 weeks and others have 5. I want a program that says: if you see month "x", look at the next column and take that amount; then on the next row where month "x" appears again (week 2), go to the next column, take that amount, and add it to week one; and so on until all 4 or 5 weeks are added to give one result. Then the same for the next month...
month amount/week amount/month
05-Mar 0
12-Mar 70
19-Mar 210
26-Mar 350 1050
02-Apr 420
09-Apr 455......
Sumproduct - Addition By Name
I need C8 - C19 to add up only the jobs won by Andrew (in current orders). It needs to be month specific: what I mean by that is I need the formula to add up the jobs won by Andrew and put the totals into the corresponding cell depending on what month they were won.
Addition Formula
I am a new Excel user trying to write a certain formula but having trouble. I want to write a formula to add a column of numbers, say H10 through H15. Each cell will have a number in it, but I only want to add a cell if the cell preceding it in the G10 through G15 column is blank. For example, if cells G12 and G14 have an "X" in them, then I do not want cells H12 and H14 to be added; I only want the formula to add cells H10, H11, H13, and H15. I used just 6 cells as an example; the column of cells to be added will be a lot longer.
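For what it's worth, a one-line approach for the example as stated: =SUMIF(G10:G15,"",H10:H15) sums only those cells in H10:H15 whose neighbouring cell in G10:G15 is blank, and it extends naturally to longer ranges.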
Vlookup & Addition
I have multiple ranges in a spread sheet. I am trying to write a formula that will go out to each range in succession and look for a part number, upon finding return a quantity and them move on to
the next range duplicating the above process. The formula should tally the grand total of all numbers found. I have it working except that not all of my items are in all ranges. If the item that I am
searching for is in all ranges my formula works but if there is one or more of the ranges that doesn't have that particular value it returns an #n/a instead of totalling those that do have it. If I
use a true instead of false in my [range_lookup] I get an incorrect answer. My formula for a given cell is listed below. This is with the true argument which does not work....
Addition Of The Values In The Cells
I have data for 210 branches with different items; each branch has around 100 to 150 items. I want to add up the values for each branch and get the grand total value on a single sheet. Suppose I add E10+E210+E350+E470 and so on: after some number of pluses I am told the formula range is over. Is there a method of adding the items of all 210 branches?
Algorithm For Addition (operation For Each ID)
Have an excel table with following data:
- ID
- number of bottles
- number of bottle crates (there are 20 bottles in one one bottle crate)
I need to write a macro which will do this operation for each ID:
(bottles/20)-crates = x
and if "x" is not 0 then write down the value of "x".
There are two points I would like to point out:
- One ID may contain 3 or more rows (see 20168880)
- The macro will work with hundreds of IDs, so the algorithm should ideally be fast (though this is not essential). A sketch of one approach follows below.
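A hedged VBA sketch of one way to do this (the column layout is an assumption: ID in column A, bottles in B, crates in C, data from row 2; results go to columns F and G). A Scripting.Dictionary is used so that IDs spanning several rows are summed first:

Sub BottleCrateCheck()
    Dim bottles As Object, crates As Object
    Set bottles = CreateObject("Scripting.Dictionary")
    Set crates = CreateObject("Scripting.Dictionary")
    Dim r As Long, lastRow As Long, id As String
    lastRow = Cells(Rows.Count, "A").End(xlUp).Row
    For r = 2 To lastRow                         ' accumulate totals per ID
        id = CStr(Cells(r, "A").Value)
        bottles(id) = bottles(id) + Cells(r, "B").Value
        crates(id) = crates(id) + Cells(r, "C").Value
    Next r
    Dim k As Variant, x As Double, outRow As Long
    outRow = 2
    For Each k In bottles.Keys                   ' x = bottles/20 - crates, per ID
        x = bottles(k) / 20 - crates(k)
        If x <> 0 Then
            Cells(outRow, "F").Value = k
            Cells(outRow, "G").Value = x
            outRow = outRow + 1
        End If
    Next k
End Sub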
Multiplication/addition Function
I obviously know less about functions than I thought I did. I've got the attached spreadsheet set up except getting totals at the bottom. The production total L44, would be column A multiplied by the
quantity entered in columns L and summed. Same for Total SF, square footage in column B times quantity in L and summed at the bottom. This would continue daily, needing sums under each column.
Simple Conditional Addition Function
I imagine this is a simple conditional SUMIF function. I'd like a cell to add values in e.g. column "d" when that row meets certain criterion in column "a".
In other words, I have a column that has times recorded in minutes, and another that says a person's name which correlates with the times. I'd like a cell on another sheet to give a total sum of
minutes for each person.
Ideally, part of the function would translate the minute count into hours/minutes, but I think I can figure out how to do that by changing the format in the cell...
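A minimal sketch (sheet and range references are placeholders): with names in column A and minutes in column B, =SUMIF(A:A,"Fred",B:B) totals Fred's minutes on the summary sheet. To show that as hours and minutes, divide by 1440 (minutes per day) and format the cell as [h]:mm, e.g. =SUMIF(A:A,"Fred",B:B)/1440.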
Function Of Addition With An Only Conditional Criterion
Some time ago I developed a function for addition with a single conditional criterion. I would like to extend it to at least three criteria. This function works exactly like the SUMPRODUCT function, only done in VBA.
Function VlookupAllSum(name As String, IntervalSearches As Range, IntervalReturn As Range) As Variant
    ' (Translated note from the original: an Integer only holds values up to 32,767.)
    ' The posted ending was cut off; the completion below is a plausible reading:
    ' sum every value in IntervalReturn whose matching cell in IntervalSearches equals name.
    Dim Nome As Range, lin As Long, Total As Double
    lin = 1
    For Each Nome In IntervalSearches
        If Nome.Value = name Then
            Total = Total + IntervalReturn(lin, 1)
        End If
        lin = lin + 1
    Next Nome
    VlookupAllSum = Total
End Function
Addition/Subtraction With Menu Selection
Included is an example of a spreadsheet I am working on. There are multiple choices within several different drop-down menu's. As of right now I have the 1st menu as the stage of completion of a car.
Within the next few menu's are options.
If welded chassis is chosen, none of these options are included. However if roller or turn-key are chosen then some of these options are included. But then there are also upgrades to these parts that
are included as well. Is there a way to make 1 option included when a roller is chosen, but then if you want the 2nd option in the menu, you click on it and it automatically updates the price next to
it, therefore subtracting the cost of option 1 from the cost of option 2?
Sumproduct, Skipping Columns, Addition
I have Names in column A, Data in Column B. Example
A1 John B1 1000 C1 5:32:05
A2 Jim B2 500 C2 5:56:55
A3 John B3 600 C3 6:45:65
A4 Bill B4 300 C3 7:21:05
In another column I have the names of all the possible people that I will need data from and next to them I will need a formula to tabulate all their totals from column B and then another formula
that will skip B and total column C's total.. I have a formula that I used from awhile ago when I needed to offset the data but I can't figure out how to just take the data to the right of it and
then another formula to skip column B. Here is my old formula =SUMPRODUCT(($A$1:$A$291=G14)+0,OFFSET($B$1:$B$291,1,0)+0)
Fill In The Blanks Addition
I have some great code that HalfAce provided a while back that I think will fit a project I am working on, but I can't see how to modify it to fit this one. I need to have it look at a location and
provider and find the most "common" date. Then for that criteria fill in the lines with no dates with that "common" date. Here is the code that I need to modify for this
Sub FillInTheBlanks()
Dim LstRw As Long, _
DescRng As Range, _
AccntRng As Range, _
Desc As Range, _
Accnt As Range
LstRw = Cells(Rows.Count, "B").End(xlUp).Row
Set AccntRng = Range(Cells(2, "B"), Cells(LstRw, "B"))
Set DescRng = Range(Cells(2, "I"), Cells(LstRw, "I"))
Calculator Formula For Addition Via Columns
I would like to know the formula for addition across columns.
Eg 1. If I place 135 into column A and 12.95 into column C, I need to get a result of 147.95.
Eg 2. If I place 189 into column A and 12.95 into column C, I need to get a result of 201.95, and so on. The sample file is in the attachment.
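If the goal is simply the row-wise sum of the two columns, a formula such as =A2+C2, filled down whichever column should hold the result, returns 147.95 for the first example and 201.95 for the second.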
Addition Of Values In A Single Cell
see the attached sheet. It already has some example....I need the result of the addition in the cells of column F, at the side Say column G, in the coressponding row. e.g for cell F9, I need the
result in G9, and so on. For testing, step 1. Select M+R in Col "TOI", enter some value in the pop up. step 2. Again select M+R in "TOI", enter some value in the pop up. the Col F will have some
additions (e.g 1+2), for which I need the result in the corresponding next column. i.e col G.
Addition Calculation Not Giving Correct Answer
I have a workbook with calculations for a sale less the assorted fees and at the end giving the final amount from a sale.
I have noticed that some of the rows are not giving the correct amount in them.
In other words, the addition of some columns in that row is not adding up correctly. It is only off by 1 cent (either over or under), but I can't figure out why.
I have the feeling that I am going to want to kick myself when someone explains this to me (I just know that I know the answer but for the life of me I can't right now).
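One common cause worth checking: cells often display values rounded to 2 decimals while storing more precision, so a column of displayed figures can appear off by a cent even though the underlying sum is exact. Wrapping each intermediate calculation in ROUND, e.g. =ROUND(A2*B2,2), forces the stored values to match the displayed cents.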
Update Csa Formula With The Addition Of New Rows
I am working on a spreadsheet that matches each cell in Column B (text) with the data (text) in a constant cell; if there is a match, the data that corresponds to the data in Column B (text) will
average (Column G, number) using a CSA formula, for example: =AVERAGE(IF($B$3:$B$106=A$110,$G$3:$G$106))
Now the formula above works well, only I have to update the spreadsheet, so when I add new rows the $B$3:$B$106 and $G$3:$G$106 portions are useless.
Trying to use the INDIRECT function that many people successfully use in this forum, produces a #VALUE error,
Sumif Formula Needs To Split 2 Criterias Of Addition
You guys very kindly helped me with a spreadsheet a couple of months ago, but I now need to adapt it for another dept. I have completed as much as I can.
I need columns C and E in the 'totals' tab to only calculate contract and upgrade sales respectively (found in the 'service orders' tab). I also need Scott's and Ash's individual sales to be calculated in the corresponding tabs. Most of the formulas are in place, so they just need tweaking slightly.
Basic Addition, Subtraction & Multiplication
Take a single cell in column D, and multiply it by a single cell in column E, which will equal F. Take column F, and multiply it by .02 (2%), which will equal G. Take a cell in column G, and subtract
it from F, which will equal I. And this all takes place in the same row. Then have it move down to the next row, and do the same thing..... so it would basically look like this.....
A B C D E F G H I
1 D1 E1 (D1*E1) (F1*.02) (G1-F1)
2 D2 E2 (D2*E2) (F2*.02) (G2-F2)
3 D3 E3 (D3*E3) (F3*.02) (G3-F3)
For easier reading.... in each row I want it to do the following math
And then do it for every row that I have data in (excluding the VERY first row). I am -COMPLETELY- sorry if I broke any rules, and am also sorry for the poor representation
Array In Data Base
I have used the database from J & R Solutions and altered it slightly to suit me. The database works fine except that, when using the Find All button, it will only return 4 entries; if there are more than 4 entries it returns runtime error 9. I have zipped up the code and marked where the error is shown when I debug.
Create Rank Base On 2 Variable
I have the following table; columns F = "how urgent", G = "How Impt", and H = "Rank":

how urgent | How Impt | Rank
high | High |
mid | high |
Low | High |
high | mid |
Mid | mid |
Low | mid |
High | low |
mid | low |
low | low |

(Columns F and G are fed by formulas linking to the 'Mega_Variable (#1)' and 'Project (n)' sheets.)

I need code that, when run, will fill in the ranking numbers in column H so the rows above are ranked 1 through 9 (high/High = 1 down to low/low = 9).
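A hedged sketch of one way to code it (the row and column positions are assumptions): score urgency and importance 0-2 and combine them.

Sub FillRank()
    ' Assumes urgency in column F, importance in column G, output in column H,
    ' with the nine combinations in rows 2 to 10.
    Dim r As Long
    For r = 2 To 10
        Cells(r, "H").Value = ScoreOf(Cells(r, "G").Value) * 3 _
                            + ScoreOf(Cells(r, "F").Value) + 1
    Next r
End Sub

Private Function ScoreOf(ByVal s As Variant) As Long
    ' Maps high -> 0, mid -> 1, low -> 2 (case-insensitive).
    Select Case LCase(Trim(CStr(s)))
        Case "high": ScoreOf = 0
        Case "mid":  ScoreOf = 1
        Case "low":  ScoreOf = 2
    End Select
End Function

With importance weighted three times urgency, high/High comes out as 1 and low/low as 9, matching the desired output.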
Open Worksheet Base On Cell Value
I have multiple worksheets that call one userform.
Each worksheet has a specific word in cell J1 that matches a worksheet name.
How do I select a sheet based on a cell value in J1?
Code: .....
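A minimal one-liner for this (assuming J1 on the active sheet holds a valid sheet name): Worksheets(Range("J1").Value).Activate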
Base Multiple Charts Off One Range
I have a chart that I want to be the same across multiple worksheets. The data ranges don't move, but the data may be different. It is cumbersome to go and retype the name of the sheet every time
this chart is placed.
I have tried using named ranges. My named range X is !$A$1:$A$30 so that it will refer to the active sheet. If I place this in cells on the spreadsheet, it works. If I place "=X" in the values entry
for the source data of the chart, I get a formula error.
Use Access Data Base For Project?
I am working on a spreasheet that will automatically calculate the interest rate, loan to value advance, and other parameters from user inputs.
I have found a way to get the calculations to work correctly, but I have about 50 lenders to input - all with different rates and lending guidlines with respect to loan to value advances.
I am sure that I am going about it the hard way and I have no problem going at this to get it right.
I have attached the spreasheet I've started. I've only got one lender completed thus far. So if you need help sleeping at night, go ahead and see what I've done (yes it's boring).
Edit note: I don't know if using Access would make this an easier project to tackle but it is an option (I'll just have to learn
Access if that's the case - I've never used it).
Base Pivot Table Off Different Ranges
I need to read constantly changing shift time/coverage data from a Pivot Table's pivot chart and populate this data into the number of shifts covered/uncovered. This information is then put into a chart
over a 24 hour period (from 0700 to 0700). I have been populating the data from the pivot chart by hand by referencing the number of shifts in the covered line and dragging it to correspond to the
shift time data part. I then have to do this for the uncovered shifts. As the data in the pivot chart is constantly changing, i need to do this data ransfer 'automatically'. I have started to look at
and learn VBA, but i am getting nowhere fast. I enclose a worksheet (blank) to give you an idea fo what i am trying to do.
Generate Password Base On File Name In Directory
1. In a certain directory I have xls files where the name of each file starts with the "HR" string, e.g. "HG_Control Mike.xls" or "HR_Control Mark.xls".
2. I have a master xls file where I want to start a macro that will open each of xls "HR" files and copy selected rows to this master xls workbook (need to write this one too).
The problem is that the opening of every "HR" file is supposed to be protected by a password. Users will be adding new xls "HR" files to the directory, so I will not be able to change the macro every time a new xls is added. So, I need to make a macro that will generate a password based on the xls "HR" file name; then I will use this password to protect these files and open them with another macro.
Share Worksheet Base On User/password
I have a workbook which contains multiple worksheets of employees' information. I'm hoping to share this workbook out, with each employee only able to view and update their own worksheet, and their manager able to view/update everything within the workbook.
Time / Frame Fps Formula (base 75)
I have a formula that calculates only seconds and frames. The frame rate is 75 fps (0-74).
e.g., 49.50 is 49 seconds 50 frames. It will not parse Minutes.Seconds.Frames, e.g., 1.49.50 is 1 minute 49 seconds 50 frames.
What my existing formula does is convert the number to all frames and then convert the answer back to seconds and frames.
I need this formula to include minutes in its calculation.
A copy of the spreadsheet is here: ...
Number A Colum For Sorting As Data Base
I am totally new to Excel (just using it for two weeks) as a database.
I need to number a column, 1 to whatever, so that I can use that column to re-sort the database back into its original order.
Calculate The Delivery Charges Base On The Table
What I am trying to learn is how to get the delivery price calculated based on the delivery area (F10) and the total quantity of the items (G10).
I've tried the VLOOKUP, IF, LOOKUP, and HLOOKUP functions, and I still can't manage to get the right one to put the data in H10.
Count The Number Of Occurance Base On Criteria
How do I count the number of occurrences based on a criterion? My sample file contains a Tally Sheet and a template sheet... how can I count the number of occurrences of Yes and No per class? Say, for the A class, how can I count the Yes or No remarks?
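A minimal sketch (the layout is an assumption: class in column A, remark in column B): in Excel 2007 or later, =COUNTIFS(A:A,"A",B:B,"Yes") counts the Yes remarks for class A, and =COUNTIFS(A:A,"A",B:B,"No") the No remarks.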
Save And Enumerate Base On Existing Files
I have this macro that saves to a specific location, but if the file name exists then the macro fails or wants to overwrite the existing file. I would like to make this macro add a number, so it will look like this:
DISCONNECTED # 11-09-09.xlsx
DISCONNECTED # 11-09-09 2.xlsx
DISCONNECTED # 11-09-09 3.xlsx
DISCONNECTED # 11-09-09 4.xlsx
DISCONNECTED # 11-09-09 5.xlsx
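A hedged sketch of the numbering loop (the folder path and name format are assumptions; Dir returns an empty string when a file does not exist):

Sub SaveWithNumber()
    Dim base As String, f As String, n As Long
    base = "C:\Reports\DISCONNECTED # " & Format(Date, "mm-dd-yy")
    f = base & ".xlsx"
    n = 1
    Do While Dir(f) <> ""        ' keep counting until the name is unused
        n = n + 1
        f = base & " " & n & ".xlsx"
    Loop
    ActiveWorkbook.SaveAs Filename:=f   ' may also need FileFormat:=xlOpenXMLWorkbook
End Sub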
Data Base UserForm - Clear ListBox
I have the existing code below. What I would like to do is clear the ListBox of all previous records found prior to the next Find All event occurring. For Example I search for "M" and it finds 3
records and these are listed in the ListBox for the user to select from, then if the user searches for "Grealy" it finds 1 record and puts it in the list but the 2nd and 3rd record from the previous
Find All event still remain.
I tried using the following code, which clears the listbox, but then as soon as you hit Find All following the above-mentioned sequence you get the result as outlined.
Private Sub cmdFind_Click()
Dim strFind, FirstAddress As String 'what to find
Dim rSearch As Range 'range to search
Set rSearch = Sheet1.Range("b2", Range("b65536").End(xlUp))
strFind = Me.TxtEmpName.Value 'what to look for
Dim f As Integer
Base Listbox Fill Range On Selection Of Another
I have 2 listboxes; the contents of the second (fmMultiSelectMulti) are populated based on the selection in the first. Sometimes (50% of the time) when I open the workbook I receive an "Object Required" runtime error.
Private Sub ListBox1_click()
Select Case ListBox1.Value
Case "All"
ListBox2.ListFillRange = "_Sheet2!A1:A1"
Case "A"
ListBox2.ListFillRange = "_Sheet2!B1:B18" <--- example of line that gives the 424 - Oject Required
Case "B"
ListBox2.ListFillRange = "_Sheet2!C1:C18"
End Select
End Sub
It looks as though sometimes, when it runs, ListBox2 is not yet initialized. If I go into Debug and look at ListBox2 it shows up as type "Variant/Empty" and not "ListBox/ListBox". Is this some type of timing/race condition on the loading of controls? I'm out of ideas. Both listboxes are on the same worksheet (Sheet1). The ListFillRange for ListBox1 (which is an fmMultiSelectSingle) is hardcoded and also references a range in _Sheet2 - no problems with this control.
Macro: Base Row Number On Cell Value
I have a macro that has the following line. I want to dynamically change the C38 in it to, say, C37, depending on a value in a cell, i.e. F1, so if the value in cell F1 is 31 then I want this statement to look like
Conversions To & From Feet-Inch To Decimal/base 10
I often use feet-and-inch inputs for calculations. I prefer to input a typical feet & inch value into one cell using this format: ft-in.
example: 12ft 9in would be input as 12-9
This would need to be converted into a decimal for calculations. I would also like to convert from decimal back to ft-in.
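One hedged approach using text parsing (assumes the entry, e.g. 12-9, is stored as text in A2): =LEFT(A2,FIND("-",A2)-1)+RIGHT(A2,LEN(A2)-FIND("-",A2))/12 converts it to decimal feet (12.75). Going the other way, with a decimal value in B2, =INT(B2)&"-"&ROUND(MOD(B2,1)*12,0) rebuilds the ft-in text. Note that entries like 12-9 can be auto-converted to dates unless the cell is formatted as text first.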
Add Number Of Occurances Base On Criteria On Two Columns
How can I add the number of remarks to the number of classes based on their row? I have a TALLY SHEET which auto-computes the number of occurrences of each class and remark... can someone help me add the class and remarks together? In this example you can see that CLASS A occurred 3x, the YES remark occurred 2x and the NO remark occurred 1x for the A class... how can I add the occurrences of YES and NO remarks to the A class? This should be the output - remarks are added according to the class they belong to, with a column per class (A, B, E) and per remark (YES, NO, Not Applicable).
VBA- Create A Macro That Will Change 1 Of The Base Salary
Let's say I have
Base Salary
US Duties
Base Salary
Senior Advisor
Is there a way to create a macro that will change 1 of the Base Salary(s). I need one of them to be Base Salary - Admin, or Base Salary ' or something different for my vlookup.
Will this macro work for each different tab I have?
Macro Needed To Fill Down Other Cell Base On Condition
I have accounts that I need to compare to see whether they exist on my system (the accounts that have a listed date exist on my system). If I can fill in the dates for the matching accounts, then I will be able to delete the other accounts that don't have a date; see the attached file for more understanding.
Using IF Statement To Carry Over Grand Totals Based On Base Number
I have a number of Grand Totals that equal to Hours of Work in a day ( Based on Demand from Customer Orders)
I only have 95 ( this will be a number in a cell that I want to be able to change if needed) work hours available to me each working day.
I want each day to attempt to fill in up to 95 hours , anything more and it will push the remaining balance forward into the other cells.
IE here is what I have for the next 5 days for Totals
Under the 211 I want it to change to 95 and then carry over the balance to the cell under 120 , I then want that cell to change to 95 and carry over its balance to the next cell and so on down the
line. I will always have 22 Working Days I want to work with. So the last day may or may not have a greater then 95 total.
The 95 part I want to be able to change that to whatever number I think I will have available to me and it will adjust accordingly through the line.
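A hedged formula approach (the layout is an assumption: the 22 daily totals in A2:A23, the available-hours cap in F1): in B2 enter =MIN($F$1,SUM($A$2:A2)-SUM(B$1:B1)) and fill down to B23. Each day then takes at most the cap, any excess automatically rolls forward to the next day, and the last day holds whatever balance remains. Changing F1 adjusts the whole line.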
Sum Alternate Columns Base On Previous Column Entry
I am trying to resolve a calculation issue where I want to sum accross columns depending on an entry in the column immediately preceeding. The layout is an Attendance sheet, The columns are for the
days of the Month ( 1 - 31 ) and the rows are the Months. There are 2 columns associated with Each day. The first column is for the type of Time Off ( Vacation, Sick, Personal, etc ) the column next
to it records the number of Hours some one took off. The work book has a Sheet for Each Employee and a running total needs to be maintained for the amount of "off time" each employee takes by the
various time off categories. I have tried setting up range names but this won't work as there will be multiple sheets. I believe the problem is the mixture of Text and Numeric data but could not
Macro/Code Required For Calculations Base On Pivot Tables Sum
What I require is either a macro or code for formulas in column 'F' of the attached spreadsheet that correspond to the SUM of each description divided by 37.5; e.g. in F10 the formula should be =D10/37.5, in F12 the formula should be =D12/37.5, and so on all the way down the Pivot table.
My problem is as the amount data increases on the Data Tab the formulas in column 'F' will become out of line with the corresponding Sum of each description so I guess I need some code or formula
that check every time the Pivot table is refreshed.
Add Addition If Condition To Existing Formula: Long Formula
This task joins a string together based on a number of characters per cell in the range.
I want to isolate one range, Col N, and add an IF condition to it.
There may be other issues preventing this from happening, e.g. the number of IF that exist in the complete formula. I will isolate the current cell and its requirements and then post the entire
formula at the end for reference....
Log Function In A Macro: Take Log Of Each Number (on The Base 2) And Show The Result In The Adjacent Column
I have a lot of numbers arranged in a column. I want to take the log of each number (base 2) and show the result in the adjacent column. I want this in a macro, with the results displayed all at once (I don't want to drag the cursor down to get log values for each row).
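A hedged sketch (the range is a placeholder; VBA's Log function is the natural logarithm, so log base 2 is Log(x)/Log(2)):

Sub LogBase2()
    ' Writes log2 of each positive number in column A to column B in one pass.
    Dim c As Range
    For Each c In Range("A1", Cells(Rows.Count, "A").End(xlUp))
        If IsNumeric(c.Value) Then
            If c.Value > 0 Then c.Offset(0, 1).Value = Log(c.Value) / Log(2)
        End If
    Next c
End Sub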
Inset Rows & Fill That Rows Base On Formula
In my document, column "L" has some numbers and formulas. If any cell has a formula, I need to insert rows below that formula cell, and the formula needs to be spread over the new rows. I have 4 types of formulas; each formula contains (1.5), and that part is common. It's like this...
(01.)ex- L1 cell =150*2*1.5 , need to insert one row below this cell & after running the macro it should change like this..
L1 cell =150*1.5
L2 cell =150*1.5
(02.)ex- L1 cell =150*2*1.5+50*1.5 , need to insert two rows below this cell & after running the macro it should change like this..
L1 cell =150*1.5
L2 cell =150*1.5
L3 cell =50*1.5
(03.)ex- L1 cell =150*2*1.5+130*3*1.5 , need to insert four rows below this cell & after running the macro it should change like this..
L1 cell =150*1.5
L2 cell =150*1.5
L3 cell =130*1.5
L4 cell =130*1.5
L5 cell =130*1.5
(04.)ex- L1 cell =150*2*1.5+130*3*1.5+20*1.5 , need to insert five rows below this cell & after running the macro it should change like this..
L1 cell =150*1.5
L2 cell =150*1.5
L3 cell =130*1.5
L4 cell =130*1.5
L5 cell =130*1.5
L6 cell =20*1.5
Macro/function To Take Data From Source File Into Base File
I have a base excel file for summarizing some data, the problem is that the data comes from a different excel spreadsheet. What I want to do is make a function that pulls the data from another
spreadsheet into my base file. It would be easy if it were just one excel sheet, but this job would require where the data is pulled from a data file which has many modified versions.
Can anyone tell me how to do this? The files with the data will be structured the exact same but with different data entered in. I just want a button so I can click the file I want the data from and
have it show up on my summarizing base file.
HLOOKUP In HLOOKUP, Base Estimate Table In Excel
I am trying to import a BASE ESTIMATE table into EXCEL.
I have problems with most of the formulas, especially this one:
=VLOOKUP($E$2,$B$24:$P$604,HLOOKUP($E$3,$D$22:$L$604,1)+2)*HLOOKUP(HLOOKUP($E$3,$D$22:$L$604,1),$D$22:$L$23,2)
and this one
I am not sure if EXCEL allows a HLOOKUP within an HLOOKUP. If not, how can I get around this?
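For what it's worth, Excel does allow HLOOKUP (and VLOOKUP) calls to be nested inside one another (the formula above already nests them), so nesting is not itself the problem. If the formula fails, evaluating each inner HLOOKUP in a separate cell is a reasonable way to isolate which piece returns the error.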
Overcoming Bias
The Future Of Intellectuals
Back in 1991, … [a reporter] described Andrew Ross, a doyen of American studies, strolling through the Modern Language Association conference … as admiring graduate students gawked and murmured,
“That’s him!” That was academic stardom then. Today, we are more likely to bestow the aura and perks of stardom on speakers at “ideas” conferences like TED. …
Plenty of observers have argued that some of the new channels for distributing information simplify and flatten the world of ideas, that they valorize in particular a quick-hit, name-branded,
business-friendly kind of self-helpish insight—or they force truly important ideas into that kind of template. (more)
Across time and space, societies have differed greatly in what they celebrated their intellectuals for. Five variations stand out:
• Influence – They compete to privately teach and advise the most influential folks in society. The ones who teach or advised kings, CEOs, etc. are the best. In many nations today, the top
intellectuals do little else but teach the next generation of elites.
• Attention – They compete to make op-eds, books, talks, etc. that get attention from the intellectual-leaning public. The ones most discussed by the snooty public are the best. Think TED stars
today, or french public intellectuals of a generation ago.
• Scholarship – They compete to master stable classics in great detail. When disputes arise on those classics, the ones who other scholars say win those disputes are the best. Think scholars who
oversaw the ancient Chinese civil service exams.
• Fashion – They compete to be first to be visibly associated with new intellectual fads, and to avoid association with out-of-fashion topics, methods, and conclusions. The ones who fashionable
people say have the best fashion sense are the best. Think architecture and design today.
• Innovation – They compete to add new results, methods, and conclusions to an accumulation of such things that lasts and is stable over the long run. Think hard sciences and engineering today.
Over the last half century, in the most prestigious fields and in the world’s dominant nations, intellectuals have been celebrated most for their innovation. But other standards have applied through
most of history, in most fields in most nations today, and in many fields today in our dominant nations. Thus innovation standards are hardly inevitable, and may not last into the indefinite future.
Instead, the world may change to celebrating the other four features more.
A thousand years ago society changed very slowly, and there was little innovation to celebrate. So intellectuals were naturally celebrated for other things that they had in greater quantities. The
celebration of innovation got a big push from World War II, as innovations from intellectuals were seen as crucial to winning that war. Funding went way up for innovation-oriented intellectuals.
Today, however, tech and business startups, and innovative big firms like Apple, have grabbed a lot of innovation prestige from academics. Many parts of academia may plausibly respond to this by
celebrating other things besides innovation where those competitors aren’t as good.
Thus the standards of intellectuals may change in the future if academics are seen as less responsible for important innovation, or if there is much less total innovation within the career of each intellectual. Or maybe if intellectuals who are better at doing other things besides innovation win their political battles within intellectual or wider circles.
If intellectuals were the main source of innovation in society, such a change would be very bad news for economic and social growth. But in fact, intellectuals only contribute a small fraction of
innovation, so growth could continue on nearly as fast, even if intellectuals care less about innovation.
(Based on today’s lunch with Tyler Cowen & John Nye.)
Tagged as: Academia, Future, Innovation
Multiplier Isn’t Reason Not To Wait
On the issue of whether to help now vs. later, many reasonable arguments have been collected on both sides. For example, positive interest rates argue for helping later, while declining need due to
rising wealth argues for helping now. But I keep hearing one kind of argument I think is unreasonable, that doing stuff has good side effects:
Donating to organizations (especially those that focus on influencing people) can help them reach more people and raise even more money. (more)
Giving can send a social signal, which is useful for encouraging more giving, building communities, demonstrating our generosity, and coordinating with charities. (more)
Influencing people to become effective altruists is a pretty high value strategy for improving the world. … You can do more good with time in the present than you can with time in the future. If you
spend the next 2 years doing something at least as good as influencing people to become effective altruists, then these 2 years will plausibly be more valuable than all of the rest of your life. (more)
Yes doing things now can have good side effects, but unless something changes in the side-effect processes, doing things later should have exactly the same sort of side effects. And because of
positive interest rates, you can do more later, and thus induce more of those good side effects. (Also, almost everyone can trade time for money, and so convert money or time now into more money or
time later.)
For example, if you can earn 7% interest you can convert $1 now into $2 a decade from now. Yes, that $1 now might lend respectability now, induce others to copy your act soon, and induce learning by
the charity and its observers. But that $2 in a decade should be able to induce twice as much of all those benefits, just delayed by a decade.
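(As a quick check on the arithmetic: at 7% compounded annually, $1 grows to (1.07)^10, or about $1.97, over ten years, so the doubling in the example is a fair approximation.)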
In math terms, good side effects are multipliers, which multiply the gains from your good act. But multipliers are just not good reasons to prefer $1 over $2, if both of them will get the same
multiplier. If the multiplier is M, you’d just be preferring $1M to $2M.
Now it does seem that many people are arguing that these side-effect processes are in fact changing, and changing a lot. They suggest that if you work with or donate to them or their friends, then these efforts today can produce huge gains in inducing others to copy you, or in learning better how to do things, gains that won't be available in the future. Because they and you and now are special.
I think one should in general be rather suspicious of investing or donating to groups on the basis that they, or you, or now, is special. Better to just do what would be good even if you aren’t
special. Because usually, you aren’t.
Now one very believable way in which you might be special now is that you might be at a particular age. But the objectively best age to help is probably when you have peak abilities and resources,
around age 40 or 60. If you are near your peak age, then, yes, maybe you should help now. If you are younger though, you should probably wait.
Added 14Apr: Every generation has new groups with seemingly newly urgent or valuable causes. So you need some concrete evidence to believe that your new cause is especially good relative to the
others. I am not at all persuaded that today is very special just because some people throw around the phrase “effective altruism.”
Tagged as: Charity, Finance
Rah Manic Monopolists?
The vast majority of economic growth is caused by innovation. So when it comes to long term policy, innovation is almost the entire game – whatever policy causes substantially more innovation is
probably better, even if it has many other big downsides.
One simple robust solution to the innovation problem would seem to be manic monopolists: one aggressively-profit-maximizing firm per industry. Such a firm would internalize the entire innovation
problem within that industry, all the way from designers to suppliers to producers to customers – it would have full incentives to encourage all of those parties to put nearly the right amount and
type of efforts into innovation.
Yes, even monopolists don’t have exactly the right incentives. They will tend to focus on what marginal customers want, at the expense of both lower-value customers pushed out by inflated monopolist
prices, and higher-value infra-marginal customers. And when innovations can cross industry boundaries, industry monopolists may also fail to coordinate with monopolists from other industries. But
still, this approach seems to get a lot closer to optimal than any other simple policy. And if two industries had enough innovation interaction, one might just have a single firm cover both industries.
Ideally these monopolies would be global, but if not national ones might still be a big win over the status quo.
Admittedly, common intuitions don’t agree with this. For one thing we tend to think of monopolists as too lazy to innovate – it takes competition to push them out of their comfort zone. And I agree
that this is a common situation for regulated utilities and government agencies. Often the employees of a monopolist tend to have enough political power to entrench themselves and resist change, at
the expense of investors and customers. This is why I specified manic monopolists – we need investors to have enough power to impose their will, and we need to have enough competition to fill these
investor roles.
Yes, we also tend to be uncomfortable with the inequality and power concentration that manic monopolists would embody and require. It isn’t at all what foragers are prone to praise. But still, if
innovation is important enough, shouldn’t we be willing to tolerate a lot more inequality to get it?
Added 8a 11Apr: In general, industries that are more concentrated, i.e., more in the direction of having a monopolist, have more patents, all else equal. This seems to be because they invest more in
R&D. Data here, here.
Tagged as: Inequality, Innovation
Review of LockStep
Since the tech of science fiction tends to be more realistic than its social science, I am especially interested in science fiction praised for its social realism. Alas I usually find even those
wanting. The latest such book is Lockstep. Cory Doctorow:
As I’ve written before, Karl Schroeder is one of the sharpest, canniest thinkers about technology and science fiction I know. … Now he’s written his first young adult novel, Lockstep, and it is a
triumph. Lockstep’s central premise is a fiendishly clever answer to the problem of creating galactic-scale civilizations in a universe where the speed of light is absolute. … Lockstep has enough
social, technological, political and spiritual speculation for five books. It is easily the most invigorating, most scientifically curious book I’ve ever read that’s written in a way that both young
people and adults can enjoy it. (more)
Paul Di Filippo:
And then, within all this gosh-wow fun, Schroeder inserts a detailed subtext on economics. He’s concerned with income inequality, arcane trade arrangements between locksteps, theft and conquests of
sleeping cities. In fact, this book should probably be read in parallel with Charles Stross’s Neptune’s Brood. … Both these books prove that far from being the “dismal science,” economics can provide
fascinating grounds for speculations. (more)
To explain my complaints, I’ll have to give some spoilers. You are warned. Continue reading "Review of Lockstep" »
Tagged as: Fiction, Future
Who Gains From Grit?
I’ve often said that while foragers did what felt natural, farmer cultures used religion, conformity, self-control, and “grit,” to get farmers to do less-natural-feeling things. But as we’ve become rich
over the last few centuries, we’ve felt those pressures less, and revived forager-like attitudes. Today “conservatives” and “liberals” have farmer-like and forager-like attitudes, respectively. I
think the following recent quotes support this view.
Tyler Cowen says workers today have less grit:
There is also a special problem for some young men, namely those with especially restless temperaments. They aren’t always well-suited to the new class of service jobs, like greeting customers or
taking care of the aged, which require much discipline or sometimes even a subordination of will. (more)
There were two classes of workers fired in the great liquidity shortage of 2008-2010. The first were those revealed to be not very productive or bad for firm morale. They skew male rather than
female, and young rather than old. … There really are a large number of workers who fall into the first category. (more)
Alfie Kohn says grit is overrated:
More than smarts, we’re told, what kids need to succeed is old-fashioned self-discipline and willpower, persistence and the ability to defer gratification. … The heart of what’s being disseminated is
a notion drummed into us by Aesop’s fables, Benjamin Franklin’s aphorisms, Christian denunciations of sloth and the 19th-century chant, “If at first you don’t succeed, try, try again.” …
On closer inspection, the concept of grit turns out to be dubious, as does the evidence cited to support it. Persistence can actually backfire and distract from more important goals. Emphasizing grit
is usually justified as a way to boost academic achievement, which sounds commendable. Indeed, research has found that more A’s are given to students who report that they put off doing what they
enjoy until they finish their homework. Another pair of studies found that middle-schoolers who qualified for the National Spelling Bee performed better in that competition if they had more grit,
“whereas spellers higher in openness to experience, defined as preferring using their imagination, playing with ideas, and otherwise enjoying a complex mental life,” did worse.
But what should we make of these findings? If enjoying a complex mental life interferes with performance in a contest to see who can spell the most obscure words correctly, is that really an argument
for grit? And when kids persist and get good grades, are they just responding to the message that when they do what they’ve been told, they’ll be rewarded by those who told them to do it?
Interestingly, separate research, including two studies Duckworth cites to argue that self-discipline predicts academic performance, showed that students with high grades tend to be more conformist
than creative. That seems to undermine not only the case for grit but for using grades as markers of success…
Moreover, grit may adversely affect not only decisions but the people who make them. Following a year-long study of adolescents, Canadian researchers Gregory Miller and Carsten Wrosch concluded that
those “who can disengage from unattainable goals enjoy better well-being ... and experience fewer symptoms of everyday illness than do people who have difficulty disengaging from unattainable
goals.” …
Finally, the concept isn’t just philosophically conservative in its premise but also politically conservative in its consequences. The more we focus on trying to instill grit, the less likely we’ll
be to question larger policies and institutions. (more)
Yes, grit is conservative, and gritty people may not be as playful, open, relaxed, or creative. Grit just helps individuals to succeed, and societies to get ugly things done, like winning their
competitions with other societies. But yes, you might be happier to play video games in your parents’ basement, leaving the support of society to someone else.
Tagged as: Farmers, Foragers, Self-Control, Work
Reason, Stories Tuned for Contests
Humans have a capacity to reason, i.e., to find and weigh reasons for and against conclusions. While one might expect this capacity to be designed to work well for a wide variety of types of
conclusions and situations, our actual capacity seems to be tuned for more specific cases. Mercier and Sperber:
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This
suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. … Poor
performance in standard reasoning tasks is explained by the lack of argumentative context. … People turn out to be skilled arguers (more)
That is, our reasoning abilities are focused on contests where we already have conclusions that we want to support or oppose, and where particular rivals give conflicting reasons. I’d add that such
abilities also seem tuned to win over contest audiences by impressing them, and by making them identify more with us than with our rivals. We also seem eager to visibly hear argument contests, in
addition to participating in such contests, perhaps to gain exemplars to improve our own abilities, to signal our embrace of social norms, and to exert social influence as part of the audience who
decides which arguments win.
Humans also have a capacity to tell stories, i.e., to summarize sets of related events. Such events might be real and past, or possible and future. One might expect this capacity to be designed to
well-summarize a wide variety of event sets. But, as with reasoning, we might similarly find that our actual story abilities are tuned for the more specific case of contests, where the stories are
about ourselves or our rivals, especially where either we or they are suspected of violating social norms. We might also be good at winning over audiences by impressing them and making them identify
more with us, and we may also be eager to listen to gain exemplars, signal norms, and exert influence.
Consider some forager examples. You go out to find firewood, and return two hours later, much later than your spouse expected. During a hunt someone shot an arrow that nearly killed you. You don’t
want the band to move to new hunting grounds quite yet, as your mother is sick and hard to move. Someone says something that indirectly suggests that they are a better lover than you.
In such examples, you might want to present an interpretation of related events that persuades others to adopt your favored views, including that you are able and virtuous, and that your rivals are
unable and ill-motivated. You might try to do this via direct arguments, or more indirectly via telling a story that includes those events. You might even work more indirectly, by telling a fantasy
story where the hero and his rival have suspicious similarities to you and your rival.
This view may help explain some (though hardly all) puzzling features of fiction:
• Most of our real life events, even the most important ones like marriages, funerals, and choices of jobs or spouses, seem too boring to be told as stories.
• Compared to real events, even important ones, stories focus far more on direct conscious conflicts between people, and on violations of social norms.
• Compared to real people, character features are more extreme, and have stronger correlations between good features.
• Compared to real events, fictional events are far more easily predicted by character motivations, and by assuming a just world.
Tagged as: Disagreement, Fiction, Standard Biases
To The Barricades
I recently watched the classic 1952 Kurosawa film Ikiru, and have some comments. But those comments include spoilers; you are warned. Continue reading "To The Barricades" »
Tagged as: Death, Fiction, Management, War
Extremists Compete
Extremists hold extreme views, and struggle to persuade others of their views, or even to get them to engage such views. Since most people are not extremists, you might think extremists focus mostly
on persuading non-extremists. If so, they should have a common cause in getting ordinary people to think outside the usual boxes. They should want to join together to say that the usual views tend to
gain from conformity pressures, and that such views are held overconfidently.
But in fact extremists don’t seem interested in joining together to support extremism. While each individual extremist tends to hold multiple extreme views, extremist groups go out of their way to distance themselves from other extremist groups. Not only do they often hate close cousins who they see as having betrayed their cause, they are also hostile to extremist groups on orthogonal causes.
This all makes sense if, as I’ve suggested, there are extremist personality types. Extremist groups have a better chance of attracting these types to their particular sort of extremism, relative to
persuading ordinary folks to adopt extreme views.
Tagged as: Contrarian, Disagreement
Open Thread
This is our monthly place to discuss related topics that have not appeared in recent posts.
Tagged as: Open Thread
NASA Goddard Talk Monday
This Monday at 3:30p I talk on interstellar colonization at the Engineering Colloquium of NASA Goddard:
Attempts to model interstellar colonization may seem hopelessly compromised by uncertainties regarding the technologies and preferences of advanced civilizations. However, if light speed limits
travel speeds and reliability limits travel distances, then a selection effect may eventually determine behavior at the colonization frontier. Making weak assumptions about colonization technology, I
use this selection effect to predict colonists’ behavior, including which oases they colonize, how long they stay there, how many seeds they then launch, how fast and far those seeds fly, and how
behavior changes with increasing congestion. This colonization model might explain some astrophysical puzzles, predicting lone oases like ours, amid large quiet regions with vast unused resources.
(more here; here)
Added: Slides, Audio
I’m also talking on helping now vs. later at the DC Less Wrong Meetup Sunday (tomorrow), 3p in the courtyard of the National Portrait Gallery.
Tagged as: Personal, Space
JMP Tutorial: Confidence Interval and Hypothesis Test for a Proportion
A random sample of n=150 Stat 201 students in Spring 09 revealed that 91 of them were born in Tennessee.
Create a JMP data table as follows. Notice that 91 + 59 = 150 = sample size.
Go to the Analyze menu and select Distribution:
Click the column Born in TN? and then click Y, Columns.
Click the column Number and then click Freq.
You should see:
Click OK.
This will produce the following:
To create 95% confidence intervals, click on the red down arrow next to Born in TN? and select Confidence Interval, then 0.95.
You should see the following additional output:
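To reproduce the interval outside JMP, here is a minimal Python sketch (my own illustration, using scipy; the variable names are mine) of the standard normal-approximation (Wald) interval. JMP may compute proportion intervals by a different method, such as a score interval, so its endpoints can differ slightly from these.

    from math import sqrt
    from scipy import stats

    # Sample: 91 of n = 150 Stat 201 students were born in Tennessee.
    x, n = 91, 150
    p_hat = x / n                       # sample proportion, about 0.6067

    # 95% Wald (normal-approximation) confidence interval.
    z_crit = stats.norm.ppf(0.975)      # about 1.96
    se = sqrt(p_hat * (1 - p_hat) / n)
    lower, upper = p_hat - z_crit * se, p_hat + z_crit * se
    print(f"p-hat = {p_hat:.4f}, 95% CI = ({lower:.4f}, {upper:.4f})")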
Hypothesis Testing
Click the red down arrow next to Born in TN? and select Test Probabilities.
Suppose, prior to collecting the data, someone thought that 50% of all Stat 201 students in Spring 09 were born in Tennessee. You disagree, but you are not sure whether the true proportion is above or below 50% (i.e., a 2-sided alternative hypothesis).
Fill in the resulting dialog box as follows:
Click Done:
You should see the following output:
Notice that the Pearson p-value above is 0.009 and will match the p-value obtained using the z test statistic approach. In this case, for a 2-sided alternative, JMP uses the square of the z test statistic, called the chi-square test statistic. The two methods are mathematically equivalent.
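The same test is easy to reproduce in Python; the sketch below (my own illustration, using scipy) computes the z statistic and shows that its square is the chi-square statistic JMP reports, with matching p-values.

    from math import sqrt
    from scipy import stats

    x, n, p0 = 91, 150, 0.5
    p_hat = x / n

    # z test statistic for H0: p = 0.5 vs. a 2-sided alternative.
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value_z = 2 * stats.norm.sf(abs(z))        # about 0.009

    # JMP's Pearson chi-square statistic is z squared, so the p-values agree.
    chi2_stat = z ** 2
    p_value_chi2 = stats.chi2.sf(chi2_stat, df=1)
    print(f"z = {z:.3f}, chi-square = {chi2_stat:.3f}")
    print(f"p-values: {p_value_z:.4f} (z), {p_value_chi2:.4f} (chi-square)")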
Note: If your data are "unaggregated", such as 150 rows of the following:
all the above instructions apply, except you place nothing in the Freq text box, like this:
CRpuzzles Logic Problem - The Good Sport
Logic Puzzle #154
The Good Sport
by Randall L. Whipkey
When Buzz of Buzz's Exxon gathered his 2001 records for his CPA, he found that he had made six donations, each for a different amount of money and totaling $1500, to six local youth sports teams,
including a softball team. Given the data below, can you find how much in sports deductions Buzz has for 2001: the contribution to each team and the sport the team plays?
1. Buzz donated twice as much to the basketball team as he did to the Knights.
2. The contribution to the Comets was less than that to the football team.
3. For his donation to the soccer team, which was $50 more than he gave the Rovers, the soccer team advertised Buzz's Exxon on the back of jersey # 1.
4. The Lions got $150 more from the service station operator than the baseball team did.
5. The smallest donation of the six was for $50.
6. The Hawks received twice as much money from Buzz as the soccer team.
7. Before actually seeing the receipts again, Buzz thought that his largest donation had been $600 and that he had given the hockey team $250; he found that the largest contribution was for less
than $600 and that the hockey donation was for less than $250.
8. Buzz gave the Knights $50 more than he gave the Devils.
9. The largest donation wasn't the one to the youth basketball team.
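If you'd rather let a computer grind through the clues, here is a brute-force sketch in Python (my own illustration, not part of the puzzle). It assumes, since the puzzle doesn't say, that every donation is a multiple of $50; clues 5 and 7 then bound the search range. Running it prints the answer, so treat it as a spoiler.

    from itertools import combinations, permutations

    TEAMS = ["Knights", "Comets", "Rovers", "Lions", "Hawks", "Devils"]
    SPORTS = ["softball", "basketball", "football", "soccer", "baseball", "hockey"]

    # Assumed domain: multiples of $50, smallest $50 (clue 5), largest under $600 (clue 7).
    for amounts in combinations(range(50, 600, 50), 6):
        if sum(amounts) != 1500 or min(amounts) != 50:
            continue
        for assigned in permutations(amounts):
            a = dict(zip(TEAMS, assigned))            # team -> dollars
            if a["Knights"] != a["Devils"] + 50:      # clue 8
                continue
            if a["Hawks"] != 2 * (a["Rovers"] + 50):  # clues 3 and 6 combined
                continue
            for sports in permutations(SPORTS):
                s = dict(zip(TEAMS, sports))          # team -> sport
                amt = {s[t]: a[t] for t in TEAMS}     # sport -> dollars
                if (amt["basketball"] == 2 * a["Knights"]        # clue 1
                        and a["Comets"] < amt["football"]        # clue 2
                        and amt["soccer"] == a["Rovers"] + 50    # clue 3
                        and a["Lions"] == amt["baseball"] + 150  # clue 4
                        and amt["hockey"] < 250                  # clue 7
                        and amt["basketball"] != max(amounts)):  # clue 9
                    print({t: (s[t], a[t]) for t in TEAMS})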
Logic Problem Solution
Find an Elmwood Park, NJ SAT Math Tutor
...This attitude has always served me well, giving me patience to explore new topics and alternate routes of explanation. I hold high expectations of both parties, and understand that tutoring is a process that evolves as a deeper relationship is formed. Currently I am employed as the Physical Science and Physics teacher at St.
9 Subjects: including SAT math, calculus, physics, algebra 1
...I am confident in my English skills as well because learning other languages forces one to perform well in English. I live in New Hartford with my wife, two kids, and two puppies who are
growing like weeds. If I can be of assistance, please do not hesitate to contact me!
45 Subjects: including SAT math, English, Spanish, reading
In addition to a BA in Philosophy with Honors from Grinnell College, 99+ percentile SAT, LSAT, GRE, and GMAT scores, a fluent command of the French language, and countless musical awards, I have
many years of teaching experience. My rate varies depending on the subject. I offer a discounted rate i...
8 Subjects: including SAT math, LSAT, ACT Math, SAT reading
...I find it very humbling to be able to help someone to succeed and make themselves a better life. If you are a student who has financial troubles, you can also request a discount. My goal is to
provide you with the best tutoring experience you've had yet.
30 Subjects: including SAT math, reading, English, chemistry
...The main difficulty that students have with the ACT Reading test is its relative length. Students have just 35 minutes to read 4 passages and answer 40 questions. Practice is key so that
students are able to effectively complete this test section.
17 Subjects: including SAT math, physics, calculus, geometry
In anatomy, the cuboid bone is a bone in the foot.
In geometry, a cuboid is a solid figure bounded by six faces, forming a convex polyhedron. There are two competing and incompatible definitions of a cuboid in the mathematical literature. In the more
general definition of a cuboid, the only additional requirement is that these six faces each be a quadrilateral, and that the undirected graph formed by the vertices and edges of the polyhedron
should be isomorphic to the graph of a cube.^[1] Alternatively, the word “cuboid” is sometimes used to refer to a shape of this type in which each of the faces is a rectangle, and in which each pair
of adjacent faces meets in a right angle; this more restrictive type of cuboid is also known as a right cuboid, rectangular box, rectangular hexahedron, right rectangular prism, or rectangular parallelepiped.
General cuboids
By Euler's formula, the number of faces (F), vertices (V), and edges (E) of any convex polyhedron are related by the formula F + V = E + 2. In the case of a cuboid this gives 6 + 8 = 12 + 2; that is,
like a cube, a cuboid has 6 faces, 8 vertices, and 12 edges.
Along with the rectangular cuboids, any parallelepiped is a cuboid of this type, as is a square frustum (the shape formed by truncation of the apex of a square pyramid).
Rectangular cuboid
In a rectangular cuboid, all angles are right angles, and opposite faces of a cuboid are equal. It is also a right rectangular prism. The term “rectangular or oblong prism” is ambiguous. Also the term rectangular parallelepiped or orthogonal parallelepiped is used.
The square cuboid, square box, or right square prism (also ambiguously called square prism) is a special case of the cuboid in which at least two faces are squares. The cube is
a special case of the square prism in which all six faces are squares.
If the dimensions of a cuboid are a, b and c, then its volume is $abc$ and its surface area is $2(ab + bc + ca)$.
The length of the space diagonal is $d = \sqrt{a^2+b^2+c^2}.$
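As a quick check of these formulas, here is a small Python sketch (the function name and example are mine, not from the article):

    from math import sqrt

    def cuboid_properties(a, b, c):
        """Volume, surface area, and space diagonal of an a-by-b-by-c cuboid."""
        volume = a * b * c
        surface_area = 2 * (a * b + b * c + c * a)
        space_diagonal = sqrt(a ** 2 + b ** 2 + c ** 2)
        return volume, surface_area, space_diagonal

    # A 3 x 4 x 12 box: volume 144, surface area 192, and an integer
    # space diagonal, sqrt(9 + 16 + 144) = 13.
    print(cuboid_properties(3, 4, 12))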
Cuboid shapes are often used for boxes, cupboards, rooms, buildings, etc. Cuboids are among those solids that can tessellate 3-dimensional space. The shape is fairly versatile in being able to
contain multiple smaller cuboids, e.g. sugar cubes in a box, small boxes in a large box, a cupboard in a room, and rooms in a building.
A cuboid with integer edges as well as integer face diagonals is called an Euler brick, for example with sides 44, 117 and 240. A perfect cuboid is an Euler brick whose space diagonal is also an
integer. It is currently unknown whether a perfect cuboid actually exists.
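The 44, 117, 240 example is easy to verify programmatically. The sketch below (my own illustration) confirms that all three face diagonals are integers while the space diagonal is not, so this brick is an Euler brick but not a perfect cuboid:

    from math import isqrt

    def is_square(n):
        return isqrt(n) ** 2 == n

    a, b, c = 44, 117, 240

    # Face diagonals must all be integers for an Euler brick.
    faces = [a * a + b * b, b * b + c * c, a * a + c * c]
    print([isqrt(f) for f in faces])         # [125, 267, 244]
    print(all(is_square(f) for f in faces))  # True

    # An integer space diagonal would make it a perfect cuboid.
    print(is_square(a * a + b * b + c * c))  # False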