content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
The Hindu Business Line : Using futures/options: The probability game
Investment World - Derivatives Markets
Markets - Derivatives Markets
Using futures/options: The probability game
Anup Menon
IN the purest sense, trading is a gamble, more so in the derivative markets. For instance an option price is derived from the movement of the underlying asset, which by itself is a random process.
There are several strategies that one can use to profit from option trading.
Those who are willing to absorb a considerable amount of risk can consider selling naked calls or puts. Traders who are, to some extent, risk averse can consider writing covered calls. But all these
positions involve a certain risk that the position would go against the traders. Is there something that can be done about this? Can investors protect themselves to some extent when being short in
the options market? The answer is yes.
Success in trading depends on probabilities. Taking a view on how the underlying asset moves and the accuracy of the view determines the final pay off to the trader. The development of option pricing
models assumes certain statistical properties that the underlying asset price should follow. To be a successful trader it helps to understand the likelihood of the stock moving in the direction it is
expected to. This is where a basic understanding of probability theory might help.
The log normal distribution has been cited in literature as being one among the better distributions to model asset prices. Intuitively the natural logarithm of the return series is expected to
follow a normal (Gaussian) distribution.
The distribution has some interesting properties. A look at the graph of the normal curve will give us some interesting insights. The probabilities are highest at the mean values of the distribution.
Further as we move away from the mean, the probabilities are lower. Further the curve is symmetrical on either side. This means that there is an equal chance of the event moving either way. While
this is the basic idea, what does this have to do with options?
Options and probabilities
Lets take a real-life example. One of the factors that all traders look is the returns and the risk factor in stocks when an investment is made. Take the case of the Sensex. On an average, we know
that the sensex will move around `x' points in a day. It is very rarely that we may see movement to the scale of "x+y" where y is significantly above the historical standard deviation. Therefore,
this conforms to the normal distribution then the probability of the sensex moving in and around the mean values will be much higher.
Therefore, consider options on the Sensex. Lets say that we have a host of OTM options and they are OTM by the range of x%, y%, z% and so on. The pricing of the options and the probability of the
underlying asset moving jointly determine the probability of the trade being successful. For instance assume that the Sensex is at 3000. You have to choose from different options with strikes of say
3020, 3030, 3040 and 3050, all calls. Now as the strike increases the premium comes down. Therefore the 3050 calls will be priced lower than the 3030 calls.
Now assume the case of the write of the option. The question is whether he wants to write the 3030 calls or the 3050 calls. Obviously writing the 3050 calls will be less risky. But the 3030 calls is
more profitable. Now using the probabilities he can determine whether it is worth taking the risk of writing the 3030 calls as against the 3050 calls. The significance of assessing the probabilities
is more important when writing naked positions as the risks are higher. But there are several questions when using probabilities
The first thing that comes to mind is implementation. For a small trader how easy is it to implement such models. Spreadsheets are armed with functions that can help calculate the probabilities. In
terms of data, all that is required is stock prices and options data, which is available. Therefore implementing these models is not difficult.
The validity of the probabilities is even more questionable than implementation. In noisy processes (such as what we see in stock prices), where volatilities are very high, the normal distribution
might not be the best of distributions to characterize the market. But it is the easiest to use. This increases the comfort level for small investors.
But ultimately irrespective of the model used, in the long run, the probabilities do add up.
(The author is a Research Scholar and Graduate Student with the department of Agricultural Economics at Kansas State University. Feedback is invited to amenon@agecon.ksu.edu)
If you have any queries relating to the futures/options markets and strategies that can be used in these markets, please mail them to Futures & Options, Kasturi & sons, 859-860, Anna Salai, Chennai
600 002 or email them to vaidy@thehindu.co.in with a mention of futures/options in the subject line of the mail.
Send this article to Friends by E-Mail
Comment on this article to BLFeedback@thehindu.co.in
|
{"url":"http://www.thehindubusinessline.in/bline/iw/2002/11/03/stories/2002110301101200.htm","timestamp":"2014-04-18T05:34:43Z","content_type":null,"content_length":"20697","record_id":"<urn:uuid:7fcb7238-c5d7-4bbe-860f-e340c4a37bb6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2420 -- A Star not a Tree?
A Star not a Tree?
Time Limit: 1000MS Memory Limit: 65536K
Total Submissions: 3241 Accepted: 1645
Luke wants to upgrade his home computer network from 10mbs to 100mbs. His existing network uses 10base2 (coaxial) cables that allow you to connect any number of computers together in a linear
arrangement. Luke is particulary proud that he solved a nasty NP-complete problem in order to minimize the total cable length.
Unfortunately, Luke cannot use his existing cabling. The 100mbs system uses 100baseT (twisted pair) cables. Each 100baseT cable connects only two devices: either two network cards or a network card
and a hub. (A hub is an electronic device that interconnects several cables.) Luke has a choice: He can buy 2N-2 network cards and connect his N computers together by inserting one or more cards into
each computer and connecting them all together. Or he can buy N network cards and a hub and connect each of his N computers to the hub. The first approach would require that Luke configure his
operating system to forward network traffic. However, with the installation of Winux 2007.2, Luke discovered that network forwarding no longer worked. He couldn't figure out how to re-enable
forwarding, and he had never heard of Prim or Kruskal, so he settled on the second approach: N network cards and a hub.
Luke lives in a loft and so is prepared to run the cables and place the hub anywhere. But he won't move his computers. He wants to minimize the total length of cable he must buy.
The first line of input contains a positive integer N <= 100, the number of computers. N lines follow; each gives the (x,y) coordinates (in mm.) of a computer within the room. All coordinates are
integers between 0 and 10,000.
Output consists of one number, the total length of the cable segments, rounded to the nearest mm.
Sample Input
Sample Output
[Submit] [Go Back] [Status] [Discuss]
All Rights Reserved 2003-2013 Ying Fuchen,Xu Pengcheng,Xie Di
Any problem, Please Contact Administrator
|
{"url":"http://poj.org/problem?id=2420","timestamp":"2014-04-17T07:50:57Z","content_type":null,"content_length":"7261","record_id":"<urn:uuid:a788e69c-fe84-4af8-b131-574cf6ac7630>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Check out our New Publishers' Select for Free Articles
Mechanics of Composite Materials (v.41, #5)
Mode II Delamination of a Unidirectional Carbon Fiber/Epoxy Composite in Four-Point Bend End-Notched Flexure Tests by E. Zile; V. Tamuzs (pp. 383-390).
Results from an experimental study on the delamination of a unidirectional carbon fiber/epoxy composite by using the four-point bend end-notched flexure (4ENF) test are presented. It was found that
the compliance data obtained in load-unload-reload and continuous loading tests were very similar. The R-curves for specimens of different thickness were also found experimentally. These curves
showed an appreciable toughening with crack advance, which can be explained by the presence of fiber bridging. The finite-element method with cohesive elements allowing us to model the progressive
delamination was used to analyze the 4ENF test.
Keywords: four-point bend end-notched flexure; delamination; Mode II; cohesive elements
Interaction of Plane Stress Waves in a Three-Layer Structure. 1. Degenerate Solutions in Terms of Characteristics for a Homogeneous Structure by V. A. Polyakov; R. P. Shlitsa; V. V. Khitrov; V. I.
Zhigun (pp. 391-406).
Exact expressions in terms of characteristics for calculating the normal-stress waves propagating across the layers of different materials are deduced. A one-dimensional boundary-value problem is
considered for a three-layer structure of sandwich type. The faces of the layered structure are free from loads or one of them is rigidly fixed (variant 1), or one face is rigidly fixed and the other
is subjected to an impact of a mass M with a speedV[0] (variant 2). For the boundary conditions of variant 1, relationships are obtained which allow one to reduce the analytical continuation of a
solution in time to a periodic procedure if solely the initial disturbances of the strain field in the layers are given. It is shown that, in this case, the Cauchy problem with the initial strain
field is reduced to graphoanalytically constructing the superposition patterns of the forward and backward waves. The fundamental features of the construction are demonstrated for a uniform bar with
a piecewise constant distribution of strains along its length. To solve the problem of impact loading in variant 2, analytical results for a uniform plate are used, which allows us to account for the
direction of mass forces in collision. In the latter case, the possibility of mass recoil is revealed in the first and second time cycles. The analytical constructions presented are focused on an
exact calculation of stresses upon response of a layered plate to initial disturbances within its layers, as well as to an external dynamic action.
Keywords: layered structure; wave problem; graphoanalytical method in terms of characteristics; transverse normal stress
Predicting the Deformability of Expanded Polystyrene in Long-Term Compression by I. J. Gnip; V. I. Kersulis; S. I. Vaitkus (pp. 407-414).
The interval prediction of creep strain on the basis of 15 years is carried out for slabs of expanded polystyrene (EPS) subjected to a compressive load. The expansion of the confidence interval
caused by the discounted prediction information is allowed for by an additional factor. The creep compliance $$ar J$$ [c] (t = 15) of the EPS is determined based on empirically estimating the
long-term creep of this material subjected to a compressive stress σ c[ ] = 0.3σ[10%] for 15 years. A relationship between $$ar J$$ [c] (t = 15) and EPS density in the slabs is established.
Keywords: interval prediction; creep; creep compliance; long-term compression; slabs of expanded polystyrene
Large Elastic Strains of Plastic Foams by D. A. Chernous; S. V. Shil'ko (pp. 415-424).
The elastic deformation of plastic foams with a low (< 6%) volume fraction of solid phase is described based on a 4-rod equivalent element. A criterion is proposed which allows one to determine the
parameter of structure of this element. Based on an analysis of the equivalent element, a procedure is developed for constructing the compression diagram of plastic foams in the region of large (>
70%) strains. The calculation results are compared with data found in the literature and experimental results for polyurethane foams obtained by the present authors.
Keywords: foams; implants; volume fraction of solid phase; structural element; effective Young's modulus; nonlinear elastic deformation; loss of stability
Comparative Studies on the Mechanical Properties of a Thermoset Polymer in Tension and Compression by R. D. Maksimov; E. Z. Plume; J. O. Jansons (pp. 425-436).
Results of an experimental investigation into the mechanical properties of a polyester resin in tension and compression are reported. Features of the stress-strain curves obtained are discussed. Data
on the elastic modulus, Poisson ratio, and volume strains are obtained. The results of creep behavior of the material in tension and compression are also presented. It is found that the
time-dependent creep obeys a power law, but the nonlinear stress dependence can be described by using the hyperbolic sine function. The effect of load type (tension or compression) on the
nonlinearity of the creep is analyzed.
Keywords: polyester resin; tension; compression; strength; elastic modulus; Poisson ratio; creep
Stability of Composite Cylindrical Shells with Noncoincident Directions of Layer Reinforcement and Coordinate Lines by N. P. Semenyuk; V. M. Trach (pp. 437-444).
The stability problem is solved for cylindrical shells made of a laminated composite whose directions of layer reinforcement are not aligned with coordinate axes of the shell midsurface. Each layer
of the composite is modeled by an anisotropic material with one plane of symmetry. The resolving functions of the mixed variant of shell theory are approximated by trigonometric series satisfying
boundary conditions. The stability of the shells under axial compression, external pressure, and torsion is investigated. A comparison with calculation data obtained within the framework of an
orthotropic body model is carried out. It is shown that this model leads to considerably erroneous critical loads for some structures of the composites.
Keywords: composite cylindrical shells; stability; axial compression; external pressure; torsion; one plane of symmetry
Analysis of Thick Laminated Composite Plates on an Elastic Foundation with the Use of Various Plate Theories by S. S. Akavci (pp. 445-460).
In this study, various theories of composite laminated plates are extended to rectangular composite laminates resting on an elastic foundation. First, an analysis based on the classical theory of
laminated plates is employed. Then the first-order Reissner-Mindlin theory is used for analyzing the laminates. At last, the Reddy shear deformation theory, which allows for the transverse shear
strains, is applied to the bending analysis of the laminates. In the analysis, the two-parameter Pasternak and Winkler foundations are considered. The accuracy of the present analysis is demonstrated
by solving problems numerical results for which are available in the literature. Some numerical examples are presented to compare the three methods and to illustrate the effects of parameters of the
elastic foundations on the bending of shear-deformable laminated plates.
Keywords: laminate; composite; plate; shear; Winkler; Pasternak
Multicriteria Optimal Design of a Rectangular Composite Plate Subjected to Biaxial and Thermal Loading by G. Teters (pp. 461-466).
Multicriteria optimization of the structure and geometry of a laminated anisotropic composite plate subjected to the thermal and biaxial action is considered. From known properties of the monolayer
and the given values of variable structural parameters, the thermoelastic properties of the layered composite are determined. The criteria to be optimized—the transverse critical load and the
longitudinal thermal stresses—depend on two variable design parameters of composite properties and temperature. In the space of the optimization criteria, the domain of allowable solutions and the
Pareto-optimal subdomain are found.
Keywords: multicriteria optimization; composite plate; thermal action; biaxial loading
The Method of Three-Point Bending in Testing Thin High-Strength Reinforced Plastics at Large Deflections by A. K. Arnautov (pp. 467-476).
A method is presented for determining the flexural strength of unidirectional composites from three-point bending tests at large deflections. An analytical model is proposed for calculating the
flexural stress in testing thin bars in the case of large deflections. The model takes into account the changes in the support reactions at bar ends and in the span of the bar caused by its
deflection. In the model offered, the influence of transverse shear and the friction at supports are neglected. The problem is solved in elliptic integrals of the first and second kind. The results
obtained are compared with experimental tension data. The method elaborated for calculating flexural stresses has an obvious advantage over the conventional engineering procedure, because the
calculation accuracy of the stresses increases considerably in the case of large deflections.
Keywords: advanced composites; test methods; thin bar; analytical model; three-point bending; large deflections; tension; strength
|
{"url":"http://chemweb.com/journals/journals?type=issue&jid=SV11029&iid=0004100005","timestamp":"2014-04-21T04:54:58Z","content_type":null,"content_length":"50501","record_id":"<urn:uuid:40b16117-b633-431d-ac80-31a10bca7d4f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[no subject]
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
[no subject]
I additionally have 7 other variables a, b, c, d, e, f, g that I would like to add to the original model in all possible combinations ie:
original model or a
original model or b
original model or c
original model or d
original model or e
original model or f
original model or g
original model or a or b
original model or a or c
original model or a or b or c
original model or a or b or d
original model or a or b or c or d or e or f or g
I am scratching my head trying to work out how to include this loop within the original loop and would be grateful for any suggestions.
Do file so far is:
[r12cough and rrate1 are two of the seven additional variables I am adding in combination to the original model and are shown as an example of what I am trying to achieve... but by using a loop]
xi:logistic died i.sex1 i.r18unwel_1 i.r21dyspn i.r25whtl i.hiv i.pulse1
*generates a variable p which is the probability of each individual being a case, based on this logistic regression model
predict p
*generates a variable x which is going to be used as the mortality status based on the model
gen x_reg=0
gen y_reg=0
gen x_h=0
gen y_h=0
qui gen n=1
*Creates a loop to do the sens/spec/ppv/npv for cutoffs from 0.005 to 0.650 - incrementing by 0.005
forvalues i=0.005(0.005)0.65 {
*resets x to zero within each value of cutoff
qui replace x_reg=0
qui replace y_reg=0
qui replace x_h=0
qui replace y_h=0
disp ""
disp ""
disp "cutoff = " `i'
*replaces x=1 if the individual has a probability of being a case which is higher than the cutoff i
qui replace x_reg=1 if p>=`i' & p!=.
qui replace x_h=1 if (p>=`i' & p!=.)
qui replace x_h=0 if p<`i'
*add in combinations of additional variables with high specificity
qui replace x_h=1 if (p>=`i' & p!=.) | r12cough==1 | rrate1==1
qui replace x_h=0 if p<`i' & r12cough!=1 & rrate1!=1
*tabulates true mortality status (died) against predicted (x) for each cutoff value
disp "True vs predicted mortality status based on LR model only"
tab died x_reg, row
disp "True vs predicted mortality status based on LR model plus r12cough or rrate1"
tab died x_h, row
qui replace y_reg=1-x_reg
qui replace y_h=1-x_h
qui collapse (sum) x_reg y_reg x_h y_h n, by(died)
qui gen sensspec_reg=x_reg/n if died==1
qui gen sensspec_h=x_h/n if died==1
qui gen id=1
qui replace sensspec_reg=y_reg/n if died!=1
qui replace sensspec_h=y_h/n if died!=1
qui reshape wide sensspec_reg x_reg y_reg sensspec_h x_h y_h n, i(id) j(died)
qui gen lr_reg=sensspec_reg1/(1-sensspec_reg0)
qui gen ppv_reg=x_reg1/(x_reg0+x_reg1)
qui gen npv_reg=y_reg0/(y_reg0+y_reg1)
qui gen lr_h=sensspec_reg1/(1-sensspec_h0)
qui gen ppv_h=x_h1/(x_h0+x_h1)
qui gen npv_h=y_h0/(y_h0+y_h1)
disp "Sensitivity based on LR model only = " %5.3f sensspec_reg1 " at cutoff p=" `i'
disp "Specificity based on LR model only = " %5.3f sensspec_reg0 " at cutoff p=" `i'
disp "Positive predictive value based on LR model only = " %5.3f ppv_reg " at cutoff p=" `i'
disp "Negative predictive value based on LR model only = " %5.3f npv_reg " at cutoff p=" `i'
disp "Likelihood ratio based on LR model only = " %5.3f lr_reg " at cutoff p=" `i'
disp "Sensitivity based on LR model plus r12cough or rrate1= " %5.3f sensspec_h1 " at cutoff p=" `i'
disp "Specificity based on LR model plus r12cough or rrate1= " %5.3f sensspec_h0 " at cutoff p=" `i'
disp "Positive predictive value based on LR model plus r12cough or rrate1h= " %5.3f ppv_h " at cutoff p=" `i'
disp "Negative predictive value based on LR model plus r12cough or rrate1= " %5.3f npv_h " at cutoff p=" `i'
disp "Likelihood ratio based on LR model plus r12cough or rrate1= " %5.3f lr_h " at cutoff p=" `i'
Many thanks,
Peter MacPherson
Liverpool School of Tropical Medicine
Dr Peter MacPherson MBChB MPH
Wellcome Trust Clinical Research Fellow
PhD Candidate
Apt 5, 14 South Albert Road
Liverpool, L17 8TN
United Kingdom
Mob: +447519592227
email: petermacpherson@mac.com
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-08/msg00284.html","timestamp":"2014-04-16T16:11:23Z","content_type":null,"content_length":"10409","record_id":"<urn:uuid:9ee95a22-2a62-46dc-a018-d68fcd6ed754>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pi Day Is Coming — But Tau Day Is Better
30327779 story
Posted by
from the circular-logic dept.
"A few months ago, a Tweet from Randal Schwartz pointed me to a YouTube video about 'Triangle Parties' made by Vi Hart. My nerdiness and my love of math made it my new favorite thing on YouTube. Now,
with Pi Day coming up later this week, I thought it would be an appropriate time to point people to another of her YouTube videos: Pi is Wrong. The website she mentions at the end, Tauday, has a full
explanation of the benefits of using Tau rather than Pi. Quoting: 'The Tau Manifesto is dedicated to one of the most important numbers in mathematics, perhaps the most important: the circle constant
relating the circumference of a circle to its linear dimension. For millennia, the circle has been considered the most perfect of shapes, and the circle constant captures the geometry of the circle
in a single number. Of course, the traditional choice for the circle constant is pi — but, as mathematician Bob Palais notes in his delightful article "Pi Is Wrong!", pi is wrong. It's time to set
things right.'"
This discussion has been archived. No new comments can be posted.
Pi Day Is Coming — But Tau Day Is Better
Comments Filter:
• Agreed (Score:5, Funny)
by Ardeaem (625311) on Monday March 12, 2012 @02:12PM (#39329039)
What, pi is 14.3? When did that happen?
□ Re:Agreed (Score:5, Funny)
by Harold Halloway (1047486) on Monday March 12, 2012 @02:18PM (#39329169)
Being English, old-fashioned and inaccurate, I prefer to celebrate Pi Day on July 22nd.
☆ Re:Agreed (Score:5, Interesting)
by wjh31 (1372867) on Monday March 12, 2012 @02:22PM (#39329245) Homepage
22/7 is actually more accurate than 3.14 (0.05% vs 0.04%)
○ by Hatta (162192)
22/7 is misleading, in that people often think it's an exact value. I actually had math teachers in middle school who claimed as much, and refused to understand the term
"transcendental number".
□ by Hentes (2461350)
Because that's not the standard [wikipedia.org].
□ by uigrad_2000 (398500)
Dude, you put the more significant digits first.
Pi is now 201203.14 (201.203,14 with European punctuation).
☆ by Carewolf (581105)
Actually with European punctuation it is 14/3 - 2012 which is around 2008BC or something
Pi is now 201203.14 (201.203,14 with European punctuation).
Depends upon which part of Europe you are from. In the English speaking part it would be 201,203.14
☆ by plj (673710)
“European punctuation” is an unfortunately generic term, if one includes the digit group separator in that definition, as you just did. While all of continental Europe (as well as the
entire South America!) indeed uses comma as decimal separator, digit group separator varies. For example, Germans, Greeks, Italians and Swedes would group digits with dots, while Czechs,
we Finns, as well as French and Poles would use spaces. (Thin) space is also used in some applications elsewhere in the world, du
□ What, pi is 14.3? When did that happen?
It is a consequence of neutrinos going faster than light- all the laws of the universe are now backwards. And yes, Pie is now 14.3... or as an estimation 7 divided by 22.
[whistling to self] hope no-one notices how I spelt pie... [/whistling to self]
• Cant eat a slice of Tau to celebrate. (Score:5, Insightful)
by Kenja (541830) on Monday March 12, 2012 @02:12PM (#39329047)
Thing is, we like pie. Being able to eat a Pi sized slice of Pi at 1:59 on 3.14 is a geeky excuse to consume treats.
□ by Kozar_The_Malignant (738483)
You beat me to it. I'm the chief promoter of Pi Day at my workplace, and it's mostly almost all about the pie. One of the secretaries likes to sing Pi Carols, but it's pretty much about the
pie eating.
☆ I'm trying to imagine Pi Carols...
Oh Pie Tree, Oh Pie Tree,
How lovely is your crust baked...
Rudolf the red cherry piedeer
We three fillings, of orient are,
figs, plums, kiwis stored in a jar
Timer Bells, Timer Bells
Time to open the oven
☆ by gstoddart (321705)
One of the secretaries likes to sing Pi Carols
Wow, is your secretary some kind of frustrated geek or something?
I didn't even know there were Pi carols.
○ Hmmm maybe not Pi Carols but there are at least two Carol Pi's in the US according to Whitepages.com
○ by Kozar_The_Malignant (738483)
>Wow, is your secretary some kind of frustrated geek or something?
Yes. Nice voice though.
>I didn't even know there were Pi carols.
Google is your friend, or not. You may not want to know. For starters: try this. [teachpi.org]
□ by sideslash (1865434)
If you want to observe the festivities with a more phonetically accurate English language reinterpretation of the ancient Greek letter name "Pi", you should go to the restroom and urinate.
That can be a very satisfying feeling as well.
□ Re:Cant eat a slice of Tau to celebrate. (Score:5, Insightful)
by buchner.johannes (1139593) on Monday March 12, 2012 @02:45PM (#39329667) Homepage Journal
With Tau, you can have two pies.
☆ With Tau, you can have two pies.
Actually, if you are a particle physicist you can have a lot more - one tau can decay into 5 pis (although 3 is more common).
☆ by Joce640k (829181)
I just watched the "tau" video and ... I actually agree with it. Making it the ratio of diameter/circumference instead of radius/circumference was a dumb move.
While we're at it can we swap the + and - on our electronic circuits?
□ by CAIMLAS (41445)
Bah, they're both boring. Let me know when it's Summer Glau Day.
Ah, but you can do this: On June 28 at 3:18, everybody leaves work early to contemplate the nature of existence (which simply cannot be done at work). Mathematically religious holiday! That
means you can eat pie at home in your underpants* WITHOUT your dumbass co-workers stealing your fork.
*According to wikipedia, this is the ONLY way to properly contemplate existence, unless, of course, my edit was edited.
□ by GIL_Dude (850471)
Right, but if you have a cylinder with radius z and height a, its volume is Pizza. Who needs pie when we can have wonderful, tasty, Pizza?
• by BenJury (977929)
There are 14 months in a year now?
□ by CaptSlaq (1491233)
It's the year 3141?
□ by ae1294 (1547521)
Well with everyone so interested in the Mayans with thought; Oh, the Mayan calendar is 13 months long. We're gonna make ours 1 better you see? 14 is 1 better than 13. Our new calendar goes to
• Triangle Panties (Score:2, Funny)
by cpu6502 (1960974)
I read that wrong.
I say we stick with pi. It's too labor-intensive to rewrite all the textbooks to read "tau" instead of "2*pi" and reteach everyone the new formulas.
□ Re:Triangle Panties (Score:5, Insightful)
by Mr Z (6791) on Monday March 12, 2012 @03:07PM (#39329975) Homepage Journal
And, I think it's perhaps a little wrongheaded anyway. The area of a circle is pi*r^2. That'd become tau*r^2/2... You took the 2 out of one place and put it in another. And it does nothing
for spheres: Volume = (4*pi*r^3)/3 = (2*tau*r^3)/3; Surface area = (4*pi*r^2) = (2*tau*r^2).
And besides, tau's already claimed as the "time constant" variable, so n'yah!
☆ by newcastlejon (1483695)
The area of a circle is pi*r^2.
For most people, yes.
For some (including me), however, it will always be pi.d^2/4, for the simple reason that you can't easily measure an object's radius (measuring d then halving doesn't count). Seeing it
that way might be ugly/wrong from a mathematical standpoint but practically speaking it seems more natural.
○ by robot256 (1635039)
(measuring d then halving doesn't count)
This is where I stopped reading your post.
• Tau day is better (Score:5, Funny)
by Bob the Super Hamste (1152367) on Monday March 12, 2012 @02:16PM (#39329121) Homepage
Tau day is better because I have an excuse to get 2 pies instead of just one. I still celebrate pie day as well as groundhog day, mmmmm ground hog).
□ by BobNET (119675)
Tau day is better
But Einstein's birthday is best!
□ by LanMan04 (790429)
I'll never support those filthy Xeno bastards.
• It's the day we're all comfortable with Sin(), further we're so accomodating we'll embrace Cos().
□ by RyuuzakiTetsuya (195424)
tau day is better in this regard. It's easier to get a Tan() in June than march.
• Considering the counterpoints (Score:3, Interesting)
by Anonymous Coward on Monday March 12, 2012 @02:20PM (#39329203)
I do think tau is the 'better' constant, and both exploring the possibilities of what tau can do, and just 'playing around' with the math involved, has been enjoyable. However, to evaluate it
properly and determine just how strong it is, a strong counterpoint is needed - and it is supplied in The Pi Manifesto [thepimanifesto.com].
Both its author and I recommend reading The Tau Manifesto (and Bob Palais's original work; both are linked in the article above) before reading The Pi Manifesto, to make proper sense of it.
In the end, I think tau is a much stronger choice than pi for some aspects of math; others, deserve further investigation. It may all be academic discussion, given how firmly pi is entrenched in
our mathematics, but perhaps there's a solid place for both - with pi reserved for certain advanced concepts, and tau used through introductory geometry, trig and calculus.
Hmm. The Pi Manifesto's first three arguments are "Tau is silly.", "It doesn't matter which one we use." and "Physicists are dumb. Even the Babylonians used Pi." Then it goes on to argue the
Tau Manifesto uses cherry-picked examples by .... cherry picking examples. I think, if we had to decide on the number now, without the long history of Pi, Tau should win by a hair, as
described by this analogy of Pi to 1/2 and Tau to 1 (from the Tau Manifesto):
"Imagine we lived in a world where we used the lette
• Four thirds pi! (Score:5, Interesting)
by Geoffrey.landis (926948) on Monday March 12, 2012 @02:21PM (#39329209) Homepage
Wait, what about four-thirds pi, the constant that relates the volume of a sphere to the radius???
Using 2pi as the so-called "constant" is two-dimensional chauvinism!
□ by Carewolf (581105)
4/3 pi r^3 is actually 2/3 or of a circumscribed cylinder or 2/3 tau r^3..
This tau thing kind of makes sense, though I tend to call it 2 pi.. If pie is good, two pi is twice as good.
□ by ardiri (245358)
four-thirds pi = eight-thirds tau
just as complex in both cases.. neither case holds up as being singular for the sphere.
• Bah. e is better than them all (Score:3, Interesting)
by Matt_Bennett (79107) on Monday March 12, 2012 @02:27PM (#39329341) Homepage Journal
Who cares about pi or tau? e shows a much more in depth understanding of mathematics.
□ Re:Bah. e is better than them all (Score:5, Funny)
by Bob Hearn (61879) on Monday March 12, 2012 @02:31PM (#39329413) Homepage
Then, when somebody wants to argue that twice e is actually a better constant, we can say "2e or not 2e, that is the question."
☆ by arth1 (260657)
we can say "2e or not 2e, that is the question."
Unless you use the Amerenglish pronunciation[*], you can say:
"2 pi or not 2 pi, that's the tau question".
[*]: At least they're mostly consistent, making "pi" rhyme with "bi-" and "Semper Fi". But not with "quay".
• Tau for the win (Score:4, Funny)
by mjrauhal (144713) on Monday March 12, 2012 @02:29PM (#39329373) Homepage
Tau is twice the constant Pi ever was!
□ by StikyPad (445176)
You could say it's two Pis and then sum.
• Pie are not squared! (Score:2, Funny)
by jd2112 (1535857)
I remember arguing with my geometery teacher years ago, she kept saying pie are squared. I can't recall ever seeing a square pie. (Cobbler perhaps but never a square pie.)
• tau is wrong (Score:2, Insightful)
by w_dragon (1802458)
Division is harder than multiplication. Given the choice between sometimes multiplying by 2, and sometimes dividing by two, we should pick the constant that forces the multiplication. Also, e^(pi
* i) is nicer than e^((tau / 2) * i).
□ Re: (Score:3, Informative)
by artor3 (1344997)
I know that some people will point out that e^(tau * i) = 1, which they'll claim is nicer than e^(pi * i) = -1
But the most beautiful equation in mathematics is e ^ (pi * i) + 1 = 0. The five most fundamental constants, being combined with the three most fundamental operators (addition,
multiplication, exponentiation -- sorry, tetration), all equaling out, with absolutely nothing extra. There's no way to make it work as elegantly with tau.
☆ Re:tau is wrong (Score:5, Insightful)
by mrnobo1024 (464702) on Monday March 12, 2012 @03:50PM (#39330575)
Sure there is: e^(tau * i) + 0 = 1.
Hey, it's really not any more ridiculous than "... + 1 = 0".
○ by artor3 (1344997)
How in your mind is "x+1=0" ridiculous in the sense that "x+0=1" is? The former is a perfectly valid equation. Setting things equal to zero is extremely common, as anyone with even a
middle school level education ought to know. Do you complain that x^2+2x+1=0 is a ridiculous equation too?
Especially since Euler had to hack in a +1 to turn a -1 into that oh-so-elegant zero.
That's like finding out all of Bob Ross' happy little trees were Photoshopped in during the commercial break. Tao is a "full-circle" representation, literally, while Pi is simply
"half" assed. =)
• by deego (587575) on Monday March 12, 2012 @02:40PM (#39329573)
Seriously? People devote all this energy to replace a centuries-old constant by twice its value?
Isn't "wrong" a sensationalist word to use?
This is like many other things that are "wrong" - in the sense that there are technically better conventions to use, but the weight of history and inertia often keeps us from switching. Examples:
- Km vs. Mile. (SI units vs. Imperial..)
- Why are there 60 minutes per hour? Wouldn't it be better to have 100?
- Why do we use base 10 to express numbers? We should rather use base 8.
- Why 360 degrees? Why not 100 or 1000 (which is using base 8, of course, as mentioned above) instead.
□ I'm in the minority I know- but I would be in favour of switching to a metric clock. Sure it would cause confusion at first. I'd be in favour of measuring degrees in fractions of 100 or 1000.
There again- I'm always in favour of confusion. It's always more fun than the status quo.
☆ by deego (587575)
haha, same here - actually in favor of switching for that case. That one was a bad example, then, I guess.
□ Re: (Score:2, Interesting)
by Grishnakh (216268)
You complain about miles instead of km, but then you complain about using base 10? You're not even being consistent; if you favor base 8, then you should be against switching to kilometers or
any SI unit for that matter, as their entire existence is based on the supposed superiority of base 10.
And why base 8? Why not base 12? 12 is evenly divisible by both 3 and 4, which is very useful in many real-world situations. 10 is only divisible by 2 and 5. 8 is only divisible by 2, so it
really sucks to be hon
• Tau (Score:3, Interesting)
by brianerst (549609) on Monday March 12, 2012 @02:43PM (#39329615) Homepage
I'm not a mathematician, but that Tau "article" seems to steal a few bases.
It whines about A=(pi)r2 while C=(pi)D and how that shows that diameter is fundamental. But that's not the way I learned it anyway - the formula was always C=2(pi)r. Radius was fundamental, not
Which is even more obvious when you go into spheres, where everything is based off radius (A=4(pi)r2, V=4/3(pi)r3).
If we use diameter, you have to remember additional divisors (4 for the areas, 8 for the volumes). I can't speak on whether the whole "one turn" argument would help understanding other concepts,
but aside from people who are working to become mathematicians, I suspect that the fact that the radius-based "magic formulas" are simpler will keep them around...
p.s. What magic brew do you have to use to get Slashdot to accept HTML codes like pi? Or Unicode? Every attempt ended up getting stripped, so I went with (pi).
□ by DMUTPeregrine (612791)
A circle in n dimensions is defined as the set of all points at a given distance from a fixed point, the center. Circles are defined by the radius, not the diameter. The "standard" equation
for a circle is x^2+y^2=r^2. Etc, etc. The diameter is not more fundamental.
• by Bob Hearn (61879) on Monday March 12, 2012 @02:45PM (#39329657) Homepage
I managed to get bib # tau for a marathon last year [fbcdn.net]. Gave the timekeeper fits.
• Niether side is convincing (Score:5, Funny)
by sdhankin (213671) on Monday March 12, 2012 @02:46PM (#39329685)
Both are irrational.
The problem with Tau is that it will always be associated with Pooh thanks to the book the "Tau of Pooh".
Pi day sounds way more appetizing than Pooh day. In the land of prunes, every day is a Pooh day.
• by Oswald McWeany (2428506) on Monday March 12, 2012 @03:00PM (#39329871)
Oh and obligatory:
Taumorrow, Taumorrow, I love you, Taumorrow, you're only a day away......
• For sufficiently large values of nerd.
• I recently saw an image of a Pi-Cake with the caption, "It's cake. But it's pi. But it's CAKE. But it's PI. BUT IT'S CAKE!!!"
After a little research, I even found a recipe for pi-cake. Pi-Cake [instructables.com]
While an irrational pursuit, it looks to be a tasty one. Anyone thinking about making one?
the circle has been considered the most perfect of shapes
And yet, the circle needs a point to define the center, and an infinite number of points around the circumference to define the circle itself. The most perfect of shapes is a point. It is the
basis for all other shapes, both in flatworld, in 3d space, and in space-time. Without the point, there would be no point (pun intended) to trying to define a circle either as pi or tau (where is
your center to get your diameter or radius from, hmmmm?).
• Let me see if I get this straight... Tau = 2*Pi and Tau is right. But Pi is wrong. So, by this rational, two wrongs make a right?
• Sigh (Score:3)
by sootman (158191) on Monday March 12, 2012 @04:28PM (#39331039) Homepage Journal
Pi will always be around because it relates to the diameter, which is easily measurable by actual humans in actual circumstances.
If there's a big circle on the floor, you can measure the diameter with a tape measure and one other person: stand on opposite sides of the circle, one end of the tape stays in one spot, and the
other end gets moved back and forth until its length is as long as possible. The widest part of the circle == the diameter.
You can determine "the widest part of the circle" with simple physical measurements. Measuring the radius only requires a way to accurately determine where the center is, which is a non-trivial
exercise. (Compared to the above.) Or you could measure the diameter and then divide by 2, but "measure the diameter" will always be one less step than "determine the radius."
Related Links Top of the: day, week, month.
|
{"url":"http://science.slashdot.org/story/12/03/12/1741202/pi-day-is-coming-but-tau-day-is-better","timestamp":"2014-04-17T12:43:52Z","content_type":null,"content_length":"283636","record_id":"<urn:uuid:dccb6ae4-00ff-4ca7-bd11-2f981eb3ede0>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
onjugation of
Verb conjugation of "calculate" in English
Conjugate the verb calculate:
Present Past Future
I calculate he calculated will calculate
you calculate we have calculated ...
... ...
Conditional Conjunctive
would calculate ...
The picture I sent yesterday was taken on the first evening. And this morning the email bounced back. The user has already uploaded 3 different pictures of himself, all with the same facial
Somehow, my email was not sent. There were even a few mistakes, but really only minor things, nothing really bad. I've already read some other texts that you have sent and are getting better each
I have made an internship at a law office, but just for two months. Their salary is really bad. Now to my promise to correct your long message. You need to say: 'On Friday' not 'At Friday'.
|
{"url":"http://www.vocabulix.com/conjugation3/calculate.html","timestamp":"2014-04-18T08:05:38Z","content_type":null,"content_length":"16610","record_id":"<urn:uuid:1f660722-d96e-49cf-9d35-82f3193d09db>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pi | Geekosystem
The last vestiges of telecommunications company Nortel were sold off in an auction this week, encompassing some 6,000 patents and patent applications for various technologies. Several big name tech
companies were involved in the marathon 4-day bidding process, but Google turned some heads with its unusual bidding strategy. They began with an initial $900 million "stalking horse" bid, which they
upped to $1,902,160,540 and then to $2,614,972,128 and finally $3.14159 billion. Normally, bidders opt for rounder numbers, but the mathematically inclined quickly identified a pattern in Google's
bids. Their opening shot was Brun's constant, followed by Meissel-Mertens constant, and finally pi. Reuters quotes a source commenting on Google's bidding, saying "either they were supremely
confident or they were bored." Sadly, their whimsically mathematical bids weren't enough to carry the day. The final price for the patent materials was $4.5 billion, purchased by a coalition of
companies comprised of Apple, EMC, Ericsson, Microsoft, RIM, and Sony. (Reuters via Techmeme, image via Jorel Pi)
Read on...
|
{"url":"http://www.geekosystem.com/tag/pi/","timestamp":"2014-04-19T07:08:35Z","content_type":null,"content_length":"49669","record_id":"<urn:uuid:357ac577-98ea-4079-9cea-8b72920f6202>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using Links to Dig Deeper in Wolfram|Alpha
December 12, 2011
Posted by
The hyperlink has been one of the most powerful tools of the information age. Links make it easier to navigate the complex web of information online by combining the information itself with the
method for retrieving it. Clicking a link means “tell me more about this thing,” which naturally lends itself to “surfing.”
At Wolfram|Alpha, we strive to integrate and leverage technologies to create the most powerful computational capabilities and user experiences possible. In Wolfram|Alpha, the output comes in the form
of a “report.” If you want to know more about something in the output of an Wolfram|Alpha query, clicking it as a link will generate another such report. Though we’ve had links in Wolfram|Alpha for a
while, we’ve recently taken them to the next (computable) level: Wolfram|Alpha now computes links dynamically based on the output generated by your query.
Clicking a link basically feeds the plaintext of that link back into Wolfram|Alpha, creating new output with new links. Thus the navigational ability of the world wide web and the computational
ability of Wolfram|Alpha are now intertwined and can feed off each other. You can now surf Wolfram|Alpha like you can surf the Internet.
In particular, mathematical expressions are linked for the first time. For example, suppose we ask Wolfram|Alpha about the equation “r/1 = (1-r)/r”:
In the Solutions pod are two real numbers which may look familiar. By clicking the second one, we essentially feed it back into Wolfram|Alpha, which produces a new output. The last pod of the new
output reveals why we thought those two numbers seemed familiar: they are related to the golden ratio!
Perhaps you are interested in the mathematical theory of knots. Entering “8_1 knot” into Wolfram|Alpha gives you all sorts of knot invariants, one of which is the Alexander polynomial:
Clicking the Alexander polynomial feeds it back into Wolfram|Alpha, giving graphs and more detailed information about this expression:
These are just a few examples of the new links found in Wolfram|Alpha.
Since these mathematical links are computed, we are able to do some processing to increase their accuracy and reliability. To ensure that the vast majority of the links work correctly, we’ve been a
bit conservative with our initial launch. You can think of these links as “suggestions” for what to ask Wolfram|Alpha next.
2 Comments
Nice stuff. will this be available in widgets also?
Posted by Vipul December 16, 2011 at 2:20 am Reply
The output from a widget is normal WA output so the links will function in the same way so the answer to your question is “Yes”.
Posted by Brian Gilbert December 22, 2011 at 6:45 am Reply
Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…
Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!
Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?
Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
|
{"url":"http://blog.wolframalpha.com/2011/12/12/using-links-to-dig-deeper-in-wolframalpha/","timestamp":"2014-04-21T02:10:59Z","content_type":null,"content_length":"43675","record_id":"<urn:uuid:159940d4-8b42-42fe-afc0-59698e854ae9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Limit operators, collective compactness, and the spectral theory of infinite matrices
Downloads per month over past year
Chandler-Wilde, S. N. and Lindner, M. (2011) Limit operators, collective compactness, and the spectral theory of infinite matrices. Memoirs of the American Mathematical Society, 210. 989. ISSN
Text - Accepted Version
· Please see our End User Agreement before downloading.
Official URL: http://www.ams.org/journals/memo/2011-210-989/S006...
In the first half of this memoir we explore the interrelationships between the abstract theory of limit operators (see e.g. the recent monographs of Rabinovich, Roch and Silbermann (2004) and Lindner
(2006)) and the concepts and results of the generalised collectively compact operator theory introduced by Chandler-Wilde and Zhang (2002). We build up to results obtained by applying this
generalised collectively compact operator theory to the set of limit operators of an operator (its operator spectrum). In the second half of this memoir we study bounded linear operators on the
generalised sequence space , where and is some complex Banach space. We make what seems to be a more complete study than hitherto of the connections between Fredholmness, invertibility, invertibility
at infinity, and invertibility or injectivity of the set of limit operators, with some emphasis on the case when the operator is a locally compact perturbation of the identity. Especially, we obtain
stronger results than previously known for the subtle limiting cases of and . Our tools in this study are the results from the first half of the memoir and an exploitation of the partial duality
between and and its implications for bounded linear operators which are also continuous with respect to the weaker topology (the strict topology) introduced in the first half of the memoir. Results
in this second half of the memoir include a new proof that injectivity of all limit operators (the classic Favard condition) implies invertibility for a general class of almost periodic operators,
and characterisations of invertibility at infinity and Fredholmness for operators in the so-called Wiener algebra. In two final chapters our results are illustrated by and applied to concrete
examples. Firstly, we study the spectra and essential spectra of discrete Schrödinger operators (both self-adjoint and non-self-adjoint), including operators with almost periodic and random
potentials. In the final chapter we apply our results to integral operators on .
Deposit Details
Download Statistics for this item.
Centaur Editors: Update this record
|
{"url":"http://centaur.reading.ac.uk/27337/","timestamp":"2014-04-19T09:32:09Z","content_type":null,"content_length":"30201","record_id":"<urn:uuid:d89345d2-80a9-4d6c-b9ed-af30bb7e6b92>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shorebird patches as fingerprints of fractal coastline fluctuations due to climate change
The Florida coast is one of the most species-rich ecosystems in the world. This paper focuses on the sensitivity of the habitat of threatened and endangered shorebirds to sea level rise induced by
climate change, and on the relationship of the habitat with the coastline evolution. We consider the resident Snowy Plover (Charadrius alexandrinus nivosus), and the migrant Piping Plover (Charadrius
melodus) and Red Knot (Calidris canutus) along the Gulf Coast of Mexico in Florida.
We analyze and model the coupled dynamics of habitat patches of these imperiled shorebirds and of the shoreline geomorphology dictated by land cover change with consideration of the coastal wetlands.
The land cover is modeled from 2006 to 2100 as a function of the A1B sea level rise scenario rescaled to 2 m. Using a maximum-entropy habitat suitability model and a set of macroecological criteria
we delineate breeding and wintering patches for each year simulated.
Evidence of coupled ecogeomorphological dynamics was found by considering the fractal dimension of shorebird occurrence patterns and of the coastline. A scaling relationship between the fractal
dimensions of the species patches and of the coastline was detected. The predicted power law of the patch size emerged from scale-free habitat patterns and was validated against 9 years of
observations. We predict an overall 16% loss of the coastal landforms from inundation. Despite the changes in the coastline that cause habitat loss, fragmentation, and variations of patch
connectivity, shorebirds self-organize by preserving a power-law distribution of the patch size in time. Yet, the probability of finding large patches is predicted to be smaller in 2100 than in 2006.
The Piping Plover showed the highest fluctuation in the patch fractal dimension; thus, it is the species at greatest risk of decline.
We propose a parsimonious modeling framework to capture macroscale ecogeomorphological patterns of coastal ecosystems. Our results suggest the potential use of the fractal dimension of a coastline as
a fingerprint of climatic change effects on shoreline-dependent species. Thus, the fractal dimension is a potential metric to aid decision-makers in conservation interventions of species subjected to
sea level rise or other anthropic stressors that affect their coastline habitat.
Land cover change; Coastal wetlands; Coastline complexity; Fractal dimension; Habitat suitability; Patches; Sea level rise
Florida coastline-dependent species are characterized by one of the highest extirpation risks in the world because of sea level rise and increase in tropical cyclone activity (Convertino et al. 2010;
2011c) due to climate change. The Snowy Plover (Charadrius alexandrinus nivosus; SNPL hereafter) is a residential shorebird of Florida listed as threatened at the state level. The Piping Plover (
Charadrius melodus; PIPL hereafter) is federally designated as threatened, and it migrates mostly from the North Atlantic coasts of the USA and Canada to Florida where it winters for 3 months on
average (Elliott Smith and Haig 2004). The Red Knot (Calidris canutus; REKN hereafter) is designated as threatened in New Jersey and is federally listed as a potential “at risk” species. REKN uses
the Florida Gulf beaches as stop-over areas for about 3 weeks during its migration between South America and North America’s Big Lakes region and Atlantic coast (Harrington 2001). This is considered
as the wintering period of the REKN in Florida. An understanding of the spatial distribution of the suitable habitat patches for these shorebirds, their controlling factors, and how these factors are
affected by sea level rise is fundamentally important for adopting efficient conservation strategies. An understanding of linkages between the coupled evolution of landforms and ecological patterns
is a crucial topic due to the evidence that these patterns are tightly linked. Biocomplexity approaches (Mandelbrot 1982; Rinaldo et al. 1995; Banavar et al. 2001; Pascual et al. 2002; Schneider and
Tella 2002; Buldyrev et al. 2003; del Barrio et al. 2006; Solé and Bascompte 2006; Scanlon et al. 2007), despite being accused of adopting simplified biological models (Paola and Leeder 2011), are
capable of reproducing macroscale patterns of complex phenomena and of developing indicators, such as the probability of the patch size (Mandelbrot 1982; Bonabeau et al. 1999; Jovani and Tella 2007;
Kéfi et al. 2007; Jovani et al. 2008; Convertino et al. 2012), that are useful for assessing ecosystem health (Kefi et al. 2011). One of ecology’s main goals is to detect from observed patterns, such
as species occurrence patterns, the organizational rules of species in stationary and evolving ecosystems. Many theories have been proposed to explain the formation of clustered patterns of species
in nature. Conspecific attraction, environmental heterogeneities, and food availability have been claimed—alone or together—to be the motivation for the formation of habitat patches in which
individuals of a species coexist in colonies. An optimal search theory, the so-called Lévy-flight foraging hypothesis (or predator-prey-food resource dynamics), predicts that predators should adopt
search strategies known as Lévy flights where prey is sparse and distributed unpredictably. However, Humphries et al. (2010) showed that Brownian movement is sufficiently efficient for locating
abundant prey. This theory explains the clustered patterns of resources in landscapes that may be different from the pattern of species occurrence. Neither the Lévy-flight foraging hypothesis nor
Brownian movement model address the linkages of biota with landscape forms and their evolution, which is, in our opinion, one of the main missing points.
The colony size of seabirds (Schneider and Tella 2002), colonial birds (Jovani and Tella 2007), and many other other animals (Bonabeau et al. 1999) has been found to follow a power-law distribution.
Analogous scale-free distributions have been detected for bacteria colonies (Buldyrev et al. 2003), for species in complex ecosystems (Solé and Bascompte 2006; Convertino et al. 2012), and also for
man-made systems such as cities (Batty and Longley 1994). The ubiquity of the power-law structure for the probability of the patch size in aggregation phenomena of natural and human systems suggests
the existence of universal self-organization principles (Pascual et al. 2002; Solé and Bascompte 2006). The scaling exponent of the power-law distribution of the aggregate size was proven to be the
fractal dimension of the pattern analyzed (Mandelbrot 1982; Convertino et al. 2012). The word “aggregate” is a general word for indicating the assemblage of individuals with similar or identical
features in a landscape. In the presence of a power law for the probability distribution of the aggregate size, the occurrence patterns are scale-free, indicating that the patterns are invariant at
different scales of observations (Convertino et al. 2012). The concept of fractal dimension was introduced by Mandelbrot analyzing the coastline of Britain at different scales (Mandelbrot 1967). The
work disseminated the use of fractal analysis first in geomorphology (Morais et al. 2011; Baldassarri et al. 2012) and later to a variety of sciences from biology to engineering (Bak 1999).
Nonetheless, all these theories, models, and empirical findings have rarely considered any potential effect of slow or abrupt change in the exogenous factors on the heterogenous habitat in which
species live. Only recently it was proven quantitatively that ecosystems exhibit variations in the probability distribution of the patch size due to anthropically and naturally driven changes in the
environmental variables (Kefi et al. 2011). For example, desertification of water-controlled ecosystems produces a decrease in the fractal dimension of vegetation patches, or in extreme cases, a
shift from the power law to exponential distribution of the patch size (Kéfi et al. 2007; Scanlon et al. 2007; Kefi et al. 2011). Climate change scenarios tested in temperate/continental regions
depicted an overall decrease in the fractal dimension of patches in time for many different taxa (Barrio et al. 2006). For colonial birds the variation in the fractal dimension of the patches was
clearly related to the fluctuations in the population abundance due to interspecies competition (Jovani et al. 2008). In geomorphology the variation in the fractal dimension was used as the signature
of the persisting climate over landscapes. For example, the association between landscape evolution and climate has been assessed for river basin ecosystems in Rinaldo et al. (1995). However, none of
the previous studies linked the fractal dimension of two ecosystems’ patterns in time (e.g., of geomorphological and ecological patterns) resulting from linked processes. Here we verify for the first
time, to the best of our knowledge, that the fractality of the coastline is clearly linked to the habitat patches of shoreline-dependent birds in their breeding and wintering seasons.
We hypothesize that sea level rise may increase the complexity of the coastline and that such complexity determines fragmentation of the habitat of species. We assume scale invariance of the patches,
which is also detectable by the analysis of the shorebird occurrences. We consider a breeding shorebird (Snowy Plover) and wintering shorebirds (Piping Plover and Red Knot) in Florida to quantify the
potential effect of sea level rise on resident and migrant species. For the Snowy Plover the nesting season is usually considered part of the breeding season; thus, our model’s input considers the
SNPL breeding and nesting occurrences simultaneously. Furthermore, observations indicate that nesting, breeding, and wintering areas for SNPL fall within the same range (Convertino et al. 2011a).
Wintering occurrences of SNPL are thus considered together with breeding occurrences.
An integrated ecogeomorphological modeling approach is adopted to predict the viability from 2006 to 2100 of threatened, endangered, and at risk (TER) shorebirds (SNPL, PIPL, and REKN) along the Gulf
Coast of Florida as a function of the increasing sea level rise due to climate change. We rescale to 2 m the Intergovernmental Panel on Climate Change (IPCC A1B) scenario described in Chu-Agor et al.
(2011) and model the ecosystem at a 120 m spatial resolution. We predict land cover change with the Sea Level Affecting Marshes Model SLAMM (Clough 2010)] which is a geomorphological model at
low-medium level of complexity. SLAMM considers coastal wetland types such as swamp, cypress swamp, mangrove, and salt marsh (Additional file 1: Figure S1). The habitat model predicts the habitat
suitability for breeding and wintering through a maximum entropy principle approach (MAXENT) (Phillips and Miroslav 2008) as a function of the recorded species occurrences in the breeding and
wintering season, the predicted land cover, and a geology layer. MAXENT is an ecological model at low level of complexity. The land cover and habitat simulations are produced in Aiello-Lammens et al.
(2011). Finally, in this paper a patch-delineation model is introduced to predict the yearly habitat patches for a set of biological constraints imposed on the habitat suitability maps. We assume the
stationarity of the habitat patterns at the year scale and absence of biological adaptation of species to climate change. The fractal dimension of the patches is derived by three independent methods:
(i) box-counting for the observed occurrences; (ii) probability distribution of the patch size [“Korčak’s law” (Korcak 1940; Mandelbrot 1982)]; and (iii) perimeter-area relationship for the predicted
patches. We assume that these three methods produce very close estimates of the fractal dimension of the whole mosaic of patches as shown in Convertino et al. (2012).
Additional file 1. Additional Methods, Additional Results and Discussion, Additional Tables, and Additional Figure.
Format: PDF Size: 4.8MB Download file
This file can be viewed with: Adobe Acrobat Reader
The power-law distribution of the patch size is verified by almost a decade (2002–2010) of historical observations of the species. Thus, the patch-delineation model is validated against these
observations from 2002 to 2010. The coupled ecogeomorphological organization is shown by the correspondence in time of the fractal dimensions of the habitat-specific coastline and of the predicted
patches. The fractal dimension of the habitat-specific coastline, along with habitat loss and population abundance, is demonstrated to greatly influence the number and size of the patches, which are
related to habitat loss and population abundance. Although the fragmentation of the habitat (which is proportional to the fractal dimension of the patches) is predicted to fluctuate considerably in
this century, the risk of extirpation of the species analyzed is not drastically increased because the connectivity of the patches is predicted to increase. The Piping Plover is the species with the
largest fluctuation in the number and size of patches. We believe the research presented in this paper constitutes a contribution to the emerging field of biogeosciences, which explores the interface
between biology and the geosciences and attempts to understand the interrelated functions of landscapes and biological systems across multiple spatial and temporal scales. We are aware of the
existence of many other complex ecogeomorphological processes that are not included in our modeling effort. However, parsimonious models such as the model presented here can capture large-scale
patterns while bypassing small-scale details (Ehrlich and Levin 2005; Pascual et al. 2011). These models can be tested against other more biologically realistic models to fully explore the linkages
among various environmental changes, geomorphological dynamics, and biodiversity patterns. We anticipate that further research will explore this issue of process complexity versus model complexity,
model relevance, and model uncertainty, which can be synthesized as a “modeling trilemma” (Muller et al. 2010).
This paper is organized as follows. The “Methods” section describes the shorebird data and the study site and explains the models used in this study and the theoretical characterization of patches.
The “Results and discussion” section reports the main results with a broad discussion of figures and how these results are interpreted considering our assumptions. The “Conclusions” section reports
the most important conclusions, implications for management, and further research efforts. Additional files 1 and 2 are provided to support our main result.
Additional file 2. Video S1. Predicted land cover by SLAMM from 2006 to 2100.
Format: AVI Size: 10.4MB Download file | Watch movie
Site description and biogeographical variables
The white fine-sand beaches of the Florida coast of the Gulf of Mexico constitute the habitat of the whole Florida SNPL population. The SNPL population in Florida is distributed along about 80% of
the Florida Panhandle and along about 20% of the Florida Peninsula (Lamonte and Douglass 2002; Himes et al. 2006; Burney 2009; Pruner 2010) (Figure 1a). The Florida Peninsula and the Atlantic coasts
are the main wintering grounds for the migratory PIPL and REKN, which seem less constrained than the SNPL by the mineralogical properties of the beach substrate captured by the geology layer
(Convertino et al. 2010; 2011b). The land cover, which includes many wetland types from C-CAP (2009) is represented in Figure S1 of the Additional file 1, and the geology (F-DEP 2001) characterizes
the mineralogical substrate of each land cover class (Additional file 1: Figure S6) (Convertino et al. 2011b). In 2006 the PIPL Panhandle-Peninsula and Atlantic populations were 38 and 33%,
respectively, of the total migrant PIPL population in Florida. The REKN Panhandle-Peninsula and Atlantic populations were 55 and 20%, respectively, of the total migrant population in Florida. The
International Piping Plover Census in 2006 supported the field sampling of SNPL, PIPL, and REKN (USGS-FWS 2009; FWC 2010; Alliance 2010). The 2006 wintering occurrences in Florida are the data used
in this study for PIPL and REKN. For the SNPL, data of breeding and nesting occurrences are also available from 2002 to 2010 and are provided year by year by the Florida Wildlife Commission. These
occurrences are used to verify the assumption of scale-invariance of SNPL occurrence patterns over time with the box-counting. However, despite the availability of SNPL data from 2002 to 2010, we
construct the habitat suitability model with the 2006 SNPL occurrences alone in order to be consistent with the 2006 NOAA land cover (C-CAP 2009) and the 2006 PIPL and REKN occurrences. The geology
and the elevation from USGS (USGS 2010; Convertino et al. 2010; 2011b) are used in the habitat suitability model and in the land cover model, respectively (Aiello-Lammens et al. 2011; Convertino et
al. 2010; 2011b).
We consider PIPL and REKN in the same geographic domain where the full range of the SNPL occurs in order to perform a simultaneous interspecies assessment of the habitat use and extirpation risk of
the three species (Figure 1a). Thus, only the Panhandle-Peninsula region was considered in this study. The SNPL is our main interest because its year-round presence in the Florida coastal ecosystem
makes this species potentially more vulnerable than PIPL and REKN. Dispersal among the Panhandle and Peninsula SNPL populations has been observed but not quantified. Population subdivision of the
SNPL has not been observed; thus, we can adopt the same habitat and dispersal criteria for the whole population. Population subdivision, for example, can be caused by geographic barriers or
disturbances [e.g., renourishment (Convertino et al. 2011a)] that interfere with the dispersal. The reduction in dispersal is reported to reduce gene flow and increase genetic drift of independent
subpopulations in the long-term. However, this is not the case for the SNPL population in Florida despite the weak interchange of individuals between Panhandle and Peninsula (Aiello-Lammens et al.
Habitat area and dispersal data for SNPL are mostly from Aiello-Lammens et al. (2011) but also from Page et al. (2009), Patons and Edwards (1996), Stenze et al. (1994; 2007), Warriner et al. (1986).
Aiello-Lammens et al. 2011synthesized the biological data and the metapopulation modeling effort of this research for the SNPL. Information is gathered also from field ecologists working on this
project [i.e., Dr. R.A. Fischer (Engineering Research and Development Center, US Army Corps of Engineers) and Mrs. A. Pruner (Florida Park Service)]. For PIPL, habitat and dispersal data are from
Audubon (2006), Seavey et al. 2010, and USFWS (2009), and for REKN, data are from Fallon (2005) and Leyrer et al. (2006). For a more detailed description of the site under study we refer the reader
to Convertino et al. (2011b).
Box-counting algorithm
The characterization of the occurrence patterns of breeding and wintering occurrences and of the coastline is performed using the “box-counting” method. For the SNPL the occurrence pattern of nesting
occurrences was observed to be a self-similar pattern (Convertino et al. 2012); thus, the box-counting method is suitable to predict how this pattern changes with the scale of analysis. The
box-counting analysis consists of calculating, for grids of different box-side lengths, the number of boxes that contain the object under study. Adjacent boxes constitute an approximation of the real
patches at each resolution. The algorithm can be applied to both point and line patterns. The box-counting is performed over eight orders of magnitude in a logarithmic scale of the box-side length,
from l[o(1)]=565km (which corresponds to the box “B1” in Figure 1), which is approximatively the width of Florida, to l[o(5000)]=11.3×10^−3 km (Figure 1). We indicate with l[o(i)] the length of the
box side at resolution “o=i” from i=1,…,5000 where the increment from one resolution to another is 11.3×10^−2km, which is slightly smaller than the average home range of the SNPL (Table 1). The order
of magnitude is relative to the scales of analysis (extent) investigated by the box-counting, while the resolution is related to the grids chosen for the box-counting. The number N(l)of boxes of size
l needed to cover the pattern of occurrences (which is generally a fractal set) follows a power law,
where D≤d, and d is the dimension of the space (usually d=1,2,3). D is also known as the Minkowski-Bouligand dimension, Kolmogorov capacity, or Kolmogorov dimension, or simply the box-counting
dimension and is an estimate of the Hausdorff dimension (Mandelbrot 1982). The fractal dimension for 1−d objects is associated with the Hurst exponent H such that D=2−H(Mandelbrot 1982; Bak 1999).
The values of the Hurst exponent vary between 0 and 1, with higher values indicating a smoother trend, less volatility, and less roughness of the analyzed pattern (Mandelbrot 1982). We indicate the
fractal dimension of the breeding and wintering occurrences with D[b] and the fractal dimension of the coastline with D[f]derived from box-counting analysis. Both fractal dimensions are determined by
the box-counting method. The fractal dimension of the coastline is calculated also for each land cover class that is a species-specific habitat for the species considered (Figure 1b). Many land cover
classes are coastal wetland types (Additional file 1: Figure S1).
Figure 1. Box-counting algorithm. (a) Representation of the the box-counting algorithm applied to the 2006 occurrences of the Snowy Plover (SNPL), Piping Plover (PIPL), and Red Knot (REKN), for eight
orders of magnitude (in a logarithmic scale), which corresponds to 5000 resolutions of the box-counting grid. In this example at the resolution of box B5 the number of boxes in which there is at
least one occurrence is N(B5)=6. (b) Box-counting example applied to the whole coastline, to the habitat-specific coastline (e.g., beach, salt-marsh), and to other land cover classes as in Convertino
et al. (2011b). Many coastal wetland types are included in the land cover, such as swamp, cypress swamp, mangrove, and salt marsh. The shaded grid cells in (a) and (b) have at least one species
occurrence or a coastline segment at the represented resolution. Two coastline configurations are presented: the first for high values of D[f]and D[K](c), the second for low values of D[f]and D[K](d
). The patches presented in green are connected because their neighboring distance is lower than the maximum dispersal length d[l].
Table 1. Macroecological parameters of the patch-delineation model, and biological data estimated from the literature
Land-cover model
The land cover is predicted year by year by using the Sea Level Affecting Marshes Model (SLAMM) (Clough 2006; Chu-Agor et al. 2011) starting from the year 2006 to 2100. These simulations are
performed in Aiello-Lammens et al. (2011) and Convertino et al. (2010) to which we refer the reader for more details. The domain of the model is extended inland for about 10 km from the coastline
(Convertino et al. 2011a; 2011b) (black region along the coast in Figure 1, box B1). We consider the predicted inundation distance in 2100 (∼ 9 km) for a range of [1, 2] m sea level rise (SLR) adding
1 km to consider the uncertainty in the estimation of the flooding distance. The initial condition is the 2006 land cover from NOAA (Klemas et al. 1993). The NOAA land cover classes are changed into
SLAMM land cover classes for modeling purposes. SLAMM requires us to group the classes of land cover into model classes. The conversion is reported in Convertino et al. (2011b). The SLAMM model also
requires the elevation and slope as input variables. The modeled domain is divided into seven regions (Additional file 1: Figure S1) with distinct historical tidal and SLR trends. Each region is
characterized by a unique set of values for the 26 input parameters (Additional file 1: Table S1) related to tide, accretion, sedimentation, and erosion processes. The value of the parameters is
derived from the available literature and previous efforts of this research (Chu-Agor et al. 2011). In this effort of modeling the land cover, we do not consider any geomorphological feedback between
landforms and climate change that is expected to occur with global warming. All our assumptions are the same as those in Chu-Agor et al. (2011) and Convertino et al. (2010). Also we do not consider
any possible barrier island shifting because that is reported to occur over a time period much longer than our predictions (Masetti et al. 2008).
Habitat suitability model
The employed habitat suitability model is MAXENT (Phillips et al. 2006; Phillips and Miroslav 2008), which is one of the most diffused models in species distribution modeling. MAXENT is a model based
on the principle of maximum entropy that predicts continuous habitat suitability maps of potential species occurrence under a set of selected environmental variables. The environmental variables that
are necessary and sufficient for calculating the habitat suitability are the land cover translated into SLAMM classes (Chu-Agor et al. 2011; Convertino et al. 2011a; 2011b) and the USGS geology layer
(Convertino et al. 2011a; 2011b) at a resolution of 120 m. The resolution 120 m is the home-range distance of the SNPL (Table 1). Such distance is sufficient to capture not only the spatial
variability of habitat preferences of SNPL, but also that of PIPL and REKN, whose home-range distance are much larger than that of SNPL. The habitat suitability at-a-point (i.e., for each pixel of
the modeled domain) can be considered as a proxy to find SNPL, PIPL, and REKN in the breeding and wintering season. The prior probabilities of occurrence are calculated in MAXENT using the recorded
shorebird occurrences constrained to the environmental variables. The occurrences are nest and breeding occurrences for SNPL, and the adult occurrences for PIPL and REKN. Thus, for PIPL and REKN the
habitat suitability refers to the suitability for wintering as in Convertino et al. (2011a). No absences are required in MAXENT. Then the posterior probabilities of occurrence are based on the prior
probabilities given the change in the land cover modeled year by year by the land cover model. A regularization parameter that controls the fit of the predicted suitability to the real occurrence
data is assumed to be equal to one. Non-randomly placed pseudoabsences are used to improve the predictions, and 25% of the occurrences are taken as a training sample (Convertino et al. 2011a; 2011b).
The predicted habitat suitability maps represent the average of over 30 replicates for each year to reduce the uncertainty of the predictions. The habitat suitability is calculated with 10,000 random
background points. Background points are a subset of points of the domain over which the Bayesian inference between the recorded species occurrences, pseudoabsences, and environmental layers is
We assign a biological interpretation to the predicted habitat suitability score, P(hs), which is the probability at-a-point of finding a breeding and/or a wintering ground. Breeding and wintering
grounds are suitable sites for the SNPL as a function of the season considered, and wintering grounds are suitable sites for the PIPL and REKN. We define the suitability index (SI) as a metric from 0
to 100 that captures the quality of the breeding and/or wintering habitat for the species. The higher the SI the larger the biological spectrum of functions performed by the species in that habitat.
Hence, P(hs) is also a surrogate of habitat use during the breeding and wintering seasons of the species considered. In fact, it is legitimate to assume that habitat use increases with habitat
quality. Every pixel of the HS maps is classified into five SI categories: SI=100 [for 0.8≤P(hs)≤1] is considered the best habitat with the highest survival and/or reproductive success; SI=80 [for
0.6≤P(hs)<0.8] is typically associated with successful breeding and/or wintering; SI=60 [for 0.2≤P(hs)<0.6] is associated with consistent use for breeding and wintering; SI=30 [for 0.2<P(hs)] is
associated with occasional use for non-breeding, feeding activities, and wintering; all values less than SI=30 indicate habitat avoided both for breeding and wintering; and SI=0 for completely
unsuitable habitat. We refer the reader to Convertino et al. (2011a) for additional details about MAXENT runs for the SNPL, PIPL and REKN.
Patch-delineation model
Below we define a criterion for delineating breeding and wintering patches for SNPL, PIPL and REKN, respectively. A patch is defined when the following criteria simultaneously hold:
The species-dependent values for the three parameters required for the patch identification are reported in Table 1. The values of biological data in Table 1 are used only to support the choice of
model parameters. The model parameters are calibrated to reproduce a patch-size distribution as close as possible to the box-counting distribution of occurrences in 2006. The model with this set of
parameters was validated against the patch-size distributions from 2002 to 2010 estimated by the box-counting. We define breeding patch as an area large enough to at least occasionally support a
single breeding pair through courtship and rearing of young to dispersal age (Majka et al. 2007). A population patch is defined as an area large enough to support breeding for 10 years or more, even
if the patch is isolated from interaction with other populations of the species (Majka et al. 2007). Since population-wide data are lacking for these breeding and population area requirements, we
assumed that a population patch is at least two times larger than a breeding patch. For the SNPL these patches contain certain nesting patches. The minimum population and breeding/wintering patch
areas are estimated from the literature available and by expert knowledge of the field biologists involved in the sampling campaigns performed for this study (see Burney 2009; Himes 2006; Lamonte and
Douglass 2002; Pruner 2010). S[p] and S[b/w] are the minimum population and breeding/wintering area, respectively, and are proportional to the estimated home range. The minimum breeding/wintering
area is the minimum area that will support breeding and wintering activity of the shorebirds. The home range hr and the home-range distance hrd (the square root of the hr) are values estimated
considering the breeding regions for SNPL, PIPL, and REKN. We assume that S[p] and S[b/w] for PIPL, and REKN are much smaller than hr because they refer to the wintering period of these shorebirds in
Florida. For REKN, S[p]is also reduced due to the habitat limitation and the close coexistence with SNPL in the same habitat. Patches are considered connected if their neighboring distance is equal
to or smaller than d[l], which is the maximum dispersal length. Figure 1c,d shows an example of patches that are connected because their reciprocal distance is lower than d[l]. These plots also
represent our assumption that coastline complexity affects patch distribution. The average neighborhood distance 〈nd〉 is the average dispersal of the species. 〈nd〉 is higher than hrd for the SNPL
due to the higher local dispersal ability estimated from recent surveys (Himes et al. 2006; Pruner 2010). For PIPL and REKN, 〈nd〉is smaller than hrd because the reported hrd refers to their
breeding range in northern states in the USA and Canada. In the winter season PIPL and REKN migrate to Florida, and their dispersal distance is observed to be smaller. Within the neighborhood
distance a subpopulation can be assumed to be panmictic. A panmictic population is one in which all individuals are potential partners. It is usually estimated from the foraging distance of an animal
species. In a more abstract way the neighborhood distance is the glue of all the suitable patches. In a particle physics analogy, it describes the Brownian motion of individuals within a larger
species group. Thus, by using d[l], which is the maximum dispersal, as a criterion in the model, foraging is certain to be considered within patches. Our model considers an upper estimate of the
patch size for all the shorebirds considered. m is the average body mass, which is used to discuss some results. We assume the same biological parameters for the SNPL Panhandle and Peninsula as in
Aiello-Lammens et al. (2011).
Probability distribution of the patch size
The probability of exceedance of the patch size is known in literature as Korčak’s law (Korcak 1940; Nikora et al. 1999), which is expressed by:
where c is a constant, F is a homogeneity function that depends on a characteristic size s[c], and ε=D[K]/2 is the scaling exponent (Korcak 1940; Mandelbrot 1982). D[K]is the fractal dimension of the
patches. The probability of exceedance exhibits a power-law behavior. The probability distribution of the patch size for the predicted patches was used to validate the patch-delineation model against
the box-counting estimates on the real occurrences from 2002 to 2010. The fit of the predicted distribution of patches is performed using a Maximum Likelihood Estimation technique (MLE), which is
described in the Additional file 1.
Perimeter-area relationship
The scaling relationship between the perimeter p and the size S of the patches:
determines the fractal dimension of the mosaic of patches, which considers the fractality of the patch edge. Here we indicate the fractal dimension D[c], which is derived from the same predicted
patches of the introduced patch model (see the “Patch-delineation model” section) but also considers their perimeters. Because Korčak’s law (Korcak 1940) considers only the size of the patches, the
perimeter-area scaling law has been considered as a more precise tool for measuring the fractal dimension. In literature the ratio p/Sis adopted to measure the quality of the patches for population
survivability, that is, the likelihood of surviving in a suitable patch (Helzer and Jelinski 1999; Airoldi 2003; Imre and Bogaert 2004). In general the higher the ratio p/Sthe less suitable the patch
area for the species, and the higher the ratio p/S, the higher the fractal dimension D[c].
Results and discussion
The relationship between the number of cells occupied by shorebird occurrences, N(l), and the length of the side of the box, l, at each scale of analysis is shown in Figure 2. The relationship is a
power-law function, N(l)∼l^D[b], whose exponent D[b] is the fractal dimension of the shorebird occurrence pattern. Figure 2a reports the power-law relationship for PIPL and REKN breeding occurrences
in 2006, and Figure 2b for the SNPL breeding occurrences from 2002 to 2010. The results confirm the supposed scale-free distribution of the shorebird occurrences. The fractal distribution of the
predicted patches is captured by Korčak’s law (Figure 3). The box-counting overestimates the fractal dimension with respect to the fractal dimension of Korčak’s law as shown in Convertino et al. (
2012). The fractal dimension of the box-counting (D[b]) is 1.63, 1.85, and 1.53, and the fractal dimension of Korčak’s law (D[K]) is 1.47, 1.70, and 1.42 for SNPL, PIPL, and REKN in 2006,
respectively (Additional file 1: Tables S2 and S3). The box-counting envisions a more pessimistic scenario for the patch size of shorebirds. However, as in Convertino et al. (2012) we believe that in
the absence of any modeling effort box-counting constitutes a valid technique to calculate the fractal dimension of the mosaic of patches. For the SNPL occurrences, box-counting allows us to detect
the fluctuation over time of the fractal dimension of the recorded nest occurrences and of the coastline. The insets in Figure 2b show the empirical evidence of the correlation between D[f] and D[b],
and Additional file 1: Table S3 reports the values of the fractal dimensions. The analysis raises the question of whether the variation in D[b]is caused by natural fluctuations of the species range
or by changes in external forcing such as natural or anthropogenic stressors. We observe that in 2004 and 2005 the fractal dimension showed a jump possibly due to the exceptional hurricane season in
those years, which altered the positive feedback between tropical cyclones and SNPL nest abundance (Convertino 2011c). This supposition is confirmed by the results of Convertino et al. (2011c).
Figure 2. Box-counting scaling-law in time. (a) Power-law N(l)=N[0]l^−D[b]derived from the box-counting algorithm applied to the occurrences of PIPL (black dots) and REKN (green) in 2006 and to the
whole Florida Gulf coastline. In the inset the schematized Florida coastline is evaluated at different box sizes. (b) Box-counting algorithm applied to the 2002-2010 occurrence of the SNPL. The
fractal dimension derived from the analysis of the breeding and nesting occurrences is D[b]=1.63, 1.62, 1.75, 1.74, 1.63, 1.64, 1.66, 1.68, 1.70 for the years from 2002 to 2010, respectively. In the
inset D[b]and D[f]are reported for each year. Values of D[f]are reported (Additional file 1: Table S3).
Figure 3. Korčak’s law of the predicted suitable patches. The fractal dimension of the patches is derived from the scaling exponent, ε=D[K]/2, of the probability of exceedance of the patch size
(Equation 3) for SNPL, PIPL, and REKN. The probability of exceedence of the patch size is represented for the years 2006, 2020, 2040, 2060, 2080, and 2100. The probability of exceedance is compared
against the box-counting scaling laws for the years 2002-2010. Additional file 1: Table S2 reports the values of D[K]. The insets represent the probability density functions (pdfs) of the patch size
that show a heavy-tailed behavior.
The potential effect of sea level rise, one of the main controlling factors of land cover of coastal habitats, is studied here. The simulated variation in land cover classes over time is performed in
SLAMM (Clough 2010) for the Gulf Coast of Florida (Additional file 1: Figures S1 and S2). We predict by 2100 a decrease in the salt-marsh and estuarine beach classes, which are crucial habitats for
PIPL, SNPL, and REKN. We also predict a net decrease in swamp and inland fresh marsh habitats. Following flooding predicted to occur after 2060, undeveloped drylands will change mostly into tidal
flats, which may shift into estuarine open water (Additional file 1: Figure S2). We estimate a 6% increase in estuarine open water and a 10% increase in ocean open water from 2006 to 2100. We expect
global land-loss independent of the land cover class of about 16% with respect to the 2 m sea level rise. A video in Additional file 2 and Figure S2 in Additional file 1 show the evolution of land
cover and of the coastline geomorphology over time. Additional file 1: Figures S3, S4, and S5 report the suitability index derived from the predicted habitat suitability maps using MAXENT
corresponding to the yearly land cover maps. The patches are then calculated using the patch-delineation model introduced in the “Patch-delineation model” section and the habitat suitability maps.
The power-law structure of the patch size holds for every year simulated (Figure 3), which proves the scale-invariance of the suitable habitat over time. By using the maximum likelihood estimation
(MLE) criteria, we found that the Pareto-Lévy probability function has the best fit for the predicted distribution of the patch size (Additional file 1). Korčak’s law exhibits some finite-size
effects before the upper truncation and a potential lower-cutoff in the power-law behavior. However, these variations from the power law are quite common in natural systems due to the finiteness of
the variable sampled. Thus, we can claim an overall scale-invariance of the patch size. Additional file 1: Table S2 reports the fractal dimension derived from Korčak’s law for 2006, 2020, 2040, 2060,
2080, and 2100. The scale-invariance of the habitat patterns of the SNPL was shown in Convertino et al. (2011b) for the prediction of the habitat suitability in 2006. Here we show that, given the
scale-invariance of the patch size, fluctuations in the scaling exponent ε=D[K]/2 of Korčak’s law occur. We believe that these fluctuations are related to variations in the land cover, which changes
the coastline fractality. The higher the fractal dimension, the higher the fragmentation of the shorebird habitat. The fragmentation of the habitat creates smaller patches for wintering and breeding
for PIPL and REKN, and for SNPL, respectively. Brownian-Lévy movements of shorebirds might be the cause for the scale-invariance of the occurrence patterns that can be detected by the box-counting.
This has been proven for other marine animals (Humphries et al. 2010) and colonial birds (Jovani et al. 2008). However, in this study we do not reproduce any movement of species as we believe that
the size and number of patches is affected by the geomorphological evolution of the coastline, which in turn affects the movement of shorebirds.
The worst scenario for the vulnerability of shorebirds is predicted considering the fractal dimension D[b] from the box-counting. Moreover, the box-counting suffers from the risk of potentially
unsampled occurrences. The Korčak’s law fractal dimension (Figure 3) is based on the size of the predicted suitable patches (potential habitat range), while the box-counting (Figure 2) is an
approximation that only captures the recorded occurrences (realized range). The fact that D[K]≅D[b] for the 2006-2010 period in which SNPL nest occurrences are available confirms the good estimation
of MAXENT of the realized range as previously found in Convertino et al. (2011b). A more accurate estimation of the fractal dimension that is intermediate between D[K]and D[b] is given by the patch
perimeter-size scaling relationship (Figure 4). The perimeter-size relationship captures the edge effects of patches on species. In general shorebird species prefer to live in patches whose shapes
are as regular as possible versus highly irregularly shaped patches such as the patches determined by a very complex coastline. The survivability of the species is higher for those inhabiting patches
with large perimeters and simple shapes than for those inhabiting patches of equivalent area but complex shape. The larger the edge effect determined by the complexity of the patch parameter, the
lower the probability of survival for the individuals within the species within the patch. However, there are some cases of “edge species” for whom irregular shapes are preferred. In our case it was
observed that D[K]≤D[c]≤D[b]. Hence, the estimation of the fractal dimension by using Korčak’s law forecasts the best scenario predicting the least amount of fragmentation due to sea level rise. D[c]
predicts greater fragmentation than D[K]because the fractality of the patch’s perimeter is considered, but overall D[c] seems the best estimate of the fractal dimension between Korčak’s law and the
box-counting estimates.
Figure 4. Perimeter-size relationship for SNPL, PIPL, and REKN. Perimeter-size relationship ( ) for the predicted suitable patches of the SNPL (a), PIPL (b), and REKN (c), in 2006 and 2100. The
exponent D[c]for the SNPL is listed in Additional file 1: Table S3.
Figure 5a,b respectively, show the time series of the fractal dimension of the species-dependent habitat coastline D[f] (mostly beach for SNPL, PIPL, and REKN, but also salt marsh for the PIPL), and
of the fractal dimension of the patches D[K] (from Equation 3) computed with the patch-delineation model. Additional file 1: Figures S3, S4, and S5 show the patches for SNPL, PIPL, and REKN in the
years 2006, 2020, 2040, 2060, 2080, and 2100. The majority of patches are along the barrier islands and particularly in the Panhandle region. After 2060, when sea levels start to rapidly rise, a
consistent portion of the patches will be found along the shore as barrier islands gradually disappear. Figure 5a also shows the variation in the fractal dimension of the whole coastline
independently of the land cover class. The probability of finding a patch of size S is lower in 2100 than in 2006. D[K] values are similar for SNPL and REKN and are higher for PIPL (Figure 5b). Thus,
on average the relationship holds for the modeled period. Big variations in are observed particularly in correspondence with big variations in the salt-marsh habitat (Figure 5a,b), which confirms
the likelihood of finding a breeding ground of PIPL in the salt-marsh habitat (class 8 contained in Additional file 1: Figure S6) as reported in literature (Convertino et al. 2011a) and as found by
our results (Additional file 1: Figure S6). In 2100 the fractal dimension of REKN is very similar to the fractal dimension of SNPL, while the fractal dimension of PIPL is the highest. The PIPL shows
the lowest probability of large patches with respect to the other shorebirds considered because D[K] is the highest. The area under the power-law distribution of patches for 2100 in Figure 3 has a 5,
3 and 8% negative variation with respect to the area for 2006 for SNPL, PIPL and REKN. The area under the curve is the overall probability of finding patches of any size in a given year. Just
comparing 2006 with 2100 is not enough to derive any conclusion about the species with the highest potential risk of decline. The location and size of the patches are determined by the habitat
suitability at-a-point and by a combination of dispersal and area criteria (see the “Patch-delineation model” section). The Piping Plover, despite having a larger spectrum of habitat preferences than
SNPL and REKN [transitional marsh and salt marsh areas are favorable classes as shown in Additional file 1: Figure S6 and as reported in Convertino et al. (2011a)] seems to be at risk due to the high
fragmentation of its habitat. This is evidenced by the larger fluctuation of than and .
Figure 5. Fractal dimension time series of the shorebirds patches and of the coastline. (a) Time series of the fractal dimension D[f]of the entire coastline (blue line), of the salt-marsh (red), and
of the beach (green) habitat coastlines, determined by the box-counting algorithm. (b) Fractal dimension D[K]over time for the patches for SNPL (blue dots), PIPL (red), and REKN (green) derived from
Korčak’s law. The dashed gray lines (a, b) represent the 95% confidence interval of the estimated D[f]and D[K]. (c) Scaling relationship among the fractal dimension of the patches for the threatened,
endangered, and potentially at-risk shorebird species (TER-s) and the fractal dimension of the favorable habitat coastlines (salt marsh for PIPL, and beach for SNPL and REKN). The average
species-independent scaling exponent is γ= 1.67. The gray cloud (c) represents the 95% confidence interval for the linear regression between D[K]and D[f].
We believe that it is important to observe the fluctuations of D[K] over time for each species. D[K]values of SNPL and REKN are on average steady and increasing over time, respectively; thus, the
probability of finding large patches for these shorebirds decreases over time with respect to 2006. D[K] of PIPL has the largest fluctuations, but most of these fluctuations imply an increase in the
probability of finding large patches with respect to 2006. Nonetheless we believe the frequent and large variation in patches is not a good scenario for species.
In Figure 5c we propose a scaling relationship between the fractal dimension of patches and the fractal dimension of the habitat-specific coastline, D[K]∼D[f]^γ. The relationship holds over at least
two orders of magnitude, from the smallest patches (∼ 0.01km^2) and short coastline segments to the largest patches and the whole Florida Gulf coastline. The same scaling exponent is observed for
SNPL, PIPL, and REKN, underlining a possible common ecogeomorphological organization of the landscape under sea level rise pressure. In Figure 5cD[f] is characteristic of the portion of coastline in
which there is a suitable habitat for SNPL, PIPL, and REKN, which is evidenced in Additional file 1: Figure S6. The coupled evolution of the land cover and habitat patterns may hold clues about the
linkage of geomorphological and ecological processes. The scaling relationship between the fractal dimensions of patches and coastline can be a potential tool to measure the vulnerability of the
species in the future. The higher the exponent γ, the higher the potential risk of decline of the species. For small changes in the configuration of the coastline, a large fragmentation of the
suitable habitat would potentially be observed. For species with comparable values of γ, which is the case for SNPL, PIPL, and REKN, the range of values of D[K] and D[f] is important for detecting
which species may be subjected to the most significant change in the suitable habitat patches. The lower D[K], the higher the likelihood of having large patches. To the best of our knowledge this is
the first scaling relationship to be identified between fractal dimensions of landscape and ecological patterns. In this respect this relationship brings insights into the field of “landscape
allometry,” which is the study of the possible scaling of landscape and ecological patterns and processes. The relationship is between fractal dimensions, which are indicators that focus on how
measured quantities vary as a power of measurement scale, but at the same time the relationship has an allometric focus, between the coastline complexity and the magnitude of habitat fragmentation.
However, fragmentation per se does not directly imply loss of connectivity among patches. Figure 6 shows how the average size of the patches 〈s〉for SNPL, PIPL, and REKN decreases with the increase
in the fractal dimension of the patches. Here we consider D[K] of Korčak’s law for the fractal dimension. At the same time we observe an increase in the number of patches N[p]. Thus, the variation in
the coastline produces fragmentation, rather than shrinking, of the suitable habitat. The former does not imply the latter as erroneously assumed by many theoretical models in the ecological
literature. The average size of the PIPL patches is lower than that for SNPL and REKN, and the habitat for the PIPL is the most fragmented (N[p] is the highest on average). This is related to the
high value of D[K]for the PIPL with respect to SNPL and REKN. Thus, although the variations in would predict bigger patches, the fragmentation of the PIPL habitat is the greatest. In 2100 the number
of suitable patches for SNPL, PIPL, and REKN is predicted to be higher than in 2006, but the average size of the patches is predicted to be smaller (Additional file 1: Table S2). As sea level rise
(SLR) increases the complexity of the coastline, habitat patches moderately shrink and split. On the contrary when the coastline complexity decreases, habitat patches enlarge and coalesce (Figure 1c)
as in our assumption depicted in Figure 1b. The PIPL seems to be the shorebird most affected by the changes in its breeding habitat due to sea level rise.
Figure 6. Relationships among patch number, size, and connectivity, and fractal dimension of the habitat-specific coastline. 〈s〉vs D[K](a), N[p]vs D[K](b), N[p]vs 〈s〉(c), and 〈c〉vs D[K](d) for
the threatened, endangered, and at-risk shorebirds (TERs) considered. The dots are the bin averages over 30 simulations for each year for the period 2006-2100. The dashed lines represent the 95%
confidence intervals for the dependent variables considered.
The average size and the number of the patches are inversely proportional given the relationship in Figure 6a,b and as shown in Additional file 1: Figure S7. The average patch size 〈s〉 for the
shorebirds is not proportional to the average body mass m as possibly expected (Table 1), although the latter scales with the average dispersal length. The 〈s〉is for the PIPL, while it is larger
for SNPL and REKN. This emphasizes the controlling role of habitat geomorphology in shaping the patch distribution. The PIPL also depends on the salt-marsh habitat, which t is one of the classes more
seriously compromised by SLR. We consider d[l], the estimated maximum dispersal length, in order to determine the average number of connected patches 〈c〉. d[l]considers rare “Lévy flights” of
individuals of the species in the ecosystem. Lévy flights are a special class of random walk with movement displacements drawn from a probability distribution with a power-law tail (the so-called
Pareto-Lévy distribution), and they give rise to stochastic processes closely linked to fractal geometry and anomalous diffusion phenomena. Because it has the largest maximum dispersal distance, the
REKN has the highest number of connected patches. However, for the three shorebird species 〈c〉increases with the fractal dimension of the patches, indicating a measure of the habitat fragmentation.
Because we find that climate change is responsible for the splitting of the patches, rather than their shrinking, and because the dispersal capability of species is not expected to change
consistently in the modeled period, the result seems justifiable. The increase in the number of connected patches is explainable because N[p]increases without a drastic reduction in the habitat. The
average connectivity of the predicted breeding and wintering patches is an increasing function of the fractal dimension of the patches. The increasing roughness of the Florida coastline due to
climate change produces a larger number of patches with smaller dimensions. The increased connectivity would potentially enhance the survivability of the shorebirds despite the decrease in the
average size of suitable patches. Thus, the predicted patch patterns for the Florida shorebirds are not the worst case scenario in which both the connectivity and the dimension of the patches are
reduced. Further explanation of the land cover, habitat, and patch dynamics is provided in Additional file 1.
Sea level rise due to climate change, beyond being a human-population threat, is shown to strongly affect biodiversity such as residential and migrant shorebird populations in Florida. The integrated
patch-prediction modeling framework proposed in this paper constitutes a parsimonious but useful risk assessment tool for species decline with respect to more accurate metapopulation models. In our
opinion, the understanding of ecogeomorphological processes at any scale of analysis together with the detection of useful indicators of such dynamics is one of the primary goals to protect
biodiversity against the anticipated changes in the landscape due to climate change. On the one hand, it is impossible to consider, or to estimate with low uncertainty, all the factors affecting the
processes that govern the distribution of species (e.g., conspecific attractions, interspecific competition, density dependence, sex structure, life history, phenotypic plasticity, and phenological
changes in dispersal ability and in breeding/wintering area requirements), the geomorphological processes, and the links and feedbacks among these processes. On the other hand, we believe that a
top-down approach of biocomplexity is useful to detect the fundamental drivers of the observed patterns of interest (Schwimmer 2008; National Research Council 2009; Reinhardt et al. 2010). We are
aware that many geomorphological and biological processes are not incorporated in the presented model; however, the uncertainty in the quantification of these processes and the interaction of these
uncertainties may produce erroneous results in the predictions. The integrated model is capable of providing valuable macroscale predictions with relatively few data and variables. Thus, the model is
useful for evaluating conservation actions for increasing the survivability of shorebirds in Florida. We are also confident that the proposed model, properly tuned, can be applied to many different
species in coastal ecosystems worldwide that are threatened by sea level rise. We anticipate further development of this model at higher levels of complexity and also for inland sites. The following
conclusions are worth mentioning.
A scale-free distribution of nesting, breeding, and wintering occurrences was detected for the Snowy Plover in Florida. The scale-free distribution was also found for the wintering occurrences of
Piping Plover and Red Knot. The distribution was derived through the box-counting technique applied to the breeding and wintering occurrences, which gives a proxy of the fractal dimension of
shorebird patches. Empirical evidence shows that the fractal dimension of the occurrences is strongly positively correlated with the coastline fractal dimension, which underlines an
ecogeomorphological organization, i.e., a coupling of ecological and geomorphological patterns. The power law held for any season of the shorebird annual cycle, demonstrating the high importance of
the physical habitat on species processes.
We predicted breeding and wintering patches of shorebirds, simulating land cover (which comprises many coastal wetland types) and habitat suitability at the year scale from 2006 to 2100 as a function
of sea level rise. Patches were identified by a set of macroecological criteria, such as area, habitat suitability, and neighboring distance, as a function of the maximum dispersal. The distribution
of the predicted patch size was Korčak’s law, whose exponent is half of the fractal dimension of the patches. We validated the model by predicting the observed patch-size distribution and patch
patterns from 2002 to 2010 where data were available. We also investigated the perimeter-size relationship for estimating the fractal dimension of the patches at a higher level of complexity because
of the calculation of the perimeter. The fractal dimension provided by the perimeter-size relationship provided a median estimate between the values derived from Korčak’s law and the box-counting
distribution. Korčak’s law provided the most optimistic scenario of fragmentation in which the probability of finding large patches was the highest, while the box-counting provided the most
pessimistic scenario. Hence, the perimeter-area relationship is suggested as the best method to calculate the fractal dimension of the mosaic of habitat patches.
The robustness of the Pareto-Lévy distribution of the patch size was verified for predictions of patches from 2006 to 2100. Thus, the scale-invariance of the patch patterns holds in time despite the
strong influence of sea level rise. This may be related to a sort of simulated “biological resilience” of species to the external changes (Folke et al. 2004) by assuming invariant habitat area and
dispersal requirement. Scale-free habitat patterns have proven to be the most resilient to external stressors in previous studies (Kefi et al. 2011). Thus, the shape of the patch-size probability and
the fractal dimension when this probability is a power law can be useful indicators to estimate the “degree of stress” of coastal ecosystems. Further research is anticipated to understand when and
how the patch-size probability deviates from a Pareto-Lévy behavior. The fragmentation, which is proportional to the fractal dimension of the habitat-specific coastline, varied considerably over time
and in particular for the Piping Plover. However, the risk of extirpation in 2100 for SNPL, PIPL, and REKN was not high with respect to 2006. We note that the comparison between final and initial
years’ risk should not be the only comparison in evaluating the risk of decline of a species. The overall trend of the fractal dimension in the modeled period has to be evaluated as well.
A scaling relationship was found between the fractal dimensions of the patches and of the habitat-specific coastline. The scaling exponent of this relationship appears to be species-independent for
the shorebirds considered. Further research is needed to explore the conditions of universality (species- and ecosystem-wise) of this relationship, which may be related to the species considered. The
fluctuation in the fractal dimension of the coastline can be assumed to be a valuable ecological indicator for assessing variation in patch patterns of breeding and wintering shorebirds.
We demonstrated that habitat loss, fragmentation, and connectivity are three separate concepts. Although these variables are closely linked to each other, their causality is not trivial. For the
shorebirds studied, the predicted fragmentation was coupled with habitat loss while the connectivity increased. The fact that the patches, even if smaller, were connected is an extremely positive
factor that ensures dispersal and gene flow; thus, the connectivity of patches enhances the survivability of shorebirds. Birth, death, and dispersal processes of a species can overcome the
habitat-loss effect and a decrease in the average size of patches. Yet, a lower metapopulation risk of extirpation exists if interpatch migration is allowed (Kindvall and Petersson 2000). However, a
decrease in the average patch size can potentially increase intra-species competition for foraging (Ritchie 1998) and decrease carrying capacity. A possible optimal ecogeomorphological state of the
coastal ecosystem may be characterized by the smallest fractal dimension of the coastline that maximizes the compactness of the suitable patches. This configuration also minimizes the fractal
dimension of the patches. The highest entropy of this configuration may translate into the smallest energy expenditure of the species that inhabit the habitat, for example, for foraging and breeding
activities. The entropy of geomorphological landforms (Nieves et al. 2010) may, in fact, be highly correlated with the scale-invariance of ecological patterns such as species-patch patterns.
submitted to Ecological Processes - Special Issue “Wetlands In a Complex World”, Guest Editor: Dr. Matteo Convertino
SNPL: Snowy Plover; PIPL: Piping Plover; REKN: Red Knot; TER: threatened, endangered, and at risk; SLAMM: Sea Level Affecting Marshes Model; SLR: sea level rise; Df: fractal dimension of the
coastline (from box-counting); Db: fractal dimension of the breeding and wintering occurrences (from box-counting); DK: fractal dimension of the patches (from Korčak’s law); Dc: fractal dimension of
the patches (from perimeter-size relationship); S: patch-size; p: patch perimeter; P(hs): habitat suitability score; SI: suitability index; Sp: minimum population patch-size; Sb/w: minimum breeding/
wintering patch-size; hr: home-range; hrd: home-range distance; dl: maximum dispersal length.
Author’s contributions
MC designed the study, managed and analyzed the data, wrote the model (box-counting and patch delineation model), developed the theory, and wrote the manuscript. AB assisted in making the
calculations and analysis, and helped in writing the manuscript. GAK and RMC participated in the habitat suitability modeling framework and reviewed the manuscript. IL supervised the whole work, and
reviewed the manuscript by providing a practical angle to this research for effective environmental management. All authors read and approved the final manuscript.
Authors’ information
MC is Research Scientist at the University of Florida, Gainesville, and a Contractor of the Engineering Research and Development Center of the US Army Corps of Engineers at the Risk and Decision
Science Team. AB is currently a financial analyst at Frontier Airlines. AB got his B.Sc and M.Sc. from MIT, Civil and Environmental Engineering program. AB performed his research internship at the
Risk and Decision Science Team in the summer of 2011. GAK and RMC are Associate and Professor at the University of Florida, Gainesville, respectively. IL is team leader of the Risk and Decision
Science Team of the Engineering Research and Development Center of the US Army Corps of Engineers.
This research was supported by the US Department of Defense, through the Strategic Environmental Research and Development Program (SERDP), Project SI-1699. M.C. acknowledges the funding of project
“Decision and Risk Analysis Applications Environmental Assessment and Supply Chain Risks” for his research at the Risk and Decision Science Team. The computational resources of the University of
Florida High-Performance Computing Center (http://hpc.ufl.edu) are kindly acknowledged. The authors cordially thank Dr. RA Fisher (Engineering Research and Development Center of the US Army Corps of
Engineers) and the Eglin Air Force Base personnel for their help in obtaining the data and for the useful information about the breeding information of SNPL. Tyndall Air Force Base and Florida
Wildlife Commission are also gratefully acknowledged for the assistance with the data. We thank M.L. Chu-Agor (currently at the Center of Environmental Sciences, Department of Biology and Earth and
Atmospheric Sciences, Saint Louis University, St. Louis, MO) for her computational effort with SLAMM at the University of Florida. Permission was granted by the USACE Chief of Engineers to publish
this material. The views and opinions expressed in this paper are those of the individual authors and not those of the US Army or other sponsor organizations.
• Aiello-Lammens ME, Chu-Agor ML, Convertino M, Fischer RA, Linkov I, Resit Akcakaya H (2011) The impact of sea level rise on snowy plovers in Florida: integrating geomorphological, habitat, and
metapopulation models. Global Change Biol 17:3644-3654 Publisher Full Text
• Airoldi L (2003) Effects of patch shape in intertidal algal mosaics: roles of area, perimeter and distance from edge. Mar Biol 143:639-650 Publisher Full Text
• Alliance FS (2010) Florida Panhandle Shorebird Working Group.
Tech. rep FWC/Audubon Florida/USFWS. http://www.flshorebirdalliance.org/resources-pages/maps.html webcite
• Audubon (2006) America’s top ten most endangered birds. Tech. rep. National Audubon Society, New york. PubMed Abstract
• Baldassarri A, Sapoval B, Felix S (2012) A numerical retro-action model relates rocky coast erosion to percolation theory.
Arxiv. http://arxiv.org/pdf/1202.4286v1.pdf webcite
• Banavar JR, Colaiori F, Flammini A, Maritan A, Rinaldo A (2001) Scaling, optimality and landscape evolution. J Stat Phys 104:1-33 Publisher Full Text
• Bonabeau E, Dagorn L, Freon P (1999) Scaling in animal group-size distributions. Proc Nat Acad Sci 96:4472-4477 PubMed Abstract | Publisher Full Text | PubMed Central Full Text
• Buldyrev S, Dokholyan N, Erramilli S, Hong M, Kim J, Malescio G, Stanley H (2003) Hierarchy in social organization. Phys Stat Mech Appl 330(3-4):653-659 Publisher Full Text
• Burney C (2009) Florida beach-nesting bird report, 2005-2008.
Tech. rep. Florida Fish and Wildlife Conservation Commission, Tallahassee. http://www.flshorebirdalliance.org/pdf/2005-2008_FWC_BNB_Report.pdf webcite
• C-CAP (2009) Coastal change analysis program regional land cover.
Tech. rep. NOAA, Washington DC. http://www.csc.noaa.gov/digitalcoast/data/ccapregional/ webcite
• Chu-Agor M, Muñoz-Carpena R, Kiker G, Emanuelsson A, Linkov I (2011) Exploring sea level rise vulnerability of coastal habitats using global sensitivity and uncertainty analysis. Environ Model
Software 26(5):593-604 Publisher Full Text
• Clough J (2006) Application of SLAMM 4.1 to nine sites in Florida.
Warren Pinnacle Consulting, Waitsfield, VT. http://warrenpinnacle.com/prof/SLAMM/NWF_SLAMM_FLORIDA_2-16-2006.doc webcite
• Clough JS (2010) The Sea Level Affecting Marshes Model.
Tech. rep. Warren Pinnacle Consulting, Waitsfield, VT. http://warrenpinnacle.com/prof/SLAMM6/SLAMM6_Technical_Documentation.pdf webcite
• Convertino M, Kiker G, Chu-Agor M, Muñoz-Carpena R, Martinez C, Aiello-Lammens M, Akçakaya H, Fisher R, Linkov I (2010) Integrated modeling to mitigate climate change risk due to sea-level rise
of imperiled shorebirds on Florida coastal military installations. Springer, Dordrecht.
• Convertino M, Donoghue J, Chu-Agor M, Kiker G, Munoz-Carpena R, Fischer R, Linkov I (2011a) Anthropogenic renourishment feedback on shorebirds: a multispecies Bayesian perspective. Ecol Eng
37:1184-1194 Publisher Full Text
• Convertino M, Kiker G, Munoz-Carpena R, Chu-Agor M, Fischer R, Linkov I (2011b) Scale- and resolution-invariance of suitable geographic range for shorebird metapopulations. Ecol Complexity 8
(4):364-376 Publisher Full Text
• Convertino M, Elsner J, Muñoz-Carpena R, Kiker G, Fisher R, Linkov I (2011c) Do tropical cyclones shape shorebird habitat patterns? Biogeoclimatology of Snowy Plovers in Florida. PLoS ONE 6(1):p1
• Convertino M, Simini F, Catani F, Kiker G (2012) From river basins to elephants to bacteria colonies: aggregate-size spectrum of animate and inanimate species. PLoS ONE (in press).
• del Barrio G, Harrison P, Berry P, Butt N, Sanjuan M, Pearson R, Dawson T (2006) Integrating multiple modelling approaches to predict the potential impacts of climate change on species’
distributions in contrasting regions: comparison and implications for policy. Environ Sci Policy 9(2):129-147 Publisher Full Text
• Ehrlich PR, Levin SA (2005) The evolution of norms. PLoS Biol 3(6):e194 PubMed Abstract | Publisher Full Text | PubMed Central Full Text
• Elliott Smith E, Haig S (2004) Piping plover (charadrius melodus). Cornell Lab of Ornithology, Ithaca. Publisher Full Text
• F-DEP, State Geological Map FDEP-GEO (2001)
Tech. rep., Florida Department of Environmental Protection (data from, FGDL, University of Florida GEOPLAN Center). http://www.dep.state.fl.us/geology/gisdatamaps/state_geo_map.htm webcite
• Fallon F (2005) Petition to list the red knot (caladris canutus rufa) as endangered and request for emergency listing under the endangered species act.
Tech. rep. Delaware Riverkeeper Network, American Littoral Society, Delmarva Ornithological Society, Delaware Chapter of the Sierra Club, New Jersey Audubon Society. http://www.fws.gov/northeast/
redknot/riverkeeper.pdf webcite
• Folke C, Carpenter S, Walker B, Scheffer M, Elmqvist T, Gunderson L, Holling C (2004) Regime shifts, resilience and biodiversity in ecosystem management. Annu Rev Ecol Evol Syst 35:557-581
Publisher Full Text
• FWC (2010) Florida beach-nesting birds website.
Tech. rep. Florida Fish and Wildlife Conservation Commission. http://legacy.myfwc.com/bnb/ webcite
• Harrington B (2001) Red knot (calidris canutus). Cornell Lab of Ornithology, Ithaca. Publisher Full Text
• Himes J, Douglass N, Pruner R, Croft A, Seckinger E (2006) Status and Distribution of Snowy Plover in Florida, 2006.
Tech. rep. Florida Wildlife Conservation Commission. http://www.flshorebirdalliance.org/pdf/Himes_Douglass-2006_SNPL_Report.pdf webcite
• Humphries N, et al. (2010) Environmental context explains Lévy and Brownian movement patterns of marine predators. Nature 465(7301):1066-1069 PubMed Abstract | Publisher Full Text
• Imre AR, Bogaert J (2004) The fractal dimension as a measure of the quality of habitats. Acta Biotheoretica 52:41-56 PubMed Abstract
• Jovani R, Tella J (2007) Fractal bird nest distribution produces scale-free colony sizes. Proc R Soc B 274:2465-2469 PubMed Abstract | Publisher Full Text | PubMed Central Full Text
• Jovani R, Serrano D, Tella JL, Adler FR, Ursúa E (2008) Truncated power laws reveal a link between low-level behavioral processes and grouping patterns in a colonial bird. PLoS ONE 3:1992-+
Publisher Full Text
• Kéfi S, Rietkerk M, Alados CL, Pueyo Y, Papanastasis VP, Elaich A, de Ruiter PC (2007) Spatial vegetation patterns and imminent desertification in Mediterranean arid ecosystems. Nature
449:213-217 PubMed Abstract | Publisher Full Text
• Kefi S, Rietkerk M, Roy M, Franc A, de Ruiter P, Pascual M (2011) Robust scaling in ecosystems and the meltdown of patch size distributions before extinction. Ecol Lett 14(1):29-35 PubMed
Abstract | Publisher Full Text
• Kindvall O, Petersson A (2000) onsequences of modelling interpatch migration as a function of patch geometry when predicting metapopulation extinction risk. Ecol Modell 129(1):101-109 Publisher
Full Text
• Klemas V, Dobson J, Ferguson R, Haddad K (1993) A coastal land cover classification system for the noaa coastwatch change analysis project. J Coastal Res 9(3):862-872
• Lamonte K, Douglass N (2002) Status and Distribution of Snowy Plover in Florida, 2002.
Tech. rep. Florida Wildlife Conservation Commission. http://www.flshorebirdalliance.org/pdf/Lamonte_Douglass-2002_SNPL_Report.pdf webcite
• Leyrer J, Spaans B, Camara M, Piersma T (2006) Small home ranges and high site fidelity in red knots (calidris c. canutus) wintering on the banc dArguin, mauritania. J Ornithology 147(2):376-384
Publisher Full Text
• Majka D, Jenness J, Beier P (2007) Corridordesigner: Arcgis tools for designing and evaluating corridors. CorridorDesigner.
http://corridordesign.org webcite
• Mandelbrot B (1967) How long is the coast of britain? Statistical self-similarity and fractional dimension. Science 156(3775): Publisher Full Text
• Masetti R, Fagherazzi S, Montanari A (2008) Application of a barrier island translation model to the millennial-scale evolution of Sand Key, Florida. Continental Shelf Res 28:1116-1126 Publisher
Full Text
• Morais PA, Oliveira EA, Araújo NAM, Herrmann HJ, Andrade JS (2011) Fractality of eroded coastlines of correlated landscapes. Phys Rev E 84:016,102 Publisher Full Text
• Muller S, Muñoz-Carpena R, Kiker G (2010) Model relevance: frameworks for, exploring the complexity-sensitivity-uncertainty trilemma. NATO book, Amsterdam.
• National Research Council, NAS (ed) (2009) A New Biology for the 21st Century. The National Academies Press, Washington, DC.
• Nieves V, Wang J, Bras R, Wood E (2010) Maximum entropy distributions of scale-invariant processes. Phys Rev Lett 105(11):118,701+ Publisher Full Text
• Nikora V, Pearson C, Shankar U (1999) Scaling properties in landscape patterns: New Zealand Experience. Landscape Ecol 14:17-33 Publisher Full Text
• Page G, Stenzel L, Warriner J, Paton P (2009) Snowy plover (charadrius alexandrinus).
In: Cornell I Lab of Ornithology, Poole A (eds) The Birds of North America Online, vol. 5. , http://bna.birds.cornell.edu/bna/species/154 webcite
Publisher Full Text
• Paola C, Leeder M (2011) Environmental dynamics: Simplicity versus complexity. Nature 469:38-39 PubMed Abstract | Publisher Full Text
• Pascual M, Manojit R, Guichard F, Flierl G (2002) Cluster-size distributions: signatures of self-organization in spatial ecologies. Phil Trans R Soc Lond B 357:657-666 Publisher Full Text
• Pascual M, Roy M, Laneri K (2011) Simple models for complex systems: exploiting the relationship between local and global densities. Theor Ecol 4(2):211-222 Publisher Full Text
• Paton P, Edwards T (1996) Factors affecting interannual movements of snowy plovers. The Auk 113(3):534-543 Publisher Full Text
• Phillips S, Anderson R, Schapire R (2006) Maximum entropy modeling of species geographic distributions. Ecol Modell 190(3-4):231-259 Publisher Full Text
• Phillips SJ, Miroslav D (2008) Modeling of species distributions with maxent: new extensions and a comprehensive evaluation. Ecography 31(2):161-175 Publisher Full Text
• Pruner R (2010) Assessing habitat selection, reproductive performance, and the affects of anthropogenic disturbance of the Snowy Plover along the Florida Gulf coast.
Master’s thesis, University of Florida, Gainesville, USA
• Reinhardt L, Jerolmack D, Cardinale BJ, Vanacker V, Wright J (2010) Dynamic interactions of life and its landscape: feedbacks at the interface of geomorphology and ecology. Earth Surf Processes
Landforms 35:78-101 Publisher Full Text
• Rinaldo A, Dietrich WE, Rigon R, Vogel GK, Rodrlguez-Lturbe I (1995) Geomorphological signatures of varying climate. Nature 374:632-635 Publisher Full Text
• Ritchie M (1998) Scale-dependent foraging and patch choice in fractal environments. Evolutionary Ecol 12:309-330 Publisher Full Text
• Scanlon TM, Caylor KK, Levin SA, Rodriguez-Iturbe I (2007) Positive feedbacks promote power-law clustering of Kalahari vegetation. Nature 449:209-212 PubMed Abstract | Publisher Full Text
• Schneider D, Tella J, Scaling theory: application to marine ornithology (2002) Ecosystems 5:736-748 Publisher Full Text
• Schwimmer RA (2008) A temporal geometric analysis of eroding marsh shorelines: can fractal dimensions be related to processes. J Coastal Res 1:152-158
• Seavey JR, Gilmer B, McGarigal KM (2010) Effect of sea level rise on piping plover (charadrius melodus) breeding habitat. Biol Conserv. Publisher Full Text
• Solé R (2006)Bascompte, J (eds) Self-organization in Complex Ecosystems, Princeton University Press, Princeton, NJ.
• Stenzel L, Warriner J, Warriner J, Wilson K, Bidstrup F, Page G (1994) Long-distance breeding dispersal of Snowy Plovers in western North America. J Animal Ecol 63:887-902 Publisher Full Text
• Stenzel L, Page G, Warriner J, Warriner J, George D, Eyster C, Ramer B, Neuman K (2007) Survival and natal dispersal of juvenile snowy plovers (Charadrius alexandrinus) in central coastal
California. The Auk 124(3):1023-1036 Publisher Full Text
• USFWS (2009) Piping Plover (Charadrius melodus) 5-Year Review: Summary and Evaluation.
Tech. rep., US Fish and Wildlife Service. http://www.fws.gov/northeast/endangered/PDF/Piping_Plover_five_year_review_and_summary.pdf webcite
• USGS (2010) National Elevation Dataset.
Tech. rep., United State Geological Survey. http://ned.usgs.gov/ webcite
• USGS-FWS (2009) Data from the 2006, International Piping Plover Census (IPIPLC),.
Tech. rep., United State Geological Survey and USFWS. http://www.flshorebirdalliance.org/pdf/Elliot-Smith_Haig-2006_PIPL_Report.pdf webcite
Sign up to receive new article alerts from Ecological Processes
|
{"url":"http://www.ecologicalprocesses.com/content/1/1/9","timestamp":"2014-04-16T07:13:51Z","content_type":null,"content_length":"210664","record_id":"<urn:uuid:3df32f0c-ff05-43cb-a469-312df1214fa7>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to convert a sine wave into a triangular wave
Full Member level 3
Join Date
Dec 2005
6 / 6
sine wave into intergrator
Have a query. How do we convert a sine wave output into a sawtooth wave?
Advanced Member level 5
Join Date
May 2008
1425 / 1425
convert sine wave to sawtooth
One simple solution would be first to convert it into a squarewave (bistable circuit, hard limiting) and than to integrate the signal.
Full Member level 2
Join Date
Aug 2006
7 / 7
convert sine to sawtooth
triangular sawtooth wave can be gernarated by summing all even multiples of sine wave.
Advanced Member level 5
Join Date
Jan 2008
Toronto area of Canada
725 / 725
convert sine wave to triangle wave
triangular sawtooth wave can be gernarated by summing all even multiples of sine wave.
But a pure sine-wave doesn't have any "multiples" (harmonics).
Full Member level 5
Join Date
Jun 2008
19 / 19
saw wave into a square wave
we use integrator to get tri wave from a square wave and sine from tri, why don't try differentiator circuit like simple CR circuit similar to high pass filter, but take care of the cutting
frequency of the high pass filter, this is a simple method you can use.
Advanced Member level 5
Join Date
Jan 2008
Bochum, Germany
7951 / 7951
convert sine wave to sawtooth wave
It hasn't been said at all, if the sine wave has fixed or variable frequency (which range) and how amplitude variations shall be reflected by the sawtooth.
Advanced Member level 5
Join Date
Jan 2008
Toronto area of Canada
725 / 725
how to convert square wave to triangular wave
It hasn't been said at all, if the sine wave has fixed or variable frequency (which range) and how amplitude variations shall be reflected by the sawtooth.
The sinewave must first be converted into a square-wave by amplifying it a lot and letting the amplifier clip the signal into square-waves. A small sine-wave will also clip the same as a
large sine-wave so the amplitude of the sine-wave doesn't matter (as long as it is high enough for the amplifier to clip.
Then the square-wave is made asymmetrical then integrated into triangle-waves that all have the same amplitude.
1 members found this post helpful.
Advanced Member level 5
Join Date
Jan 2008
Bochum, Germany
7951 / 7951
converting sawtooth to sine wave
I think, that's only one of many options - depending on th erequirements.
Full Member level 2
Join Date
Mar 2008
16 / 16
10. 27th July 2008, 18:30 #10
Advanced Member level 5
Join Date
May 2008
1425 / 1425
clipped reflected sine wave
may be integator....
May be ?? Long time ago we have learned in school what we get if we integrate a sine function, did we not ?
11. 1st August 2008, 07:09 #11
Full Member level 3
Join Date
Dec 2005
6 / 6
convert sawtooth wave to sine wave
The input sinewave may have varying frequencies. Would there be a problem if we convert it into square wave and take integral? I tend to believe that the resulting triangular wave would have
varying amplitude
12. 1st August 2008, 08:35 #12
Advanced Member level 1
Join Date
May 2008
51 / 51
how i convert triangular wave to square wave
circuit will be in this configuration,use operational amplifire in open loop gain,it'll convert sine wave into rectanguler wave then use integrator to convert ur rectanguler wave to
trianguler wave
13. 12th August 2008, 12:59 #13
Advanced Member level 5
Join Date
May 2008
1425 / 1425
Re: How to convert a sine wave into a triangular wave
The input sinewave may have varying frequencies. Would there be a problem if we convert it into square wave and take integral? I tend to believe that the resulting triangular wave would have
varying amplitude
Yes, you are right. The result of integration depends in its amplitude on the duration of the rectangular signal - and, thus, on the frequency of the limited sinus.
Any suggestions to cope with the problem of varying frequencies ?
14. 12th August 2008, 14:12 #14
Newbie level 6
Join Date
Aug 2008
1 / 1
Re: How to convert a sine wave into a triangular wave
This circuit is connected to the 120vac power line and transfers 60Hz clock pulses to a logic circuit. The optoisolator used provides 5000 volts of isolation between the power line and the
logic side of the circuit.
15. 2nd January 2011, 09:40 #15
Newbie level 6
Join Date
Dec 2010
0 / 0
Re: How to convert a sine wave into a triangular wave
if you are trying to get a triangle wave, i suggest you to use a circuit like this
it can generate triangle and pulse, with any frequency and amplitude by changing resistors and capacitor.
---------- Post added at 10:33 ---------- Previous post was at 10:31 ----------
i cant send a link...
---------- Post added at 10:40 ---------- Previous post was at 10:33 ----------
|
{"url":"http://www.edaboard.com/thread130782.html","timestamp":"2014-04-20T23:29:09Z","content_type":null,"content_length":"104827","record_id":"<urn:uuid:35e02ded-7681-4247-ba84-380f406120c9>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
225 dollars in pounds
You asked:
225 dollars in pounds
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
|
{"url":"http://www.evi.com/q/225_dollars_in_pounds","timestamp":"2014-04-24T22:04:54Z","content_type":null,"content_length":"57388","record_id":"<urn:uuid:affc2c6b-4948-4fb2-9814-d5544c8b19ab>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transformation groups and Lie algebras
Higher Education Press&, World Scientific, 2013, 196 pp, ISBN 978-981-4460-84-2
Lectures on theory of group properties of differential
Edited by N.H. Ibragimov, translated by E. D. Avdonina and N.H. Ibragimov,
Higher Education Press&World Scientific, 2013, 156 pp., ISBN: 978-981-4460-81-21
Applications of Lie group analysis in Geophysical fluid dynamics
by Nail Ibragimov, Ranis Ibragimov
Series on Complexity, Nonlinearity and Chaos - Vol. 2, World Scientific Publishing Co Pte Ltd and Higher Education Press, 2011, ISBN:978-981-4340-46-5
This book introduces an effective method for seeking local and nonlocal conservation laws and exact solutions for nonlinear two-dimensional equations which provide a basic model in describing
internal waves in the ocean. The model consists of non-hydrostatic equations of motion which uses the Boussinesq approximation and linear stratification. The Lie group analysis is used for
constructing non-trivial conservation laws and group invariant solutions. It is shown that nonlinear equations in question have remarkable property to be self-adjoint. This property is crucial for
constructing physically relevant conservation laws for nonlinear internal waves in the ocean. The comparison with the previous analytic studies and experimental observations confirrms that the
anisotropic nature of the wave motion allows to associate some of the obtained invariant solutions with uni-directional internal wave beams propagating through the medium. Analytic examples of the
latitude-dependent invariant solutions associated with internal gravity wave beams are considered. The behavior of the invariant solutions near the critical latitude is investigated.
Symmetries of Integro-Differential Equations
With Applications in Mechanics and Plasma Physics
Series: Lecture Notes in Physics, Vol. 806
by Grigoriev, Y.N., Ibragimov, N.H., Kovalev, V.F., Meleshko, S.V.
1st Edition., 2010, XIV, 316 p., Softcover
ISBN: 978-90-481-3796-1
A Practical Course in Differential Equations and Mathematical Modelling: Classical and new methods, nonlinear mathematical models, symmetry and invariance principles
by Nail H.Ibragimov
Order form (pdf, 2MB)
Chinese version, ISBN 978-7-04-026547-7
by N.H. Ibragimov and V.F. Kovalev
One can order this book from Amazon or Springer WEB:
|
{"url":"http://www.bth.se/ihn/alga.nsf/pages/70403fef15dd329ac12576190052d45b?OpenDocument","timestamp":"2014-04-19T01:59:48Z","content_type":null,"content_length":"22456","record_id":"<urn:uuid:7e68240f-329c-4534-8a65-3911040bc4a3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: analytical ODE solution (a bit off-topic)
Replies: 2 Last Post: Nov 30, 2012 6:20 AM
Messages: [ Previous | Next ]
Re: analytical ODE solution (a bit off-topic)
Posted: Nov 30, 2012 6:20 AM
Leslaw Bieniasz wrote:
> Hi,
> I need to solve analytically a certain second order ODE,
> which takes the general form
> y''(x) - p(z,x)*y(x) = 0.
> where p(z,x) is a polynomial and y(x) is to be determined. The polynomial
> depends on a complex parameter z.
Depending on your p(z,x) expression and the boundary conditions of your
problem, it might be possible to find an analytical solution to your
problem. Do you have that information?
> The problem is that I need a possibly approximate but analytical solution,
> perhaps in the form of some truncated series (but not the series in powers
> of x), not just numerical values of the solution.
> Are there any techniques available?
Yes, a good number of them.
One possible approach is to employ any form of the Galerkin method.
Basically, it essentially involves picking a function to approximate the
exact solution and then adjusting it to the solution to your problem.
Depending on the function you picked to approximate the solution, the
Galerkin method can actually return it.
The finite element method can be seen as a very specific implementation of
the Galerkin method. So, you can also explore that path.
Hope this helps,
Rui Maciel
Date Subject Author
11/28/12 analytical ODE solution (a bit off-topic) Leslaw Bieniasz
11/29/12 Re: analytical ODE solution (a bit off-topic) Peter Spellucci
11/30/12 Re: analytical ODE solution (a bit off-topic) Rui Maciel
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=7930264","timestamp":"2014-04-21T03:00:43Z","content_type":null,"content_length":"19621","record_id":"<urn:uuid:57f31b99-d0cb-44c5-bc7b-3c8fbb1a70a5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hellenic Mathematical Society
The Hellenic Mathematical Society
The Hellenic Mathematical Society was founded in 1918 [1]:-
Its main goal was to encourage the study of, and research in, the science of mathematics and its many applications, as well as the evolution of mathematical education.
The first President of the Society was Nikolaos Hatzidakis who served in this role from 1918 to 1925. He had studied under Darboux, Emile Picard, and Poincaré in Paris; under Hilbert, Klein and
Schönflies in Göttingen; and under Fuchs, Knoblauch and Schwarz in Berlin. He undertook research into differential geometry and when he became a founder member of the Hellenic Mathematical Society he
had been a professor at the University of Athens since 1901. The second President of the Society was Georgios Remoundos, who was also a founder member of the Society. He undertook research in
function theory and had been appointed as professor of Higher Mathematical Analysis at the University of Athens in 1912 and he had also been appointed to the Technical University of Athens in 1916.
The Society began publication of the Bulletin of the Greek Mathematical Society in 1919 and Remoundos was a member of the editorial board. The Bulletin not only aimed at promoting mathematics and
mathematical education but it also aimed at providing a means of communication between members of the Society.
Remoundos was President from 1925 to 1927 and then Konstantinos Maltezos was President in 1927. Maltezos, who worked on mechanics and theoretical physics, had been dismissed by the University of
Athens in 1920 by the royalist Minister of Education after the exiled King Constantine I had been restored to his throne. However Maltezos had been reinstated to his professorship in October 1922
after King Constantine had abdicated and a military junta seized power in Greece, so he held his professorship at the time that he was President of the Society. Nilos Sakellariou, professor of
analytical geometry at the University of Athens, then served as President of the Society from 1929. In 1931, under Sakellariou's Presidency, the Society organised the first Panhellenic Mathematical
Competition. However, there followed an extremely difficult period for the Society.
General Ioannis Metaxas, on the political far right, encouraged unrest by workers and when a general strike was threatened he persuaded King George II, on 4 August 1936, to suspend parliament which
did not reconvene over the following ten years. Metaxas, now a dictator, tried to bring back the values of ancient Greece and imposed his wishes on all aspects of Greek life. In particular he
interfered in the running of the Society and N Kritikos resigned from the executive committee of the Hellenic Mathematical Society in 1936 due to political interference in the affairs of the Society.
Alphabetical list of Societies Chronological list of Societies
Welcome page Biographies Index
History Topics Index Famous curves index
Chronology Time lines
Mathematicians of the day Anniversaries for the year
Search Form Birthplace Maps
JOC/EFR August 2004 School of Mathematics and Statistics
University of St Andrews, Scotland
The URL of this page is:
|
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Societies/Hellenic.html","timestamp":"2014-04-18T00:16:25Z","content_type":null,"content_length":"7217","record_id":"<urn:uuid:2ad9fec3-0bce-40e1-8903-f1a6d5783bfa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A variant of the Stone-Weierstrass theorem?
up vote 6 down vote favorite
I would like to ask specialists in C*-algebras if the following variant of the Stone-Weierstrass theorem is true.
Suppose $A$ is a C*-algebra and $C$ is its center. Since $C$ is a commutative C*-algebra, there exists a compact space $T$ such that $C$ is isomorphic to the algebra $C(T)$ of continuous functions on
$T$. Does this mean that there exists a C*-algebra $B$ such that
1) $A$ is isomorphic to a closed subalgebra in the algebra $C(T,B)$ of continuous mappings $f:T\to B$ with the pointwise algebraic operations (and the topology of uniform convergence on $T$), and
2) this isomorphism turns $C$ into the algebra of scalar mappings, i.e. the mappings of the form $f(x)=\lambda(x)\cdot 1_B$, where $1_B$ is the identity in $B$, and $\lambda(x)$ $\in$ $\mathbb{C}$
for all $x\in T$.
EDIT 21-03-12: All the C*-algebras here are supposed to be unital, excuse me for not mentioning this from the very beginning!
add comment
3 Answers
active oldest votes
EDIT 20-03-12 It seems from the recent answers of Douglas Somerset and Ulrich Pennig that what I claim below is false, and so this answer should be "dis-accepted".
I think (although I admit I don't know the details) that the answer to both questions is yes, by a theorem of Dauns and Hoffman. According to the version quoted in the article
T. Becker, A few remarks on the Dauns-Hofmann theorems for $C^\ast$-algebras. Archiv der Mathematik 43 (1984) no. 3, 265-269 [Math Review]
up vote 1 down
vote accepted $A$ can be realized as the algebra of continuous sections of some kind of continuous $C^\ast$-algebra-bundle with base space $T$.
However, since I am not a specialist, I may have misread or misunderstood.
I am also under the impression that one should think of a C* algebra as some kind of continuous bundle over Spec of its center. It is not clear to me whether this implies the
statement in the original question, however? – Theo Johnson-Freyd Dec 21 '11 at 23:43
Theo: firstly, the devil is in the details (Dauns and Hofmann were openly trying to get bundle representations for C*-algebras and Banach algebras, but my understanding is that more
than "soft" or "categorical" methods are needed en route). – Yemon Choi Dec 22 '11 at 0:04
... Secondly, it's not clear to me either, if this bundle realization is enough to answer the original questions. But I thought I would mention the crucial result and give a link in
case it helps. (Certainly both 1 and 2 are true for certain kinds of C*-algebra, but right now I'm not sure if this needs certain conditions on the topology of the primitive ideal
space) – Yemon Choi Dec 22 '11 at 0:06
Excuse me, I actually meant one question: 1) and 2) are just two conditions of one statement (I have now put "and" between them). But I would be satisfied if B is replaced by a
C*-algebra bundle. – Sergei Akbarov Dec 22 '11 at 0:23
add comment
The statement of the Dauns-Hofmann theorem is actually too weak to get bundles of $C^*$-algebras. For completeness, let me state it:
Let $A$ be a $C^*$-algebra. For each $P \in Prim(A)$, let $\pi_P \colon A \to A/P$ be the quotient map. Then there is an isomorphism $\phi$ of $C_b(Prim(A))$ onto the center $ZM(A)$ of the
multiplier algebra $M(A)$ such that for all $f \in C_b(Prim(A))$ and $a \in A$ $$ \pi_P(\phi(f)a) = f(P)\pi_P(a) $$ for every $P \in Prim(A)$. Usually one writes $f \cdot a = \phi(f)a$.
up vote 9 So, the best you could hope for is some kind of sheaf of $C^*$-algebras over the primitive ideal space. Getting local triviality in general is kind of hopeless, I think. A reading
down vote recommendation for these matters would be the book "Morita Equivalence and Continuous-Trace $C^*$-algebras" by Raeburn and Williams.
For continuous trace $C^*$-algebras things are quite different. These are all Morita equivalent (or stably isomorphic) to sections in a bundle of compact operators!
Ulrich, the Yemon Choi reference led me to a book by M.Dupre and R.Gillette "Banach bundles, Banach modules and automorphisms of C*-algebras". As far as I understand, from their Theorem
2.4 (at p.40, see also the discussion at pp.38-39) it follows that if C is a closed subalgebra in the center of a (unital) C*-algebra A, then A is isomorphic to the algebra of sections of
a C*-algebra bundle over the spectrum T of C. I didn't understand, whether your words contradict to what they write... Just in case: everywhere I speak about unital algebras, of course. –
Sergei Akbarov Mar 21 '12 at 9:21
I think there is no contradiction. It is just that the term "bundle" for me always implies some sort of local triviality, whereas the term "bundle" used by Dupre does not! In fact, Banach
bundles as defined by Fell and others avoid local triviality. I would call this a field of C*-algebras instead of a bundle. – Ulrich Pennig Mar 21 '12 at 15:34
btw. for the definition of a Banach bundle see http://books.google.de/books?id=nCCNodGa6UkC&lpg=PR6&dq=C*%20algebra%20bundle%20Fell&hl=de&pg=PA1#v=onepage&q=C*%20algebra%20bundle%20Fell
&f=false – Ulrich Pennig Mar 21 '12 at 15:35
Ulrich, if there's no contradiction, I won't change anything here, because what Dupre and Gillette write seems to be sufficient for me (and it was Yemon Choi, who gave me the first
reference). Thank you for the comments anyway! – Sergei Akbarov Mar 21 '12 at 17:18
Sure. No problem! – Ulrich Pennig Mar 21 '12 at 18:25
add comment
Three thoughts on this. The first is that $A$ probably has to be assumed unital to guarantee that $T$ is compact.
Assuming then that $A$ is unital, each point $t\in T$ corresponds to a maximal ideal $M_t$ of $C$ which generates a closed two-sided ideal $G_t$ in $A$. The ideals $\{G_t: t\in T\}$ are
called the Glimm ideals (after James Glimm who used them in the case when $A$ is a von Neumann algebra). For each element $a\in A$, the mapping $t\mapsto \Vert a+G_t\Vert$ is upper
semi-continuous but not in general continuous. Indeed these norm funcions are all continuous if and only if the 'complete regularisation' map from the primitive ideal space of $A$ with the
hull kernel topology to $T$ is an open map (R-Y Lee, 1970s). The second thought, therefore, is that a necessary condition for the answer to the question to be yes is that the complete
up vote regularisation map should be open.
4 down
vote Even when the complete regularisation map is open, I expect that one can find examples where the answer to the question is no, although no such example comes to mind just now [in fact, see
E. Kirchberg, S.Wassermann, Operations on continuous bundles of C*-algebras, Math. Ann. 303 (1995), 677-697]. The third thought, however, is that Blanchard showed that if $A$ is separable
and exact and the complete regularisation map is open then such a $B$ can be found (E. Blanchard, Subtriviality of continuous fields of nuclear C*-algebras, J. Reine Angew. Math. 489 (1997),
Of course, I am speaking about unital algebras (excuse me for not clarifying this from the very beginning). Douglas, how is this connected with what M.Dupre and R.Gillette write in "Banach
bundles, Banach modules and automorphisms of C*-algebras"? – Sergei Akbarov Mar 21 '12 at 9:24
One can do a lot in this area without all the baggage of Banach bundles. The simplest approach is that of the $C_0(X)$-algebra where one simply assumes a continuous map $\phi$ from the
primitive ideal space of $A$ to a locally compact Hausdorff space $X$. Then $A$ 'fibres' as an algebra of upper semi-continuous cross-sections over $X$ with the cross-sections taking
values in the fibre algebras $A_x$, where $A_x=A/J_x$ (where $J_x$ is the kernel of the primitive ideals of $A$ which $\phi$ maps onto the point $x\in X$). The cross-sections are
continuous iff $\phi$ is open. – Douglas Somerset Mar 21 '12 at 21:49
Douglas, I hope it will be proper if I contact you (and Ulrich) by e-mail, because the more I read about this, the more questions occur to me. :) – Sergei Akbarov Mar 28 '12 at 19:24
add comment
Not the answer you're looking for? Browse other questions tagged c-star-algebras or ask your own question.
|
{"url":"http://mathoverflow.net/questions/84054/a-variant-of-the-stone-weierstrass-theorem/84057","timestamp":"2014-04-18T10:59:33Z","content_type":null,"content_length":"78562","record_id":"<urn:uuid:5d28613f-df16-4dbd-9185-ac5c53cd95d6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Remainder theorem
September 18th 2007, 07:00 AM #1
Sep 2007
Remainder theorem
Hi all,
I have the fraction:
$(x^4 + 3x^2 - 4)/(x^2 + 1)$
We have been asked to express in mixed number form using i) long division and ii) using the remainder theorem.
Have done the long division bit and got a remainder of -6.
I am not sure how to use the remainder theorem with the x^2 + 1 as if you use f(-1) you get the remainder 0. How do i go about using the remainder theorem with a squared x?
Hi all,
I have the fraction:
$(x^4 + 3x^2 - 4)/(x^2 + 1)$
We have been asked to express in mixed number form using i) long division and ii) using the remainder theorem.
Have done the long division bit and got a remainder of -6.
I am not sure how to use the remainder theorem with the x^2 + 1 as if you use f(-1) you get the remainder 0. How do i go about using the remainder theorem with a squared x?
Technically you can't use $x^2 + 1$ with synthetic division. However if you set $y = x^2$ then your problem becomes to divide $y^2 + 3y - 4$ by $y + 1$ which can be done by synthetic division.
Remainder theorem
Thanks! I remember doing things like that before. But usually you have to work with y when you've finished.
Do i have to do anything to y when i'm done? Otherwise they might has well just asked me work out the quadratic divided by x + 1.
Does that make any sense?
Many thanks
September 18th 2007, 07:12 AM #2
September 18th 2007, 01:57 PM #3
Sep 2007
|
{"url":"http://mathhelpforum.com/algebra/19123-remainder-theorem.html","timestamp":"2014-04-20T11:53:32Z","content_type":null,"content_length":"36535","record_id":"<urn:uuid:df07fc87-6d35-4776-ba14-59452c5938b0>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
May 31st 2008, 10:49 AM #1
May 2008
an original
Please, help me fill the gaps (I've got about ten exercises like these but I cannot answer only to these two).
[1] An original is a real-valued function of one real variable that satisfy the following conditions:
1) for all t<0, f(t) = 0,
2) in any finite interval (a,b) the function has a finite number of discontinuities of the ............ type
3) ...........
[2] A real-valued function of two real variables (z=f(x,y)) is called differentiable at the point (x_0, y_0) if its increment delta z = f(x,y)-f(x_0,y_0) may be written in the form
delta z = ..........., where epsilon_1 and epsilon_2 are function of variables ........ such that ........
Thank you very much in advance!
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/calculus/40184-original.html","timestamp":"2014-04-20T17:55:43Z","content_type":null,"content_length":"28576","record_id":"<urn:uuid:851e2678-1358-4fb6-aae1-9944a1ef7827>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Finding the integrating factor (ODEs)
First, you need to realize that, just as an equation may not be exact, it may also not have an integrating factor that is a pure function of just x or y. Say you start with:
M(x,y)dx + N(x,y)dy = 0
Of course your first test would be to see if N[x] = M[y]. If so, it is exact and you proceed with exact methods. But what if it isn't exact. Maybe, if we are lucky, we can find an integrating factor
of the form μ(x) or μ(y) that will make it exact. So let's try it:
μ(x)M(x,y)dx + μ(x)N(x,y)dy = 0
Let's try the exactness test on this. We need
(μ(x)N(x,y))[x] = (μ(x)M(x,y))[y]
μ'(x)N(x,y) + μ(x)N[x](x,y) = μ(x)M[y](x,y)
[tex]\mu'(x) = \frac {\mu(x)(M_y(x,y) - N_x(x,y))}{N(x,y)}[/tex]
We can only hope to find μ(x) as a pure function of x if there are no y's on the right hand side. So before we even try to find such a μ(x), we should test the given equation to see if
[tex]\frac {M_y(x,y) - N_x(x,y)}{N(x,y)}[/tex]
is a pure function of x.
Trying the same thing for a pure function of y we get:
μ(y)M(x,y)dx + μ(y)N(x,y)dy = 0
Testing for exactness:
μ(y)N[x](x,y) = μ(y)M[y](x,y) + μ'(y)M(x,y)
[tex]\mu'(y) = \frac {\mu(y)(N_x(x,y) - M_y(x,y))}{M(x,y)}[/tex]
For this to work there needs to be no x on the right side, so:
[tex]\frac {M_y(x,y) - N_x(x,y)}{M(x,y)}[/tex]
must be a pure function of y.
In your example, neither of the two tests work, which explains why you didn't get a pure function of y to integrate and your method failed.
|
{"url":"http://www.physicsforums.com/showpost.php?p=2374343&postcount=2","timestamp":"2014-04-21T07:15:34Z","content_type":null,"content_length":"9150","record_id":"<urn:uuid:4f0c282c-bafa-4b19-b0de-85ad92e838bb>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
La Puente Algebra 2 Tutor
Find a La Puente Algebra 2 Tutor
I have taught math for over 5 years! Many of my students are from grade 2 to 12, some are from college. I also have a math tutor certificate for college students from Pasadena City College. I
graduated in 2012 from UCLA.
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry
...I was awarded the scholar athlete award, as well as varsity MVP. I am a graduate from Pratt Institute with a B.F.A in communication design. I have gained skills in foundation drawing from
college and previous art classes.
16 Subjects: including algebra 2, English, algebra 1, drawing
...I have received a 5 in AP calculus BC, and continued applying calculus throughout a college engineering curriculum. For example, engineering problems often involve trigonometric functions like
sine and cosine, for which pre-calculus laid the groundwork. Consequently, I have a solid background in pre-calculus.
12 Subjects: including algebra 2, calculus, physics, geometry
...Today, for obvious safety reasons, fewer chemical experiments involving reactions are taught in the lab class. Much more of the work is theoretical. I make a point of bringing chemistry alive
by describing what the chemicals look like and how the reactions proceed.
12 Subjects: including algebra 2, chemistry, algebra 1, trigonometry
...I specialize in developing solid fundamentals in mathematics that allows my student to take and excel in advanced and AP classes. I emphasize understanding over rote memorization and challenge
my students to really learn the material. I also focus on sound, proven SAT, ACT, and AP test taking strategies and material for standardized testing.
42 Subjects: including algebra 2, reading, chemistry, writing
|
{"url":"http://www.purplemath.com/La_Puente_Algebra_2_tutors.php","timestamp":"2014-04-16T07:31:04Z","content_type":null,"content_length":"23937","record_id":"<urn:uuid:ddbb296b-eb7d-4a85-8708-9c971562906b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analytical Methods for the Performance Evaluation of Binary Linear Block Codes
Abstract (Summary)
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this
model, it is possible to provide bit error performance approximations to help in the evaluation of the performance of binary linear block codes. As well, the model can be used in the design of
communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise channel model, soft-output Log
Likelihood Ratio (LLR) values are modeled to be Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is
used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially for low
signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system.
Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value.
This translates into fewer computations and therefore, less time is needed to obtain performance results. Other work was completed that uses a discrete Fourier Transform technique to calculate the
weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords which have a certain number of ones in the codewords. For codeword lengths of small to
moderate size, this method is faster and provides an easily implementable and methodical approach over other methods. This technique has the added advantage over other techniques of being able to
methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
Bibliographical Information:
School:University of Waterloo
School Location:Canada - Ontario
Source Type:Master's Thesis
Keywords:electrical computer engineering soft output decoding error performance binary linear block codes log likelihood ratio weight distribution
Date of Publication:01/01/2000
|
{"url":"http://www.openthesis.org/documents/Analytical-Methods-Performance-Evaluation-Binary-200014.html","timestamp":"2014-04-16T04:50:39Z","content_type":null,"content_length":"10103","record_id":"<urn:uuid:dc99a1e6-95ad-4b8b-8614-a7fdc442734e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laurent expansion of principal root
May 2nd 2009, 04:53 AM #1
Mar 2009
Laurent expansion of principal root
How do I find the Laurent expansion of a function containing the principal branch cut of the nth root?
Last edited by bernardbb; May 2nd 2009 at 06:52 AM.
Remembering the binomial series expansion...
$(1+x)^{\alpha}= \sum_{k=0}^{\infty} \frac{\alpha\cdot (\alpha-1)\dots (\alpha-k+1)}{k!}\cdot x^{k}$
... for $x=-\frac{1}{z^{4}}$ and $\alpha=\frac{1}{4}$ we have...
$(1-\frac{1}{z^{4}})^{\frac{1}{4}}= 1 - \frac{1}{4}\cdot z^{-4} - \frac{3}{4\cdot 4\cdot 2!}\cdot z^{-8} - \frac{3\cdot 7}{4\cdot 4\cdot 4\cdot 3!}\cdot z^{-12} + \dots$
... and then...
$f(z)= -i\cdot z \cdot (1-\frac{1}{z^{4}})^{\frac{1}{4}}= -i\cdot (z - \frac{1}{4}\cdot z^{-3} - \frac{3}{4\cdot 4\cdot 2!}\cdot z^{-7} - \frac{3\cdot 7}{4\cdot 4\cdot 4\cdot 3!}\cdot z^{-11} + \
Kind regards
Last edited by chisigma; May 2nd 2009 at 09:00 AM. Reason: added factorials
Thanks, that seems helpful.
I think you forgot the (k!)?
Last edited by bernardbb; May 3rd 2009 at 09:11 AM.
Edit: Nevermind, figured it all out.
Last edited by bernardbb; May 4th 2009 at 02:27 PM.
May 2nd 2009, 06:04 AM #2
May 2nd 2009, 06:46 AM #3
Mar 2009
May 3rd 2009, 08:53 AM #4
Mar 2009
|
{"url":"http://mathhelpforum.com/differential-geometry/86901-laurent-expansion-principal-root.html","timestamp":"2014-04-20T09:16:12Z","content_type":null,"content_length":"39873","record_id":"<urn:uuid:5139e3db-055e-4384-a757-1d8dee9e90b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Unstable Uranium
Useful nucleus
In reference to the binding energy per nucleon curve in the wiki page I linked above, all the nuclides on the right to the isotope Fe-56 are unstable and try to decrease their mass number (A) by
undergoing either alpha decay or spontanious fission.
No, not all of them can gain energy by spontaneous fission. For example, an isotope with mass number 84 could gain energy by somehow turning into isotope with mass number 56 - but this does not mean
it would gain energy by turning into isotope with mass number 42.
Useful nucleus
On the other hand nuclides such as U-238 are called "fissionable but non-fissile" because you can induce fission in these nuclides but using fast neutrons (> 0.5 MeV).
What is important to note is that U-235 is one of the very few primordial radioisotopes - isotopes which are exactly unstable enough to decay at an important rate, yet so stable that they have lasted
though the age of Earth.
Between 100 milliard years and 100 million years, the isotopes are:
K-40 (1,25)
Rb-87 (49)
Sm-146 (0,103)
Lu-176 (37,8)
Re-187 (41,2)
Th-232 (14)
U-235 (0,71)
U-238 (4,47)
And of these 8, 5 (all the low mass ones) decay into stable nuclei by single decay - one alpha for Sm-146, one beta for all others, incl. K-40 which also can undergo electron capture and positron
The 3 long lived isotopes on the isle of stability are unique in being long-lived isotopes that decay into short-lived isotopes undergoing a radioactive decay chain.
|
{"url":"http://www.physicsforums.com/showthread.php?s=a1f0d34414434b0ba7f6951927286e94&p=4392795","timestamp":"2014-04-18T03:11:04Z","content_type":null,"content_length":"63948","record_id":"<urn:uuid:704d5111-1ebb-4c29-abb4-9b7cba04e2d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Circumference vs. Perimeter
Date: 05/09/2003 at 12:01:23
From: Todd
Subject: Circumference vs. Perimeter
Technically speaking, can the term "perimeter" apply to a circle in a
mathematical context?
While I would never actually refer to the circumference of a circle
as its "perimeter," a discussion recently arose about whether the
term "perimeter" can even APPLY to a circle. I consulted a standard
dictionary, which said it was the length of a closed curve enclosing
an area, so it would seem to apply under that definition. A
mathematics dictionary said that perimeter was the sum of the lengths
of the edges of a closed figure, but since a circle has no edges,
this doesn't seem to apply to circles. However, the same mathematical
dictionary defined circumference as "the perimeter of a circle."
Date: 05/09/2003 at 12:34:15
From: Doctor Peterson
Subject: Re: Circumference vs. Perimeter
Hi, Todd.
"Circumference" is just a special term for the perimeter when applied
to circles.
We have to have a general term that applies to all shapes, or it would
get very confusing; there is no reason not to allow the word
"perimeter" to be applied to circles, as part of a discussion that
includes both circles and other shapes. For example, in the
"isoperimetric problem" we are looking for the figure with the
greatest area among all shapes with a given perimeter, and the answer
turns out to be the circle. We'd be in bad shape if we weren't allowed
to call that a perimeter, so that the circle was disqualified!
A dictionary that applies "perimeter" only to "edges" will we hope
define "edge" in a way that is not limited to straight lines. I'm
sure that's what was intended, since they just said "closed figure,"
not "polygon." In that sense, a circle has one edge.
See Eric Weisstein's World of Mathematics:
which defines perimeter as the total arc length of a boundary, and
specifically states that the perimeter of a circle is called the
circumference. Your math dictionary is probably aimed at a less
sophisticated audience, at least in that particular definition, which
does not impress me much.
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
Date: 05/09/2003 at 12:43:53
From: Todd
Subject: Thank you (Circumference vs. Perimeter)
Thanks for the prompt response. I appreciate the reasoning behind your
answer - makes sense to me! Kudos.
|
{"url":"http://mathforum.org/library/drmath/view/62914.html","timestamp":"2014-04-17T18:27:54Z","content_type":null,"content_length":"7977","record_id":"<urn:uuid:5a3a417a-c19f-4a2c-9b80-f211cbb27c9e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
pound-force to poundals
Category - start in: main menu • force menu • Pounds-force
Amount: 1 pound-force (lbf) of force
Equals: 32.17 poundals (pdl) in force
TOGGLE : from poundals into pounds-force in the other way around.
CONVERT : between other force measuring units - complete list.
Conversion calculator for webmasters.
Force measuring units
Convert force measuring units between pound-force (lbf) and poundals (pdl) but in the other reverse direction from poundals into pounds-force.
conversion result for force:
From Symbol Equals Result To Symbol
1 pound-force lbf = 32.17 poundals pdl
Converter type: force units
This online force from lbf into pdl converter is a handy tool not just for certified or experienced professionals.
First unit: pound-force (lbf) is used for measuring force.
Second: poundal (pdl) is unit of force.
32.17 pdl is converted to 1 of what?
The poundals unit number 32.17 pdl converts to 1 lbf, one pound-force. It is the EQUAL force value of 1 pound-force but in the poundals force unit alternative.
How to convert 2 pounds-force (lbf) into poundals (pdl)? Is there a calculation formula?
First divide the two units variables. Then multiply the result by 2 - for example:
32.1740485542 * 2 (or divide it by / 0.5)
1 lbf = ? pdl
1 lbf = 32.17 pdl
Other applications for this force calculator ...
With the above mentioned two-units calculating service it provides, this force converter proved to be useful also as a teaching tool:
1. in practicing pounds-force and poundals ( lbf vs. pdl ) values exchange.
2. for conversion factors training exercises between unit pairs.
3. work with force's values and properties.
International unit symbols for these two force measurements are:
Abbreviation or prefix ( abbr. short brevis ), unit symbol, for pound-force is:
Abbreviation or prefix ( abbr. ) brevis - short unit symbol for poundal is:
One pound-force of force converted to poundal equals to 32.17 pdl
How many poundals of force are in 1 pound-force? The answer is: The change of 1 lbf ( pound-force ) unit of force measure equals = to 32.17 pdl ( poundal ) as the equivalent measure for the same
force type.
In principle with any measuring task, switched on professional people always ensure, and their success depends on, they get the most precise conversion results everywhere and every-time. Not only
whenever possible, it's always so. Often having only a good idea ( or more ideas ) might not be perfect nor good enough solution. If there is an exact known measure in lbf - pounds-force for force
amount, the rule is that the pound-force number gets converted into pdl - poundals or any other force unit absolutely exactly.
Find in traditional oven
|
{"url":"http://www.traditionaloven.com/tutorials/force/convert-pound-force-lbf-unit-to-poundal-pdl-force-unit.html","timestamp":"2014-04-18T16:18:14Z","content_type":null,"content_length":"27473","record_id":"<urn:uuid:e583ee05-85a4-4e5d-8c6d-aeffb1c010b6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This page presents how to work with Microsoft Excel 2007 features and its redesigned interface. You may also visit our Excel Question page.
Calculate the average of a group of numbers
Excel 2007
Let's say you want to find the average number of days to complete a milestone in a project or the average temperature on a particular day over a 10-year time span. There are several ways to calculate
the average of a group of numbers.
The AVERAGE function measures central tendency, which is the location of the center of a group of numbers in a statistical distribution. The three most common measures of central tendency are:
• Average which is the arithmetic mean, and is calculated by adding a group of numbers and then dividing by the count of those numbers. For example, the average of 2, 3, 3, 5, 7, and 10 is 30
divided by 6, which is 5.
• Median which is the middle number of a group of numbers; that is, half the numbers have values that are greater than the median, and half the numbers have values that are less than the median.
For example, the median of 2, 3, 3, 5, 7, and 10 is 4.
• Mode which is the most frequently occurring number in a group of numbers. For example, the mode of 2, 3, 3, 5, 7, and 10 is 3.
For a symmetrical distribution of a group of numbers, these three measures of central tendency are all the same. For a skewed distribution of a group of numbers, they can be different.
What do you want to do?
Calculate the average of numbers in a contiguous row or column
Calculate the average of numbers not in a contiguous row or column
Calculate a weighted average
Calculate the average of numbers, ignoring zero (0) values
Calculate the average of numbers in a contiguous row or column
1. Click a cell below or to the right of the numbers for which you want to find the average.
2. On the Home tab, in the Editing group, click the arrow next to AutoSum , click Average, and then press ENTER.
Top of Page
Calculate the average of numbers not in a contiguous row or column
To do this task, use the AVERAGE function.
The example may be easier to understand if you copy it to a blank worksheet.
How to copy an example
1. Create a blank workbook or worksheet.
2. Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas
Formula Description (Result)
=AVERAGE(A2:A7) Averages all of numbers in list above (9.5)
=AVERAGE(A2:A4,A7) Averages the top three and the last number in the list (7.5)
=AVERAGEIF(A2:A7, "<>0") Averages the numbers in the list except those that contain zero, such as cell A6 (11.4)
Function details
Top of Page
Calculate a weighted average
To do this task, use the SUMPRODUCT and SUM functions.
The example may be easier to understand if you copy it to a blank worksheet.
How to copy an example
1. Create a blank workbook or worksheet.
2. Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas
This example calculates the average price paid for a unit across three purchases, where each purchase is for a different number of units at a different price per unit.
A B
Price per unit Number of units
Formula Description (Result)
=SUMPRODUCT(A2:A4,B2:B4)/SUM(B2:B4) Divides the total cost of all three orders by the total number of units ordered (24.66)
Function details
Top of Page
Calculate the average of numbers, ignoring zero (0) values
To do this task, use the AVERAGE and IF functions.
The example may be easier to understand if you copy it to a blank worksheet.
How to copy an example
1. Create a blank workbook or worksheet.
2. Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas
Formula Description (Result)
=AVERAGEIF(A2:A7, "<>0") Averages the numbers in the list except those that contain zero, such as cell A6 (11.4)
Function details
Top of Page
Excel > Formula and name basics > Examples of formulas > Statistical
Calculate the median of a group of numbers
Excel 2007
Let's say you want to find out what the midpoint is in a distribution of student grades or a quality control data sample. To calculate the median of a group of numbers, use the MEDIAN function.
The MEDIAN function measures central tendency, which is the location of the center of a group of numbers in a statistical distribution. The three most common measures of central tendency are:
• Average which is the arithmetic mean, and is calculated by adding a group of numbers and then dividing by the count of those numbers. For example, the average of 2, 3, 3, 5, 7, and 10 is 30
divided by 6, which is 5.
• Median which is the middle number of a group of numbers; that is, half the numbers have values that are greater than the median, and half the numbers have values that are less than the median.
For example, the median of 2, 3, 3, 5, 7, and 10 is 4.
• Mode which is the most frequently occurring number in a group of numbers. For example, the mode of 2, 3, 3, 5, 7, and 10 is 3.
For a symmetrical distribution of a group of numbers, these three measures of central tendency are all the same. For a skewed distribution of a group of numbers, they can be different.
The example may be easier to understand if you copy it to a blank worksheet.
How to copy an example
1. Create a blank workbook or worksheet.
2. Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas
Formula Description (Result)
=MEDIAN(A2:A7) Median of numbers in list above (8)
Function details
Excel > Formula and name basics > Examples of formulas > Statistical
Calculate the mode of a group of numbers
Excel 2007
Let's say you want to find out the most common number of bird species sighted in a sample of bird counts at a critical wetland over a 30-year time period, or you want to find out the most frequently
occurring number of phone calls at a telephone support center during off-peak hours. To calculate the mode of a group of numbers, use the MODE function.
The MODE function measures central tendency, which is the location of the center of a group of numbers in a statistical distribution. The three most common measures of central tendency are:
• Average which is the arithmetic mean, and is calculated by adding a group of numbers and then dividing by the count of those numbers. For example, the average of 2, 3, 3, 5, 7, and 10 is 30
divided by 6, which is 5.
• Median which is the middle number of a group of numbers; that is, half the numbers have values that are greater than the median, and half the numbers have values that are less than the median.
For example, the median of 2, 3, 3, 5, 7, and 10 is 4.
• Mode which is the most frequently occurring number in a group of numbers. For example, the mode of 2, 3, 3, 5, 7, and 10 is 3.
For a symmetrical distribution of a group of numbers, these three measures of central tendency are all the same. For a skewed distribution of a group of numbers, they can be different.
The example may be easier to understand if you copy it to a blank worksheet.
How to copy an example
1. Create a blank workbook or worksheet.
2. Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas
Formula Description (Result)
=MODE(A2:A7) Mode of numbers in list above (7)
|
{"url":"http://www.likeoffice.com/28057/Excel-2007-Statistical-formula","timestamp":"2014-04-17T07:05:09Z","content_type":null,"content_length":"82799","record_id":"<urn:uuid:ed4e8f23-1913-41d4-92ad-60e7aec60727>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 24
- EUROCRYPT 2006, volume 4004 of LNCS , 2006
"... Abstract. We show that, in the ideal-cipher model, triple encryption (the cascade of three independently-keyed blockciphers) is more secure than single or double encryption, thereby resolving a
long-standing open problem. Our result demonstrates that for DES parameters (56-bit keys and 64-bit plaint ..."
Cited by 101 (27 self)
Add to MetaCart
Abstract. We show that, in the ideal-cipher model, triple encryption (the cascade of three independently-keyed blockciphers) is more secure than single or double encryption, thereby resolving a
long-standing open problem. Our result demonstrates that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary’s maximal advantage against triple encryption is small until it asks about
2 78 queries. Our proof uses codebased game-playing in an integral way, and is facilitated by a framework for such proofs that we provide. 1
- In TCC’05, LNCS 3378 , 2005
"... Abstract. Encryption of data using multiple, independent encryption schemes (“multiple encryption”) has been suggested in a variety of contexts, and can be used, for example, to protect against
partial key exposure or cryptanalysis, or to enforce threshold access to data. Most prior work on this sub ..."
Cited by 35 (2 self)
Add to MetaCart
Abstract. Encryption of data using multiple, independent encryption schemes (“multiple encryption”) has been suggested in a variety of contexts, and can be used, for example, to protect against
partial key exposure or cryptanalysis, or to enforce threshold access to data. Most prior work on this subject has focused on the security of multiple encryption against chosen-plaintext attacks, and
has shown constructions secure in this sense based on the chosen-plaintext security of the component schemes. Subsequent work has sometimes assumed that these solutions are also secure against
chosen-ciphertext attacks when component schemes with stronger security properties are used. Unfortunately, this intuition is false for all existing multiple encryption schemes. Here, in addition to
formalizing the problem of chosen-ciphertext security for multiple encryption, we give simple, efficient, and generic constructions of multiple encryption schemes secure against chosen-ciphertext
attacks (based on any component schemes secure against such attacks) in the standard model. We also give a more efficient construction from any (hierarchical) identity-based encryption scheme secure
against selectiveidentity chosen plaintext attacks. Finally, we discuss a wide range of applications for our proposed schemes. 1
, 1993
"... The security of cascade ciphers, in which by definition the keys of the component ciphers are independent, is considered. It is shown by a counterexample that the intuitive result, formally
stated and proved in the literature, that a cascade is at least as strong as the strongest component cipher, ..."
Cited by 25 (2 self)
Add to MetaCart
The security of cascade ciphers, in which by definition the keys of the component ciphers are independent, is considered. It is shown by a counterexample that the intuitive result, formally stated
and proved in the literature, that a cascade is at least as strong as the strongest component cipher, requires the uninterestingly restrictive assumption that the enemy cannot exploit information
about the plaintext statistics. It is proved, for very general notions of breaking a cipher and of problem difficulty, that a cascade is at least as difficult to break as the first component cipher.
A consequence of this result is that, if the ciphers commute, then a cascade is at least as difficult to break as the most-difficult-tobreak component cipher, i.e., the intuition that a cryptographic
chain is at least as strong as its strongest link is then provably correct. It is noted that additive stream ciphers do commute, and this fact is used to suggest a strategy for designing secure
practical ci...
- of LNCS , 1996
"... Abstract. Meet-in-the-middle attacks, where problems and the secrets being sought are decomposed into two pieces, have many applications in cryptanalysis. A well-known such attack on double-DES
requires 2 56 time and memory; a naive key search would take 2112 time. However, when the attacker is limi ..."
Cited by 17 (0 self)
Add to MetaCart
Abstract. Meet-in-the-middle attacks, where problems and the secrets being sought are decomposed into two pieces, have many applications in cryptanalysis. A well-known such attack on double-DES
requires 2 56 time and memory; a naive key search would take 2112 time. However, when the attacker is limited to a practical amount of memory, the time savings are much less dramatic. For n the
cardinality of the space that each half of the secret is chosen from (n=2 56 for double-DES), and w the number of words of memory available for an attack, a technique based on parallel collision
search is described which requires O ( ) times fewer operations and O ( ) times fewer memory accesses than previous approaches to meet-in-the-middle attacks. For the example of double-DES, an
attacker with 16 Gbytes of memory could recover a pair of DES keys in a knownplaintext attack with 570 times fewer encryptions and 3.7×106 n ⁄ w n ⁄ w times fewer memory accesses compared to previous
techniques using the same amount of memory. Key words. Meet-in-the-middle attack, parallel collision search, cryptanalysis, DES, low Hamming weight exponents.
- CRYPTO , 2006
"... Abstract. Let A and B denote cryptographic primitives. A (k, m)robust A-to-B combiner is a construction, which takes m implementations of primitive A as input, and yields an implementation of
primitive B, which is guaranteed to be secure as long as at least k input implementations are secure. The ma ..."
Cited by 13 (2 self)
Add to MetaCart
Abstract. Let A and B denote cryptographic primitives. A (k, m)robust A-to-B combiner is a construction, which takes m implementations of primitive A as input, and yields an implementation of
primitive B, which is guaranteed to be secure as long as at least k input implementations are secure. The main motivation for such constructions is the tolerance against wrong assumptions on which
the security of implementations is based. For example, a (1,2)-robust A-to-B combiner yields a secure implementation of B even if an assumption underlying one of the input implementations of A turns
out to be wrong. In this work we study robust combiners for private information retrieval (PIR), oblivious transfer (OT), and bit commitment (BC). We propose a (1,2)-robust PIR-to-PIR combiner, and
describe various optimizations based on properties of existing PIR protocols. The existence of simple PIR-to-PIR combiners is somewhat surprising, since OT, a very closely related primitive, seems
difficult to combine (Harnik et al., Eurocrypt’05). Furthermore, we present (1,2)-robust PIR-to-OT and PIR-to-BC combiners. To the best of our knowledge these are the first constructions of A-to-B
combiners with A � = B. Such combiners, in addition to being interesting in their own right, offer insights into relationships between cryptographic primitives. In particular, our PIR-to-OT combiner
together with the impossibility result for OT-combiners of Harnik et al. rule out certain types of reductions of PIR to OT. Finally, we suggest a more fine-grained approach to construction of robust
combiners, which may lead to more efficient and practical combiners in many scenarios.
, 1998
"... We investigate, in the Shannon model, the security of constructions corresponding to double and (two-key) triple DES. That is, we consider Fk1 (Fk2(\Delta)) and Fk1(F \Gamma 1 k2 (Fk1 (\Delta)))
with the component functions being ideal ciphers. This models the resistance of these constructions to " ..."
Cited by 12 (1 self)
Add to MetaCart
We investigate, in the Shannon model, the security of constructions corresponding to double and (two-key) triple DES. That is, we consider Fk1 (Fk2(\Delta)) and Fk1(F \Gamma 1 k2 (Fk1 (\Delta))) with
the component functions being ideal ciphers. This models the resistance of these constructions to "generic" attacks like meet in the middle attacks. We obtain
- In Proc. Eurocrypt ’07 , 2007
"... 1 Introduction A function H: f0; 1g ..."
"... Abstract. A(k; n)-robust combiner for a primitive F takes as input n candidate implementations of F and constructs an implementation of F, which is secure assuming that at least k of the input
candidates are secure. Such constructions provide robustness against insecure implementations and wrong ass ..."
Cited by 5 (3 self)
Add to MetaCart
Abstract. A(k; n)-robust combiner for a primitive F takes as input n candidate implementations of F and constructs an implementation of F, which is secure assuming that at least k of the input
candidates are secure. Such constructions provide robustness against insecure implementations and wrong assumptions underlying the candidate schemes. In a recent work Harnik et al. (Eurocrypt 2005)
have proposed a (2; 3)-robust combiner for oblivious transfer (OT), and have shown that (1; 2)-robust OT-combiners of a certain type are impossible. In this paper we propose new, generalized notions
of combiners for two-party primitives, which capture the fact that in many two-party protocols the security of one of the parties is unconditional, or is based on an assumption independent of the
assumption underlying the security of the other party. This fine-grained approach results in OT-combiners strictly stronger than the constructions known before. In particular, we propose an
OT-combiner which guarantees secure OT even when only one candidate is secure for both parties, and every remaining candidate is flawed for one of the parties. Furthermore, we present an efficient
uniform OT-combiner, i.e., a single combiner which is secure simultaneously for a wide range of candidates ’ failures. Finally, our definition allows for a very simple impossibility result, which
shows that the proposed OT-combiners achieve optimal robustness.
"... Abstract. The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption
improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a ..."
Cited by 4 (0 self)
Add to MetaCart
Abstract. The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption
improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a recent paper, Bellare and Rogaway showed that in the ideal cipher model, triple encryption is
significantly more secure than single and double encryption, stating the security of longer cascades as an open question. In this paper, we propose a new lemma on the indistinguishability of systems
extending Maurer’s theory of random systems. In addition to being of independent interest, it allows us to compactly rephrase Bellare and Rogaway’s proof strategy in this framework, thus making the
argument more abstract and hence easy to follow. As a result, this allows us to address the security of longer cascades as well as some errors in their paper. Our result implies that for blockciphers
with smaller key space than message space (e.g. DES), longer cascades improve the security of the encryption up to a certain limit. This partially answers the open question mentioned above.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1785116","timestamp":"2014-04-23T13:56:05Z","content_type":null,"content_length":"37782","record_id":"<urn:uuid:f4c90181-0622-415c-b8cd-461334d647bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Microsoft Interview Question Program Managers
• puzzle - 2 trains traveling in opposite direction, bird starts from one and reaches other and flies back and forth like this till train collide. Find the time taken to collide, total distance
traveled by the bird, and no. of times bird makes a U turn.
Comment hidden because of low score. Click to expand.
D: distance between 2 trains
v1,v2: speeds of two trains
v3: speed of bird(v3>v1 &&v3>v2)
1)Find the time taken to collide
2)total distance traveled by the bird
3)# of times bird makes a U turn
Maybe infinite (not sure)
Comment hidden because of low score. Click to expand.
You are right and number of turns is infinite.
Comment hidden because of low score. Click to expand.
Comment hidden because of low score. Click to expand.
trains traveling in opposite directions never collide
Comment hidden because of low score. Click to expand.
You fool, why won't trains travelling in opposite directions not collide according to the given question.
N plz apply common sense before writing "can never collide".
Comment hidden because of low score. Click to expand.
First of all trains will never travel on the same track in opposite directions.
Comment hidden because of low score. Click to expand.
Obviously you don't know much about how trains travel in China.
Comment hidden because of low score. Click to expand.
trains traveling in opposite directions never collide
Comment hidden because of low score. Click to expand.
Train travelling in opposite direction may collide, train travelling in same direction may collide.. only one train can also collide with itself.. ask Indian Railway..
what about the poor bird.. its not even afraid of train whistle..it should die before the last three U turns. :)
Comment hidden because of low score. Click to expand.
Comment hidden because of low score. Click to expand.
zglgjg is right.
No. of u turns is indefinite,the problem can be identified when there is too short a distance between the trains ie when they are about to collide.
Comment hidden because of low score. Click to expand.
what is the answer for the second part i.e number of trips made by bird? there will be a infinite series right ..but what is it?
Comment hidden because of low score. Click to expand.
Everybody know how to find the distance traveled by the bird. The attraction of the question is the second part... The answer will surprise you..... It depends on the size of the bird (If its a point
then infinite, if it is not a point then why are we even solving such a puzzle :). BTW for the trains never colliding read: en.wikipedia.org/wiki/Zeno%27s_paradoxes
Comment hidden because of low score. Click to expand.
distance traveled: D/vb, where D is the initial distance and vb is the speed of the bird;
Time U turned: infinite. Suppose finite, at the last U turn, suppose the trains are d apart, in d/(vt + vb) time, the bird run into the other train, however the trains are still d(1 - 2vt/(vt + vb))>
0 apart, which means there is another U turn. contradiction
Comment hidden because of low score. Click to expand.
if the time of collision of two trains is finite, how can the number of U turns be infinite??
Comment hidden because of low score. Click to expand.
Comment hidden because of low score. Click to expand.
Trains travelling in opposite direction on the same track towards each other.
Now is that clear U IDIOTS.
|
{"url":"http://www.careercup.com/question?id=2155662","timestamp":"2014-04-18T00:40:45Z","content_type":null,"content_length":"59226","record_id":"<urn:uuid:8041296b-e919-4c79-9987-0bbbee4a3a71>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• [X]
• [X]
• Bestsellers - This Week
• Foreign Language Study
• Pets
• Bestsellers - Last 6 months
• Games
• Philosophy
• Archaeology
• Gardening
• Photography
• Architecture
• Graphic Books
• Poetry
• Art
• Health & Fitness
• Political Science
• Biography & Autobiography
• History
• Psychology & Psychiatry
• Body Mind & Spirit
• House & Home
• Reference
• Business & Economics
• Humor
• Religion
• Children's & Young Adult Fiction
• Juvenile Nonfiction
• Romance
• Computers
• Language Arts & Disciplines
• Science
• Crafts & Hobbies
• Law
• Science Fiction
• Current Events
• Literary Collections
• Self-Help
• Drama
• Literary Criticism
• Sex
• Education
• Literary Fiction
• Social Science
• The Environment
• Mathematics
• Sports & Recreation
• Family & Relationships
• Media
• Study Aids
• Fantasy
• Medical
• Technology
• Fiction
• Music
• Transportation
• Folklore & Mythology
• Nature
• Travel
• Food and Wine
• Performing Arts
• True Crime
• Foreign Language Books
Mathematics; Problems, exercises, etc
Most popular at the top
• World Scientific Publishing Company 2007; US$ 427.00
This volume contains talks given at a joint meeting of three communities working in the fields of difference equations, special functions and applications (ISDE, OPSFA, and SIDE). The articles
reflect the diversity of the topics in the meeting but have difference equations as common thread. Articles cover topics in difference equations, discrete dynamical... more...
• Taylor and Francis 2010; US$ 119.95
Even though the theories of operational calculus and integral transforms are centuries old, these topics are constantly developing, due to their use in the fields of mathematics, physics, and
electrical and radio engineering. Operational Calculus and Related Topics highlights the classical methods and applications as well as the recent advances in... more...
• Taylor and Francis 2012; US$ 159.95
Unparalleled in scope compared to the literature currently available, the Handbook of Integral Equations, Second Edition contains over 2,500 integral equations with solutions as well as
analytical and numerical methods for solving linear and nonlinear equations. It explores Volterra, Fredholm, Wiener?Hopf, Hammerstein, Uryson, and other equations... more...
• Springer 2007; US$ 119.00
This book gives background material on the theory of Laplace transforms, together with a fairly comprehensive list of methods that are available at the current time. Computer programs are
included for those methods that perform consistently well on a wide range of Laplace transforms. Operational methods have been used for over a century to solve problems... more...
• Springer 2008; US$ 49.95
During the last several years, frames have become increasingly popular; they have appeared in a large number of applications, and several concrete constructions of frames of various types have
been presented. Most of these constructions were based on quite direct methods rather than the classical sufficient conditions for obtaining a frame. Consequently,... more...
• Springer 2008; US$ 129.00
This book is devoted to the basic mathematical properties of solutions to boundary integral equations and presents a systematic approach to the variational methods for the boundary integral
equations arising in elasticity, fluid mechanics, and acoustic scattering theory. It may also serve as the mathematical foundation of the boundary element methods.... more...
• Oxford University Press 2008; US$ 45.00
A comprehensive resource containing an entertaining selection of problems in mathematics. Including numerous exercises, illustrations, hints, and solutions, it is aimed at students of mathematics
looking for an introduction to problem solving in mathematics, as well as Mathematical Olympiad competitors and other recreational mathematicians. - ;The... more...
• WIT Press 2007; US$ 252.00
For many years, the subject of functional equations has held a prominent place in the attention of mathematicians. In more recent years this attention has been directed to a particular kind of
functional equation, an integral equation, wherein the unknown function occurs under the integral sign. The study of this kind of equation is sometimes referred... more...
• MobileReference.com 2010; US$ 3.99
Students and research workers in mathematics, physics, engineering and other sciences will find this compilation invaluable. All the information included is practical, rarely used results are
excluded. Great care has been taken to present all results concisely and clearly. Excellent to keep as a handy reference! If you don't have a lot of time... more...
• Elsevier Science 2000; US$ 260.00
This book provides a comprehensive introduction to modern global variational theory on fibred spaces. It is based on differentiation and integration theory of differential forms on smooth
manifolds, and on the concepts of global analysis and geometry such as jet prolongations of manifolds, mappings, and Lie groups. The book will be invaluable for... more...
|
{"url":"http://www.ebooks.com/subjects/general-mathematics-problems-exercises-etc-ebooks/7511/?page=4","timestamp":"2014-04-21T05:05:45Z","content_type":null,"content_length":"80624","record_id":"<urn:uuid:14d9b356-6ee9-43f8-88a1-6b32121e976a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parallel Axis Theorem
What is the Parallel Axis Theorem?
The parallel axis theorem can be used to determine the moment of inertia of a rigid body around any axis. Oftentimes the moment of intertia of a rigid body is not taken around the centroid, rather
some arbitrary point. A good example of this is an I-Beam. You may need to use the parallel axis theorem to determine the Moment of Inertia of an I-Beam around it's centroid because the top and
bottom flange will not be acting through the centroid of the shape (see the Example Below).
How can I calculate a moment of inertia using the Parallel Axis Theorem
Figure 1: Variables used for using the Parallel Axis Theorem
┃ Once the centroid of the shape is found, the parallel axis theorem can be used around any axis by taking: ┃
┃ ┃
┃ I_{A} =\sum ( I_{x} + Ad^2) ┃
• I[A] = The moment of inertia taken about the A-A axis (in^4)
• I[x] = The moment of inertia taken through the centroid, the x-x axis (in^4)
• A = The area of the rigid body (in^2)
• d = the perpendicular distance between the A-A axis and the x-x axis (in)
Note: Looking closely at the Parallel Axis Theorem you can see that the moment of inertia of a shape will increase rapidly the further the Centroid of the area is from the axis being checked.
Using the Parallel Axis Theorem in an Example
Figure 2: Parallel Axis Theorem Example
For the above example, take h = 12", h[1] = 10", b = 9 " and t[w] = 1". Solve for I[x] using the Parallel Axis Theorem.
1) Solve for I[x] of the center section (feel free to use the shortcuts here?:
{ I_x}_{web} = \frac{bh^3}{12} = \frac{1" * 10"^3}{12} = 83 in^4
{I_x}_{flange} = \frac{bh^3}{12} = \frac{9" * 1"^3}{12} = 0.75 in^4
2) Solve the increase in the moment of inertia for A[1] using the parallel axis theorem:
A_1 = 1in * 9" = 9 in^2
d = \frac{12in - 10in}{2*2} + \frac{10in}{2} = 0.5in + 5in = 5.5 in
Ad^2 = (9 in^2)(5.5in)^2 = 272 in^4
3) Using the Parallel Axis Theorem:
I_{x} =\sum ( I_{x} + Ad^2) = (83 in^4 + 2*0.75 i^4) + 2*(272 in^4) = 628.5 in^4
As you can see a majority of the Section Modulus (87%) comes from the parallel axis theorem and not from the moment of inertia calculation.
1. Paul A. Tipler, "Physics for Scientists and Engineers (4th Edition)", 1990
|
{"url":"http://www.wikiengineer.com/Structural/ParallelAxisTheorem","timestamp":"2014-04-21T05:54:49Z","content_type":null,"content_length":"16639","record_id":"<urn:uuid:ac14a21a-c334-4f0c-8506-edd536478e57>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Normal form (for matrices)
From Encyclopedia of Mathematics
The normal form of a matrix
The Smith normal form.
where minor of
The invariant factors
where the
For a practical method of finding the Smith normal form see, for example, [1].
The main result on the Smith normal form was obtained for [7]) and [8]). With practically no changes, the theory of Smith normal forms goes over to the case when [3], [6]). The Smith normal form has
important applications; for example, the structure theory of finitely-generated modules over principal ideal rings is based on it (see [3], [6]); in particular, this holds for the theory of
finitely-generated Abelian groups and theory of the Jordan normal form (see below).
The natural normal form.
Let [1], [4].
The matrix
denotes the so-called companion matrix
The matrix [1], [2]).
Now let Block-diagonal operator) whose blocks are the companion matrices of all elementary divisors
The matrix [1], [2]), or its Frobenius, rational or quasi-natural normal form (see [4]). In contrast to the first, the second natural form changes, generally speaking, on transition from
The Jordan normal form.
where [1]) or the Jordan block of order
The matrix Jordan matrix and is called the Jordan normal form of [4] for information about the so-called generalized Jordan normal form, reduction to which is possible over any field
Apart from the various normal forms for arbitrary matrices, there are also special normal forms of special matrices. Classical examples are the normal forms of symmetric and skew-symmetric matrices.
Let [1]) if there is a non-singular matrix
which can be regarded as the normal form of
where [6], [10] and Quadratic form for information about the normal forms of symmetric matrices for a number of other fields, and also about Hermitian analogues of this theory.
A common feature in the theories of normal forms considered above (and also in others) is the fact that the admissible transformations over the relevant set of matrices are determined by the action
of a certain group, so that the classes of matrices that can be carried into each other by means of these transformations are the orbits (cf. Orbit) of this group, and the appropriate normal form is
the result of selecting in each orbit a certain canonical representative. Thus, the classes of equivalent matrices are the orbits of the group
[1] M. Markus, "A survey of matrix theory and matrix inequalities" , Allyn & Bacon (1964)
[2] P. Lancaster, "Theory of matrices" , Acad. Press (1969) MR0245579 Zbl 0186.05301
[3] S. Lang, "Algebra" , Addison-Wesley (1974) MR0783636 Zbl 0712.00001
[4] A.I. Mal'tsev, "Foundations of linear algebra" , Freeman (1963) (Translated from Russian) Zbl 0396.15001
[5] N. Bourbaki, "Elements of mathematics. Algebra: Modules. Rings. Forms" , 2 , Addison-Wesley (1975) pp. Chapt.4;5;6 (Translated from French) MR0643362 Zbl 1139.12001
[6] N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra" , 1 , Addison-Wesley (1974) pp. Chapt.1;2 (Translated from French) MR0354207
[7] H.J.S. Smith, "On systems of linear indeterminate equations and congruences" , Collected Math. Papers , 1 , Chelsea, reprint (1979) pp. 367–409
[8] G. Frobenius, "Theorie der linearen Formen mit ganzen Coeffizienten" J. Reine Angew. Math. , 86 (1879) pp. 146–208
[9] F.R. [F.R. Gantmakher] Gantmacher, "The theory of matrices" , 1 , Chelsea, reprint (1977) (Translated from Russian) MR1657129 MR0107649 MR0107648 Zbl 0927.15002 Zbl 0927.15001 Zbl 0085.01001
[10] J.-P. Serre, "A course in arithmetic" , Springer (1973) (Translated from French) MR0344216 Zbl 0256.12001
The Smith canonical form and a canonical form related to the first natural normal form are of substantial importance in linear control and system theory [a1], [a2]. Here one studies systems of
Canonical forms are often used in (numerical) computations. This must be done with caution, because they may not depend continuously on the parameters [a3]. For example, the Jordan canonical form is
not continuous; an example of this is:
The matter of continuous canonical forms has much to do with moduli problems (cf. Moduli theory). Related is the matter of canonical forms for families of objects, e.g. canonical forms for
holomorphic families of matrices under similarity [a4]. For a survey of moduli-type questions in linear control theory cf. [a5].
In the case of a controllable pair
[a1] W.A. Wolovich, "Linear multivariable systems" , Springer (1974) MR0359881 Zbl 0291.93002
[a2] J. Klamka, "Controllability of dynamical systems" , Kluwer (1990) MR2461640 MR1325771 MR1134783 MR0707724 MR0507539 Zbl 0911.93015 Zbl 0876.93016 Zbl 0930.93008 Zbl 1043.93509 Zbl 0853.93020 Zbl
0852.93007 Zbl 0818.93002 Zbl 0797.93004 Zbl 0814.93012 Zbl 0762.93006 Zbl 0732.93008 Zbl 0671.93040 Zbl 0667.93007 Zbl 0666.93009 Zbl 0509.93012 Zbl 0393.93041
[a3] S.H. Golub, J.H. Wilkinson, "Ill conditioned eigensystems and the computation of the Jordan canonical form" SIAM Rev. , 18 (1976) pp. 578–619 MR0413456 Zbl 0341.65027
[a4] V.I. Arnol'd, "On matrices depending on parameters" Russ. Math. Surv. , 26 : 2 (1971) pp. 29–43 Uspekhi Mat. Nauk , 26 : 2 (1971) pp. 101–114 Zbl 0259.15011
[a5] M. Hazewinkel, "(Fine) moduli spaces for linear systems: what are they and what are they good for" C.I. Byrnes (ed.) C.F. Martin (ed.) , Geometrical Methods for the Theory of Linear Systems ,
Reidel (1980) pp. 125–193 MR0608993 Zbl 0481.93023
[a6] H.W. Turnball, A.C. Aitken, "An introduction to the theory of canonical matrices" , Blackie & Son (1932)
A normal form of an operator is a representation, up to an isomorphism, of a self-adjoint operator
To begin with, suppose that
here spectral resolution of
Then the operators
Suppose, next, that
The operator Normal operator).
[1] A.I. Plesner, "Spectral theory of linear operators" , F. Ungar (1965) (Translated from Russian) MR0194900 Zbl 0188.44402 Zbl 0185.21002
[2] N.I. Akhiezer, I.M. Glazman, "Theory of linear operators in Hilbert spaces" , 1–2 , Pitman (1981) (Translated from Russian) MR0615737 MR0615736
V.I. Sobolev
The normal form of an operator Fock space constructed over a certain space measure space, in the form of a sum
where annihilation operators creation operators
In each term of expression (1) all factors
For any bounded operator
The representation (1) can be rewritten in a form containing the annihilation and creation operators directly:
In the case of an arbitrary (separable) Hilbert space
[1] F.A. Berezin, "The method of second quantization" , Acad. Press (1966) (Translated from Russian) (Revised (augmented) second edition: Kluwer, 1989) MR0208930 Zbl 0151.44001
R.A. Minlos
[a1] N.N. [N.N. Bogolyubov] Bogolubov, A.A. Logunov, I.T. Todorov, "Introduction to axiomatic quantum field theory" , Benjamin (1975) (Translated from Russian) MR0452276 MR0452277
[a2] G. Källen, "Quantum electrodynamics" , Springer (1972) MR0153346 MR0056465 MR0051156 MR0039581 Zbl 0116.45005 Zbl 0074.44202 Zbl 0050.43001 Zbl 0046.21402 Zbl 0041.57104
[a3] J. Glimm, A. Jaffe, "Quantum physics, a functional integral point of view" , Springer (1981) Zbl 0461.46051
The normal form of a recursive function is a method for specifying an recursive function
where primitive recursive function, least-number operator to
The normal form theorem is one of the most important results in the theory of recursive functions.
A.A. Markov [2] obtained a characterization of those functions
[1] A.I. Mal'tsev, "Algorithms and recursive functions" , Wolters-Noordhoff (1970) (Translated from Russian) Zbl 0198.02501
[2] A.A. Markov, "On the representation of recursive functions" Izv. Akad. Nauk SSSR Ser. Mat. , 13 : 5 (1949) pp. 417–424 (In Russian) MR0031444
V.E. Plisko
[a1] S.C. Kleene, "Introduction to metamathematics" , North-Holland (1951) pp. 288 MR1234051 MR1570642 MR0051790 Zbl 0875.03002 Zbl 0604.03002 Zbl 0109.00509 Zbl 0047.00703
A normal form of a system of differential equations
near an invariant manifold
that is obtained from (1) by an invertible formal change of coordinates
in which the Taylor–Fourier series resonance terms. In a particular case, normal forms occurred first in the dissertation of H. Poincaré (see [1]). By means of a normal form (2) some systems (1) can
be integrated, and many can be investigated for stability and can be integrated approximately; for systems (1) a search has been made for periodic solutions and families of conditionally periodic
solutions, and their bifurcation has been studied.
Normal forms in a neighbourhood of a fixed point.
Suppose that
contain only resonance terms for which
Every system (1) with
Generally speaking, the normalizing transformation (3) and the normal form (2) (that is, the coefficients [3]). If the original system contains small parameters, one can include them among the
coordinates [3]).
where the
(see , [3]). The solution of this system reduces to a solution of the subsystem of the first [3]).
The following problem has been examined (see ): Under what conditions on the normal form (2) does the normalizing transformation of an analytic system (1) converge (be analytic)? Let
for those
In case
If for an analytic system (1)
Thus, the problem raised above is solved for all normal forms except those for which small denominators, but degeneracy of the normal form.
But even in cases of divergence of the normalizing transformation (3) with respect to (2), one can study properties of the solutions of the system (1). For example, a real system (1) has a smooth
transformation to the normal form (2) even when it is not analytic. The majority of results on smooth normalization have been obtained under the condition that all
where the [4]–). If in the normalizing transformation (3) all terms of degree higher than
where the Lyapunov function (or Chetaev function)
where [7]; for other examples see the survey [8]).
From the normal form (2) one can find invariant analytic sets of the system (1). In what follows it is assumed for simplicity of exposition that
where [3]). On the sets [9]).
If a system (1) does not lead to a normal form (2) but to a system whose right-hand sides contain certain non-resonance terms, then the resulting simplification is less substantial, but can improve
the quality of the transformation. Thus, the reduction to a "semi-normal form" is analytic under a weakened condition [9]).
Suppose that a system (1) is defined and analytic in a neighbourhood of an invariant manifold
such that
If among the coordinates Krylov–Bogolyubov method of averaging (see [10]), and the averaged system is a normal form. More generally, perturbation theory can be regarded as a special case of the
theory of normal forms, when one of the coordinates is a small parameter (see [11]).
Theorems on the convergence of a normalizing change, on the existence of analytic invariant sets, etc., carry over to the systems (9) and (10). Here the best studied case is when [3], , [12]–[14].
[1] H. Poincaré, "Thèse, 1928" , Oeuvres , 1 , Gauthier-Villars (1951) pp. IL-CXXXII
[2a] A.D. [A.D. Bryuno] Bruno, "Analytical form of differential equations" Trans. Moscow Math. Soc. , 25 (1971) pp. 131–288 Trudy Moskov. Mat. Obshch. , 25 (1971) pp. 119–262
[2b] A.D. [A.D. Bryuno] Bruno, "Analytical form of differential equations" Trans. Moscow Math. Soc. (1972) pp. 199–239 Trudy Moskov. Mat. Obshch. , 26 (1972) pp. 199–239
[3] A.D. Bryuno, "Local methods in nonlinear differential equations" , 1 , Springer (1989) (Translated from Russian) MR0993771
[4] P. Hartman, "Ordinary differential equations" , Birkhäuser (1982) MR0658490 Zbl 0476.34002
[5a] V.S. Samovol, "Linearization of a system of differential equations in the neighbourhood of a singular point" Soviet Math. Dokl. , 13 (1972) pp. 1255–1259 Dokl. Akad. Nauk SSSR , 206 (1972) pp.
545–548 Zbl 0667.34041
[5b] V.S. Samovol, "Equivalence of systems of differential equations in the neighbourhood of a singular point" Trans. Moscow Math. Soc. (2) , 44 (1982) pp. 217–237 Trudy Moskov. Mat. Obshch. , 44
(1982) pp. 213–234
[6a] G.R. Belitskii, "Equivalence and normal forms of germs of smooth mappings" Russian Math. Surveys , 33 : 1 (1978) pp. 95–155 Uspekhi Mat. Nauk. , 33 : 1 (1978) MR0490708
[6b] G.R. Belitskii, "Normal forms relative to a filtering action of a group" Trans. Moscow Math. Soc. , 40 (1979) pp. 3–46 Trudy Moskov. Mat. Obshch. , 40 (1979) pp. 3–46
[6c] G.R. Belitskii, "Smooth equivalence of germs of vector fields with a single zero eigenvalue or a pair of purely imaginary eigenvalues" Funct. Anal. Appl. , 20 : 4 (1986) pp. 253–259 Funkts.
Anal. i Prilozen. , 20 : 4 (1986) pp. 1–8
[7] A.M. [A.M. Lyapunov] Liapunoff, "Problème général de la stabilité du mouvement" , Princeton Univ. Press (1947) (Translated from Russian)
[8] A.L. Kunitsyn, A.P. Markev, "Stability in resonant cases" Itogi Nauk. i Tekhn. Ser. Obsh. Mekh. , 4 (1979) pp. 58–139 (In Russian)
[9] J.N. Bibikov, "Local theory of nonlinear analytic ordinary differential equations" , Springer (1979) MR0547669 Zbl 0404.34005
[10] N.N. Bogolyubov, Yu.A. Mitropol'skii, "Asymptotic methods in the theory of non-linear oscillations" , Hindushtan Publ. Comp. , Delhi (1961) (Translated from Russian) MR0100379 Zbl 0151.12201
[11] A.D. [A.D. Bryuno] Bruno, "Normal form in perturbation theory" , Proc. VIII Internat. Conf. Nonlinear Oscillations, Prague, 1978 , 1 , Academia (1979) pp. 177–182 (In Russian)
[12] V.V. Kostin, Le Dinh Thuy, "Some tests of the convergence of a normalizing transformation" Dapovidi Akad. Nauk URSR Ser. A : 11 (1975) pp. 982–985 (In Russian) MR407356
[13] E.J. Zehnder, "C.L. Siegel's linearization theorem in infinite dimensions" Manuscr. Math. , 23 (1978) pp. 363–371 MR0501144 Zbl 0374.47037
[14] N.V. Nikolenko, "The method of Poincaré normal forms in problems of integrability of equations of evolution type" Russian Math. Surveys , 41 : 5 (1986) pp. 63–114 Uspekhi Mat. Nauk , 41 : 5
(1986) pp. 109–152 MR0878327 Zbl 0632.35026
A.D. Bryuno
For more on various linearization theorems for ordinary differential equations and canonical form theorems for ordinary differential equations, as well as generalizations to the case of non-linear
representations of nilpotent Lie algebras, cf. also Poincaré–Dulac theorem and Analytic theory of differential equations, and [a1].
[a1] V.I. Arnol'd, "Geometrical methods in the theory of ordinary differential equations" , Springer (1983) (Translated from Russian)
How to Cite This Entry:
Normal form (for matrices). Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Normal_form_(for_matrices)&oldid=24779
This article was adapted from an original article by V.L. Popov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
|
{"url":"http://www.encyclopediaofmath.org/index.php/Normal_form_(for_matrices)","timestamp":"2014-04-19T17:02:35Z","content_type":null,"content_length":"114027","record_id":"<urn:uuid:836062de-180c-4eaa-b90b-30816b766e26>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadratic
October 12th 2012, 05:22 AM
Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadratic
I am reading Dummit and Foote Chapter 8, Section 8.1 - Euclidean DOmains
I am working through Example 2 on page 273 (see attachment)
Example 2 demonstrates that the quadratic integer ring $\mathbb{Z} [ \surd -5 ]$ is not a Euclidean domain.
I can follow the argument down to the point where D&F state (see attachment)
"Multiplying both sdes by $2 - \surd -5$ would then imply that $2 - \surd -5$ is a multiple of 3 in R, a contradiction"
================================================== ====================================
I cannot show this point - the mechanics of this fail me... can someone please help
October 12th 2012, 07:55 AM
Re: Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadr
$1 \in I = (3, 2 + \sqrt{-5})$ implies $\exists \gamma, \delta \in \mathbb{Z}[\sqrt{-5}]$ such that $3 \gamma + (2 + \sqrt{-5}) \delta = 1$.
Multiply both sides by $2 - \sqrt{-5}$, getting:
$3(2 - \sqrt{-5})\gamma + (2 + \sqrt{-5}) (2 - \sqrt{-5}) \delta = 2 - \sqrt{-5}$, so
$3(2 - \sqrt{-5})\gamma + 9 \delta = 2 - \sqrt{-5}$, so
$3 \{(2 - \sqrt{-5})\gamma + 3 \delta \} = 2 - \sqrt{-5}$.
Thus $3 \alpha = 2 - \sqrt{-5}$, where $\alpha = (2 - \sqrt{-5})\gamma + 3 \delta \in \mathbb{Z}[\sqrt{-5}]$.
Thus $3$ divides $(2 - \sqrt{-5})$ in $\mathbb{Z}[\sqrt{-5}]$.
But that's impossible because $3x = 2$ has no solution in $\mathbb{Z}$.
(In detail, if $\alpha = x + y\sqrt{-5}, x, y\in \mathbb{Z}$, then $3\alpha = 3x + 3y\sqrt{-5}$, so
$3 \alpha = 2 - \sqrt{-5}$ implies $3x + 3y\sqrt{-5} = 2 - \sqrt{-5}$ implies $3x = 2, 3y =-1$.)
Therefore $1 otin I$.
There's a tiny mistake in the proof of Proposition1. It should read "by the Well Ordering of $\mathbb{N}$".
October 12th 2012, 11:19 AM
Re: Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadr
Euclidean domains are a very restrictive class of rings. They are "almost" fields. In particular, they are: unique factorization domains, greatest common divisor domains, and principal ideal
So if a given ring lacks one of these properties, we can conclude it is NOT a Euclidean domain. In this case, D&F choose to show that $\mathbb{Z}[\sqrt{-5}]$ is not a PID.
One can also show R is not a UFD:
9 = 3*3
9 = (2+√(-5))(2-√(-5))
are two distinct factorizations of 9 (that is 3 is not a factor of either 2+√(-5) or 2-√(-5)), which is equivalent to showing that 3 is not prime in R (3 divides a product ab, but divides neither
a nor b). However, 3 IS irreducible in R, and in a Euclidean domain "irreducibles = primes" (the same norm N can be used to show that 3 is irreducible:
if 3 = (a+b√(-5))(c+d√(-5)), then N(3) = 9, so we have either:
N(a+b√(-5)) = 1,3 or 9. if N(a+b√(-5)) = 1, then a = ±1, b = 0, in which case a+b√(-5) = ±1 is a unit. A similar proof show c+d√(-5) = ±1 if N(a+√(-5)) = 9. so if both a+b√(-5) and c+d√(-5) are
to be non-units, we must have N(a+b√(-5)) = 3. this means a^2+5b^2 = 3, for INTEGERS a,b, so |b| < 1, and is thus 0, and a^2 = 3 has no integer solution).
In general, it is more convenient to characterize rings by the properties of ideals, rather than elements (by analogy to groups, where we characterize groups by the behavior of normal subgroups:
that is, which factor groups we can form from them). In fact, the word "ideal" comes from the term "ideal numbers" which were first studied in quadratic extension rings of the integers (perhaps
motivated by a desire to solve Fermat's Last Theorem) as "generalizations" of "prime numbers" in ordinary arithmetic (integers). The general construction is this:
One starts with Q, the rational numbers, and adjoins a root of a quadratic polynomial with integer coefficients, so one gets Q(a). then one considers the sub-ring Z[a]. The ring-theoretic
properties depend on a, for some choices we get a Euclidean domain, for some we do not. The general idea is to extend "number theory" to such rings as much as possible. The "euclidean" definition
of primes: p is a prime iff p|ab implies p|a or p|b generalizes to a prime ideal: ab in P implies a in P or b in P. If R is a Euclidean domain (the nicest situation), then R is a PID, and the
prime ideals P are generated by prime elements p: P = (p).
In the case at hand, the polynomial is x^2 + 5 in Q[x]. Z[√(-5)] are the "integers" of the field Q(a), where a is a root of that polynomial. because this ring is non-euclidean, "factoring" isn't
as helpful as it could be (we cannot say that just because something is irreducible, it is prime, so divisibility arguments can go astray).
October 12th 2012, 02:57 PM
Re: Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadr
Thank you for these posts
Most helpful for those like me engaged in self-study of mathematics
October 12th 2012, 03:01 PM
Re: Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadr
Deveno ,
thanks for the considerable help
Working through the detail of your post now - your post is much appreciated
October 12th 2012, 03:46 PM
Re: Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadr
I followed your post except for the following point - you write:
"N(a+b√(-5)) = 1,3 or 9. if N(a+b√(-5)) = 1, then a = ±1, b = 0, in which case a+b√(-5) = ±1 is a unit. A similar proof show c+d√(-5) = ±1 if N(a+√(-5)) = 9. so if both a+b√(-5) and c+d√(-5) are
to be non-units, we must have N(a+b√(-5)) = 3. this means a2+5b2 = 3, for INTEGERS a,b, so |b| < 1, and is thus 0, and a2 = 3 has no integer solution)."
My question is "Why do both a+b√(-5) and c+d√(-5) have to be non-units?"
[Apologies ... I suspect my question is rather basic ... but I have only just now skimmed the material on UFDs and have not covered them properly]
October 13th 2012, 12:39 PM
Re: Euclidean Domains - Dummit and Foote - Chapter 8 - Section 8.1 - Example on Quadr
in a unique factorization domain, the factorizations are only unique up to units.
for example, in Z, we have: 6 = 2*3 = 1*1*2*3, but we don't really consider these "different" because 1 is a unit. this is the logic behind the rule: "1 is not a prime number".
again, in say, Q[x], when we factor 4x^2 - 4, we have (2x + 2)(2x - 2), AND (4)(x + 1)(x - 1), but these aren't considered "different" because 4 is a unit in Q.
in a ring with unity (which you have to have to even define units (invertible multiplicative elements)), any element r can ALWAYS be written r = u(u^-1r), for any unit u.
the definition of an irreducible in a ring R is something that cannot be written as the product of 2 non-units. that is:
u in R is irreducible if u = ab with a,b in R, implies a or b is a unit.
prime elements are irreducible, but it is not always true that irreducible elements are prime. for example: 3 in Z[√(-5)].
|
{"url":"http://mathhelpforum.com/advanced-algebra/205175-euclidean-domains-dummit-foote-chapter-8-section-8-1-example-quadratic-print.html","timestamp":"2014-04-18T07:30:16Z","content_type":null,"content_length":"17169","record_id":"<urn:uuid:cdc3c569-6b04-4b74-ada6-e382643c71c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convert to COMP-3 using VB
02-14-2006, 12:49 PM #1
Registered User
Join Date
Jan 2006
I am trying to Convert a "HH:MM:SS" value to PIC S9(07) COMP-3
what steps i need to follow.
as a step-1 , i could able to convert "12:04:03" to 0120403C
what should be the next step as finally that data in VSAM looks like :- Ÿ„
so my question is
is this a ASCII char? and how do i convert 0120403C to Ÿ„
Any help would be much appreciated.
Step 1: Convert the time value into a double
Step 2: Convert the Double into a Packed Decimal String (COMP-3)
Step 3: Convert the string into a byte array.
For Example:
Dim tDouble As Double
Dim tStr As String
Dim tByteArray() as Byte
Dim tTimeStr As String
Dim i As Long
tTimeStr = "12:04:03"
tDouble = 120403
tStr = DoubleToPackedString(tDouble, 7, 0) 'function defined below...
'allocate the bytes needed for the resulting string
ReDim tByteArray(0 To Len(tStr) - 1)
For i = 0 To Len(tStr) - 1
tByteArray(i) = Asc(Mid(tStr, i + 1, 1))
Next i
Public Function DoubleToPackedString(pDouble As Double, pLength As Long, pFraction As Long) As String
Dim tDouble As Double
Dim tDecimalStr As String
Dim tLen As Long
Dim tChar As String
Dim tHiBits As Byte
Dim tLoBits As Byte
Dim tPChar As String
Dim i As Long
Dim tPackStr As String
Dim tSignChar As Byte
Dim tFormatStr As String
tPackStr = ""
tFormatStr = String(pLength - pFraction, "0") & "." & String(pFraction, "0")
'format it to the size desired
tDecimalStr = Format(pDouble, tFormatStr)
'if negative remove the leading sign
If pDouble < 0 Then
tDecimalStr = Mid(tDecimalStr, 2)
End If
'remove the decimal place
tDecimalStr = Left(tDecimalStr, InStr(tDecimalStr, ".") - 1) & Mid(tDecimalStr, InStr(tDecimalStr, ".") + 1)
'make sure we only convert the correct overall length expected
'example problem: pDouble=1.0, pLength=7, pFraction=7 (i.e. PIC V9(7) ) resulting formated string = 1.0000000
tDecimalStr = Right(tDecimalStr, pLength)
'if its an even length, we need to add a leading zero to even out
'the result when adding the sign character
' If (pLength - pFraction) Mod 2 = 0 Then
If Len(tDecimalStr) Mod 2 = 0 Then
'add a leading 0
tDecimalStr = "0" & tDecimalStr
End If
tLen = Len(tDecimalStr)
For i = 1 To tLen - 1
tChar = Mid(tDecimalStr, i, 1)
'even number are the "LoBits", odd are the "HiBits"
If i Mod 2 <> 0 Then
'get the value and shift it 4 bits
tHiBits = Val(tChar) * 16
'get the value
tLoBits = Val(tChar)
'add them together and get the resulting character
tPChar = Chr(tHiBits + tLoBits)
tPackStr = tPackStr & tPChar
tHiBits = 0
tLoBits = 0
End If
Next i
'add the sign character
tChar = Mid(tDecimalStr, i, 1)
'get the value of the last character and shift it 4 bits
tHiBits = Val(tChar) * 16
'add the sign character
If pDouble >= 0 Then
tLoBits = 12
tLoBits = 13
End If
'add them together and get the resulting character
tPChar = Chr(tHiBits + tLoBits)
tPackStr = tPackStr & tPChar
DoubleToPackedString = tPackStr
End Function
How did you determine what to input as length to the function? I need to write something to convert a file to ftp up to the mainframe and it contains comp-3 fields so I would like to understand
exactly how this would work.
How did you determine what to input as length to the function? I need to write something to convert a file to ftp up to the mainframe and it contains comp-3 fields so I would like to understand
exactly how this would work.
Do you have the copy book that defines the file layout ? The comp-3 definitions define the length, for example,
05 MyCompField Pic S9(10)V9(2) Comp-3.
V = decimal place
9 = a numeric digit
10 = Length (number digits to the left of the decimal place)
2 = Fraction (number of digits to the right of the decimal place)
So each comp-3 field will be a different length depending on its definition.
so given your example, would the length be 12? I do have to layout that I need for the file and it contains all the cobol definitions for the fields. Do implied decimals count for the length
parameter or do I just sum the digits to the left and right of the decimal?
so given your example, would the length be 12? I do have to layout that I need for the file and it contains all the cobol definitions for the fields. Do implied decimals count for the length
parameter or do I just sum the digits to the left and right of the decimal?
The function calculates the length of the field as follows:
FieldLength = (Length + Fraction)
If FieldLength Mod 2 = 0 Then
'add 1 to the field length
FieldLength = FieldLength + 1
End If
So that in the given the example above, the length of the data is:
If (10 + 2) Mod 2 = 0 Then
FieldLength = 10 + 2 + 1
FieldLength = 10 + 2
End If
Do you know how packed decimal (i.e. comp-3) fields are stored internally ?
Well, I have an explanation that I printed off the web...I would not say I fully understand it though...
I am taking a text file and converting it to EBCDIC to use on the mainframe. Many of the fields that I need to populate contain leading and trailing 0's which seem to be truncated during the
function. Any suggestions on how to modify so it will not lose the zeros. I am feeding a COBOL program so I need to have them for placeholders.
I am taking a text file and converting it to EBCDIC to use on the mainframe. Many of the fields that I need to populate contain leading and trailing 0's which seem to be truncated during the
function. Any suggestions on how to modify so it will not lose the zeros. I am feeding a COBOL program so I need to have them for placeholders.
Sounds like you're using a string conversion methoid and the underlying funciton is intepreting the 0's as control charactes (line feeds for example) and ignoring them and the resulting string is
missing those characters.
The best way is to read the data into a byte array, a line at a time if it's to large, and convert the individual bytes using a ASCII to EBCDIC mapping.
Attached is a class that handles the conversion (extension is renamed to .txt, just change it to .cls) ... hope it helps..
Thanks I will give this a try. I have a combination of fields to read that are converted to comp-3 or just picx and I write out three different layouts for each line that read in.
I have a feed of data that I am reading in and I am only using about 5 fields in that. I use that data for each of the 3 different layouts that I write for each record, but in each layout is a
combination of just picX fields and comp-3 fields. I am confused as how to use the class you provided. The translate function calls in a string so am I to do that first and then add it into the
byte array...sorry if I am being dense about this.
02-19-2006, 10:00 AM #2
Registered User
Join Date
Dec 2004
Atlanta, Ga.
04-25-2007, 08:25 AM #3
Registered User
Join Date
Apr 2007
04-25-2007, 08:39 AM #4
Registered User
Join Date
Dec 2004
Atlanta, Ga.
04-25-2007, 08:53 AM #5
Registered User
Join Date
Apr 2007
04-25-2007, 10:15 AM #6
Registered User
Join Date
Dec 2004
Atlanta, Ga.
04-25-2007, 11:56 AM #7
Registered User
Join Date
Apr 2007
05-10-2007, 09:41 AM #8
Registered User
Join Date
Apr 2007
05-10-2007, 10:04 AM #9
Registered User
Join Date
Dec 2004
Atlanta, Ga.
05-10-2007, 10:19 AM #10
Registered User
Join Date
Apr 2007
05-11-2007, 01:13 PM #11
Registered User
Join Date
Apr 2007
|
{"url":"http://forums.devx.com/showthread.php?161079-Query-a-date-from-database&goto=nextnewest","timestamp":"2014-04-20T00:40:52Z","content_type":null,"content_length":"119476","record_id":"<urn:uuid:dcb42dc9-bdfd-419f-88ce-f4e0482d4858>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Functions and Analysis
1106 Submissions
[3] viXra:1106.0056 [pdf] submitted on 27 Jun 2011
The Introduction of Twist (The Skew) in the Mathematics
Authors: Mircea Selariu
Comments: 10 pages.
The article define a mathematic entity called twist, which generates, in this way, notion of straight line. Straight line becom thus a twist of eccentricity e = 0, and broken line (zigzag line) is a
twist of s = ± 1.
Category: Functions and Analysis
[2] viXra:1106.0055 [pdf] replaced on 27 Jun 2011
The Calculus Relation Determination, with Whatever Precision, of Complete Elliptic Integral of the First Kind.
Authors: Mircea Selariu
Comments: 10 pages. v1 in Romanian, v2 in English.
These papers show a calculus relation ( 50 ) of complete elliptic integral K(k) with minimum 9 precise decimals and the possibility to obtain a more precisely relation.. It results by application
Landen's method, of geometrical-arithmetical average, not for obtain a numerical value but to obtain a compute algebraically relation after 5 steps of a geometrical transformation, called "CENTERED
Category: Functions and Analysis
[1] viXra:1106.0014 [pdf] submitted on 9 Jun 2011
Is Zero to the Zero Power Equal to One?
Authors: Ron Bourgoin
Comments: 4 pages
Sometimes in physics we end up with a function that resembles f(x)=0^0, where for example we have a radius that goes to zero and an exponent goes to zero in k/r n , where k is a constant. Is 0^0 in
such cases equal to unity?
Category: Functions and Analysis
|
{"url":"http://vixra.org/anal/1106","timestamp":"2014-04-19T01:53:48Z","content_type":null,"content_length":"5341","record_id":"<urn:uuid:2257f53e-d0b5-42dc-8967-9e781959bc64>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elementary School Mathematics/Number System and Place Value
The Number System and Place Value.
Different Number Systems -- BasesEdit
Understanding What a Number System IsEdit
Our normal number system (the decimal system), consists of 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). "Deci-" is the Latin prefix for ten. Thus, "decimal" denotes the number system that counts in
tens. We can count from 0 to 9, but when we reach 9 and try to increment, we need to add another place in the number. In the decimal system, the number after 9 is 10. Each place is the number of the
base to an incremented power. This may sound confusing, but try this example: 1234. 4*10^0 + 3*10^1 + 2*10^2 + 1*10^3 = 1234. The first digit (4) is equal to four 1's (1 = 10^0). The second (3) is
equal to three 10's (10 = 10^1). The third (2) is equal to two 100's (100 = 10^2), and the fourth digit is equal to one 1000 (1000 = 10^3). When writing in different systems, care must be taken to
ensure that the reader isn't confused as to which system is currently being used. Thus, you will periodically see N[10] to denote a number in the decimal system, N[2] to denote a number in the binary
system, N[8] for octal, or N[16] for hexadecimal. These (binary, octal, decimal, and hexadecimal) are the most commonly used systems, although you can theoretically have a system with any number as a
The Binary SystemEdit
Computers use base two for their number system, meaning they only use the digits 0 and 1. This corresponds to whether an electric current is on or off, and the presence (or lack) of the current is
what runs the applications on your computer. Starting with 0, the next binary value is 1, just like the decimal system. However, after 1, there is no such thing as a 2 in binary. If you realize that
2[10] = 1*2^1, you've taken the first step to understanding binary. 2[10] = 1*2^1 + 0*2^0. Or, 2[10] = 10[2].
The Decimal SystemEdit
The first number represents the ones, the second number going left represents the tens, and the then third number going left represents the hundreds. For example the number 123 is broken down into
three different places, the 3 is in the one's column, the 2 is in the ten's column, and the 1 is in the hundreds column. So we take the left most digit and read it as one hundred and twenty three, we
can break it down via addition as 100 + 20 + 3 = 123. Going farther than that will require a comma after the hundreds place, and before the thousands place.
Numbers can become quite large or small, we shall example the thousands, ten thousands, hundred thousands, millions, ten millions, hundred millions, billions, ten billions, hundred billions. You can
see how the number system expands on itself and the farther to the left it goes the larger the number. Let us use the number 123,000 and notice that there are zeroes in the one's, ten's, and
hundred's places so we can ignore them for now. A zero by itself is worth nothing, but put numbers to the left of that zero and the value will increase. 123,000 has the 3 in the thousand's place, the
2 in the ten thousand's place, and the 1 in the hundred thousand's place. Notice that we placed a comma before the 3 and after the last zero to the left. This is the way to separate the classes of
numbers to make them easier to read. The 123,000 number is read as one hundred and twenty three thousand.
123,000,000 is another example and the 3 is in the millions, the 2 in the ten millions, and the 1 in the hundred millions. There is a second comma right before the millions mark, and another one
right before the thousands mark. The student will note that every three spaces to the left a comma is added to make the number easier to read. The 123,000,000 is read as one hundred and twenty three
million. The 3 is in the million's place, the 2 is in the ten million's place, and the 1 is in the hundred million's place.
The pattern repeats itself with 123,000,000,000 as you can see by now six numbers with a number in front of it goes into the millions, nine numbers with a number in front of it goes into the
billions. So if there was 1,000,000 numbers the six zeroes before the 1 shows that it is one million and the 1,000,000,000 shows that it is one billion. The student should now learn the pattern of
our number system. After the billion is the trillion 1,000,000,000,000 or twelve zeroes. These numbers are usually so high that they can record a nation's national debt and spending. The
123,000,000,000 is one hundred and twenty three billion, and if you guessed right 123,000,000,000,000 is one hundred and twenty three trillion.
Each space going left has a place value. The first digit is ones, and then the second digit is ten, and then the third digit is hundreds. Each value goes up 10 or rather multiplied by ten so 10 times
10 makes 100. 20 times 10 makes 200. 11 times 10 makes 110. That is each digit can hold 0 to 9, but after passing up the 9 or higher the value goes up to the next position. You cannot, for example
place a two digit number in the one's place or above, you would have to place the left digit in the ten's place and the digit to the right of the left digit to the one's place. No place can hold a
number higher than 9, the student has to learn later to carry the one in addition and subtraction in dealing with moving values from one place to another for addition and subtraction, which goes
beyond the scope of this article for now.
So far we have covered positive Integer numbers. Later on in this article we will cover Rational numbers and other types. To do so means we will be learning what the decimal is, and what the numbers
to the right of the decimal means. All whole numbers we covered so far are to the left of the decimal so 123 is really 123.00 and if we had to add in one-half or 1 over 2 to 123 we would get 1 over 2
is 0.50 and added to 123 equals 123.50 in which 0.50 is a number less than one but greater than zero.
Last modified on 27 December 2010, at 06:09
|
{"url":"http://en.m.wikibooks.org/wiki/Elementary_School_Mathematics/Number_System_and_Place_Value","timestamp":"2014-04-18T18:21:23Z","content_type":null,"content_length":"20673","record_id":"<urn:uuid:923643f2-cd11-4822-90a3-2bba079e148d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FairVote - Ranking the States
Ranking the States
Please see menu on the left for the various rankings.
States are ranked in the following categories:
Voter Turnout:
The percentage of the voting eligible population which voted in a state's U.S. House elections (as opposed to statewide and presidential elections). We use population estimates by Professor Michael
McDonald at George Mason University. His figures estimate the number of voting age adults who are eligible to vote, which means they excludes non-citizens and ex-felons in states that disenfranchise
This index measures the percentage of adult voters in a state who voted for the winning candidate in House elections; it is determined by multiplying voter turnout in U.S. House races by the
percentage of votes cast for winning candidates.
Landslide Index:
Percentage of all races won by at least 20%
Margin of Victory:
The winner's percentage of all votes cast minus the second-place candidate's percentage
Seats-to-Votes Distortion:
The seats-to votes distortion measures the extent to which one party wins a greater percentage of seats than votes and the other party wins a smaller percentage of seats than votes. You add the
percentage distortion for each party and divide by two. For example, if Democrats won 10% more seats than votes and Republicans 6% fewer seats than votes, the distortion would be 8.0%.
Democracy Index:
A state's average ranking in key categories: average margin of victory (measuring overall competitiveness), landslide index(measuring number of somewhat competitive races), seats-to-votes distortion
(measuring how well the intent of voters was reflected by results) and representation index (weighted double, as it measures both voter participation and the percentage of effective votes that elect
|
{"url":"http://archive.fairvote.org/global/?page=547","timestamp":"2014-04-18T13:30:01Z","content_type":null,"content_length":"6595","record_id":"<urn:uuid:e401b986-f7e5-4b53-9216-81c6f2c6e6e3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Tutor] Fibonacci series(perhaps slightly off topic)
John Fouhy john at fouhy.net
Thu Jul 3 02:59:39 CEST 2008
On 03/07/2008, Emil <kuffert_med_hat at hotmail.com> wrote:
> I have created a class called Fibs which allow you to access a specific number in the
> Fibonacci series(http://en.wikipedia.org/wiki/Fibonacci_number) But it seems to me that it
> is a bit inefficient, any suggestions on how to make it more efficient?
Does this behaviour seem correct to you? --
>>> class Fibs(object):
... def __init__(self):
... self.fibsseq = [0, 1]
... def __getitem__(self, key):
... for i in xrange(key):
... self.fibsseq.append(self.fibsseq[-1] +
... return self.fibsseq[key]
>>> f = Fibs()
>>> f[1]
>>> f[1]
>>> f[1]
>>> f.fibsseq
[0, 1, 1, 2, 3]
Maybe if I examine the first Fibonacci number a few hundred times:
>>> ones = [f[1] for i in xrange(500)]
>>> len(f.fibsseq)
Hmm, that's a lot of numbers to calculate when we're only looking at
the first element in the sequence..
(by the way: you might want to replace 'print' with 'return' in your
definition of __getitem__)
(by the way 2: if you follow the above code, and then display
f.fibsseq, you may see some nice curves caused by the " " between each
number. Aren't fibonacci numbers wonderful :-) )
More information about the Tutor mailing list
|
{"url":"https://mail.python.org/pipermail/tutor/2008-July/062802.html","timestamp":"2014-04-19T01:13:07Z","content_type":null,"content_length":"4256","record_id":"<urn:uuid:4c1bf23d-d05d-4d45-83e8-d5c9af1689dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Identication of small amplitude perturbations in the
electromagnetic parameters from partial dynamic
boundary measurements
Habib Ammari
We consider the inverse problem of reconstructing small amplitude perturbations
in the conductivity for the wave equation from partial (on part of the boundary) dy-
namic boundary measurements. Through construction of appropriate test functions by
a geometrical control method we provide a rigorous derivation of the inverse Fourier
transform of the perturbations in the conductivity as the leading order of an appropri-
ate averaging of the partial dynamic boundary perturbations. This asymptotic formula
is generalized to the full time-dependent Maxwell's equations. Our formulae may be
expected to lead to very eective computational identication algorithms, aimed at de-
termining electromagnetic parameters of an object based on partial dynamic boundary
Key words. inverse problem, wave equation, Maxwell's equations, reconstruction,
electromagnetic coeÆcients, geometric control
2000 AMS subject classications. 35R30, 35B40, 35B37, 35L05
1 Introduction
The ultimate objective of the work described in this paper is to determine, most eec-
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/889/0185750.html","timestamp":"2014-04-18T18:39:50Z","content_type":null,"content_length":"8328","record_id":"<urn:uuid:ad12a5a3-8261-431b-b11a-ca28ee218b75>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please recommend a nice and concise math book on probability theory
up vote 3 down vote favorite
My intention is neither to learn basic probability concepts, nor to learn applications of the theory. My background is at the graduate level of having completed all engineering courses in probability
/statistics - mostly oriented toward the applications without emphasizing too much the rigorousness of mathematics.
Now I am very interested in learning what makes the core logic and mathematical framework of the probability theory, as a math branch. More specifically, I would like to learn answers to the
following questions:
1) what are the necessary axioms, from which we can build the probability theory? 2) what are the core theorems and results in the mathematical theory of probability? 3) what are the derived rules
for reasoning/inference, based on the theorems/results in the probability theory?
So I am seeking a book that covers the "heart" of the mathematical probability theory - no need much on applications, or discussion on extended topics.
I would like to appreciate your patience for reading my post and any informative responses.
Regards, finguy
pr.probability axioms
add comment
closed as off-topic by Andres Caicedo, Mark Meckes, Chris Godsil, Willie Wong, Stefan Kohl Jan 28 at 17:04
This question appears to be off-topic. The users who voted to close gave these specific reasons:
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Andres Caicedo, Stefan Kohl
• "MathOverflow is for mathematicians to ask each other questions about their research. See Math.StackExchange to ask general questions in mathematics." – Mark Meckes, Willie Wong
If this question can be reworded to fit the rules in the help center, please edit the question.
1 Answer
active oldest votes
This book is a nice small book on the subject:
David Williams, Probability with Martingales
For a more comprehensive treatment of the subject I suggest the following:
up vote 5 down vote
William Feller, An Introduction to Probability Theory and Its Applications
Patrick Billingsley, Probability and Measure
add comment
Not the answer you're looking for? Browse other questions tagged pr.probability axioms or ask your own question.
|
{"url":"http://mathoverflow.net/questions/155925/please-recommend-a-nice-and-concise-math-book-on-probability-theory?answertab=active","timestamp":"2014-04-19T12:04:22Z","content_type":null,"content_length":"45072","record_id":"<urn:uuid:705f0570-0657-40b9-a344-434b93013d87>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Noncommutative geometry
My first introduction to the theory of Drinfeld modules was in the mid 1970's when I was a graduate student at Harvard. My advisor, Barry Mazur, had heard about them from lectures by Deligne (who, I
believe, had previously met Drinfeld in Moscow). In any case, based on his knowledge of elliptic modular curves, Barry asked me whether the difference of two cuspidal points would be of finite order
in the Jacobian of the modular curves of rank two Drinfeld modules (it is). He expected that showing this would involve Eisenstein series and then said, "But I don't know how to construct them." I
went home and wrote down the obvious formula from $SL_2({\mathbf Z)$ which clearly converged and I was off; it took me a little while to realize that, in fact, the convergence was indeed strong
enough to define a "rigid analytic function" in the sense of John Tate - such rigid functions play the role in nonArchimedean analysis that holomorphic functions do in complex analysis. The glorious
point to Tate's idea was that by drastically reducing the number of "admissable" open sets (via a Grothendieck topology), one could actually force analytic continuation, "GAGA" theorems (which
basically say that anything done analytically on a projective variety actually ends up in the algebraic category), and so on.....
Anyway, once one had Eisenstein series, the definitions of general modular forms were completely straightforward. What was not obvious was eastablishing that they possessed expansions at the cusps in
analogy with the "$q$-expansions" of elliptic modular froms; but one can in fact do this with a little rigid geometry. The resulting expansions arise from the appropriate Tate objects in the theory
also in analogy with the classical elliptic theory. Coherent chomology then shows that the forms of a given weight, which are also holomorphic at the cusps, form finite dimensional spaces and so on.
Moreover, one could readily define the Hecke operators with the obvious definition and see that the Eisenstein series are eigenforms with eigenvalues associated to a prime $(f)$ ($f\in {\mathbf F}_q
[\theta]$) of the from $f^i$.
However, there were some issues that immediately arose which vexed me greatly then, and still do even now with a good deal of progress on them. They are:
1. The Hecke operators are associated to ideals $(i)\subset {\mathbf F}_q[\theta]$ whereas the expansions at cusps are of the form $u^j$ for $j$ an integer and $u$ the local parameter; an obvious
mismatch very much unlike classical theory!
2. A simple combinatorial calculation shows that the Hecke operators are *totally* multiplicative in obvious distinction from what happens with elliptic modular forms.
3. There is a form $\Delta$ highly analogous to its elliptic cousin. Very early on, Serre asked me to compute its eigenvalues and I was surprised that I could show $\Delta$ has the same eigenvalues
as an Eisenstein series. In fact, there are all sorts of forms that have the same eigenvalues, which is, from a classical point of view, very concerning!!
Since then, there has been a lot of great work on these rigid modular forms by Gekeler, Reversat, Teitelbaum, Böckle, Pink, Bosser, Pellarin, Armana and others. I want to focus here on the recent
work of Bartolomé López and, in particular, Aleks Petrov (who is a student of Dinesh Thakur); see http://arxiv.org/abs/1207.6479 . Remarkably there appears to be a very serious connection with my
last post (on the work of Federico Pellarin and Rudy Perkins).
More precisely, as above, let $u$ be the parameter at the cusp $\infty$ that we are expanding our forms about. Now when one computes the expansion of the Eisenstein series at the cusps, one passes
through an intermediate expansion of the form $\sum_a c_a g_a$ where $a$ runs over the monic elements in ${\mathbf F}_q[\theta]$ and $g_a$ is an easily specified function depending on $a$. Such
expansions are called "$A$-expansions" by Petrov and can be seen to be unique. The first example, as mentioned, are the Eisenstein series, but Lopez showed more remarkably that the form $\Delta$ has
an $A$-expansion as does Gekeler's function $h$ (which is a root of $\Delta$).
Petrov shows the existence of infinitely many forms with such $A$-expansions. Moreover, these expansions also work very well with the Hecke operators and, in fact, one can see that they give rise to
eigenforms with very simple eigenvalues (like those mentioned for Eisenstein series). Indeed a form with such an $A$-expansion is essentially determined by its eigenvalues and weight and this is a
very positive development!
Since one has so many forms with such simple eigenvalues, it is natural to wonder if *all* the Hecke eigenvalues are of the same simple form, and so I asked Aleks what examples he had of Hecke
eigenvalues. Now recall that in my last post, if $t$ is a scalar, we defined the quasi-character $\chi_t$ by $\chi_t(f)=f(t)$ for $f \in {\mathbf F}_q[\theta]$. Well, remarkably, Aleks sent me some
tables where, for the primes $f$ calculated, the eigenforms indeed have associated eigenvalues of the form $f^j\chi_t(f)^e$ for various $t$ integral over $A$.....
Let $E$ be a curve of genus $1$ over the rational field $\mathbf Q$. One of the glories of mathematics is the discovery that (upon choosing a fixed rational point "$\mathbf O$") $E$ comes equipped
with an addition which makes its points over any number field (or $\mathbf R$ or $\mathbf C$) a very natural abelian group. (In the vernacular of algebraic geometry, one calls $E$ an "abelian
variety" of dimension $1$ or an "abelian curve".)
Built into this setup is a natural tension between the two different avatars of the integers $\mathbf Z$ which now arise. On the one hand, an integer $n$ is an element of the scalars $\mathbf Q$ over
which our curve $E$ lies; on the other hand, $n$ is also an operator on the group formed by the elliptic curve (and, in fact, it is well known that this operator is actually a morphism on the
elliptic curve).
One would, somehow, like to form a ring that encompasses both of these avatars. An obvious way to do this would be to form ${\mathbf Z}\otimes {\mathbf Z}$ but, alas, this fails as this tensor
product is simply $\mathbf Z$. I have always thought, perhaps naively, that one of the motivations in studying ${\mathbf F}_1$ was the hope that progress could be made here....
In any case, in finite characteristic we are blessed with more flexibility. Let $q$ be a power of a prime $p$ and let ${\mathbf F}_q$ by the field with $q$-elements with $A:={\mathbf F}_q[\theta]$
the polynomial ring in the indeterminate $\theta$. In the 1970's, soon after he defined elliptic modules (a.k.a., Drinfeld modules) Drinfeld was influenced by the work of Krichever to define an
associated vector bundle called a "shtuka". In order to do so, Drinfeld worked with the $2$-dimensional algebra $A\otimes_{\mathbf F_q} A$ which precisely combined the roles of operator and scalar.
Soon after that, Greg Anderson used this algebra to develop his higher dimensional analog of Drinfeld modules (called "$t$-modules"); in particular, Anderson's theory allowed one to create a good
category of "motives" out of Drinfeld modules which is, itself, equipped with a good notion of a tensor product.
One can associate to Drinfeld modules analogs of classical special functions such as $L$-series, gamma functions; etc. Classical theory leads to the expectation that these gamma functions should
somehow be related to the $L$-series much as gamma functions are "Euler-factors at infinity" in classical algebraic number theory. But so far that has not been the case and the connection, if one
exists, remains unknown.
The basic Drinfeld module is the rank $1$ module $C$ discovered by L. Carlitz in the 1930's (in a triumph of old school algebra!); it is a function field analog of the algebraic group ${\mathbf G}_m$
and its exponential is a function field analog of the classical exponential function. Let $\tau(z):=z^{q}$ be the $q$-th power mapping with $\tau^i$ defined by composition; the Carlitz module is then
the $\mathbf F_q$-algebra map defined by $C_\theta:=\theta \tau^0+\tau$. Using Anderson's notion of a tensor product, Greg and Dinesh Thakur rapidly defined, and studied, the $n$-tensor power $C^{\
otimes n}$ of the Carlitz module in "Tensor powers of the Carlitz module and zeta values," Ann. of Math. 132 (1990), 159–191. In particular, they defined the following marvelous function
$$\omega (t):=\theta_1 \prod_{i=0}^\infty \left(1-\frac{t}{\theta^{q^i}}\right)^{-1}\,,$$
where $\theta_1$ is a fixed $(q-1)$-st root of $-\theta$. Notice that $\omega(t)$ is obviously the reciprocal of an entire function and, in that, it reminds one of Euler's gamma function.
However, much more profound is the result of Anderson/Thakur (loc. cit.) that $\lim_{t\mapsto\theta}(t-\theta)\omega(t)$ is the period $\tilde{\xi}$ of the Carlitz module. Here one can't help but be
reminded of the famous equality $\Gamma(1/2)=\sqrt \pi$; so one is led to view $\omega(t)$ as yet another function field manifestation of the notion of a gamma function. Indeed, in a tour de force,
"Determination of the algebraic relations among special $\Gamma$-values in positive characteristic," (Ann. of Math. (2) (2004), 237-313), Anderson, Dale Brownawell, and Matt Papanikolas used $\omega
(t)$ to establish virtually all the transcendence results one would want of the geometric gamma function.
So it was apparent, to me anyway, that this magical $\omega(t)$ should also make itself known in the theory of characteristic $p$ $L$-series. However, I simply did not see how this could happen. This
impasse was recently broken by some fantastic results of Federico Pellarin ("Values of Certain $L$-series in positive characteristic," Ann. of Math. to appear, http://arxiv.org/abs/1107.4511) and
these results precisely provide the operator/scalar fusion mentioned in the title of this blog!
So I would like to finish by describing some of Federico's results, and also those of my student Rudy Perkins in this regard. They both are obtaining all sorts of beautiful formulae of the sort one
might find in the famous book by Whittaker and Watson which is very exciting and certainly bodes very well for the future of the subject. But before doing so, we do need one more result of Anderson/
As in my previous blog put $K:={\mathbf F}_q((1/\theta))$ with the canonical absolute value. Put ${\mathbf T}:=\{\sum_{i=0}^\infty a_it^i\}$ where $\{a_i\}\subset K$ and $a_i\to 0$ as $i \to \infty$;
so $\mathbf T$ is simply the Tate algebra of functions with coefficients in $K$ converging on the closed unit disc.
The algebra $\mathbf T$ comes equipped with two natural operators: First of all, the usual hyperdifferential operators act on $\mathbf T$ via differentation with respect to $t$ in the standard
fashion. Now let $f(t)=\sum a_it^i\in \mathbf T$; we then set $\tau (f):=\sum a_i^qt^i$ and call it the
"partial Frobenius operator" (in an obvious sense). Note that, in this setting, $\tau$ is actually $\mathbf F_q[t]$-linear. Note also, because we are in characteristic $p$ these operators commute.
Anderson and Thakur look at the following partial Frobenius equation on $\mathbf T$: $\tau \phi=(t-\theta)\phi$ (N.B.: $t-\theta$ is the "shtuka function" associated to the Carlitz module). The
solutions to this equation clearly form an $\mathbf F_q[t]$-module and the remarkable result of A/T is that this module is free of rank $1$ and generated by $\omega(t)$.
One can rewrite the fundamental equation $\tau \omega=(t-\theta)\omega$ as
$$(\theta \tau^0+\tau)\omega=t\cdot \omega\,;$$
in other words, if we use the partial Frobenius operators to extend the Carlitz module to $\mathbf T$ then $\omega$ trivializes this action. So if $f(\theta)\in A$ one sees immediately that $C_f\
Abstracting a bit, if $t$ is a scalar, then one defines the "quasi-character" $\chi_t(f):=f(t)$ simply by evaluation. It is Federico's crucial insight that this quasi-character is exactly the
necessary device to fuse both the scalars and operators in the theory of characteristic $p$ $L$-series by defining the associated $L$-series $L(\chi_t,s)$ (in the standard fashion). These functions
have all the right analytic properties in the $s$-variable and also have excellent analytic properties in the $t$-variable!
(The reader might have imagined, as I did at first, that the poles $\{\theta^{q^i}\}$ of $\omega(t)$ are too specialized to be associated to something canonical. However, we now see that these poles
correspond to the quasi-characters $f(\theta)\mapsto f(\theta^{q^i})=f(\theta)^{q^i}$ and so are completely canonical...)
The introduction of the variable $t$ is, actually, a realization of the notion of "families" of $L$-series. Indeed, if $t$ belongs to the algebraic closure of $\mathbf F_q$, then $\chi_t$ is a
character modulo $p(\theta)$, where $p(\theta)$ is the minimal polynomial of $t$.
Theorem: (Pellarin) We have $(t-\theta)\omega(t)L(\chi_t,1)= -\tilde{\xi}\,.$
And so $\omega(t)$ makes its appearance in $L$-series! (One is also reminded a bit of Euler's famous formula $e^{\pi i}=-1$.) Now let $n$ be a positive integer $\equiv 1$ mod $(q-1)$.
Theorem: (Pellarin) There exists a rational function $\lambda_n\in {\mathbf F}_q(t,\theta)$ such that
$$(t-\theta)\omega(t)L(\chi,n)=\lambda_n \tilde{\xi}^n\,.$$
In "Explicit formulae for $L$-values in finite characteristic" (just uploaded to the arXiv as http://arxiv.org/abs/1207.1753), my student Rudy Perkins gives a simple closed form expression for these
$\lambda_n$ as well as all sorts of connections with other interesting objects (such as the Wagner expansion of $\mathbf F_q$-linear functions, recursive formulae for Bernoulli-Carlitz elements,
So the introduction of $\chi_t$ has opened the door to all sorts of remarkable results. Still, the algebraic closure of $K$ is such a vast thing (with infinitely many extensions of bounded degree
etc.), that there may be other surprises we do not yet know. Moreover, we do know that the algebras of measures can be interpreted as hyperdifferential operators on $\mathbf T$. Where are they in the
game Federico started?
Yesterday, the ATLAS and CMS experiments at CERN announced the discovery of a Higgs boson at 125 GeV. Surely, this will become one of the most important discoveries of the century. It also caused
quite a few interesting 4th of July parties (for once, with a good justification for the fireworks).
For theoretical physicists, this is as much a good reason of excitement as for the experimentalists. Although a measurement of the Higgs self interaction will only come after the upgrade at 14 TeV of
the LHC, the current measurement already suggests interesting questions (for example, there appears to be a deficit in the WW channel of decay, which may be an accident, or an indication of something
more interesting).
As it is well known, the noncommutative geometry models of particle physics generally give rise to a heavier Higgs (originally estimated at around 170 GeV, then lowered in more recent versions of the
model, but still well above 125 GeV). The usual method, in these models, to obtain estimates on the Higgs, is to impose some boundary conditions at unification energy, dictated by the geometry of the
model, and running down the renormalization group equations (RGE). The geometric constraints impose some exclusion curves on the manifold of possible boundary conditions, but do not fix the boundary
conditions entirely: in fact, recent work on the NCG models observed a sensitive dependence on the choice of boundary conditions (within the constraints imposed by the geometry). Moreover, the
renormalization group flow typically used in these estimates is the one provided by the one-loop beta function of the minimal Standard Model (or in more recent versions, of effective field theories
obtained from extensions of the MSM by right handed neutrinos with Majorana mass terms, that is, the RGEs considered in hep-ph/0501272v3), rather than a renormalization group flow directly derived
from a quantum field theoretic treatment of the action functional of the NCG model, the spectral action.
Perhaps more interestingly (as what one is after, after all, are extensions of the MSM by new physics), while the original NCG models of particle physics focussed on the MSM, there are now variants
that include new particle: a first addition beyond the MSM was a model with right handed neutrinos with Majorana mass terms, which accounts for neutrino mixing and a see-saw mechanism.
More recently, a very promising program for extending the NCG model was developed by Thijs van den Broek and Walter van Suijlekom (arXiv:1003.3788), for versions with supersymmetry. While their first
paper on the subject deals only with the QCD sector of the model, they are now well on their way towards including the electroweak sector.
I apologize for the plot spoiler, but given the occasion I think it is worth mentioning: the model that van den Broek and van Suijlekom are currently developing appears to be fairly close to the
MSSM, although it is not the MSSM. In particular, the renormalization group equations in their model are going to be different than the equations of MSSM. In particular this means that the "cheap
trick" used so far in the NCG models, of importing RGE equations of known particle physics models and running them with boundary conditions imposed by NCG, will not apply to the supersymmetric
version and Higgs estimates within this model will involve a genuinely different RGE analysis. It will be interesting to see how the Higgs sector changes in their version of the NCG model, and
whether it gives a more realistic picture close to the observed results.
Falsifiability is the most important quality of any scientific theory. Indeed, having explicit experimental data that point out the shortcomings of a theoretical model is the best condition for a
serious re-examination of assumptions and techniques used in model building.
Cheers to the LHC, the ATLAS and CMS collaborations, for a great job!
|
{"url":"http://noncommutativegeometry.blogspot.com/2012_07_01_archive.html","timestamp":"2014-04-17T03:48:48Z","content_type":null,"content_length":"114648","record_id":"<urn:uuid:5d34f411-7299-445e-981e-0d959399d601>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Public Function Minimum( _
ParamArray vValues() As Variant _
) As Variant
Minimum of numeric arguments.
Minimum(56, 12, 34) = 12
Minimum(45, Array(12, 56, 78)) = 12
Minimum(SampleData) = 95
See also:
MinimumArray Function
Maximum Function
Range Function
Small Function
Quartile Function
Percentile Function
StatVarType Property
MIN Function (Microsoft Excel)
MINA Function (Microsoft Excel)
vValues: The arguments whose minimum value is to be returned. Can be numbers, one-dimensional numeric arrays, one-dimensional Variant arrays, one-dimensional Variant arrays with embedded arrays, or
any combination of these. The current setting of the StatVarType Property determines which numeric data types are recognized by this function.
Return value: Function returns the minimum numeric value among all its arguments. Function returns Null if none of the arguments are numeric.
v1.5 Note: This function replaces the MinRealRecursive and MinRealRecursiveArray functions, which have been removed from ArrayArithmetic Class. This function is slightly different in that it examines
all of the elements in the array from its lower bound through its upper bound.
Copyright 1996-1999 Entisoft
Entisoft Tools is a trademark of Entisoft.
|
{"url":"http://www.entisoft.com/ESTools/MathStatistics_Minimum.HTML","timestamp":"2014-04-16T18:56:58Z","content_type":null,"content_length":"3158","record_id":"<urn:uuid:23e1cc69-c90a-44c2-8338-f70f876249fa>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Modeling correlated errors for indicators
Bruce A. Cooper posted on Thursday, July 12, 2007 - 2:57 pm
I've been unable to figure out how to replicate a book example that specifies 8 specific correlations among the residuals of 6 indicators of a latent variable in a structural regression model. I've
tried the statements below, but the Mplus results are not even close to the results in the book example. Is this the correct approach?
famrisk BY parpsych* lowses* ;
cog_achv BY verbal* vis_spa* memory* reading* arith* spelling* ;
c_adjust BY motivtn* harmony* stabilty* ;
famrisk-c_adjust@1 ;
cog_achv ON famrisk ;
c_adjust ON cog_achv ;
verbal WITH vis_spa memory spelling ;
memory WITH vis_spa reading ;
reading WITH arith spelling ;
arith WITH spelling ;
Linda K. Muthen posted on Thursday, July 12, 2007 - 3:21 pm
Using the WITH option is the way to specify residual covariances. Are you sure you have exactly the same number of observations, the same number of parameters, and are using the same estimator?
Bruce A. Cooper posted on Friday, July 13, 2007 - 5:08 pm
Thanks for your blazingly fast reply, Linda!
I waited to reply until I could check the
issue about the number of parameters. Sadly, the problem might well be that the estimator is different. This is the same correlation matrix data set that I wrote about in another post about two weeks
ago. So, Mplus is using ML, but Kline (the author) used Statistica to get accurate estimates because the matrix is corr, not cov. I have gotten results identical to his, however, using RAMONA in
Systat 12.
My main task is to learn Mplus, however. I obtained very similar factor loadings with Mplus for the measurement part of the model, although not exact. I thought that was good enough to try to extend
the model to the SR part, too. One problem (besides the estimator) may be that I don't know how to scale the variances of the latent variables to 1, and also get the residual variances for them in
the SR part of the model. Those estimates are 1.0 in the Mplus output, but they are not the same in the SR models from the book and Systat.
Is there a way to scale the latent variables with variances = 1 in the measurement part of the model (so as to get loadings for all indicators) and still get estimates for the residual variances of
the latent endogenous variables in the SR part of the model?
Linda K. Muthen posted on Monday, July 16, 2007 - 9:01 am
Setting the metric of latent variables is described in the user's guide under the BY option. If you want latent variable variance of one, you free the first factor loading and fix the factor variance
to one, for example,
f BY y1* y2 y3;
Bruce A. Cooper posted on Monday, July 16, 2007 - 2:03 pm
Thanks, Linda -
I think I was not clear about my question. Your suggested syntax was what I had already used in the model.
Suppose I have 6 variables y1-y6 representing 2 factors f1 & f2. I specify this model:
f1 BY y1* y2* y3* ;
f2 BY y4* y5* y6* ;
f1-f2@1 ;
f2 ON f1 ;
y1 WITH y2 y3 ;
y2 WITH y3 ;
y4 WITH y5 y6 ;
y5 WITH y6 ;
I think that this model does the following: The first two lines obtain estimates for the loadings of the two sets of indicator variables on the two factors. The third line sets the variances of the
factors to 1 (in order to get all the loadings). The fourth line is the structural regression of f2 on f1. The remaining lines obtain the residual covariances among the y's within sets.
What I would like to know is how I can obtain all 6 loadings (by setting the factor variances to 1) and still obtain the disturbance (residual) for f2 regressed on f1 in Mplus. Is that possible?
Linda K. Muthen posted on Monday, July 16, 2007 - 2:39 pm
In the above model, the residual variance of f2 is fixed to one not the variance because f2 is a dependent variable. You need to set the metric of a factor by either fixing one factor loading to one
or the factor variance to one. You cannot have all factor loadings free and the factor variance/residual variance free.
Bruce A. Cooper posted on Tuesday, July 17, 2007 - 5:32 pm
Thanks, Linda -
What you said is what I thought, but in Principles and Practice of SEM, 2nd Ed by Kline, he shows loadings for all the indicators in his 3-factor model, corr among 8 pairs of residuals (my first
post), direct effects of f2 on f1, and f3 on f2, AND disturbances for f2 and f3 even though he has set their variances to 1 in order to get all the indicator loadings.
I still don't know how he got all the loadings AND the disturbances from the program that he used for the tabled analyses (Statistica) but at least I know now how to do the analysis in Mplus with ML
as the estimator!
As an aside, I was able to get all the estimates he reported with SYSTAT 12's RAMONA using Maximum Wishart likelhood, but only by including the "illegal" commands to set the factor variances to 1 AND
estimate the disturbances for the factors in the SR part of the model. I don't know why that worked, since it isn't supposed to be possible! In his chapters on CFA and SEM, Kline himself says that
models with latent variables have to be identified either by setting an indicator loading to 1, or by scaling the variance of the latent variable to 1. So I don't know why Statistica (apparently) and
RAMONA are able to get away with estimating the disturbances along with all indicator loadings!
All this by way of explaining the source of my questions re Mplus. Again, thanks!
- bac
Linda K. Muthen posted on Tuesday, July 17, 2007 - 6:14 pm
If he has fixed his factor variances to one, then the parameters will not be estimated. Perhaps you are looking at the standardized solution.
If a program provides estimates for non-identified parameters, these results are meaningless.
Back to top
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=11&page=2399","timestamp":"2014-04-19T22:08:15Z","content_type":null,"content_length":"28995","record_id":"<urn:uuid:85752d88-eed1-4644-a05a-ef9fd320b37d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sofiane S
Sofiane Soussi
• Department of Mathematics and Statistics
• University of Limerick
• Limerick
• Ireland
• Phone : +353 (0) 61 20 2012
• Mobile : +33 (0) 6 41 66 69 90
+44 755 43 83 950
• E-mail : sofiane.soussi__AT_gmail.com
I am a lecturer at the department of mathematics and statistics at the University of Limerick.
I prepared my Ph.D. thesis under the supervision of Pr. Habib Ammari at the Centre de Mathématiques Appliquées of the Ecole Polytechnique.
The title of the thesis is "Mathematical modeling in optics". I studied electromagnetic diffraction by objects with nonlinear thin-coating and modelisation of photonic crystals.
Numerical simulations
Working papers
• S. Soussi, Transparent boundary conditions for semi-infinite periodic waveguides.
• S. Soussi and H. Zribi, Asymptotic expansions for the voltage potentials with thin interfaces in the case of high contrast.
• T. Hohage and S. Soussi, Completeness of the Floquet modes for periodic waveguides.
• S. Soussi, Transparent boundary condition for the photonic canvity problem.
|
{"url":"http://num.sofianesoussi.eu/","timestamp":"2014-04-20T09:19:07Z","content_type":null,"content_length":"174478","record_id":"<urn:uuid:07f9b6a2-d4f8-4a78-81d6-19acea1a630e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Competition April 2007
Alfred State College placed fourth in statewide competition during the New York State Mathematics Association of Two-Year Colleges (NYSMATYC) Math Contest. Additionally, individual achievers included
Colin Coon who placed eighth of 651 spring participants statewide, and Kyler Star placed 12th for combined spring/fall participants! He was awarded a software package at the Regional NYSMATYC
Conference held in April.
The spring portion of the annual NYSMATYC Math Contest was held in March. A total of 651 students from 20 NYS two-year colleges entered the competition with 49 Alfred students participating.
The contest consists of 20 questions (puzzles) to challenge the mathematical creative thinking skills at the pre-calculus level of mathematics. Participants are permitted the use of calculators but
creators of the contest make every effort to create contest questions that test conceptual knowledge rather than calculator dexterity.
Each participating two-year school submitted the names and original answer sheets of students having the five highest scores for team competition. This spring the ASC team members were Colin Coon,
Trumansburg; Kyler Star, Prattsburg; Paul Congdon, Bath; Dustin Falkner, Gainsville; Angela Corby, Horseheads; Sung-Ji Kim, Korea; and Tim Riehlman, Homer. All Alfred State team members placed in the
top seven percent of the 651 college participants. Hudson Valley Community College was team champion for the spring event.
Rounding out the top 10 scores for Alfred State were William Hull, Anthony Wronka, and Daniel Brown.
The contest will take place again during the fall semester in early October 2007 and again during the spring semester in early March 2008.
Local contest coordinator, Elaine Nye, associate professor, and other faculty of the Mathematics/Physics Department thank all those who participated and congratulate those whose efforts aided Alfred
in attaining the rank of fourth in the state.
|
{"url":"http://www.alfredstate.edu/print/news/2007-05-02/math-competition-april-2007","timestamp":"2014-04-19T10:40:35Z","content_type":null,"content_length":"9537","record_id":"<urn:uuid:373b2a11-5eb3-409d-9245-69b2becce0b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why is 255 the magic number? [Archive] - Retrogaming Roundtable
View Full Version : Why is 255 the magic number?
05-18-2008, 03:25 PM
05-18-2008, 03:45 PM
Could you explain a little more?
Because it's the largest number that can be stored in a single byte.
05-18-2008, 04:50 PM
Because it's the largest number that can be stored in a single byte.
we have a winner.
05-18-2008, 05:00 PM
In base 2, 255 = 11111111.
Per Wikipedia: This number occurs especially frequently in video games when a small number is needed, such as in the original The Legend of Zelda for the Nintendo Entertainment System where the
maximum number of Rupees (the currency of the game) is 255. In Metroid Prime, for the Nintendo Gamecube, 255 is the maximum number of missiles Samus Aran can hold. In the Madden NFL series, the
maximum points you can score is 255. If more points are scored, the game score remains at 255 regardless. In Starcraft, the maximum number of kills shown for a unit is 255. In Square Enix's Final
Fantasy series, it is often the maximum value for any given stat. In Pac-Man, the highest level you can reach is 255, The game glitches from that point on. In World of Warcraft although the game
level cap is only 70, the engine supports up to 255. The usage of 8 bits for storage in older videogames has had the consequence of it appearing as a hard limit in many videogames. It was often used
for numbers where casual gameplay would not cause anyone to exceed the number. However in most situations it is reachable given enough time. This can cause many other peculiarities similar to the
above listed to appear when the number wraps back to 0.
Brought to you by Tootie.
05-18-2008, 05:03 PM
Like the other guys said, a single byte can hold 256 values, from 0 to 255. Since a byte equals 8 bits, this works particularly well on 8 bit machines.
Visually: A byte looks like this xxxxxxxx, where x is a 0 or a 1 and each x represents 1 bit. So you can have 01010101, 11111111, 00110011 and so on. Doing the math, since each bit can be in one of
two states, and there are 8 of them, 2^8=256.
I hope that made sense :)
05-18-2008, 06:11 PM
dont forget hexadecimal ff=255
you will see hex used in many cheat deviced aka action replay, cheats ending in 63hex = 99 dec for 99 lives or ff used for maxed life.
In mathematics and computer science, hexadecimal (also base-16, hexa, or hex) is a numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to
represent values zero to nine, and A, B, C, D, E, F (or a through f) to represent values ten to fifteen.
Its primary use is as a human friendly representation of binary coded values, so it is often used in digital electronics and computer engineering. Since each hexadecimal digit represents four binary
digits (bits)—also called a nibble—it is a compact and easily translated shorthand to express values in base two.
05-18-2008, 07:09 PM
I guess you are referring to stuff like Lagoon, YS, and Mario RPG. That everything maxes out at 255?
I always thought that it was a cool sound number.
05-18-2008, 11:18 PM
I remember in Final Fantasy Adventure, your money would max out at 65535 = FFFF h = 256^2 - 1.
But in generic terms, it's always called 256 (64, 128, 256 etc...), why's that?
Icarus Moonsight
05-19-2008, 01:59 AM
It's not a generic term. A byte can have 256 values. One of which is zero. Seems like paradox when you look at it in a usual day-to-day base 10 mentality where 1 is the first value. That is only
because (in real life) it's stupid to count things that are not there. LOL
05-19-2008, 02:40 AM
While we're here, I believe character levels in Star Ocean cap out at 255 as well.
05-19-2008, 12:46 PM
Since Contra's score uses two bytes, the maximum score is 6,553,500, the maximum that can be stored in two bytes (with two zeroes thrown in at the end, since nothing in the game will ever increment
the ones or tens digit).
blue lander
05-21-2008, 09:54 AM
255 is the largest number you can put in a single byte unless you're using two's compliments to represent negatives. In that case, 11111111 actually is -1 and the largest number you can have is 127
which is 01111111.
I remember in Final Fantasy Adventure, your money would max out at 65535 = FFFF h = 256^2 - 1.
Something similar in Perfect Dark (N64) : the number of people killed in your Combat Simulator stats can't get higher than 1 048 575 which is 2^20 - 1. :)
06-19-2008, 01:51 PM
looks like I'm one of the few people here that were thinking about getting high and staring at the cuties in highschool rather than studying math :/. You guys are whizes. Of course I was never in the
higher math courses.
06-19-2008, 02:06 PM
This isn't really stuff they taught in math class. More than likely all these guys were taking computer science classes rather than getting high and staring at the cuties :p
06-20-2008, 10:14 AM
This isn't really stuff they taught in math class. More than likely all these guys were taking computer science classes rather than getting high and staring at the cuties :p
Aww, come on. There was always time for both! Women love dudes with huge brains...as long as they aren't egotistical, ugly, antisocial and awkward. Unless he has some money, of course. :D
Kdding ladies. Officially,anyway. ;)
06-20-2008, 12:07 PM
well either way, as you can see where not paying attention didn't help me out too much ha ha ha.
|
{"url":"http://www.digitpress.com/forum/archive/index.php/t-116933.html","timestamp":"2014-04-19T15:09:36Z","content_type":null,"content_length":"10322","record_id":"<urn:uuid:6507eecf-6626-43fd-88ca-60ac81878ce6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SE PhD Prospectus Defense of Jing Conan Wang
2:00 pm on Wednesday, June 19, 2013
4:00 pm on Wednesday, June 19, 2013
15 Saint Mary's Street, Rm 105
TITLE: Decision-Making For Complex Systems with Uncertainty: Markov Decision Processes and Network Anomaly Detection ABSTRACT: Decision-making for systems with uncertainty is very challenging. We
consider two types of decision-making problems. One is Markov Decision Process (MDP). The other is Anomaly Detection for Network Traffic. MDP and Dynamic Programming (DP) are general frameworks for
sequential decision-making under uncertainty. It is well known that dynamic programming suffers from the so-called “curse of dimensionality”. Besides, in many cases with large uncertainty, transition
probability is not explicitly available, in which case we must resort to Approximate Dynamic Programming (ADP) techniques. Actor-critic method is a promising type of ADP. This thesis will focus on
the methods with the actor-critic structure which optimizes some Randomized Stationary Policy (RSP) using policy gradient estimation. Our first contribution is to introduce the the method of Least
Squares Temporal Difference (LSTD) to the critic, which is known to have better convergence rate than both TD(1) and TD(\lambda). Second, to improve the performance of actor-critic methods for
ill-conditioned problems, we propose the Hessian Actor-Critic (HAC) method. Although each iteration takes more computation, HAC outperforms LSTD actor-critic for ill-conditioned problems by reaching
the optimal point with less iteration number. We evaluate our methods with a problem of finding a control policy for robot under an uncertain environment to maximize the probability of reaching some
states while avoiding some other states. System Anomaly Detection is another important type of decision-making problems under certainty. This thesis focuses on the anomaly detection of Network
System. Two types of network system are considered. First, we present four stochastic and deterministic methods to anomaly detection of stationary network whose uncertainty can be characterized by a
stationary model. Our methods cover most of the common techniques in the anomaly detection field, including Statistical Hypothesis Tests (SHT), Support Vector Machines (SVM) and clustering analysis.
We evaluate all methods in a simulated network that consists of nominal data, three flow-level anomalies and one packet-level attack. Through analyzing the results, we point out the advantages and
disadvantages of each method and conclude that combining the results of the individual methods can yield improved anomaly detection results. Second, for the dynamic network whose uncertainty is not
stationary, we propose two robust stochastic methods. We first formulate a binary composite hypothesis testing problem for evaluating whether a sequence of observation comes from a family of
Probabilities Laws (PL). Then two robust methods, one is model-free and one is model-based, are proposed to solve this problem. The two methods are applied to anomaly detection in dynamic network
systems. We take three steps to estimate the underlying family of PLs in the dynamic network. First, two types of PLs are suggested to characterize the non-stationarity of the network. Then each
feature is inspected separately to get rough estimation of PLs, which usually generates a lot of redundancy. Later, an Integer Programming (IP) problem is formulated to select a refined set of PLs by
analyzing reference data. The simulation results show that the robust methods have significant advantages over vanilla methods in the anomaly detection of dynamic network systems. COMMITTEE: Advisor:
Ioannis Paschalidis, SE/ECE; Christos Cassandras, SE/ECE; David Starobinski, SE/ECE; Mark Crovella, SE/CAS, Computer Science
|
{"url":"http://www.bu.edu/research/home/calendar/?eid=139893","timestamp":"2014-04-19T05:21:51Z","content_type":null,"content_length":"19959","record_id":"<urn:uuid:0eb62130-ee94-4e85-85d0-4d500f1acaef>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
write a differential equation in dy/dt=ay+b form with all solutions approaching y=3 as t-> infinity
• one year ago
• one year ago
Best Response
You've already chosen the best response.
no im dumb
Best Response
You've already chosen the best response.
Actually, we can reason it out another way.
Best Response
You've already chosen the best response.
Do you know what dy/dt *means*?
Best Response
You've already chosen the best response.
y = (3n+1)/(n+1) dy/dt = 2/(x+1)^2
Best Response
You've already chosen the best response.
ya i do
Best Response
You've already chosen the best response.
Okay then, so if limt->infy should give y(t) = 3, then, at infinity, what should the slope of y(t) be?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
If, at infinity, our function needs to keep getting closer to y=3, then what value should the SLOPE of the function be approaching?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
wait no 0
Best Response
You've already chosen the best response.
what is the limit of y = (3n+1)/(n+1) at infinity?
Best Response
You've already chosen the best response.
@v4xN0s Yes, 0. What's another name for slope?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Yes, or, more relavant to this problem, dy/dt. Now, as you said, we want dy/dt = 0 when y = 3, right?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Okay then. In ay+b, if I make y = 3, then what can a and b be?
Best Response
You've already chosen the best response.
so it would be y'=3-y?
Best Response
You've already chosen the best response.
That's one solution, b = 3 and a = -1.
Best Response
You've already chosen the best response.
no we want y = 3 when t = infinity?
Best Response
You've already chosen the best response.
@zzr0ck3r yes thats what the question says
Best Response
You've already chosen the best response.
what if all the solutions diverged from y=2
Best Response
You've already chosen the best response.
"diverged from y = 2" --> what?
Best Response
You've already chosen the best response.
as t approaches infinity
Best Response
You've already chosen the best response.
@v4xN0s we're not done with the previous one yet
Best Response
You've already chosen the best response.
oh alright i thought that was the write answer QQ
Best Response
You've already chosen the best response.
QQ? Anyway, that's just ONE anwer. In fact, any a and b that makes 3a+b = 0 works.
Best Response
You've already chosen the best response.
But it depends on the initial conditions, too.
Best Response
You've already chosen the best response.
oh i see well i just needed one diff eq for the problem
Best Response
You've already chosen the best response.
Okay, your example of dy/dt=3-y was a bit of a lucky guess.
Best Response
You've already chosen the best response.
See, if your initial condition is\[y_0>3\]then your slope is negative, and y decreases until it 'reaches' 3, at which point it 'stays' constant.
Best Response
You've already chosen the best response.
i see and if its less than 3 then its positive and increases as it gets closer to 3
Best Response
You've already chosen the best response.
Yes. If\[y_0=3\], then your solution is a line, starting at y=3 and staying that way (dy/dt = 0).
Best Response
You've already chosen the best response.
wait no slope still decreases
Best Response
You've already chosen the best response.
Um no.....
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Yeah, those are the three possible solution curves. Why do you think the slope decreases for the initial cond less than 3 one?
Best Response
You've already chosen the best response.
the bottom curve's slope is appreaching 0 isnt it, so the slope would be decreasing
Best Response
You've already chosen the best response.
Yes, its *decreasing* but *positive*. That's the whole deal - regardless where it is, it gets closer to 0 as it goes to infinity.
Best Response
You've already chosen the best response.
ok got that one, but what if solutions diverge from y=2
Best Response
You've already chosen the best response.
So, back to where I was. You were lucky when you picked dy/dt as 3-y. What would have happened if you've done y-3 (note, this still has slope of 0 when y =3)?
Best Response
You've already chosen the best response.
it becomes negative?
Best Response
You've already chosen the best response.
Well think about it. If our \[y_0>3\]What will happen?
Best Response
You've already chosen the best response.
We're above the line we want to tend to in end behavior, and what's our slope?
Best Response
You've already chosen the best response.
its decreasing and negative
Best Response
You've already chosen the best response.
No. if y>3, and dy/dt is y-3, our slope is positive. As y increases, y-3 increases. Thus dy/dt is increasing and positive. What do you think this will do to the function? Then try y_0=3 and y_0
Best Response
You've already chosen the best response.
sorry but i have no idea
Best Response
You've already chosen the best response.
Well, if y > 3, do you see why y-3 is > 0? Then dy/dt is greater than 0. Thus our slope is positive. Our particle or whatever increases in y in that differential amount of time, resulting in an
even larger slope (since y-3 also gets bigger). Do you not see this makes you diverge?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5032f008e4b0382dcf76040d","timestamp":"2014-04-19T02:09:00Z","content_type":null,"content_length":"147605","record_id":"<urn:uuid:0c822a91-200f-4038-88c9-db740dd1f05c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Undergraduate Research | Department of MathematicsUndergraduate Research
Are you interested in doing undergraduate research in mathematics, attend a conference, and publish your journal paper? If so, follow the links to the below for information about undergraduate
research opportunities, conferences, and journals.
Research Opportunities
Undergraduate research conferences
Journals of Undergraduate Research in Mathematics
Below is a list of Mathematics faculty members at UMW involved with undergraduate research, with topics of interest.
I have sponsored undergraduate research grants in the following subjects:
1. The applications of fractional calculus in mathematics and physics
2. Minimal surfaces
3. The applications of PDE/Fourier Analysis
4. Differential geometry and its applications to physics
5. Elliptic partial differential equations and harmonic maps
6. Harmonic maps and applications to physics
7. Einstein’s Equations and Their Applications
8. Constant Mean Curvature Surfaces and Their Applications
9. Gauss-Cadazzi- Mainardi Equations and Their Applications
10. Gauss-Bonnet Formulas and Their Applications
My undergraduate research interests lie primarily in generalizations of parts of abstract algebra and topology. My true research interests are in an area called category theory, which is an attempt
to give axiomatic approaches that unify algebra, topology, and other parts of purely theoretical mathematics. I have had several students complete projects in category theory and would be interested
in doing more undergraduate research in this area. To see examples of my former students’ work, visit http://doctorh.umwblogs.org/student-research/.
I am interested in working with students on projects in multivariate statistics (e.g., regression analysis, analysis of variance, variable reduction techniques). Most projects will include the
derivation of some theoretical results and simulations to verify distributional properties. I am also interested in working with students in the use of spatial statistics, particularly in
environmental applications. Some course work in statistics is needed, preferably MATH 381, and some background in programming is helpful but not required. To learn more about the kinds of projects
I have mentored please visit my undergraduate research page.
I have various undergraduate research problems in applied mathematics. In a project, for instance, you would consider a mathematical model equation (e.g., differential equations or (stochastic)
partial differential equations) of physical phenomenon. Then you would try to solve it by some mathematical methods (if possible) and by numerical methods using programming languages such as MATLAB
(including built-in solvers in MATLAB) to make predictions about how the physical phenomenon will behave in different circumstances and/or evaluate the performance of numerical solvers of the model
equation. In this project, you would learn how to analyze a real-life problem using the mathematical language that is used by many people in industry and government. This would be a great experience
and also would give a great opportunity to you who may be interested in seeking employment at places like Dahlgren. My former students’ projects can be found in here.
I have directed undergraduate research in several areas of discrete mathematics. Different types of projects are available depending on student interest. Students wo
rking with me should have taken Linear Algebra and Discrete Mathematics (125, preferably 325 as well). More advanced projects could require abstract algebra as well. See my undergraduate research
page for more detail on past projects.
Suzanne Sumner:
I have two projects in mind for undergraduate research.
1. Competing Species Models
Competing species forestry models use differential equations to examine the long-term population levels with two types of species of trees. These species are either pioneer (those trees that are
deprivation intolerant) and climax (those trees that receive a benefit from having other trees nearby.) In particular, the primary concern is when the long-term tree population values level off or
fluctuate. Hopf bifurcations mark a change in the dynamics from stable to unstable scenarios.
2. Honey Bee Biology Models
In recent years honey bees have been parasitized by two species of mites, the tracheal mite and the varroa mite. These mites have decimated numerous colonies, severely impacting the areas of
agriculture that rely on honey bee pollination. Difference equation models treat the parasitism as a Susceptible-Infected-Removed (SIR) model as in earlier work by Dr. Wyatt A. Mangum. Some bees
carry desirable traits that allow them to withstand mite infestation. Here the primary concern is determining the proportion of bees with these traits that a colony must have so that the overall
percent infestation decreases. Bifurcation surfaces separate unstable scenarios where percent mite infestation increases from stable situations where percent infestation decreases.
|
{"url":"http://cas.umw.edu/math/student-opportunities/undergraduate-research/","timestamp":"2014-04-19T01:48:28Z","content_type":null,"content_length":"38486","record_id":"<urn:uuid:1a405369-38d1-4290-98a5-eeaa4dc41837>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Article
Bibliographic References
Add to:
ASCII Text x
L.J. Hendren, A. Nicolau, "Parallelizing Programs with Recursive Data Structures," IEEE Transactions on Parallel and Distributed Systems, vol. 1, no. 1, pp. 35-47, January, 1990.
BibTex x
@article{ 10.1109/71.80123,
author = {L.J. Hendren and A. Nicolau},
title = {Parallelizing Programs with Recursive Data Structures},
journal ={IEEE Transactions on Parallel and Distributed Systems},
volume = {1},
number = {1},
issn = {1045-9219},
year = {1990},
pages = {35-47},
doi = {http://doi.ieeecomputersociety.org/10.1109/71.80123},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Parallel and Distributed Systems
TI - Parallelizing Programs with Recursive Data Structures
IS - 1
SN - 1045-9219
EPD - 35-47
A1 - L.J. Hendren,
A1 - A. Nicolau,
PY - 1990
KW - Index Termsautomatic parallelization; disambiguation techniques; parallelizing compilers; parallel programming languages; recursive data structures; interference; imperative language; dynamic
data structures; interference analysis tools; parallelization techniques;imperative programs; dynamically updatable trees; directed acyclic graphs; regular-expression-like representation;
accessible nodes; data structures; directed graphs; parallel programming; program compilers; trees (mathematics)
VL - 1
JA - IEEE Transactions on Parallel and Distributed Systems
ER -
A study is made of the problem of estimating interference in an imperative language with dynamic data structures. The authors focus on developing efficient and implementable methods for recursive
data structures. In particular, they present interference analysis tools and parallelization techniques for imperative programs that contain dynamically updatable trees and directed acyclic graphs.
The analysis methods are based on a regular-expression-like representation of the relationship between accessible nodes in the data structure. They authors have implemented their analysis, and they
present some concrete examples that have been processed by this system.
[1] R. Allen and K. Kennedy, "Automatic translation of FORTRAN to vector form,"ACM Trans. Programming Languages Syst., vol. 9, no. 4, pp. 491-524, 1987.
[2] U. Banerjee, "Speedup of ordinary programs," Ph.D dissertation, Dep. Comput. Sci. Univ. Illinois, Urbana-Champaign, Rep. No. UIUCDCS-R-79-989, 1979.
[3] J. P. Banning, "An efficient way to find the side effects of procedure calls and the aliases of variables," inProc. 6th POPL Conf., ACM, pp. 724-736, 1979.
[4] J. M. Barth, "A practical interprocedural dataflow analysis algorithm,"Comm. Assoc. Computing Machinery, vol. 21, no. 9, pp. 724-726, Sept. 1978.
[5] G. Bilardi and A. Nicolau, "Adaptive bitonic sorting: An optimal parallel algorithm for shared-memory machines,"SIAM J. Computing, vol. 18, no. 2, pp. 216-228, 1989.
[6] M. Burke and R. Cytron, "Interprocedural dependence analysis and parallelization," inProc. SIG-PLAN '86 Symp. Comp. Construct., Palo Alto, CA, June 1986, pp. 162-175.
[7] L. J. Hendren, "Recursive data structures and parallelism detection,"Tech. Rep.TR 88-924, Cornell Univ., Ithaca, NY, June 1988.
[8] L. J. Hendren and A. Nicolau, "Interference analysis tools for parallelizing programs with recursive data structures," inProc. Internat. Conf. Supercomputing, June 1989, pp. 205-214.
[9] L. J. Hendren and A. Nicolau, "Parallelizing programs with recursive data structures," inProc. Internat. Conf. Parallel Process., vol. II, Software, Aug. 1989, pp. 49-56.
[10] S. Horwitz, P. Pfeiffer, and T. Reps, "Dependence analysis for pointer variables," inProc. SIGPLAN '89 Conf. Program. Lang. Design and Implement., June 1989, pp. 28-40.
[11] P. Hudak, "A semantic model of reference counting and its abstraction," inAbstract Interpretation of Declarative Languages, S. Abramsky and C. Hankin, Eds. West Sussex, UK: Ellis Horwood, 1987,
pp. 45-62.
[12] N. D. Jones and S. Muchnick, "A flexible approach to interprocedural data flow analysis and programs with recursive data structures," in9th ACM Symp. Principles Program. Lang., 1982, pp. 66-74.
[13] N. D. Jones and S. Muchnick, "Flow analysis and optimization of LISP-like structures," inProgram Flow Analysis, Theory, and Applications. Englewood Cliffs, NJ: Prentice Hall, 1981, ch. 4, pp.
[14] J. R. Larus and P. N. Hilfinger, "Detecting conflicts between structure accesses," inProc. SIGPLAN '88 Conf. Program. Lang. Design and Implement., June 1988, pp. 21-34.
[15] J. R. Larus and P. N. Hilfinger, "Restructuring Lisp programs for concurrent execution," inProc. ACM/SIGPLAN PPEALS 1988-Parallel Program.: Experience with Appl., Lang., Syst., July 1988, pp.
[16] J. M. Lucassen, "Types and effects: Towards the integration of functional and imperative programming," PhD dissertation, M.I.T.. Cambridge, MA, 1987.
[17] J. M. Lucassen and D. K. Gifford,"Polymorphic effect systems." inProc. 15th ACM Symp. Principles Program. Lang., 1988, pp. 47- 57.
[18] A. Neirynck, "Static analysis of aliasing and side effects in higher-order languages," Ph.D. dissertation, Cornell Univ., Ithaca, NY. Jan. 1988.
[19] A. Neirynck, P. Panangaden, and A. J. Demers, "Computation of aliases and support sets," inProc. 14th ACM Symp. Principles Program. Lang., 1987, pp. 274-283.
[20] A. Nicolau, "Parallelism, memory anti-aliasing and correctness for trace scheduling compilers," Ph.D. dissertation, Yale Univ., June 1984.
[21] D. A. Padua and M. J. Wolfe, "Advanced compiler optimizations for supercomputers,"Common. ACM, vol. 29, no. 12, pp. 1184- 1201, Dec. 1986.
[22] C. Ruggieri and T. P. Murtagh, "Lifetme analysis of dynamically allocated objects," inProc. 15th ACM Symp. Principles Program. Lang., 1988, pp. 285-293.
[23] M. J. Wolfe, "Optimizing supercompilers for supercomputers," Ph.D. thesis, Ctr. Supercomput. Res. and Development, Univ. Illinois, Urbana-Champaign, 1980.
Index Terms:
Index Termsautomatic parallelization; disambiguation techniques; parallelizing compilers; parallel programming languages; recursive data structures; interference; imperative language; dynamic data
structures; interference analysis tools; parallelization techniques;imperative programs; dynamically updatable trees; directed acyclic graphs; regular-expression-like representation; accessible
nodes; data structures; directed graphs; parallel programming; program compilers; trees (mathematics)
L.J. Hendren, A. Nicolau, "Parallelizing Programs with Recursive Data Structures," IEEE Transactions on Parallel and Distributed Systems, vol. 1, no. 1, pp. 35-47, Jan. 1990, doi:10.1109/71.80123
Usage of this product signifies your acceptance of the
Terms of Use
|
{"url":"http://www.computer.org/csdl/trans/td/1990/01/l0035-abs.html","timestamp":"2014-04-18T14:13:49Z","content_type":null,"content_length":"57092","record_id":"<urn:uuid:f0e47786-ce4f-4ff6-a52f-93cdeb52b45d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: April 2008 [00925]
[Date Index] [Thread Index] [Author Index]
Re: A Problem with Simplify
• To: mathgroup at smc.vnet.net
• Subject: [mg88052] Re: A Problem with Simplify
• From: Alexey Popkov <popkov at gmail.com>
• Date: Wed, 23 Apr 2008 04:07:15 -0400 (EDT)
• References: <200804200351.XAA11379@smc.vnet.net> <fuhfjk$iu9$1@smc.vnet.net>
On 21 =C1=D0=D2, 11:26, Andrzej Kozlowski <a... at mimuw.edu.pl> wrote:
> And even in the purely algebraic cases Reduce can easily take for
> ever. Or consider this:
> Reduce[x^3 + Sin[x] == 0, x]
> During evaluation of In[34]:= Reduce::"nsmet" : "This system cannot
> be solved with the methods available to Reduce"
> even though anyone can easily see that 0 is a solution (but Reduce is
> not allowed to return an incomplete solution).
> Adnrzej Kozlowski
I was surprised a bit. It is sad that even if I specify the Real
domain I may not give the only possible answer x=0:
Reduce[x + Sin[x] == 0, x, Reals]
Solve[x + Sin[x] == 0, x, Reals]
Now I understand the depth of the problem.
But speaking about Integrate, is it really necessarily to perform
Reduce[] on each step? The problem is to find the singularities on the
parameters of the argument function (I mean such values of the
parameters those degenerate the argument function). After this we
should keep track on arising new conditions on each step. It does not
mean to use Reduce. We need only understand what we really do and know
about limitations. As I think this is not so much complicated task and
may be fully implemented in Mathematica (if it is not implemented
already). On the final result we may need perform searching for the
singularities again - but only for checking the result!
But as I see first of all Wolfram Research should extend Reduce[] for
working with trigonometric functions. This is that we should wait for
nearest-future version of Mathematica. If we can not expect this -
what for we should pay money?
• Follow-Ups:
• References:
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Apr/msg00925.html","timestamp":"2014-04-17T01:19:32Z","content_type":null,"content_length":"27256","record_id":"<urn:uuid:0f5a0068-1d83-4837-abb0-0d000c365846>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sign-Changing and Extremal Constant-Sign Solutions of Nonlinear Elliptic Neumann Boundary Value Problems
Our aim is the study of a class of nonlinear elliptic problems under Neumann conditions involving the
1. Introduction
The motivation of our study is a recent paper of the author in [1] in which problem (1.1) was treated in case
Neumann boundary value problems in the form of (1.1) arise in different areas of pure and applied mathematics, for example, in the theory of quasiregular and quasiconformal mappings in Riemannian
manifolds with boundary (see [2, 3]), in the study of optimal constants for the Sobolev trace embedding (see [4–7]), or at non-Newtonian fluids, flow through porus media, nonlinear elasticity,
reaction diffusion problems, glaciology, and so on (see [8–11]).
The existence of multiple solutions for Neumann problems like those in the form of (1.1) has been studied by a number of authors, such as, for example, the authors of [12–15], and homogeneous Neumann
boundary value problems were considered in [16, 17] and [15], respectively. Analogous results for the Dirichlet problem have been recently obtained in [18–21]. Further references can also be found in
the bibliography of [1].
In our consideration, the nonlinearities
First, we have to make an analysis of the associated spectrum of (1.1). The Fu
has a nontrivial solution. In view of the identity
we see at once that for
We say that 22]). Furthermore, one can show that 23, Lemma 24, Theorem 25, Theorem
Let us recall some properties of the Fu26]. This yields the existence of a continuous path in
Due to the fact that
The proof of this result is given in [26].
An important part in our considerations takes the following Neumann boundary value problem defined by
where 1], there exists a unique solution
2. Notations and Hypotheses
Now, we impose the following conditions on the nonlinearities
(H) (f1)
(f4) There exists
(H) (g1)
for all pairs
(H) Let 26] (see Figure 1).
Note that (H2)(g4) implies that the function 26] or Figure 1).
Example 2.1.
Let the functions
Then all conditions in (H1)(f1)–(f4) and (H2)(g1)–(g4) are fulfilled.
Definition 2.2.
A function
Definition 2.3.
A function
Definition 2.4.
A function
We recall that
3. Extremal Constant-Sign Solutions
For the rest of the paper we denote by
Lemma 3.1.
Let conditions (H1)-(H2) be satisfied and let
In order to satisfy Definition 2.4 for
and due to (H1)(f3), we have
Hence, we get
Because of hypothesis (H2)(g2), there exists
and thanks to condition (H2)(g3), we find a constant
Finally, we have
Using the inequality in (3.5) to the first integral in (3.2) yields
which proves its nonnegativity if
We take
This completes the proof.
The next two lemmas show that constant multipliers of
Lemma 3.2.
Assume that (H1)-(H2) are satisfied. If
The Steklov eigenvalue problem (1.4) implies for all
Definition 2.3 is satisfied for
is valid for all
In case
The following lemma on the existence of a negative supersolution can be proved in a similar way.
Lemma 3.3.
Assume that (H1)-(H2) are satisfied. If
Concerning Lemmas 3.1–3.3, we obtain a positive pair
In the next step we are going to prove the regularity of solutions of problem (1.1) belonging to the order intervals
Lemma 3.4.
Assume (H1)-(H2) and let
We just show the first case; the other case acts in the same way. Let 25, Theorem
Applying (3.17) to (1.1) provides
where 27]) which is possible because
The main result in this section about the existence of extremal constant-sign solutions is given in the following theorem.
Theorem 3.5.
Assume (H1)-(H2). For every
Let 28]) corresponding to the order interval
with some function
Claim 1.
with some positive constants
Using (3.21) and the hypotheses (H1)(f3) as well as (H2)(g3) yields
which provides, by the
The uniform boundedness of the sequence
Claim 2.
One has
In order to apply Lemma 3.4, we have to prove that
We set
It is clear that the sequence
with some function
With the aid of (3.22), we obtain for
We select
Making use of (3.17) in combination with (3.29) results in
and, respectively,
We see at once that the right-hand sides of (3.32) and (3.33) belong to
From (3.28), (3.31), and (3.34) we infer that
and the
Remark that
The equation above is the weak formulation of the Steklov eigenvalue problem in (1.4) where 22, Lemma
Claim 3.
4. Variational Characterization of Extremal Solutions
Theorem 3.5 ensures the existence of extremal positive and negative solutions of (1.1) for all
which are well defined and belong to
Lemma 4.1.
(i)A critical point
(ii)A critical point
(iii)A critical point
Subtracting (4.4) from (4.3) and setting
Based on the definition of the truncation operators, we see that the right-hand side of the equality above is equal to zero. On the other hand, the integrals on the left-hand side are strictly
positive in case
An important tool in our considerations is the relation between local 1, Proposition
Proposition 4.2.
We also refer to a recent paper (see [29]) in which the proposition above was extended to the more general case of nonsmooth functionals. With the aid of Proposition 4.2, we can formulate the next
lemma about the existence of local and global minimizers with respect to the functionals
Lemma 4.3.
From the calculations above, we see at once that
Lemma 4.4.
The functional
As we know, the functional
5. Existence of Sign-Changing Solutions
The main result in this section about the existence of a nontrivial solution of problem (1.1) reads as follows.
Theorem 5.1.
Under hypotheses (H1)–(H3), problem (1.1) has a nontrivial sign-changing solution
In view of Lemma 4.4, the existence of a global minimizer
where 30]) thanks to (5.1) along with the fact that
It is clear that (5.1) and (5.2) imply that
Because of the results of Martínez and Rossi in [26], there exists a continuous path
This implies the existence of
It is well known that
The boundedness of the set
Theorem 3.5 yields that
for all
for all
In view of (5.11) we get for all
Due to hypotheses (H1)(f1) and (H2)(g1), there exist positive constants
Applying (5.15) to (5.13) yields
We have constructed a continuous path
It holds that 31, page 366]) to
Next, we introduce the path
Similarly, the Second Deformation Lemma can be applied to the functional
In the end, we combine the curves
Sign up to receive new article alerts from Boundary Value Problems
|
{"url":"http://www.boundaryvalueproblems.com/content/2010/1/139126","timestamp":"2014-04-19T12:08:46Z","content_type":null,"content_length":"155309","record_id":"<urn:uuid:629bc8c1-0262-4cc0-b570-549c521b5b98>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finite geometric Series Problem
May 2nd 2005, 05:37 AM #1
Finite geometric Series Problem
Hello i wondering if anyone could give me a hand solving this
1. the sum of the geometric progression := S
2. the number of terms := n
3. the first term := t
Can you get an equation for the common ratio???
finite G.P S = t * ( 1 - r^n)/(1 - r)
the number i workin off are
S = 292.618
n = 15
t = 50
i dont know the sequence as i'm using this for a c++ program and numbers vary!
Hello i wondering if anyone could give me a hand solving this
1. the sum of the geometric progression := S
2. the number of terms := n
3. the first term := t
Can you get an equation for the common ratio???
finite G.P S = t * ( 1 - r^n)/(1 - r)
the number i workin off are
S = 292.618
n = 15
t = 50
i dont know the sequence as i'm using this for a c++ program and numbers vary!
In the spirit of the challenge of finding the oldest unanswered post, this question is after a bit of jiggery pokery asking for solutions of:
or rewritting this:
Can we find one or more real roots of:
in closed form when $n>4$.
(The condition on $n$ is because we have the formulas for the roots for $n=1,\ 2,\ 3,\ 4$. Though something more elegant for the last two cases would be nice).
In the spirit of the challenge of finding the oldest unanswered post, this question is after a bit of jiggery pokery asking for solutions of:
or rewritting this:
Can we find one or more real roots of:
in closed form when $n>4$.
(The condition on $n$ is because we have the formulas for the roots for $n=1,\ 2,\ 3,\ 4$. Though something more elegant for the last two cases would be nice).
I thought unanswered posts that old got deleted during the 'Great Purge' (it obviously wasn't in Urgent Help forum) ....
February 12th 2009, 02:22 AM #2
Grand Panjandrum
Nov 2005
February 12th 2009, 03:13 AM #3
February 12th 2009, 03:32 AM #4
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/calculus/168-finite-geometric-series-problem.html","timestamp":"2014-04-20T12:45:20Z","content_type":null,"content_length":"43747","record_id":"<urn:uuid:916b0fc6-435b-40e2-8019-ee1114902aa6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mirrors, magnifying glasses, and thermometers in hand, the Hawks ventured outside to conduct a very kid-friendly experiment: roasting a marshmallow using only the sunlight.
The kids played around with which surface to place the marshmallow on it make it heat. Lola suggested a small metal bucket, and Ben and Quinn thought a mirror would be best.
Aurora and Bruno experimented with magnifying glasses to see how the light expanded and condensed on the marshmallow to conduct heat.
Natasha, Lola, and Lucy wanted to double the intensity of the light coming at the marshmallow, so used a set of mirrors to direct light onto another row of mirrors to make things hotter.
Clementine’s idea was to light the pile of tinder below the marshmallow to create a fire that would do the roasting. Mackenzie reports that Ben brought out a glass of water just in case of an
uncontrolled burn.
Mackenzie says that the marshmallow never roasted, but it did reach 106 degrees Fahrenheit!
She said that the Hawks had a few ideas about their next iteration: more careful placement of mirrors so the light would hit all the same spot, more magnifying glasses to intensify light, and perhaps
a reverse disco ball that would concentrate light instead of disperse it. We’ll have to see what they come up with!
out in the field
A core principle at Brightworks is to get kids out in the world almost as much as they are in school – the world has so much to show and teach us, and we greatly benefit from being in an accessible
city with so many resources within our reach. Last week, there was a band missing every day as they found arc-related experiences all over town.
On Monday the Megaband visited the California Academy of Science to see a couple of planetarium shows about the earliest light from the creation of the universe, dark matter, and antimatter. The
students’ curiosities were sparked after listening to the third segment of RadioLab’s show on symmetry and mirrors, called “Nothing’s the Antimatter.”
On Tuesday, the Elephants ventured out to the Exploratorium to check out the mirror exhibits there.
Wednesday found the Hummingbirds in Glen Canyon Park, their usual weekly field trip to explore the nearby wild in the city.
On Thursday, the Banditos went with the Hummingbirds to the Mirror Maze at Fisherman’s Wharf.
The Banditos then met the Hawks at the Exploratorium for their own scavenger hunt around the museum.
angles of reflection
To begin their study of angles of incidence and reflection, the Hawks asked, “Does a ball bounce off of a wall the same way light bounces off of a mirror?”
They made a couple hypotheses and came up with an experiment to test them using a ball dipped in paint that would trace its path. They compared the path of a laser pointer with the orange paint ball
As Mackenzie writes, “The group started to see some patterns emerging between the path of the ball and the path of the laser beam. They also began to be able to predict where the ball would bounce
to. A new question emerged: ‘What is the relationship between the angle at which light hits the mirror and the angle at which it leaves?’ To answer this we traced the path that a laser beam travels
as it enters and leaves a mirror then measured these angles.”
After they measured several angles, the Hawks began to see that the angle of light entering a mirror is the same as the angle at which it leaves!
With this knowledge, the Hawks were given a laser game provocation where they had to orient mirrors precisely enough to hit a fixed target. They had to use what they’d learned about angles of
incidence and reflection being equal, and used a protractor to be as precise as possible.
Despite the fact that a traditional trajectory of math doesn’t introduce such skills until the seventh grade, the eight-year-old Hawks used pre-algebra skills to solve the angle challenges in the
game, since they only knew the value of one angle. They turned to angle challenges in the abstract – on paper! – and loved wrestling with these problems.
To connect these ideas to a real-world situation, the Hawks visited the Billiards Palacade. Mackenzie writes, “In small teams the Hawks solved problems involving bank shots that put their
understanding of angles of reflection to the test.” They used protractors, rulers, and ball launchers to experiment with distances and angles of reflection that would get a ball to bounce right into
a pocket
They had some help with queues from a local pool shark!
Their experiments continued with further provocations back at school. Mackenzie writes, “The Hawks were put in pairs each with a covered mirror and a designated spot. Each team had to figure out
where their partner had to stand in order to see each other in the mirror.”
They also took inspiration from the Ancient Egyptian pyramid builders who used polished metal to light the tomb walls for painting their murals. Mackenzie placed targets throughout the school and
challenged the kids to use mirrors to hit the targets with sunlight, which they traced on blueprints of the school.
The Hawks have been so impressive in their understanding of these concepts and their ability to translate what they’re learning to new situations!
mirror maze
Today, the Elephants, the Kleineband, and Velocity headed out of the building on an excursion to the Mirror Maze at the wharfs. On the way, they made stops at the Musee Mechanique to explore old
arcade games and fun-house mirrors and for some sketching of the submarine parked at the docks.
Meanwhile, back at school, the Hawks and the Hummingbirds looked at pictures of the first year of Brightworks when we realized that two bands’ worth of kids plus a handful of adults is more kids than
we had on the first day of school in the first year. How far we’ve come!
mirror blogs
The members of the Megaband have resumed their blog writing this arc. Matylda and Quinn wrote great posts about the first two weeks of the new Mirrors arc. Here are some excerpts and thoughts from
the two of them:
From Matylda:
On Tuesday my band had classes about mirrors. We were experimenting with mirror-writing.
First, our teacher showed us interesting article about Leonardo Da Vinci (http://www.inventorpat.com/leonardo.htm). Leonardo didn’t write normal. He wrote using mirrors. People couldn’t read his
notes. We don’t know why did he write like this but probably it was kind of secret code. This article shows other explanation of it. Enjoy reading
We wrote in mirror too. It was really funny and really difficult. I tried mirror drawing – it was more difficult than writing!
From Quinn:
Throughout the next week, we studied art with Phillip. He taught us about the six elements of art: 1. Lines 2. Shapes 3. Form 4. Color 5. Texture 6. Space. After he taught us about those, he taught
us about all the different kinds of symmetry. We picked three different kinds of the symmetry related categories and made collages with paper cut outs of the shapes. We also talked about how you can
relate mirrors to art.
One day with Christie, Velocity read an article all about mirrors and if they lie or tell the truth. We also wrote our own entries about whether we thought that mirrors lied or not and why.
Over the course of a couple days the band managed to listen to a full length radio lab podcast about mirrors. There were three sections and the first section talked about people mirroring other
people. The second section was about the difference between what you see yourself like in the mirror and how other people see you from their perspective. The third section was about anti-particles
that are basically mirrored normal particles. We all wrote down something that interested us that they mentioned in the podcast. We then all researched that thing and wrote a paragraph about it. I
was interested in cloud chambers which they mentioned in the third section. Here’s my entry:
I was wondering about cloud chambers and how they worked. I found out that you can use them to figure out if a room consists of filtered, dust free air or if it consists of dust. For cloud chambers
to work there needs to be dust in the room. You drop water molecules (they would be so tiny that you wouldn’t be able to see them with a naked eye) and if there isn’t dust, they would just fall to
the ground and they wouldn’t make a cloud or interact with other water droplets. But if there is dust, you would get a quite interesting result. The dust would collect the water molecules and create
bigger droplets. As the water molecules attach to the dust particles the water droplets would become visible and they would make clouds. Cloud chambers and this method are most commonly used to
detect ionizing radiation which is deadly. This radiation is made up of particles that travel with enough force and speed to launch an electron from an atom or molecule. This radiation can be
generated by nuclear reaction, very high temperature or due to acceleration of charged particles. I found this all quite intriguing.
|
{"url":"http://www.sfbrightworks.org/category/brightworks/","timestamp":"2014-04-21T13:00:17Z","content_type":null,"content_length":"50797","record_id":"<urn:uuid:87bdd3f5-3ce6-484e-8b8c-b707ea6df267>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Historical textbook collection
I’m working in the math department library today and have gotten distracted by a collection of historical math textbooks that’s just gone on the shelves.
From College Mathematics: A First Course (1940), by W. W. Elliott and E. Roy C. Miles:
The authors believe that college students who take only one year of mathematics should acquire a knowledge of the essentials of several of the traditional subjects. From teaching experience,
however, they are convinced that a better understanding is gained if these subjects are presented in the traditional order. Students who take only one year of college mathematics are usually
primarily interested in the natural sciences or in business administration.
The book covers algebra, trigonometry, Cartesian geometry, and calculus. The definition of the derivative as a limit is given, but the epsilon-delta definition of limit is not. Startling to think
that science majors came to college never having taken algebra or analytic geometry.
Further back in time we get Milne’s Progressive Arithmetic, from 1906. This copy was used by Maggie Rappel, of Reedsville, WI, and is dated January 15th, 1908. Someone — Maggie or a later owner —
wrote in the flyleaf, “Look on page 133.”
On the top of p .133 is written
Auh! Shut up your gab you big lobster, you c?
You got me, Maggie!
I can’t tell what grades this book is intended for, but certainly a wide range; it starts with addition of single digits and ends with reduction of fractions to lowest terms. What’s interesting is
that the book doesn’t really fit our stereotype that math instruction in olden times was pure drill with no attention paid to conceptual instruction and explanation. Here’s a problem from early in
the book:
How many ones are 3 ones and 4 ones? Write the sum of the ones under the ones. How many tens are 6 tens and 2 tens? Write the sum of the tens under the tens. How do you read 8 tens and 7
ones? What, then, is the sum of 24 and 63? Tell what you did to find the sum.
From the introduction:
Yet the book is not merely a book of exercises. Each new concept is carefully presented by questions designed to bring to the understanding of the pupil the ideas he should grasp, and then his
knowledge is applied. The formal statement of principles and definitions is, however, reserved for a later stage of the pupil’s progress.
Would these sentiments be so out of place in a contemporary “discovery” curriculum?
2 thoughts on “Historical textbook collection”
1. They would only be considered out of place in modern schools because they set a certain bar for students—and expect them to meet it. Not every child can succeed at every subject, but I sometimes
feel like we sell most of our kids short. Thanks for the insightful post.
2. Occasionally I have the opportunity to browse older mathematics texts, mainly at the graduate level, and I have the impression that authors in those days were far more generous in attempting to
convey intuition.
Tagged elementary school, library, textbooks
|
{"url":"http://quomodocumque.wordpress.com/2012/10/23/historical-textbook-collection/","timestamp":"2014-04-20T13:21:34Z","content_type":null,"content_length":"61870","record_id":"<urn:uuid:ddd61724-64cd-4029-a73a-c2d1a01dc0b4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recent Journal of Applied Mathematics and Mechanics Articles
Recently published articles from Journal of Applied Mathematics and Mechanics.
Complete the short form below to let your friends and colleagues know about this page.
Don't worry, the details you provide on this page will not be used to send unsolicited e-mail. Find out more about our privacy policy.
|
{"url":"http://www.journals.elsevier.com/journal-of-applied-mathematics-and-mechanics/recent-articles/","timestamp":"2014-04-18T10:43:40Z","content_type":null,"content_length":"110540","record_id":"<urn:uuid:2e6f9bed-dbb1-4815-af42-f84b811a958a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: counting the number of nonmissing values in varlist for each obs
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: counting the number of nonmissing values in varlist for each observation
From "Eric Booth" <ebooth@ppri.tamu.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: counting the number of nonmissing values in varlist for each observation
Date Mon, 28 Dec 2009 10:50:35 -0600
(apologies if this sends twice)
There are a couple of options here:
set obs 10
forv n = 1/10 {
g fammem`n' = abs(round(rnormal()*10))
label var fammem`n' "fammem`n'"
g fammem`n'_liv = abs(round(rnormal()*10))
label var fammem`n'_liv "fammem`n'_liv"
ds, not(varlabel "*_liv")
local one `r(varlist)'
di "`r(varlist)'"
//1. using ds list
egen nmbfammem4 = rownonmiss(`one')
//2. using varlabel wildcards
egen nmbfammem = rownonmiss(fammem? fammem??)
//3. just list all the vars (decreasingly useful as the # of vars increases)
egen nmbfammem2 = rownonmiss(fammem1 fammem2 fammem3 fammem4 fammem5 fammem6 fammem7 fammem8 fammem9 fammem10)
//4. rename vars temporarily
foreach v of local one {
rename `v' aa`v'
egen nbfamem3 = rownonmiss(aafammem*)
foreach v of local one {
rename aa`v' `v'
li nbfamem*
~ Eric
Eric A. Booth
Public Policy Research Institute
Texas A&M University
Office: +979.845.6754
On Dec 28, 2009, at 10:13 AM, Ekaterina Hertog wrote:
Dear all,
I need to create a variable which will contain the count the number of nonmissing values in varlist for each observation for the variables called fammem1, fammem2 etc.
I came up with the following command:
egen nmbfammem= rownonmiss (fammem*)
The problem is I have two types of variables starting with fammem:
fammem1, fammem2 etc. until fammem10 (which note one's family members)
fammem1_liv, fammem2_liv etc. (which notes whether one lives with the given family member in the same household or not)
and I only want to create the count of one's family members.
Is it possible in Stata 11 to specify varlist as all variables the name of which start with fammem followed by a number between 1 and 10?
I will be very grateful for advice,
Sincerely yours,
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-12/msg00934.html","timestamp":"2014-04-17T07:07:41Z","content_type":null,"content_length":"10278","record_id":"<urn:uuid:e9563ba4-3c42-4e11-9fa5-23adda20925c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Closed form solution to an iterative equation.
up vote 1 down vote favorite
$f(n+1) = f(n) + f(n)^{a}$ where $a \in (0,1)$ and $n \ge 1$ with $f(1) = m$.
If $a=0$, we see $f(n) = m + n - 1$ and if $a=1$, we see $f(n) = 2^{n-1}m$. So the recursion seems to interpolate between linear and exponential forms.
Is there a closed form for $f(n)$ in terms of $n$, $a$ and $m$?
fa.functional-analysis recurrences
Standard comparison to the associated differential equation yields $y(n)=\Theta(n^b)$ with $b=1/(1-a)$ (and, with some more care, much more precise estimates) but this is not a research question.
You might want to try math.stackexchange.com instead. – Did May 6 '13 at 11:42
could you provide your derivation? – J.A May 23 '13 at 10:32
add comment
1 Answer
active oldest votes
There is no closed form except for the cases $a=0,1$. But you can find the asymptotic behavior. See, for example Fatou, Sur les equations fonctionnelles, Bull Soc. Math.
France, 47 (1919), section 8 and further. Available here:
up vote 2 down vote
accepted http://archive.numdam.org/ARCHIVE/BSMF/ BSMF_1919_47/BSMF_1919_47_161_0/BSMF_1919_47_161_0.pdf
I believe this is a good answer. However it is all in French. – J.A May 6 '13 at 15:24
add comment
Not the answer you're looking for? Browse other questions tagged fa.functional-analysis recurrences or ask your own question.
|
{"url":"http://mathoverflow.net/questions/129820/closed-form-solution-to-an-iterative-equation","timestamp":"2014-04-18T23:22:46Z","content_type":null,"content_length":"53557","record_id":"<urn:uuid:3deb5e11-42d5-4407-82f7-c973c9d898e2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: DiD estimation versus fixed effects when T>2
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: DiD estimation versus fixed effects when T>2
From Nils Braakmann <nilsbraakmann@googlemail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: DiD estimation versus fixed effects when T>2
Date Thu, 5 Mar 2009 09:57:50 +0100
Dear Alison,
in your scenario you have several possibilities. Some remarks:
(1) Your treatment group dummy is (typically) fixed over time which
implies that it is indistinguishable from the unit fixed effect. If
your treatment group dummy varies within units (individuals/countries
etc.) over time, you might be in trouble as units may then select into
or out of treatment.
(2) You need interactions between the treatment group dummy and the
period dummy/dummies (see point 3). The coefficient gives the
divergence in trends in treatment and control group in the post
treatment period. Given that the assumption holds that both groups
would have experienced the same trend in the counterfactual situation
without treatment this coefficient is the treatment effect.
(3) There are two possibilities to set up the pre-/post-Treatment dummies:
(3.a) Define a single dummy that is "1" in all periods after treatment
and "0" before. You may also add additional time controls. The
treatment effect is then a time-weighted average of the effects in the
post-treatment years.
(3.b) Use year dummies and interactions between the treatment group
and each year. Say 2003 is the first year of treatment. The
treatment-group-year interactions for 2003, 2004... tell you how the
treatment effect evolves over time. Note, however, that the longer
time series make it somewhat more unlikely that the common-trend
assumption holds.
(3.c) If the treatment occurs in different years for different units,
you need to modify (3.b).
I am not exactly sure what you mean by "four different treatment
variables". I suspect you could use these to construct higher order
DiD-estimations (triple differences etc.) , but this is hard to tell
without knowing your exact situation. I am also not exactly sure what
your variable "treatment" is (in point (1) I suspected that this is
the treatment group dummy, but I'm not so sure now). Is this a
treatment group dummy or some sort of implicitly defined interaction
term (e..g. is this variable "1" only after a treatment occurred or is
it fixed over time)?
Hope this helps,
On Wed, Mar 4, 2009 at 4:00 PM, Riggieri, Alison C <ariggieri@gatech.edu> wrote:
> I am attempting to take panel data (5 years, for all 50 states +DC) and determine a DiD estimation. I have four different treatment variables, all dummy variables.
> My question is this: my adviser told me to to run "xtreg y x treatment 2002 2003 2004 2005, fe" to get a DiD estimation, but I'm not sure that is correct. He told me to just add dummy variables to the fixed effects regression.
> Does that compute the DiD estimation or do I have to interact the treatment and the year?
> thanks in advance,
> Alison
> --
> PhD Student
> School of Public Policy
> Georgia Institute of Technology
> Atlanta, GA
> 508-410-1931
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-03/msg00189.html","timestamp":"2014-04-17T10:07:27Z","content_type":null,"content_length":"8713","record_id":"<urn:uuid:98f78903-1ba3-4e02-86cd-db4903e518e2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Computing minimum driving distance to an area (rather than a spe
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Computing minimum driving distance to an area (rather than a specific point)
From "Dimitriy V. Masterov" <dvmaster@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Computing minimum driving distance to an area (rather than a specific point)
Date Mon, 16 Apr 2012 10:30:54 -0400
I just realized that I left out the part about how you would read in
the shapefiles into Stata. You need a command from ssc called shp2dta
(or mif2dta if you have MapInfo format boundaries).
On Mon, Apr 16, 2012 at 9:46 AM, Dimitriy V. Masterov
<dvmaster@gmail.com> wrote:
> Jen,
> In a geometric sense, you can think of your municipalities as
> polygons. Every polygon has at least 4 distinct barycenters (i.e.,
> centers of mass), so there's no straightforward answer to your
> question.
> 1) The barycenter of its vertices.
> 2) The barycenter of its edges.
> 3) Its barycenter as a polygon, which can be obtained decomposing it
> into triangles. The area-weighted average of these barycenters is the
> polygon's barycenter.
> 4) X-weighted centroid, where X might be a people or blocks or block groups.
> These may coincide in special cases, but are generally distinct
> points. It may also happen that many of these centers are not
> necessarily located within the interior of a polygon. Hopefully your
> municipalities will be mostly convex, so this should be less of a
> problem. You do have to worry that your barrycenter is in the middle
> of lake, for example.
> The three types differ on where the mass is presumed located: it
> either is entirely on the vertices, spread uniformly on the edges, or
> spread throughout the polygon itself, either uniformly or not.
> You might be able to hack such calculations in Stata using the
> coordinates file that you create when you convert the shapefile for
> the municipal boundaries, but I think there's an easier way. I would
> get the shapefile for the municipalities. Such files will usually have
> columns for the lat and lon of the centroid. It's what ArcGIS uses
> when you choose to label an area. Use that as your center.
> Alternatively, you might want to see if you can track down a
> population-weighted centroid as that seems relevant to your problem.
> From then, it will just be a simple merge.
> HTH,
> DVM
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-04/msg00663.html","timestamp":"2014-04-21T12:24:03Z","content_type":null,"content_length":"9997","record_id":"<urn:uuid:d3eeecc9-1880-4000-a8fc-d4bef295cdac>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: RE: Return r(111) this time
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: RE: Return r(111) this time
From jjc.li@utoronto.ca
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Return r(111) this time
Date Sun, 15 Mar 2009 18:53:58 -0400
Thank you Eva for not giving me up. I typed
matrix a = J(1,62,0)
gen copy_lnc = lnc
gen copy_sl = sl
gen copy_sm = sm
gen copy_se = se
set trace on
nlsur wellbehav "copy_lnc copy_sl copy_sk copy_sm' "lnpl lnpk lnpm lnpe lnq t d1 d2 d3 d4 d5 d6", at (a)
Is it right? Here's the return:
------------------------------------------------------------- end _on_colon_parse ---
- local ZERO `"`s(before)'"'
= local ZERO `"nlsur, jkopts(eclass) noeqlist"'
- local 0 `"`s(after)'"'
= local 0 `" wellbehav "copy_lnc copy_sl copy_sk copy_sm' "lnpl lnpk lnpm lnpe lnq t d1
d2 d3 d4 d5 d6", at (a)"'
- quietly syntax [anything(equalok)] [if] [in] [fw aw pw iw] [, VCE(string asis) VCE1(s
tring asis) * ]
invalid something: quotes do not match
----------------------------------------------------------------- end _vce_parserun --- --------------------------------------------------------------------------- end nlsur ---
Quoting Eva Poen <eva.poen@gmail.com>:
As Nick pointed out, you know the names of your own variables. We
don't. So, when I said "right hand side variables" I meant the list of
right hand side variables in your equations, in the order that your
program needs them. Your program reads something like
args lnc sl sk sm lnpl lnpk lnpm lnpe lnq t d1 d2 d3 d4 d5 d6
, and only you know what these names stand for. All we know is that
the first four are left hand side variables, and the rest are right
hand side variables in your equations.
So, besides
matrix a = J(1,62,0)
you need to create copies of your left hand side variables, which you can do by
gen copy_var1 = var1
assuming that var1 is the name of your variable. Do this for all your
left hand side variables, and then call your program with the list of
all 16 variables that you need for your equations, in the order
lnc sl sk sm lnpl lnpk lnpm lnpe lnq t d1 d2 d3 d4 d5 d6
because this is what your program asks for. Just make sure that,
instead of providing the names of your left hand side variables, you
provide the names of their _copies_. -set trace on- and see what
As a general remark, you might find it useful to attend a NetCourse by
Statacorp, especially NC151.
2009/3/15 <jjc.li@utoronto.ca>:
Sorry but could you take a example?
Quoting Nick Cox <n.j.cox@durham.ac.uk>:
Eva's text was not meant to be taken literally!
Your own syntax commits you to supplying 16 variable names and the name
of a matrix in an option.
Here's the return:
. matrix a = J(1,62,0)
. set trace on
. nlsurwellbehav "copies of dep. variables" "right hand side
variables" , at(a)
---------------------------------------------------------------- begin
nlsurwellbehav ---
- version 10.1
- syntax varlist(min=16 max=16) [if], at(name)
time-series operators not allowed
------------------------------------------------------------------ end
nlsurwellbehav ---
Quoting Eva Poen <eva.poen@gmail.com>:
I'm not sure I understand you here. Are you referring to the line
replace `lnc' = 5+`aq'*`lnq' ...
where there is a hard coded 5 (not a starting value! This is a set
value.), and did you replace this value of 5 by another parameter,
e.g. `a0'? In terms of the program, that is not a problem as long as
you adjust your code. It would be easiest if you put this parameter
last, since this saves you the pain of changing all your `at'[1,x]
statements to `at'[1,x+1]. Therefore, my suggestion would be to code
scalar `dmm'=`at'[1,61]
tempname a0
scalar `a0' =`at'[1,62]
quietly {
replace `lnc' = `a0'+ ....
Now, for the debugging, just follow my suggestions earlier, and invoke
your program directly. You need to create a matrix of initial values,
e.g. zeros for all coefficients. If you have 62 parameters, you do
matrix a = J(1,62,0)
which gives you a row vector of 62 zeros. Next create copies of all
your dependent variables, and invoke your program:
set trace on
nlsurwellbehav "copies of dep. variables" "right hand side variables" ,
and see where the problem lies.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-03/msg00745.html","timestamp":"2014-04-20T01:11:33Z","content_type":null,"content_length":"13966","record_id":"<urn:uuid:b3287987-e02f-4311-a56b-2851e97694c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Riverdale Pk, MD Prealgebra Tutor
Find a Riverdale Pk, MD Prealgebra Tutor
...This is evidenced by my scores of 4 on the AP English Literature Exam, 780 on the Critical Reading section of the SAT, and 800 on the Writing Section. I have taken numerous courses in Writing
and Literature (Honors, AP, and college-level), as well as four years of Latin to assist with understand...
16 Subjects: including prealgebra, reading, algebra 1, French
...I have hundreds of hours of experience and am well-versed in explaining complicated concepts in a way that beginners can easily understand. I specialize in tutoring math (from pre-algebra to
differential equations!) and statistics. I completed a B.S. degree in Applied Mathematics from GWU, grad...
16 Subjects: including prealgebra, calculus, geometry, statistics
...I do not have any professional tutoring experience, but I have had good experiences tutoring my friends and family. I am an extremely patient person, and I am usually able to explain math
problems in several different ways until they are understood. I also have scored very well on standardized ...
32 Subjects: including prealgebra, reading, algebra 2, calculus
...My students always learn, because of my patient and thorough approach, always connecting to their prior knowledge as our base. A specialty of mine is working with students with ADHD or other
learning difficulties. I am certified as a tutor/coach in SAT Math prep.
10 Subjects: including prealgebra, algebra 1, SAT math, computer science
...I have primarily worked with students in 5th-8th grades, but have experience working with younger students. Let me know what your child needs to work on and together we can set up some goals.
Let’s meet the child where he or she is academically at and let’s move forward one step at a time with success all along the way.I have taught fifth grade math, pre-algebra, algebra, and 8th
grade math.
7 Subjects: including prealgebra, reading, writing, algebra 1
Related Riverdale Pk, MD Tutors
Riverdale Pk, MD Accounting Tutors
Riverdale Pk, MD ACT Tutors
Riverdale Pk, MD Algebra Tutors
Riverdale Pk, MD Algebra 2 Tutors
Riverdale Pk, MD Calculus Tutors
Riverdale Pk, MD Geometry Tutors
Riverdale Pk, MD Math Tutors
Riverdale Pk, MD Prealgebra Tutors
Riverdale Pk, MD Precalculus Tutors
Riverdale Pk, MD SAT Tutors
Riverdale Pk, MD SAT Math Tutors
Riverdale Pk, MD Science Tutors
Riverdale Pk, MD Statistics Tutors
Riverdale Pk, MD Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Bladensburg, MD prealgebra Tutors
Brentwood, MD prealgebra Tutors
Cheverly, MD prealgebra Tutors
College Park prealgebra Tutors
Edmonston, MD prealgebra Tutors
Greenbelt prealgebra Tutors
Hyattsville prealgebra Tutors
Landover Hills, MD prealgebra Tutors
Lanham Seabrook, MD prealgebra Tutors
Mount Rainier prealgebra Tutors
New Carrollton, MD prealgebra Tutors
North Brentwood, MD prealgebra Tutors
Riverdale Park, MD prealgebra Tutors
Riverdale, MD prealgebra Tutors
University Park, MD prealgebra Tutors
|
{"url":"http://www.purplemath.com/Riverdale_Pk_MD_prealgebra_tutors.php","timestamp":"2014-04-17T13:45:45Z","content_type":null,"content_length":"24660","record_id":"<urn:uuid:74322915-67a5-430a-bdf9-88383fc92a4e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Can't fathom this straight line question
Replies: 0
Topics: [ Previous | Next ]
Jimbo Can't fathom this straight line question
Posted: Jun 9, 2013 1:03 PM
Posts: 1
Registered: 6 Hi,
I am having trouble understanding the following. In the image below there are 8 marked angles and the question is to find the sum of those. I understand that they are all supplementary
angles to the interior angles of an irregular polygon but still stumped...
I can only come to the conclusion that the sum is greater the sum of a polygons exterior angle (360 degrees). However how to take the 1440 degrees of all the straight line segments and
work out the supplementary parts is beyond me.
Thanks in advance,
|
{"url":"http://mathforum.org/kb/thread.jspa?messageID=9132254&tstart=0","timestamp":"2014-04-16T16:17:54Z","content_type":null,"content_length":"14573","record_id":"<urn:uuid:80702817-b372-49b3-aadb-5c9a17bb8125>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Performance and Implementation Evaluation of TR PAPR Reduction Methods for DVB-T2
International Journal of Digital Multimedia Broadcasting
Volume 2010 (2010), Article ID 797393, 10 pages
Research Article
Performance and Implementation Evaluation of TR PAPR Reduction Methods for DVB-T2
^1SUPELEC-IETR, avenue de la Boulaie CS 47601, 35576 Cesson Sévigné Cedex, France
^2ENENSYS Technologies, 80 avenue des Buttes de Coesmes, 35700 Rennes, France
Received 15 April 2010; Accepted 26 August 2010
Academic Editor: Jaime Lloret
Copyright © 2010 Mohamad Mroué et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
High Peak to Average Power Ratio (PAPR) is a critical issue in multicarrier communication systems using Orthogonal Frequency Division Multiplexing (OFDM), as in the Second Generation Terrestrial
Digital Video Broadcasting (DVB-T2) system. This problem can result in large performance degradation due to the nonlinearity of the High Power Amplifier (HPA) or in its low power efficiency. In this
paper, we evaluate the performance of different Tone Reservation-based techniques for PAPR reduction in DVB-T2 context. Also, we propose an iterative TR-based technique called “One Kernel One Peak”
(OKOP). Simulation results and performance comparison of these techniques in terms of gain in PAPR reduction, mean power variation, and complexity will be given. Finally, we describe the
implementation of a PAPR reduction algorithm in the DVB-T2 modulator.
1. Introduction
The performance of high data rate systems is significantly limited by the multipath interference that occurs in the radio channel environment. As an attractive technique in mitigating the multipath
interference, Orthogonal Frequency Division Multiplexing (OFDM) has been widely applied in various broadcasting systems such as, the Digital Video Broadcasting (DVB) systems. Despite its competitive
attributes, OFDM signals are characterized by very high Peak-to-Average Power Ratio (PAPR) levels. This characteristic leads the OFDM signals to be very sensitive to nonlinearities of analogue
components of the transceiver, in particular those of the High Power Amplifier (HPA) at the emission.
An HPA is conceived to operate in its saturation zone which corresponds to its high efficiency region. However, in this zone, the HPA has a severe nonlinear behaviour. These nonlinearities are
sources of In-Band (IB) distortions which can both degrade the link performance in term of Bit Error Rate (BER) and also cause significant Out-Of-Band (OOB) interference products that make it harder
for the operator to comply with stringent spectral masks. The simplest solution to this problem is to operate the HPA in the linear region by allowing a large enough amplifier back-off. However, this
approach degrades the power efficiency of the system and often leads to unacceptable cost-efficiency conditions in the overall system. For all these reasons, reducing the PAPR of OFDM signals is
increasingly being considered to be very important in maintaining the cost-effectiveness advantages of OFDM in practical systems, especially as new systems, such as DVB-T2, are being specified with
large number of subcarriers (up to 32768 subcarriers and 256-QAM modulation for DVB-T2 system [1]).
Many methods have been proposed to mitigate the OFDM PAPR by acting on the signal itself [2, 3]. The simplest ones use clipping and filtering techniques [4, 5]. However, these methods may lead to BER
increase of the system since clipping is a nonlinear process [6]. Alternative methods are based on coding [7, 8] and others on Multiple Signal Representation (MSR) techniques: Partial Transmit
Sequence (PTS) [9], Selective Mapping (SLM) [10], and Interleaving [11]. The main drawback of these methods is that a Side Information (SI) has to be transmitted from the transmitter to the receiver
to recover the original data, which results in some loss of throughput efficiency. Some recent efficient methods do not need any SI transmission [12]. The Active Constellation Extension (ACE) method
proposed in [13] involves reducing PAPR by changing the constellation of the signal without changing the minimum distance. However, the performance of this method depends on the mapping level. Thus,
it is not relevant for DVB-T2 system with QAM modulation up to 256 states and in the case of rotated constellation. The Tone Reservation (TR) method uses allocated subcarriers to generate additional
information that minimizes the PAPR. An original classification representation for PAPR reduction techniques was studied and proposed in [3]. The TR method which is a sub-class of the adding signal
technique will be our main concern. Thus, proposals for PAPR reduction in the case of DVB-T2 system will be restrained to methods issued from the TR concept.
This work was performed within the framework of the French regional Project DTTv2, which aimed at working on the improvements of DVB consortium standard: DVB-T2 as well as the future mobile standard
NGH (New Generation Handheld). This work includes the conception of an implementation and experimentation platform, allowing to study PAPR reduction in DVB-T2 context in real-time and conforming with
industrial constraints.
This paper is organized as follows. Section 2 gives a brief description of the DVB-T2 system model and PAPR definition. The TR-based PAPR reduction techniques for DVB-T2 will be studied in Section 3.
Also, we propose an iterative technique called “One Kernel One Peak" (OKOP) which is issued from the TR-gradient-based method. In Section 4, simulation results and comparison between the studied
techniques will be presented. Section 5 describes the PAPR reduction block in the DTTv2 Experimentation Platform.
2. DVB-T2 System Model and PAPR
Some basic terms and the system model which include OFDM, DVB-T2 physical layer and PAPR, shall be presented first. Let us define the notations used throughout the paper. Time and frequency domain
matrices are denoted by small and capital bold case letters, respectively. Scalars and vectors variables for the optimization equations are denoted by small and capital normal letters, respectively.
2.1. OFDM-Based DVB-T2 System
The OFDM signal is the sum of many orthogonally overlapped subchannels of equal bandwidth. In order to realize the spectrally overlapping sub-channels, the Inverse Fast Fourier Transform (IFFT) is
employed at the OFDM transmitter. The base-band samples for OFDM symbol, with subcarriers, at the IFFT output are given by: where is the original complex symbol duration. In practice, we assume that
only equidistant samples of are considered, where represents the oversampling factor. The DVB-T2 system employs optionally of tones for PAPR reduction in TR context. The possible FFT sizes of a
symbol in a T2-Frame are , 2048, 4096, 8192, 16384, and 32768 [1]. The associated possible modulation modes are QPSK, 16-QAM, 64-QAM, and 256-QAM.
2.2. PAPR Definition
Due to the statistical independence of carriers, the central-limit theorem holds and the complex time-domain samples of OFDM signals are approximately Gaussian distributed. This means that there
could be some very high peaks present in the signal. Peak to Average Power Ratio (PAPR) is the most common term used in the literature to describe these temporal fluctuations of the signal. The PAPR
defines the ratio of the signal's maximum instantaneous power to its mean power. The oversampled discrete-time OFDM symbol sample of can be given by [14]: where is the oversampling factor. This
factor must be large enough () to process all the continuous-time peaks and thus to better approximate the analog PAPR of the OFDM signal. Thus, the PAPR can be expressed as [14]: where , is the
zero-padded vector of by factor , denotes the expectation operation, and is the inverse discrete Fourier transform matrix of size scaled by . The PAPR reduction performance is evaluated using the
Complementary Cumulative Distribution Function (CCDF). It is defined by the probability that the PAPR of the OFDM signal exceeds a given threshold [15]:
3. TR-Based PAPR Reduction Techniques
3.1. Tone Reservation Methodology
In TR concept, the basic idea is to reserve some OFDM subcarriers called Peak Reduction Tones (PRT) for PAPR reduction. These reserved subcarriers do not carry any data information, and they are only
used for reducing PAPR. This method restricts the data vector, and the peak reduction vector to lie in disjoint frequency subspaces. This formulation is distortionless and leads to very simple coding
of the data subsymbols that are extracted from the received sequence by choosing the set of values at the receiver FFT output. Therefore, as natively included in the standard, this concept does not
degrade the BER performance of the system, and thus can be categorized in downward compatible method [3]. The problem of computing the values for these reserved tones that minimize the PAPR can be
formulated as a convex problem and can be solved exactly. The Second-Order Cone Program (SOCP) applied on unused subcarriers is described in [16]. This method has a high computational complexity. As
consequence, suboptimal techniques which are able to converge faster than the optimal solution are the subject of this section.
3.2. Implementation Schemes for TR
In this paper, different implementation schemes for TR methods shall be discussed and compared. The idea is to reduce the PAPR of the signal such that , where represents the added peak reducing
signal, as shown in Figure 1. Ideally the objective of reducing the peak of the combined signal should be attained while keeping the mean power constant or nearly unchanged. Mathematically, it can be
expressed by: However, adding signal results in a mean power increase. The relative increase in the mean power is defined as [12] The aim should be to keep this as small as possible to meet the high
power amplifier constraints. Increased mean power might drive the power amplifier into the saturation zone which results in nonlinearity and system performance degradation. We note that the
phenomenon of decreased minimum distances in constellation due to increased mean power in the peak power control context is discussed in [17]. The must be upper bounded ensuring that individual
component magnitude value cannot exceed a given value as indicated in [18], which follows, where is a constant related to power amplifier characteristics.
3.2.1. TR-Clipping-Based Technique
This technique consists in applying a hard clipping to the input OFDM signal (see Figure 2) [19]. Then, the clipped signal is subtracted from the input signal to obtain the correction signal. After
that, the correction signal is passed to an FFT/IFFT filter in order to comply with the TR concept. The clipped signal can be expressed as follows: where is the input signal, is the clipped signal,
and is the clipping magnitude level. The correction signal is obtained from the differences between the samples of the useful multi-carrier signal and its clipped version . Figure 3 shows the
peak-reducing signal generator block in the case of OFDM envelope clipping. To conform to the TR concept, only the values of the reserved tones at the PRT positions are kept; the others are reset to
zero, thus: At each iteration, the algorithm updates the vector by adding to it the vector . where is the step of the gradient method. Figure 4 shows the principle of adding signal technique for PAPR
reduction with gradient-based method in frequency-domain issued from a classical clipping. The IB filtering block guarantees the downward compatibility by considering only frequency components of the
correction signal at the PRT positions. Since this update rule is performed in the frequency domain, this algorithm can simply incorporate the necessary spectral constraint, by simply limiting the
power of the reserved tones.
3.2.2. TR-Gradient-Based Technique
The time-domain gradient-based method for PAPR reduction is proposed in the DVB-T2 norm. This method associated with Tone Reservation concept was studied and proposed by Tellado-Mourelo in [12] and
later defended by SAMSUNG for PAPR reduction scheme suitable for IEEE 802.16e. The principle of the gradient-based method is to iteratively cancel out the signal peaks by a set of impulse-like
kernels. Reserved carriers are allocated according to predetermined carrier locations which are reserved carrier indices. After the IFFT, peak cancellation is operated to reduce PAPR by using a
predetermined signal. The predetermined signal, or kernel, is generated by the reserved carriers.
The gradient algorithm is one of the good solutions to compute with low complexity. The basic idea of the gradient algorithm is come from clipping. Clipping the peak tone to the target clipping level
can be interpreted as subtracting impulse function from the peak tone in time domain. The conventional clipping technique can be formulated as an adding signal technique where its peak reducing
signal is generated directly in time domain [20]. The principle of the TR gradient-based technique is presented in Figure 5. Despite of their low computational complexity, the gradient-based methods
have the drawback of increasing the signal average power. In addition, this increase in the average power is dependent on the PAPR reduction gain.
(a) Impulse-Like Kernel Generation
During the first step, the kernel vector is computed from the PRT and stored in memory during the initialization phase. For optimal performance, the generated kernel should be designed to be as close
as possible to a discrete-time impulse. This way, every time the algorithm cancels a peak of , no secondary peaks are generated at other locations. However, as in DVB-T2 the PRT are specified in
advance, it is not possible to perfectly match with a discrete-time impulse. An optimum solution to generate the peak reduction kernel was studied in [12]; thus the kernel signal is defined as where
and indicate the size and the number of PRT, respectively. The () vector has elements of ones at the positions corresponding to the reserved-carrier indices and has () elements of zeros at the
(b) Peak Reduction Algorithm
The IFFT output is fed into the peak-cancellation block, and the peak position and value of are detected. Thus, for , where represents the th element of the vector ; and represent the maximum
magnitude and the index of the detected peak in the th iteration, respectively. Then, in the second step of the algorithm, the reference kernel, generated by the reserved carriers corresponding to
the current OFDM symbol, is circularly shifted to the peak position, scaled so that the power of the peak tone should be reduced to the desired target clipping level and phase rotated. The resulting
kernel is subtracted from and the new PAPR is calculated. As the impulse-like function is designed with only the value in the reserved tone locations, adding the peak reducing signal to the data
signal does not affect the value of OFDM symbol in frequency domain. where where denotes the kernel vector circularly shifted by and is the clipping magnitude level. In the third step, the PAPR of
the resulting signal (after adding the peak reduction kernel to the useful data signal) is calculated. If the PAPR of the resulting signal satisfies the target PAPR level, this signal is transmitted.
If not, the cancellation operation is repeated iteratively, until the number of iterations reaches the predetermined maximum iteration number. The peak-cancellation method detects and removes only
the maximum remaining peak in the time-domain per iteration. This method is simple and efficient in terms of peak regrowth control for the following iterations, at the expense of requiring a
relatively large number of iterations. Alternatively, multiple peaks can be removed in a single iteration because the kernels can be linearly combined. However, this will increase the number of
computations per iteration. The transmitted signal after the th iteration of the simple method is given as
3.2.3. Proposed Method (TR-OKOP)
The same energy is added to each reserved subcarrier at each iteration of the TR algorithm. The difficulty resides in how we can predict the evolution of the vectorial sum on each subcarrier.
Controlling the power of a reserved subcarrier implies passing to frequency domain or maintaining in memory the information on the amplitude and phase of each subcarrier at each iteration of the
algorithm. The DVB-T2 system is defined with a large number of subcarriers (up to 32768 subcarriers). The number of reserved subcarriers for the 16K and the 32K mode are 144 and 288, respectively.
Thus, the method that we propose, called “One Kernel One Peak" (OKOP), consists in distributing the reserved subcarriers into groups. Then one impulse-like kernel signal is generated from each group
of the reserved subcarriers (see Figure 6). The original idea here consists on using one kernel to reduce one peak. A simple modification on the TR-Gradient-based algorithm permits the implementation
of this technique. The modification concerns the impulse-like kernel generation part of the algorithm presented in the previous section. It offers the capability to control independently the power
associated to each group of PRT. This means that instead of using the same reference signal at each iteration, a unique correction signal (generated from a specific group of subcarriers) is added to
the useful signal. Thus, there is as much iteration as correction signals during one pass. Also, at each pass, the PRT are used only one time.
4. Simulation Results and Comparison
The simulation model is designed to match with the DVB-T2 standard. The number of PRT is , 18, 36, 72, 144, or 288, while the FFT size is, respectively, , 2048, 4096, 8192, 16384, or 32768, with the
number of subcarriers in use, , 1705, 3409, 6913, 13921, or 27841, respectively. It should be noted that the power of the correction carriers should not exceed the power spectrum mask specified for
DVB-T2 by more than dB. The performance of the TR-based methods is compared in terms of PAPR reduction capability, computational complexity and system interference (BER). Also, the power spectral
density (PSD) presentations are provided to evaluate the impact of applying the TR methods on the power spectrum mask.
4.1. PAPR Reduction Performance
Simulation results using Matlab (see Figures 7 and 8) show that both algorithms, TR-Clipping and TR-Gradient, have equivalent performance in term of PAPR reduction gain. However, the TR-Gradient
method is less complex (in term of number of operations) than the TR-Clipping because all the treatments are provided in the time domain. It does not include an IB and OOB filter since the correction
signal is generated directly from the reserved tones. The advantage of the TR-Clipping technique is that the update rule is performed in the frequency domain. Therefore, this algorithm can simply
incorporate the necessary spectral constraint, by simply limiting the power of the reserved tones. The performance of the proposed TR-OKOP technique is compared to the TR-Clipping and TR-Gradient in
Figure 9. With the TR-OKOP, the PRTs are divided into groups. Thus, a correction signal (kernel) is generated using 8 subcarriers. The term “pass” in Figure 9 refers to the use of all the reserved
subcarriers for PAPR reduction (288 PRTs in the 32K mode) or all the generated correction signals (36 kernels) only once. Thus, at each pass, peaks are reduced using correction signals. The proposed
algorithm has the same performance in term of PAPR reduction compared to the other algorithms. Its advantage lies in its capability to control independently the power associated to each group of
4.2. Complexity Analysis
In this section, we evaluate the complexity performance of the different implementation schemes for TR methods described in Section 3. Only the runtime complexity in term of the number of operations
is considered and the complexity of the initialization stage is omitted since it occurs only once.
4.2.1. TR-Clipping-Based Technique
Let us start by evaluating the complexity of the algorithm in the loop. As discussed in a previous section, this algorithm evaluates the correction signal . Then, the correction signal is passed to a
filter based on FFT/IFFT pair in order to comply with the TR concept. The complexity of calculating is very low compared to the complexity of calculating the filtered correction signal and can be
omitted. Therefore, the complexity of the algorithm is approximated as .
4.2.2. TR-Gradient-Based Technique
This technique operates in time domain. The correction signal (reference kernel) is computed from the PRT and stored in memory during the initialization phase. The other steps consist in circularly
shifting the reference kernel to the peak position, scaled and phase rotated. The complexity of calculating the time domain samples of the peak-canceling signal from the reference kernel is .
4.2.3. Proposed TR-OKOP Method
The proposed method computes a reference kernel from a group of PRTs at each iteration. This means that an IFFT operation is applied at each iteration. As for the TR-Gradient-based method, the other
steps consist on circularly shifting the reference kernel to the peak position, scaled and phase rotated. Therefore, the complexity of calculating the time domain samples of the peak-canceling signal
from the reference kernel is .
The Gradient-based technique has the advantage in term of complexity over the TR-Clipping one. The complexity of the proposed TR-OKOP technique is higher than that of the Gradient-based one and lower
than that of the TR-Clipping one. The advantage of the proposed technique is that the PRT are used only once during one algorithm pass. This allows an easier control of the power variation on each
reserved subcarrier.
4.3. IB and OOB Interference Analysis
As explained in a previous section, all the TR-based PAPR reduction methods do not affect the BER performance. The TR-Gradient and the TR-OKOP techniques create the correction signal from reserved
carriers. Thus, the data carriers are not affected. For the TR-clipping technique, the generated correction signal passes through an FFT/IFFT filter in order to respect the TR concept. Therefore, it
is evident in Figure 10 that the three methods match the conventional BER performance curve thus proving the hypothesis that out of useful band tones do not create IB interference and thus no BER
degradation takes place. It should be noted that BER calculations are performed for useful carriers only.
The OOB distortions are nullified thanks to the FFT/IFFT filter for the TR-Clipping technique. Figure 11 shows the PSD of an OFDM signal before and after applying the TR-Gradient PAPR reduction
technique. We observe that the power level of the PRT can exceed that of the useful signal by more than dB. In Figure 12, the proposed algorithm TR-OKOP was applied. In this case, the power spectrum
specifications are respected. Also, Figure 9 shows that both algorithms achieve the same PAPR reduction gain for different values of power variation. Also, the mean power variation in the case of
TR-OKOP is lower than that of the TR-Gradient.
Table 1 summarizes the performance comparison between the three TR-based PAPR reduction methods in terms of PAPR reduction gain, mean power variation, complexity, and spectrum control capability. In
Table 1, the sign “++" in the complexity line signifies that the method has a low complexity.
5. PAPR Reduction Algorithm Implementation
5.1. DTTv2 Platform
DTTv2 platform is an industrial implementation of the DVB-T2 standard. It processes input stream (which can be for instance an encoded video stream) and generates a compliant DVB-T2 RF signal. Most
of the computation is done using a Field Programmable Gate Array (FPGA), with the help of software when no real-time processing is required. After channel coding (which includes Forward Error
Correction, interleaving and mapping on constellation), OFDM symbols are assembled by adding pilot carriers, including PRT when PAPR reduction using TR is enabled (see Figure 15). PAPR reduction
block implements the TR-Gradient algorithm, as defined in DVB-T2 standard [18]. Thus, it operates in time domain after IFFT. A CCDF estimator is placed after up-sampling filters, to monitor the
performance. Finally the signal is converted to analog IF and then up-converted to RF, in the UHF-VHF bands.
5.2. PAPR Reduction in the DTTv2 Platform
This section describes the TR-Gradient-based PAPR reduction block as implemented in the DTTv2 platform (see Figure 13). The design choice was to insert the algorithm within the existing modulation
processing blocks, allowing to share and optimize hardware resources usage.
5.2.1. Main Blocks Description
The first block Cell Mapper aggregates QAM-mapped data and OFDM pilot carriers (including PRT) to form a frequency domain symbol that is then processed by an IFFT to obtain a time domain symbol.
Three memory-caches are used: “Kernel cache" that is used to save the current kernel, “Symbol cache" used to store initial and iterations results, and “Output cache" that is used to store the symbol
after the final iteration that has been completed. The Peak-detector unit is in charge of detecting and storing peaks locations, that will be then used by the Shift-Scale unit to compute the
appropriate peak-canceling signal.
5.2.2. Processing Description
For each processed symbol, several separate steps can be distinguished.
(i) Loading Step
A kernel is computed for each symbol to save memory; during symbols generation by Cell Mapper, PRT locations are saved and are later used to compute the kernel. At the end of this step, symbol cache
and kernel cache are filled with corresponding symbol and kernel. While the symbol cache is written, the symbol is also processed for peak detection. This specific data flow is identified with red
dotted path on Figure 13.
(ii) Processing Step
During each iteration, the Shift-Scale unit computes a peak-canceling symbol that is added to the symbol. The symbol is then written to its cache memory while remaining peaks are being localized at
the same time to prepare the next iteration.
(iii) Output Step
When end-criteria are matched (the maximum number of iteration has been reached, PAPR is below the target, or limits condition on requires to stop iterations), the symbol is written to the output
cache. This one adds cyclic prefix and streams the symbol to the next block.
(iv) Pipelining
Figure 14 shows the block usage during the processing and the associated steps. Some pipelinings were applied when it was possible; however, the IFFT output is bit-reversed and thus a cycle is lost
to rewrite the symbols in natural order. Overall throughput can be improved by using additional memory for that purpose.
5.2.3. Performance
The maximum number of iterations is limited by the available time between OFDM symbols generation and cannot be easily improved. However, the possible number of canceled peaks can be increased by
removing several peaks in one iteration. This mainly depends on the ability of the kernel cache memory to support multiple read operation, as the complexity of the additionally needed Shift-Scale
units can be considered as marginal. By reusing already existing operators in the design (IFFT and a large amount of cache memory), this architecture implements DVB-T2 TR-Gradient PAPR reduction into
a FPGA with a low hardware cost overhead, compared to the complexity of DVB-T2 processing in general.
6. Conclusion
Robustness and efficiency within DVB-T2's transmission system are further increased by new technologies such as PAPR reduction. In this paper, the performance of two TR-based PAPR reduction methods,
gradient and clipping, is evaluated. Also, an iterative method called “One Kernel One Peak" (OKOP) is proposed. It offers the advantage of controlling the mean power increase of the reserved
carriers. The performance of these methods is compared in terms of PAPR reduction capability, computational complexity and system interference (BER). Simulation results based on CCDF curves, using
the DVB-T2 parameters, show that these methods offer an equivalent performance in term of PAPR gain. They provide a PAPR reduction gain of about dB when only of subcarriers is used without BER
degradation. Thus, the data throughput is not reduced significantly. The advantage of the proposed TR-OKOP method is that the power of the correction carriers could be controlled more easily than in
the case of the TR-Gradient method. Thus, the magnitude of the PRT could be set equal to the data subcarrier magnitude level. Also, the implementation of the TR-Gradient PAPR reduction algorithm in
the DVB-T2 modulator was described.
The authors wish to thank the Pôle Images & Réseaux for the financial support of this work.
1. ETSI, “Digital Video Broadcasting (DVB); Implementation guidelines for a second digital terrestrial television broadcasting system (DVBT2),” ETSI TR 102 831 v0.9.6, January 2009.
2. S. H. Han and J. H. Lee, “An overview of peak-to-average power ratio reduction techniques for multicarrier transmission,” IEEE Wireless Communications, vol. 12, no. 2, pp. 56–65, 2005. View at
Publisher · View at Google Scholar · View at Scopus
3. Y. Louët and J. Palicot, “A classification of methods for efficient power amplification of signals,” Annals of Telecommunications, vol. 63, no. 7-8, pp. 351–368, 2008. View at Publisher · View at
Google Scholar · View at Scopus
4. R. O'Neill and L. B. Lopes, “Envelope variations and spectral splatter in clipped multicarrier signals,” in Proceedings of the 6th IEEE International Symposium on Personal, Indoor and Mobile
Radio Communications (PIMRC '95), pp. 71–75, Toronto, Canada, September 1995.
5. J. Armstrong, “Peak-to-average power reduction for OFDM by repeated clipping and frequency domain filtering,” Electronics Letters, vol. 38, no. 5, pp. 246–247, 2002. View at Publisher · View at
Google Scholar · View at Scopus
6. X. Li and L. J. Cimini Jr., “Effects of clipping and filtering on the performance of OFDM,” IEEE Communications Letters, vol. 2, no. 5, pp. 131–133, 1998. View at Scopus
7. A. E. Jones, T. A. Wilkinson, and S. K. Barton, “Block coding scheme for reduction of peak to mean envelope power ratio of multicarrier transmission schemes,” Electronics Letters, vol. 30, no.
25, pp. 2098–2099, 1994. View at Publisher · View at Google Scholar
8. J. A. Davis and J. Jedwab, “Peak-to-mean power control in OFDM, Golay complementary sequences, and Reed-Muller codes,” IEEE Transactions on Information Theory, vol. 45, no. 7, pp. 2397–2417,
1999. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
9. S. H. Müller and J. B. Huber, “OFDM with reduced peak-to-average power ratio by optimum combination of partial transmit sequences,” Electronics Letters, vol. 33, no. 5, pp. 368–369, 1997. View at
10. R. W. Bäuml, R. F. H. Fischer, and J. B. Huber, “Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping,” Electronics Letters, vol. 32, no. 22, pp. 2056–2057,
11. A. D. S. Jayalath and C. Tellambura, “Reducing the peak-to-average power ratio of orthogonal frequency division multiplexing signal through bit or symbol interleaving,” Electronics Letters, vol.
36, no. 13, pp. 1161–1163, 2000. View at Publisher · View at Google Scholar · View at Scopus
12. J. Tellado-Mourelo, Peak to average power reduction for multicarrier modulation, Ph.D. thesis, Stanford University, Stanford, Calif, USA, September 1999.
13. B. S. Krongold and D. L. Jones, “PAR reduction in OFDM via active constellation extension,” IEEE Transactions on Broadcasting, vol. 49, no. 3, pp. 258–268, 2003. View at Publisher · View at
Google Scholar · View at Scopus
14. M. Sharif, M. Gharavi-Alkhansari, and B. H. Khalaj, “On the peak-to-average power of OFDM signals based on oversampling,” IEEE Transactions on Communications, vol. 51, no. 1, pp. 72–78, 2003.
View at Publisher · View at Google Scholar · View at Scopus
15. R. Van Nee and R. Prasad, OFDM for Wireless Multimedia Communications, Artech House, Boston, Mass, USA, 2000.
16. S. Zabre, J. Palicot, Y. Louët, and C. Lereau, “SOCP approach for OFDM peak-to-average power ratio reduction in the signal adding context,” in Proceedings of the 6th IEEE International Symposium
on Signal Processing and Information Technology (ISSPIT '06), pp. 834–839, Vancouver, Canada, August 2006. View at Publisher · View at Google Scholar
17. R. Baxely, Analyzing selected mapping for peak to average power reduction in OFDM, M.S. thesis, Georgia Institute of Technology, May 2005.
18. ETSI, “Digital Video Broadcasting (DVB); Frame structure channel coding and modulation for a second generation digital terrestrial television broadcasting system (DVB-T2),” ETSI EN 302 755
v1.2.0c, July 2009.
19. S. Litsyn, Peak Power Control in Multicarrier Communications, Cambridge University Press, Cambridge, UK, 2007.
20. D. Guel and J. Palicot, “Clipping formulated as an adding signal technique for OFDM peak power reduction,” in Proceedings of the 69th IEEE Vehicular Technology Conference (VTC '09), Barcelona,
Spain, April 2009. View at Publisher · View at Google Scholar
|
{"url":"http://www.hindawi.com/journals/ijdmb/2010/797393/","timestamp":"2014-04-16T22:19:26Z","content_type":null,"content_length":"193412","record_id":"<urn:uuid:94f2883b-e69b-43ce-abcf-5396c59229c0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
First-Principles Study of Electronic Structure and Optical Properties of Tetragonal PbMoO
ISRN Condensed Matter Physics
Volume 2011 (2011), Article ID 290741, 7 pages
Research Article
First-Principles Study of Electronic Structure and Optical Properties of Tetragonal PbMoO
State Key Laboratory of Solidification Processing, School of Materials Science and Engineering, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
Received 24 July 2011; Accepted 24 August 2011
Academic Editors: S. Bud'ko and A. Zagoskin
Copyright © 2011 Qi-Jun Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Using the plane-wave ultrasoft pseudopotential technique based on the first-principles density functional theory (DFT), we have studied the structural, electronic, chemical bonding, and optical
properties of tetragonal PbMoO[4]. The obtained structural parameters are in good agreement with experiments. Band structure, density of states, and chemical bonding are calculated and shown. It is
found that tetragonal PbMoO[4] is an indirect band gap. The dielectric function, refractive index, extinction coefficient, reflectivity, absorption coefficient, loss function, and conductivity
function are calculated for radiation up to 20eV.
1. Introduction
PbMoO[4] has been the subject of great research interest both experimentally [1–8] and theoretically [9–13] due to its wide applications such as acousto-optic light deflectors, modulators, adjustable
filters, surface acoustic wave devices, ionic conductors, and low-temperature scintillators, and its superior properties such as high acousto-optic light figure of merit, low optical loss in the
region 420–3900nm, and good mechanical impedance for acoustic matching ([14–17] and the references therein).
The crystal structure of tetragonal PbMoO[4] belongs to the space group and the local symmetry . The dielectric constants [18], polarized reflectivity spectra [19], high-pressure Raman spectrum, and
electrical properties [1, 7, 14] have been reported, which show that there is an anisotropy of optical properties and a transition from crystal to amorphous phase with increasing pressure. The
framework of a fully relativistic self-consistent Dirac-Slater theory with a numerically discrete variational (DV-Xα) method [9, 13, 19], the ultrasoft pseudopotentials of generalized gradient
approximations (GGAs) based on the density function theory (DFT) with CASTEP code [10, 12], the linearized-augmented-plane-wave method with WIEN97 code [11], and so forth have been used to study F
type color centers, optical properties, electronic band structure, and so forth. Although these studied cases of PbMoO[4] are well presented, it is not clear how electron transitions to influence
optical properties is. Additionally, the chemical bonding of PbMoO[4] should be explained.
Hence, we study the structural parameters, electronic structure, chemical bonding, and optical properties of tetragonal PbMoO[4] using the plane-wave ultrasoft pseudopotential technique based on the
first-principles density functional theory. The rest of the work is organized as follows. In Section 2, we give a short description of the methods used in this paper. The results and discussion are
shown in Section 3. We present our findings and give a brief summary in Section 4.
2. Computational Methodology
Density functional theory calculations are performed with plane-wave ultrasoft pseudopotential using the generalized gradient approximation (GGA) with the Perdew-Wang 1991 (PW91) functional [20] as
implemented in the CASTEP code [21]. The ionic cores are represented by ultrasoft pseudopotentials for Mo, Pb, and O atoms. The Mo 4s^24p^64d^55s^1, Pb 5d^106s^26p^2, and O 2s^22p^4 electrons are
explicitly treated as valence electrons. The plane-wave cutoff energy is 380eV, and the Brillouin zone integration is performed over the 5 × 5 × 6 grid sizes using the Monkhorst-Pack method for
tetragonal structure optimization. This set of parameters assures the maximum force of 0.01eV/Å, the maximum stress of 0.02GPa, and the maximum displacement of Å.
3. Results and Discussion
3.1. Geometry and Structure Optimization
The crystal structure of tetragonal PbMoO[4] is shown in Figure 1. The optimized values of a and c for tetragonal PbMoO[4] are listed in Table 1. The obtained structural parameters are in good
agreement with the previous experimental data [2, 11, 16]. Of course, it can be seen that the GGA leads to overestimating our calculated data compared with experiments.
3.2. Electronic Properties
The calculations of the electronic band structure along the symmetry lines of the Brillouin zone, the total and the partial density of states (DOSs and PDOSs) are shown in Figures 2 and 3. The top of
the valence band is taken as the zero of energy. In this compound, the valence band maximum (VBM) is located at 1 point (the valence band maximum of tetragonal PbMoO[4] is not at high-symmetry point,
but at the defined 1 point between X and Γ), whereas the conduction band minimum (CBM) is located at N point, resulting in indirect band gap of 2.838eV. This value is in good agreement with the
previous calculated data 2.59eV [11] and 2.8eV [12]. However, these results are all smaller than the experimental values of 2.94–4.7eV [2] due to the well-known underestimation of conduction band
energy in DFT calculations [22].
In order to further elucidate the nature of the electronic band structure, we have calculated and explained the DOSs and PDOSs. From the PDOSs, we can identify the angular momentum character of the
different structures. Structure (1) is mainly due to Mo-4s electrons, structure (2) due to Mo-4p electrons, structure (3) due to O-2s electrons, structure (4) due to Pb-5d electrons, structure (5)
due to Pb-6s electrons, structures (6) and (7) due to O-2p electrons with hybridization of Mo-4d electrons, and structure (8) due to O-2p electrons. The conduction bands are composed of Mo-4d and
show the hybridization with O-2p, as well as the hybridization between Pb-6p and O-2p.
To understand the chemical bonding of this material, we have poltted the charge density of (112) plane corresponding to the (1)–(8) located at Figure 3 in Figure 4. The polt labeled (1) shows the
isolated Mo-4s, (2) the weak hybridization σ bonding between Mo-4p and O-2s, (3) the weak hybridization σ bonding between O-2s and Mo-4d, (4) the weak hybridization σ bonding between Pb-5d and
O-2s, (5) the weak hybridization σ bonding between Pb-6s and O-2p, (6) and (7) the hybridization σ and π bonding between O-2p and Mo-4d, and (8) the nonbonding O-2P[π]. Hence, we can conclude that
the bonding between Mo and O is mainly covalent and the bonding between Pb and O is mainly ionic. Additionally, the charge density of (220) plane and the results of population analysis have been
shown in Figure 5 and Table 2, which are in good agreement with our analysis of chemical bonding.
3.3. Optical Properties
We need to calculate two dielectric tensor components to completely characterize the linear optical properties due to the tetragonal symmetry of PbMoO[4]. The imaginary and the real parts of the
dielectric function are calculated using (1) [23–25]: as well as the scissors operator approximation [25, 26] due to underestimating the energies of excitation with the density functional
calculations. The good agreement with experiments are obtained for the optical properties like TiO[2] [26], SrHfO[3] [27], SrZrO[3] [28], and HfO[2] [29] using the scissors operator.
Figures 6 and 7 display the imaginary and the real parts of the dielectric function from (100) and (001) along with the calculated results from experimental data [19] for a radiation up to 20eV. We
can see that our results are consistent with the previous work [19]. The discrepancy between our results and the experiment [19] may be due to the different-temperature condition (0, K in our paper
and 6, K in [19]). The imaginary parts exhibit four structures A–D of (100) and E–H of (001). Structures A and E originate mainly from transitions of O-2p[π] into the conduction bands, and structures
B and F from transitions of hybridization π bonding between O-2p and Mo-4d into the conduction bands, and structures C and G from transitions of hybridization σ bonding between O-2p and Mo-4d into
the conduction bands, and structures D and H from transitions of Pb-6s into the conduction bands. The calculated static dielectric constants are 5.337 and 4.910 from (100) and (001).
The refractive index and the extinction coefficient are displayed in Figure 8. The static refractive index is found to have the values 2.310 and 2.216 from polarization vectors (100) and (001), which
are in agreement with experimental data 2.28 and 2.40 [18] from (100) and (001). Figure 9 shows the calculated results on the reflectivity, absorption coefficient, loss function, and complex
conductivity function from polarization vectors (100) and (001). We hope the calculated values can help to offer a theoretical basis for the experiment and application of tetragonal PbMoO[4].
4. Conclusions
The paper reports detailed investigations on the structural, electronic, chemical bonding, and optical properties of tetragonal PbMoO[4] using the plane-wave ultrasoft pseudopotential technique based
on the first-principles density-functional theory (DFT). The calculated equilibrium lattice parameters are in agreement with experiments. Our calculated results of the band structure and DOSs show
that this compound is an indirect band gap of 2.838eV. The charge densities and population analysis are obtained and analyzed, which show that Mo and O are mainly covalent, whereas Pb and O are
mainly ionic. The complex dielectric function has been shown, and the peaks position distributions of imaginary parts of complex dielectric function have been explained, which show electron
transitions in the electronic bands.
This work was financially supported by the National Natural Science Foundation of China (Contract no. 50902110), the Doctorate Foundation of Northwestern Polytechnical University (Contract no.
cx201005), the 111 Project (Contract no. B08040), and the Research Fund of the State Key Laboratory of Solidification Processing (NWPU), China (Contract no. 58-TZ-2011).
1. C.-L. Yu, Q.-J. Yu, C.-X. Gao et al., “In-situ high pressure Raman spectrum and electrical property of PbMoO[4],” Chinese Physics Letters, vol. 24, no. 8, article 014, pp. 2204–2207, 2007. View
at Publisher · View at Google Scholar · View at Scopus
2. J. C. Sezancoski, M. D.R. Bomio, L. S. Cavalcante et al., “Morphology and blue photoluminescence emission of PbMo0[4] processed in conventional hydrothermal,” Journal of Physical Chemistry C,
vol. 113, no. 14, pp. 5812–5822, 2009. View at Publisher · View at Google Scholar
3. J. A. Groenink and H. Binsma, “Electrical conductivity and defect chemistry of PbMoO[4] and PbWO[4],” Journal of Solid State Chemistry, vol. 29, no. 2, pp. 227–236, 1979. View at Scopus
4. S. C. Sabharwal, Sangeeta, and D. G. Desai, “Investigations of single crystal growth of PbMoO[4],” Crystal Growth and Design, vol. 6, no. 1, pp. 58–62, 2006. View at Publisher · View at Google
Scholar · View at Scopus
5. T. T. Zhou, H. L. Hu, and S. Q. Sun, “Studies of infrared antireflection coating on PbMoO[4] single crystal,” Vacuum Science and Technology, vol. 22, p. 392, 2002.
6. J. C. Wang, C. X. Liu, and Y. C. Ge, “Research on growth and optical properties of PbMoO[4] crystal,” Journal of Synthetic Crystals, vol. 24, p. 238, 1995.
7. C.-L. Yu, Q.-J. Yu, C.-X. Gao et al., “Investigation of in-situ Raman spectrum and electrical conductivity of PbMoO[4] at high pressure,” Chinese Journal of High Pressure Physics, vol. 21, no. 3,
pp. 259–263, 2007. View at Scopus
8. W.-P. Zhou, S.-M. Wan, X. Zhang et al., “Study of growth units and the growth habit of PbMoO[4] crystal using high temperature Raman spectra,” Acta Physica Sinica, vol. 57, no. 11, pp. 7305–7309,
2008. View at Scopus
9. J. Chen, Q. Zhang, T. Liu, Z. Shao, and C. Pu, “Electronic structures of PbMoO[4] crystal with F-type color centers,” Chinese Journal of Computational Physics, vol. 25, no. 2, pp. 213–217, 2008.
10. Z. C. Guo, H. N. Dong, B. Deng, and D. F. Li, “First-principles study of the optical properties for the PbMoO[4] crystal,” Material Review, vol. 24, p. 237, 2010.
11. Y. Zhang, N. A. W. Holzwarth, and R. T. Williams, “Electronic band structures of the scheelite materials CaMoO[4], CaWO[4], PbMoO[4], and PbWO[4],” Physical Review B, vol. 57, no. 20, pp.
12738–12750, 1998. View at Scopus
12. J. Chen, T. Liu, D. Cao, and G. Zhao, “First-principles study of the electronic structures and absorption spectra for the PbMoO[4] crystal with lead vacancy,” Physica Status Solidi B, vol. 245,
no. 6, pp. 1152–1155, 2008. View at Publisher · View at Google Scholar · View at Scopus
13. J. Y. Chen, Q. R. Zhang, T. Y. Liu, and Z. Y. Shao, “First-principles study of color centers in PbMoO[4] crystals,” Physica B: Condensed Matter, vol. 403, no. 4, pp. 555–558, 2008. View at
Publisher · View at Google Scholar · View at Scopus
14. C. L. Yu, Q. J. Yu, C. X. Gao et al., “Structural and electrical properties of PbMoO[4] under high pressure,” Journal of Physics Condensed Matter, vol. 19, no. 42, 2007. View at Publisher · View
at Google Scholar
15. H. C. Zeng, “Correlation of PbMoO[4] crystal imperfections to Czochralski growth process,” Journal of Crystal Growth, vol. 171, no. 1-2, pp. 136–145, 1997. View at Scopus
16. N. Senguttuvan, S. M. Babu, and R. Dhanasekaran, “Some aspects on the growth of lead molybdate single crystals and their characterization,” Materials Chemistry and Physics, vol. 49, no. 2, pp.
120–123, 1997. View at Scopus
17. M. Tyagi, S. G. Singh, A. K. Singh, and S. C. Gadkari, “Understanding colorations in PbMoO[4] crystals through stoichiometric variations and annealing studies,” Physica Status Solidi A, vol. 207,
no. 8, pp. 1802–1806, 2010. View at Publisher · View at Google Scholar · View at Scopus
18. W. S. Brower and P. H. Fang, “Dielectric constants of PbMoO[4] and CaMoO[4],” Physical Review, vol. 149, no. 2, p. 646, 1966. View at Publisher · View at Google Scholar · View at Scopus
19. M. Fujita, M. Itoh, H. Mitani, S. Sangeeta, and M. Tyagi, “Exciton transition and electronic structure of PbMoO[4] crystals studied by polarized light,” Physica Status Solidi B, vol. 247, no. 2,
pp. 405–410, 2010. View at Publisher · View at Google Scholar · View at Scopus
20. J. P. Perdew, J. A. Chevary, S. H. Vosko et al., “Atoms, molecules, solids, and surfaces: applications of the generalized gradient approximation for exchange and correlation,” Physical Review B,
vol. 46, no. 11, pp. 6671–6687, 1992. View at Publisher · View at Google Scholar · View at Scopus
21. M. D. Segall, P. J. D. Lindan, M. J. Probert et al., “First-principles simulation: ideas, illustrations and the CASTEP code,” Journal of Physics Condensed Matter, vol. 14, no. 11, pp. 2717–2744,
2002. View at Publisher · View at Google Scholar · View at Scopus
22. W. E. Pickett, “Density functional in solids: II. excited states,” Comments on Solid State Physics, vol. 12, p. 57, 1986.
23. R. C. Fang, Solid Spectroscopy, Chinese Science Technology University Press, Hefei, China, 2003.
24. Y. Zhang and W. M. Shen, Basic of Solid Electronics, Zhe Jiang University Press, Hangzhou, China, 2005.
25. C. M.I. Okoye, “Theoretical study of the electronic structure, chemical bonding and optical properties of KNbO[3] in the paraelectric cubic phase,” Journal of Physics Condensed Matter, vol. 15,
no. 35, pp. 5945–5958, 2003. View at Publisher · View at Google Scholar
26. R. Asahi, Y. Taga, W. Mannstadt, and A. J. Freeman, “Electronic and optical properties of anatase TiO[2],” Physical Review B, vol. 61, no. 11, pp. 7459–7465, 2000. View at Scopus
27. Q.-J. Liu, Z.-T. Liu, L.-P. Feng, and H. Tian, “Electronic and optical properties of cubic SrHfO[3],” Communications in Theoretical Physics, vol. 54, no. 5, pp. 908–912, 2010. View at Publisher ·
View at Google Scholar
28. Q.-J. Liu, Z.-T. Liu, Y.-F. Liu, L.-P. Feng, H. Tian, and J. G. Ding, “First-principles study of structural, electronic and optical properties of orthorhombic SrZrO[3],” Solid State
Communications, vol. 150, no. 41-42, pp. 2032–2035, 2010. View at Publisher · View at Google Scholar · View at Scopus
29. Q. Liu, Z. Liu, L. Feng, and B. Xu, “First-principles study of structural, optical and elastic properties of cubic HfO[2],” Physica B, vol. 404, no. 20, pp. 3614–3619, 2009. View at Publisher ·
View at Google Scholar · View at Scopus
|
{"url":"http://www.hindawi.com/journals/isrn/2011/290741/","timestamp":"2014-04-20T21:13:13Z","content_type":null,"content_length":"58309","record_id":"<urn:uuid:9bae5095-da5d-483e-961b-760e83582d71>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Werbad's Puzzles
Re: Werbad's Puzzles
« Reply #285 on: January 22, 2010, 08:30:19 PM »
My solution:
It doesn't really have any special cases to solve. It's a general solution that finds a matching sequence for any number of crates. It's a lot slower than Rene's though.
Amazing. I love the mechanism with the dozers and matchers. That must be useful in other circumstances as well.
I had another idea about my solution, and managed to drastically simplify it. No special cases anymore.
For Multi Sum (
Re: Werbad's Puzzles
« Reply #286 on: March 25, 2010, 06:58:38 PM »
New Puzzle:
Lock Picker:
Shouldn't be too hard for people used to maneuvering dozers.
I've also made an alternate version,
, but I've not yet managed to solve it. I have an idea that might work, but there simply isn't enough room to fit it.
Re: Werbad's Puzzles
« Reply #287 on: March 27, 2010, 01:06:56 PM »
Re: Werbad's Puzzles
« Reply #288 on: March 29, 2010, 09:56:51 PM »
For "Lock Picker":
I think i can imagine a solution to the alternate version also, but i just barely fit bybuham into the available space.
[DEL:If the playing field were just one space higher, my solution would be potentially a lot faster.:DEL]
That's so often the case though. I think it is funny that the level code for a puzzle focusing on a bulldozer almost spells "my digger."
Edit: I managed to . sonozoz should run faster on average, I think.
« Last Edit: March 30, 2010, 05:35:19 PM by Twee »
Re: Werbad's Puzzles
« Reply #289 on: March 30, 2010, 09:29:15 AM »
Re: Werbad's Puzzles
« Reply #290 on: March 30, 2010, 07:02:51 PM »
Nice Work! I did actually manage to solve the harder version as well, but I have my solution codes on another computer so unfortunately I cannot post them at the moment...
I've made another dozer-centered puzzle though:
Race Condition:
Bonus points if you manage to solve it using only one of the dozers.
Re: Werbad's Puzzles
« Reply #291 on: March 31, 2010, 12:07:53 AM »
Was it really meant to be this easy? I didn't need either dozer.
Re: Werbad's Puzzles
« Reply #292 on: March 31, 2010, 11:30:38 AM »
Ah, I made a slight mistake when I minimized the mechanism at the bottom. Here, have a fixed version:
Race Condition v2:
Re: Werbad's Puzzles
« Reply #293 on: March 31, 2010, 07:20:18 PM »
for race condition 2(one dozer, no crates, or are they barrels?):
nice, but still a bit easy for 'expert'.
edit: fixed a glitch that caused it to fail 50% of the time
« Last Edit: April 02, 2010, 09:25:27 PM by colcolpicle »
Re: Werbad's Puzzles
« Reply #294 on: March 31, 2010, 08:19:22 PM »
Stop finding alternative solutions to my puzzle!
I really didn't want to force the correct solution this way, but you give me no choice:
Race Condition v3:
I should have learned by now that players will always find a way to break even the greatest puzzles...
Re: Werbad's Puzzles
« Reply #295 on: April 01, 2010, 07:32:57 PM »
for race condition 2(one dozer, no crates, or are they barrels?):
nice, but still a bit easy for 'expert'.
I ran this three times in a row and it didn't work. It unsolves itself with one last trailing crate.
I'm on the way to a "correct solution" but I'm having trouble with
« Last Edit: April 02, 2010, 08:40:43 AM by jf »
Re: Werbad's Puzzles
« Reply #296 on: April 13, 2010, 11:15:19 PM »
Re: Werbad's Puzzles
« Reply #297 on: April 14, 2010, 11:40:22 PM »
I finally beat "warehouse"(
. This puzzle had been bugging me for a long time. I used a component based on one from Werbad's solution to one of my own puzzles.
Re: Werbad's Puzzles
« Reply #298 on: April 15, 2010, 10:55:42 AM »
I finally beat "warehouse"(
. This puzzle had been bugging me for a long time. I used a component based on one from Werbad's solution to one of my own puzzles.
Simple and direct -- I really like it. It inspired me to revisit one of Werbad's puzzles that I never finished.
I couldn't find my original work so I started over from scratch, found a new idea, and came up with
for Multi-sum (
It's really satisfying to come up with such a solution.
Re: Werbad's Puzzles
« Reply #299 on: April 15, 2010, 03:50:11 PM »
Simple and direct -- I really like it. It inspired me to revisit one of Werbad's puzzles that I never finished.
I couldn't find my original work so I started over from scratch, found a new idea, and came up with
for Multi-sum (
Thanks! Congratulations on solving Multi-sum. I've not looked at your solution, as that is another one I hope to eventually solve.
|
{"url":"http://kevan.org/rubicon/forums/index.php?topic=302.msg4665","timestamp":"2014-04-17T04:34:58Z","content_type":null,"content_length":"70809","record_id":"<urn:uuid:7e3674f0-ccfe-43cb-916d-4e87052a54f4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding radius of circle
The five lines of a star pentagram is 10 cm. Find the radius of the cricle that will circumscribe the pentagram.
I thought the answer would be 5cm because the diameter had to 10 right? But I checked the answer page and the answer was 5.3cm.
|
{"url":"http://mathhelpforum.com/trigonometry/132761-finding-radius-circle.html","timestamp":"2014-04-19T12:16:01Z","content_type":null,"content_length":"36452","record_id":"<urn:uuid:382c3183-17c3-4546-9fee-abacfec01659>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with Fraction Story Problems for Kids
Kids in elementary school need to be able to solve fraction word problems, as well as normal fraction problems. Story problems can be more challenging because they require you to figure out which
operations you need to perform. Keep reading for help!
Story Problems with Fractions
Writing the Fraction
Some word problems will just ask you to write a fraction to represent a particular situation. For example, if you cut a pie into seven pieces and ate three of them, what fraction would express this?
To solve this type of fraction word problem, just remember that the total number of pieces goes in the fraction's denominator (the bottom), and the partial amount goes in the numerator (the top). In
this case, the fraction would be 3/7.
Parts from the Same Whole
When you're working with two fractions that come from the same whole, they'll have the same denominator. For instance, if you ate three slices from your 7-piece pie and your friend ate two slices,
this would be represented by the fractions 3/7 and 2/7. What fraction of the pie did the two of you eat together?
Since you ate 3/7 and your friend ate 2/7, you can simply add the numerators together, like this: 3 + 2 = 5, so 3/7 + 2/7 = 5/7. You can do subtraction the same way. For example, if you wanted to
know what fraction of the pie was left over, you would subtract 5/7 from 7/7. Since 7 - 5 = 2, the answer is 2/7.
Fractions with Different Denominators
You'll also need to solve story problems about fractions that have different denominators. For example, imagine that your math test has two parts, and you need to add together your scores from each
part. You got six out of eight problems correct on the first part (6/8), and two out of three problems right on the second part (2/3).
To solve the problem, you need to find a common denominator. To do this, multiply the numerator and denominator of each fraction by the denominator of the other fraction, and then add the two results
together, like this:
6/8 + 2/3
= (3/3)(6/8) + (8/8)(2/3)
= 18/24 + 16/24
= 34/24
=17/12 or 1 5/12
Fraction Problems with Division
To multiply fractions, you just have to find the products of both numerators and both denominators, and then simplify if necessary. Division of fractions is a bit more complicated. Imagine that you
have to solve the following problem:
'Edward has 25/2 cups of dough, and he wants to make tarts that require just 1/2 cup of dough each. How many tarts can he make?'
To find the solution, you'll need to divide the total amount of dough (25/2 cups) by the amount of dough needed for each tart (1/2 cup). To divide fractions, you multiply one by the reciprocal of the
other. To create a reciprocal, just flip the numerator and the denominator. For instance, the reciprocal of 1/2 is 2/1. Here's how you'll solve the problem:
25/2 ÷ 1/2
= 25/2 x 2/1
= (25 x 2)/(2 x 1)
= 50/2
= 25
Other Articles You May Be Interested In
Tips to Help Kids Understand Fractions
Fractions can be a confusing topic for some students. Read on to learn how you can help your children better understand the uses of fractions.
5 Free and Fun Math Games for Kids
Looking for a way to get your child engaged with math? There are many free, fun math games online that explore basic concepts such as addition, subtraction, multiplication and division, as well
as more advanced games that offer practice with decimals and fractions. Read on to discover five of our favorite educational - and fun! -...
We Found 7 Tutors You Might Be Interested In
Huntington Learning
• What Huntington Learning offers:
• Online and in-center tutoring
• One on one tutoring
• Every Huntington tutor is certified and trained extensively on the most effective teaching methods
• What K12 offers:
• Online tutoring
• Has a strong and effective partnership with public and private schools
• AdvancED-accredited corporation meeting the highest standards of educational management
Kaplan Kids
• What Kaplan Kids offers:
• Online tutoring
• Customized learning plans
• Real-Time Progress Reports track your child's progress
• What Kumon offers:
• In-center tutoring
• Individualized programs for your child
• Helps your child develop the skills and study habits needed to improve their academic performance
Sylvan Learning
• What Sylvan Learning offers:
• Online and in-center tutoring
• Sylvan tutors are certified teachers who provide personalized instruction
• Regular assessment and progress reports
In-Home, In-Center and Online
Tutor Doctor
• What Tutor Doctor offers:
• In-Home tutoring
• One on one attention by the tutor
• Develops personlized programs by working with your child's existing homework
• What TutorVista offers:
• Online tutoring
• Student works one-on-one with a professional tutor
• Using the virtual whiteboard workspace to share problems, solutions and explanations
|
{"url":"http://mathandreadinghelp.org/fractions_story_problems.html","timestamp":"2014-04-18T02:58:31Z","content_type":null,"content_length":"26293","record_id":"<urn:uuid:2502799c-9a89-4c4f-940c-8d037089c551>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the value of x rounded to the nearest tenth? With the choice of 2.4 2 3 8.5
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I think the answer is 3
Best Response
You've already chosen the best response.
well then can you try and help me please
Best Response
You've already chosen the best response.
Wrong thank you anyways
Best Response
You've already chosen the best response.
I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I
think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I
think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I
think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3 I
think the answer is 3 I think the answer is 3 I think the answer is 3 I think the answer is 3
Best Response
You've already chosen the best response.
@T0mmy one time to say it was enough. :)
Best Response
You've already chosen the best response.
Okay just joking - went a bit crazy with copy & paste Apologies Blue Clues ;D
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50e6204de4b058681f3f2d9d","timestamp":"2014-04-19T22:35:54Z","content_type":null,"content_length":"43639","record_id":"<urn:uuid:f6b4010f-a048-45c4-bf25-f197c8b3786e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gate 10 Year's Paper With Answer Key
Hosted at www.educationobserver.com
For more papers visit www.educationobserver.com/forum
SECTION A
1. This question consists of TWENTY-FIVE sub-questions (1.1 – 1.25) of ONE mark
each. For each of these sub-questions, four possible alternatives, A, B, C and D
are provided. Choose the most appropriate alternative and darken its bubble on
the Objective Response Sheet (ORS) against the corresponding sub-question
number using a soft HB pencil. Do not darken more than one bubble for any
sub-question. Do not use the ORS for any rough work. You may use the answer
book (last few pages) for any rough work.
1.1 The rank of the matrix is
(a) 4 (b) 2 (c) 1 (d) 0
1.2 The trapezoidal rule for integration gives exact result when the integrand is a
polynomial of degree
(a) 0 but not 1 (b) 1 but not 0 (c) 0 or 1 (d) 2
1.3 ( ) ( )
The solution to the recurrence equation T 2k = 3T 2k −1 + 1, T (1) = 1 is
(a) 2 k
(3 k +1
−1 ) k
(c) 3log2
(d) 2log3
1.4 The minimum number of colours required to colour the vertices of a cycle with n
nodes in such a way that no two adjacent nodes have the same colour is
(a) 2 (b) 3 (c) 4 (d) n − 2 + 2
1.5 In the worst case, the number of comparisons needed to search a singly linked
list of length n for a given element is
n n
(a) log n (b) (c) log2 − 1 (d) n
1.6 Which of the following is true?
(a) The set of all rational negative numbers forms a group under multiplication.
(b) The set of all non-singular matrices forms a group under multiplication.
(c) The set of all matrices forms a group under multiplication.
(d) Both B and C are true.
1.7 The language accepted by a Pushdown Automaton in which the stack is limited to
10 items is best described as
(a) Context free (b) Regular
(c) Deterministic Context free (d) Recursive
1.8 “If X then Y unless Z” is represented by which of the following formulas in
prepositional logic? (“ ¬ “, is negation, “∧” is conjunction, and “ ” is implication)
(a) (X∧¬Z) Y (b) (X∧Y) ¬Z (c) X (Y∧¬Z) (d) (X Y)∧¬Z
1.9 A device employing INTR line for device interrupt puts the CALL instruction on the
data bus while
(a) INTA is active (b) HOLD is active
(c) READY is active (d) None of the above
1.10 In 8085 which of the following modifies the program counter?
(a) Only PCHL instruction (b) Only ADD instructions
(c) Only JMP and CALL instructions (d) All instructions
1.11 In serial data transmission, every byte of data is padded with a ‘0’ in the
beginning and one or two ‘1’s at the end of byte because
(a) Receiver is to be synchronized for byte reception
(b) Receiver recovers lost ‘0’s and ‘1’s from these padded bits
(c) Padded bits are useful in parity computation
(d) None of the above
1.12 Minimum sum of product expression for f(w,x,y,z) shown in Karnaugh-map below
01 x 0 0 1
11 x 0 0 1
10 0 1 1 x
(a) xz + y ′z (b) xz ′ + zx ′
(c) x ′y + zx ′ (d) None of the above
1.13 Which of the following is not a form of memory?
(a) instruction cache (b) instruction register
(c) instruction opcode (d) translation look-a-side buffer
1.14 The decimal value 0.25
(a) is equivalent to the binary value 0.1
(b) is equivalent to the binary value 0.01
(c) is equivalent to the binary value 0.00111…
(d) cannot be represented precisely in binary
1.15 The 2’s complement representation of the decimal value –15 is
(a) 1111 (b) 11111 (c) 111111 (d) 10001
1.16 Sign extension is a step in
(a) floating point multiplication
(b) signed 16 bit integer addition
(c) arithmetic left shift
(d) converting a signed integer from one size to another
1.17 In the C language
(a) At most one activation record exists between the current activation record
and the activation record for the main
(b) The number of activation records between the current activation record and
the activation record fro the main depends on the actual function calling
(c) The visibility of global variables depends on the actual function calling
(d) Recursion requires the activation record for the recursive function to be
saved on a different stack before the recursive fraction can be called.
1.18 The results returned by function under value-result and reference parameter
passing conventions
(a) Do not differ
(b) Differ in the presence of loops
(c) Differ in all cases
(d) May differ in the presence of exception
1.19 Relation R with an associated set of functional dependencies, F, is decomposed
into BCNF. The redundancy (arising out of functional dependencies) in the
resulting set of relations is
(a) Zero
(b) More than zero but less than that of an equivalent 3NF decomposition
(c) Proportional to the size of F+
(d) Indetermine
1.20 With regard to the expressive power of the formal relational query languages,
which of the following statements is true?
(a) Relational algebra is more powerful than relational calculus
(b) Relational algebra has the same power as relational calculus.
(c) Relational algebra has the same power as safe relational calculus.
(d) None of the above
1.21 In 2’s complement addition, overflow
(a) is flagged whenever there is carry from sign bit addition
(b) cannot occur when a positive value is added to a negative value
(c) is flagged when the carries from sign bit and previous bit match
(d) None of the above
1.22 Which of the following scheduling algorithms is non-preemptive?
(a) Round Robin (b) First-In First-Out
(c) Multilevel Queue Scheduling
(d) Multilevel Queue Scheduling with Feedback
1.23 The optimal page replacement algorithm will select the page that
(a) Has not been used for the longest time in the past.
(b) Will not be used for the longest time in the future.
(c) Has been used least number of times.
(d) Has been used most number of times.
1.24 In the absolute addressing mode
(a) the operand is inside the instruction
(b) the address of the operand is inside the instruction
(c) the register containing the address of the operand is specified inside the
(d) the location of the operand is implicit
1.25 Maximum number of edges in a n-node undirected graph without self loops is
n ( n − 1) ( n + 1) ( n )
(a) n2 (b) (c) n - 1 (d)
2. This question consists of TWENTY-FIVE sub-questions (2.1 – 2.25) of TWO marks
each. For each of these sub-questions, four possible alternatives, A, B, C and D
are provided. Choose the most appropriate alternative and darken its bubble on
the Objective Response Sheet (ORS) against the corresponding sub-question
number using a soft HB pencil. Do not darken more than one bubble for any
sub-question. Do not use the ORS for any rough work. You may use the answer
book (last few pages) for any rough work.
2.1 Consider the following logic circuit whose inputs are functions f1, f2, f3 and output
is f.
f2(x,y,z) f(x,y,z)
Given that
f1 ( x, y , z ) =∑ (0,1, 3,5) ,
f ( x, y , z ) = ∑ ( 6, 7 ) , and
f ( x, y , z ) = ∑ (1, 4,5) ,
f3 is
(a) ∑ (1, 4,5) (b) ∑ (6, 7)
(c) ∑ (0,1, 3,5) (d) None of the above
2.2 Consider the following multiplexor where 10, 11, 12, 13 are four data input lines
selected by two address line combinations A1A0 =00,01,10,11 respectively and f
is the output of the multiplexor. EN is the Enable input.
11 4 TO 1
12 multiplexor
13 OUTPUT f(x,y,z)=?
The function f(x,y,z) implemented by the above circuit is
(a) xyz ′ (b) xy+z
(c) x+y (d) None of the above
2.3 (
Let f(A,B) = A′ + B. Simplified expression for function f f ( x + y , y ) , z is )
(a) x ′ + z
(b) xyz
(c) xy ′ + z
(d) None of the above
2.4 What are the states of the Auxillary Carry (AC) and Carry Flag (CY) after
executing the following 8085 program?
MVI H, 5DH
MIV L, 6BH
MOV A, H
ADD L
(a) AC = 0 and CY =0 (b) AC = 1 and CY =1
(c) AC = 1 and CY =0 (d) AC = 0 and CY =1
2.5 The finite state machine described by the following state diagram with A as
starting state, where an arc label is and x stands for 1-bit input and y stands
for 2-bit output
A B C
1/01 1/10
(a) Outputs the sum of the present and the previous bits of the input.
(b) Outputs 01 whenever the input sequence contains 11
(c) Outputs 00 whenever the input sequence contains 10
(d) None of the above
2.6 The performance of a pipelined processor suffers if
(a) the pipeline stages have different delays
(b) consecutive instructions are dependent on each other
(c) the pipeline stages share hardware resources
(d) All of the above
2.7 Horizontal microprogramming
(a) does not require use of signal decoders
(b) results in larger sized microinstructions than vertical microprogramming
(c) uses one bit for each control signal
(d) all of the above
2.8 Consider the following declaration of a two-dimensional array in C:
char a[100][100];
Assuming that the main memory is byte-addressable and that the array is stored
starting from memory address 0, the address of a [40][50] is
(a) 4040 (b) 4050 (c) 5040 (d) 5050
2.9 The number of leaf nodes in a rooted tree of n nodes, with each node having 0 or
3 children is:
( n − 1) (c)
( n − 1) (d)
(2n + 1)
2.10 Consider the following algorithm for searching for a given number x in an
unsorted array A[l..n] having n distinct values:
1. Choose an i uniformly at random from l..nl
2. If A[i]=x then Stop else Goto 1;
Assuming that x is present A, what is the expected number of comparisons made
by the algorithm before it terminates?
(a) n (b) n - 1 (c) 2n (d)
2.11 The running time of the following algorithm
Procedure A(n)
If n<=2 return(1) else return (A( n ));
Is best described by
(a) O(n) (b) O(log n) (c) O(log log n) (d) O(1)
2.12 A weight-balanced tree is a binary tree in which for each node, the number of
nodes in the let sub tree is at least half and at most twice the number of nodes in
the right sub tree. The maximum possible height (number of nodes on the path
from the root to the furthest leaf) of such a tree on n nodes is best described by
which of the following?
(a) log2n (b) log 4 n (c) log3n (d) log3 n
2.13 The smallest finite automaton which accepts the language
{x length of x is divisible by 3} has
(a) 2 states (b) 3 states (c) 4 states (d) 5 states
2.14 Which of the following is true?
(a) The complement of a recursive language is recursive.
(b) The complement of a recursively enumerable language is recursively
(c) The complement of a recursive language is either recursive or recursively
(d) The complement of a context-free language is context-free
sion of this test paper at http://forum.gatementor.
X 3
2.15 The Newton-Raphson iteration X n +1 = n + can be used to solve the
2 (2 X n )
(a) X 2 = 3 (b) X 3 = 3 (c) X 2 = 2 (d) X 3 = 2
2.16 Four fair coins are tossed simultaneously. The probability that at least one head
and one tail turn up is
(a) (b) (c) (d)
2.17 The binary relation S = φ (empty set) on set A = {1,2,3} is
(a) Neither reflexive nor symmetric (b) Symmetric and reflexive
(c) Transitive and reflexive (d) Transitive and symmetric
2.18 The C language is:
(a) A context free language (b) A context sensitive language
(c) A regular language
(d) Parsable fully only by a Turing machine
2.19 To evaluate an expression without any embedded function calls
(a) One stack is enough
(b) Two stacks are needed
(c) As many stacks as the height of the expression tree are needed
(d) A Turning machine is needed in the general case
2.20 Dynamic linking can cause security concerns because
(a) Security is dynamic
(b) The path for searching dynamic libraries is not known till runtime
(c) Linking is insecure
(d) Cryptographic procedures are not available for dynamic linking
2.21 Which combination of the following features will suffice to characterize an OS as a
multi-programmed OS? (A) More than one program may be loaded into main
memory at the same time for execution. (B) If a program waits for certain events
such as I/O, another program is immediately scheduled for execution. (C) If the
execution of a program terminates, another program is immediately scheduled
for execution.
(a) A (b) A and B (c) A and C (d) A, B and C
2.22 In the index allocation scheme of blocks to a file, the maximum possible size of
the file depends on
(a) the size of the blocks, and the size of the address of the blocks.
(b) the number of blocks used for the index, and the size of the blocks.
(c) the size of the blocks, the number of blocks used for the index, and the size
of the address of the blocks.
(d) None of the above
2.23 A B+ - tree index is to be built on the Name attribute of the relation STUDENT.
Assume that all student names are of length 8 bytes, disk blocks are of size 512
bytes, and index pointers are of size 4 bytes. Given this scenario, what would be
the best choice of the degree (i.e. the number of pointers per node) of the B+ -
(a) 16 (b) 42 (c) 43 (d) 44
2.24 Relation R is decomposed using a set of functional dependencies, F, and relation
S is decomposed using another set of functional dependencies, G. One
decomposition is definitely BCNF, the other is definitely. 3NF, but it is not known
which is which. To make a guaranteed identification, which one of the following
tests should be used on the decompositions? (Assume that the closures of F and
G are available).
(a) Dependency-preservation (b) Lossless-join
(c) BCNF definition (d) 3NF definition
2.25 From the following instance of a relation schema R(A,B,C), we can conclude that:
A B C
(a) A functionally determines B and B functionally determines C
(b) A functionally determines B and B does not functionally determines C
(c) B does not functionally determines C
(d) A does not functionally determines B and B does not functionally determines
SECTION B
This section consists of TWENTY questions of FIVE marks each. Any FIFTEEN out of
these questions have to be answered on the Answer Book provided.
3. Let A be a set of n(>0) elements. Let Nr be the number of binary relations on A
and let Nf be the number of functions from A to A.
(a) Give the expression for Nr in terms of n.
(b) Give the expression for Nf in terms of n.
(c) Which is larger for all possible n, Nr or Nf?
4. (a) S = { 1, 2 }
, 2,1 is binary relation on set A = {1,2,3}. Is it irreflexive? Add
the minimum number of ordered pairs to S to make it an equivalence
relation. Give the modified S.
(b) Let S = {a,b} and let (S) be the powerset of S. Consider the binary
relation ‘⊆ (set inclusion)’ on (S). Draw the Hasse diagram corresponding
to the lattice ( (S),⊆)
5. (a) Obtain the eigen values of the matrix
0 0 −2 104
0 0 −1
(b) Determine whether each of the following is a tautology, a contradiction, or
neither (“∨” is disjunction, “∧” is conjunction, “→” is implication, “¬” in
negation, and “↔” is biconditional (if and only if).
(i) A ↔ ( A ∨ A)
(ii) ( A ∨ B) → B
(iii) A ∧ (¬ ( A ∨ B )
6. Draw all binary trees having exactly three nodes labeled A, B and C on which
Preorder traversal gives the sequence C,B,A.
7. (a) Express the function f ( x, y , z ) = xy ′ + yz ′ with only one complement operation
and one or more AND/OR operations. Draw the logic circuit implementing the
expression obtained, using a single NOT gate and one or more AND/OR
(b) Transform the following logic circuit (without expressing its switching
function) into an equivalent logic circuit that employs only 6 NAND gates
each with 2-inputs.
8. Consider the following circuit. A = a2a1a0 and B = b2b1b0 are three bit binary
numbers input to the circuit. The output is Z = z3z2z1z0. R0, R1 and R2 are
registers with loading clock shown. The registers are loaded with their input data
with the falling edge of a clock pulse (signal CLOCK shown) and appears as
shown. The bits of input number A, B and the full adders are as shown in the
circuit. Assume Clock period is greater than the settling time of all circuits.
A B
REG R0
(6 – bit) b2 a2 b1 a1 b0 a0
EA 0
REG R1
(6 – bit) b2 a2 b1 a1
REG R2 b2 a2
(5 – bit)
z1 z0
(a) For 8 clocks pulses on the CLOCK terminal and the inputs A, B as shown,
obtain the output Z (sequence of 4-bit values of Z). Assume initial contents
of R0, R1 and R2 as all zeros.
A= 110 011 111 101 000 000 000 000
B= 101 101 011 110 000 000 000 000
Clock No 1 2 3 4 5 6 7 8
(b) What does the circuit implement?
9. Consider the following 32-bit floating-point representation scheme as shown in
the formal below. A value is specified by 3 fields, a one bit sign field (with 0 for
positive and 1 for negative values), a 24 bit fraction field (with the binary point
being at the left end of the fraction bits), and a 7 bit exponent field (in excess-64
signed integer representation, with 16 being the base of exponentiation). The
sign bit is the most significant bit.
1 24 ?
sign fraction exponent
(a) It is required to represent the decimal value –7.5 as a normalized floating
point number in the given format. Derive the values of the various fields.
Express your final answer in the hexadecimal.
(b) What is the largest values that can be represented using this format?
Express your answer as the nearest power of 10.
10. In a C program, an array is declared as float A[2048]. Each array element is 4
Bytes in size, and the starting address of the array is 0×00000000. This program
is run on a computer that has a direct mapped data cache of size 8 Kbytes, with
block (line) size of 16 Bytes.
(a) Which elements of the array conflict with element A[0] in the data cache?
Justify your answer briefly.
(b) If the program accesses the elements of this array one by one in reverse
order i.e., starting with the last element and ending with the first element,
how many data cache misses would occur? Justify your answer briefly.
Assume that the data cache is initially empty and that no other data or
instruction accesses are to be considered.
11. The following recursive function in C is a solution to the Towers of Hanoi problem.
Void move (int n, char A, char B, char C)
if (………………………………….) {
move (………………………………….);
printf(“Move disk %d from pole %c to pole %c\n”, n, A,C);
move (………………………………….);
Fill in the dotted parts of the solution.
12. Fill in the blanks in the following template of an algorithm to compute all pairs
shortest path lengths in a directed graph G with n*n adjacency matrix A.
A[i,j]equals if there is an edge in G from i to j, and 0 otherwise. Your aim in filling
in the blanks is to ensure that the algorithm is correct.
INITIALIZATION: For i = 1 … n
{For j = 1 … n
{ if A[i,j]=0 then P[i,j] = _______ else P[i,j] =____;}
ALGORITHM: For i = 1 …n
{ For j = 1 …n
{For k = 1 …n
(a) Copy the complete line containing the blanks in the Initialization step and fill
in the blanks.
(b) Copy the complete line containing the blanks in the Algorithm step and fill in
the blanks.
(c) Fill in the blank: The running time of the Algorithm is O(____).
13. (a) In how many ways can a given positive integer n ≥ 2 be expressed as the
sum of 2 positive integers (which are not necessarily distinct). For example,
for n = 3, the number of ways is 2, i.e., 1+2, 2+1. Give only the answer
without any explanation.
(b) In how many ways can a given positive integer n ≥ 3 be expressed as the
sum of 3 positive integers (which are not necessarily distinct). For example,
for n = 4, the number of ways is 3, i.e., 1+2+1, 2+1+1. Give only the
answer without any explanation.
(c) In how many ways can a given positive integer n ≥ k be expressed as the
sum of k positive integers (which are not necessarily distinct)? Give only the
answer without explanation.
14. The aim of the following question is to prove that the language {M | M is the
code of a Turing Machine which, irrespective of the input, halts and outputs a 1},
is undecidable. This is to be done by reducing form the language
{ }
M ′, x M ′ halts on x , which is known to be undecidable. In parts (a) and (b)
describe the 2 main steps in the construction of M. in part (c) describe the key
propery which relates the behaviour of M on its input w to the behaviour of M ′ on
(a) On input w, what is the first step that M must make?
(b) On input w, based on the outcome of the first step, what is the second step
that M must make?
(c) What key property relates the behaviour of M on w to the behaviour of M ′ on
15. A university placement center maintains a relational database of companies that
interview students on campus and make job offers to those successful in the
interview. The schema of the database is given below:
COMPANY (cname, clocation) STUDENT (scrollno, sname, sdegree)
INTERVIEW (cname, srollno, idate) OFFER (cname,srollno, osalary)
The COMPANY relation gives the name and location of the company. The
STUDENT relation gives the student’s roll number, name and the degree program
for which the student is registered in the university. The INTERVIEW relation
gives the date on which a students is interviewed by a company. The OFFER
relation gives the salary offered to a student who is successful in a company’s
interview. The key for each relation is indicated by the underlined attributes.
(a) Write relational algebra expressions (using only the operator ,σ,π,∪,− )
for the following queries:
(i) List the rollnumbers and names of those students who attended at least one
interview but did not receive any job offer.
(ii) List the rollnumbers and names of students who went for interviews and
received job offers from every company with which they interviewed.
(b) Write an SQL query to list, for each degree program in which more than five
students were offered jobs, the name of the degree and the average offered
salary of students in this degree program.
16. For relation R = (L, M, N , O, P), the following dependencies hold:
M O NO P P L and L MN
R is decomposed into R1 =(L, M, N , P) and R2 = (M, O).
(a) Is the above decomposition a lossless-join decomposition? Explain.
(b) Is the above decomposition dependency-preserving? If not, list all the
dependencies that are not preserved.
(c) What is the highest normal form satisfied by the above decomposition?
17. (a) The following table refers to search times for a key in B-trees and B+-trees.
B-tree B+-tree
Successful Search Unsuccessful search Successful Search Unsuccessful search
X1 X2 X3 X4
A successful search means that the key exists in the database and
unsuccessful means that it is not present in the database. Each of the entries
X1, X2, X3 and X4 can have a value of either Constant or Variable. Constant
means that the search time is the same, independent of the specific key
value, where Variable means that it is dependent on the specific key value
chosen for the search.
Give the correct values for the entries X1, X2, X3 and X4 (for example X1 =
Constant, X2= Constant, X3 = Constant, X4= Constant).
(b) Relation R(A,B) has the following view defined on it:
CREATE VIEW V AS
(SELECT R1.A,R2.B
FROM R AS R1, R AS R2
WHERE R1.B=R2.A)
(i) The current contents of relation R are shown below. What are the contents of
the view V?
A B
(ii) The tuples (2,11) and (11,6) are now inserted into R. What are the additional
tupels that are inserted in V?
18. (a) Draw the process state transition diagram of an OS in which (i) each process
is in one of the five states: created, ready, running, blocked (i.e. sleep or
wait), or terminated, and (ii) only non-preemptive scheduling is used by the
OS. Label the transitions appropriately.
(b) The functionality of atomic TEST-AND-SET assembly language instruction is
given by the following C function.
int TEST-AND-SET (int *x)
int y;
A3:return y;
(i) Complete the following C functions for implementing code for entering and
leaving critical sections based on the above TEST-AND-SET instruction.
int mutex=0;
void enter-cs()
while (…………………………………);
void leave-cs()
(ii) Is the above solution to the critical section problem deadlock free and
(iii) For the above solution, show by an example that mutual exclusion is not
ensured if TEST-AND-SET instruction is not atomic.
19. A computer system uses 32-bit virtual address, and 32-bit physical address. The
physical memory is byte addressable, and the page size is 4 kbytes. It is decided
to use two level page tables to translate from virtual address to physical address.
Equal number of bits should be used for indexing first level and second level page
table, and the size of each page table entry is 4 bytes.
(a) Give a diagram showing how a virtual address would be translated to a
physical address.
(b) What is the number of page table entries that can be contained in each
(c) How many bits are available for storing protection and other information in
each page table entry?
20. The following solution to the single producer single consumer problem uses
semaphores for synchronization.
#define BUFFSIZE 100
buffer buf[BUFFSIZE];
int first=last=0;
semaphore b_full=0;
semaphore b_empty=BUFFSIZE;
void producer()
while (1) {
produce an item;
p1: …………………..;
put the item into buff (first);
p2: …………………..;
void consumer()
Hosted at www.educationobserver.com
For more papers visit www.educationobserver.com/forum
while (1) {
take the item from buf[last];
c2: ……………………..;
consume the item;
(a) Complete the dotted part of the above solution.
(b) Using another semaphore variable, insert one line statement each
immediately after p1, immediately before p2, immediately after c1, and
immediately before c2 so that the program works correctly for multiple
procedures and consumers.
21. We require a four state automaton to recognize the regular expression
( a b ) * abb.
(a) Give an NFA for this purpose.
(b) Give a DFA for this purpose.
22. (a) Construct all the parse trees corresponding to i + j * k for the grammar
E E+E
E E*E
E id
(b) In this grammar, what is the precedence of the two operators * and +?
(c) If only one parse tree is desired for any string in the same language, what
changes are to be made so that the resulting LALR(1) grammar is non-
|
{"url":"http://www.docstoc.com/docs/143663442/Gate-10-Years-Paper-With-Answer-Key","timestamp":"2014-04-18T08:35:27Z","content_type":null,"content_length":"90947","record_id":"<urn:uuid:f8122587-181d-48cd-80e7-0cfc8d48897f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SICP of the Day 12/14
Today’s post comes after the heading “What Is Meant By Data?” The context is a rational number arithmetic package introduced at the beginning of the chapter.
We began the rational-number implementation in section 2.1.1 by implementing the rational-number operations add-rat, sub-rat, and so on in terms of three unspecified procedures: make-rat, numer,
and denom. At that point, we could think of the operations as being defined in terms of data objects — numerators, denominators, and rational numbers — whose behavior was specified by the latter
three procedures.
But exactly what is meant by data? It is not enough to say “whatever is implemented by the given selectors and constructors.” Clearly, not every arbitrary set of three procedures can serve as an
appropriate basis for the rational-number implementation. We need to guarantee that, if we construct a rational number x from a pair of integers n and d, then extracting the numer and the denom
of x and dividing them should yield the same result as dividing n by d. In other words, make-rat, numer, and denom must satisfy the condition that, for any integer n and any non-zero integer d,
if x is (make-rat n d), then
(numer x) n
--------- = -
(denom x) d
In fact, this is the only condition make-rat, numer, and denom must fulfill in order to form a suitable basis for a rational-number representation. In general, we can think of data as defined by
some collection of selectors and constructors, together with specified conditions that these procedures must fulfill in order to be a valid representation.
|
{"url":"http://derekmansen.tumblr.com/post/14232906698/sicp-of-the-day-12-14","timestamp":"2014-04-16T13:52:04Z","content_type":null,"content_length":"41658","record_id":"<urn:uuid:0439f254-def5-4198-a08e-7c08f8e39d20>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Action of the mapping class group on middle-dimensional cohomology
up vote 3 down vote favorite
Given an even dimensional manifold, the mapping class group acts on middle dimensional cohomology (or homology) and this action preserves the intersection form. For manifold of dimension $4k+2$, the
action symplectic, while it is orthogonal for manifold of dimension $4k$.
In dimension 2, it is well-known that any integral symplectic transformation on the cohomology of degree 1 can be realized by some diffeomorphism. I would like to know if this is still true in higher
dimension. I am interested mostly in the symplectic case (dimension $4k+2$).
More generally, does anyone know a good reference about mapping class groups of manifolds of dimension higher than 2? All the references I found treat exclusively the case of surfaces.
Thanks in advance.
at.algebraic-topology mapping-class-groups dg.differential-geometry
1 In this paper arxiv.org/abs/0908.4121 Misha Verbitsky considers the hyperkaehler case. – algori Aug 11 '10 at 13:48
In dimension 4 one requires stabilization M-> M#S^2XS^2 to guarantee the existence of a diffeomorphism realising a given automorphism, for a homeomorphism, however, stabilization is unnnecessary.
These results are respectively due to Wall and Freedman. As to whether this is true in higher dimensions, I'd put money on you not needing stabilization for a diffeomorphism- this would be
analogous to the situation with h-cobordisms. – Tom Boardman Aug 11 '10 at 13:51
... but the action of the mcg is on the second cohomology group (and the form that is preserved by the action is not the cup product, it's the Beauville-Bogomolov-Fujiki form). Woops... – algori
Aug 11 '10 at 13:56
Similarly whoops. My comment is missing the condition of simple connectedness. – Tom Boardman Aug 11 '10 at 14:31
add comment
2 Answers
active oldest votes
Without other assumptions, the answer is an easy no. For instance, if $N^3$ is a homology 3-sphere with infinite fundamental group, then $M^6 = N^3 \times S^3$ does not have very many
lifts of automorphisms of its middle homology, because there exists a degree one map $S^3 \to S^3$ but no map $S^3 \to N^3$ with non-zero degree. There are also obstructions from other
cup products besides the middle one, and from algebraic operations on cohomology other than cup products.
up vote 6
down vote So the question is much more reasonable if $M$ is simply connected (unless it is 2-dimensional) and has no homology other than the middle homology and at the ends. In this case, Tom
accepted Boardman says in the comment that Wall and Freedman showed that the answer is yes for homeomorphisms, although they surely assume that $M$ is simply connected. In higher dimensions, I
don't know the answer to this restricted question, but I imagine that it could be yes using surgery theory.
Whoops! You are indeed correct Greg! Schoolboy error.... :( – Tom Boardman Aug 11 '10 at 14:26
Thanks for the counterexample. Actually, I don't really want to put restrictions on the manifold. I would be much more interested in a way of computing the subgroup of the integral
orthogonal or symplectic group which is realized by diffeomorphisms. I have to think more about this. Thanks again! – Samuel Monnier Aug 11 '10 at 15:43
A smooth manifold $M$ has a group of simple homotopy equivalences, which is a complicated object but one that can be analyzed algebraically. Then, if $M$ is high-dimensional, there is
a "surgery structure set" which tells you whether a simple homotopy equivalence lifts to a diffeomorphism. If $M$ is 4-dimensional, then gauge theory is known to give you some
obstructions, but the totality of the obstructions is surely a very open problem. en.wikipedia.org/wiki/Surgery_structure_set – Greg Kuperberg Aug 11 '10 at 19:46
For some reason I missed the comments on this thread... Sorry about this. Thanks for the extra info. After some reflexion, I would be satisfied with an example of a 4k + 2 manifold
whose diffeomorphism group surjects on the symplectic group acting on the integral cohomology of degree 2k + 1. A naive way to construct candidates might be to mimick the construction
of Riemann surfaces as connected sums of tori by considering connected sums of products of 2k+1-spheres. But I fear I do not know enough topology to prove anything. – Samuel Monnier
Sep 29 '10 at 9:49
add comment
Here's an answer for simply connected (closed) 4-manifolds $X$. Freedman showed that every automorphism of the intersection form is realised by a unique (up to homotopy)
orientation-preserving homeomorphism. But the Seiberg-Witten invariants, which can be formulated as a finitely supported function $SW: H^2(X;\mathbb{Z})\to \mathbb{Z}$, are invariant under
orientation-preserving diffeomorphisms. For instance, if $X$ admits an integrable complex structure making it a general type surface with first Chern class $c$ then every diffeomorphism
preserves $c$ up to sign, because $SW(\pm c)=1$ and $SW(x)=0$ for all other $x$.
up vote
6 down [Also some general remarks about how to frame the question in higher dimensions, echoing Greg's. The group of homeomorphisms acts on the fundamental group and on the graded cohomology ring,
vote and (as Greg says) it respects all cohomology operations. The subgroup of diffeomorphisms fixes the characteristic classes of the tangent bundle; which of these are also preserved by
homeomorphisms is a subtle question. To get a tractable question about the action on middle-dimensional cohomology, it's therefore sensible to consider $(n-1)$-connected closed
$2n$-manifolds. There is a classification theorem for such manifolds due to C.T.C. Wall.]
Thanks! Actually I would be more interested in computing the subgroup realized by diffeomorphism than putting restriction on the manifold. – Samuel Monnier Aug 11 '10 at 15:45
Do you really want a general answer, or is it that you have certain manifolds in mind? Either way, the problem is traditionally divided into two parts. In the simply connected case: (1)
Which automorphisms of cohomology are realised by homotopy equivalences? (2) Which homotopy equivalences are homotopic to a diffeomorphism? Surgery theory says that for (2) it's sufficient
in high dimensions to understand two obstructions. The first concerns the relation between the tangent bundles, and the second is the obstruction to surgering an optimally-chosen cobordism
to an h-cobordism. – Tim Perutz Aug 11 '10 at 16:14
I missed the comments on this thread, very sorry about this. Thanks for the extra info. After reflexion, I would only need an example, if there is one, of a 4k + 2 manifold whose
diffeomorphism group surjects on the symplectic group acting on the integral cohomology of degree 2k + 1. See also the comment on Greg Kuperberg's answer... – Samuel Monnier Sep 29 '10 at
Oh, but that's much easier. Recall that the MCG of a closed genus $g$ surface is generated by Dehn twists along circles (and passing to homology, all of $Sp(2g,\mathbb{Z})$ arises this
way). We can mimic this on $M$, the connected sum of $g$ copies of $S^3\times S^3$. By Hurewicz, $\pi_3=H_3$. By Whitney, maps $S^3\to M$ can be approximated by immersions, and by the
Whitney trick they are homotopic to embeddings. An embedded 3-sphere $S$ has trivial normal bundle (because $\pi_2 SO(3)=0$). Choose a framing; thereby identify a neighbourhood with $TS^
3$. – Tim Perutz Sep 29 '10 at 19:54
But there is a comapctly supported diffeo of $TS^3$ which acts as the antipodal map on the zero-section, namely, a generalised Dehn twist (Picard-Lefschetz transformation). Read about
these, e.g., in Voisin's book on Hodge theory and complex geometry (vol. 2 I think; not sure). Transplant this into your manifold. Its action on homology is given by the Picard-Lefschetz
formula, just like for Dehn twists on surfaces. – Tim Perutz Sep 29 '10 at 19:56
show 1 more comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology mapping-class-groups dg.differential-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/35224/action-of-the-mapping-class-group-on-middle-dimensional-cohomology/35230","timestamp":"2014-04-17T12:50:05Z","content_type":null,"content_length":"73003","record_id":"<urn:uuid:6c09282d-18f1-4975-a890-48d4859816c1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Charging a cell phone with sound – possible?
This comes from Buzz Out Loud Episode 865 which got the story from Slashdot regarding a possible new technology that would use piezoelectric devices to charge cell phones while you talk. The original
article the slashdot story pointed to talked mostly about the advances in piezoelectric devices, but I want to look at the possibility that sound could charge a phone.
First for the basic physics. How do you make sound and what is it? Sound is a compression wave in the air. To make a sound you need something to push the air (yes, I simplified this quite a bit).
When that something pushes the air, it will have a force exerted over a distance. This means it takes energy to make sound. Recall the work definition:
The reverse can happen when this sound hits something (like a microphone). The air will push the device and move it – thus work will be done on the device. This leads to a change in energy of the
So, it takes energy to make sound and you can get energy from sound. The best way to look at this is with the intensity. Hyper Physics (basically an online textbook) has a good description of this.
The intensity of sound is the power per square meter.
If I have a sound source that makes sound uniformly in all directions, then the farther a receiver is from a source, the lower the intensity. You can think of this sound as an expanding sphere. When
the sphere expands the energy in a given amount of time (the power) is “spread” over the whole surface area of the sphere. If the original source has a power output of P, the intensity (I) will vary
as the distance with:
I think that is enough to proceed with the calculation.
The question is: how much power could you get from talking to a phone? Well – how much power could the PHONE get? How much power do you output when talking? The typical value for talking is that
normal speech is around 60 decibels. Human ears are kind of awesome in that they don’t not interpret the intensity. If they did, how would your brain comprehend a wide range of intensities. To
compensate, our ears (or brain – not sure) work on a log scale such that:
So, I need to convert human speech from human perceived loudness to real power per area.
Actually, I should write this in general terms of L (instead of 60 dB) so it can be more useful.
Where L is the loudness in decibels. Now I can use this to do some calculations. Remember, I already stated that I assumed the sound from the speaker was uniform in all directions (obviously not
true). I will also assume that the piezoelectric can convert 100% of the power from sound to electrical energy. In this calculation, I will use:
• Speech of loudness L.
• Piezoelectric device is a square with a width of d.
• The phone is a distance r from the mouth.
So to calculate this power, let me look at a normal conversation. Wikipedia lists a normal conversation from 40-60 dB at 1 meter away. Clearly, someone would not hold the phone 1 meter away. I want
the intensity at r distance away. First, I would find the intensity (see above). That is the power per square meter for a sphere that is 1 meter in radius. The total power would be the same if it was
a sphere of radius r, but it would give an intensity of:
Where I[1] is the intensity of speaking at 1 meter. If r is less than 1 meter, then the intensity will be greater. The power delivered to the cell phone will be the intensity times the area of the
cell phone (d^2). Putting this all together, I get:
Now, what values do I put in? I will put
• L = 60 dB (as stated before)
• d = 2 cm = 0.02 m (really just guessing here).
• r – 2 cm = 0.02 m (another wild guess).
If I plug these numbers in to the previous formula, this works out nice. The d^2 cancels with the r^2 and I get:
How much power does a phone need? To run, I am not sure. I would guess it would be much greater than 10^-6 watts. When transmitting, they use perhaps on the order of 1 watt. Looking at amazon for
cell phone batteries – it looks like 1000 mAh is a reasonable guess for the energy stored in a battery. Yes, this would be 1 amp-hour. If I talked into this phone, how long would it take to charge 1
amp hours? If this is a 3.7 volt battery with 1 amp-hour of charge stored, then this would be 3.7 Joules of energy. How long would it take a power supply of 10^-6 watts to get this amount of energy?
That is a long time. Yes, I made some assumptions – but it would STILL be a long time even if some things were changed. Also, this is essentially the same conclusion that they came to on the Slashdot
1. #1 Tom December 8, 2008
There’s a saying that you’d have to yell for about 8.5 years to generate as much energy as it takes to heat up a cup of coffee (8 years, 7 months and 6 days is often quoted). I remember running
the numbers, and the claim is reasonably accurate.
2. #2 at&t refurbished cell phones September 28, 2009
When did I wind up in high school ap calculus. These are some crazy formulas. Some of it made sense, than again I have been out of high school for the past ten years. You obviously kow what your
doing and talking about. Great post.
3. #3 calin June 12, 2011
Well, the time needed to charge the phone will definitely be acceptable if you go on a football stadium (117 dB), or near an airplane taking off (140 dB), or near a baby crying (110 dB) or use an
ambulance siren (120 dB), or near fireworks (163 dB) at 3 feet, etc. For example using a shotgun (170 dB) could charge the phone quite fast. But of course there is the problem of fast can the
device absorb that power.
|
{"url":"http://scienceblogs.com/dotphysics/2008/12/06/charging-a-cell-phone-with-sound-possible/","timestamp":"2014-04-17T16:07:49Z","content_type":null,"content_length":"64200","record_id":"<urn:uuid:b18a5961-311d-47d1-becf-ce7429867b63>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
XQuery/Project Euler
Project Euler is a collection of mathematical problems. Currently there are 166 so it may take some time to get through them all :-).
Add all the natural numbers below 1000 that are multiples of 3 or 5.
sum ((1 to 999)[. mod 3 = 0 or . mod 5 = 0])
Find the sum of all the even-valued terms in the Fibonacci sequence which do not exceed one million.
declare function local:fib($fibs,$max) {
let $next := $fibs[1] + $fibs[2]
if ($next > $max)
then $fibs
else local:fib(($next,$fibs),$max)
sum( local:fib((2,1),1000000)[. mod 2 = 0])
This brute-force approach recursively builds the Fibonacci sequence (in reverse) up to the maximum, then filters and sums the result.
What is the largest prime factor of the number 317584931803?
First we need to get a list of primes. The algorithm known as the Sieve of Eratosthenes is directly expressible in XQuery:
declare function local:sieve($primes as xs:integer*,$nums as xs:integer ) as xs:integer* {
if (exists($nums))
let $prime := $nums[1]
return local:sieve(($primes,$prime), $nums[. mod $prime != 0])
else $primes
{ local:sieve((),2 to 1000) }
The list of primes starts off empty, the list of numbers starts off with the integers. Each recursive call of local:sieve takes the first of the remaining integers as a new prime and reduces the list
of integers to those not divisible by the prime. When the list of integers is exhausted, the list of primes is returned.
Factorization of a number N is also easily expressed as the subset of primes which divide N:
declare function local:factor($n as xs:integer ,$primes as xs:integer*) as xs:integer* {
$primes[ $n mod . = 0]
let $n:= xs:integer(request:get-parameter("n",100))
let $max := xs:integer(round(math:sqrt($n)))
let $primes := local:sieve((),2 to $max)
{ local:factor($n,$primes) }
And the largest is
max (local:factor($n,$primes))
Sadly this elegant method runs out of space and time for integers as large as that in the problem.
Find the largest palindrome made from the product of two 3-digit numbers.
declare function local:palindromic($n as xs:integer) as xs:boolean {
let $s := xs:string($n)
let $sc := string-to-codepoints($s)
let $sr := reverse ($sc)
let $r := codepoints-to-string($sr)
return $s = $r
(for $i in (100 to 999)
for $j in (100 to 999)
return $i * $j)
Run [ takes 20 seconds]
What is the difference between the sum of the squares and the square of the sums for integers from 1 to 100?
declare function local:diff-sum($n as xs:integer) as xs:integer) {
sum (1 to $n) * sum(1 to $n)
- sum( for $i in 1 to $n return $i * $i )
This nasty brute-force method can be replaced by an explicit expression using familiar formula:
declare function local:diff-sum($n as xs:integer) as xs:integer {
let $sum := $n * ($n + 1) div 2
let $sumsq :=( $n * ($n+1) * (2 * $n +1) ) div 6
return $sum * $sum - $sumsq
Last modified on 1 July 2008, at 22:26
|
{"url":"http://en.m.wikibooks.org/wiki/XQuery/Project_Euler","timestamp":"2014-04-19T07:17:15Z","content_type":null,"content_length":"19000","record_id":"<urn:uuid:41c7bbb7-5def-4bfb-8847-b2b3d625c45f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ring and Groups
This was a challange question and I am not sure how to do this.. help would be appreciated thx
$\mbox{a) }\, (a\, +\, b)^2\, =\, (a\, +\, b)(a\, +\, b)\, =\, (a\, +\, b)a\, +\, (a\, +\, b)b$ $=\, a^2\, +\, ba\, +\, ab\, + \, b^2\, =\, a\, +\, ba\, +\, ab\, +\, b\, =\, (a\, +\, b)$ Subtract $(a
\, +\, b)$ from either side. See if that leads anywhere helpful...? $\mbox{b) }\, (ab)^2\, =\, (ab)(ab)\, =\, abab\, =\, a^2 b^2$ $a^{-1}ababb^{-1}\, =\, a^{-1}a^2 b^2 b^{-1}$ $ebae\, =\, eabe$
...where $e$ is the group's identity element. The result follows immediately.
thx for the reply and I can understand the second part and for the first part if I takeaway a+b from both sides then I will get ab = -ba but is it ok to say that it is commutative just from that step
or do I have to go further any more steps to prove it I think for the part (ii) I have to do like as follows: b(a+a) = 0; therefore a+a =0 is it right thx once again for ur help
|
{"url":"http://mathhelpforum.com/advanced-algebra/79161-ring-groups.html","timestamp":"2014-04-19T02:24:36Z","content_type":null,"content_length":"38766","record_id":"<urn:uuid:35bb65c4-e03e-492c-9b8e-88bacc8e47ea>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This page is based mainly on information from Paolo Ronzoni.
This game, also known as Pizzichino, is a version of the Italian game Tressette adapted for two players. Each player has some cards in hand and some packets of cards on the table from which the top
card can be played.
Players and Cards
This game is for two players, normally using a 40-card Italian pack with Latin suits. The cards of each suit, from highest to lowest, and their values are as follows.
│ Card │ Value │
│ 3 │ ⅓ │
│ 2 │ ⅓ │
│ 1 (ace) │ 1 │
│ Re (king) │ ⅓ │
│ Cavallo (horse) │ ⅓ │
│ Fante (jack) │ ⅓ │
│ 7 │ 0 │
│ 6 │ 0 │
│ 5 │ 0 │
│ 4 │ 0 │
It would be possible instead to use a 40-card French suited pack, with the Queen (Donna) replacing the horse.
In addition to the points for cards, the last trick is worth 1 card point extra, so that there are 11⅔ card points to play for. Since fractions of a point are rounded down, the points scored by the
two players always add up to 11.
The players take turns to deal.
The dealer shuffles the cards and divides it into eight packets of 5 cards each which are placed face down on the table. These packets should be neatly stacked so that when they are picked up only
the bottom card can be seen.
The dealer's opponent selects four of the stacks, turns two of them face up and takes the other two to form his hand. The dealer then does the same with the other four stacks. So each player has a
hand of 10 cards and two face up packets of 5 cards on the table, stacked so that only the top card of each packet is identifiable.
The hand is played out in 20 tricks, each consisting of one card from each player. The non-dealer leads to the first trick and thereafter the winner of a trick leads to the next.
When playing to a trick, you must either play a card from your hand or the top card of one of your packets. The first player to a trick may play any of these cards; the second player must play a card
of the same suit if possible.
If the two cards of a trick are the same suit, the higher card wins. If they are of different suit, the first card wins, no matter how high the second card is.
When a card is played from a packet, the next card of the packet is immediately revealed and becomes available for the owner to play in the next trick.
If the top card of a packet is a 3 or a 2 or an Ace, the player may put it in his hand at his turn to play, revealing the card under it, which then becomes available to play. These top cards are
called "pizzichi" or "spizzichi" or "stilli" in this game.
Certain combinations of honours (“buongiochi”) can be declared and scored by a player who has them together in hand. These are:
Four 3's, four 2's or four aces: 4 points
Three 3's, three 2's or three aces: 3 points
Napoletana (3, 2 and ace of a suit): 3 points
A player who is dealt one of these combinations or acquires one by taking a card into his hand from the top of a packet must declare it immediately to score the points. A player who has scored 3
points for a set of three 3's, 2's or Aces does not score anything extra if he subsequently acquires a fourth 3, 2 or Ace.
When all the cards have been played, the players count the value of the cards in the tricks they have won, plus one point for the winner of the last trick, and add these points to their scores. There
are 11 points to be won in each deal, plus any points for buongiochi.
The game ends when a player reaches 51 or more points, and the player with the higher score wins.
Some play that when the eight packets have been dealt and the players have chosen their four packets, each player may look at the bottom card of their own packets before deciding which two to pick up
to form their hand and which two to turn fact up on the table. Others require players to choose which packets to turn up and which to use as a hand without looking at any card.
Some players do not allow the buongiocchi combinations to be declared or scored. In that case the game is played to 21 or 31 points. With boungiochi it may be played to 41 or 51 points.
Some players only allow buongiocchi to be scored if they are present in a player's initial hand of 10 cards, not if they involve cards picked up from packets.
|
{"url":"http://www.pagat.com/tressette/spizzichino.html","timestamp":"2014-04-19T19:40:07Z","content_type":null,"content_length":"11428","record_id":"<urn:uuid:6093d119-a11d-4568-82a2-6f4ac54e9008>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exact Equations | Equations of Order One
The differential equation
is an if
Steps in Solving an Exact Equation
1. Let
2. Write the equation in Step 1 into the form
and integrate it partially in terms of x holding y as constant.
3. Differentiate partially in terms of y the result in Step 2 holding x as constant.
4. Equate the result in Step 3 to N and collect similar terms.
5. Integrate the result in Step 4 with respect to y, holding x as constant.
6. Substitute the result in Step 5 to the result in Step 2 and equate the result to a constant c.
|
{"url":"http://www.mathalino.com/reviewer/elementary-differential-equations/exact-equations-equations-of-order-one","timestamp":"2014-04-17T21:23:22Z","content_type":null,"content_length":"50657","record_id":"<urn:uuid:435fdb09-cfdb-4761-9e1d-63f1fd8548ce>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introduction to Pattern Recognition (CSE555)
This is the website for a course on pattern recognition as taught in a first year graduate course (CSE555). The material presented here is complete enough so that it can also serve as a tutorial on
the topic.
Pattern recognition techniques are concerned with the theory and algorithms of putting abstract objects, e.g., measurements made on physical objects, into categories. Typically the categories are
assumed to be known in advance, although there are techniques to learn the categories (clustering). Methods of pattern recognition are useful in many applications such as information retrieval, data
mining, document image analysis and recognition, computational linguistics, forensics, biometrics and bioinformatics. You may find the websites of related courses that I teach on Data Mining and
Machine Learning useful as supplementary material.
Much of the topics concern statistical classification methods. They include generative methods such as those based on Bayes decision theory and related techniques of parameter estimation and density
estimation. Next come discriminative methods such as nearest-neighbor classification, support vector machines. Artificial neural networks, classifier combination and clustering are other major
components of pattern recognition.
A course in probability is helpful as a pre-requisite.
Applications of pattern recognition techniques are demonstrated by projects in fingerprint recognition, handwriting recognition and handwriting verification.
Reference Textbooks:
(i) Pattern Classification (2nd. Edition) by R. O. Duda, P. E. Hart and D. Stork, Wiley 2002,
(ii) Pattern Recognition and Machine Learning by C. Bishop, Springer 2006, and
(iii) Statistics and the Evaluation of Evidence for Forensic Scientists by C. Aitken and F. Taroni, Wiley, 2004.
Lectures Examinations (Closed Book)
Following are the lecture overheads used in class as pdf files. 1. Mid-Term
The lectures slides are frequently updated. This course was last taught in Spring 2007. 2. Final
1. Introduction
2. Bayes Decision Theory
3. Generative Methods
1. Maximum-Likelihood and Bayesian Parameter Estimation
2. Nonparametric Techniques
1. Density Estimation
4. Discriminative Methods
1. Distance-based Methods
1. Nearest neighbor Classification
2. Metrics and Tangent Distance
3. Fuzzy Classification
2. Linear Discriminant Functions Projects
3. Artificial Neural Networks
1. Biological Motivation and Back-Propagation 1. Project1: Fingerprint Pattern Recognition
Non-Metric Methods 2. Project2: Arabic Handwritten Word Recognition
1. Recognition with Strings 3. Project3: Writing Style Classification using SVM and Fisher Linear Discriminant
2. String Matching
Algorithm-Independent Machine Learning
Unsupervised Learning and Clustering
1. Unsupervised Learning and Clustering
1. Normal Distribution
2. Statistical Tests
|
{"url":"http://www.cedar.buffalo.edu/~srihari/CSE555/","timestamp":"2014-04-16T04:23:21Z","content_type":null,"content_length":"10024","record_id":"<urn:uuid:7fa8433d-de48-459a-8b88-44f1d05741e7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Equilibrium - The physics of a clothesline.
ok perfect, thank you. As part of my solution I'm asked to find the x and y components but I'm not sure how to do so, can anyone help me? they want me to use these five steps to solve the problem,
the steps are as follows:
Step 1: select the object to be studied.
Step 2: draw a "free-body diagram" for each object chosen.
Step 3: choose a set of x and y axes for each of the objects being analyzed, and resolve the free-body diagram into components that point along these axes. (this is the step I'm having issues with).
Step 4: set up the equations in such a way that the sum of the x-components of the forces is zero, and the sum of the y-components is also equal to zero.
Step 5: solve the equations for the unknown quantities you are looking for.
Again, thanks for your help.
|
{"url":"http://www.physicsforums.com/showthread.php?p=4217074","timestamp":"2014-04-21T09:40:25Z","content_type":null,"content_length":"39396","record_id":"<urn:uuid:38086c51-0d4f-476d-97d4-e76ba9cdcf7c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SI-LIST] : Embedded microstrip calculations, Ultracad Calculator
Doug Brooks (doug@eskimo.com)
Thu, 11 Dec 1997 09:27:52 -0500
I apologize, but I should have carried this argument one step
further .........................
To repeat:
I quote from IPC-D-317, Design Guidelines for Electronic Packaging
Utilizing High-Speed Techniques, p 22
5.5.2 Embedded Microstrip Line .... The equations for embedded
microstriplines are the same as in the section on (uncoated) microstrip,
with a modified effective permittivity..... the effective permittivity
can be determined as in sction 5.2
Section 5.2 (equation 5.17 on p 17) gives this relationship as
E'r = Er[1 - exp(-1.55H1/H) ]
if H1 becomes infinite, the exp term goes to zero and E'r becomes Er
Therefore, according to this reference, which I relied on for the
calculator, the results ARE THE SAME for microstrip and embedded microstrip
if the thickness of the coating is very thick.
To Continue -------------------------
Now, if
E'r = Er[1 - exp(-1.55H1/H)]
if H1 > H then the exp term is <1
Therefore E'r < Er (which makes sense, because it will be between
the Er of the material below and Er of air, which is 1. So, it will
be between 1 < E'r < Er
Now Zo is an inverse function of the square root of Er
So if E'r goes down, Zo will go up (not down as Arpad alleges)
I rely, for my source, on the referenced IPC manual
I believe you all will find our little calclator (AND its Help
file where all this is disclosed and referenced) a useful
addition to your tool set.
Doug Brooks
UltraCAD Design, Inc.
|
{"url":"http://www.qsl.net/wb6tpu/si-list2/pre99/0885.html","timestamp":"2014-04-19T04:21:02Z","content_type":null,"content_length":"3745","record_id":"<urn:uuid:ea66ea5b-a781-4daa-bf7c-c5c572227827>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Deallocating Memory allocated to a pointer to a pointer
Deallocating Memory allocated to a pointer to a pointer
Hello. Please help me out with the following code...
class MATRIX
float **x;
int m,n;
MATRIX(int, int);
void getdata();
void showdata();
m=1; n=1; x=new float*[1]; x[0]=new float[1]; x[0][0]=0;
MATRIX::MATRIX(int a, int b)
int ctr,row,col;
m=a; n=b;
x=new float*[m];
for(ctr=0; ctr<m; ++ctr)
x[ctr]=new float[n];
for(row=0; row<m; ++row)
for(col=0; col<n; ++col)
//what do I type here?
I am unable to figure out what code I should use in the destructor to deallocate the memory allocated to the pointer to a pointer x.
Please help out.
Like so
for(ctr=0; ctr<m; ++ctr)
delete [] x[ctr];
delete [] x;
Hey, thanks for the code. The code I submitted was actually part of a bigger problem.
Supposing I include a friend function which returns a value of type MATRIX in the class definition, I seem to get erroneous results if I include the destructor, and the program works correctly if
I do not include the destructor. Can anyone tell me why that is the case?
For anyone who might be interested, my code is attached (it's not very long).
Try calling showdata() directly from `mat'. Like this: mat.showdata(); It works fine for me, and besides, your assign function is completely pointless as far as I can see.
Your assign() only returns a shallow copy of the matrix, not a deep copy.
So you end up with is either
temp = assign(mat); // this is a shallow copy
// destructor for mat gets called, just after the last reference to the object
temp.showdata(); // uses the same pointer you just destroyed.
// destructor for temp gets called, but you already deleted it
temp = assign(mat); // this is a shallow copy
temp.showdata(); // uses the same pointer
// destructor for mat gets called
// destructor for temp gets called, but you already deleted it
Hello tomcant. The program I attached is for demonstration purposes only. The actual function is different. I just wanted to find out where I was going wrong.
Hey Salem, looks like you cleared my doubt. Thanks a lot. Now that I know what the problem is, is there any way to get around it?
> is there any way to get around it?
Yeah, implement proper copy and assignment functions which know how to replicate the internal structure of your class. Most decent books and tutorials should be able to provide you with the
Thanks Salem, that helped. The use of a copy constructor solved my problem.
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/73990-deallocating-memory-allocated-pointer-pointer-printable-thread.html","timestamp":"2014-04-16T08:45:59Z","content_type":null,"content_length":"11895","record_id":"<urn:uuid:7feba065-b2f7-4d3c-b25c-f0ca69e62677>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why is the fibered coproduct of affine schemes not affine?
up vote 4 down vote favorite
I am confused about the following issue:
Let $X=SpecS$, $U_1=SpecR_1$, $U_2=SpecR_2$. and suppose we have maps $S \rightarrow R_1$, $S \rightarrow R_2$. Let $U_3=Spec (R_1 \otimes_S R_2)$. We have scheme maps $U_1 \rightarrow X$, $U_2 \
rightarrow X$, $U_3 \rightarrow U_1$, $U_3 \rightarrow U_2$. The particular situation I have in mind is when $U_1$ and $U_2$ are distinguished (corresponding to localization of S at some element)
open subschemes of $X$. Intersection of $U_1$ and $U_2$ is $U_3$, and the inclusion of $U_3$ in $X$ corresponds to $S$-algebra structure on $(R_1 \otimes_S R_2)$.
The category of affine schemes (ASch) is the opposite category of commutative rings (CRing). In CRing kernels (equalisers) of pairs of maps and products exist, so by a lemma from category theory
limits should exist, in particular fibered coproducts should exist, so union of two affine schemes $U_1$ and $U_2$ over $U_3$ should be affine scheme $U_4$! But we know that in general it is not so!
Maybe the problem is that abstractly it is an affine scheme but what is it's inclusion map into X? Actually there exists an obvious map on the ring side from $S$ to kernel of a pair of maps $R_1 \
rightarrow (R_1 \otimes_S R_2)$, $R_2 \rightarrow (R_1 \otimes_S R_2)$.
Thank you!
ag.algebraic-geometry ac.commutative-algebra
I think some of the maps in your opening paragraph are backwards. If you want to discuss coproducts of schemes, then the maps on rings should be $R_1 \to S$ and $R_2 \to S$ . (As in Andreas's
example.) There may be some later typos of this sort, I'm not sure. – David Speyer Jun 24 '10 at 2:26
Just keep in mind in which categories you talk about coequalizers. In the category of affine schemes, well, the coequalizer is of course affine (and it exists). But probably you want to think about
the coequalizer of schemes, whose objects happen to be affine. See also mathoverflow.net/questions/9961/colimits-of-schemes and (sorry!) mathoverflow.net/questions/23478/… ;-) – Martin Brandenburg
Jun 24 '10 at 9:01
add comment
2 Answers
active oldest votes
The short answer is that the category of affine schemes does have pushouts, but these are not the same as pushouts of affine schemes calculated in the category of all schemes.
For a longer answer, consider an example that's small enough to compute: The projective line (over the complex numbers, say) is not affine but it is gotten by gluing two copies of the affine
line along the punctured affine line, so it is the pushout (in the category of schemes) of a diagram of affine schemes. Now what's the pushout of that same diagram in the category of affine
schemes? Well, the rings involved are two copies of $C[x]$ and a copy of $C[x,x^{-1}]$. The two maps are the two injections of $C[x]$ into $C[x,x^{-1}]$, one sending $x$ to $x$, and the
up vote other sending $x$ to $x^{-1}$. The pullback of these, in the category of commutative rings, is just $C$, because the only way a polynomial in $x$ can equal a polynomial in $x^{-1}$ is for
22 down both of them to be constant. Therefore, the affine-scheme pushout is not the projective line but a point.
Intuitively, if you glue together two copies of the line along the punctured line "gently," allowing the result to be non-affine, you get the projective line, but if you demand that the
result be affine then your projective line is forced to collapse to a point.
Thanks Andreas! My confusion is resolved. So when we calculate global sections of structure sheaf over some open set $U$ (not necessarily on affine scheme) which we can write as a union
of open affines ${U_f}$, whose sections are known, we can calculate a limit (pull-back) of corresponding diagram of rings? – Mikhail Gudim Jun 24 '10 at 6:12
1 This is essentially the definition of a sheaf. – Martin Brandenburg Jun 24 '10 at 9:02
I never realized this simple fact, but it is quite surprising! – Andrea Ferretti Jun 25 '10 at 10:21
add comment
From the categorical point of view the situation is the following. We have the category of affine schemes $AffSch$, the category of all schemes $Sch$, and the inclusion functor $i:AffSch \to
Sch$. The functor $i$ has a left adjoint functor $i^*:Sch \to AffSch$, $S \mapsto Spec \Gamma(S,{\mathcal{O}}_S)$, which is sometimes called the affine envelope.
Now, the coequalizer (the pushout) by definition is the object which corepresents a certain contravariant functor to $Sets$. Note that whenever we have a functor $i:C \to D$ having a left
up vote adjoint and a contravariant functor $F:D \to Sets$, if $F$ is corepresentable by an object $X$, then $F\circ i$ is corepresentable as well and the corepresenting object is $i^{\ast}(X)$.
4 down Indeed, if $F(Y) = Hom(X,Y)$ then $$F(i(Z)) = Hom(X,i(Z)) = Hom(i^{\ast}(X),Z).$$ Applying this to the situation of the first paragraph we see that the restriction of a corepresentable (by
vote an scheme $S$) functor from $Sch$ to $AffSch$ is correspresentable by $i^{\ast}(S)$. In particular, the coequalizer in the category of affine schemes is the affine envelope of a coequalizer
in the category of all schemes.
Oh, I see. Thanks very much, you made it very clear for me! – Mikhail Gudim Jun 27 '10 at 21:46
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry ac.commutative-algebra or ask your own question.
|
{"url":"https://mathoverflow.net/questions/29306/why-is-the-fibered-coproduct-of-affine-schemes-not-affine/29311","timestamp":"2014-04-18T03:44:18Z","content_type":null,"content_length":"64258","record_id":"<urn:uuid:457ffb78-fed8-4c95-a223-ebfde55d4a49>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geographic Information System (GIS) Analysis Interview Questions and Answers
This page contains the collection of Geographic Information System (GIS) Analysis Interview Questions and Answers / Frequently Asked Questions (FAQs) under category Analysis. These questions are
collected from various resources like informative websites, forums, blogs, discussion boards including MSDN and Wikipedia. These listed questions can surely help in preparing for Geographic
Information System (GIS) Analysis interview or job.
What is spatial interpolation?
Which of the following are considered the main problems facing overlay operations in GIS?
Assuming a pair of binary raster data layers, which of the following could be used as the equivalent of a Boolean AND overlay in cartographic modelling?
What is point-in-polygon overlay?
Which of the following overlay methods would you use to calculate the length of road within a forest polygon?
Which of the following could you use a buffer operation?
What is reclassification?
What is Manhattan distance?
Which of the following spatial interpolation techniques is an example of a local, exact, abrupt, and deterministic interpolator?
What is the difference between slope and aspect?
What is location-allocation modelling?
What is the possible number of combinations in which a delivery van can visit five different points on a network?
For the Happy Valley ski resort example, which GIS analyses could be used to determine which hotels are within 200m of a main road.
A buffer zone around a point feature will be a circle.
Filtering is used on raster data to change the value of a cell based on the attributes of neighboring cells.
Filtering could be used to smooth noisy data caused by problems with data collection devices.
The Jordan method used for point in polygon analysis is also known as the Intersect method.
It is an ecological fallacy to assume that all the individuals within a defined area have the same level of income.
Exact interpolation methods are so called because they give very accurate results.
The most common use of Theissen s Polygons is to create contour lines.
Slope can be calculated from the formula S = b2 - c2.
Ray tracing is a technique used in network analysis.
ZVI is the abbreviation for Zone of Varying Intensity.
What is spatial interpolation?
► 1. The process of establishing values for areas outside the boundary of an existing set of data points.
► 2. The process of modelling spatial pattern from a set of one or more data layers
► 3. The process of establishing values for areas between an existing set of discrete observations
► 4. The process of establishing a statistical relationship between two spatially correlated variables
The process of establishing values for areas between an existing set of discrete observations.
Which of the following are considered the main problems facing overlay operations in GIS?
► 1. Selecting threshold criteria
► 2. Topological inconsistencies
► 3. The Modifiable Arial Unit Problem (MAUP)
► 4. Processing overheads
► 5. Visual complexity
Selecting threshold criteria
The Modifiable Arial Unit Problem (MAUP)
Visual complexity
Assuming a pair of binary raster data layers, which of the following could be used as the equivalent of a Boolean AND overlay in cartographic modelling?
► 1. Layer 1 – layer 2
► 2. Layer 1 + layer 2
► 3. Layer 1 / layer 2
► 4. Layer 1 * layer 2
Layer 1 – layer 2
Layer 1 + layer 2
Layer 1 * layer 2
What is point-in-polygon overlay?
► 1. A method interpolating point data
► 2. An overlay method used to determine which points lay within the boundary of a polygon
► 3. An overlay method used to determine the distance between a point and it is nearest neighboring polygon
► 4. An overlay method used to reclassify polygon data
An overlay method used to determine which points lay within the boundary of a polygon
Which of the following overlay methods would you use to calculate the length of road within a forest polygon?
► 1. Erase
► 2. Union
► 3. Line-in-polygon
► 4. Point-in-polygon
Which of the following could you use a buffer operation?
► 1. Calculating the distance from one point to another
► 2. Calculating the area of overlap between two polygon data layers
► 3. Determining the area within a set distance from a point, line or area feature
► 4. Calculating the number of observations within a set distance of a point, line or area feature
Determining the area within a set distance from a point, line or area feature
Calculating the number of observations within a set distance of a point, line or area feature
What is reclassification?
► 1. The process of combining one or more data ranges into a new data range to create a new data layer.
► 2. The process of combing two or more data layers
► 3. The process of simplifying data in a data layer
► 4. An analytical technique based on point data.
The process of combining one or more data ranges into a new data range to create a new data layer.
What is Manhattan distance?
► 1. The distance between two points in a raster data layer calculated as the number of cells crossed by a straight line between them.
► 2. The distance between two points in a vector data layer calculated as the length of the line between them.
► 3. The distance between two points in a raster data layer calculated as the sum of the cell sides intersected by a straight line between them.
The distance between two points in a raster data layer calculated as the sum of the cell sides intersected by a straight line between them.
Which of the following spatial interpolation techniques is an example of a local, exact, abrupt, and deterministic interpolator?
► 1. Thiessen polygons
► 2. TIN
► 3. Spatial moving average
Thiessen polygons
What is the difference between slope and aspect?
► 1. Slope is the gradient directly down the fall line, while aspect is the direction of the fall line relative to north.
► 2. Slope is the direction of the fall line, while aspect is the gradient of the fall line.
► 3. Slope is the distance down the fall line from the top of the slope to its bottom, while aspect is the percentage gradient of this line averaged over its full distance.
► 4. Slope is the gradient of the fall line relative to vertical, while aspect is the direction of the fall line relative to the line of greatest slope.
Slope is the gradient directly down the fall line, while aspect is the direction of the fall line relative to north.
What is location-allocation modelling?
► 1. A method of site location based on overlaying multiple siting criteria maps.
► 2. A method of allocating resources within an area of interest using buffer analyses
► 3. A method of matching supply with demand across a network by locating a limited set of resources using network analysis
► 4. A method within network analysis used to determine delivery routes
A method of matching supply with demand across a network by locating a limited set of resources using network analysis
What is the possible number of combinations in which a delivery van can visit five different points on a network?
► 1. 25
► 2. 120
► 3. 10
► 4. 3125
For the Happy Valley ski resort example, which GIS analyses could be used to determine which hotels are within 200m of a main road.
► 1. Union overlay and line-in-polygon overlay
► 2. Buffer analysis and erase overlay
► 3. Buffer and point-in-polygon overlay
► 4. Intersect overlay and buffer analysis
► 5. Proximity analysis and reclassification
Proximity analysis and reclassification
Buffer and point-in-polygon overlay
A buffer zone around a point feature will be a circle.
► 1. True
► 2. False
Filtering is used on raster data to change the value of a cell based on the attributes of neighboring cells.
► 1. True
► 2. False
Filtering could be used to smooth noisy data caused by problems with data collection devices.
► 1. True
► 2. False
The Jordan method used for point in polygon analysis is also known as the Intersect method.
► 1. True
► 2. False
It is an ecological fallacy to assume that all the individuals within a defined area have the same level of income.
► 1. True
► 2. False
Exact interpolation methods are so called because they give very accurate results.
► 1. True
► 2. False
The most common use of Theissen s Polygons is to create contour lines.
► 1. True
► 2. False
Slope can be calculated from the formula S = b2 - c2.
► 1. True
► 2. False
Ray tracing is a technique used in network analysis.
► 1. True
► 2. False
ZVI is the abbreviation for Zone of Varying Intensity.
► 1. True
► 2. False
|
{"url":"http://www.questions-interviews.com/analysis/gis-analysis.aspx","timestamp":"2014-04-17T18:23:29Z","content_type":null,"content_length":"81280","record_id":"<urn:uuid:80360502-57da-4d06-a1dc-f0c4e36596af>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bicycle Safety - The Math of Speed
07-13-13, 07:02 AM #1
Senior Member
Join Date
Oct 2012
Upstate NY
My Bikes
Various "modded" eZips and multiple econversions
0 Post(s)
0 Thread(s)
Bicycle Safety - The Math of Speed
Faster is Safer!
My Sister-in-Law just can't understand, why I feel that going faster, on a bicycle, is safer. "30 mph!" ... "You're gonna kill your self!". ...
I feel it necessary to prove that, up to the speed of surrounding traffic, faster is safer. Let me try a mathematical approach.
First, let me qualify;
1. My riding is in an urban area and 95% of the streets-roads are 30 mph limit.
2. I ride on the right side of the road, going "with traffic", as is the legal method.
For ease of math - Let's figure a 10 mile trip, w/traffic @ 10 cars per minute.
30 mph traffic:
At 10 mph -
60min x 10cars - 1/3 (for 1/3 speed of cars) = 400 cars passing you at 20mph.
At 15 mph -
40min x 10cars - 1/2 (for 1/2 speed of cars) = 200 cars passing you at 15mph.
At 20 mph -
30min x 10cars - 2/3 (for 2/3 speed of cars) = 100 cars passing you at 10mph.
AND, cars have twice the time to notice, and avoid, you! (vs 10 mph).
At 25 mph -
24min x 10 cars -5/6 (for 5/6 speed of cars) = 40 cars passing you at 5mph.
At 30 mph -
20min x 10cars - 3/3 (for 3/3 speed of cars) = 0 cars passing you!
(Math is simplified - but "sound")
When you consider that many bike accidents are directly related to passing cars, especially in a "road" environment, then 20 mph would be (4 times safer than 10 mph) x (2 - twice the time the,
approaching, driver has to see biker) = 8 times safer @ 20 mph, compared to 10 mph!
Most impressive is that each speed increase of 5 mph reduces the volume of passing traffic by 50%!
A__hole factor! Everyone might agree that, possibly, 1 in 100 motorists are AHs toward bicyclists, (Conservative Estimate!), Going 10 mph you'll get passed by 4, only 1 @ 20 mph and at 30 mph you
might never encounter 1.
"Best of all! ... I get to play. ... http://www.bikeforums.net/attachment...&thumb=1&stc=1
Sorry! ... I'm addicted to improving enhancing.
With side orders of inspiring enlightening!"
DrkAngel, I agree with you that going faster is safer. However, I've noticed cars don't like the idea of a bicycle either maintaining their same speed and/or catching up to them at the stop
Imagine if you're in a car and see a cyclist maintaining speed with you. You probably start wondering why am I paying for gas, car payments, insurance, etc. when this cyclist is going to get to
the same destination about the same time as me. It's like watching someone get a 50% discount when you just paid full price. It would make you upset and angry.
When you ride your bicycle at 30 mph you remind car drivers they just paid full price.
There is the other issue of reminding car drivers that gas prices are only going to go up and that they are stuck with being a forced consumer. Eventually, they will entertain the idea of riding
a bicycle and that probably frightens them too.
The positive car drivers will actually see a light at the end of the tunnel witnessing you travel at 30 mph. I've always believed that traveling at 30 mph conveys to the driver that there are
other transportation methods besides a car. They no longer see the bicycle as just a fun recreational activity but more of a legitimate form of transportation.
Many car drivers want to ditch their vehicles but they just don't know what other options are out there. When they see me carrying groceries and traveling at 30 mph it's like a light has come on
in their head.
If we don't make changes to our use and abuse of energy, Mother Nature will make it for us.
Be a part of the solution to Global Warming instead of a contributor.
Provided you don't wipe out - Yes, speed to keep up with traffic is good and helps earn you some respect. The same is true with a pedal only bike by the way. I can break above 20-mph for short
spurts on my pedal only "bus bike" (ride the bus with the bike in the rack between towns and then ride the bike around town) and doing so really helps with safety and getting a little more
respect when I ride around town in traffic riding "All In" technique center of lane and holding my position in the traffic cue like I was a motorcycle in stop and go 25-mph speed limit square
grid traffic patterns.
Now when it comes to wiping out - speed is your enemy even more then weight since weight is linear to kinetic energy in a wreck where as velocity is exponential to the square. Wipe out at 20-mph
instead of 10-mph and you hit with four times the kinetic energy, wipe out at 30-mph instead of 10-mph and you hit with nine times the kinetic energy.
To be specific, energy in a moving object: 0.5 x mass x velocity^2
But I agree with the OP that I feel safer when my speed is closer to that of motor vehicles.
My point is the faster you go, the more carefully you need to ride because the harder you go down if you go down.
As to keeping up with traffic actually ticking off some drivers, that is usually only the case if you filter up, if you don't filter up at the reds and stops and instead hold your position in the
traffic cue it won't make as many drivers mad and actually make a lot of them respect you more when they see that you can keep up with them and hold your own.
If you filter up and as a result you are either making better time then they are and getting further and further ahead of them at every light or worse yet they have to pass you over and over
again (yes in the mental state of 99% of all car drivers when it comes to a bicycle in the road ahead of them they do have to pass even if it is in the middle of a block between stop lights and
they know your just going to filter up again), now that is what gets them ticked off. They will blame you for constantly getting in their way and it just about makes them go critical mass nuclear
with a lot of dirty fallout.
Long story short, if your moving fast enough to keep up with traffic, don't wipe out, and hold your position in the cue at the reds and stops rather then filter up because doing so can make some
drivers go nuclear on you when they end up passing you multiple times and more then minus out any safety gains you may have made by keeping up with traffic better with more speed. Now if you can
make good enough time to outrun them altogether, then it is worth considering filtering up but use some discretion on that as well.
Last edited by turbo1889; 07-13-13 at 08:32 AM.
Interesting way to think about it. Maybe I'll start looking at 2kw motor again.
Well I don't think you can assume that by going 30 mph no one is going to pass you. If you assume there are some a_hole's out there, then they'll certainly pass you at 30 mph. EBikeFL kind of
touches on it. If someone is an a_hole and sees a bike going 30 mph they might be upset and pass you dangerously to prove a point.
But I do agree with you that keeping up with traffic makes an aspect of riding safer. On my route to work I often take a different route in one section than the route I take coming home. The
reason? Going to work the one street that's a more direct route is uphill (about 3% to 4% grade) on a narrow road and I don't want to be going 15 to 20 km/hr up this road with traffic stuck
behind me getting pissed off. But going home it's downhill and with the slight downhill I can easily go over 40 km/hr and I find traffic doesn't mind being behind me for the one block stretch the
road is narrow. No one ever honks or tries to pass, and I move over as soon as the road widens.
Well I don't think you can assume that by going 30 mph no one is going to pass you. If you assume there are some a_hole's out there, then they'll certainly pass you at 30 mph. EBikeFL kind of
touches on it. If someone is an a_hole and sees a bike going 30 mph they might be upset and pass you dangerously to prove a point.
For some reason, I suspect that a_hole is going to pass dangerously whether you're doing 10mph or 30mph.
Well I don't think you can assume that by going 30 mph no one is going to pass you. If you assume there are some a_hole's out there, then they'll certainly pass you at 30 mph. EBikeFL kind of
touches on it. If someone is an a_hole and sees a bike going 30 mph they might be upset and pass you dangerously to prove a point.
I've had this happen on more than one occasion. Drivers become fascinated with my e-bike and slow down to get a closer look but piss off the drivers behind them in doing so. Then when they
finally pass me, all the drivers behind them decide to take their frustration out on me.
I had one driver nearly run me off the road, then pointed to the sidewalk when I caught up to him.
If we don't make changes to our use and abuse of energy, Mother Nature will make it for us.
Be a part of the solution to Global Warming instead of a contributor.
Long story short, if your moving fast enough to keep up with traffic, don't wipe out, and hold your position in the cue at the reds and stops rather then filter up because doing so can make some
drivers go nuclear on you when they end up passing you multiple times and more then minus out any safety gains you may have made by keeping up with traffic better with more speed. Now if you can
make good enough time to outrun them altogether, then it is worth considering filtering up but use some discretion on that as well.
You make a good point turbo1889. I don't filter when I stop at stoplights. I do take the center lane and hold it until I get across the intersection. I had a motorcycle filter up and pass me and
several cars to make a right hand turn once. I couldn't believe what I was seeing, there was very little room for him to navigate but he did it anyway.
If we don't make changes to our use and abuse of energy, Mother Nature will make it for us.
Be a part of the solution to Global Warming instead of a contributor.
It may sound picky, but not necessarily. mass x g x height is the same at any speed. That's not just snarky because I'd rather fall with a moderate forward speed than a dead stop - easier for
either a shoulder roll or forward roll. Hitting an obstruction or another vehicle you (and the other comments) would be right.
Which is all to say that travelling 30 as opposed to 15 isn't all that much more dangerous with respects to falls. Control, cornering and collisions is another story.
It may sound picky, but not necessarily. mass x g x height is the same at any speed. That's not just snarky because I'd rather fall with a moderate forward speed than a dead stop - easier for
either a shoulder roll or forward roll. Hitting an obstruction or another vehicle you (and the other comments) would be right.
Which is all to say that travelling 30 as opposed to 15 isn't all that much more dangerous with respects to falls. Control, cornering and collisions is another story.
The worst (single vehicle) bicycle wipe out I've had so far was when I lost it going down-hill on a pedal only mountain bike at about 40-mph on a gravel back-road (I was a lot younger and
stupider back then) and I absolutely guarantee you that it was far worse at speed and I would have much preferred to be going a lot slower when I took that spill (hindsight being a lot clearer
and wiser).
You didn't hit the ground any harder for the speed. You'll get more road rash at the higher speed if you're sliding, and probably more impacts if you're bouncing. I've hit the ground at 60
without a scratch or bruise, and at 10 with some pretty serious contusions. Snapped my collarbone on impact with a curb at 25 mph (last year
Open Road Edition
I've demonstrated how faster is safer, in a 30 mph traffic environment. But "On the road", with higher speed traffic, is where the most concern about passing vehicles exists. How does speed
effect your risk in a 60 mph traffic situation.
First, let me qualify;
1. Riding is in an rural area and 95% of the roads are 55 mph limit.
2. I ride on the right side of the road, going "with traffic", as is the legal method.
For ease of math - Let's figure a 10 mile trip, w/traffic @ 10 cars per minute.
60 mph traffic:
At 10 mph -
60min x 10cars - 1/6 (for 1/6 speed of cars) = 500 cars passing you at 50mph.
Driver has 7 seconds to notice & accommodate biker.
At 15 mph -
40min x 10cars - 1/4 (for 1/4 speed of cars) = 300 cars passing you at 45mph.
At 20 mph -
30min x 10cars - 1/3 (for 1/3 speed of cars) = 200 cars passing you at 40mph.
Driver has 9.5 seconds to notice & accommodate biker. Cars have approx. 1.4 times the time to notice, and avoid, you! (vs 10 mph).
At 25 mph -
24min x 10 cars - 5/12 (for 5/12 speed of cars) = 140 cars passing you at 35mph.
At 30 mph -
20min x 10cars - 1/2 (for 1/2 speed of cars) = 100 cars passing you at 30 mph!
Driver has 12 seconds to notice & accommodate biker.
(Math is simplified - but "sound")
When you consider that, in "open road" conditions, most bike collisions are directly related to passing cars, then 20 mph would be (2.5 times safer than 10 mph) x (1.4, the time the, approaching,
driver has to see biker) = nearly 4 times safer @ 20 mph, compared to 10 mph!
30 mph would be (5 times safer than 10 mph) x (2, the time the, approaching, driver has to see biker) = 10 times safer @ 30 mph, compared to 10 mph!
Note: Some of the math is approximated, fairly accurate, but will modify if deemed necessary.
Most impressive is that every bit of speed increase greatly reduces the volume of passing traffic and therefore increases the safety factor!
A__hole factor! Everyone might agree that, possibly, 1 in 100 motorists are AHs toward bicyclists, (Conservative Estimate!), Going 10 mph you'll get passed by 5, only 1 @ 30 mph.
"Best of all! ... I get to play. ... http://www.bikeforums.net/attachment...&thumb=1&stc=1
Sorry! ... I'm addicted to improving enhancing.
With side orders of inspiring enlightening!"
There is one additional "safer" factor you didn't calculate. Under the riding conditions you specify if someone does hit you the speed differential comes into play.
Namely if some knuckle head high speed heavy vehicle operator either runs right into the rear of you on your bike or passes so closely there is a sliding physical contact between the right side
of their car and your left side and the protrusions such as the mirror, door handle, and trim make multiple impacts with your bike and body along with road rash like burns from the sliding
contact with the main smooth body of the car. That kind of "accident" (in quotes for a reason, criminal negligence on the part of the passing vehicle would be more like it) is known as "getting
sliced" up here where I live and is a common danger to avoid on the kind of roads you are talking about.
Anyway, long story short if we assume the car is going 60-mph.
----- If a car either rear-ends you or "slices" you on a too close pass and you are going 10-mph the speed of the impact is 50-mph
----- If a car either rear-ends you or "slices" you on a too close pass and you are going 20-mph the speed of the impact is 40-mph
----- If a car either rear-ends you or "slices" you on a too close pass and you are going 30-mph the speed of the impact is 30-mph
Providence-Forbid, if one of the drivers of those faster moving heavy vehicles on the road either plows into you from behind or "slices" you on a too close pass the faster you are going the lower
the speed of that impact will be when they hit you and thus the better your chances of avoiding death and/or reducing injuries from the initial impact.
So, provided you are riding on the correct side of the road and not being a salmon (speed gets added rather then subtracted in that case, which is part of the reason you shouldn't ride like that)
then you have an additional safety cushion if you are moving at higher speed if someone actually does hit you.
If you ride correctly with traffic the faster you ride and close the gap between the speed of other traffic on the road the better off you are if they hit you. The reverse is true as far as you
hitting them (be careful when drafting cars, sometimes they can brake before you can and if you don't leave enough room you can rear-end them, the only at-fault collision I have been in on a bike
was when I rear-ended a vehicle in front of me and split a foam bike helmet in half when my head hit the rear of their vehicle, and I was riding a pedal only bike !!!).
Last edited by turbo1889; 07-14-13 at 12:50 AM.
You're assuming there is only one type of accident scenario between cars and bikes. Wrong, there's at least 57 scenarios and for the other 56 you are safer if you go slower.
Bicycle Safety - The Math of Speed - Other Traffic
Exaggerated ... but ... Good point!
Let's compare other traffic situations:
Biker at 10 mph vs 20 mph in a 30 mph traffic situation.
Per mile - 10 mph biker will be:
1. passed by 4x (times) as many vehicles =
a. 4x the possibility of hit, or swipe x (2x impact speed)
b. 4x the possibility of "right cross" **
2. 2x the volume of oncoming traffic =
a. 2x the possibility of "left cross" **
b. 2x possibility of head on x (.8 impact speed)
(30 + 10 mph vs 30 + 20 mph = 80% impact speed)
3. 2x the volume of cross traffic, sidestreets, driveways etc. =
a. 2x the possibility of cross traffic collision **
Twice as long, being a target, in the intersections!
Note: Actual percentages listed where available. Other impacts are highly variable due to possible angle and bike into vehicle or vehicle into bike.
** Speed, or severity, of impact will vary, from 50% to 100% (possibly higher).
Best case is 50% impact speed of 10 mph biker into side of vehicle.
Worst case would be, side impact of biker by car, 100% impact speed. Possibility of being "run over" might be 2x, for the 10 mph biker. (Momentum of 20 mph biker is much more likely to carry him
past the car = much greater chance of not being under car!)***
(Same direction impact already established at 4x possibility & 200% speed-severity.)
*** 20 mph Biker possibility of impact is approx. 25% to 50% that of the 10 mph Biker.
Additionally, 20 mph Biker is 2x as likely to strike the vehicle while the 10 mph Biker is 2x as likely to be struck by vehicle. (Applicable to all, except same direction & head-on!) Possibility
of 10 mph Biker going under vehicle is MUCH greater!
The final, measurable, variable might be, "time to see", (tts), the biker. (10 mph biker) While following traffic only has .5x the tts, oncoming traffic has 1.25x the tts, and the cross traffic
has 2x the tts.
The additional factor of faster motion being more noticeable, especially in the peripheral vision area, should be added, but, I'm afraid, assigning percentages would be sheer speculation.
(Peripheral vision is much more attuned to detecting motion, as well as light, especially flashing light. Another good reason for a "strobe" headlight, during the day.)
Personally, I believe, faster still looks a whole lot better-safer.
Last edited by DrkAngel; 07-14-13 at 08:36 AM.
"Best of all! ... I get to play. ... http://www.bikeforums.net/attachment...&thumb=1&stc=1
Sorry! ... I'm addicted to improving enhancing.
With side orders of inspiring enlightening!"
I mainly was focusing on danger approaching from the rear where it is the least likely to be noticed in time by the cyclist to take action to avoid the situation. Even with a good mirror or two
you spend less time looking in the mirror and for danger you don't have much say in - that is most likely to come from the rear in the form of getting rear-ended or a too close pass.
I focused on the highest potential of "surprises" for the cyclist and ASSumed the cyclist of sufficient skill and capabilities to make a difference in avoiding the others. I capitalized the first
three letters of that word because you may be correct that you can't count on that and its a ill-advised assumption to make. The rider will be the one that makes the choice as to whether those
first three letters should be capitalized or not on that one. I try to keep myself on the lower case level but I will admit there have been times where I've managed to turn myself into the
capitalized version.
Impact From Behind - Best Scenario & Solution!
Absolutely the best solution to a rear impact scenario requires good speed capability, constant awareness, and one piece of specialty equipment.
First you want to be traveling, as closely as possible, to the speed of the approaching vehicle.
Second you must have an awareness as to the velocity, angle, mass and surface composition of the vehicle. Sets of 4 mirrors, or more, recommended, if possible, arrange into a stereoscopic, full
3D configuration.
Thirdly, and most importantly, the one piece of specialty equipment! "Cyclists Downunder", (based, possibly, in Australia?), has begun marketing their "Octopi" line of cyclewear.
Just make sure that you are struck squarely from behind. If you are about to be hit, quickly swerve and position yourself directly towards the center of the vehicle, the car should knock the
bicycle from under you and you should roll gracefully onto the hood and, or, windshield, where the Octopi suction cups should keep you safely secured. (Tip: As soon as you get stuck to the
vehicle, rip off one, or both windshield wipers! Some drivers will use them to try to knock you off. You can also beat them on the roof to get the drivers attention, in case he is sleeping, or
just doesn't notice you.) Hopefully the car will come to a gentle stop and you can then safely get off. Much safer than rolling down the road at 30 mph or bouncing over the roof and landing, "who
knows where"! (Tip: Please do not anger, or insult, the driver! You will probably need their help getting unstuck from the car!)
Warning! Speed is important! 20 mph bike speed is optimal to be hit by a 30 mph car.
Slower can result in fairly severe injuries.
Faster and you might not be bounced onto the top of the car, you might have to jump backwards, timing is critical! Warning! Be careful, some a__hole drivers will approach like they are going to
hit you, then ... slow down, just before impact. If not aware you might jump, and miss, ... then where would you be? ... Embarrassed! ... ???
Large trucks can be very tricky. Most don't have a nice hood to get stuck to.
1. Ideally, you must be going 10 mph slower than the truck.
2. Timing is critical, you must jump straight up just as you are being run over.
3. You must hit the windshield squarely, with enough body, to stick. Grills-radiators don't work well with suction cups!
This is a skill! Like any skill it requires practice. You should have a friend try to run you down, a few times, just so you can get good at being safer.
Oh, ... Make sure you have a good supply of bikes handy.
P.S. Be prepared for being hit by the, proverbial, "Redneck Pickup". Keep an Armageddon bag handy, on your bike. Recommend couple bottles of water, sun screen, some granola bars, "Space blanket"
... anything you might need in case they drive around with you stuck to their hood, for a few days.
Disclaimer! You must read "Epitaphs of the Downunder Cyclists", before attempting this "solution"!!!
"Best of all! ... I get to play. ... http://www.bikeforums.net/attachment...&thumb=1&stc=1
Sorry! ... I'm addicted to improving enhancing.
With side orders of inspiring enlightening!"
And the s**t eating grin they see on your face as you ride the same speed they are, increases the odds of them hating their lives, selling their cars and buying an e-bike.
win-win, bro.
As far as i am concerned nothing ive read here "demonstrates" a faster speed is safer. I run an ebike but i also run 2 trucks & several motorcycles & the single most important factor when
assessing levels of safety on urban streets is how other road users assess what risk you are to them if they do something wrong..
for example pulling out of a side turning into your path. unfortunately if you are on a bicycle they tend to assume you are doing alot less that 30mph & after a quick glance at you they will look
away & pull out. obviously flowing traffic at a similar speed is safer but not if you are on a small frontal area vehicle which people assume will be approaching at bicycle speeds.
I find the largest frontal area vehicle i drive (a truck) makes people decide not to risk entering my path far more often, even if I'm going extremely slow, however, they happily enter my path if
I'm riding my ebike & its down to me to take avoiding action which is most certainly more dangerous at higher speeds..o
Last edited by OYO; 07-14-13 at 11:21 AM.
As far as i am concerned nothing ive read here "demonstrates" a faster speed is safer. I run an ebike but i also run 2 trucks & several motorcycles & the single most important factor when
assessing levels of safety on urban streets is how other road users assess what risk you are to them if they do something wrong..
for example pulling out of a side turning into your path. unfortunately if you are on a bicycle they tend to assume you are doing alot less that 30mph & after a quick glance at you they will look
away & pull out. obviously flowing traffic at a similar speed is safer but not if you are on a small frontal area vehicle which people assume will be approaching at bicycle speeds.
I find the largest frontal area vehicle i drive (a truck) makes people decide not to risk entering my path far more often, even if I'm going extremely slow, however, they happily enter my path if
I'm riding my ebike & its down to me to take avoiding action which is most certainly more dangerous at higher speeds..o
Many of these issues can be resolved by using the appropriate warning device.
In action: http://vimeo.com/44650294
Since we have gotten into the humor end of the discussion, I'll join into that as well:
If OYO is correct that in order for other drivers to respect you on the road you must represent sufficient threat to them ~ I believe that should do the trick. I was looking for a picture of a
tadpole tandem I saw years ago on the net with hard mounted double forward gatling guns and the stoker sitting backwards with a single gatling gun on pivot mount for a tail gun but was unable to
find that picture and those were the best I was able to come up with on short notice.
Last edited by turbo1889; 07-14-13 at 05:40 PM.
hahaha reminds me of this ... at first glance it has a good threat score.
Last edited by OYO; 07-14-13 at 07:50 PM.
07-13-13, 07:51 AM #2
Senior Member
Join Date
Mar 2012
Orlando, FL
My Bikes
2012 Kona Lanai
0 Post(s)
0 Thread(s)
07-13-13, 07:56 AM #3
Transportation Cyclist
Join Date
Aug 2011
Montana U.S.A.
My Bikes
Too many to list, some I built myself including the frame. I "do" ~ Human-Only-Pedal-Powered-Cycles, Human-Electric-Hybrid-Cycles, Human-IC-Hybrid-Cycles, and one
7 Post(s)
0 Thread(s)
07-13-13, 08:06 AM #4
Senior Member
Join Date
May 2008
Zang's Spur, CO
0 Post(s)
0 Thread(s)
07-13-13, 08:22 AM #5
Transportation Cyclist
Join Date
Aug 2011
Montana U.S.A.
My Bikes
Too many to list, some I built myself including the frame. I "do" ~ Human-Only-Pedal-Powered-Cycles, Human-Electric-Hybrid-Cycles, Human-IC-Hybrid-Cycles, and one
7 Post(s)
0 Thread(s)
07-13-13, 10:04 AM #6
Used & Abused
Join Date
Jun 2011
My Bikes
GT Avalanche 2.0 + Burley D'lite
0 Post(s)
0 Thread(s)
07-13-13, 10:35 AM #7
Senior Member
Join Date
Sep 2010
My Bikes
Rocky Mountain Blizzard, Haro Roscoe (sold), Giant TCX Rabobank, Cervelo RS
0 Post(s)
0 Thread(s)
07-13-13, 12:15 PM #8
Used & Abused
Join Date
Jun 2011
My Bikes
GT Avalanche 2.0 + Burley D'lite
0 Post(s)
0 Thread(s)
07-13-13, 12:28 PM #9
Senior Member
Join Date
Mar 2012
Orlando, FL
My Bikes
2012 Kona Lanai
0 Post(s)
0 Thread(s)
07-13-13, 12:36 PM #10
Senior Member
Join Date
Mar 2012
Orlando, FL
My Bikes
2012 Kona Lanai
0 Post(s)
0 Thread(s)
07-13-13, 12:36 PM #11
rugged individualist
Join Date
Apr 2011
Alpharetta, GA
My Bikes
Nashbar Road
5 Post(s)
0 Thread(s)
07-13-13, 02:51 PM #12
Transportation Cyclist
Join Date
Aug 2011
Montana U.S.A.
My Bikes
Too many to list, some I built myself including the frame. I "do" ~ Human-Only-Pedal-Powered-Cycles, Human-Electric-Hybrid-Cycles, Human-IC-Hybrid-Cycles, and one
7 Post(s)
0 Thread(s)
07-13-13, 04:03 PM #13
rugged individualist
Join Date
Apr 2011
Alpharetta, GA
My Bikes
Nashbar Road
5 Post(s)
0 Thread(s)
07-14-13, 12:25 AM #14
Senior Member
Join Date
Oct 2012
Upstate NY
My Bikes
Various "modded" eZips and multiple econversions
0 Post(s)
0 Thread(s)
07-14-13, 12:44 AM #15
Transportation Cyclist
Join Date
Aug 2011
Montana U.S.A.
My Bikes
Too many to list, some I built myself including the frame. I "do" ~ Human-Only-Pedal-Powered-Cycles, Human-Electric-Hybrid-Cycles, Human-IC-Hybrid-Cycles, and one
7 Post(s)
0 Thread(s)
07-14-13, 02:24 AM #16
Senior Member
Join Date
Feb 2008
My Bikes
Giant CRX3, Trek 7100
0 Post(s)
0 Thread(s)
07-14-13, 04:29 AM #17
Senior Member
Join Date
Oct 2012
Upstate NY
My Bikes
Various "modded" eZips and multiple econversions
0 Post(s)
0 Thread(s)
07-14-13, 06:12 AM #18
Transportation Cyclist
Join Date
Aug 2011
Montana U.S.A.
My Bikes
Too many to list, some I built myself including the frame. I "do" ~ Human-Only-Pedal-Powered-Cycles, Human-Electric-Hybrid-Cycles, Human-IC-Hybrid-Cycles, and one
7 Post(s)
0 Thread(s)
07-14-13, 07:24 AM #19
Senior Member
Join Date
Oct 2012
Upstate NY
My Bikes
Various "modded" eZips and multiple econversions
0 Post(s)
0 Thread(s)
07-14-13, 09:46 AM #20
Senior Member
Join Date
Jul 2013
1 Post(s)
0 Thread(s)
07-14-13, 11:14 AM #21
Junior Member
Join Date
Jun 2013
0 Post(s)
0 Thread(s)
07-14-13, 11:45 AM #22
Senior Member
Join Date
Jul 2013
1 Post(s)
0 Thread(s)
07-14-13, 12:37 PM #23
Senior Member
Join Date
May 2008
Zang's Spur, CO
0 Post(s)
0 Thread(s)
07-14-13, 05:28 PM #24
Transportation Cyclist
Join Date
Aug 2011
Montana U.S.A.
My Bikes
Too many to list, some I built myself including the frame. I "do" ~ Human-Only-Pedal-Powered-Cycles, Human-Electric-Hybrid-Cycles, Human-IC-Hybrid-Cycles, and one
7 Post(s)
0 Thread(s)
07-14-13, 07:46 PM #25
Junior Member
Join Date
Jun 2013
0 Post(s)
0 Thread(s)
|
{"url":"http://www.bikeforums.net/advocacy-safety/901199-bicycle-safety-math-speed.html","timestamp":"2014-04-17T22:06:54Z","content_type":null,"content_length":"147351","record_id":"<urn:uuid:876cffb5-2ae2-44a4-b090-4c44ff90c9a5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reply to comment
London 2012 vowed to be the cleanest Olympics ever, with more than 6,000 tests on athletes for performance enhancing drugs. But when an athlete does fail a drug test can we really conclude that they
are cheating? John Haigh does the maths. (You can also look at the animation below to see the results illustrated.)
How reliable are accusations of cheating based on drug tests?
In order to see whether an athlete has sought an unfair advantage using performance-enhancing drugs, various tests can be made. Their detail depends on what banned substance is being investigated,
but the logic behind all such tests is identical: some "measurement" is made, and if its value exceeds some threshold, this is seen as sufficient evidence of cheating.
How likely is it that an athlete who fails the test really is a cheat, that is, how reliable is this accusation? In the language of probability, we seek the conditional probability that the athlete
is guilty given they have failed the test, written as Pr(Guilty | Fail). Three numbers are required to work this out. The first is the so-called sensitivity of the test: the proportion of drug users
who fail. In probability terms, this is the conditional probability that the athlete fails the test given they are guilty, Pr(Fail | Guilty). We would like this to be close to 100%. A second number
is the test's specificity: the proportion of non-users who pass. Again, this should be close to 100%, meaning that Pr(Fail | Not Guilty) should be close to zero. The final quantity is the actual
proportion of drug users in the relevant population, that is the group of athletes who might be tested. This is hard to know with precision, but we can make reasonable estimates.
If we have these three numbers we can use a mathematical result known as Bayes' theorem that gives us the answer (you can read more about Bayes' theorem on Plus). It turns out to be easier to work
with odds, rather than probabilities. Recall that if the probability an event occurs is 80%, the probability it does not occur is 20%, so the odds on its occurrence are 80/20, or 4 to 1; if the
chance is 90%, the odds are 90/10, or 9 to 1. Probabilities determine odds, and vice versa.
Before any evidence about drugs is sought, the probability that a randomly chosen athlete is a cheat is just the proportion of cheats in the population, so we use this figure to find the
corresponding odds value. This ratio,
is termed the prior odds of guilt.
Suppose an athlete fails the test. How should we find the posterior odds of guilt, that is the odds that they are a cheat, given the evidence of a failed test? First calculate the weight of evidence,
defined as the ratio
using the sensitivity and specificity noted above. Now Bayes' theorem tells us that the answer we seek, the posterior odds, comes simply from multiplying the prior odds by this weight of evidence.
You can then convert this to a probability, if you prefer.
Numbers often help. Suppose the proportion of cheats is 1%, the sensitivity is 95%, and the specificity is also 95%. Plainly, the prior odds of guilt are 1/99. The weight of evidence is 95/5 (agreed?
Look carefully at its definition), so the posterior odds of guilt are
about 0.19. To convert the odds into a probability, we divide the odds by 1 plus the odds:
This gives a probability of guilt of about 16% which, to most people, will be disappointingly low. Although the test gets things wrong only five times in a hundred among cheaters as well as
innocents, it isn’t good enough. Could we really throw someone out of the Olympic Games on a 16% chance of being a cheat?
Despite the initially impressive numbers of 95% for the sensitivity and specificity of the test, to understand this unsatisfactory outcome, imagine a population of size 10,000, with 1% drug cheats.
That means we have 9900 clean athletes, and 100 cheats. We expect the test to catch 95% of the cheats, that is 95 of them, but it will also finger 5% of the innocents, another 495 people. So 590
athletes fail the test, but only 95 of them — 16% of course — are genuine cheats.
We cannot expect any test to be 100% sensitive, or 100% specific. Mistakes will happen. Some authoritative body must set the thresholds, forming an acceptable balance between the mistake of accusing
an innocent athlete of being a cheat, and the mistake of passing a drug user as clean. Knowing the sensitivity and specificity is not enough — a good idea of the size of the problem is required. And
the fewer drug cheats there are, the better our tests must be to give a high enough chance of making the right decisions.
The following animation (from Understanding Uncertainty) illustrates our example. For simplicity it looks at a population of 100 and rounds the numbers involved to the nearest whole number. The
"Testing" buttons shows you the outcome of a drug test on 100 people, assuming 1% of them are cheats and a test sensitivity and specificity of 95%. The "Trees" button shows you the result in the form
of a tree diagram. The diagram shows that only 1 out of 6 — around 16% — of those who have tested positive have actually taken the drug.
You need to
install the Adobe Flash Player
to see the animation.
About this article
John Haigh teaches mathematics, including probability, at Sussex University. With Rob Eastaway, he wrote The hidden mathematics of sport, which has been reviewed on Plus.
The animation in this article originally appeared on the Understanding Uncertainty website in the context of screening for diseases and catching terrorists. It was created by the Understanding
Uncertainty team.
|
{"url":"http://plus.maths.org/content/comment/reply/5757","timestamp":"2014-04-17T15:33:38Z","content_type":null,"content_length":"30016","record_id":"<urn:uuid:eb199747-8e96-4d6d-8a91-c52d9415403b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
11-XX Number theory
11-00 General reference works (handbooks, dictionaries, bibliographies, etc.)
11-01 Instructional exposition (textbooks, tutorial papers, etc.)
11-02 Research exposition (monographs, survey articles)
11-03 Historical (must also be assigned at least one classification number from Section 01)
11-04 Explicit machine computation and programs (not the theory of computation or programming)
11-06 Proceedings, conferences, collections, etc.
11Axx Elementary number theory {For analogues in number fields, see 11R04}
11Bxx Sequences and sets
11Cxx Polynomials and matrices
11Dxx Diophantine equations [See also 11Gxx, 14Gxx]
11Exx Forms and linear algebraic groups [See also 19Gxx} {For quadratic forms in linear algebra, see 15A63]}
11Fxx Discontinuous groups and automorphic forms [See also 11R39, 11S37, , 30F35, 32Nxx {For relations with quadratic forms, see 11E45]}
11Gxx Arithmetic algebraic geometry (Diophantine geometry) [See also 11Dxx, 14Gxx, 14Kxx]
11Hxx Geometry of numbers {For applications in coding theory, see 94B75}
11Jxx Diophantine approximation, transcendental number theory [See also 11K60]
11Kxx Probabilistic theory: distribution modulo $1$$1$; metric theory of algorithms
11Lxx Exponential sums and character sums {For finite fields, see 11Txx}
11Mxx Zeta and $L$$L$-functions: analytic theory
11Nxx Multiplicative number theory
11Pxx Additive number theory; partitions
11Rxx Algebraic number theory: global fields {For complex multiplication, see 11G15}
11Sxx Algebraic number theory: local and $p$$p$-adic fields
11Txx Finite fields and commutative rings (number-theoretic aspects)
11Uxx Connections with logic
11Yxx Computational number theory[See also 11-04]
11Z05 Miscellaneous applications of number theory
|
{"url":"http://ams.org/mathscinet/msc/msc.html?t=11-XX","timestamp":"2014-04-18T04:46:22Z","content_type":null,"content_length":"16788","record_id":"<urn:uuid:b12a0de0-7e0b-4bc5-a050-f4c5824de644>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In this work, we develop a new foundation for rational homotopy theory based on Lie coalgebras. The work starts with a combinatorial model for the linear dual to the Lie operad, based on
a combinatorial pairing between graphs and trees (see the paper of that name listed in the configuration space page).
Ben Walter and I are writing a series of three papers on this subject, the first of which sets the basic structure (including cobracket structure, bar and cobar constructions, and model
structures) using a lift to a new category of graph coalgebras. The second paper yields the geometric payout, developing Hopf invariants from this point of view and uniting work of Chen,
Hain, Sullivan and Boardman-Steer. The planned third paper will remove finiteness and connectivity hypotheses.
Lie coalgebras and rational homotopy theory, I: graph coalgebras. (with Ben Walter)
• Home
We develop a new, intrinsic, computationally friendly approach to Lie coalgebras through graph coalgebras, which are new and likely to be of independent interest. Our graph coalgebraic • Research
approach has advantages both in finding relations between coalgebra elements and in having explicit models for linear dualities. As a result, proofs in the realm of Lie coalgebras are • Teaching
often simpler to give through graph coalgebras than through classical methods, and for some important statements we have only found proofs in the graph coalgebra setting. For
applications, we investigate the word problem for Lie coalgebras, we revisit Harrison homology, and we unify the two standard Quillen functors between differential graded commutative
algebras and Lie coalgebras.
Lie coalgebras and rational homotopy theory, II: Hopf invariants. (with Ben Walter)
We give a new, definitive answer to the basic question "how can cochain data determine rational homotopy groups?" Moreover, we give a method for determining when two maps from S^n to X
are homotopic after allowing for multiplication by some integer. We start by building integer-valued homotopy functionals from the cobar complex on the cochains of a space, which we call
generalized Hopf invariants. We show these Hopf invariants pass to the Harrison complex, and in that setting give sharp duality with rational homotopy. The previous paper in this series
built a new framework for understanding Lie coalgebras as quotients of graph coalgebras and applied this to rational homotopy theory. We extend that work by giving an independent
geometric proof that our graph coalgebra models are dual to homotopy, with cobracket dual to the Whitehead product. For applications, we investigate wedges of spheres, homogeneous
spaces, and configuration spaces; and we propose a generalization of the Hopf invariant one question.
|
{"url":"http://pages.uoregon.edu/dps/liecoalg.php","timestamp":"2014-04-16T04:13:00Z","content_type":null,"content_length":"7550","record_id":"<urn:uuid:992b587f-dd62-4229-be5f-43e5f71b7842>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Frazer, PA Algebra 2 Tutor
Find a Frazer, PA Algebra 2 Tutor
...I am a patient, flexible, and encouraging tutor, and I'd love to help you or your child gain confidence and succeed academically. I adapt my teaching style to students' needs, explaining
difficult concepts step by step and using questions to "draw out" students' understanding so that they learn ...
38 Subjects: including algebra 2, English, reading, physics
...Throughout my years tutoring all levels of mathematics, I have developed the ability to readily explore several different viewpoints and methods to help students fully grasp the subject
matter. I can present the material in many different ways until we find an approach that works and he/she real...
19 Subjects: including algebra 2, calculus, geometry, statistics
...Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my
success. Every student brings a unique perspective and a unique set of expectations to his or her lesson,...
21 Subjects: including algebra 2, reading, physics, writing
...Geometry is my favorite subject to tutor! I think it can be a fun subject to master. While concepts in Geometry are abstract, they are also demonstrable and can be memorable.
20 Subjects: including algebra 2, English, geometry, algebra 1
...Once they were in high school they studied more independently, but I did help with math and science homework as needed. Now they are both in college and are doing quite well on their own. I
work nearly full-time, but have some flexibility in my work schedule and can tutor either late afternoon or in the evening.
19 Subjects: including algebra 2, calculus, geometry, algebra 1
Related Frazer, PA Tutors
Frazer, PA Accounting Tutors
Frazer, PA ACT Tutors
Frazer, PA Algebra Tutors
Frazer, PA Algebra 2 Tutors
Frazer, PA Calculus Tutors
Frazer, PA Geometry Tutors
Frazer, PA Math Tutors
Frazer, PA Prealgebra Tutors
Frazer, PA Precalculus Tutors
Frazer, PA SAT Tutors
Frazer, PA SAT Math Tutors
Frazer, PA Science Tutors
Frazer, PA Statistics Tutors
Frazer, PA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Charlestown, PA algebra 2 Tutors
Concordville algebra 2 Tutors
Eagle, PA algebra 2 Tutors
Elwyn, PA algebra 2 Tutors
Glen Riddle, PA algebra 2 Tutors
Immaculata algebra 2 Tutors
Lima, PA algebra 2 Tutors
Moylan, PA algebra 2 Tutors
Pine Swamp, PA algebra 2 Tutors
Romansville, PA algebra 2 Tutors
Rose Tree, PA algebra 2 Tutors
Southeastern algebra 2 Tutors
Upton, PA algebra 2 Tutors
Valley Forge algebra 2 Tutors
Wawa, PA algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Frazer_PA_Algebra_2_tutors.php","timestamp":"2014-04-19T07:17:37Z","content_type":null,"content_length":"23992","record_id":"<urn:uuid:f92c8071-d877-45f5-97c2-95a3c7c420a8>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Markov Decision Processes
How do you plan efficiently if the results of your actions are uncertain? There is some remarkably good news, and some some significant computational hardship. We begin by discussing Markov Systems
(which have no actions) and the notion of Markov Systems with Rewards. We then motivate and explain the idea of infinite horizon discounted future rewards. And then we look at two competing
approaches to deal with the following computational problem: given a Markov System with Rewards, compute the expected long-term discounted rewards. The two methods, which usually sit at opposite
corners of the ring and snarl at each other, are straight linear algebra and dynamic programming. We then make the leap up to Markov Decision Processes, and find that we've already done 82% of the
work needed to compute not only the long term rewards of each MDP state, but also the optimal action to take in each state.
Powerpoint Format: The Powerpoint originals of these slides are freely available to anyone who wishes to use them for their own work, or who wishes to teach using them in an academic institution.
Please email Andrew Moore at awm@cs.cmu.edu if you would like him to send them to you. The only restriction is that they are not freely available for use as teaching materials in classes or tutorials
outside degree-granting academic institutions.
Advertisment: I have recently joined Google, and am starting up the new Google Pittsburgh office on CMU's campus. We are hiring creative computer scientists who love programming, and Machine Learning
is one the focus areas of the office. If you might be interested, feel welcome to send me email: awm@google.com .
|
{"url":"http://www.autonlab.org/tutorials/mdp.html","timestamp":"2014-04-16T19:01:28Z","content_type":null,"content_length":"3105","record_id":"<urn:uuid:2312b5b7-2daf-4721-9d07-a93d19a069b9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Constructing a hypersurface with given outer normals
up vote 6 down vote favorite
Pick a point on each of the positive half-axes in $\mathbb{R}^n$. Put a (unit-norm?) vector at each of the n points.
(a) Is there a hypersurface in the orthant $\mathbb{R}^n_+$ going through these n points with the vectors as outer normals? (The answer ought to be yes, since I'm asking whether there exists a closed
=exact 1-form on $\mathbb{R}^n_+$ with given values at a few points.)
(b) Is there a hypersurface with the above properties, and also constrained to have the outer normals at all points lie in some half-space? (I know the half-space in question will contain the n
vectors specified above, and also the vector (1,1,...,1).) Equivalently, I'm asking that some other vector points inwards along all of the hypersurface.
(c) I'd quite like the above surface together with the coordinate hyperplanes to enclose a compact set. Is this possible? automatic?
(d) Is there a constructive way to build a hypersurface with properties (a)-(c)? Ideally in the vein of writing down some equations for the surface, or for its normal vector field, in terms of the
original points and vectors. (I'd like to build one of those for each orthant and then be able to stitch them together.)
(Motivation: I'm trying to show convergence of a stochastic system by constructing (the level sets of) a Lyapunov function for it. I have quite a lot of freedom for what the function looks like on
the interior of each orthant, but it's progressively more and more constrained on all the higher codimension subspaces, until at each half-axis I have a very limited range of possibilities. I'd like
to know if these possibilities can be patched together in a sensible way.)
dg.differential-geometry oc.optimization-control geometry
add comment
1 Answer
active oldest votes
This is speculation, not a precise answer, but I wonder if perhaps Minkowski's theorem on the existence of a polytope with prescribed face normals and areas might help? This theorem is
described in detail in, for example, Alexandrov's book, Convex Polyhedra, Chapter 7, p.311ff.
The connection to your problem is this. If you specify $k$ unit normals ${\bf n}_i$, you can imagine those determining the orientation of $k$ faces of a convex polytope in $\mathbb{R}^
d$. Minkowski's theorem says that, if in addition, you specify face areas $F_i$ such that $\sum_{i=1}^{k} F_i {\bf n}_i = 0$, then there exists a polytope realizing those normals and
areas. If I understand your situation correctly, you have some of the ${\bf n}_i$ prespecified. You need to add one more ${\bf n}_i$ (one more to reach $d+1$ in total), as well as choose
areas $F_i$, in order to zero that sum. The convexity might(?) then yield the halfspace condition you desire.
up vote 4
down vote This is all discrete, of course, but it should not be difficult to pass from a polytope to a smooth hypersurface.
Example. $d=2$ (my $d$ is your $n$). You specify ${\bf n}_1=(1,0)$ and ${\bf n}_2=(-1,1)$, red below, on "the positive half-axes in $\mathbb{R}^2$" (dashed), and then throw in ${\bf n}_3
=(0,-1)$ (green) and appropriate areas, which by Minkowski determine a triangle, which you then approximate by a smooth curve (hypersurface), which has all outer normals in a halfspace.
For general $d$, $d+1$ vectors ${\bf n}_i$ will lead to a simplex. Added. There is a chance that the John ellipsoid in this simplex could serve as a starting point on your other
question, "Is there an ellipsoid with given outer normals?"
Thank you, that's exactly the sort of thing I was hoping for! – Elena Yudovina Oct 16 '11 at 19:38
add comment
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry oc.optimization-control geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/78204/constructing-a-hypersurface-with-given-outer-normals","timestamp":"2014-04-21T15:20:21Z","content_type":null,"content_length":"54252","record_id":"<urn:uuid:8c4831dc-d28f-48e5-a70e-0897baabe911>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maritime Mathematics Competition
The 14^th Annual Maritime Mathematics Competition
A Mathematics Competition open to all students enrolled full-time in any High School in New Brunswick, Nova Scotia or Prince Edward Island. The competition is designed to foster interest in
Mathematics, increase public awareness of Mathematics and encourage students to develop mathematical and problem solving skills.
|
{"url":"http://www.math.unb.ca/MaritimeMath/mathpo.html","timestamp":"2014-04-20T10:46:04Z","content_type":null,"content_length":"5036","record_id":"<urn:uuid:19a2b599-a960-4141-9318-c04b0c625423>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Minimal surface as varities
up vote 2 down vote favorite
Minimal surface equation is the following: $$(1+ \phi_t^2) \phi_{xx} - 2 \phi_x \phi_t \phi_{xt} + (1 + \phi_x^2) \phi_{tt} =0$$
Solution of this equation $\phi(x,t)$ is minimal surface(non parametric form).
Can we see this solution as varieties and if so how does one show it.
curves-and-surfaces dg.differential-geometry
3 I find this too vague. Could you say more precisely what you mean by "see this solution as varieties"? – Deane Yang Mar 27 '12 at 16:21
1 The Weierstrass-Enneper representation says that any minimal surface is the real part of a meromorphic function in $\mathbb{C}^3$.en.wikipedia.org/wiki/Weierstrass_representation – Ian Agol Mar 27
'12 at 19:30
Agol, if that's what the question is about, then it should be stated explicitly. – Deane Yang Mar 27 '12 at 19:38
add comment
1 Answer
active oldest votes
I assume that you are asking for a proof of the Weierstrass-Enneper representation theorem that, roughly speaking, tells you how to express solutions of the minimal surface equation in
terms of holomorphic functions (of one variable). In outline, the classical proof is the following one: Assume that a solution $\phi:D\to\mathbb{R}$ to the above equation is specified on a
simply-connected domain $D\subset\mathbb{R}^2$.
1. Set $U = U(x,t) = \bigl(x,t,\phi(x,t)\bigr)$, note that $$ dU = (1,0,\phi_x)\ dx + (0,1,\phi_t)\ dt $$ and that the unit vector $N= (-\phi_x, -\phi_t, 1)/(1+{\phi_x}^2+{\phi_t}^2)^{1/2}
$ satisfies $N\cdot dU = 0$.
2. Set $\nu = N\times dU$, i.e., $$ \nu = \frac{(\phi_x\phi_t,\ -{\phi_x}^2{-}1,\ -\phi_t)\ dx + ({\phi_t}^2{+}1,\ -\phi_x\phi_t,\ \phi_x)\ dt } {(1+{\phi_x}^2+{\phi_t}^2)^{1/2}}, $$ and
note that the above equation on $\phi$ is equivalent to the condition that $d\nu = 0$. Since $D$ is simply connected, it follows that there exists a function $V:D\to\mathbb{R}^3$ such
that $dV = \nu$. ($V$ is unique up to an additive constant.) Obviously, $N\cdot dV = N\cdot \nu = 0$.
up vote 4
down vote 3. Set $W = U + i\ V:D\to \mathbb{C}^3$. A little vector algebra now shows that the image of the differential of $W$ at every point is a complex line in $\mathbb{C}^3$, i.e., that $W(D)\
subset\mathbb{C}^3$ is, in fact, a(n embedded) holomorphic curve. Moreover, the tangent lines to this holomorphic curve are null with respect to the (complex linear) inner product on $\
mathbb{C}^3$, i.e., if $\bigl(w_1(\zeta),w_2(\zeta),w_3(\zeta)\bigr)$ is a (local) holomorphic parametrization of $W(D)$, then $w_1'(\zeta)^2+w_2'(\zeta)^2+w_3'(\zeta)^2=0$.
4. Conversely, if one starts with a holomorphic null curve $W(\zeta) = \bigl(w_1(\zeta),w_2(\zeta),w_3(\zeta)\bigr)$ in $\mathbb{C}^3$, such that its projection to $\mathbb{R}^3$ can be
written as a graph $\bigl(x,t,\phi(x,t)\bigr)$, then $\phi$ must satisfy the minimal surface equation.
There are various refinements if one doesn't ask that the surface be representable as a graph, and that leads to the Weierstrass-Enneper representation theorem in its full glory.
add comment
Not the answer you're looking for? Browse other questions tagged curves-and-surfaces dg.differential-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/92388/minimal-surface-as-varities?sort=newest","timestamp":"2014-04-16T22:42:46Z","content_type":null,"content_length":"54372","record_id":"<urn:uuid:7aee4796-eb61-431b-bb66-7fd0431bbed3>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
bounded continuous function
May 20th 2010, 08:46 AM #1
Junior Member
Apr 2010
bounded continuous function
must a bounded continuous function on R be uniformly continuous?
I know that if a function is continuous on a closed and bdd set then its uniformly continuous, but this says nothing about the set, just the function. Would this be T or False? How would I go
about proviing it?
Also, if f and g are uniformly continuous maps of R to R, must the product f*g be uniformly continuous? What if f and g are bounded.
I think this one is true...since you're just multiplying two continuous functions together...but I don't know how to prove this either
Well, for $(0,1)$ what about $f(x)=\sin\left(\frac{1}{x}\right)$? That is bounded and continuous but not uniformly continuous. Can you see how to generalize?
Also, if f and g are uniformly continuous maps of R to R, must the product f*g be uniformly continuous? What if f and g are bounded.
I think this one is true...since you're just multiplying two continuous functions together...but I don't know how to prove this either
It isn't true, take $f(x)=g(x)=x$
why is not uniformly continuous? aren't all the y-values relatively close together? I don't really understand how to prove or disprove uniform continuity other than saying when the x values are
really close together so are the y values...
for the second part, what about when f and g are bounded? i know when the derivative is bounded they are uniform continuous because it will be a lipschitz function, but I haven't read anything
about f and g being bounded
If f and g are both uniformly continuous on R, and are both bounded, then fg will be uniformly continuous, because
\begin{aligned}|f(x)g(x)-f(y)g(y)| &= |f(x)\bigl(g(x)-g(y)\bigr) + \bigl(f(x)-f(y)\bigr)g(y)| \\ &\leqslant |f(x)||g(x)-g(y)| + |f(x)-f(y)||g(y)| \end{aligned}<br />
and you can make the right-hand side of that inequality as small as you want, for x and y sufficiently close together, using the given properties of f and g.
May 20th 2010, 08:57 AM #2
May 20th 2010, 09:14 AM #3
Junior Member
Apr 2010
May 20th 2010, 11:20 AM #4
|
{"url":"http://mathhelpforum.com/differential-geometry/145714-bounded-continuous-function.html","timestamp":"2014-04-18T04:37:26Z","content_type":null,"content_length":"43825","record_id":"<urn:uuid:208d3b00-861b-47d2-bcd7-823a73425486>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
You Do the Math -- K thru Calculus
[note: this is a math-centric post but most of the concepts can, on some level, be generalized to other subjects]
There's a curiously inverted quality to the education debate. We spend a great deal of time discussing revolutionary changes to the educational system and almost no time talking about what we should
be teaching, as if the proper combination of reforms and incentives can somehow overcome the rule of garbage in, garbage out.
I spent a lot of my time as a teacher thinking about which parts of the mathematics curriculum were good and which parts were garbage and I came up with a list of reasons why a topic might be worth
the student's time. The list isn't in order (I'm not sure it's even orderable) but it is meant to be comprehensive -- everything that belongs in the curriculum should qualify under one or (generally)
more of these criteria.
1. Students are likely to need frequent and immediate access to this for jobs and daily life.
2. Students are likely to need to know how to find this (
Samuel Johnson
level knowledge).
(These are the only two mutually exclusive reasons on the list.)
3. This illustrates an important mathematical concept
4. This helps develop transferable skills in reasoning, pattern-recognition and problem solving skills
5. Students need to know this in order to understand an upcoming lesson
6. A culturally literate person needs to know this
Most topics can be justified under multiple reasons. Some, like the Pythagorean Theorem can be justified under any of the six (though not, of course, under one and two simultaneously).
Where a topic appears on this list affects the way it should be taught and tested. Memorizing algorithms is an entirely appropriate approach to problems that fall primarily under number one. Take
long division. We would like it if all our students understood the underlying concepts behind each step but we'll settle for all of them being able to get the right answer.
If, however, a problem falls primarily under four, this same approach is disastrous. One of my favorite examples of this comes from a high school GT text that was supposed to develop logic skills.
The lesson was built around those puzzles where you have to reason out which traits go with which person (the man in the red house owns a dog, drives a Lincoln and smokes Camels -- back when people
in puzzles smoked). These puzzles require some surprisingly advanced problem solving techniques but they really can be enjoyable, as demonstrated by the millions of people who have done them just for
fun. (as an added bonus, problems very similar to this frequently appear on the SAT.)
The trick to doing these puzzles is figuring out an effective way of diagramming the conditions and, of course, this ability (graphically depicting information) is absolutely vital for most high
level problem solving. Even though the problem itself was trivial, the skill required to find the right approach to solve it was readily transferable to any number of high value areas. The key to
teaching this type of lesson is to provide as little guidance as possible while still keeping the frustration level manageable (one way to do this is to let the students work in groups or do the
problem as a class, limiting the teacher's participation to leading questions and vague hints).
What you don't want to do is spell everything out and that was, unfortunately, the exact approach the book took. It presented the students with a step-by-step guide to solving this specific kind of
logic problem, even providing out the ready-to-fill-in chart. It was like taking the students to the gym then lifting the weights for them.
Long division and logic puzzles are, of course, extreme cases, but the same issues show up across the curriculum. Take factoring trinomials. A friend and former boss of mine wrote a successful
college algebra text book that omitted the topic entirely. I had mixed feelings about the decision but I understood his reasoning: this is one of those things you will almost certainly never have to
do outside of a math class (what fraction of trinomials are even factorable?).
You can justify teaching the factoring of trinomials because it illustrates important mathematical concepts and because it gives students practice manipulating algebraic expressions, but the way you
teach this concept has got to reflect the reasons for teaching it. Having students memorize a step-by-step algorithm would be the easiest way to teach the students to answer these questions (and
improve their standardized test scores) but it completely miss the point of the lesson.
The point about standardized test scores is significant and needs to be revisited a post of its own. By evaluating teachers and schools on standardized test scores, we put pressure on teachers to
treat all subjects as if they fell solely under reason one. This is not a good outcome.
Even more important than how we should teach something is the question of what we should be teaching. Current curricula tend to be broad and shallow with a tragic evenhandedness that often grants the
same amount of time to trivial techniques as it does to fundamental concepts. This is bad enough when a class on grade level and everything is going well but it's disastrous when a large part of the
class is struggling. There is tremendous pressure under those circumstances to leave the stragglers behind (a pressure that actually increases under many proposed reforms).
In addition to being overstuffed, the current curriculum omits subjects that are arguably more important than most of what we cover. The obvious example here is statistics, a topic that everyone
actually does need on a daily basis (as informed citizens and consumers if nothing else). Perhaps even more relevant is what we might call spreadsheet math (customized worksheets, recursive
functions, graphs, macro programming). You could also make a case for discrete mathematics, particularly graph theory (I might even put this one up there with statistics and spreadsheets but that's a
subject for another post).
Originally posted in Education and Statistics
|
{"url":"http://youdothemathkthrucalculus.blogspot.com/2013/01/reasons-to-teach-what-we-teach.html","timestamp":"2014-04-18T10:34:56Z","content_type":null,"content_length":"65702","record_id":"<urn:uuid:8a943981-5dc6-4f17-990b-a03f69d0912b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding dr/dt. Chain Rule Trouble
April 24th 2009, 05:35 AM #1
Apr 2009
Finding dr/dt. Chain Rule Trouble
Hello ppl. This is the problem:
A section of blood vessel has a flow-rate F in cubic millimetres per
minute proportional to the fourth power of its radius, R. The blood vessel can be considered perfectly cylindrical and is expandable. At some instant the
flow rate is 30 cubic millimetres per minute when the radius is 0.5 millimetres. At that same instant the flow-rate is increasing at rate of 0.24 cubic millimetres per minute squared.
(i) What is the rate of increase of the radius at that instant?
(ii) If the rate of increase of the radius is constant how long after the time when the radius is 0.5 mm is the radius 1 mm?
I'm really confused, was rummaging over it for hours this morning, I've got the F equation, the constant, but simply cant find dr/dt. Plus my friend was saying that F=dV/dt but i said no F=V/t.
I'm now confused about the whole question, someone please help asap.
k=1/480 from values (0.5, 30)
I'm confused with whether F = V/t or dv/dt, i was thinking the former (by looking at the units)
Other equation: V= (pi)(r^4)h, where h is, i was thinking, the length of the vessel? :s
tha main part of the question is: dV/dt = dV/dr * dr/dt
I was trying to even use Volume=Ft.
all the variable have confused me.
Hello ppl. This is the problem:
I'm really confused, was rummaging over it for hours this morning, I've got the F equation, the constant, but simply cant find dr/dt. Plus my friend was saying that F=dV/dt but i said no F=V/t.
I'm now confused about the whole question, someone please help asap.
"At that same instant the flow-rate is increasing at rate of 0.24 cubic millimetres per minute squared."
Think about what an increasing flow rate means. It means that the flow rate is increasing with respect to time (ie: dF/dt).
This is also agreed by the units as flow rate would be in $(length)^3 (time)^{-1}$ whereas this is given in $(length)^3 (time)^{-2}$ which means a dt must be there something
Using the chain rule as you said:
$\frac{dF}{dt} = \frac{dF}{dr} \times \frac{dr}{dt}$
$\frac{dF}{dr} = 4kr^3$ (I didn't check your working for k
$\frac{dr}{dt} = \frac{dF}{dt} \times \frac{dr}{dF} = 0.24 \times \frac{1}{4kr^3} = 5 \times 10^{-4} ms^{-1}$
edit: I get k = 480 instead of 1/480
$F = kr^4$
$30 = k \times 0.5^4$
$k = \frac{30}{0.5^4} = 480$
Last edited by e^(i*pi); April 24th 2009 at 06:46 AM. Reason: added in answer
ok fixing up it all up, i get this (first time i use latex!
$\frac{dF}{dt} = \frac{dF}{dr} \times \frac{dr}{dt}$
$F=480r^4$ (from (0.5,30))
$\frac{dF}{dr} = 1920r^3$
$\frac{dr}{dt} = \frac{dF}{dt} \times \frac{dr}{dF} = 0.24 \times \frac{1}{1920(0.5)^3} = 0.24 \times \frac{1}{120} = 28.8 mms^{-1}$
hope that looks ok.
for second part how would i find t when r=1mm? i dont have any equations with t in it. (im guessing r=28.8t ?)
ok fixing up it all up, i get this (first time i use latex!
$\frac{dF}{dt} = \frac{dF}{dr} \times \frac{dr}{dt}$
$F=480r^4$ (from (0.5,30))
$\frac{dF}{dr} = 1920r^3$
$\frac{dr}{dt} = \frac{dF}{dt} \times \frac{dr}{dF} = 0.24 \times \frac{1}{1920(0.5)^3} = 0.24 \times \frac{1}{120} = 28.8 mms^{-1}$
hope that looks ok.
for second part how would i find t when r=1mm? i dont have any equations with t in it. (im guessing r=28.8t ?)
I don't get where you got 1/120 from? I make 1/(1920x0.5^3) = 1/240.
Also your unit of time should be the minute
$\frac{dr}{dt} = \frac{dF}{dt} \times \frac{dr}{dF} = 0.24 \times \frac{1}{1920(0.5)^3} = 0.24 \times \frac{1}{240} = 10^{-3} mm(min)^{-1}$
I think to find t you need to integrate dr/dt at r = 1mm but I'm not sure
oops i used r^4 and didnt reciprocal it
any1 else idea on (ii)?
Hello, kangaroo!
A section of blood vessel has a flow-rate $F$ in mm³/min
. . is proportional to the fourth power of its radius, $R.$
The blood vessel can be considered perfectly cylindrical and is expandable.
At some instant the flow rate is 30 mm³/min when the radius is 0.5 mm.
At that same instant, the flow-rate is increasing at rate of 0.24 mm³/min²
(a) What is the rate of increase of the radius at that instant?
We have: . $F \:=\:kR^4$
We are told: . $F = 30,\;R = 0.5$
. . Hence: . $30 \:=\:k(0.5^4) \quad\Rightarrow\quad k \,=\,480$
The function is: . $F\:=\:480R^4$
Differentiate with respect to time: . $\frac{dF}{dt} \:=\:1920R^3\,\frac{dR}{dt}$
. . We have: . $0.24 \:=\:1920(0.5^3)\,\frac{dR}{dt} \quad\Rightarrow\quad \frac{dR}{dt} \:=\:0.001$ mm/min.
(b) If the rate of increase of the radius is constant,
how long after the time when the radius is 0.5 mm is the radius 1 mm?
Am I misreading the question?
The radius increases from 0.5 mm to 1.0 mm ... an increase of 0.5 mm.
If the radius increases at a constant 0.001 mm/min,
. . it will take: . $\frac{0.5\text{ mm}}{0.001\text{ mm/min}} \:=\:500\text{ minutes.}$
Is it really that simple?
April 24th 2009, 05:57 AM #2
MHF Contributor
Mar 2007
April 24th 2009, 06:11 AM #3
Apr 2009
April 24th 2009, 06:21 AM #4
April 25th 2009, 08:47 AM #5
Apr 2009
April 25th 2009, 09:06 AM #6
April 25th 2009, 09:11 AM #7
Apr 2009
April 26th 2009, 05:34 AM #8
Super Member
May 2006
Lexington, MA (USA)
|
{"url":"http://mathhelpforum.com/calculus/85402-finding-dr-dt-chain-rule-trouble.html","timestamp":"2014-04-19T17:08:15Z","content_type":null,"content_length":"62847","record_id":"<urn:uuid:523d62f2-f6ad-4272-912d-6deedb898002>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Juan Capistrano Calculus Tutor
...I have been tutoring for over 15 years in all levels of math. I have been commended for my ability to make math more simple and relatable to the student. I have an easy personality that I
believe helps make students comfortable and willing to learn.
18 Subjects: including calculus, chemistry, physics, geometry
I would like to tutor Math/Economics/Statistics courses. If you think that I am a fit tutor for you, please feel free to contact me. I have B.S in Economics major and math minor with magna cum
lauder, and M.S in Economics.
19 Subjects: including calculus, statistics, Chinese, algebra 1
...Cognitive Science is much more of my passion! It is the study of cognition (thinking), studying how the brain learns and makes memories among other things. Math has always been my best subject
and my tutoring career started out just as me helping friends in math but slowly expanded to my full-time job as more and more people asked for my help.
23 Subjects: including calculus, geometry, statistics, precalculus
...From my experience, I have found many creative ways of explaining common problems. I love getting to the point when the student finally understands the concept and tells me that they want to
finish the problem on their own. I look forward to helping you with your academic needs.
14 Subjects: including calculus, physics, geometry, statistics
"If you know math,you can make sense of everything." I believe this saying is correct. I can teach you MATH so that you can UNDERSTAND it easier. My name is Minoo; my native language is FARSI.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/San_Juan_Capistrano_calculus_tutors.php","timestamp":"2014-04-19T17:16:28Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:d9e3f2b5-e5cd-4b98-b7fe-4b51ba72459d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
abstractModeling Biological Development using the Cellular Potts ModelModeling Biological Development using the Cellular Potts ModelModeling Biological Development using the Cellular Potts ModelModeling Biological Development using the Cellular Potts Model
Cell-level Modeling of Biological Development using the GGH Model and CompuCell3D—
Applications, Technology and Open Problems
James A. Glazier
Biocomplexity Institute and Department of Physics
While the bioinformatics of DNA sequences, the reaction kinetics of biomolecular networks and continuum pattern formation are all the targets of intensive research efforts, cell-level modeling is
still relatively undeveloped, even though the cell is a wonderful tool for hiding biological complexity. One of the key reasons for this neglect has been the lack of any widely accepted modeling
approaches and of ways to describe models compactly. A growing community of modelers employs the GGH (aka CPM) Model to create sophisticated cell-level simulations of tissue development. The
availability of new, open-source tools for building GGH models makes developing, validating and sharing such simulations much easier and makes model definition much more compact. I will introduce the
GGH and the modeling environment CompuCell3D, which we have created to simplify writing developmental simulations (https://simtk.org/home/compucell3d). I will illustrate the application of the GGH to
modeling somitogenesis in vivo, angiogenesis and vasculogenesis in vitro (see picture) and will also discuss modeling of other developmental phenomena including tumor growth, gastrulation, and the
Dictyostelium discoideum life cycle. I will also discuss some of the key mathematical and computational issues which GGH models and modeling environments still need to address.
|
{"url":"http://www.cims.nyu.edu/~binliu/AML07/AML/glazier.html","timestamp":"2014-04-20T11:25:17Z","content_type":null,"content_length":"5454","record_id":"<urn:uuid:570dfb24-9023-45ed-9888-31d19680d213>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Haskell-cafe] sample terms and interpreting program output from Tc Monad
[Haskell-cafe] sample terms and interpreting program output from Tc Monad
rickmurphy rick at rickmurphy.org
Sat Jul 7 20:29:51 CEST 2012
Hi All:
I'm still working through the following paper [1] and I wondered whether
you could help me confirm my understanding of some sample terms and
program output from the Tc Monad. For those interested, the language is
specified in Parser.lhs available in the protoype here [2].
I understand these to be Rank 0 terms:
(\(x::Int) . x) (0 :: Int) :: (forall. Int) -- value
(\(x::Int). x) :: (forall. Int -> Int)
(\(x::a). x) :: (forall. a -> a)
Although the program prints forall, the absence of a type variable
indicates Rank 0, correct?
I understand these to be Rank 1 terms:
(\x. x) :: (forall a. a -> a) -- This is not the same as the third
example above, right? This one identifies the type variable a, the one
above does not. Also, there's no explicit annotation, it's inferred.
(\x. \y. x) :: (forall a b. b -> a -> b) -- Still rank 1.
Although there's no explicit annotation, the program infers the type
variables and prints the forall and the appropriate type variables for
the Rank 1 polytypes.
I understand these to be Rank 2 terms:
(\(x::(forall a. a)). 0) :: (forall. (forall a. a) -> Int)
The explicit forall annotation on the bound and binding variable x
causes the program to infer a Rank 2 polytype as indicated by the "->
Int" following the (forall a. a), while noting the absence of a type
variable following the left-most forall printed by the program, correct?
(\(x::(forall a. a -> a)). x) :: (forall b. (forall a. a -> a) -> b ->
Also Rank 2, only one arrow to the right of (forall a. a -> a) counts.
The universal quantifier on type variable b ranges over the type
variable a, correct?
I understand this to be a Rank 3 term:
(\(f::(forall a. a -> a)). \(x::(forall b. b)). f (f x)) :: (forall c.
(forall a. a -> a) -> (forall b. b) -> c)
The arrows to the right of the universally quantified a and b
expressions qualify this as Rank 3. Type variable c ranges over type
variables a and b, correct?
Thanks for your help in better understanding this information. I'm home
schooling myself on Haskell and community support is a big help.
1. Practical Type Inference for Arbitrary-Rank Types.
More information about the Haskell-Cafe mailing list
|
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2012-July/102189.html","timestamp":"2014-04-20T04:57:29Z","content_type":null,"content_length":"5204","record_id":"<urn:uuid:f4abc93d-9d45-4b77-939f-7ddd0c04afbe>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Do engineering researchers often use advanced math as tools?
Thanks for all your input, very interesting to hear of applications of advanced math in engineering.
What I'm mostly interested in at the moment is signal processing and control systems. It seems applied math departments also do research in these topics of engineering, which I find to be odd. For
someone interested in eventually doing research in these fields, which topic in mathematics would be best to get well acquainted with?
|
{"url":"http://www.physicsforums.com/showthread.php?p=3765095","timestamp":"2014-04-20T01:01:34Z","content_type":null,"content_length":"45373","record_id":"<urn:uuid:7eb459f4-645c-430f-be6f-65b56c38982d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
|
x axis symmetry
is there any way to check the following function for x and y axis symmetry without graphing?? g(x)=(1/X^2))+(5/x)+6
Let [tex]F(x,y)=0[\math] be a curve. Then it is x-axis symmetry iff $F(x,y)=0\leftrightarrow F(x,-y)=0$. It is y-axis symmetry iff $F(x,y)=0\leftrightarrow F(-x,y)=0$ In this specific problem, you
only have to check whether $g(x)=g(-x)$ to judge whether it is y-axis symmetry. It is clearly impossible that it is x-axis symmetry because it is written in the form $y=g(x)$(except $g(x)\equiv 0$)
|
{"url":"http://mathhelpforum.com/pre-calculus/98645-x-axis-symmetry.html","timestamp":"2014-04-18T03:06:50Z","content_type":null,"content_length":"31659","record_id":"<urn:uuid:bdc0ab83-bcc2-4a4a-b3ce-97cb1725e0a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Find the volume of the solid obtained by rotating the region bounded by the given curves about the specified axis. y=1/x^4, y=0, x=2, x=6, about y=-4
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5121a03de4b06821731d1ac5","timestamp":"2014-04-16T10:12:12Z","content_type":null,"content_length":"304795","record_id":"<urn:uuid:0f3e5ac1-f3aa-4423-aeb2-682774256ec5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
|