content
stringlengths
86
994k
meta
stringlengths
288
619
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Determining 'classname' such that isa(v,'classname') will be true Replies: 5 Last Post: Apr 22, 2013 11:33 AM Messages: [ Previous | Next ] Paul Re: Determining 'classname' such that isa(v,'classname') will be true Posted: Apr 17, 2013 9:45 PM Posts: 17 Registered: "Steven_Lord" <slord@mathworks.com> wrote in message <kkmaj0$t60$1@newscl01ah.mathworks.com>... 5/18/11 > > "Paul " <paulremovethispartjackson@jhuapl.edu> wrote in message > news:kkki9r$s7c$1@newscl01ah.mathworks.com... > > Given an object v of a class, how can I determine all possible classname > > such that isa(v,'classname') will return true? For example, consider the > > following: > > > > h=ss(1,1,1,1); > > class(h) > > mc=metaclass(h); > > mc.SuperclassList.Name > > L1=isa(h,'ss'); % returns true, understandable > > L2=isa(h,'numlti'); % returns true, understandable > > L3=isa(h,'StateSpaceModel'); % returns true, understandable > > L4=isa(h,'lti'); % returns true, how would I have known? > > [L1 L2 L3 L4] > > > > ans = > > ss > This comes from class(h) > > ans = > > numlti > > > > ans = > > StateSpaceModel > These two come from the _comma-separated list_ mc.SuperclassList.Name. > > ans = > > > > 1 1 1 1 > > > > I just stumbled on 'lti' returning true for this case. How could I have > > found that programatically? > Note that while mc is a scalar meta.class object, mc.SuperclassList is a > vector of meta.class objects. ss directly inherits from numlti and > StateSpaceModel. But if you look at the SuperclassList for each of the > elements of the mc.SuperclassList vector of objects, you'll see that numlti > inherits from lti while StateSpaceModel also inherits from another class. > You can walk your way up the inheritance tree using the meta class objects > and their SuperclassList properties until all the meta class objects you > reach have empty SuperclassList properties. > > In general, is there some function that has a functionality like: > > c = whatis(v); % returns cell array c such that isa(v,c{i}) is true > Not directly, but you could write one to walk the whole tree. It wouldn't be > that difficult. > Are you planning to do something tricky and/or clever with this information? > Or is it more something that made you curious? > -- > Steve Lord > slord@mathworks.com > To contact Technical Support use the Contact Us link on > http://www.mathworks.com I missed in the doc that SuperclassList contains only the classes from which the object inherits *directly*. For some reason I thought it would return all of the parents. I don't think I was trying to do something tricky or clever. I have a function that can take either an ss or tf or zpk object as input and wanted to do a simple error check, like "if ~isa (in,'numlti')", but couldn't find a clear description in the doc for the common class from which those three inherit (there is a diagram in the Control System Toolbox documentation, but it's not what I would think of as an inhertiance tree and doesn't even contain the term 'numlti'). So I started exploring the use of class(h) and metaclass to figure it out myself. Maybe a whatis type of function would be useful in general to provide insight into a class hierarchy? I don't really use OOP in Matlab, and so don't know if there are any good use-cases, other than the one I described above. Date Subject Author 4/16/13 Determining 'classname' such that isa(v,'classname') will be true Paul 4/17/13 Re: Determining 'classname' such that isa(v,'classname') will be true Steven Lord 4/17/13 Re: Determining 'classname' such that isa(v,'classname') will be true Paul 4/18/13 Re: Determining 'classname' such that isa(v,'classname') will be true Steven Lord 4/20/13 Re: Determining 'classname' such that isa(v,'classname') will be true Paul 4/22/13 Re: Determining 'classname' such that isa(v,'classname') will be true Steven Lord
{"url":"http://mathforum.org/kb/message.jspa?messageID=8895279","timestamp":"2014-04-21T05:16:09Z","content_type":null,"content_length":"25911","record_id":"<urn:uuid:4cddcfdd-bb3d-4ee8-af48-f4306ce99114>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
while covering a distance of of 30 km ajeet takes 2 - Class 10 mathematics Question Answer - Extramarks Ask Explore Answer Accepting Answers... While covering a distance of of 30 km.Ajeet takes 2 hrs more than amit.If Ajeet doubles his speed , he would take 1 hr less than Amit.Find their speeds of walking. ajeet's speed- xkm\hr amit's speed-ykm\hr Now solve these then you will get the answer, 30/x=30\y + 2 (distance=time*speed) 30/y= 30\2x + 1 (distance=time*speed) LET THE SPEED AT WHICH AJEET WALKS BE X km/hr LET LET THE SPEED AT WHICH AMIT WALKS BE Ykm/hr SPEED:X km/hr Y DIST:30 km 30 TIME:30/X HR 30/Y NEW SPEED:2X km/hr Y DIST:30 km 30 NEW TIME:30/2X HR 30/Y let ajeet's speed be 'x' km/h let amit's speed be 'y' km/h Ajeet takes 2 hrs more than amit. 30/x - 30/y =2 1/x - 1/y = 1/15 15y - 15x= xy .........(1) If Ajeet doubles his speed , he would take 1 hr less than Amit. 30/y - 30/2x = 1 60x - 30y = 2xy 30x - 15y = xy .........(2) adding the equations (1) & (2) 15y - 15x= xy 30x- 15y= xy substituting the value of y in eqn. (1) 15y - 15x= xy 225 - 15x = 15x therefore the speed of ajeet is 7.5 km/h and that of amit is 15 km/h.
{"url":"http://www.extramarks.com/ask-explore-answer/question/7594/mathematics/while-covering-a-distance-of-of-30-km-ajeet-takes-2/10/?pn=&prevsh=","timestamp":"2014-04-16T10:16:22Z","content_type":null,"content_length":"43937","record_id":"<urn:uuid:8c0ce049-59aa-43d5-99ac-b103fb76eed2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Biology Archive | January 16, 2010 | Chegg.com Is the slug population in Hardy Weinbergequilibrium? the slug population has 2 alleles: slime 1 (very slimy) andslime 2 ( barely slimy). They are codominant. The total population consists of 200 very slimy slugs along with400 barely slimy; there are also 400 slugs with mediumsliminess. 1. Calculate the actual population's genotypefrequencies. 2. Calculate q and p for the population. 3. Calculate expected frequencies of each genotype ifthe population is in HW equilibrium. Show all calculations ! 4. Is the population in Hardy-Weinbergequilibrium? BONUS: What University's mascot is the fighting BananaSlugs?
{"url":"http://www.chegg.com/homework-help/questions-and-answers/biology-archive-2010-january-16","timestamp":"2014-04-20T05:15:16Z","content_type":null,"content_length":"30129","record_id":"<urn:uuid:c17b83b2-be85-42e2-b822-e20a057bb9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Regents Exam Prep Center Topic Index Lesson Simplifying Rational (Fractional) Expressions Lesson Adding and Subtracting Rational (Fractional) Expressions Lesson Multiplying Rational (Fractional) Expressions Lesson Division of Rational (Fractional) Expressions Practice Practice with Simplifying Rational (Fractional) Expressions Practice Practice Adding and Subtracting Rational (Fractional) Expressions Practice Practice with Multiplication of Rational (Fractional) Expressions Practice Practice with Division of Rational (Fractional) Expressions Teacher Resource Egyptian Fractions Teacher Resource The Crumple, Solve and Toss Activity for Rational Expressions Teacher Resource Making Flash Cards for Rational Expressions
{"url":"http://www.regentsprep.org/Regents/math/algtrig/ATO2/indexATO2.htm","timestamp":"2014-04-19T14:39:12Z","content_type":null,"content_length":"6062","record_id":"<urn:uuid:13d5fc36-e8cd-48e9-b646-2678386e2ecd>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
write equation for f(x) = mx + b given f(2) = 3, f(6) = -3 lost wrote:Write an equation for each function in the form f(x) = mx+b f(2) = 3, f(6) = -3 You've been given two points, (x, f(x)) = (2, 3) and (x, f(x)) = (6, -3). Plug these points into slope formula . Then pick one of the points (it doesn't matter which one), and plug that point and the slope you just found into one of the forms of the equation of a straight line If you get stuck, please reply showing how far you have gotten in working through these steps. Thank you!
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=153","timestamp":"2014-04-17T21:39:40Z","content_type":null,"content_length":"18929","record_id":"<urn:uuid:97d869b6-24dc-4599-910e-13815167d27c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: John invests $10,000 for two years at 10% compounded annually. How much will John have after the two years?@britbrat4290 • 8 months ago • 8 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/52028b7fe4b0ebbcb9c6e189","timestamp":"2014-04-17T12:36:22Z","content_type":null,"content_length":"34813","record_id":"<urn:uuid:d632e033-656a-474e-8a76-3a251d320d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation of an Defines an ellipse and explains how to graph an ellipse in standard form. Writing Equations of Ellipses Given the Center, One Vertex and One Co-Vertex Writing Equations of Ellipses Given the Center, One Vertex, and One Focus Find the standard form of the equation of the ellipse given vertices and minor axis Find the standard form of the equation of the ellipse given foci and major axis Find the standard form of the equation of the ellipse given center, vertex, and minor axis You need to be signed in to perform this action. Please sign-in and try again.
{"url":"http://www.ck12.org/analysis/Equation-of-an-Ellipse/","timestamp":"2014-04-18T19:25:07Z","content_type":null,"content_length":"78613","record_id":"<urn:uuid:ee41c807-4476-4634-97fe-9db026e8bd61>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Solve for x: 2/5 (x - 2) = 4x anyone know??? • one year ago • one year ago Best Response You've already chosen the best response. multiply 2/5 by x and 2 first Best Response You've already chosen the best response. Thank you Lexiamee :} Best Response You've already chosen the best response. NP :)! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50bf9bc5e4b0689d52fdc977","timestamp":"2014-04-16T19:47:32Z","content_type":null,"content_length":"32285","record_id":"<urn:uuid:37c91864-b79b-41a2-affb-082b6d6b2f34>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/sylinan/answered","timestamp":"2014-04-16T22:59:53Z","content_type":null,"content_length":"108971","record_id":"<urn:uuid:ce4f89de-d582-4180-9dd4-fcc7a3f9b069>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Percent Over Last Year I want to find the percent of increase over last year. If last year was 100 and this year is 500 then the percent would be 500%. However things get tricky if last year was -100 and this year is 500 or if last year was 100 and this year is -500 then it get's screwy and I'm not sure what formula to use to handle any situation. View Complete Thread with Replies Sponsored Links: Related Forum Messages: Message To Apear It The User Selects A Year AND Month Less Than The Current Year I have two combo boxes: One for entering the Year, and one for the month. I can produce a message if the user leaves either box blank but I want a message to apear it the user selects a year AND month less than the current year (iYear) and current month (iMonth). I therefore need an AND statement between the two criteria but i dont know how to do it. '....First Checks the Comboboxes arent blank then below Checks a future month/year secection is chosen ElseIf YearBox.Value = iYear & iMonthbox < iMonth Then MsgBox ("You may not enter Data before the current Month") Else '...... Run main code here View Replies! View Related Date And Month From A Column And Year Should Take Current Year I have dates in my column “A”, for example (A1 cell =22-Mar-1971), (A2 cell=30-Dec-1965). Now my requirement is in B column date and month from A column and year should take current year. Output in B column (B1 cell =22-Mar-2009), (B2 cell=30-Dec-2009) View Replies! View Related Look Up The Date To See Which Year It Falls In And Return The Year i have the following table of information Year DOB 7 01.09.96 -31.08.97 8 01.09.95 -31.08.96 9 01.09.94 -31.08.95 10 01.09.93 -31.08.94 11 01.09.92 -31.08.93 and a list of dates i need to look up the date to see which year it falls in and return the year View Replies! View Related Line Chart With Year On Year Comparison I know that in order to draw a chart where a data line for a certain period is compared with the same period the previous year, one should have the 2 sets of data of different year side by side columnwise. However, is there a way where I could still churn out the same line chart when the data is all on a single column? View Replies! View Related Add Value From Last Year To Total On This Year In PivotTable I'd like to know a formula which can calculate the moving annual total, that is the sum of the last 4 quarters. Now every time the sales from a quarter is known, i have to recalculate the MAT View Replies! View Related Year Converted Into Decimal Year 1. I need to convert a year into a decimal year ie. 1830 into decimal year (I don't have a month, just year) 2 Year/month into decimal year/month I just not sure what to do, is the year stored as a number/text/date. What should it even look like? Does 1830 display as 1830.00 using excel. View Replies! View Related Top 90 Percent how to mimic SELECT TOP 90 PERCENT from Access in Excel? I can't use the percentile function because it interpolates the value if you don't have the right multiple of values in your array. View Replies! View Related #DIV/0! With A Percent I have a spreadsheet that determines what percent increase over a previous quarter. The values can be negative or positive; however, I have one entry that I'm trying to divide zero by a number which results in the #DIV/0! error message. I rather have it say -1000% since that is the value I'm looking for. I now how to deal with a simple division by using an IF statement such as IF(B1,A1/B1,0), but this one is throwing me a curve. The attached spread sheet is a quarterly percent increase over the last one. In the example, N00377 represents a machine in cell D14 and D17, where cell D17 is the last quarter, and I'm comparing it to cell D14 which should show an increase or decrease in cell F. View Replies! View Related Percent Formula I need a formula to show percent value in a certain way in cell D1 formula is C1 = B1-A1 but I am stuck to get the percent syntax in formula bar right. D1 = PERCENT OF B1-A1 A1 B1 C1 D1 52.5-2.50-50 % ( RED/NEGATIVE PERCENT) A1 B1 C1 D1 2.552.5050% ( BLACK/POSITIVE PERCENT) 1037.00-70% ( RED/NEGATIVE PERCENT) 3107.00 70% ( BLACK/POSITIVE PERCENT) Somehow I seem to think I need to use the Value of C1 ( which is required btw) to get a percent in D1, but not sure how it would go in one complete formula in D1 View Replies! View Related Figuring Percent % This is what I have Rate Hours =basePay plus 6% plus 7.1% total $50.00 10 $500.00 $530.00 $567.63 $567.63 What i want to have is one cell that I can Total everything. I want my spread sheet to display just rate, hours total I am having troule making the formula to display everything in the total cell View Replies! View Related SUMIF Month & Year: Find Total Cost By Month Only For Year 2009 In attached sheet, I am trying to find total cost by month only for year 2009. Currently formula I have in Cell c24, is {=SUM(IF(MONTH(B2:B9)=1,D2:D9,0))} But this calculates for all years, not just 2009. How do I modify above formula, so for each month, it shows total cost but only for 2009? View Replies! View Related Calculate Probabilities In Percent I have had a fascination with the lottery, purely hobby, and have had lots of fun over the years working different things out. The last 6 months though I have become fascinated with roulette & thought it would be a fun project to work out all sorts based around that, plus I don't have to wait for lotto results I can get instant numbers & results, however my latest attempts are hitting a brick wall! I am trying to work out (in percentages) the increasing & decreasing % of 3, 11, 12, 22, possible outcomes I have worked out the 2 possible outcomes initially for odd/even as follows At the start they both have a 48.65% chance of hitting, then whatever is hit first the percentages are 76.33% and 23.67%. If you have 2 in a row of odd/even then the percentages are 88.49% & 11.51%, 3 in a row would give you 94.40% & 5.60% etc. I have used the following formula for this (BM5 is where the totalhits for even are calc'd) ... View Replies! View Related Percent Change Calculations I want to calculate percentage changes, but sometimes my values are negative. Using the traditional (latest-first)/first I'm getting incorrect percentages because of the negative values. How can I write one formula that corrects for this? View Replies! View Related Calculating Percent Differential I am looking to find a formula that will calculate a % differential between two values so for example, In D3 there is a value of 20%, in D4 there is a value of 40%. The formula should therefore display 100% in D5. View Replies! View Related Change Total Formulas For All Tables At Once To Show Either Year-to- Date Or Total Year I have a sheet in my workbook with at least 180 small tables, there may be more. I woulds like to be able to change total formulas for all tables at once to show either year-to- date or total year. For example: If we have only progressed through the second period of the year, I would like to choose something to indicate period 2. At other time I may want to know the total year whether the periods are completed or not. View Replies! View Related Find Duplicates Within A Percent Tolerance Below is a short segment of my excel spreadsheet: A B 1020 -88.11 1021 -85.3 1021.49 -86.98 1030.04 -89.4 1030.042 -88.26 1030.94 -79.98 1049.82 -84.7 What I need to do is write a macro that will find duplicates in Column A, within a changeable tolerance, say 0.1 (10%). After finding all duplicates within a tolerance in A, I need to make another "Master" worksheet with the Duplicates from A, and their counterpart in B. So if A1 and A4 where within 10% of each other, the "Master" worksheet would contain: A1 B1 A4 B4 using the values, giving: I tried using SUMPRODUCT and some other functions but just can't seem to put my finger on this one. I'm sure it's not hard and am overlooking something. View Replies! View Related Convert Number To Percent Format I have built a spreadsheet that pulls data into B60:AA240 (Sheet name is "Actual Numbers Report") from a different sheet in the same workbook. Some of the data is in Number format and the other is in Percent Format. What I would like to do is if AL10 in the Actual Numbers Report sheet says "Actual Numbers" then I would like the cells in B60:AA240 convert to a number format "000,000,000" If AL10 says "Trends" then I want it to convert the cells in B60:AA240 to a percent format "0.0%". I tried creating some code, but it doesn't seem to work. Private Sub Convert_Percent() If Not Intersect(Target, Range("B60:AA240")) Is Nothing Then If .Range("AL9") = "Actual Numbers" Then Selection.NumberFormat = "000,000,000" ElseIf .Range("AL9") = "Trends" Then Selection.NumberFormat = "0.0%" End If End If End Sub If this can work then the 2nd question I would have is can this same line of thinking work to format the chart that this data is pulled from? So if it is Actual Numbers the chart would be in a number format and if it is Trends then it will change to a percent format? View Replies! View Related Make Numbers A Percent Of 100 I have a spreadsheet with a large list of plants. Each plant has a breakdown of colors by container size. Each cell contains a number that corresponds to a percent, e.g. a cell may contain the number 20, which would also mean this number is equal to 20%. I want to change all numbers to a percent of 100, or turn 20, for instance, into .20. There are many hundreds of numbers that I need to make a percent, so I was hoping I could do this in one fell swoop somehow. This percent number will be used in another spreadsheet for calculating on order. How do I do this? View Replies! View Related No Calculation Flag, And Percent Formula 1. In neighborhoods that have zero units in a given price range I have it to display "-" , because this unit is not actually zero, the data is not available. Therefore a #VALUE! is displayed for the percent because it cannot calculate the "-". How do I get excel to glance over "-" and flag it for no calculation? 2. For the percentages I am having to manually do them row by row. I would like to set it up in a manner that allows me to copy the formula down by column and across by row correctly. For instance in the percent for Mira Lagos I have =B4/N3 where b4 is the units for mira lagos and n3 is the total. I can drag that formula across by rowto get all the correct percentages for mira lagos price ranges only, but I cannot copy this formula down by column to any of the other neighborhoods. In otherwords I have to do a new formula for each subdivision. Grand Peninsula=B5/N3 Meadow Glen(Mansfield)=B6/N3 Again I would like to make it so I can copy the formula across by row and down by column so excel will automatically compute it. View Replies! View Related Parentheses For Negative Percent Results - I have been able to format single cells to display negative percents (Budget to Actual hours), but I cannot copy the formatting to cells with positive percents without eliminating the format style I want. [I need to display, with the parenthesis, (13.6%)for negative results, but say, 18.6% for positive results.] When I copy the correctly formatted cell (13.6%) to another cell with a positive result, it sets the display to general formating. As I have over 25 rows of data to compare against 62 projects and 12 programs, with each value potentially changing from one analysis to the other, I am looking for a method to automatically change the "look" of the results. I have looked at conditional formatting, but have had no indication this will do what I am looking for. View Replies! View Related Calculate Percent Of In Pivot Table looking for a way to run some pivot tables on a large data table. Would like the result to show some different data extraction from the same field / column. The table is customer survey results for my employees, and the fields in question can have values from 1-5. I would like to finish the pivot table with all of these fields: Row: Name (ok, that part is easy) Data fields: % of entries (column 2) that are 5 % of entries (column 2) that are 4 or 5 % of entries (column 2) that are 1 or 2 # of entries (column 2) % of entries (column 3) that are 5 % of entries (column 3) that are 4 or 5 % of entries (column 3) that are 1 or 2 # of entries (column 3) I'm hoping this is something I can do with calculated fields, but haven't been able to figure it out. So far all I have is a 'Count' function in the pivot wizard for the # of entries, but I'm not getting the % of entries at all. Column A = Name, Column B = 1st metric, Column C = 2nd metric. Fairly simple layout, but I have a small sample file I can attach if that's not explanatory enough. View Replies! View Related Sum Based On Percent Of Another Column I have two columns of numbers and want to write a formula that will sum any row in column A that is greater than 75% of the corresponding row in column B. I have tried using (SUMIF(D3:D89,"<0.75* (H3:H89)")) but am not getting any results. View Replies! View Related Calculate The Percent Of People Within Age Range In the demographics sheet, I have ages listed from row F2 to F31 with different ages. I would like to get assistance with a formula that calculates the percentage of people within these age ranges: It should be separate formulas. I'm sure if I'm given the first and last ones that I could do the others myself. Also, if I needed to know the percent of males and females, would i use the same View Replies! View Related Formula For True/false Tolerance Percent I need to be able to get a true/false from a tolerance percent. Here is an example of what I am trying to do cell a2 is Nitrogen cell b2 is (Known gas%) 2.4800% cell c2 is (unknown gas%) 2.4963% cell d2 is =b2-c2 and I get the answer no trouble there. what I need is to take the answer in cell d2 and set a plus/minus 2% tolerance in cell f2 and get a true/false comparison. View Replies! View Related VBA Userform – Convert Number To Percent In the attached sample (with macros enabled), you will find the problem when pressing the button “INDTAST DATA” (I apologize for the linguistic challenge, but the XL-sheets are in Danish… To relief – check the crash course in Danish below) and then entering some number in the two last textboxes (called “Forventet ændring i antal timer I næste kvartal (%)” and “Forventet ændring i omsætning i næste kvartal (%)”)… If you enter something there, the result will be multiplied by 100 in the worksheet. I would like to be able to simply enter a full number – like 12 or 9,5– which will then be entered into the worksheet as 12% or 9,5% (and not 1200% or 950%)… I think the answer lies in inserting some code in the VBA code, when the macro writes the data to the worksheet, but you guys know more about it than I do... I can, of course, enter a full number in the textboxes – followed by a %-sign, but that will slow down the process significantly as well as increase the risk of errors… Virksomhed = Company Kvartal = Quarter År = Year Branche = Industry Fakturerede timer = Billed hours Faktureret omsætning = Billed revenue Timeforventning = Expected hours (next quarter) Omsætningsforventning = Expected revenue (next quarter) Indtast data = Enter data View Replies! View Related Formula To Calculate Percent Difference Between Last 2 Columns See attachment. In this example, in Column D I want to calculate the percent difference between the numbers in the last 2 columns (Column B and Column C). BUT I want a formula that will automatically update if I were to insert a new column between Column C and Column D. So as a result, new numbers would go in Column D and the percent difference would now be in Column E. View Replies! View Related Multiply Numbers & Return X Percent Of Result I have an inherited formula and I am not sure if it is giving me the correct answer. It is: The result is 1.503 What I am aiming for is to get 3% and 25% of 688, deduct the results from 688 and then get 10% of that answer. Is the inherited formula correct? View Replies! View Related Formula To Calculate Percent Change, Varied By Amount Of Months I need to figure out a formula for cell F17 that will calculate a percentage change only for the months that have data in 2009. The way it is set up right now I have to go in every month and change the cell reference of the formula to include the latest data. Since the 2008 data is totally populated the formula gets messed up if I include the months of 2009 that have not yet occurred. View Replies! View Related Convert Date To Year/week Of Year/day Of Week Is it possible to format cells to convert a date format of month/day/year to = year/week #/day of week? For example, 04/05/07 (April 5, 2007) would read as 7145, (7=last digit of year/ 14 = week number / 5 = day of week....Sunday being the first day of week) View Replies! View Related Calculate Amount Of Days Paid In Advance And Apply Percent Discount Part of the assesment task is to write a formula, to work out how many days in advance the customer paid, and then apply the needed discount. I have tried several basica variations to the formula, and keep getting the same Err message. give point me in the right direction to how i can calculate amount of days paid in advance and apply a % discount? attached is the start of the assesment question. You should create and enter formulas to calculate the No. of Days paid in Advance, the Discount and the Course Fee Paid. Use a VLOOKUP function in your template to determine the discount rate to be used for the calculation of the Discount. Your template should include a separate discount table containing the following information about the discount received: • If students pay the course fee less than 7 days prior to the course commencing then they receive no discount. • If students pay the course fee 7 to 13 days prior to the course commencing then they receive a discount of 5%. • If students pay the course fee 14 to 20 days prior to the course commencing then they receive a discount of 8%. • If students pay the course fee 21 days or more prior to the course commencing then they receive a discount of 10%. View Replies! View Related Percent Change Of 100% To Show 100% In Pivot Table Not DIV/0 I want to make a calculation in a pivot table where a percent difference is calculated by year. The % difference from calculation does not show an increase from the previous year as 100% but a DIV/0 error. Can i make a custom formula that will use the year base field. View Replies! View Related Determining Top Contributors To 50% Of Sales Based On Cumulative Percent Of Sales I am trying to determine the top contributors to 50% of sales based on cumulative percent of sales (see attached file). I can determine if percent of sales is less than 50%, but I need to include the person that pushes the group of top performers over the 50% mark. View Replies! View Related Forecast An Estimated Budget Based On Original Budget And Percent Complete In the attached spreadsheet i have a budget amount, billed to date, %complete, %remaining and forecast figure. What i am trying to do is estimate the forecast spend vs the budget or billed to date and percent remaining. I am struggling with how to do this based on in some cases the budget is already overspent but the %complete is less than 100%. What i really want to do is create a forecast based on the billed to date or budget depending on which is greater and work out estimated spend based on whether the task is complete or there is still a % remaining. View Replies! View Related Converting 2 Digit Year Into 4 Digit Year I have 2 digit years (98, 99, 00, 01) that I need to convert to 4 digit years (1998, 1999, 2000, 2001). There is one year per cell. If it was simply a matter of adding 19 or 20 to the beginning of each, I could do that. But since there's a combination of both 19 and 20 that needs to be added and there all intermingled, I'm not sure how to do it. Can a rule be written to add 19 to the beginning except if the current cell starts with a 0, then add 20? The highest year is 2008 (no 2010 to deal with). 98 --> 1998 99 --> 1999 00 --> 2000 01 --> 2001 View Replies! View Related Formula Year Month To Last Day Of Month, Month And Year I'm after a formula this time ... i've searched the board and can't find what i need. a cell shows 2009 December and i'd like a formula to covert this to 31st December 2009 .... i.e. for any cell i'd like to know last day of month... and month and year .. View Replies! View Related Year Month Date To Month Date Year Code Serial No Search E220060926320061125420060612520070824620061026720061226820061127920061226 Excel tables to the web >> Excel Jeanie HTML 4 E - Year Month Date I need F column as Month Date Year Format View Replies! View Related Last Sunday Of The Year I am given the year (say 2009) in Cell A1. The requirement is to put the date of last sunday of the year (2009) in cell A2. how to do this? View Replies! View Related Month Of Year As To Why This Is Giving The Answer Of "January Of 2009"? For All Answers. RS92/27/2009January Of 2009102/28/2009January Of 2009113/1/2009January Of 2009123/2/2009January Of 2009133/3/2009January Of 2009143/4/2009January Of 2009 Spreadsheet FormulasCellFormulaS10=TEXT(MONTH(R10),"MMMM")&" Of "&YEAR(R10)S11=TEXT(MONTH(R11),"MMMM")&" Of "&YEAR(R11) Excel tables to the web >> Excel Jeanie HTML 4 View Replies! View Related Leap Year I need to ignore February 29 when subtracting 1 day from another, We have the DAYS 360 formula available .... I need a DAYS 365 sort of formula. Any ideas? For example, F5 = 5/1/2017 F4 = 9/12/1985 F5 - F4 = 11554 but I want it to be 11546 because I want to pretend February 29 never happened in any of the years between the two dates. View Replies! View Related How To Use RIGHT On TODAY To Get The Year I'm trying to let Excel know what year it is. The desired output is "2009". I tried the following. One cell (A1) has "=TODAY()" giving me the following output "2/12/2009". Now in another cell (A2) I'm putting "=RIGHT(A1,4) and the output is "9856". The format is set to general. How do I get the output to read "2009", or is there any other way I can get the current year into a cell? Another thing is that I want to identify a leap year as well. Leap years can be devided by four. I want to divide to outcome of cell A2 above by four and check if this can be divided by four. I don't know however how to put this in a formula unfortunately. To outcome has to be nothing behind the comma/dot=YES, otherwise=NO. View Replies! View Related % Done In A Calender Year I wish to be able to calculate the % of a particular task that is done in a calender year based on the task start date and duration. Columns Headings: A: Start Date B: Duration (months) C: End Date (= Start date + (duration * (365/12))) D: 2009 E: 2010 F: 2011 G: 2012, etc Start Duration End Date 2009 2010 2011 2012 2013 2014 etc 1 Jul 09 12 1 Jul 10 50% 50% 1 Nov 09 12 1 Nov 10 17% 83% 1 Nov 10 36 31 Oct 13 6% 33% 33% 28% So there are two inputs and the outputs (%'s) are calculated for each year. View Replies! View Related Grouping By Year ... Is it possible to grp data in an excel sprdsheet by year or month and also is it possible once that is done to have an option of totaling each period? On a separate point, but similar: i have a spreadsheet in one of the columns i have a unique reference eg opal.... at the beggining with some other digits eg opalmimi, or opalniuj. so i have like 20 or thirty rows (maybe more) of data . What i would like to do is to sort by the column begining with the opal wildcard and grp and subtotal each wildcard grp so my sprdsheet looks like this: Date Desc (where opal values are entered) Amount View Replies! View Related Year Calculation... Is there an in-built function within Excel that will help me ascertain what year is next year, and what year is the year before current? I am using =YEAR(TODAY()) to ascertain what year we are currently in, but cannot figure out how to go one backwards and 1 forwards? View Replies! View Related Changing The Year In my sheet I have a cell that has the year in 4 digits plus 5 other digits for incidents in our fire dept. (ie 2008#####) what I want is to have the year automatically change to 2009 on the first day of the new year. View Replies! View Related Determining Quarter And Year In cell A1 I have a date entered as text as "Apr 2007". (That's the way my tool pulls it. Format can be changed if it helps) I was able to pull the Quarter and year (Q2 2007) using... A2 ="Q" & ROUNDUP(MONTH(A1)/3,0)&" "&YEAR(A1) I need to pull the next three quarters and their year. (Q3 2007, Q4 2007, Q1 2008) View Replies! View Related Year Planner Mod I have a copy of a year planner that calculates the days of the month and adjusts them according to the year input into the header area. Would anyone please modify it so that the first column reads August and the last column reads July (instead of Jan to Dec) and still maintain the calculations as required? View Replies! View Related
{"url":"http://excel.bigresource.com/Percent-over-last-year-5Wj8ATvq.html","timestamp":"2014-04-16T10:45:02Z","content_type":null,"content_length":"68989","record_id":"<urn:uuid:38832906-85ab-4f5d-ba4b-c3e5b1f3ae8b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Modelling Forum When? Tuesday, 1.10pm-2.00am Where? Room MY336 (Seminar room 4), Aras Moyola Why? The main intention of this informal seminar series is to be a way for staff and students to dissiminate their past and current research to other academics. Topics can range from Mathematics, Physics, Statistics, Numerics, Economy to Engineering and Social Sciences. Who? Anyone with an interest in Modelling can attend and contribute. If you would like to present your own or someone else's work at one of these seminars do not hesitate to contact Louise Nolan or Petri Piiroinen. Future topics: • 02/11/10 - "Emergence of Collective Choice through Interaction and Learning" by Srinivas Raghavendra, Department of Economics, NUIG Previous topics: • 26/10/10 - "Algorithmic Trading Strategies and Human Agents in a Virtual Futures Market" by Daniel Parashiev, DERI, NUIG • 19/10/10 - "iSim+: An Equation-Based Approach for Modelling Complex Systems" by Jim Duggan, Information Technology, NUIG. • 12/10/10 - "Measuring the material properties of soft materials with the acousto-elastic effect" by Michel Destrade, School of Mathematics, Statistics & Applied Mathematics, NUIG. • 14/05/10: Workshop on impacting systems 9.30-10.30: "Discontinuity geometry" by David Chillingworth, University of Southampton 10.30-11.15: "Analysis of grazing bifurcations within an discontinuity-geometry framework" by Neil Humphries, Applied Mathematics, NUIG 11.15-12.00: "Complex dynamics in a simple impacting system" by Joanna Mason, University of Limerick • 23/02/10 - "Eigenvector sensitivity analysis in identifying dominant structure in homogenous linear systems" by Jinjing Huang, Information Technology, NUIG. • 02/02/10 - "Some Mathematical Models of Anaerobic Digestion" by Kevin Doherty, Applied Mathematics, NUIG. • 08/12/09 - "Alternating Direction Methods: quick and easy techniques for solving 2D problems" by Niall Madden, Mathematics, NUIG. • 03/11/09 - "Modelling Glycaemia in ICU Patients - A Dynamic Bayesian Network Approach" by Catherine Enright, Information Technology, NUIG • 20/10/09 - "Flexible Mixture Models for Quantile Regression" by Milovan Krnjajic, Statistics, NUIG • 13/10/09 - "Dispersion of fine settling particles from an elevated source in an oscillatory turbulent flow" by Kajal Mondal, Mathematics, NUIG • 06/10/09 - "DIVAST: An outline of its theory and implementation" by Naresh Chadha, Mathematics, NUIG • 29/09/09 - "A Reconsideration of Samuelsson's Multiplier-Accelerator Model" by Petri Piiroinen, Applied Mathematics, NUIG • 28/04/09 - "Financial Stock Scanner based on Neural Networks and Parallel Genetic Algorithms" by Daniel Parashiev, CIMRU, NUIG • 31/03/09 - "Rich dynamics in a simple model of gear rattle" by Joanna Mason, University of Limerick • 24/03/09 - "Polarization states of an electromagnetic field" by Karine Chamaillard, Department of Physics, NUIG • 03/03/09 - "Dynamics and geometry of an impact oscillator" by Neil Humphries, Applied Mathematics, NUIG • 24/02/09 - "Modelling diffusion in stents" by Martin Meere, Applied Mathematics, NUIG • 17/02/09 - "Passive Walkers - background, modelling and analysis" by Petri Piiroinen, Applied Mathematics, NUIG • 10/02/09 - "Probabilistic models of HIV-1 evolution" by Cathal Seoighe, Mathematics, NUIG • 03/02/09 - "Random Effect Models" by John Hinde" Statistics, NUIG • 16/12/08 - "Sparse Grid Methods - a cure for the curse of dimensionality?" by Niall Madden, Mathematics, NUIG • 09/12/08 - "Macroeconomic dynamics: The case of interaction between the level and distribution of income" by Srinivas Raghavendra, Department of Economics, NUIG • 02/12/08 - "Modelling a vehicle suspension: the quarter-car model" by Fredrik Svahn, Royal Institute of Technology, Stockholm, Sweden • 25/11/08 - "Waves in layered media" by Pat O'Leary, Applied Mathematics, NUIG • 18/11/08 - "Modelling Anaerobic Digestion" by Kevin Doherty, Applied Mathematics, NUIG • 11/11/08 - "Solar system dynamics and applications" by Thomas Waters, Applied Mathematics, NUIG • 04/11/08 - "An introduction to nonsmooth systems" by Petri Piiroinen, Applied Mathematics, NUIG
{"url":"http://www.maths-physics.nuigalway.ie/documents/seminars.html","timestamp":"2014-04-20T01:47:12Z","content_type":null,"content_length":"20510","record_id":"<urn:uuid:e59206e1-4c66-4543-b4e1-5814053bf4ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help April 25th 2009, 06:31 PM #1 Dec 2008 Let R be the region bounded by the graph of y=(1/x)(ln(x)), the x axis, and the line x=e. Find the area of the region. So i found that (1/x)(lnx) intersects the x axis at 1 so to solve for the area we would find the intergral of (1/x)(lnx) from 1-e. How would i find the antiderivative of (1/x)(lnx) and is my method right? This is a Calc AB problem Let R be the region bounded by the graph of y=(1/x)(ln(x)), the x axis, and the line x=e. Find the area of the region. So i found that (1/x)(lnx) intersects the x axis at 1 so to solve for the area we would find the intergral of (1/x)(lnx) from 1-e. How would i find the antiderivative of (1/x)(lnx) and is my method right? This is a Calc AB problem Integrate the function from 1 to e. use the substitution $\ln x = u$ $\frac{1}{x}dx = du$ The integral comes out $\frac{(\ln x)^2}{2}$ Finish it. See the attached graph I misunderstood this problem. when they said $x=e$ for some reason i thought they meant $x=e^y$ so then i tried to solve for y. now it makes sense. so the final product would be $\int_1^e$$\frac{1}{x}\ln(x)$ = $.5$ April 25th 2009, 06:42 PM #2 Aug 2008 April 26th 2009, 07:01 AM #3
{"url":"http://mathhelpforum.com/calculus/85654-area.html","timestamp":"2014-04-21T15:51:34Z","content_type":null,"content_length":"36606","record_id":"<urn:uuid:f663925d-bb8b-44bd-a229-215a6eac014e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Dividing fractions help!!! Dividing fractions help!!! hey i've been taking a C++ class for half a school year and i need help on writing thois program for HW....the question is....Given Fractions a/b and c/d (a,b,c,d integers, b+d not zero) print in lowest terms the quotient of a/b divided by c/d.....first test if b or d=0................since i have only learned basic stuff i'd have to do this in void functions with parameters and also loops. please help by writing this program for me as basic as you can.........thanks tell ya wat, try and do it yourself, show some effort, then come back... well i have sorta an idea wat to do........i need a do while loop to make sure b and d ==0 and i need. how do i divide fractions tho? and how would i check to see if i was in lowest terms? you probably know this already. But to divide functions, you would write it like any other math function in C++ e.g a=4/3; except now your using variables. as for your test statement, I'm not sure why you have to use a while loop, but you could just use an if statement. hope that helped. Good luck to give you a hint, if you forgot, dividing fractions can be done by multiplying reciprocals. by the way, people on this board won't do your homework, try it, post code, and we'll be glad to help, but to write the program for you is not an option, and it is also cheating.
{"url":"http://cboard.cprogramming.com/cplusplus-programming/32769-dividing-fractions-help-printable-thread.html","timestamp":"2014-04-20T06:02:52Z","content_type":null,"content_length":"8052","record_id":"<urn:uuid:af34f19f-1ba4-431f-8070-4c2150f0a08a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A steam engine has an efficiency of 20%.It is given an energy of 1000 cal.per minute.What is the actual work done by it in joule and in calories? a) 100 cal,800 J b) 200 cal,873 J c)10 cal,80 J d)100 cal,100 J • one year ago • one year ago Best Response You've already chosen the best response. @amistre64 @ParthKohli @hartnn @Callsito @Zarkon @Hero @.Sam. @estudier @ganeshie8 @ajprincess @Algebraic! Best Response You've already chosen the best response. Best Response You've already chosen the best response. im not sure since we havent gone over this in my physics class yet, but i would say that 20% efficiency means that 20% of the 1000 cals are used; giving us 200 cals in the answer at least Best Response You've already chosen the best response. May I help? Best Response You've already chosen the best response. please do :) Best Response You've already chosen the best response. ofcourse @SheldonEinstein Best Response You've already chosen the best response. ;) OK so first of all what @amistre64 discussed in his previous comment is very imp. to do this quest., so @mayankdevnani do you know what does efficiency mean? Best Response You've already chosen the best response. ok @amistre64 then Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. right @SheldonEinstein Best Response You've already chosen the best response. Yeah or in simpler words it is : \(\large{\frac{\textbf{Output}}{\textbf{Input}}\times 100}\) it is 100%... Best Response You've already chosen the best response. Best Response You've already chosen the best response. So, what we have is output/input * 100% = 20% , correct? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Ok so now, let me understand the second line of the quest. , is the output = 1000 calorie ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. @SheldonEinstein then Best Response You've already chosen the best response. Oh sorry wait! Best Response You've already chosen the best response. Best Response You've already chosen the best response. I am getting diff. answer so I am checking it, sorry , it will require some patience. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Quest. is wrong Best Response You've already chosen the best response. why?? no it is right Best Response You've already chosen the best response. Time is not given.. though if you take 1 minute it is 200 cal. Best Response You've already chosen the best response. Best Response You've already chosen the best response. See , Power = 1000 cal/minute (given) Best Response You've already chosen the best response. use this and you will get 200 cal. as answer. Best Response You've already chosen the best response. so what's the work done? Best Response You've already chosen the best response. W = P * time if T = 1 second , W = P ... that's it , find P(output) .. (using efficiency = Ouput/Input *100%) in Cal. per minute , that will be your answer Best Response You've already chosen the best response. 200 cal.... = WORK DONE Best Response You've already chosen the best response. Best Response You've already chosen the best response. but the work done is in joule Best Response You've already chosen the best response. @ParthKohli @Preetha Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Is my answer correct, i.e. 200 cal, 873 J ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. So where is the problem? ?? ? ? Best Response You've already chosen the best response. 873 J how?? Best Response You've already chosen the best response. Best Response You've already chosen the best response. @SheldonEinstein how you got it 873J Best Response You've already chosen the best response. I got 200 Cal, and it is in options, soo.... Best Response You've already chosen the best response. but it's wrong Best Response You've already chosen the best response. i need solution Best Response You've already chosen the best response. can you help me @amriju Best Response You've already chosen the best response. 200 cal * 4.184 = 836 J (approx. ) not equal to 873 Joule , hence I think the options are wrong.. Best Response You've already chosen the best response. no it was not wrong Best Response You've already chosen the best response. work done is 20 percnt of the energy given...thats efficiency....so per minute is 20percnt of 1000 cal...thats 200 cal...convert to joules Best Response You've already chosen the best response. so 1cal.=? j @amriju Best Response You've already chosen the best response. 4.2...approx..the correct is 4.187... Best Response You've already chosen the best response. is it 4.37 Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. thats how it is defined...both are units...and 1 cal is defined as 4.187 joules... Best Response You've already chosen the best response. That is what I was saying @amriju Best Response You've already chosen the best response. in my solution book...the answer is as follows- \[\frac{20}{100}=\frac{W}{1000}=200 calorie\] OR \[200\times4.37=873J\] Best Response You've already chosen the best response. is that right?? @amriju and @SheldonEinstein Best Response You've already chosen the best response. That is wrong, 1 Cal. = 4.187 Joule NOT 4.37 joule... Best Response You've already chosen the best response. ok... thnx... @SheldonEinstein and @amriju Best Response You've already chosen the best response. u may jst check on wikipedia...and its seems that none of the options are correct...the closest being 200, 837 Best Response You've already chosen the best response. if your options are truely: 200,873 then its most likely a typo and prolly meant to be read as 200,837 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/508a76b1e4b077c2ef2e2c45","timestamp":"2014-04-18T21:09:37Z","content_type":null,"content_length":"178778","record_id":"<urn:uuid:b430f3f4-b59d-4604-b227-1e2acdc29299>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Multivariate analysis of variance Multivariate analysis of variance or muliple analysis of variance (MANOVA) is a statistical test procedure for comparing multivariate (population) means of several groups. Unlike univariate ANOVA, it uses the variance-covariance between variables in testing the statistical significance of the mean differences. It is a generalized form of univariate analysis of variance (ANOVA). It is used when there are two or more dependent variables. It helps to answer: 1. do changes in the independent variable(s) have significant effects on the dependent variables?; 2. what are the interactions among the dependent variables? and 3. among the independent variables?^[1] Statistical reports, however, will provide individual p-values for each dependent variable, indicating whether differences and interactions are statistically significant. Where sums of squares appear in univariate analysis of variance, in multivariate analysis of variance certain positive-definite matrices appear. The diagonal entries are the same kinds of sums of squares that appear in univariate ANOVA. The off-diagonal entries are corresponding sums of products. Under normality assumptions about error distributions, the counterpart of the sum of squares due to error has a Wishart distribution. Analogous to ANOVA, MANOVA is based on the product of model variance matrix, $\Sigma_{model}$ and inverse of the error variance matrix, $\Sigma_{res}^{-1}$, or $A=\Sigma_{model} \times \Sigma_{res}^ {-1}$. The hypothesis that $\Sigma_{model} = \Sigma_{residual}$ implies that the product $A \sim I$.^[2] Invariance considerations imply the MANOVA statistic should be a measure of magnitude of the singular value decomposition of this matrix product, but there is no unique choice owing to the multi-dimensional nature of the alternative hypothesis. The most common^[3]^[4] statistics are summaries based on the roots (or eigenvalues) $\lambda_p$ of the $A$ matrix: • Samuel Stanley Wilks' $\Lambda_{Wilks} = \prod _{1...p}(1/(1 + \lambda_{p})) = \det(I + A)^{-1} = \det(\Sigma_{res})/\det(\Sigma_{res} + \Sigma_{model})$ distributed as lambda (Λ) • the Pillai-M. S. Bartlett trace, $\Lambda_{Pillai} = \sum _{1...p}(1/(1 + \lambda_{p})) = \mathrm{tr}((I + A)^{-1})$ • the Lawley-Hotelling trace, $\Lambda_{LH} = \sum _{1...p}(\lambda_{p}) = \mathrm{tr}(A)$ • Roy's greatest root (also called Roy's largest root), $\Lambda_{Roy} = max_p(\lambda_p) = \|A\|_{\infty}$ Discussion continues over the merits of each, though the greatest root leads only to a bound on significance which is not generally of practical interest. A further complication is that the distribution of these statistics under the null hypothesis is not straightforward and can only be approximated except in a few low-dimensional cases.^[citation needed] The best-known approximation for Wilks' lambda was derived by C. R. Rao. In the case of two groups, all the statistics are equivalent and the test reduces to Hotelling's T-square. Correlation of dependent variables[edit] MANOVA is most effective when dependent variables are moderately correlated (0.4–0.7). If dependent variables are too highly correlated it could be assumed that they may be measuring the same See also[edit] External links[edit]
{"url":"http://blekko.com/wiki/MANOVA?source=672620ff","timestamp":"2014-04-19T23:40:21Z","content_type":null,"content_length":"63497","record_id":"<urn:uuid:70021501-0e4e-4653-86dd-f40038143f17>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: 15-8. Up: Energy I: Week 1 Previous: 15-2. Consider, for concreteness, the system to be a mass on a horizontal spring. The mass then takes half of it's periodic motion. Thus the period will be twice this amount of time or Finally, the amplitude of the motion is the distance from equilibrium to on of the end points. Since the distance between end points is 36 cm, the amplitude is half this amount, or 18 cm. Dan Cross 2006-10-18
{"url":"http://www.physics.drexel.edu/~dcross/teaching/energy1/week1/node2.html","timestamp":"2014-04-21T12:34:24Z","content_type":null,"content_length":"2761","record_id":"<urn:uuid:1a4ba065-69a7-4e72-a9fc-e05ac5209c17>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential equation February 8th 2009, 12:10 PM Differential equation For what $nonzero$ values of $k$ does the function $y=sinhkt$ satisfy the DE $y''-25y=0$? I got $k=5$ and $k=-5$ which I think is correct, but I'm struggling with the other two parts of the question: For those values of $k$, show that every member of the family of functions $Asinhkt+Bcoshkt$ (where $A$ and $B$ are constants) is also a solution of the DE. Come up with a second order DE for which $y=sinkt$ is a solution for the same values of $k$ obtained before. February 8th 2009, 03:33 PM Yes, it is. If y= sinh(kt) then y'= k cosh(kt) and y"= k^2 sinh(kt). Putting that into the differential equation, y"- 25y= k^2 sinh(kt)- 25 sinh(kt)= (k^2- 25)sinh(kt)= 0 for all t. Since sinh (kt) is not alway7s 0 itself, we must have k^2- 25= 0 so k= 5 or k= -5. but I'm struggling with the other two parts of the question: For those values of $k$, show that every member of the family of functions $Asinhkt+Bcoshkt$ (where $A$ and $B$ are constants) is also a solution of the DE. Just DO it! With k= 5, those functions are $A sinh(5t)+ B cosh(5t)$. Find the second derivative, plug it into the equation and see what happens! With k= -5, you haved $A sinh(-5t)+ B cosh(-5t)$. Since sinh is an odd function and cosh is an even function, that is just $-A sinh(5t)+ B cosh(5t)$ which is really the same thing since A itself can be positive or negative. [/quote]Come up with a second order DE for which $y=sinkt$ is a solution for the same values of $k$ obtained before.[/QUOTE] So now youy are working with $y= sin(5t)$. Take the second derivative, stare at it and think!(Clapping)
{"url":"http://mathhelpforum.com/differential-equations/72506-differential-equation-print.html","timestamp":"2014-04-17T20:27:01Z","content_type":null,"content_length":"10323","record_id":"<urn:uuid:c6048021-2666-49a9-958c-02e75d6b25ec>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational and mathematical challenges involved in very large-scale phylogenetics Seminar Room 1, Newton Institute Phylogenetic inference presents enormous computational and mathematical challenges, but these are particularly exacerbated when dealing with very large datasets (containing thousands of sequences) or when sequences evolve under complex models of evoluiton. In this talk, I will describe some of the recent progress in large-scale phylogenetics. In particular, I will talk about multiple sequence alignment and its implications for large-scale phylogenetics. Related Links The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/PLG/seminars/2007090310001.html","timestamp":"2014-04-20T05:49:59Z","content_type":null,"content_length":"6725","record_id":"<urn:uuid:fb35a808-eb50-481c-b19e-23206beb42d2>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization Problem April 15th 2010, 07:56 PM #1 Apr 2010 Optimization Problem I'm doing good with calculus but I'm having trouble deciphering optimization problems. Two currently: Find the area of the largest rectangle that can be inscribed in a right triangle with legs of length 3cm and 4cm if two sides of the rectangle lie along the sides. (leg refers to a side that is not the hypotenuse) Here I dont even know what functions to use.. A = L*W (square) A= 1/2 * (x+3) * (y+4) (triangle) Another one is: A box with square base and open top must have a volume of 32000 cm^3 Find the dimensions of the box that minimize the amount of material. V = l*w*h here I assume square means l and w are equal. So 32000 = h * w^2 Basically I can't get past the wording and formula formation. I can get past the derivative and finding the critical points once I can create a formula.. Any help would be much appreciated. I'm doing good with calculus but I'm having trouble deciphering optimization problems. Two currently: Find the area of the largest rectangle that can be inscribed in a right triangle with legs of length 3cm and 4cm if two sides of the rectangle lie along the sides. (leg refers to a side that is not the hypotenuse) Here I dont even know what functions to use.. A = L*W (square) A= 1/2 * (x+3) * (y+4) (triangle) Another one is: A box with square base and open top must have a volume of 32000 cm^3 Find the dimensions of the box that minimize the amount of material. V = l*w*h here I assume square means l and w are equal. So 32000 = h * w^2 Basically I can't get past the wording and formula formation. I can get past the derivative and finding the critical points once I can create a formula.. Any help would be much appreciated. For any of these type of optimisation problems, the procedure is as follows: Draw a diagram, label variables. Write a formula for the thing you are trying to optimise eg in your first problem it is area, in your second it is surface area). Use any given information to get your formula in terms of one variable only. Then you sound like you know what to do....which is a good thing! Start with the second one first (it's actually a bit easier). Draw a diagram of a square based prism. label the width x, length x and height h. (or any other letters you wish) So V = h*x^2 (you had this already - i prefer to use x) Now you know that V = 32000 so 32000 = h*x^2 (you already knew that). This also means that h = 32000/x^2. Now this seems to be where you get stuck. What are you trying to optimise? Answer: Surface area So...get a formula for surface area from your diagram (remember it has an open top) so SA = ................(at this stage this will involve both x and h). Formula with 2 variables??? - a problem. So ...use the fact that h = 32000/x^2 to get the formula in terms of one variable x. Then you know what to do!! April 15th 2010, 08:15 PM #2 Senior Member Oct 2009
{"url":"http://mathhelpforum.com/calculus/139433-optimization-problem.html","timestamp":"2014-04-16T11:42:33Z","content_type":null,"content_length":"34069","record_id":"<urn:uuid:58e0adb2-0517-46e6-aae0-659d3cef758d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Mountlake Terrace Prealgebra Tutors ...In my personal studies I have completed math classes through calculus two. I have completed the WyzAnt geometry qualification quiz. I have privately tutored geometry for several students over the past 5 years. 4 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...As much as possible I like to help people learn by doing. If there are problems to solve, we would work through some examples together to make sure your thought processes are going in the right direction. My favorite subject is biology. 6 Subjects: including prealgebra, chemistry, biology, algebra 1 ...I have had much experience with students of all needs, backgrounds and abilities, and have been met with success in reaching the students and helping them to feel successful. My methods of teaching are forever adaptable in that it is my number one goal to reach the student and work with them in ... 11 Subjects: including prealgebra, reading, writing, geometry ...This is where the concepts for calculus are truly laid and any shakiness in understanding the topics here lead to hesitation in moving on to Calculus. This where I show students where the graphical representation and equation come together so that the visualization carries on with them when they... 16 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...For the past 3 years, I have been a First Years Program Leader on campus, essentially guiding freshmen through the various challenges and concerns they have upon entering college. I have taught at a "Read 'n' Lead" program at my local library where I read to and helped elementary school students... 42 Subjects: including prealgebra, reading, English, calculus
{"url":"http://www.algebrahelp.com/Mountlake_Terrace_prealgebra_tutors.jsp","timestamp":"2014-04-16T07:15:56Z","content_type":null,"content_length":"25349","record_id":"<urn:uuid:6f3a9bc8-0a12-4e20-b801-77ced10ede07>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with logarithm/exponential functione problems(2) April 8th 2010, 08:49 PM #1 Apr 2010 I'm in pre-calculus and I'm completely clueless with these 2 problems. 1) The radioactive isotope of carbon-14c has a half life of 5730 years. A) What is the decay constant of carbon-14c B) If we start with a sample of 1000 carbon14 nuclei, how many will be left in 22,290 years time. 2) A Piece of charcoal of mass 25g is found in the ruins of an ancient city. The sample shows a carbon-14 activity or R(t)= 4.167 decays/second. A) Convert the decay constant of carbon-14c from (1)(a) in terms of seconds instead of years. B) Find the number f remaining atoms of N(t) using the constant you found in (a) C) Suppose that the initial number of carbon-14c nuclei before decay is 1.63x10^12. What was the initial rate of decay ( or R initial) D) How long has the tree that this charcoal came from been dead? Okay so I'm given these equations Last edited by mr fantastic; April 10th 2010 at 09:30 PM. Reason: Restored deleted questions. I'm in pre-calculus and I'm completely clueless with these 2 problems. 1) The radioactive isotope of carbon-14c has a half life of 5730 years. A) What is the decay constant of carbon-14c B) If we start with a sample of 1000 carbon14 nuclei, how many will be left in 22,290 years time. 2) A Peice of charcoal of mass 25g is found in the ruins of an ancient city. The sample shows a carbon-14 activity or R(t)= 4.167 decays/second. A) Convert the decay constant of carbon-14c from (1)(a) in terms of seconds instead of years. B) Find the number f remaining atoms of N(t) using the constant you found in (a) C) Suppose that the initial number of carbon-14c nuclei before decay is 1.63x10^12. What was the initial rate of decay ( or R initial) D) How long has the tree that this charcoal came from been dead? Okay so I'm given these equations So the equation for part 1, Since you know that your initial amount is 1/2 at time t=5730, you can assume values for your initial and final values i'll use 10 and 20 as an example $1/2N(t)=Ne^(-ct)<br /> (1/2)20=20e^(-c*5730)$ Then use properties of logs to solve for c and this will be the decay rate Part 2, use that decay rate to determine the amount left after t=22,290 So the equation for part 1, Since you know that your initial amount is 1/2 at time t=5730, you can assume values for your initial and final values i'll use 10 and 20 as an example $1/2N(t)=Ne^(-ct)<br /> (1/2)20=20e^(-c*5730)$ Then use properties of logs to solve for c and this will be the decay rate Part 2, use that decay rate to determine the amount left after t=22,290 Uhm, I haven't learned that method or not used to doing that. Or do I just plug in the numbers? Last edited by mr fantastic; April 9th 2010 at 03:05 PM. Reason: Merged posts April 8th 2010, 09:14 PM #2 Junior Member Jan 2010 April 8th 2010, 09:45 PM #3 Apr 2010
{"url":"http://mathhelpforum.com/pre-calculus/138042-need-help-logarithm-exponential-functione-problems-2-a.html","timestamp":"2014-04-21T07:30:08Z","content_type":null,"content_length":"38318","record_id":"<urn:uuid:250741ee-462c-4e3a-8237-88ccdd018975>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Relative increase in entropy is always less than relative increase in maximum entropy August 3rd 2010, 07:02 PM Relative increase in entropy is always less than relative increase in maximum entropy Suppose we have a set of $N$ states out of which probabilities are calculated based on a frequency approach where N is the grand total, and the following entropy function is given: $<br /> H_1 = -\sum_{i=1}^{N}p_i \log_2 p_i .<br />$ In this case, the maximal entropy is $H_{{max}_{1}}=\log_2 N$. Now, suppose we increase the number of states from $N$ to $M$, $M>N$, and we re-evaluate the probabilities. Now, we get the following entropy function: $<br /> H_2 = -\sum_{i=1}^{M}q_i \log_2 q_i .<br />$ Now, the maximal entropy is $H_{{max}_{2}}=\log_2 M$. Based on information theory, increasing the number of states (from N to M) will increase the entropy. My question is related to the conclusion given as the title of this thread: Does the following inequality hold: $<br /> \frac{H_2 - H_1}{H_1} < \frac{H_{{max}_{2}}-H_{{max}_{1}}}{H_{{max}_{1}}}<br />$ That is, is the relative increase on entropy less than the relative increase in maximum entropy when increasing the number of states?
{"url":"http://mathhelpforum.com/advanced-statistics/152736-relative-increase-entropy-always-less-than-relative-increase-maximum-entropy-print.html","timestamp":"2014-04-21T16:02:42Z","content_type":null,"content_length":"6070","record_id":"<urn:uuid:ba5deaed-a56c-44a6-aa39-feb1212b8df1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Continuity of Functions Examples Continuity of Functions Examples Continuity at a Point via Pictures The Pencil Rule of ContinuityA continuous function is one that we can draw without lifting our pencil, pen, or Crayola crayon.Here are some examples of continuous functions:If a function is Continuity at a Point via Formulas It's good to have a feel for what continuity at a point looks like in pictures. However, sometimes we are asked about the continuity of a function for which we're given a formula, instead of a Functions and Combinations of Functions Many functions are continuous at every real number x. These functions include (but are not limited to):all polynomials (including lines) ex sin(x) and cos(x)It's helpful... Continuity on an Interval via Formulas When we are given problems asking whether a function f is continuous on a given interval, a good strategy is to assume it isn't. Try to find values of x where f might be discontinuous. If we're Continuity on Closed and Half-Closed Intervals When looking at continuity on an open interval, we only care about the function values within that interval. If we're looking at the continuity of a function on the open interval (a,b), we don't i... Determining Continuity When we say a function f is continuous, we usually mean it's continuous at every real number. In other words, it's continuous on the interval (-∞, ∞).Some examples of continuous functions that... Boundedness Theorem: A continuous function on a closed interval [a,b] must be bounded on that interval.There are two numbers - a lower bound M and an upper bound N - such that every value of f on Extreme Value Theorem Maximum and Minimum ValuesThe maximum value of a function on an interval is the largest value the function takes on within that interval. Similarly, the minimum value of a function on an inter... Intermediate Value Theorem Intermediate Value Theorem (IVT): Let f be continuous on a closed interval [a,b]. Pick a y-value M with f(a)
{"url":"http://www.shmoop.com/continuity-function/examples.html","timestamp":"2014-04-16T04:12:14Z","content_type":null,"content_length":"27575","record_id":"<urn:uuid:d0eb1fcb-016a-4284-817b-a56a9a01bd21>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 1,174 is it true or false External motivation is not desirable Internal motivation is 100.0 g of Potassium Phosphate reacts with 100.0 g of Barium Chloride. Which molecule is completely used up by the reaction? How many grams of the other molecule are left over? When I started with 100.0 g of K3PO4 I got 147.1g of BaCl2. When I started with 100.0g of BaCl2 I go... 100.0 g of Potassium Phosphate reacts with 100.0 g of Barium Chloride. Which molecule is completely used up by the reaction? How many grams of the other molecule are left over? physics plz help i tried but I am getting incorrect answer physics plz help there were options. physics plz help equation that i can integrate physics plz help I need help in calculating work. following quantites are given. * G: 6.67E-11 *Mass of earth:5.98E24 *mass of satelightt:80kg *x(initial) center of the earth-7E6 m *x(final) center of the earth 12E6 m How do I make integral equation I need use 20 interval or calculate in excel... I need help in calculating work. following quantites are given. * G: 6.67E-11 *Mass of earth:5.98E24 *mass of satelightt:80kg *x(initial) center of the earth-7E6 m *x(final) center of the earth 12E6 m How do I make integral equation I need use 20 interval. i need help with chapter 10 and 11 sentence check 2 thank you! In a predator/prey model the predator population is modeled by the function y=1000 sin (2t) +400. What is the smallest population according to this model? thank you!! How do I find a rectangular-coordinate equation for the curve by elimination the parameter? x=2 sin t, y= 4 cos t so far I have x= r cos(theta) and y= r sin (theta) but, it's the opposite. please Thanks! :D Jill is jumping on a trampoline in her backyard. When will she have both potential and kinetic energy? A. Halfway between the highest point and the lowest point of a jump B. At the highest point of a jump C. At the lowest point of the jump D. Just before she begins a jump 5 - (16 tenths) = 3.4 can you not help me? chemistry is my weak subject hence why i am on here to get help. I have online questions that need answering in a day and im stuck that's the truth. I have to fnd someone that I might have to pay then looks like it... :( I really don't have a clue 3PbO2 + 2Cr3++ 2H2O3Pb2+ + 2CrO42-+ 4H+ In the above redox reaction, use oxidation numbers to identify the element oxidized, the element reduced, the oxidizing agent and the reducing agent. 1.name of the element oxidized: 2.name of the element reduced: 3.formula of the oxidiz... SO42- + HAsO2+ 2H+H3AsO4 + SO2 In the above redox reaction, use oxidation numbers to identify the element oxidized, the element reduced, the oxidizing agent and the reducing agent. 1. name of the element oxidized: 2. name of the element reduced: 3. formula of the oxidizing ag... it's not duplicate they are all different questions but similar 3PbO2 + 2Cr3++ 2H2O3Pb2+ + 2CrO42-+ 4H+ 1. In the above reaction, the oxidation state of chromium changes from (answer) to (answer). 2. How many electrons are transferred in the reaction? SO42- + HAsO2+ 2H+H3AsO4 + SO2 1. In the above reaction, the oxidation state of sulfur changes from (Answer) to (Answer) 2. How many electrons are transferred in the reaction? i dont know much about chemitry thats why its confusing I don't know how to calculate them sorry i am stuck on many questions thats why used diff names. i need help and be grateful if u can help but most questions I dont know how to work it out. But aren't there only 2 angles? I'm confused. But how do you get 60? The directions that came with the kite pattern that Andre ordered on the internet said to cut out 2 supplementary angles so that the measure of one is twice the measure of the other. How many degrees are in each angle? Justify your answer. Thanks! for the info. so the answer is badminton is played with a Birdie & a racket. 2. beach paddle ball ? The sport of Association football is quite different from football in American culture. In the statements provided, select the one in which association football and american football are not similar. having players wear cleated shoes having players kick the ball during the gam... Sport Strategies ~ Football Sport Strategies ~ Football My answer is c. using a ball that is not a sphere. ^^^^^ Sport Strategies ~ Football The sport of Association football is quite different from football in American culture. In the statements provided, select the one in which association football and american football are not similar. a. having players wear cleated shoes b. having players kick the ball during t... Math ~CHECK MY ANSWERS~ 1) Which of these is a rational number? a. Pi b. square root 3 ****** c. square root 2 d. 1.3 (the # 3 has a line at the top) 2) Which of the following sets contains 3 irrational numbers? A. square root 120, n, square root 3 ****** b. - square root 256, 1/9, 1/12 c. 3.14, -47,... Grammar ***NEVER MIND*** The answer is c) Introduction, body, conclusion . Grammar ***PLEASE ANSWER*** Is it C ? Grammar ***CHECK MY WORK PLEASE*** Is it C ? Please answer? Grammar ***CHECK MY WORK PLEASE*** What are the three main parts of a research report? a) Title, body, conclusion ********* b) Introduction, body, sources c) Introduction, body, conclusion d) Introduction, evidence, conclusion Grammer ~CHECK MY WORK PLEASE~ Grammer ~CHECK MY WORK PLEASE~ Which of the following sentences is correct? a) Mr. and Mrs. Smith goes on vacation. b) Mr. and Mrs. Smith has gone on vacation. c) Mr. and Mrs. Smith have gone on vacation. ********** d) Mr. and Mrs. Smith is going on vacation. Language Arts ~CHECK MY WORK PLEASE~ Thanks! for checking :) Language Arts ~CHECK MY WORK PLEASE~ OK, THANKS!FOR THE HELP GUYS! Language Arts ~CHECK MY WORK PLEASE~ OOPS SORRY The railway lines has been linking more and more cities together. Language Arts ~CHECK MY WORK PLEASE~ 1. What can you predict about the poem by previewing the title, subtitle, and first line of the poem "In response to executive order 9066"? a.) It will be a letter to the Japanese-American citizens in which they are told they must leave their homes. b) It will be a l... csc^4-cot^4= 1+cos^2/sin^2 i need help! please help me social studies Thanks Ms. Sue! social studies why Washington, D.C. was chosen as the United States capital? NO LINKS PLEASE!!!! social studies the U.S. signed a peace treaty with the British treaty of pairs there both the same thing or meaning? social studies ok thnx. sorry for the comment ^^^^ Ms. Sue !!!!!!!!!!!!! ^^^^^ Sorry! social studies Ms. Sue !!!!!!!!!!!!! social studies Month: April ? Year: 1803 ? social studies Ms. Sue is this right? Month: October ? Year: 1803 ? social studies What month did Thomas Jefferson bought the Louisiana purchase? social studies oh yes sorry. social studies other reasons why? social studies ok, thnx oh and one more. Why did Thomas Jefferson send Lewis and Clark on their expedition to the west? (no links please, i need to know why?) social studies is this correct and true? Why did Jefferson send Lewis and Clark on an expedition out west? 1. To explore the Louisiana purchase 2. Find the northwest passage 3. To research the plants and animals of the Louisiana Purchase social studies Thnx both. social studies why Lewis and Clark went on the expedition? why was the Lewis and Clark expedition important? Math ~CHECK MY WORK PLEASE~ OK I understood, Thnx for answering fast! Math ~CHECK MY WORK PLEASE~ Hello, Please check my answers, with stars (*) are my answers! Please check if they are wrong please tell the right answer and why it is right (explain it) so I can understand why! Thanks! For questions 1-2, solve the inequality. 1.) x + 8 < -28 a~ x < -36 ****** b~ x &l... Ok, I under stand thnx. Ms.Sue! a.) Solve a - 9 = 20 b.) Solve b - 9 > 20 c) How is solving the equation in part a similar to solving the inequality in part b? d.) How are the solutions different? Victor Malaba has a net income of $1,240 per month. If he spends $150 on food, $244 on a car payment, $300 on rent, and $50 on savings, what percent of his net income can he spend on other things? A cheetah can accelerate from rest to 24.1 m/s in 2.02 s. a) Assuming the acceleration is constant over the time interval, what is the magnitude of the acceleration of the cheetah? b) What is the distance traveled by the cheetah in these 2.02 s? c) A runner can accelerate from... algebra 1 This homework question was removed due to a copyright claim submitted by the National Network of Digital Schools Management Foundation (NNDS). How do I graph it? SO I PUT number of lawns x=4 2 2 3 1 and number of hrs y= 3 5 2 5 1 ? This homework question was removed due to a copyright claim submitted by the National Network of Digital Schools Management Foundation (NNDS). How do I graph it? Is Tn and Co an element or a compound? In golf, the average score a food player should be able to achieve is called a par. par for a whole course is calculatedby adding up the par scores for each whole. Scores in golf are often expressed at some number either greater than or less than par. ms. Floop is having a pre... organic chemistry 1if you open the tert-butyl chloride and tert-butyl bromide to the air for few hours? What happened to the ratio of the halide mixture during this time? (2-methyl-2-propanol reaction ) Julie has been offered two jobs. The 1st one pays $400 per week. The 2nd job pays $175 per week plus 15% commision of her sales. How much will she have to sell i n order for the 2nd job to pay as much as the first? how do you do this? i dont know the steps to. If sodium,hydrogen,oxyegen,and chrlorine are mixed to gether is it a compund or a mixture? if you do 70*1.0/100 would the answer be .7? Thanks soooo much! I think I'm doing the process wrong since I know the answer isn't 20 but thats the only thing I'm getting. 11-15f=24 So is it 280 or -280? Another problem with my checking. The teacher says the answer is 280 but when I try to check it I never get the right one (24-d)/16=19 Thank you! I still can't understand this one.In class we got 42 but I got 48 and when I check it none of them match up 4/8y-21=3 What is y? 11-15f=24 What is f? My answer was 328 (24-d)/16=19 WHat is d? (b+15)/7=14 What is b? 10 2/3 social studies what is the name given to a curved of hooked area of land that extends into a sea of ocean social studies what would be the next label after 75 degrees north social studies I don't understand this question name the parallels that are labeled north of the equator The value of ∆Ho for the reaction below is -1107 kJ: 2Ba(s) + O2(g) → 2BaO(s) How many kJ of heat are released when 15.75 g of Ba(s) reacts completely with oxygen to form BaO(s)? A) 35.1 B) 114 C) 70.3 D) 20.8 E) 63.5 Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=JENNY","timestamp":"2014-04-16T05:22:59Z","content_type":null,"content_length":"23726","record_id":"<urn:uuid:c990a266-8885-4c17-86fc-af2c70eee959>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Concentration of Gaussian vectors up vote 3 down vote favorite If $f: \mathbb{R}^n \to \mathbb{R}$ is a Lipschitz function and $X$ is a standard $n$-dimensional Gaussian vector with $\mathbb{E} f(X) = 0$, then $f(X)$ is subgaussian (in a way that does not depend on $n$). If $f$ is $\mathcal{C}^1$, this is equivalent to saying that $|\nabla f|$ bounded implies $f(X)$ is subgaussian. There seem to be two natural generalizations of this. The first is to ask for weaker bounds on $|\nabla f|$. For example, if $|\nabla f|$ is subgaussian, then $f$ should be subexponential. The second generalization concerns functions $f: \mathbb{R}^n \to \mathbb{R}^k$. If I want to control $|f|$ independently of $k$, it is no longer enough to assume that $f$ is Lipschitz, since for the function $f(x) = (x_1, \dots, x_k)$, $|f|$ concentrates around $\sqrt k$. The natural condition seems to be a bound on the Frobenius norm of $D f$ (the matrix of partial derivatives). The following statement contains both generalizations simultaneously (and is not hard to prove): If $f: \mathbb{R}^n \to \mathbb{R}^k$ is continuously differentiable and $\mathbb{E} f(X) = 0$ then $$ \big(\mathbb{E} |f(X)|^p\big)^{1/p} \le c \sqrt p \big(\mathbb{E} \|Df\|_F^p\big)^{1/p}. $$ My question is whether a statement like this is known and (if so) where I can find a reference. reference-request pr.probability add comment 1 Answer active oldest votes To answer my own question, this follows from a more general result that is mentioned in "On measure concentration of vector valued maps" by Ledoux and Oleszkiewicz, Theorem 4: for any convex function $\Psi: \mathbb{R}^k \to \mathbb{R}$, $$ \mathbb{E} \Psi(f(X)) \le \mathbb{E} \Psi(\frac{\pi}{2} Y \cdot Df(X)) $$ where $X$ and $Y$ are independent standard Gaussians. If up vote 2 you condition the right hand side on $X$ and integrate $Y$, a standard result on the moments of order-2 Gaussian chaos gives $$ \mathbb{E} (\frac{\pi}{2} Y \cdot Df(X))^p \le (cp)^{p/2} down vote \mathbb{E} \|Df\|_F^p $$ which is what I claimed above. (By following the references a little more carefully, you can even get the sharp constant.) 1 It would be good of you to cite the source directly in the text of the answer. (The link does not work and if it did, it would become unhelpful if it ever broke.) – cardinal Apr 15 '12 at 23:39 Thanks, link fixed (and I've also written the authors, etc.) – Joe Neeman Apr 16 '12 at 5:18 add comment Not the answer you're looking for? Browse other questions tagged reference-request pr.probability or ask your own question.
{"url":"http://mathoverflow.net/questions/93615/concentration-of-gaussian-vectors","timestamp":"2014-04-21T00:29:04Z","content_type":null,"content_length":"52759","record_id":"<urn:uuid:a6a8493a-44dd-40ad-a520-aed96f8a490d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Towards an Axiom System for Default Logic Gerhard Lakemeyer, Hector J. Levesque Recently, Lakemeyer and Levesque proposed a logic of only-knowing which precisely captures three forms of nonmonotonic reasoning: Moore's Autoepistemic Logic, Konolige's variant based on moderately grounded expansions, and Reiter's default logic. Defaults have a uniform representation under all three interpretations in the new logic. Moreover, the logic itself is monotonic, that is, nonmonotonic reasoning is cast in terms of validity in the classical sense. While Lakemeyer and Levesque gave a model-theoretic account of their logic, a proof-theoretic characterization remained open. This paper fills that gap for the propositional subset: a sound and complete axiom system in the new logic for all three varieties of default reasoning. We also present formal derivations for some examples of default reasoning. Finally we present evidence that it is unlikely that a complete axiom system exists in the first-order case, even when restricted to the simplest forms of default Subjects: 3.3 Nonmonotonic Reasoning; 9.3 Mathematical Foundations This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
{"url":"http://aaai.org/Library/AAAI/2006/aaai06-042.php","timestamp":"2014-04-20T06:50:53Z","content_type":null,"content_length":"2941","record_id":"<urn:uuid:12b17908-2778-4a72-beaa-9040c01a40a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Hypertext Help with LaTeX To get an expression, exp, to appear as a superscript, you just type ^{exp}. Can be used only in math mode. Thus, for a simple expression that is part of the running text: x$^3$ is the third power of x should display as "x^3 is the third power of x". Note that the braces around the argument may be omitted if the superscript is a single character. If a symbol has both subscripts and superscripts, the order doesn't matter. The following are equivalent: Superscripts may have their own superscripts: should display something like See also: Math Formulas, Subscripts Back to the Table of Contents Revised 31 May 1995.
{"url":"http://www.phy.duke.edu/~rgb/General/latex/ltx-180.html","timestamp":"2014-04-21T15:35:48Z","content_type":null,"content_length":"1554","record_id":"<urn:uuid:9178f71e-a5e1-4ae4-a558-6a42ce29a1e0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Mountlake Terrace Prealgebra Tutors ...In my personal studies I have completed math classes through calculus two. I have completed the WyzAnt geometry qualification quiz. I have privately tutored geometry for several students over the past 5 years. 4 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...As much as possible I like to help people learn by doing. If there are problems to solve, we would work through some examples together to make sure your thought processes are going in the right direction. My favorite subject is biology. 6 Subjects: including prealgebra, chemistry, biology, algebra 1 ...I have had much experience with students of all needs, backgrounds and abilities, and have been met with success in reaching the students and helping them to feel successful. My methods of teaching are forever adaptable in that it is my number one goal to reach the student and work with them in ... 11 Subjects: including prealgebra, reading, writing, geometry ...This is where the concepts for calculus are truly laid and any shakiness in understanding the topics here lead to hesitation in moving on to Calculus. This where I show students where the graphical representation and equation come together so that the visualization carries on with them when they... 16 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...For the past 3 years, I have been a First Years Program Leader on campus, essentially guiding freshmen through the various challenges and concerns they have upon entering college. I have taught at a "Read 'n' Lead" program at my local library where I read to and helped elementary school students... 42 Subjects: including prealgebra, reading, English, calculus
{"url":"http://www.algebrahelp.com/Mountlake_Terrace_prealgebra_tutors.jsp","timestamp":"2014-04-16T07:15:56Z","content_type":null,"content_length":"25349","record_id":"<urn:uuid:6f3a9bc8-0a12-4e20-b801-77ced10ede07>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Attainment of optimal solution in a semiobnoxious location problem E. Carrizosa, D. Romero Morales It is known that if the sum of weights in the Weber problem with attraction and repulsion is positive, then the problem attains an optimal solution. In this note we extend this result to the nonlinear extension of the abovementioned problem, which has only been addressed in the literature for bounded feasible regions.
{"url":"http://users.ox.ac.uk/~mast0730/abstractATTAIN.htm","timestamp":"2014-04-19T14:33:50Z","content_type":null,"content_length":"1942","record_id":"<urn:uuid:56af96b9-6768-4023-bcce-977af213bb3f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Kennesaw Algebra 1 Tutor Find a Kennesaw Algebra 1 Tutor I am a Georgia Tech Biomedical Engineering graduate and I have been tutoring high school students in the subjects of math and science for the last three years. I love helping students reach their full potential! I have found that most of the time all a student needs is someone encouraging them and letting them know that they are SMART and that they CAN do it! 15 Subjects: including algebra 1, chemistry, geometry, algebra 2 ...I also have strong interest in working with children. If you want to know more, let me know. Good Luck. 11 Subjects: including algebra 1, geometry, ASVAB, algebra 2 I am certified in math and science for grades fourth through eighth. I have a passion for teaching math and science, and love working one on one with students. After having worked in a public school, I feel that the focus of education should be the student, not the politics that go into public schools. 14 Subjects: including algebra 1, geometry, biology, elementary math ...I know most of the hands, I also know how to do quick probability in my head. I know all of the rules. I do believe I am qualified to tutor in the field of fitness. 29 Subjects: including algebra 1, English, geometry, algebra 2 ...I have worked with a number of students to not only improve, but to challenge themselves academically. Students I've worked with in the past have increased their reading level and thus feel better prepared to handle classes in middle and high school and, eventually, college. My experience with ... 28 Subjects: including algebra 1, Spanish, reading, English Nearby Cities With algebra 1 Tutor Acworth, GA algebra 1 Tutors Austell algebra 1 Tutors Canton, GA algebra 1 Tutors Cartersville, GA algebra 1 Tutors Doraville, GA algebra 1 Tutors Duluth, GA algebra 1 Tutors Dunwoody, GA algebra 1 Tutors East Point, GA algebra 1 Tutors Hiram, GA algebra 1 Tutors Mableton algebra 1 Tutors Marietta, GA algebra 1 Tutors Milton, GA algebra 1 Tutors Norcross, GA algebra 1 Tutors Smyrna, GA algebra 1 Tutors Woodstock, GA algebra 1 Tutors
{"url":"http://www.purplemath.com/Kennesaw_algebra_1_tutors.php","timestamp":"2014-04-17T21:37:34Z","content_type":null,"content_length":"23661","record_id":"<urn:uuid:0bac800e-0681-4e1a-b0fd-fa0a22cff5ce>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Random Numbers Randomized September 20th, 2002, 05:28 AM #1 Join Date Jan 2002 Random Numbers Randomized (C++ Implementation) Sometime or the other you need to generate random numbers for your programs as they play a very important role in computer applications, especially in simulations. A particularly useful random number sequence is the "uniform random number sequence". It has a specified set of numbers from which the sequence draws its random numbers. In each position of the random number sequence, any no. from the set is equally likely to occur. Because a random number sequence is supposed to be random, there can't be any computer algorithm that iteratively computes truly random numbers. The instructions that constitute an algorithm are deterministic rules-knowing them tells you the next number. However, some functions do produce sequences of numbers that appear to be random. These sequences are called "Pseudorandom number sequences", although most people are imprecise and drop the prefix pseudo. The C++ stdlib library provides 2 functions that are useful in generating pseudorandom number sequences. They are rand() and srand() and they are declared in stdlib.h. Function rand() takes no parameters. Each time it is invoked, it returns a uniform pseudorandom number from the inclusive interval 0 to RAND_MAX, where RAND_MAX is an implementation dependent preprocessor macro constant defined in stdlib.h. In most implementations, the generation of the current pseudorandom number is a function of the previously generated pseudorandom number. The generation of the first pseudorandom number by a program is based on a similar function of an initial value, called the seed, that is supplied to the pseudorandom number generator. The program given below generates 5 pseudorandom numbers. -------Code Begin------------ #include <iostream.h> #include <stdlib.h> int main() for(int i=1;i<=5;++i) return 0; -------Code End---------------- But when you run this program, you will see that every time the program is run, the same set of 5 numbers is generated. Why is this??. Well, this repetition is part of the design of the function so that while the program is being tested or examined, it is possible to reproduce the same statement execution sequence. However, if you want to produce a different sequence of pseudorandom numbers, the function srand() is used. Function srand() expects an unsigned int as its parameter which is used to set the seed for generating the first pseudorandom number. Once the seed is set, rand(), should produce a different sequence of random numbers. In the program given below, the user provides the seed value that is to be passed to srand(). -------Code Begin------------ #include <iostream.h> #include <stdlib.h> int main() cout<<"Random number seed (number): "; unsigned int seed; for(int i=1;i<=5;++i) return 0; -------Code End---------------- Now, here if the user has to undergo the hassles of providing the seed every time and if we write a constant no. as the seed in the source code itself, it will again generate the same sequence every time. So, we use the current time as the basis for the seed value because that way, the seed should be different for each run of the program. The current time is determined using the function time(), which is defined in the time standard library. Function time() returns a value of type time_t, which is an integral type and has to be type cast into unsigned int before it can be passed to srand(). An implementation for such a program is given below. -------Code Begin------------ #include <iostream.h> #include <stdlib.h> #include <time.h> int main() srand((unsigned int) time(0)); for(int i=1;i<=5;++i) return 0; -------Code End---------------- Now, you will see that every time the program is run, a new sequence is generated. But many times, we need to generate integral numbers between two integral values. Like I recently made a Cricket match simulator and I needed to generate a number between 1 and 6 for the runs. For such situations, we develop a function that allows us to pass the lower value and higher value between which the random number is to be generated (inclusively, i.e., including both the higher value as well as the lower value) -------Code Begin------------ int Random(int Low, int High) int IntervalSize=High-Low+1; int RandomOffset=rand()%IntervalSize; return Low+RandomOffset; -------Code End---------------- Now, you can use this fuction in any of your programs and call it whenever needed it. But, remember to call srand((unsigned int) time(0)); before calling this function. Also, you may want to palce a simple check in the above function to check whether Low is less than High or not. So, that's it for now. Hope I have been able to make you understand properly this simple but sometimes overlooked concept. Let me know if i missed something or if i was worng somewhere.
{"url":"http://www.antionline.com/showthread.php?233805-Random-Numbers-Randomized&p=572631&mode=threaded","timestamp":"2014-04-17T12:34:35Z","content_type":null,"content_length":"67549","record_id":"<urn:uuid:55d70618-1300-42f3-8545-97efd0e191ae>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifiy this radical August 12th 2013, 04:38 AM #1 Oct 2011 Simplifiy this radical $=\sqrt{\frac{2\cdot 2\cdot a\cdot a\cdot a}{9\cdot 3\cdot b\cdot b\cdot b}}$ and I can take the square root of 9. $=6ab\sqrt{\frac{a}{b}}$ is this correct? The book I'm working from doesn't have the answers so I'm cross checking my work with Wolfram but Wolfram has a habbit of providing the same answer but in a totally different format. If I'm wrong please explain why thank you This is the solution from Wolfram $\frac{2\sqrt{\frac{a^3}{b^3}}}{3\sqrt{3}}$ **EDIT** as I relook over the problem, maybe the answer might be $\frac{2a}{3b}\sqrt{\frac{a}{3b}}$ ? Last edited by uperkurk; August 12th 2013 at 04:48 AM. Re: Simplifiy this radical You are correct up to here then you get the fractions confused Collect pairs of the same terms together so you can get the square root easily $=\sqrt{\frac{2^2\cdot a^2 \cdot a}{3^2\cdot 3\cdot b^2\cdot b}}$ Then split the square root into those terms which are squared and those which aren't $=\sqrt{\frac{2^2\cdot a^2}{3^2\cdot b^2}}\sqrt{\frac{a}{b\cdot 3}}$ $=\frac{2\cdot a}{3\cdot b}\sqrt{\frac{a}{b\cdot 3}}$ Re: Simplifiy this radical Re: Simplifiy this radical Ah yes it is, as I was posting you edited your post saying that you had looked over the problem again. Wolfram Alpha just expresses the answer differently. Re: Simplifiy this radical ok so I'm looking at this one, hopefully you can point me in the right direction. I'll say I need to do something like this $\sqrt[4]{\frac{2a^8}{b^2c^3}}\times \sqrt[4]{\frac{27b}{27b}} = \sqrt[4]{\frac{54a^8b}{3^3b^3c^3}}$ so now I have the denominators all the same, now what? I could take the cube root of most things? $\sqrt[4]{\frac{54a^8b}{3^3b^3c^3}} = \frac{\sqrt[4]{6\cdot 9\cdot a^8\cdot b}}{\sqrt{3bc}}$ I'm lost here... **EDIT** nah that surely isn't right, the 4th root is making it confusing Re: Simplifiy this radical ok so I'm looking at this one, hopefully you can point me in the right direction. I'll say I need to do something like this $\sqrt[4]{\frac{2a^8}{b^2c^3}}\times \sqrt[4]{\frac{27b}{27b}} = \sqrt[4]{\frac{54a^8b}{3^3b^3c^3}}$ so now I have the denominators all the same, now what? I could take the cube root of most things? $\sqrt[4]{\frac{54a^8b}{3^3b^3c^3}} = \frac{\sqrt[4]{6\cdot 9\cdot a^8\cdot b}}{\sqrt{3bc}}$ I'm lost here... **EDIT** nah that surely isn't right, the 4th root is making it confusing It shouldn't as it is true that: $\sqrt[R]{A} = {A^{\frac{1}{R}}}$ Re: Simplifiy this radical I'll say I need to do something like this $\sqrt[4]{\frac{2a^8}{b^2c^3}}\times \sqrt[4]{\frac{27b}{27b}}$ There should be no reason to multiply by $\sqrt[4]{\frac{27b}{27b}}$ if you are going to simplify it immediately. You could simplify it before bringing them into 1 fraction $\sqrt[4]{\frac{2a^8}{b^2c^3}}\times \sqrt[4]{\frac{27b}{27b}}=\sqrt[4]{\frac{2a^8}{b^2c^3}}\times 1$ Learn the rules of indices for taking roots. When you have $\sqrt[4]{a^3}$ that is equal to $(a^3)^\frac{1}{4}$ which in turn is equal to $a^\frac{3}{4}$ Re: Simplifiy this radical ok so I'm looking at this one, hopefully you can point me in the right direction. I'll say I need to do something like this $\sqrt[4]{\frac{2a^8}{b^2c^3}}\times \sqrt[4]{\frac{27b}{27b}} = \sqrt[4]{\frac{54a^8b}{3^3b^3c^3}}$ so now I have the denominators all the same, now what? I could take the cube root of most things? $\sqrt[4]{\frac{54a^8b}{3^3b^3c^3}} = \frac{\sqrt[4]{6\cdot 9\cdot a^8\cdot b}}{\sqrt{3bc}}$ I'm lost here... **EDIT** nah that surely isn't right, the 4th root is making it confusing You want to 'rationalize the denominator' , that means the FACTORS of the denominator should become perfect 4th powers. $\sqrt[4]{ \frac{2a^8}{b^2c^3} \cdot \frac{b^2c}{b^2c}} \ = \ \frac{ \sqrt[4]{2a^8b^2c}}{ \sqrt[4]{b^4c^4}} \ = \ \frac{a^2 \sqrt[4]{2b^2c}}{bc}$ August 12th 2013, 04:53 AM #2 Super Member Oct 2012 August 12th 2013, 05:00 AM #3 Oct 2011 August 12th 2013, 05:14 AM #4 Super Member Oct 2012 August 12th 2013, 05:37 AM #5 Oct 2011 August 12th 2013, 02:05 PM #6 Junior Member Jan 2010 August 12th 2013, 04:12 PM #7 Super Member Oct 2012 August 13th 2013, 11:12 PM #8
{"url":"http://mathhelpforum.com/algebra/221151-simplifiy-radical.html","timestamp":"2014-04-21T02:31:40Z","content_type":null,"content_length":"59675","record_id":"<urn:uuid:bfb47f2b-75a8-4182-8f58-61754c7eb561>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Compute Lie algebra cohomology up vote 7 down vote favorite Is there a computer algebra system that is able to compute the Lie algebra cohomology in a given representation?what if the Lie algebra is finite dimensional? In my case I would like to be able to compute the the cohomology in the following situation: let $\mathfrak{g}\subset \mathfrak{h}$ be an inclusion of finite dimensional complex Lie albebras, I'd like to compute the cohomology of $Hom(\ mathfrak{g}, \mathfrak{h}/\mathfrak{g})$. rt.representation-theory lie-algebra-cohomology computer-algebra 1 You might try LiE www-math.univ-poitiers.fr/~maavl/LiE although it looks like it only does stuff about reductive groups. For fixed finite-dimensional algebra and fixed finite-dimensional representation, I'm sure Mathematica can handle it. – Theo Johnson-Freyd Oct 26 '10 at 16:55 1 Although LiE is a great package, it does not do Lie algebra cohomology. – Chuck Hague Oct 27 '10 at 15:33 add comment 2 Answers active oldest votes I have looked at this question a few years ago (with some more recent sporadic gilmpses), so I am definitely not uptodate. Here it goes, anyway, what I have learned back then: • GAP. GAP is a wonderful tool, but I would not call its cohomology computation capabilities efficient. As far I understand, it implements a straightfoward approach by constructing appropriate matrices and solve a linear algebra problem. One sees easily how the matrix size grows prohibitively with the dimension of an algebra. Seems to be not suitable for anything beyond "toy" problems. • Mathematica. Mathematica-based package "SuperLie" for computations in Lie (super)algebras, including cohomology, written by Pavel Grozman and used by Dimitry Leites and his collaborators in their recent papers (see arXiv): http://www.equaonline.com/math/SuperLie/ . Seems to have a very steep learning curve but looks quite impressive. Seems to outperform GAP, but how far - I don't know. • C. There is program (constantly evolving as far I understand) for computations of cohomology of Lie (super)algebras by Vladimir Kornyak (see arXiv). He is doing things beyond straightforward linear algebra approach - for example, he tries to split the cochain complex to smaller subcomplexes and perform reduction modulo an appropriate prime (in the case of zero characteristic). Seems to be comparable with "SuperLie" (as far as cohomology is concerned). It is written in plain C and does not have overheads of computer algebra systems. Unfortunately, Kornyak does not disclose (at least publically) sources or even binaries. • REDUCE. N. v.d Hijligenberg and G. Post, Computation by computer of Lie superalgebra homology and cohomology, Acta Appl. Math. 41 (1995), 123-134 http://dx.doi.org/10.1007/BF00996108 - up vote 9 haven't looked at it. down vote accepted • LiE J. Silhan, Algorithmic computations of Lie algebras cohomologies, Proceedings of the 22nd Winter School ``Geometry and Physics'', Rend. Circ. Mat. Palermo Suppl. No. 71 (2003), 191-197 http://dml.cz/handle/10338.dmlcz/701718 . A more specific program, implemented in LiE, for computation of cohomology of parabolic subalgebras of classical Lie algebras and related, basing on celebrated Kostant's work. Haven't looked at it thoroughly. • Magma. I never bothered with this commercial package which seems to be comparable with GAP. • ? D.V. Reshetnikov, Computation of cohomology groups of the Lie algebras of type $B_n$ and $C_n$, Russian Math. (Izv. VUZ) 53 (2009), N8, 58-59 http://dx.doi.org/10.3103/ S1066369X0908009X . Here the author reports (quite uselessly, I should admit, as no further details are given) about a program for computation of Lie algebra cohomology developed by I personally think that there is a big unexplored area here - one should use heavily sparsity of occuring matrices (currently none of the programs described above seems to use it). Which kind of sparsity it is, is not clear apriori, and, moreover, I suspect that for different algebras and modules one will have different kinds of sparsity. This makes an interesting connection with methods and tricks from numerical linear algebra. Again, take all this with a grain of salt, as things may have changed since I looked at them. add comment In the Maple computer algebra system you have the package LieAlgebraCohomology which should do what you want. up vote 3 down If I'm not wrong, maple can only compute the cohomology of $\mathfrak{g}$ in a Lie subalgebra, not in any representation. – Michele Torielli Oct 28 '10 at 15:22 @Michele: this depends on the version of Maple. At least they claim that Maple 12 can compute the Lie algebra cohomology with coefficients in a representation: maplesoft.com/ view.aspx?SF=5898/M12WhatsNewPro.pdf – mathphysicist Oct 29 '10 at 19:35 yes, I agree with you but when you open the description of the command LieALgebra Cohomology it seems that it can compute only if the representation is a Lie subalgebra. Thank you for the replay. – Michele Torielli Oct 30 '10 at 8:34 add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory lie-algebra-cohomology computer-algebra or ask your own question.
{"url":"http://mathoverflow.net/questions/43665/compute-lie-algebra-cohomology","timestamp":"2014-04-18T11:11:41Z","content_type":null,"content_length":"63167","record_id":"<urn:uuid:1c630f31-944b-42ff-8b83-2e59fb448b36>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Radar Basics - Part 4: Space-time adaptive processing | EE Times Design How-To Radar Basics – Part 4: Space-time adaptive processing In Part 2 of this series on Radar Basics, the use of Doppler processing was discussed as a key method to discriminate both in distance and velocity. Discrimination in direction (or angle of arrival to the antennas) is provided by aiming the radar signal, using either traditional parabolic or more advanced electronic steering of array antennas. Under certain conditions, other methods are required. For example, jammers are sometimes used to prevent detection by radar. Jammers often emit a powerful signal over the entire frequency range of the radar. In other cases, a moving target has such a slow motion that Doppler processing is unable to detect against stationary background clutter – such as a person or vehicle moving at walking speed. A technique called space time adaptive processing (STAP) can be used to find targets that could otherwise not be detected. Because the jammer is transmitted continuously, its energy is present in all the range bins. And, as shown in Figure 1, the jammer cuts across the all Doppler frequency bins due to its wideband, noise-like nature. It does appear at a distinct angle of arrival however. Figure 1 also depicts the ground degree of clutter in a side-looking airborne radar due to the Doppler of the ground relative to the aircraft motion. A slow moving target return can easily blend into the background clutter. STAP radar processing combines temporal and spatial filtering that can be used to both null jammers and detect slow moving targets. It requires very high numerical processing rates as well as low latency processing, with dynamic range requirements that generally require floating-point processing. Figure 1. Clutter and jammer effects in Doppler space STAP processing requires use of an array antenna. However, in contrast to the active electronically scanned array (AESA), for STAP the antenna receive pattern is not electronically steered as with arrays. In this case, the array antenna provides the raw data to the STAP radar processor, while the antenna processor does not perform the beam steering, phase rotation or combining steps, as indicated in Figure 2. Also, while the AESA is depicted in one dimension, this array can be – and often is – two dimensional in both elevation (up and down) and azimuth (side to side). In this way, the antenna receive pattern can be steered or aimed in both elevation and azimuth. Figure 2. Array antenna necessary for STAP radar Radar processing can occur over consecutive pulses, as long as they lie within the coherent processing interval, considered “slow” time. The radar samples collected during the pulse repetition frequency (PRF) interval are binned, which corresponds to the range. The PRF interval is referred to as “fast” time. Doppler processing occurs across the samples in the range bins as shown in Figure 3. Figure 3. Doppler radar processing diagram Doppler processing operates on an array of data for detection processing. STAP radar, on the other hand, operates on a cube of data as illustrated in Figure 4. The extra dimension, , comes from the distinct inputs from the array antenna (for both elevation and azimuth). This will produce a radar data cube of dimensions (number of array antenna inputs) by (number of range bins in fast time) by (number of pulse in CPI in slow time). Doppler processing occurred over the data slice across the dimensions. In STAP, we will see that slices of data across the dimensions are processed. Figure 4. Radar Datacube used in STAP Radar Before discussing the STAP algorithm, it may help to provide some context. STAP is basically an adaptive filter, which can filter over the spatial and temporal (or time) domain. The goal of STAP is to take a hypothesis that there is a target at a given location and velocity, and create a filter that has high gain for that specific location and velocity, and apply proportional antenuation of all signals (clutter, jammers and any other unwanted returns). There can be many suspected targets to generate location and velocity hypotheses for, and these are all normally processed together in real time. This produces very high processing and throughput requirements on the STAP processor. The volume of data coming from the receive antenna is very high, thus this data must be processed in real time, rather than be stored for later off-line processing. Further, as this is an adaptive filter system, the data is processed immediately as part of a feedback loop to generate the optimal filters used to detect the suspected targets. This brings up the issue of how the suspected targets are identified for subequent STAP processing. This can come from weak detections found in Doppler processing, from other IR or visual sensors, from intelligence data, or from many other sources. This issue is beyond the scope of these discussions on how STAP processing works. But as will be shown, STAP has the capability to pull targets that are below the clutter into a range that can be reliably detected. A good analogy is a magnifying glass. Conventional methods are used to view the big picture, but if something of interest is noted, STAP can be used to act as a magnifying glass to zoom into a specific area and see things that would be otherwise undetectable. For each suspected target, a target steering vector must be computed. This target steering vector is formed by the cross product of the vector representing the Doppler frequency and the vector representing the antenna angle of elevation and azimuth. For simplicity, we will assume only azimuth angles are used. The Doppler frequency offset vector is a complex phase rotation: = e for n = 1..N-1 The spatial angle vector is also a phase rotation vector: = e for m = 1..M-1, for given angle of arrival θ and wavelength λ The target steering vector is the cross product vector A [θ] as shown in Figure 5, and is vector of length . This must be computed for every target of interest. Figure 5. Target steering vector t = f (Angle, Doppler) Next, the the interfence covariance matrix must be estimated. One method is to compute and average this for many range bins surrounding the range of interest. To compute this, a colum vector is built from a slice of the radar data cube at a given range bin . The covariance matrix by definition will be the vector cross product: * · Here, the vector is conjugated and then multiplied by its transpose. As is of length , the covariance matrix is of size [( · M) x (N · )]. Remember that all the data and computations are being performed with complex numbers, representing both magnitude and phase. An important characteristic of I is that it is hermitian, which means that I = SI* or equal to its conjugate transpose. This symmetry is a proporty of covarioance matrices. Figure 6. Computing the interference covariance matrix The covariance matrix represents the degree of correlation across both antenna array inputs and over the pulses comprising the CPI (coherent processing interval). The intention here is to characterize undesired signals and create an optimal filter to remove them, thereby facilitating detection of the target. The undesired signals can include noise, clutter and jammers. noise + S[jammer ] The covariance matrix is very difficult to calculate or model, therefore it is estimated. Since the covariance matrix will be used to compute the optimal filter, it should not contain the target data. Therefore, it is not computed using the range data right where the target is expected to be located. Rather, it uses an average of the covariance matrices at many range bins surrounding, but not at the target location range. This average is an element by element average for each entry in the covariance matrix, across these ranges. This also means that many covariance matrices need to be computed from the radar data cube. The assumption is that the clutter and other unwanted signals are highly correlated to that at the target range, if the differnce in range is reasonably small. Figure 7. Estimating the covariance matrix using neighboring range bin data The estimated covariance matrix can used to build the optimal filter. As those of you with experience with adaptive filters already guess, this is going to involve inversion of the covariance matrix, which is very computationally expensive, and generally requires the dynamic range of floating point numerical representation. Recall that this matrix is of size [( ) x ( )] and can be quite large. Fortunately, the matrix inversion result can be used with multiple targets at the same range. The steps are as follows: *, or One method for solving for is known as QR Decomposition, which we will use here. Another popular method is the Choleski Decomposition. Perform the substitution , or product of two matrices. can be computed from using one of several methods, such as Gram-Schmidt, Householder transformation, or Givens rotation. The nature of the decomposition in to two matrices is that will turn out to be an upper triangular matrix and will be an orthonormal matrix, or a matrix composed of orthogonal vectors of unity length. Orthnonornal matrices have the key property of: So it is trival to invert . Please refer to a text on linear algebra for more detail on QR Decomposition. * now multiply both sides by QH is an upper triangular matrix, can be solved by a process known as “back substitution”. This is started with the bottom row that has one non-zero element, and solving for the bottom element in . This result can be back-substituted for the second to bottom row with two non-zero elements in the matrix, and the second to bottom element of solved for. This continues until the vector is completely solved. Notice that since the steering vector is unique for each target, the back substitution computation must be performed for each steering vector. Then solve for the actual weighting vector / ( *), where dot product ( *) is a weighting factor (this is a complex scaler, not vector) Finally solve for the final detection result by the dot product of and the vector y from the range bin of interest. is a complex scaler, which is then fed into the detection threshold process. After this math, it is worthwhile to try to get an intuitive understanding of what is going on. Shown in Figure 8 is a plot of , the inverted covariance matrix. In this case, there is a jammer at 60 degrees azimuth angle, and a target at 45 degrees, 1723 meters range and normalized Doppler of 0.11. Figure 8. Logarithmic plot of inverted covariance matrix Notice the very small values, on the order of –80 dB, present at 60 degrees. The STAP filtering process is detecting the correlation associated with the jammer location at 60 degrees. But inverting the covariance matrix, this jammer will be severly attenuated. Notice also the diagonal yellow clutter line. This is a side looking airborne radar, so the ground clutter has positive Doppler looking in the forward direction, or angle and negative Doppler in the backward direction or angle. This ground clutter is being attenuated at about –30 dB, proportionally less severely than the more powerful jammer signal. The target is not present in this plot. Recall that the estimated covariance matrix is determined in range bins surrounding, but not at the expected range of the target. However in any case, it would not likely be visible anyway. However, using STAP processing with the target steerig vector can make a dramatic difference, as shown in Figure 9. The top plot shows the high return of the peak ground clutter at range of 1000m with magnitude of ~ 0.01, and noise floor of about ~0.0005. With STAP processing, the noise floor is pushed down to ~0.1 x 10-6 and the target signal at about 1.5 x 10 is now easily detected. It is also clear that floating point numerical representation and processing will be needed for adequate performance of the STAP algorithm. Figure 9. STAP processing gain Next, the processing requirements should be considered, using the following assumptions: PRF = 1000 Hz 12 antenna array inputs ( vectors are of length 12 or = 12) 16 pulse processing (Doppler vectors are of length 16 or = 16) Minimum required size of is [192x192], in complex single precision format Assume 32 likely targets to process (32 target steering vectors) Use of 200 range bins to estimate Table 1. STAP GFLOPs estimate In fact, this is a very conservative scenario. The PRF is rather low and the number of antenna array inputs is very small. Should the number of antenna array inputs increase by 12 to 48, the processing load of the matrix processing, in particular QR Decomposition, goes up by the third power or 64 times. This would require over 3 TeraFLOPs of realtime floating point processing power. Because of this, the limitations on STAP are clearly the processing capabilities of the radar system. The theory of STAP has been known for a long time, but the processing requirements have made it impractical until fairly recently. Many radar applications benefiting from STAP are airborne and often have stringent size, weight, and power (SWaP) constraints. Very few processing architectures can meet the throughput requirements of STAP, while even fewer can simultaneously meet the SWaP One alternative is to use FPGAs. Several FPGA vendors have long offered floating-point operator libraries such as multiply and add/subtract that have similar areas, performance levels, and latencies. The combination of multiple arithmetic operators into higher level functions such a vector dot product operator are inefficient and suffer from significantly reduced . Typical latencies for both multipliers and adders are in the range of 10; a dot product operator with a few tens of inputs may therefore exceed a latency of 100. Routing congestion and datapath latencies are have been critical restrictions on floating point implementations on FPGA architectures. Parallelism is a key advantage of a hardware solution like FPGAs, but it is often not applied to floating point signal processing because the long latencies make the data dependencies in algorithms such as matrix decomposition difficult to manage. Therefore, the resultant systems offered poor performance levels, uncompetitive to other platforms such as GPU or multi-core CPU architectures. Altera has developed a floating-point design flow that can overcome these issues. Rather than building a datapath from individual operators, the entire datapath is considered as a single function with inter-operator redundancy factored out. Mantissa representation can be converted to hardware-friendly twos complement and mantissa widths extended to reduce the frequency of normalizations. Elementary functions can be implemented as much as possible using hard multipliers, which offer guaranteed internal routing and timing, as well as low power and latency. New techniques can be applied for matrix decompositions, with the algorithms restructured to remove most of the data dependencies, so that parallel – and therefore high latency – datapaths can be used for these computations. This approach is known as “Fused Datapath”, and when combined with Altera’s new 28nm Variable Precision DSP block architecture, offers extremely high data processing capabilities, in excess of one Teraflop on a single FPGA die. In addition, Fused Datapath methodology is actually more accurate than computing on a microprocessor, which uses the standard IEEE754 floating point conventions. This has been measured by analyzing single precision Fused Datapath and single precision computations on a desktop PC and comparing both to a double precision result standard. Moreover, this toolflow has been optimized specifically for radar applications, with support for vector operators and common linear algebra constructs, as shown in Figure 10. A useful set of floating point trigonometric and other math library functions is also integrated. Typical using Fused Datapath on dense floating point designs in large Stratix FPGAs is between 200 to 250 MHz. Figure 10. Fused datapath floating-point library support STAP radar designs have been built with this new floating point toolflow. The performance of a key function, the matrix inversion, is shown in Table 2. In this case, the Choleski Decomposition performance is shown rather than QR Decomposition, as it is turns out to be more efficient for hardware implementations (either algorithm can be used in most STAP applications). Table 2. FPGA matrix inversion throughput metrics The designer has two methods to trade throughput against FPGA hardware resources. As part of the toolflow support for vector operations, the tool allows the user to parameterize the vector size for various processing steps. A large vector size can process more data simultaneously, at the expense of more hardware. Smaller vector sizes require more looping to complete the calculations, but use less device resources, and has reduced throughput. performance on 40nm Stratix IV and 28nm Stratix V FPGAs are similar; however the much higher density and architectural improvements of Stratix V enable more Choleski cores to be built within the same chip, thus allowing for a proportional increase in aggregate throughput through parallelism (see Table 2). All of these metrics were built with Fused Datapath technology using single precision floating point precision. This new capability provides superior computational capability for radar system designers. One attractive architecture for STAP or other radar backend processing is to partition the high GFLOPs and predictable processing in an FPGA (for example: covariance matrix computation and inversion) while maintaining the more dynamic and lower GFLOPs on a processor (steering vector generation, detection processing). This can also help preserve the code base investment in legacy processor architectures. Also see Part 1 Part 2 Part 3 , and Part 5 of this five-part mini-series on “Radar Basics”. Fundamentals of Radar Processing by Mark Richards. Readers are encouraged to refer to this text for further information on STAP. About the author Mr. Parker joined Altera in January 2007, and has over 20 years of DSP wireless engineering design experience with Alvarion, Soma Networks, TCSI, Stanford Telecom and several startup companies.
{"url":"http://www.eetimes.com/document.asp?doc_id=1278878","timestamp":"2014-04-16T08:23:28Z","content_type":null,"content_length":"156410","record_id":"<urn:uuid:7e63eefd-9fc2-4ab5-8086-c75d9217f25e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
hollow sphere moment of inertia Best Results From Yahoo Answers From Yahoo Answers Question:HELP!!! What is the linear acceleration of the center of mass of a solid sphere? A solid sphere (radius R, mass M) rolls down an incline as shown in the figure**. Its linear acceleration of the center of mass is what? **the figure could not be pasted onto yahoo, so here is the link: http://www.scribd.com/doc/18103177/Serway-Physic-Chapter-11 It is problem number 20 (just scroll down a bit), and the figure is on the right hand side. Thanks. The answer is (5/7)(g)(sin ). How was this derived? Answers:we start with newton's second law the sum of the forces acting on the sphere will equal ma where a is the linear acceleration the forces are the component of gravity down the plane and the force of friction up the plane this gives us mg sin (theta) - u mg cos(theta) = ma where u is the coeff of friction a=gsin(theta) - u g cos(theta) Now, consider the torque on the rolling sphere. The force of friction generates a torque on the sphere; the lever arm of this force is the radius of the sphere, so we have torque = f R since f= u mg cos(theta), the torque is: torque = umg cos(theta) R now, we use the fact that torque = I A where I is the moment of inertia (for a sphere this is 2/5 mR^2) and A is the angular acceleration A is related to linear acceleration via a= R A or A=a/R so we have: torque = u mg cos(theta) R = I A = 2/5 mR^2(a/R) so this gives us u g cos(theta)=2/5 a now return to g sin(theta) - u gcos(theta) = a substitute 2/5 a for u g cos(theta) to get g sin(theta) - 2 /5 a = a g sin(theta) = 7/5 a or a=5/7 g sin(theta) Question:Two spheres look identical and have the same mass.One is hollow and other is solid.Which which method would determine which is which? a. roll them down and incline b. drop them from the same height c. weigh them on a scale Thank you:> Answers:definitely not (b) two objects falling from the same height will fall according to the newtonian equations of motion. all are mass independant definitely not (c) scales measure mass or weight. weight = mass x g. since mass is the same and g is the same, the balance would read the same for both objects leaves (a) so why would the balls roll differently? they are the same shape. same size. but one has all it's mass around the outside of the sphere and the other is uniform..... so here's the answer..... moment of inertia for a rotating body is a measure of it's resistence to rotate. at a given applied torque, an object with a higher moment of inertia will be more resistent to rotate. ie.. it will accelerate slower. the moment of inertia of a hollow sphere = 2 m r / 3 the moment of inertia of a solid sphere = 2 m r / 5 since m is the same and so is r.... and since 2/3 > 2/5, the hollow sphere will have a higher moment of inertia and therefore will be more resistent to rotate than the solid sphere for a given torque. in this case, that torque is due to the weight of the object via normal force and friction. the torque = the force of friction. so the solid sphere will accelerate faster down the incline ******** update ********** I see someone gave me a thumbs down. so maybe they want more details? t = I where t = torque = force x moment arm I = moment of inertia = angular acceleration so = t / I for a ball rolling down an incline, torque is the friction imparted by the surface = x Fn (coeff of friction x normal force). but Fn = m x a x cos (m x a = weight of object acting straight down = angle of incline so ma = Fn... draw a pic if you need).. so t = m a makes = t / I = m a / I for hollow sphere, = m a / (2 m r /3) = 3 a / (2 r ) for solid sphere, = m a / (2 m r /5) = 5 a / (2 r ) solid / hollow = 5 a / (2 r ) / 3 a / (2 r ) = 5/3 solid = 5/3 hollow ie, the acceleration of the solid sphere = 5/3 x the acceleration of the hollow sphere.... keep in mind, the sphere is not sliding down the plane. it is rolling.... Question:What's a good physics energy or energy conservation activity for 11th or 12th grade physics? Visual and/or hands on would be best. Thanks! Answers:A good experiment/project involving energy conservation and angular momentum/moment of inertia is to have the students build a well (like a somewhat u-shaped section of a roller coaster track), acquire several common objects with different moments of inertia (solid sphere, hollow sphere, solid disk, hollow disk, ...) , and then test how they act differently in the well. For instance, start them from the same height and see what height they return to and how long each takes to reach it's max height once again. What they will find is that they all return to their original height due to energy conservation, but that they will take different times to do so depending on there moment of inertia. Question:A 2.56 kg hollow cylinder with inner radius 0.17 m and outer radius 0:43 m rolls with- out slipping when it is pulled by a horizontal string with a force of 49.8 N Its moment of inertia about the center of mass is (1/2)*m*(R^2out + R^2in) What is the acceleration of the cylinders center of mass? Answers:_________________________________________________ Outer radius = R =0.43 m inner radius =r=0.17 m friction force = f The torque due to frictional force= T=R*f but torque T =moment of inertia I*angular acceleration(a!) moment of inertia= I = (1/2)*m*(R^2 + r^2) the acceleration of the cylinders center of mass =a angular acceleration a!=a / R R*f = I*a!=Ia/R f = Ia /R^2 Applied force =F= 49.8 N mass=m=2.56 kg Applying Newton's second law, ma= F - f ma = F - I a/R^2 ma + Ia/R^2 = F a= F / [m +I/R^2 ] substituting, I =(1/2)m[R^2+r^2] a = F / m [1+(1/2)[1 +( r/R)^2 ] a = 2 F / m [3+( r/ R)^2 ] a =12.3265 m/s^2 the acceleration of the cylinders center of mass is 12.3265 m/s^2 _____________________________________________________________________
{"url":"http://www.edurite.com/kbase/hollow-sphere-moment-of-inertia","timestamp":"2014-04-17T21:31:38Z","content_type":null,"content_length":"71499","record_id":"<urn:uuid:dff83663-c889-44b4-8bdb-cd07568165a5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: The generation time G for a particular bacteria is the time it takes for the population to double. The bacteria increase in population is shown by the formula , where t is the time period of the population increase, a is the number of bacteria at the beginning of the time period, and P is the number of bacteria at the end of the time period. If the generation time for the bacteria is 6 hours, how long will it take 8 of these bacteria to multiply into a colony of 7681 bacteria? Round to the nearest hour. A. 177 hours B. 76 hours C. 4 hours D. 85 hours • 7 months ago • 7 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/521b82fce4b0750826dfba04","timestamp":"2014-04-19T22:34:39Z","content_type":null,"content_length":"25746","record_id":"<urn:uuid:510e7542-8c04-4c88-9790-ba26181aa74e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
What is .02 in scientific notation? Scientific notation is a way of writing numbers that are too big or too small to be conveniently written in decimal form. Scientific notation has a number of useful properties and is commonly used in calculators and by scientists, mathematicians and engineers. In scientific notation all numbers are written in the form of Related Websites:
{"url":"http://answerparty.com/question/answer/what-is-02-in-scientific-notation","timestamp":"2014-04-16T10:10:48Z","content_type":null,"content_length":"20700","record_id":"<urn:uuid:0b248d1b-e6df-4245-881f-50e45af46b7e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Borsuk-Ulam theorem Borsuk-Ulam theorem Theorem: There exists no continuous map $f:S^{{n}}\to S^{{n-1}}$ which is antipode preserving for $n>0$. Some interesting consequences of this theorem have real-world applications. For example, this theorem implies that at any time there exists antipodal points on the surface of the earth which have exactly the same barometric pressure and temperature. Mathematics Subject Classification no label found
{"url":"http://planetmath.org/borsukulamtheorem","timestamp":"2014-04-17T12:41:52Z","content_type":null,"content_length":"37338","record_id":"<urn:uuid:183dcb6f-40e5-4f4b-9563-611ea63486ac>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
not sure which on to choose June 6th 2008, 03:56 AM #1 Junior Member May 2008 not sure which on to choose ) A tyre manufacturer claims that his new steel-belted tire has a mean life expectancy of 40,000 miles. A consumer association decides to test these claims against the alternatives that the mean life expectancy is less than 40,000 miles. 100 tyres are selected at random and the sample mean is found to be 38,500 miles with the sample standard deviation of 2,000 miles. Perform the hypothesis test at the 5% (or 0.05) level of significance. because its a sample standard deviation i should use xbar- meu = a and then SD/ n square root= b so is it 38500- 40000 = 1500 and the 2000 / sq of 100 = 10 = 2000/ 10 = 200 so 1500/ 200 = 7.5 h0 meu= 40000 h1 meu doesnt=40000 Last edited by crashuk; June 6th 2008 at 05:58 AM. xbar = 30 s squared = 100 and n = 64 Test the hypothesis q H0 : μX = 25 and H1 : μX ≠ 25: would it be 30-25/ s= 10 n= 10= Last edited by crashuk; June 6th 2008 at 05:30 AM. ) A tyre manufacturer claims that his new steel-belted tire has a mean life expectancy of 40,000 miles. A consumer association decides to test these claims against the alternatives that the mean life expectancy is less than 40,000 miles. 100 tyres are selected at random and the sample mean is found to be 38,500 miles with the sample standard deviation of 2,000 miles. Perform the hypothesis test at the 5% (or 0.05) level of significance. because its a sample standard deviation i should use xbar- meu = a and then SD/ n square root= b so is it 38500- 40000 = 1500 and the 2000 / sq of 100 = 10 = 2000/ 10 = 200 so 1500/ 200 = 7.5 h0 meu= 40000 h1 meu doesnt=40000 You need to say what test you're using. I assume you're using a z-test since, although the population sd is unknown, n is large and the assumption of normality is reasonable. So you have z = -7.5. What's the critical value of z for a one-sided test at the 0.05 level of significance? Is -7.5 larger or smaller than this value? What conclusion do you draw? Last edited by mr fantastic; June 9th 2008 at 02:56 AM. Reason: Added the negative .... I should've checked the arithmetic more closely i thought it was a two sided test? = and not= ? You need to say what test you're using. I assume you're using a z-test since, although the population sd is unknown, n is large and the assumption of normality is reasonable. So you have z = 7.5. What's the critical value of z for a one-sided test at the 0.05 level of significance? Is 7.5 larger or smaller than this value? What conclusion do you draw? how do you work that out? Usually you would use a table of critical values to get the critical value (it can of course be got from the usual four figure tables): The critical value of z is -1.64. -7.5 < -1.64 therefore the result is significant at the 0.05 level. Therefore you reject the null hypothesis. The manufacturers claim appears to be bogus. Last edited by mr fantastic; June 9th 2008 at 02:57 AM. Reason: Added the negatives. Unbiased estimate? Shouldn't you use an unbiased estimate of the sample variance, because it's not normally distributed? like: $\frac{n}{n-1}(2000^2) = 4040404.04$ Then if z<-1.6448, reject $H_{o}: \mu=40000$ and z= $\frac{38500-40000}{\sqrt{\frac{4040404.04}{100}}}$ =-7.462405778<-1.6448, so reject $H_{o}$ Last edited by Nyoxis; June 9th 2008 at 08:35 AM. I think n = 100 is large enough for the correction factor to be ignored. Also, I think an asusmption of normality is reasonable (the lifetime of a tyre might just follow a Weibull distribution, say. But the value of the parameters would I think make a normal approximation reasonable) June 6th 2008, 04:07 AM #2 Junior Member May 2008 June 6th 2008, 06:09 AM #3 June 7th 2008, 05:52 AM #4 Junior Member May 2008 June 7th 2008, 03:01 PM #5 June 8th 2008, 08:45 PM #6 Junior Member May 2008 June 8th 2008, 10:17 PM #7 June 9th 2008, 02:34 AM #8 Junior Member Mar 2008 June 9th 2008, 03:02 AM #9
{"url":"http://mathhelpforum.com/statistics/40788-not-sure-choose.html","timestamp":"2014-04-21T04:35:15Z","content_type":null,"content_length":"61801","record_id":"<urn:uuid:0ec045bf-5a5e-4976-a028-39f7e71af11b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
applied logic :: Inductive logic applied logic Article Free Pass Inductive reasoning means reasoning from known particular instances to other instances and to generalizations. These two types of reasoning belong together because the principles governing one normally determine the principles governing the other. For pre-20th-century thinkers, induction as referred to by its Latin name inductio or by its Greek name epagoge had a further meaning—namely, reasoning from partial generalizations to more comprehensive ones. Nineteenth-century thinkers—e.g., John Stuart Mill and William Stanley Jevons—discussed such reasoning at length. The most representative contemporary approach to inductive logic is by the German-born philosopher Rudolf Carnap (1891–1970). His inductive logic is probabilistic. Carnap considered certain simple logical languages that can be thought of as codifying the kind of knowledge one is interested in. He proposed to define measures of a priori probability for the sentences of those languages. Inductive inferences are then probabilistic inferences of the kind that are known as Bayesian. If P(—) is the probability measure, then the probability of a proposition A on evidence E is simply the conditional probability P(A/E) = P(A & E)/ P(E). If a further item of evidence E* is found, the new probability of A is P(A/E & E*). If an inquirer must choose, on the basis of the evidence E, between a number of mutually exclusive and collectively exhaustive hypotheses A[1], A[2], …, then the probability of A[i] on this evidence will be P(A[i]/E) = [P(E(A[i]) P(A[i])] / [P(E/A[1]) + P(E/A[2]) + …]This is known as Bayes’s theorem. Relying on it is not characteristic of Carnap only. Many different thinkers used conditionalization as the main way of bringing new information to bear on beliefs. What was peculiar to Carnap, however, was that he tried to define for the simple logical languages he was considering a priori probabilities on a purely logical basis. Since the nature of the primitive predicates and of the individuals in the model are left open, Carnap assumed that a priori probabilities must be symmetrical with respect to both. If one considers a language with only one-place predicates and a fixed finite domain of individuals, the a priori probabilities must determine, and be determined by, the a priori probabilities of what Carnap called state-descriptions. Others call them diagrams of the model. They are maximal consistent sets of atomic sentences and their negations. Disjunctions of structurally similar state-descriptions are called structure-descriptions. Carnap first considered an even distribution of probabilities to the different structure-descriptions. Later he generalized his quest and considered an arbitrary classification schema (also known as a contingency table) with k cells, which he treated as on a par. A unique a priori probability distribution can be specified by stating the characteristic function associated with the distribution. This function expresses the probability that the next individual belongs to the cell number i when the number of already-observed individuals in the cell number j is n[j]. Here j = 1,2,…,k. The sum (n[1] + n[2] + …+ n[k]) is denoted by n. Carnap proved a remarkable result that had earlier been proposed by the Italian probability theorist Bruno de Finetti and the British logician W.E. Johnson. If one assumes that the characteristic function depends only on k, n[i], and n, then f must be of the form n[i] + (λ/k)/n + λwhere λ is a positive real-valued constant. It must be left open by Carnap’s assumptions. Carnap called the inductive probabilities defined by this formula the λ-continuum of inductive methods. His formula has a simple interpretation. The probability that the next individual will belong to the cell number i is not the relative frequency of observed individuals in that cell, which is n[i]/n, but rather the relative frequency of individuals in the cell number i in a sample in which to the actually observed individuals there is added an imaginary additional set of λ individuals divided evenly between the cells. This shows the interpretational meaning of λ. It is an index of caution. If λ = 0, the inquirer follows strictly the observed relative frequencies n[i]/n. If λ is large, the inquirer lets experience change the a priori probabilities 1/k only very slowly. This remarkable result shows that Carnap’s project cannot be completely fulfilled, for the choice of λ is left open not only by the purely logical considerations that Carnap is relying on. The optimal choice also depends on the actual universe of discourse that is being investigated, including its so-far-unexamined part. It depends on the orderliness of the world in a sense of order that can be spelled out. Caution in following experience should be the greater the less orderly the universe is. Conversely, in an orderly universe, even a small sample can be taken as a reliable indicator of what the rest of the universe is like. Carnap’s inductive logic has several limitations. Probabilities on evidence cannot be the sole guides to inductive inference, for the reliability such of inferences may also depend on how firmly established the a priori probability distribution is. In real-life reasoning, one often changes prior probabilities in the light of further evidence. This is a general limitation of Bayesian methods, and it is in evidence in the alleged cognitive fallacies studied by psychologists. Also, inductive inferences, like other ampliative inferences, can be judged on the basis of how much new information they yield. An intrinsic limitation of the early forms of Carnap’s inductive logic was that it could not cope with inductive generalization. In all the members of the λ-continuum, the a priori probability of a strict generalization in an infinite universe is zero, and it cannot be increased by any evidence. It has been shown by Jaakko Hintikka how this defect can be corrected. Instead of assigning equal a priori probabilities to structure-descriptions, one can assign nonzero a priori probabilities to what are known as constituents. A constituent in this context is a sentence that specifies which cells of the contingency table are empty and which ones are not. Furthermore, such probability distinctions are determined by simple dependence assumptions in analogy with the λ-continuum. Hintikka and Ilkka Niiniluoto have shown that a multiparameter continuum of inductive probabilities is obtained if one assumes that the characteristic function depends only on k, n[i], n, and the number of cells left empty by the sample. What is changed in Carnap’s λ-continuum is that there now are different indexes of caution for different dimensions of inductive inference. These different indexes have general significance. In the theory of induction, a distinction is often made between induction by enumeration and induction by elimination. The former kind of inductive inference relies predominantly on the number of observed positive and negative instances. In a Carnapian framework, this means basing one’s inferences on k, n[i], and n. In eliminative induction, the emphasis is on the number of possible laws that are compatible with the given evidence. In a Carnapian situation, this number is determined by the number e of cells left empty by the evidence. Using all four parameters as arguments of the characteristic function thus means combining enumerative and eliminative reasoning into the same method. Some of the indexes of caution will then show the relative importance that an inductive reasoner is assigning to enumeration and to elimination. Do you know anything more about this topic that you’d like to share?
{"url":"http://www.britannica.com/EBchecked/topic/30698/applied-logic/283693/Inductive-logic","timestamp":"2014-04-20T21:13:05Z","content_type":null,"content_length":"92378","record_id":"<urn:uuid:4bfe31ee-76f8-495d-ad38-1d9f3d4623c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Charged Higgs Pair Production in a General Two Higgs Doublet Model at $e^+e^-$ and $\mu^+\mu^-$ Linear Colliders - ILC Document Server Published Article / hep-ph arXiv:1310.7098 Charged Higgs Pair Production in a General Two Higgs Doublet Model at $e^+e^-$ and $\mu^+\mu^-$ Linear Colliders Hashemi, Majid Abstract: In this paper, charged Higgs pair production through $\ell^+ \ell^- \rightarrow H^+ H^-$ where $\ell = e$ or $\mu$, is studied within the framework of a general Two Higgs Doublet Model (2HDM). The analysis is relevant to a future $e^+e^-$ or $\mu^+\mu^-$ collider operating at center of mass energy of $\sqrt{s}=500$ GeV. Two different scenarios of small and large $\alpha$ values is studied. Here $\alpha$ is the parameter which diagonalizes the neutral CP-even Higgs boson mass matrix. Within the Minimal Supersymmetric Standard Model (MSSM), cross section of this process is almost the same at $e^+e^-$ and $\mu^+\mu^-$ colliders. It is shown that at $e^+e^-$ colliders within a general 2HDM, cross section is not sensitive to the mass of neutral Higgs bosons, however, it can acquire large values up to several picobarn at $\mu^+\mu^-$ colliders with the presence of heavy neutral Higgs bosons. A scan over Higgs boson mass parameter space is performed to analyze the effect of large masses of neutral Higgs bosons involved in the s-channel propagator and thus in the total cross section of this process. Note: Comments: 13 pages, 6 figures Published in: Commun. Theor. Phys. 61 (2014) 69 Record created 2013-12-02, last modified 2013-12-31
{"url":"http://ilcdoc.linearcollider.org/record/47255","timestamp":"2014-04-19T22:57:11Z","content_type":null,"content_length":"14630","record_id":"<urn:uuid:bfc05d05-e5ab-4534-80ed-e0712f9faa21>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
kinetic energy during energy levels December 27th 2010, 01:12 AM kinetic energy during energy levels 1. The problem statement, all variables and given/known data this is a problem that puzzles me. it goes like this: energy/10?18 J ionisation level 0.0 level D level C level B ground state A ? 4.6 An electron with kinetic energy 2.6 × 10?18 J collides inelastically with an electron in the ground state. State which energy levels may be occupied following this collision. A photon of energy 2.6 × 10?18 J is incident on an electron in the ground state. State and explain what would happen. 2. Relevant equations e= h1 - h2 3. The attempt at a solution i looked at the kinetic energy needed to escape the ground state. if i calculate 4.6-2.6, i get 2j of kinetic energy. the electron cant get to level d. will b to c be correct? for the second part, i am speculating the kinteic energy will have no effect at the ground level. but i am not totally sure. can someone please explain. please comment.
{"url":"http://mathhelpforum.com/math-topics/166945-kinetic-energy-during-energy-levels-print.html","timestamp":"2014-04-21T16:37:28Z","content_type":null,"content_length":"4100","record_id":"<urn:uuid:8ef02092-73f1-4890-8bce-e33eb5a8b19e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
The n-Category Café Only had time to very quickly look at this paper. But here is a quick question. One of the puzzles mentioned in the introduction is that thinking of a kk-fold monoidal rr-catgory as a (k+r)(k+r)-category with all (i≤k)(i \leq k)-morphisms trivial runs us into the problem that the former gadget seems to want to live in a (r+1)(r+1)-category, while the latter lives in an (k+r+1)(k+r+1)-category. But maybe this is rather telling us that our expectation about the home of kk-fold monoidal rr-categories is wrong? The reason I am saying this is that only recently I had a long discussion with somebody which crucially involved the 2-category whose - objects are groups - morphisms are group homomorphisms - 2-morphisms are “intertwiners” of these, namely precisely those 2-morphisms which we obtain by thinking of the groups as 1-object categories and of their morphisms as fucntors. The application we were talking about crucially demanded to take this 2-category serious. I noticed that it took me a while to make the structure of this 2-category transparent to my discussion partner. And I thought by myself that we should all better get used to thinking of groups as 1-object groupoids generally. Precisely the same issue, in its analogous incranation, plays a crucial role in the entire field of von Neumann algebras. There it is very important to consider 2-categories (even though these are not always identified as such) whose objects are algebras, whose morphisms are algebra homomorphisms and whose 2-morphisms are intertwiners. In other words, to regard algebras as 1-object Vect-enriched categories. In a couple of introductory talks to von Neumann algebra theory that I heard recently on this subject lots of time was spent with explaining what these intertwiners are and how their horizontal and vertical compositon works. To the uninitiated eyes, the entire construction here is bound to look intricate and ad hoc. But it all comes down to a triviality once we seriously think of algebras as 1-object categories: these intertwiners are precisely natural transformations, henece 2-morphisms in Cat. That explains everything that is ever done with them. So, my question is this: maybe it is “evil” (in the sense John Baez uses this word) to regard kk-fold monoidal rr-categories as anything else than kk-tuply stabilized (k+r)(k+r)-categories. Maybe we should not try to do that in the first place. A couple of applications do suggest so. (By the way: Bruce once had a remark/question on precisely this issue here: Algebras as 2-Categories and its Effect on Algebraic Geometry).
{"url":"http://golem.ph.utexas.edu/category/2007/06/degeneracy.html","timestamp":"2014-04-16T15:59:38Z","content_type":null,"content_length":"36500","record_id":"<urn:uuid:ad6ffc64-6f6b-492b-b11b-fee01bab219a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Theoretical Derivation of Benford's Law May 7th 2010, 08:17 PM [SOLVED] Theoretical Derivation of Benford's Law I have a question regarding a theoretical derivation of benford's law, the method used to derive it rests on the invariance of scale, first we transformed the x=1..10 initial domain into the y= c..10c getting Transformation was y = cx And then the next transformation was a movement of the number back to 1..10 by saying that z=y if y < 10 or z=y/10 if y>=10, i then have the piecewise probability function of z as $f_{z}=1/c*f_{x}(z/c)$ for y<10 or equiv x<10/c $f_{z}=10/c*f_{x}(10z/c)$ for y>10 or equiv x>=10/c The next step then says that based on the invariance of scale, fz is equal to fx but i can't understand how we are meant to derive the distribution of the function from this, knowing the distribution beforehand is log(1+1/x) helps and i can see how the invariance works but attempting to find c without first knowing that the distribution is fixed is very confusing May 7th 2010, 09:51 PM That's okay everyone i solved it by assuming a form of solution and integrating":) May 8th 2010, 10:01 PM kindly explain how to compare the fx and fz I had a very similar question asked in a high school assignment in year 12 and was unable to do it. I lost a good deal of marks for that. I was able to do to the point you explained but then had to derive a form for fx. Though at that point I thought that there must be some information missing in the question but now seeing your question I think I was not clear about the question itself. Could you try and explain how you got the log expression? Thanks a lot!
{"url":"http://mathhelpforum.com/advanced-statistics/143632-solved-theoretical-derivation-benfords-law-print.html","timestamp":"2014-04-16T11:22:17Z","content_type":null,"content_length":"5736","record_id":"<urn:uuid:e35f7750-a03b-440d-b0dd-c1980183edea>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Length of formalizations Randy Pollack rap at dcs.ed.ac.uk Fri Jan 14 10:00:30 EST 2000 The discussion on "Length of formalizations" misses a crucial point. Mizar supports a complicated linguistic superstructure on top of its foundational logic. The mathematics that is formalized in Mizar is expressed in terms of this superstructure. Although the logic may be formally laid out, this linguistic superstructure is almost completely unspecified. One learns about it by example and personal transmission. Thus, it seems misleading to think that Mizar books are fully formalized. I'm not saying Mizar is smoke. On the contrary, since our favorite logics clearly cannot support actual formalization, it is essential to have a linguistic superstructure, or perhaps a more expressive foundational logic. But a long-term goal of the enterprise must be a formal explanation of this linguistic layer. The situation with Isabelle is a bit different than Mizar. Isabelle also has a linguistic superstructure for parsing, pretty printing, etc. But Isabelle also has a formal metalevel in which the particular object logics (eg FOL+ZF) are formalized. This metalevel (which is an impredicative HOL with type classes and equality; much stronger than the FO object logic) supports many of the linguistic tools necessary for mathematics. E.g. equality at the metalevel supports definitions at the object level. (There is much more to say about metalevel approaches.) The formalization of ZF in Isabelle is truly impressive. To my reading it uses the formal metalanguage in surprisingly subtle I don't think Isabelle, or any other existing proof tool, is yet competent to deal with everyday mathematical practice. Among the outstanding problems are the packaging of mathematical structures and abstract use of such structures. BTW, I agree with Andrej Bauer that it is "when" not "if" mainstream maths will be presented formally. Randy Pollack Phone: +44 131 650 5145 URL: www.dcs.ed.ac.uk/~rap/ Computer Science, Edinburgh Univ. Kings Buildings, Edinburgh EH9 3JZ, UK More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-January/003574.html","timestamp":"2014-04-21T08:32:45Z","content_type":null,"content_length":"4263","record_id":"<urn:uuid:b77cb43b-4512-48f8-b834-2e7378fa8ae2>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Mechanics Statistical Mechanics 83964 Sylabus:The course is devoted to the study of statistical mechanics and thermodynamics. A basic theory is given. Different examples and problems are presented. • Lecture 1 Introduction to statistical mechanics. The macroscopic and the microscopic states. Equilibrium and observation time. Equilibrium and molecular motion. Relaxation time. Local equilibrium. Phase space of a classical system. Statistical ensemble. Liouville’s theorem. Density matrix in statistical mechanics and its properties. Liouville’s-Neiman equation. • Lecture 2 The microcanonical ensemble. Quantum states and the phase space. Some paradoxes in statistical physics. Ergodic hypothesis. Quasi-ergodic systems. Some model systems in statistical physics: Spin system, classical and quantum consideration. • Lecture 3 Entropy in statistical mechanics. Thermodynamic contacts: Mechanical contact, heat contact, diffusion contact. Equilibrium. Chemical potential. The main distributions in statistical mechanics. A system in the canonical ensemble. Thermostat. • Lecture 4 Thermodynamics. Equilibrium (reversible) and noequilibrium ( nonrevrsible) processes. Adiabatic, isotermic, isobaric and isochoric processes. Connection between statistical and thermodynamic quantities. Helmholtz free energy F. Enthalpy H. Gibbs Free Energy G. Thermodynamic potentials. Heat capacity. The lows of thermodynamics. Thermodynamic functions for the canonical ensemble. Partition functions. Alternative expression for the partition function. Density of states. A system of harmonic oscillators. • Lecture 5 The grand canonical ensemble. Grand partition function. Connection with Thermodynamic functions. Density and energy fluctuations in the grand canonical ensemble: correspondence with other ensembles. Fermi-Dirac statistics. Classical limit. Bose-Einstein statistics. • Lecture 6 Ideal gases. Entropy. Sackur-Tetrode formula. De Broglie wavelength. Chemical potential. Ideal gas in canonical ensemble. Entropy of a system in a canonical ensemble. Free Energy. Maxwell Velocity Distribution. Principle of equipartition of energy. Heat capacity. Ideal gas in the grand canonical ensemble. • Lecture 7 Gaseous systems composed of molecules with internal motion: Monatomic molecules, Diatomic molecules. Electron, Vibartional and Rotational contribution. Fermi gas. Electron gas in metals. Heat capacity of electron gas. • Lecture 8 Ideal Bose gas. Thermodynamic behavior of an ideal Bose gas. The temperature of condensation. Elementary excitation in liquid helium II. Thermodynamics of black-body radiation. Planck’s formula for the distribution of energy over the black-body spectrum. Stefan-Boltzmann law of black-body radiation. • Lecture 9 Thermodynamics of crystal lattice. The field of sound waves. Phonons and second sound. The Debye model. The Debye temperature. Specific heat of the solid in the Debye model. • Lecture 10 Non Ideal systems. Intermolecular interactions. Lenard-Jones potential. Corrections to the Ideal Gas Law. Van der Waals equation. Short Distance and Long Distance Interaction. The Plasma Gas and Ionic Solutions. The Debye-Huckel radius. • Lecture 11 Phase transition. Critical point. First-order phase transition. Phase diagrams. The theory of Yang and Lee. A dynamical model for phase transition. Weiss theory of ferromagnetism. Second order phase transition. Landau theory. Critical point exponents. Chemical equilibrium and chemical reactions. • Lecture 12 Ising model as a macroscopic model of phase transition. Why the Ising model is very important? Relationship between lattice models, models of ferroelectrics and Ising model. The classical formulation of the problem. Exact solutions. Drawbacks of the mean field approximation. The Static Fluctuation Approximation as new method of the solving the Ising problem. • Lecture 13 Fluctuations. Fluctuations of macroscopic variables. Correlation functions. Response and Fluctuation. Density correlation function. Theory of random processes. Spectral analysis of fluctuations: the Wiener-Khintchine theorem. The Nyquist theorem. Applications of Nyquist theorem. • Lecture 14 Brownian motion. Einstein-Smoluhowski theory of the Brownian motion. Langevin theory of the Brownian motion Approach to equilibrium: Foker-Planck equation. The fluctuation-dissipation 1. R.K.Pathria, Statistical Mechanics, Pergamon Press, 1986 2. R.Kubo, Statistical Mechanics, Interscience Publishers, New York, 1965. 3. L.D. Landau and E.M. Lifshitz, Statistical Physics, Pergamon Press, 1980. 4. Shang-Keng Ma, Statistical Mechanics, World Scientific, 1985. 5. C.Kittel, Elementary Statistical Physics, John Wiley & Sons, Inc. New York, 1958. 6. J.M.Yomans, Statistical Mechanics of Phase Transitions, Clarendon Press, Oxford, 1992. 7. R.P.Feynman, Statistical Mechanics, A Set of Lectures., Addison-Wesley Publishing Company, 1972 8. F.Reif, Statistical Physics, Berkeley Physics Course , V5, Mgraw-Hill Book Company, 1965. 9. C.J. Thompson, Classical equilibrium Statistical Mechanics, Clarendon Press, Oxford, 1988.
{"url":"http://aph.huji.ac.il/courses/2008_9/83964/index.html","timestamp":"2014-04-21T04:33:32Z","content_type":null,"content_length":"10236","record_id":"<urn:uuid:45d4cb15-ac50-4072-9d8c-dc13b91f8f15>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - User Profile for: urne_@_lumni.princeton.edu Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. User Profile: urne_@_lumni.princeton.edu User Profile for: urne_@_lumni.princeton.edu UserID: 41884 Name: Kirby Urner Registered: 12/6/04 Total Posts: 4,709 Show all user messages
{"url":"http://mathforum.org/kb/profile.jspa?userID=41884","timestamp":"2014-04-16T22:29:59Z","content_type":null,"content_length":"11929","record_id":"<urn:uuid:017589ed-da53-4c36-a042-6aa9fae18a8c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Eastwick, PA Calculus Tutor Find an Eastwick, PA Calculus Tutor ...I taught Prealgebra with a national tutoring chain for five years. I have taught Prealgebra as a private tutor since 2001. I completed math classes at the university level through advanced 12 Subjects: including calculus, writing, geometry, algebra 1 ...I would happy to share that knowledge and experience to help someone pass the math section of the ACT test. I have a bachelor's degree in mathematics. I have taken courses dealing with linear algebra, including the courses Linear Algebra and Linear Programming (which is basically an applied linear algebra course). I have a master's in education, in particular, mathematics education. 16 Subjects: including calculus, English, physics, geometry ...I can remember strict teachers drilling proper English usage into my head: diagramming sentences, looking up words in the dictionary, re-writing papers that my teachers knew I didn't put much effort into. It's no wonder that my English skills exceed those of most of today's English teachers. Un... 23 Subjects: including calculus, English, geometry, statistics ...I am particularly good at explaining concepts. The best way to build a solid vocabulary is to read as much as possible in a variety of areas. General-interest magazines are an excellent 32 Subjects: including calculus, English, geometry, biology I am graduate student working in engineering and I want to tutor students in SAT Math and Algebra and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school. 8 Subjects: including calculus, geometry, algebra 1, algebra 2 Related Eastwick, PA Tutors Eastwick, PA Accounting Tutors Eastwick, PA ACT Tutors Eastwick, PA Algebra Tutors Eastwick, PA Algebra 2 Tutors Eastwick, PA Calculus Tutors Eastwick, PA Geometry Tutors Eastwick, PA Math Tutors Eastwick, PA Prealgebra Tutors Eastwick, PA Precalculus Tutors Eastwick, PA SAT Tutors Eastwick, PA SAT Math Tutors Eastwick, PA Science Tutors Eastwick, PA Statistics Tutors Eastwick, PA Trigonometry Tutors Nearby Cities With calculus Tutor Billingsport, NJ calculus Tutors Briarcliff, PA calculus Tutors Colwyn, PA calculus Tutors Fernwood, PA calculus Tutors Llanerch, PA calculus Tutors Middle City East, PA calculus Tutors Middle City West, PA calculus Tutors Oakview, PA calculus Tutors Passyunk, PA calculus Tutors Primos Secane, PA calculus Tutors Primos, PA calculus Tutors Ridley, PA calculus Tutors Secane, PA calculus Tutors Tinicum, PA calculus Tutors Westbrook Park, PA calculus Tutors
{"url":"http://www.purplemath.com/Eastwick_PA_calculus_tutors.php","timestamp":"2014-04-17T19:57:25Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:2662d443-be29-4d95-8b91-5bf9443d7245>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Eastchester Statistics Tutor Find an Eastchester Statistics Tutor ...I can also help you prepare for GRE, GMAT, MCAT, DAT, and PRAXIS tests. For talented 7th and 8th graders I also offer Bergen Academies Entrance Exam Prep. I'm a very dynamic and engaging instructor who can quickly assess your needs and determine the best way to help you achieve your potential. 83 Subjects: including statistics, chemistry, physics, calculus ...I am a student of economics having worked on Wall Street for several years, and I have excellent quantitative skills and the ability to explain mathematical concepts as a result of tutoring mathematics. I have developed techniques that have helped students raise their scores dramatically! I wor... 34 Subjects: including statistics, calculus, writing, GRE ...I have a long background in Finance, mathematics and statistics, including Fellowship status as a Chartered Certified Accountant. I qualified with KPMG, then working as Audit Manager in the Caribbean. During my time as a mathematics and test preparation tutor, I have worked extensively with ADD students of all ages. 55 Subjects: including statistics, English, reading, writing ...I have taken probability in college. Additionally, I have been preparing for the P/1 actuarial exam for the last month, the first test to qualify as an actuary and essentially a difficult and comprehensive probability exam, giving me more recent practical experience. I have a comprehensive and well practiced background in mathematics from my degree in physics. 25 Subjects: including statistics, chemistry, calculus, physics ...I just completed AP Human Geography. I have taken both AP English Language and Composition and AP English Literature and Composition. I received a 5 on the AP English Language and Composition exam, and I have yet to receive my AP English Literature and Composition score. 43 Subjects: including statistics, English, calculus, reading
{"url":"http://www.purplemath.com/Eastchester_Statistics_tutors.php","timestamp":"2014-04-19T02:39:44Z","content_type":null,"content_length":"24225","record_id":"<urn:uuid:f7009689-eced-4404-b77e-185d879acd69>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Math eBook: Higher Order Derivatives When more than one derivative is applied to a function, it is consider higher order derivatives. For example, the derivative of y = x^7 is 7x^6. 7x^6 is differentiable and its derivative is 42x^5. 42x^5 is called the second order derivative of y with respect to x for function x^7. Repeating this process, function y's third order derivative 210x^4 is obtained. The higher order derivative notations is shown on the left. The chain rule also holds for higher order derivatives. For instance, one can find the fourth derivative for y = A sin(ωt + 1), in which y is a function of t. Consider y is a function of θ where θ = ωt +1. Therefore, the first order derivative is The second order derivative is The third order derivative is The forth order derivative is The implicit differentiation method also holds for higher order derivatives. For example, one can find the second derivative for ellipse In ellipse, y is a function of x. In order to find the derivative of y with respect to x, implicit differentiation method is applied. The first derivative is Rearranging equation (1) gives Taking another derivative with respect to x gives Substitute equation (2) to (3),
{"url":"https://ecourses.ou.edu/cgi-bin/eBook.cgi?topic=ma&chap_sec=03.1&page=theory","timestamp":"2014-04-21T00:35:39Z","content_type":null,"content_length":"13929","record_id":"<urn:uuid:65618688-c1c4-4590-a073-726210e5edd1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
math pals at Math Cats Classes of math pals can agree to write about any math topic, but here are a few ideas to get you started: * Agree on a math challenge to discuss and solve. Many possible challenges are archived at the MathMagic area of the Math Forum: www.mathforum.com/mathmagic/ * Trade math story problems: Each student writes and sends one or more math story problems (without giving the answers). Or: Teachers could pose problems to the students. These story problems might be a bit complex and worthy of discussion. The math pals write back and forth to share their thoughts, questions, and explanations until they agree on an answer. They could also draw pictures of the math story problems. * Compare math topics and approaches. Discuss the differences in what each class is currently studying as well as the differences in classroom environment, learning and teaching styles, and life in general. * Share favorite math activities: What are your favorite math games? Do you know any math puzzles or riddles? * Write about where you find math in your community: - Interview adults and youths in your community to find out how they use math in their daily lives and in their work. Share your interviews. - Go on a math hike in your neighborhood and write about what you find: What geometric shapes do you find? Where do you find math in nature? * Trade math art: Create geometric designs, 3D space forms, or other math crafts and send them to your math pals. Decorate your classroom. * "Time" for math: - Write down your schedule on a typical day. When do you get up? Eat breakfast? Leave the house? Arrive at school? and so on. When do you go to bed? - How much time do you spend each day (or each week) in school? in bed? eating? watching TV? playing outside? reading? doing your favorite hobby? Make a pie chart. * Conduct math surveys: Conduct class surveys, graph the results, and compare with your math pals. Or survey 100 people in each school and compare the results. Possible survey questions to get started: - What is your favorite color? food? music group? TV show? sport? book? - How much time do you spend each week doing... (whatever)? - How many pets do you have? * Conduct a math/science experiment: Grow plants from seeds... Check your classmates' heart rates after 100 jumping jacks... conduct a paper airplane contest... and then share, compare, and discuss your findings with your math pals. * For math pals of different grade levels: The younger group can tell the older group what they are currently learning in math, and the older group might offer help and extra insights: how will these math skills be expanded upon in the future? Where will these skills take them later in their schooling?
{"url":"http://www.mathcats.com/explore/mathpals.html","timestamp":"2014-04-19T05:16:51Z","content_type":null,"content_length":"12703","record_id":"<urn:uuid:fb324989-4733-493a-86f7-1906ff75b815>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
“Charlie would like to pack one less outfit than J Posted by Anonymous on Saturday, June 11, 2011 at 3:34am. “Charlie would like to pack one less outfit than Jane for their trip. Loretta would like to bring five outfits less than twice what Jane packs. Due to space in the suitcase, they are limited to a total of 24 outfits. How many outfits can they each pack?” • “Charlie would like to pack one less outfit than J - Jai, Saturday, June 11, 2011 at 3:47am first, we represent the unknowns using variables. let x = outfits that Jane packed let x-1 = outfits that Charlie packed let 2x-5 = outfits that Loretta packed since "limited to a total of 24 outfits", it means that the maximum number of outfits is 24: x + (x-1) + (2x-5) <= 24 note: <= means less than or equal to 4x - 6 <= 24 4x <= 30 x <= 7.5 but since outfits cannot be a decimal/fraction, we take x = 7 (the max number of outfit that Jane can pack) and thus, x-1 = 6 (max number of outfit that Charlie can pack) 2x-5 = 9 (max number of outfit that Loretta can pack) hope this helps~ :) Related Questions Translate the following situation into an inequali - “Charlie would like to pack... math - Translate the following situation into an inequality. Do not solve. “... homework - ranslate the following situation into an inequality. Do not solve. “... Counting/ Probability - I dont understand this problem at all, all help is ... Elem. Ed. - Would you please help me with this question? Charlie does not come ... Dressmaking - Ms. Troy would like a copy of an outfit she owns and really likes Counting and probability - I have no idea on how to do this, please help! Maya ... English - I am just curious if you know of any one I could submit an English ... sewing - Base the answers to Questions 1–3 on the following information: Ms. ... Math - 1)Ms.Troy would like you to make a copy of an outfit that she owns and ...
{"url":"http://www.jiskha.com/display.cgi?id=1307777648","timestamp":"2014-04-18T17:27:19Z","content_type":null,"content_length":"9252","record_id":"<urn:uuid:d2441ad1-efc4-4f8f-8268-4145d5c22850>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes Results 1 - 10 of 160 - Journal of the American Statistical Association , 1998 "... A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ..." Cited by 453 (8 self) Add to MetaCart A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ingredients: importance sampling and resampling, rejection sampling, and Markov chain iterations. We deliver a guideline on how they should be used and under what circumstance each method is most suitable. Through the analysis of differences and connections, we consolidate these methods into a generic algorithm by combining desirable features. In addition, we propose a general use of Rao-Blackwellization to improve performances. Examples from econometrics and engineering are presented to demonstrate the importance of Rao-Blackwellization and to compare different Monte Carlo procedures. Keywords: Blind deconvolution; Bootstrap filter; Gibbs sampling; Hidden Markov model; Kalman filter; Markov... , 1994 "... this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum ..." Cited by 354 (37 self) Add to MetaCart this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum likelihood estimator with that of quasi-likelihood and Bayesian estimators proposed in the literature. We also compare the fit of the stochastic volatility model to that of ARCH models using the likelihood criterion to illustrate the flexibility of the framework presented. Some key words: ARCH, Bayes estimation, Gibbs sampler, Heteroscedasticity, Maximum likelihood, Quasi-maximum likelihood, Simulation, Stochastic EM algorithm, Stochastic volatility, Stock returns. 1 INTRODUCTION - Machine Learning , 2000 "... Abstract. In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model sel ..." Cited by 202 (5 self) Add to MetaCart Abstract. In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model selection attempts to find the most likely (MAP) model, and uses its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed order over network variables. This allows us to compute, for a given order, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC) method, but over orders rather than over network structures. The space of orders is smaller and more regular than the space of structures, and has much a smoother posterior “landscape”. We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach. - Econometrica , 1998 "... This paper is concerned with the Bayesian estimation of non-linear stochastic differential equations when only discrete observations are available. The estimation is carried out using a tuned MCMC method, in particular a blocked Metropolis-Hastings algorithm, by introducing auxiliary points and usin ..." Cited by 155 (18 self) Add to MetaCart This paper is concerned with the Bayesian estimation of non-linear stochastic differential equations when only discrete observations are available. The estimation is carried out using a tuned MCMC method, in particular a blocked Metropolis-Hastings algorithm, by introducing auxiliary points and using the Euler-Maruyama discretisation scheme. Techniques for computing the likelihood function, the marginal likelihood and diagnostic measures (all based on the MCMC output) are presented. Examples using simulated and real data are presented and discussed in detail. - J. R. Statist. Soc. B , 2000 "... In treating dynamic systems, sequential Monte Carlo methods use discrete samples to represent a complicated probability distribution and use rejection sampling, importance sampling, and weighted resampling to complete the on-line "filtering" task. In this article we propose a special sequential Mont ..." Cited by 151 (5 self) Add to MetaCart In treating dynamic systems, sequential Monte Carlo methods use discrete samples to represent a complicated probability distribution and use rejection sampling, importance sampling, and weighted resampling to complete the on-line "filtering" task. In this article we propose a special sequential Monte Carlo method, the mixture Kalman filter, which uses random mixture of normal distributions to represent a target distribution. It is designed for on-line estimation and prediction of conditional and partial conditional dynamic linear models, which are themselves a class of widely used nonlinear system and also serve to approximate many other nonlinear systems. Compared with a few available filtering methods including Monte Carlo ones, the efficiency gain provided by the mixture Kalman filter can be very substantial. Another contribution of this article is the formulation of many nonlinear systems into conditional or partial conditional linear form, to which the mixture Kalman filter can be... - American Political Science Review , 2000 "... We propose a remedy for the discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. Methodologists and statisticians agree that "multiple imputation" is a superior approach to the problem of missing data scattered through ..." Cited by 141 (40 self) Add to MetaCart We propose a remedy for the discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. Methodologists and statisticians agree that "multiple imputation" is a superior approach to the problem of missing data scattered through one's explanatory and dependent variables than the methods currently used in applied data analysis. The reason for this discrepancy lies with the fact that the computational algorithms used to apply the best multiple imputation models have been slow, difficult to implement, impossible to run with existing commercial statistical packages, and demanding of considerable expertise. In this paper, we adapt an existing algorithm, and use it to implement a generalpurpose, multiple imputation model for missing data. This algorithm is considerably faster and easier to use than the leading method recommended in the statistics literature. We also quantify the risks of current missing data practices, ... - Statistica Sinica , 1997 "... Abstract: This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations which allow for analytic ..." Cited by 124 (5 self) Add to MetaCart Abstract: This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations which allow for analytical simplification. Hyperparameter settings which base selection on practical significance, and the implications of using mixtures with point priors are discussed. Computational methods for posterior evaluation and exploration are considered. Rapid updating methods are seen to provide feasible methods for exhaustive evaluation using Gray Code sequencing in moderately sized problems, and fast Markov Chain Monte Carlo exploration in large problems. Estimation of normalization constants is seen to provide improved posterior estimates of individual model probabilities and the total visited probability. Various procedures are illustrated on simulated sample problems and on a real problem concerning the construction of financial index tracking portfolios. , 1996 "... this paper, a special Metropolis-Hastings type algorithm, Metropolized independent sampling, proposed firstly in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the n-th ..." Cited by 96 (3 self) Add to MetaCart this paper, a special Metropolis-Hastings type algorithm, Metropolized independent sampling, proposed firstly in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the n-th updated distribution and the target distribution, are provided. Furthermore, the relationship between this scheme, rejection sampling, and importance sampling are studied with emphasizes on their relative efficiencies. It is shown that Metropolized independent sampling is superior to rejection sampling in two aspects: asymptotic efficiency and ease of computation. Key Words: Coupling, Delta method, Eigen analysis, Importance ratio. 1 1 Introduction - JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION , 2002 "... Markov chain Monte Carlo (MCMC) sampling strategies can be used to simulate hidden Markov model (HMM) parameters from their posterior distribution given observed data. Some MCMC methods (for computing likelihood, conditional probabilities of hidden states, and the most likely sequence of states) use ..." Cited by 86 (8 self) Add to MetaCart Markov chain Monte Carlo (MCMC) sampling strategies can be used to simulate hidden Markov model (HMM) parameters from their posterior distribution given observed data. Some MCMC methods (for computing likelihood, conditional probabilities of hidden states, and the most likely sequence of states) used in practice can be improved by incorporating established recursive algorithms. The most important is a set of forward-backward recursions calculating conditional distributions of the hidden states given observed data and model parameters. We show how to use the recursive algorithms in an MCMC context and demonstrate mathematical and empirical results showing a Gibbs sampler using the forward-backward recursions mixes more rapidly than another sampler often used for HMM's. We introduce an augmented variables technique for obtaining unique state labels in HMM's and finite mixture models. We show how recursive computing allows statistically efficient use of MCMC output when estimating the hidden states. We directly calculate the posterior distribution of the hidden chain's state space size by MCMC, circumventing asymptotic arguments underlying the Bayesian information criterion, which is shown to be inappropriate for a frequently analyzed data set in the HMM literature. The use of log-likelihood for assessing MCMC convergence is illustrated, and posterior predictive checks are used to investigate application specific questions of model adequacy. - STATISTICAL SCIENCE , 2001 "... Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue after burn-in? Developing rigorous answers to these questions presently requires a detailed study of the ..." Cited by 74 (19 self) Add to MetaCart Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue after burn-in? Developing rigorous answers to these questions presently requires a detailed study of the convergence properties of the underlying Markov chain. Consequently, in most practical applications of MCMC, exact answers to (Q1) and (Q2) are not sought. The goal of this paper is to demystify the analysis that leads to honest answers to (Q1) and (Q2). The authors hope that this article will serve as a bridge between those developing Markov chain theory and practitioners using MCMC to solve practical problems. The ability to formally address (Q1) and (Q2) comes from establishing a drift condition and an associated minorization condition, which together imply that the underlying Markov chain is geometrically ergodic. In this paper, we explain exactly what drift and minorization are as well as how and why these conditions can be used to form rigorous answers to (Q1) and (Q2). The basic ideas are as follows. The results of Rosenthal (1995) and Roberts and Tweedie (1999) allow one to use drift and minorization conditions to construct a formula giving an analytic upper bound on the distance to stationarity. A rigorous answer to (Q1) can be calculated using this formula. The desired characteristics of the target distribution are typically estimated using ergodic averages. Geometric ergodicity of the underlying Markov chain implies that there are central limit theorems available for ergodic averages (Chan and Geyer 1994). The regenerative simulation technique (Mykland, Tierney and Yu 1995, Robert 1995) can be used to get a consistent estimate of the variance of the asymptotic nor...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=27840","timestamp":"2014-04-20T10:24:21Z","content_type":null,"content_length":"41973","record_id":"<urn:uuid:d6d83344-d91f-4da2-abf7-9f2a17a959a5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Energy-Balanced Density Control to Avoid Energy Hole for Wireless Sensor Networks International Journal of Distributed Sensor Networks Volume 2012 (2012), Article ID 812013, 10 pages Research Article Energy-Balanced Density Control to Avoid Energy Hole for Wireless Sensor Networks ^1School of Information Science & Engineering, Northeastern University, Shenyang 110819, China ^2Research Institute, Northeastern University, Shenyang 110819, China Received 2 June 2011; Revised 1 September 2011; Accepted 2 October 2011 Academic Editor: Mandar Chitre Copyright © 2012 Jie Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Density control is of great relevance for wireless sensor networks monitoring hazardous applications where sensors are deployed with high density. Due to the multihop relay communication and many-to-one traffic characters in wireless sensor networks, the nodes closer to the sink tend to die faster, causing a bottleneck for improving the network lifetime. In this paper, the theoretical aspects of the network load and the node density are investigated systematically. And then, the accessibility condition to satisfy that all the working sensors exhaust their energy with the same ratio is proved. By introducing the concept of the equivalent sensing radius, a novel algorithm for density control to achieve balanced energy consumption per node is thus proposed. Different from other methods in the literature, a new pixel-based transmission mechanism is adopted, to reduce the duplication of the same messages. Combined with the accessibility condition, nodes on different energy layers are activated with a nonuniform distribution, so as to balance the energy depletion and enhance the survival of the network effectively. Extensive simulation results are presented to demonstrate the effectiveness of our algorithm. 1. Introduction With the help of technological advances in MEMS, a mass production of tiny and economical sensors becomes possible. A wireless sensor network consists of a large number of sensor nodes deployed in region of interest to collect related information and communicate the results to the users [1]. The network can be embedded in our physical environment and have many potential applications, such as battlefield surveillance, environment monitoring, and fire detection. Since the microsensors are usually supported by battery, they are thus limited in resources and vulnerable in nature. When sensor nodes are deployed to monitor hazardous applications like over a battlefield, an important question is to guarantee that the target area is covered and the detection probability is high. If a small number of nodes are deployed, blind spots or sensing holes might be left, which may reduce the accuracy of the results obtained. In order to enhance the reliability of the network, sensor nodes are usually deployed with high density, up to 20nodes/m^3. However, as all of the nodes share common sensing tasks, if those sensors operate in the active mode simultaneously, data collected in such a high-density network would be highly correlated and redundant, consuming an excessive amount of energy. To illustrate the point, imagine the scenario that when a certain triggering event occurs, a large number of nodes will send packets at the same time, making such a network less responsive and less energy efficient. As a result, deploying such a sensor network to monitor hazardous applications and maintaining its sensing coverage could be a daunting task. In general, density control is an effective method to solve the above problem. Recently, a class of work has appeared to find the optimal subset of sensor nodes for densely deployed wireless sensor network while these working nodes can completely cover the monitored area [2–8]. In addition, the problem is proved to be NP-complete [3]. However, most of the current works do not consider the issue of uneven energy depletion with distance to a predetermined sink. They all aimed to achieve a uniform hexagonal distribution to preserve area coverage with the fewest sensors. When using such a uniform distribution in many-to-one sensor network applications, the sensor nodes around the sink should forward more data and deplete their energy faster. Consequently, an energy imbalance problem manifests itself, as an energy hole is created around the sink node. If this happens, no more data can be transmitted to the sink. Moreover, the network lifetime ends soon and more energy of the nodes would be wasted. Experimental results in [9] show that when the network lifetime is over, up to 90% of the total initial energy of the nodes is left unused if the nodes are distributed uniformly in the network. It becomes a major concern for network designers to maintain the balance of power consumption so that the lifetime of sensor network is prolonged. In this paper, we formulate the energy imbalance problem and present a nonuniform distribution of sensor nodes to analyze the maximum network lifetime for many-to-one wireless sensor networks. In contrast to constant data acquisition rate, we import the pixel-based transmission mechanism to avoid sending needless duplication of the same sensing data. Furthermore, a density control algorithm is proposed to achieve balance of energy depletion by introducing the concept of the equivalent sensing radius. The rest of this paper is organized as follows. In Section 2, we review the related work in the literature. In Section 3, we theoretically analyze the nonuniform node distribution strategy. And after that, an energy-balanced density control algorithm is proposed in Section 4. Section 5 describes the simulation results of the proposed algorithm. Finally, the paper is concluded in Section 6. 2. Related Work As one of the most fundamental issues in wireless sensor networks, the density control problem has attracted significant research attention. Therefore, coverage together with sensor management has been a strong research focus for the last few years. In [3], the authors provide a method for finding the maximum number of disjoint cover sets that are working successively in a WSN. In each cover set, a sufficient number of sensor nodes necessary to cover the targets are active, while the remainder of the nodes are put to sleep. However, their approach is based on a centralized solution. A distributed approach named PEAS is proposed in [4], in which the nodes use a simple rule to decide about their activity. If a node cannot find any active node in the probing range, it becomes active. Otherwise, it returns to the sleeping mode. Although this approach eliminates the complexity of getting neighbours’ status, it does not require location information and cannot guarantee full sensing coverage for the target area. Similarly, the authors in [5] propose a scheduling scheme that enables each node to enter active or sleeping state based on the coverage relationship with its neighbours. In their approach, in order to avoid the “blind point” caused by two neighbours simultaneously turning off, a random back-off time is introduced before the node makes a decision about its status. However, these algorithms cannot achieve full sensing coverage for the target area. Our previous work reported in [6] attempted to find the best cover to maintain a full coverage of the network with the least number of working nodes, and a NSGA-II based approach was proposed. The coverage problem is also explored in [7], and a distributed, localized algorithm, called OGDC (Optimal Geographical Density Control), is proposed to maintain coverage as well as connectivity. They prove that if the communication range is at least twice the sensing range, complete coverage implies connectivity. In particular, the jointly coverage and connectivity problem is studied in [8], and a sleep-awake scheduling scheme is proposed for energy conservation and surveillance quality provisioning. In [10], the coverage maintenance protocol named as PCP is proposed, and the simulation results show that it can significantly save the number of activated sensors by using probabilistic sensing model. These algorithms all focused on finding a uniform distribution, thus to reduce the number of working nodes. However, as the sensors closer to the sink tend to carry more traffic loads and thus would consume more energy, the uniform deployment will cause the network lifetime descended by the sensors at the first-hop from the sink. This is also known as the “energy hole” problem, which is characterized by a mathematical model in [11]. Apparently, it cannot prolong the system lifetime under a uniform distribution by simply increasing the number of nodes. The authors in [12–14] have also investigated several approaches to mitigate this problem. In [12], the authors present a mathematical model and aim to investigate some approaches towards mitigating this energy hole problem. However, the uneven energy depletion still exists, even by using their mere system design and the associated routing strategy. The authors in [13] investigated the energy hole problem and designed guidelines for maximizing lifetime and avoiding energy holes in sensor networks with nonuniform distribution. In [14], the authors proposed a nonuniform deployment scheme based on a general sensor application model. They derived a formula to determine the number of nodes as a function of the distance from the sink. Simulation results show that their method can enhance the network lifetime. Since each sensor was also assumed to report the data to the sink with the same acquisition rate, it cannot achieve the energy balance completely in the entire network. 3. Preliminaries 3.1. Assumptions and Network Model In this section, we present our network model and basic assumptions. Assume that a set of heterogeneous sensors are deployed in a circular area with radius in order to monitor some physical phenomenon. We refer to the complete set of sensors that has been deployed as . Each sensor node has an ID, a fixed transmission range , and a fixed sensing range . Note that the location awareness is impractical in the highly dense network. In recent years, many research efforts have been made to address the localization problem [15–18]. However, this requirement can be relaxed slightly in our work if each node is aware of its relative location to the neighbours. The only sink node is located at the centre of the circle, as shown in Figure 1. We divide the area into adjacent coronas with the same width of and denote the th corona by . Obviously, the corona is composed of nodes whose distances to the sink are between and . The network works in rounds, and each round is further divided into two phases: the first phase of node selection and the second phase of stability monitoring. In the first phase, the suitable sensor nodes are selected to work and the rest of the nodes are set to sleep state thus to save energy. During the second phase, each working node should send their sensing messages to the sink node per unit monitoring cycle . In order to avoid retransmitting the same messages in cross-covered areas, each sensor needs to check its own Voronoi polygon through the establishment of Voronoi graph with its neighbours before sending any data. In our work, this mechanism is called as the pixel-based transmission mechanism, which can ensure that the information data for any pixel in the target area is sent only once. We use a simplified power consumption model and do not consider the MAC layer and physical layer issues. In our model, the energy consumption is only dominated by communication costs, as opposed to sensing and processing costs. The initial energy of each sensor is , and the sink has no energy limitation. A node consumes units of energy when sending one bit, while it depletes units of energy when receiving one bit, where . 3.2. Nonuniform Node Distribution Based on the network model, nodes belonging to corona will forward both the data generated by themselves and the data generated by coronas while the nodes in the outermost corona need not forward any data. Assume that the sensors in each corona are distributed uniformly and there is no data aggregation at any forwarding nodes. Define the number of nodes deployed in corona to be and the number of pixels in corona to be . Based on the pixel-based transmission mechanism, the number of messages for corona to receive and forward is and . As the sensing messages are transmitted per monitoring cycle , the average energy consumption for sensors in corona during is Note that (1) can be simplified as where is the node density of corona . Sensors in corona only need to send their own sensing messages; so the energy depletion of sensors in corona is Thus, we can formulate as follows: Ideally, when all the nodes deplete their energy with the same ratio, the network lifetime is prolonged and the energy efficiency is improved. In particular, there is no energy wasted and the network lifetime can be given by Theorem 1. Maximum energy efficiency is possible, in the sense that all the working nodes take the pixel-based transmission mechanism, and the node distribution density ρ[i] in corona C[i] satisfies Proof. To use the deductive method, suppose that (6) is true, and thus (2) can be described as follows: Owing to , after basic transformations, we have Since is a permanent establishment, we can get the following conclusion, . This completes the proof of Theorem 1. Theorem 1 shows that in a circular monitored area, based on the pixel data transmission mechanism, if the sensors in each corona obey a nonuniform distribution and the distribution density meets a certain condition, the energy-balanced depletion of the whole network can be achieved. Besides, the node density of corona only relates to of corona and the corona number . Further we will analyze the lifetime enhancement of the nonuniform distribution strategy to the traditional one. Suppose that the node density in nonuniform distribution satisfies (6) and the initial conditions are the same. In the uniform distribution, the density is equal to . As the innermost corona needs to forward all of the sensing messages in the whole network, it consumes the most energy. Thus the maximum lifetime of network in uniform distribution is determined by the survival time . The network lifetime can be calculated as where is the average energy depletion of per unit time in uniform distribution. Using (8), we can get the average energy depletion in under energy-balanced conditions as Thus the lifetime enhancement is Therefore, the network lifetime of nonuniform distribution can be extended times effectively compared with the traditional uniform distribution strategy. 4. Energy-Balanced Density Control 4.1. Problem Formulation The problem of Energy-Balanced Density Control (EBDC) is formalized as follows. Given a set of potential sensors, , find a subset , which achieves a nonuniform sensor distribution satisfying (6), and the number of sensors is minimized with a full coverage. The subset is named as the energy balance working cover for the target area. 4.2. Density Control Based on Equivalent Sensing Radius The proposed algorithm, called EBDC, is inspired by the algorithm introduced in [7]. As a contribution, we made major modifications with the purpose of selecting sensors at variable densities according to (6). Definition 2 (Equivalent sensing radius). It is defined as the sensing radius when the given distribution density is the lowest one to maintain network coverage. As the hexagonal distribution is the optimal sensor distribution to fully cover the target area with the fewest sensors, define to be the hexagonal area covered by sensor with the sensing radius can be calculated as And the minimum distribution density to fully cover the area is Thus, the relationship of the equivalent sensing radius and the distribution density is Theorem 3. If the sensor selection algorithm uses the equivalent sensing radius according to the density , the network can achieve balanced energy depletion, where satisfies Proof. According to the definition of equivalent sensing radius, we can combine it with the energy-balanced condition in (6). Thus we have After transformation, we have This concludes the proof of Theorem 3. Therefore, by introducing the concept of equivalent sensing radius, the problem of EBDC can be transformed into a uniform density control problem with different sensing radius, which gives the chance of using the existing schemes to solve it. In this paper, the density control algorithm is combined with OGDC approach, which only needs relative location during node selection. In order to make sure that the node selected in each corona satisfies hexagonal distribution with its equivalent sensing radius, first, each sensor needs to know which corona is located. The calculation mechanism of corona number is presented in Section 4.2.1. And then the optimal principle for sensor selection is adopted [7]: anytime when a sensor in corona is active, the next active node with the distance of away from the first one will be selected, and a similar selection method is used for the third node. Ideally, the centres of the three sensors should form an equilateral triangle with edge . 4.2.1. Calculation of Corona Number Since the sensors are deployed with high density, there are challenges to calculating each sensor's location and measuring the distance between sensor nodes accurately. Moreover, it seems to be impossible for sensor to calculate corona number based on the distance . On the other hand, as the corona number is equal to the minimum hop count from each sensor to sink, the corona number for each sensor can be calculated simply on the basis of its minimum hop count through routing. In our paper, this minimum hop count is calculated by using DV-hop localization algorithm [16]. The calculation of minimum hop count in DV-hop localization algorithm is similar to classical distance vector routing. At first, the sink node broadcasts a beacon to be flooded throughout the network containing its position with a hop-count parameter, which is initialized to be one. Then, each receiving node maintains the minimum counter value per anchor of all the beacons received and ignores those with higher hop-count values. At every intermediate hop, beacons are flooded outward with hop-count values incremented. Through this mechanism, all of the sensor nodes can get the minimum hops to the sink. The calculation of corona number based on hop count is shown in Figure 2. 4.2.2. Selection of the Starting Node After all the sensors calculated their corona number and the corresponding equivalent sensing radius, they are powered on with undecided status. Then the node volunteer whose energy exceeds a predetermined power threshold will become a starting node with probability , where is related to the length of the round. In general, it is set to a value so as to ensure that the sensor can remain powered on until the end of the round with high probability. And then, a back-off timer of seconds is set, where is distributed uniformly in . When the timer expires, the node turns into the “ON” state and broadcasts a power-on message meanwhile. The power-on message is a quaternary array of location, , Corona_Num, , indicating the location of sensor, the equivalent sensing radius, the serial corona number, and the angle of next node. Note that is used to determine the direction along which the second working node should be located. It is uniformly distributed in , where represents the direction range of the next selecting nodes and is related to the location of the first selecting node. In the next section, we will give a detailed description of how to calculate . A message-driven mechanism is used in the wake-up process. If the initial candidate node receives power-on messages before the back-off time finishes, its timer is cancelled and this node cannot become a starting node. This method helps to avoid many neighbours to become the starting node at the same time effectively. If the node does not volunteer itself to be a starting node, a timer of seconds will be set to a sufficiently large value, such that there is at least one node whose power level qualifying to be a starting node and the selection of working nodes can be completed in an early stage of each 4.2.3. Actions Taken When Receiving a Power-on Message When a node receives a power-on message, it first checks whether the Corona_Num are equal. If this message comes from an adjacent corona, and the receiving node is not “ON”, or there are not any uncovered crossings, it will omit this message and sets itself to “OFF” state. Otherwise, it will become a starting node of its corona and transmits a new power-on message with new Corona_Num and a new equivalent sensing radius. If the power-on message comes from the same corona, the subsequent actions taken are to ensure that the working sensors selected form a hexagon distribution. Similar to OGDC, , , and are back-off timers indicating the different retreated actions in different cases. In any of the above three cases, when the back-off timer expires, the node sets its state to “ON” and broadcasts a power-on message with a new direction field set to −1 (indicating a message generated by a nonstarting node). The whole procedure of a node receiving a power-on message is shown in Figure 3. 4.2.4. Direction Range in Power-on Message The parameter in power-on message indicates the direction along which next working node hoping to be activated. In terms of large coverage area, is distributed uniformly in . In order to distinguish the power-on message from starting nodes or nonstarting nodes, we set the latter to −1. In terms of the node selection in different coronas with different equivalent sensing radius, the ratio of corona width to equivalent sensing radius cannot be ignored. Therefore, when the OGDC scheme is implemented in each corona with , the boundary effect must be considered. Furthermore, in order to speed the dissemination of power-on message in the same corona, it is also need to control the direction range . Define the coordinate of candidate starting sensor as , and the sink as . Firstly, the distance between and sink is calculated, and then the corona serial number as well as the equivalent sensing radius is determined according to dist(A, sink). Based on this point, we can calculate the direction range as follows: if the sensing disc centred at with radius has at most one crossing point with the inner or outer boundary of corona , is set to , as shown in Figure 4(a). Otherwise, we have one of the three cases, as depicted from Figure 4(b) to Figure 4(d): (i) there exists two crossing points between the sensing disc and the inner border of the outer adjacent corona; (ii) there exists two crossing points between the sensing disc and the outer border of the inner adjacent corona; (iii) there exists four crossing points between the sensing disc and the border of both inner and outer adjacent coronas. If case (i) satisfies, we have and , as shown in Figure 4(b). The intersection coordinates can be obtained by the following formulation: Assuming the calculated crossing points as and , the direction range is given by . If case (ii) satisfies, we have and , as shown in Figure 4(c). The intersection coordinates can be obtained by the following formulation: Assuming the calculated crossing points as and , the direction range is given by . If case (iii) satisfies, we have and , as shown in Figure 4(d). The intersection coordinates can be obtained by the following formulation: Using , , , and to represent the calculated crossing points, we can get the direction range : After the direction range is set, the nodes along will be selected first, like nodes and in Figure 4(d). When node becomes the power-on node, and will be the following active nodes to form a hexagonal distribution. We should note that although the node selection along range will result in the region out of being uncovered originally, with the dissemination of the power-on message in the adjacent coronas, finally, it will be fully covered by the node in its adjacent coronas. Take node in Figure 4(d) as an example. When node receives a power-on message from node , it will become the new starting node in corona according to Figure 2, thus to cover the shadow area in corona . To summarize, by calculating the direction range, we can achieve a rapid selection of the working nodes in the same corona and reduce the number of invalid power-on messages effectively. 5. Simulations Results In this section, we evaluate the performance of the proposed density control algorithm. The basic simulation parameters are listed in Table 1. Initially, in order to deploy more nodes close to the sink node, the deployment model of two-dimensional Gaussian distribution is adopted. Given the coordinate of sink as , the node deployment density follows: where is the standard deviation of coordinate , and it is equal to the communication radius in our simulation. As the finally selected nodes obey approximate uniform distribution in the corona in each round, the sensing data forwarding strategy is similar to [13]. Any node in corona can communicate with almost nodes in the ring directly, where . Among these candidate forwarding nodes, the node with most residual energy will be selected as the forwarding node. There are 1000 potential sensors randomly distributed in the circular area of radius 60 using Gaussian distribution deployment model, as shown in Figure 5(a). The target area is divided into three coronas denoted by , , and . From (17), we can calculate the equivalent sensing radius from to to be 2.8, 5.28, and 10. Figure 5(b) shows the working sensors selected after running NSGA-II 500 generations. The number of working nodes selected from corona to is 24, 53, and 59. Further, those working sensors in Figure 5 (b) are renamed as 1, 2, 3,…, 136, where the sensors with the larger IDs belong to outer coronas and those with smaller IDs are closer to sink node. In order to verify that the working sensors selected by our algorithm can balance energy consumption, the energy depletion of this working set in one round is investigated especially. In our simulation, the working round is set as 1000s, and the monitoring cycle is 3s. The energy depletion of those nodes in one working round is shown in Figure 6. From Figure 6, we can see that although the nodes in corona and behave as both data originator and router, the energy consumption of the whole working set is almost equal, thus to enhance the power efficiency of sensors in outer coronas. This is mainly because the inner sensors’ sensing pixels are much smaller than those of the outer sensors by adopting nonuniform sensor distribution and pixel-based transmission mechanism. Figure 7 shows the relationship between the total energy left and working rounds. From Figure 7, we can see that the total energy left with working rounds has an approximate linear relation. When the network runs to 150 working rounds, the remaining energy is 102300. Continuing to run algorithm, we can see that the energy attenuation with the working cycles becomes more flat. That is because the remaining survival nodes can no longer establish communication with the sink node, and the energy consumption is mainly caused by network sensing with little data forwarding. Although there is about 10% of the residual energy, due to the uneven distribution of these final survival nodes, they cannot meet the needs of the network coverage and connectivity any more. We further compare the performance of EBDC with OGDC, PCP, and the nonuniform distribution in [13]. In those series of simulations, we vary the deployed nodes density from 1000 to 3000 nodes in the circular area with radius 60. The round length is set as 1000s and the monitoring cycle is 3s. Figure 8 shows the network lifetime comparison with different node deployments. Although OGDC focuses on how to select the optimal cover set, it does not consider the imbalance consumption of energy near the sink. This mainly causes the nodes near the sink node to forward data more frequently and finally gets a much shorter network lifetime. As PCP uses a probabilistic detection model, it needs fewer active sensors to cover the target area completely and thus has more working rounds than OGDC. However, the problem of imbalanced energy depletion is not solved effectively in PCP. Although literature [13] adopts a nonuniform node distribution, the energy imbalance still exists because of constant data acquisition, which inevitably leads inner sensors consume more energy than outer sensors. As the pixel-based transmission mechanism is imported in our scheme, the total transmitted messages in each round are much smaller. Our algorithm can achieve the energy balance by selecting suitable nodes to work and thus has a much longer network lifetime. Figure 9 shows the comparison of the energy unused ratio with different deployed nodes, which refers to the ratio of the residual energy to the total energy at the end of the network lifetime. With the increase of deployed nodes in the network, the energy unused ratio shows a downward tendency using our algorithm. This is because that the energy consumption of each node selected by our algorithm in each working cycle is almost equal. The energy unused is mainly caused by the initially uneven distribution of nodes. The more nodes deployed, the more chance can be got for the survival nodes connecting to the sink and thus reduce the energy unused ratio. Since OGDC and PCP adopt a uniform node selection strategy and do not consider the phenomenon of energy imbalance consumption, both of them have a large energy unused ratio. Actually, as the nonuniform distribution strategy does not take the pixel-based data transmission mechanism, the energy imbalance cannot be avoided. Compared with the above methods, our density control algorithm based on energy balance has the higher-energy efficiency, which verifies the effectiveness of the algorithm. 6. Conclusion In this paper, we have investigated the density control problem to select the energy-balanced working nodes for sensor networks. We analyze energy attenuation in nonuniform distribution strategy theoretically and prove that when the pixel-based transmission mechanism is adopted, a full energy balance can be achieved through the rational node distribution density. Contributively, a distributed nonuniform density control algorithm with the concept of equivalent sensing radius is proposed. Simulation results show that our algorithm has a better performance than the existing algorithms and can prolong the network lifetime effectively. In the future, as our work requires that each node knows its relative locations, we plan to investigate more deeply the impact of location on the performance of the proposed approach. We also intend to extend EBDC to the probabilistic sensing models and investigate some potential applications of EBDC such as topology control, distributed storage, and network health monitoring. The authors would like to thank the reviewers giving valuable comments on the earlier version of this paper. This work is supported by the National Natural Science Foundation of China under Grant nos. 60903159, 61173153, 61070162, 71071028, and 70931001; China Postdoctoral Science Foundation funded project under Grant no. 20110491508; the Specialized Research Fund for the Doctoral Program of Higher Education under Grant no. 20070145017; and the Fundamental Research Funds for the Central Universities under Grant nos. N090504003 and N090504006. 1. I. F. Akyildiz, S. Weilian, Y. Sankarasubramaniam, and E. Cayirci, “A survey on sensor networks,” IEEE Communications Magazine, vol. 40, no. 8, pp. 102–114, 2002. View at Publisher · View at Google Scholar · View at Scopus 2. J. Jiang and W. Dou, “A coverage-preserving density control algorithm for wireless sensor networks,” in Proceedings of the 3rd International Conference on Ad Hoc Networks and Wireless, vol. 3158, pp. 42–55, LNCS, 2004. 3. S. Slijepcevic and M. Potkonjak, “Power efficient organization of wireless sensor networks,” in Proceedings of the IEEE International Conference on Communications, (ICC '01), pp. 472–476, IEEE, June 2000. 4. F. Ye, G. Zhong, J. Cheng, S. Lu, and L. Zhang, “PEAS: a robust energy conserving protocol for long-lived sensor networks,” in Proceedings of the 23th IEEE International Conference on Distributed Computing Systems, (ICDCS '03), pp. 28–37, IEEE, May 2003. 5. D. Tian and N. D. Georganas, “A coverage-preserving node scheduling scheme for large wireless sensor networks,” in Proceedings of the ACM International Workshop on Wireless Sensor Networks and Applications, (WSNA '02), pp. 32–41, ACM, September 2002. View at Scopus 6. J. Jia, J. Chen, G. R. Chang, and Y. Y. Wen, “Efficient cover set selection in wireless sensor networks,” Acta Automatica Sinica, vol. 34, no. 9, pp. 1157–1162, 2008. View at Publisher · View at Google Scholar · View at Scopus 7. H. Zhang and J. C. Hou, “Maintaining sensing coverage and connectivity in large sensor networks,” Adhoc and Sensor Wireless Networks, vol. 1, no. 1, pp. 89–124, 2005. 8. A. Keshavarzian, H. Lee, and L. Venkatraman, “Wakeup scheduling in wireless sensor networks,” in Proceedings of the 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing, (MOBIHOC '06), pp. 322–333, ACM, May 2006. View at Scopus 9. A. Wadaa, S. Olariu, L. Wilson, M. Eltoweissy, and K. Jones, “Training a wireless sensor network,” Mobile Networks and Applications, vol. 10, no. 1, pp. 151–168, 2005. View at Publisher · View at Google Scholar · View at Scopus 10. M. Hefeeda and H. Ahmadi, “Energy-efficient protocol for deterministic and probabilistic coverage in sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 21, no. 5, pp. 579–593, 2010. View at Publisher · View at Google Scholar · View at Scopus 11. J. Li and P. Mohapatra, “An analytical model for the energy hole problem in many-to-one sensor networks,” in Proceedings of the 62nd Vehicular Technology Conference, (VTC '05), pp. 2721–2725, IEEE, September 2005. 12. S. Olariu and I. Stojmenovic, “Design guidelines for maximizing lifetime and avoiding energy holes in sensor networks with uniform distribution and uniform reporting,” in Proceedings of the 25th IEEE International Conference on Computer Communications, (INFOCOM '06), pp. 1–12, IEEE, April 2006. 13. X. B. Wu and G. H. Chen, “The energy hole problem of nonuniform node distribution in wireless sensor networks,” Chinese Journal of Computers, vol. 31, no. 2, pp. 253–261, 2008. View at Scopus 14. M. Cardei, Y. Yang, and J. Wu, “Non-uniform sensor deployment in mobile wireless sensor networks,” in Proceedings of the International Symposium on a World of Wireless, Mobile and Multimedia Networks, (WoWMoM '08), pp. 1–8, June 2008. View at Publisher · View at Google Scholar · View at Scopus 15. A. Savvides, C. C. Han, and M. B. Strivastava, “Dynamic fine-grained localization in ad-hoc networks of sensors,” in Proceedings of the seventh annual international conference on Mobile computing and networking, (Mobicom '01), pp. 166–179, July 2001. View at Scopus 16. D. Niculescu and B. Nath, “DV based positioning in ad hoc networks,” Telecommunication Systems, vol. 22, no. 1–4, pp. 267–280, 2003. View at Publisher · View at Google Scholar · View at Scopus 17. R. Sugihara and R. K. Gupta, “Sensor localization with deterministic accuracy guarantee,” in Proceedings of the 30th International Conference on Computer Communications, (INFOCOM '11), pp. 1772–1780, June 2011. 18. M. Jin, S. Xia, H. Wu, and X. Gu, “Scalable and fully distributed localization with mere connectivity,” in Proceedings of the 30th International Conference on Computer Communications, (INFOCOM '11), pp. 3164–3172, June 2011.
{"url":"http://www.hindawi.com/journals/ijdsn/2012/812013/","timestamp":"2014-04-20T09:01:41Z","content_type":null,"content_length":"324287","record_id":"<urn:uuid:69b4ae79-dda5-4c9d-8a5d-34df7908055b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 25 of 101 1. CMB Online first Combinatorially Factorizable Cryptic Inverse Semigroups An inverse semigroup $S$ is combinatorially factorizable if $S=TG$ where $T$ is a combinatorial (i.e., $\mathcal{H}$ is the equality relation) inverse subsemigroup of $S$ and $G$ is a subgroup of $S$. This concept was introduced and studied by Mills, especially in the case when $S$ is cryptic (i.e., $\mathcal{H}$ is a congruence on $S$). Her approach is mainly analytical considering subsemigroups of a cryptic inverse semigroup. We start with a combinatorial inverse monoid and a factorizable Clifford monoid and from an action of the former on the latter construct the semigroups in the title. As a special case, we consider semigroups which are direct products of a combinatorial inverse monoid and a group. Keywords:inverse semigroup, cryptic semigroup 2. CMB Online first On the ${\mathcal F}{\Phi}$-Hypercentre of Finite Groups Let $G$ be a finite group, $\mathcal F$ a class of groups. Then $Z_{{\mathcal F}{\Phi}}(G)$ is the ${\mathcal F}{\Phi}$-hypercentre of $G$ which is the product of all normal subgroups of $G$ whose non-Frattini $G$-chief factors are $\mathcal F$-central in $G$. A subgroup $H$ is called $\mathcal M$-supplemented in a finite group $G$, if there exists a subgroup $B$ of $G$ such that $G=HB$ and $H_1B$ is a proper subgroup of $G$ for any maximal subgroup $H_1$ of $H$. The main purpose of this paper is to prove: Let $E$ be a normal subgroup of a group $G$. Suppose that every noncyclic Sylow subgroup $P$ of $F^{*}(E)$ has a subgroup $D$ such that $1\lt |D|\lt |P|$ and every subgroup $H$ of $P$ with order $|H|=|D|$ is $\mathcal M$-supplemented in $G$, then $E\leq Z_{{\mathcal U}{\Phi}} Keywords:${\mathcal F}{\Phi}$-hypercentre, Sylow subgroups, $\mathcal M$-supplemented subgroups, formation Categories:20D10, 20D20 3. CMB 2014 (vol 57 pp. 277) On Mutually $m$-permutable Product of Smooth Groups Let $G$ be a finite group and $H$, $K$ two subgroups of G. A group $G$ is said to be a mutually m-permutable product of $H$ and $K$ if $G=HK$ and every maximal subgroup of $H$ permutes with $K$ and every maximal subgroup of $K$ permutes with $H$. In this paper, we investigate the structure of a finite group which is a mutually m-permutable product of two subgroups under the assumption that its maximal subgroups are totally smooth. Keywords:permutable subgroups, $m$-permutable, smooth groups, subgroup lattices Categories:20D10, 20D20, 20E15, 20F16 4. CMB 2014 (vol 57 pp. 390) Simplicity of Some Twin Tree Automorphism Groups with Trivial Commutation Relations We prove simplicity for incomplete rank 2 Kac-Moody groups over algebraic closures of finite fields with trivial commutation relations between root groups corresponding to prenilpotent pairs. We don't use the (yet unknown) simplicity of the corresponding finitely generated groups (i.e., when the ground field is finite). Nevertheless we use the fact that the latter groups are just infinite (modulo center). Keywords:Kac-Moody group, twin tree, simplicity, root system, building Categories:20G44, 20E42, 51E24 5. CMB Online first Strong Asymptotic Freeness for Free Orthogonal Quantum Groups It is known that the normalized standard generators of the free orthogonal quantum group $O_N^+$ converge in distribution to a free semicircular system as $N \to \infty$. In this note, we substantially improve this convergence result by proving that, in addition to distributional convergence, the operator norm of any non-commutative polynomial in the normalized standard generators of $O_N^+$ converges as $N \to \infty$ to the operator norm of the corresponding non-commutative polynomial in a standard free semicircular system. Analogous strong convergence results are obtained for the generators of free unitary quantum groups. As applications of these results, we obtain a matrix-coefficient version of our strong convergence theorem, and we recover a well known $L^2$-$L^\ infty$ norm equivalence for non-commutative polynomials in free semicircular systems. Keywords:quantum groups, free probability, asymptotic free independence, strong convergence, property of rapid decay Categories:46L54, 20G42, 46L65 6. CMB 2014 (vol 57 pp. 231) On the Multiplicities of Characters in Table Algebras In this paper we show that every module of a table algebra can be considered as a faithful module of some quotient table algebra. Also we prove that every faithful module of a table algebra determines a closed subset which is a cyclic group. As a main result we give some information about multiplicities of characters in table algebras. Keywords:table algebra, faithful module, multiplicity of character Categories:20C99, 16G30 7. CMB Online first On Braided and Ribbon Unitary Fusion Categories We prove that every braiding over a unitary fusion category is unitary and every unitary braided fusion category admits a unique unitary ribbon structure. Keywords:fusion categories, braided categories, modular categories Categories:20F36, 16W30, 18D10 8. CMB Online first ZL-amenability Constants of Finite Groups with Two Character Degrees We calculate the exact amenability constant of the centre of $\ell^1(G)$ when $G$ is one of the following classes of finite group: dihedral; extraspecial; or Frobenius with abelian complement and kernel. This is done using a formula which applies to all finite groups with two character degrees. In passing, we answer in the negative a question raised in work of the third author with Azimifard and Spronk (J. Funct. Anal. 2009). Keywords:center of group algebras, characters, character degrees, amenability constant, Frobenius group, extraspecial groups Categories:43A20, 20C15 9. CMB 2013 (vol 57 pp. 125) Camina Triples In this paper, we study Camina triples. Camina triples are a generalization of Camina pairs. Camina pairs were first introduced in 1978 by A .R. Camina. Camina's work was inspired by the study of Frobenius groups. We show that if $(G,N,M)$ is a Camina triple, then either $G/N$ is a $p$-group, or $M$ is abelian, or $M$ has a non-trivial nilpotent or Frobenius quotient. Keywords:Camina triples, Camina pairs, nilpotent groups, vanishing off subgroup, irreducible characters, solvable groups 10. CMB 2013 (vol 57 pp. 9) Integral Sets and the Center of a Finite Group We give a description of the atoms in the Boolean algebra generated by the integral subsets of a finite group. Keywords:integral set, characters, Boolean algebra 11. CMB 2013 (vol 56 pp. 795) Upper Bounds for the Essential Dimension of $E_7$ This paper gives a new upper bound for the essential dimension and the essential 2-dimension of the split simply connected group of type $E_7$ over a field of characteristic not 2 or 3. In particular, $\operatorname{ed}(E_7) \leq 29$, and $\operatorname{ed}(E_7;2) \leq 27$. Keywords:$E_7$, essential dimension, stabilizer in general position Categories:20G15, 20G41 12. CMB 2012 (vol 57 pp. 326) On Zero-divisors in Group Rings of Groups with Torsion Nontrivial pairs of zero-divisors in group rings are introduced and discussed. A problem on the existence of nontrivial pairs of zero-divisors in group rings of free Burnside groups of odd exponent $n \gg 1$ is solved in the affirmative. Nontrivial pairs of zero-divisors are also found in group rings of free products of groups with torsion. Keywords:Burnside groups, free products of groups, group rings, zero-divisors Categories:20C07, 20E06, 20F05, , 20F50 13. CMB 2012 (vol 57 pp. 303) Octonion Algebras over Rings are not Determined by their Norms Answering a question of H. Petersson, we provide a class of examples of pair of octonion algebras over a ring having isometric norms. Keywords:octonion algebras, torsors, descent Categories:14L24, 20G41 14. CMB 2012 (vol 56 pp. 881) Free Groups Generated by Two Heisenberg Translations In this paper, we will discuss the groups generated by two Heisenberg translations of $\mathbf{PU}(2,1)$ and determine when they are free. Keywords:free group, Heisenberg group, complex triangle group Categories:30F40, 22E40, 20H10 15. CMB 2012 (vol 57 pp. 97) Rationality and the Jordan-Gatti-Viniberghi decomposition We verify our earlier conjecture and use it to prove that the semisimple parts of the rational Jordan-Kac-Vinberg decompositions of a rational vector all lie in a single rational orbit. Keywords:reductive group, $G$-module, Jordan decomposition, orbit closure, rationality Categories:20G15, 14L24 16. CMB 2012 (vol 57 pp. 424) A Note on Amenability of Locally Compact Quantum Groups In this short note we introduce a notion called ``quantum injectivity'' of locally compact quantum groups, and prove that it is equivalent to amenability of the dual. Particularly, this provides a new characterization of amenability of locally compact groups. Keywords:amenability, conditional expectation, injectivity, locally compact quantum group, quantum injectivity Categories:20G42, 22D25, 46L89 17. CMB 2012 (vol 57 pp. 132) Twisted Conjugacy Classes in Abelian Extensions of Certain Linear Groups Given a group automorphism $\phi:\Gamma\longrightarrow \Gamma$, one has an action of $\Gamma$ on itself by $\phi$-twisted conjugacy, namely, $g.x=gx\phi(g^{-1})$. The orbits of this action are called $\phi$-twisted conjugacy classes. One says that $\Gamma$ has the $R_\infty$-property if there are infinitely many $\phi$-twisted conjugacy classes for every automorphism $\phi$ of $\Gamma$. In this paper we show that $\operatorname{SL}(n,\mathbb{Z})$ and its congruence subgroups have the $R_\infty$-property. Further we show that any (countable) abelian extension of $\Gamma$ has the $R_\infty$-property where $\Gamma$ is a torsion free non-elementary hyperbolic group, or $\operatorname{SL}(n,\mathbb{Z}), \operatorname{Sp}(2n,\mathbb{Z})$ or a principal congruence subgroup of $\ operatorname{SL}(n,\mathbb{Z})$ or the fundamental group of a complete Riemannian manifold of constant negative curvature. Keywords:twisted conjugacy classes, hyperbolic groups, lattices in Lie groups 18. CMB 2012 (vol 56 pp. 630) Inverse Semigroups and Sheu's Groupoid for the Odd Dimensional Quantum Spheres In this paper, we give a different proof of the fact that the odd dimensional quantum spheres are groupoid $C^{*}$-algebras. We show that the $C^{*}$-algebra $C(S_{q}^{2\ell+1})$ is generated by an inverse semigroup $T$ of partial isometries. We show that the groupoid $\mathcal{G}_{tight}$ associated with the inverse semigroup $T$ by Exel is exactly the same as the groupoid considered by Keywords:inverse semigroups, groupoids, odd dimensional quantum spheres Categories:46L99, 20M18 19. CMB 2011 (vol 55 pp. 783) Products and Direct Sums in Locally Convex Cones In this paper we define lower, upper, and symmetric completeness and discuss closure of the sets in product and direct sums. In particular, we introduce suitable bases for these topologies, which leads us to investigate completeness of the direct sum and its components. Some results obtained about $X$-topologies and polars of the neighborhoods. Keywords:product and direct sum, duality, locally convex cone Categories:20K25, 46A30, 46A20 20. CMB 2011 (vol 56 pp. 395) Coessential Abelianization Morphisms in the Category of Groups An epimorphism $\phi\colon G\to H$ of groups, where $G$ has rank $n$, is called coessential if every (ordered) generating $n$-tuple of $H$ can be lifted along $\phi$ to a generating $n$-tuple for $G$. We discuss this property in the context of the category of groups, and establish a criterion for such a group $G$ to have the property that its abelianization epimorphism $G\to G/[G,G]$, where $[G,G]$ is the commutator subgroup, is coessential. We give an example of a family of 2-generator groups whose abelianization epimorphism is not coessential. This family also provides counterexamples to the generalized Andrews--Curtis conjecture. Keywords:coessential epimorphism, Nielsen transformations, Andrew-Curtis transformations Categories:20F05, 20F99, 20J15 21. CMB 2011 (vol 56 pp. 272) On Super Weakly Compact Convex Sets and Representation of the Dual of the Normed Semigroup They Generate In this note, we first give a characterization of super weakly compact convex sets of a Banach space $X$: a closed bounded convex set $K\subset X$ is super weakly compact if and only if there exists a $w^*$ lower semicontinuous seminorm $p$ with $p\geq\sigma_K\equiv\sup_{x\in K}\langle\,\cdot\,,x\rangle$ such that $p^2$ is uniformly Fréchet differentiable on each bounded set of $X^*$. Then we present a representation theorem for the dual of the semigroup $\textrm{swcc}(X)$ consisting of all the nonempty super weakly compact convex sets of the space $X$. Keywords:super weakly compact set, dual of normed semigroup, uniform Fréchet differentiability, representation Categories:20M30, 46B10, 46B20, 46E15, 46J10, 49J50 22. CMB 2011 (vol 56 pp. 13) Ordering the Representations of $S_n$ Using the Interchange Process Inspired by Aldous' conjecture for the spectral gap of the interchange process and its recent resolution by Caputo, Liggett, and Richthammer, we define an associated order $\prec$ on the irreducible representations of $S_n$. Aldous' conjecture is equivalent to certain representations being comparable in this order, and hence determining the ``Aldous order'' completely is a generalized question. We show a few additional entries for this order. Keywords:Aldous' conjecture, interchange process, symmetric group, representations Categories:82C22, 60B15, 43A65, 20B30, 60J27, 60K35 23. CMB 2011 (vol 55 pp. 673) Multiplicity Free Jacquet Modules Let $F$ be a non-Archimedean local field or a finite field. Let $n$ be a natural number and $k$ be $1$ or $2$. Consider $G:=\operatorname{GL}_{n+k}(F)$ and let $M:=\operatorname{GL}_n(F) \times \ operatorname{GL}_k(F)\lt G$ be a maximal Levi subgroup. Let $U\lt G$ be the corresponding unipotent subgroup and let $P=MU$ be the corresponding parabolic subgroup. Let $J:=J_M^G: \mathcal{M}(G) \ to \mathcal{M}(M)$ be the Jacquet functor, i.e., the functor of coinvariants with respect to $U$. In this paper we prove that $J$ is a multiplicity free functor, i.e., $\dim \operatorname{Hom}_M(J (\pi),\rho)\leq 1$, for any irreducible representations $\pi$ of $G$ and $\rho$ of $M$. We adapt the classical method of Gelfand and Kazhdan, which proves the ``multiplicity free" property of certain representations to prove the ``multiplicity free" property of certain functors. At the end we discuss whether other Jacquet functors are multiplicity free. Keywords:multiplicity one, Gelfand pair, invariant distribution, finite group Categories:20G05, 20C30, 20C33, 46F10, 47A67 24. CMB 2011 (vol 54 pp. 654) Norm One Idempotent $cb$-Multipliers with Applications to the Fourier Algebra in the $cb$-Multiplier Norm For a locally compact group $G$, let $A(G)$ be its Fourier algebra, let $M_{cb}A(G)$ denote the completely bounded multipliers of $A(G)$, and let $A_{\mathit{Mcb}}(G)$ stand for the closure of $A (G)$ in $M_{cb}A(G)$. We characterize the norm one idempotents in $M_{cb}A(G)$: the indicator function of a set $E \subset G$ is a norm one idempotent in $M_{cb}A(G)$ if and only if $E$ is a coset of an open subgroup of $G$. As applications, we describe the closed ideals of $A_{\mathit{Mcb}}(G)$ with an approximate identity bounded by $1$, and we characterize those $G$ for which $A_{\mathit {Mcb}}(G)$ is $1$-amenable in the sense of B. E. Johnson. (We can even slightly relax the norm bounds.) Keywords:amenability, bounded approximate identity, $cb$-multiplier norm, Fourier algebra, norm one idempotent Categories:43A22, 20E05, 43A30, 46J10, 46J40, 46L07, 47L25 25. CMB 2011 (vol 55 pp. 48) Freyd's Generating Hypothesis for Groups with Periodic Cohomology Let $G$ be a finite group, and let $k$ be a field whose characteristic $p$ divides the order of $G$. Freyd's generating hypothesis for the stable module category of $G$ is the statement that a map between finite-dimensional $kG$-modules in the thick subcategory generated by $k$ factors through a projective if the induced map on Tate cohomology is trivial. We show that if $G$ has periodic cohomology, then the generating hypothesis holds if and only if the Sylow $p$-subgroup of $G$ is $C_2$ or $C_3$. We also give some other conditions that are equivalent to the GH for groups with periodic cohomology. Keywords:Tate cohomology, generating hypothesis, stable module category, ghost map, principal block, thick subcategory, periodic cohomology Categories:20C20, 20J06, 55P42
{"url":"http://cms.math.ca/cmb/msc/20","timestamp":"2014-04-18T21:04:46Z","content_type":null,"content_length":"66044","record_id":"<urn:uuid:9800bf95-646f-4890-95ab-eaca563dbc00>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
The Singularity Is Here In Chess March 19, 2012 Chess machines can already beat all humans Kenneth Regan, the same as our Ken, is besides many things, an international chess master. He combines his strong understanding of theory, of computing, and of chess. This combination is almost unique in the world and has led him to work on many interesting questions concerning chess. Today I am proud to announce that Ken’s work on chess is highlighted in the New York Times Tuesday Science section. Chess Cheating Players can use computers to cheat at chess today—even at the world championship level. It is believed that programs that run on laptops can play way above all human players today, and the gap is growing as the programs get better and the laptops get faster and have more memory. A player was even caught red-handed having consulted a program on his smartphone during the final round of last June’s German Championship. Of course tournament officials are always on guard to see if anything is amiss. But some cheating is alleged to have escaped human surveillance. This is where Ken enters the fray. Beyond Cheating Ken’s work started as a method to indirectly discover whether or not a player cheated by using a computer. The central idea was simple: look to see if there was more agreement with the players moves and the program than would be statistically reasonable. But it is not simple. Comparing human play with machine play raised many questions that went well beyond the narrow goal of detecting cheating. What Ken and his co-authors have done in the last few years is study the record of chess games, not with the goal of just cheating detection, but to understand better how humans make complex decisions under pressure. This work is discussed in the Times article and of course in technical papers that Ken has written with his co-authors Guy Haworth and Giuseppe DiFatta and Bartlomiej Macieja. Here are the two most recent papers: • K. Regan and G. Haworth, “Intrinsic Chess Ratings.” Proceedings of AAAI 2011, San Francisco, August 2011. • K. Regan, B. Macieja and G. Haworth, “Understanding Distributions of Chess Performances,” to appear in the proceedings of the 13th ICGA conference on Advances in Computer Games, Tilburg, Netherlands, November 2011. I will just summarize what Ken has done by quoting him: I have created a statistical model of decision making in chess. As input the model takes (a) deep-and-broad computer analysis of chess positions, and (b) parameters modeling the skill level and attributes of a human (or cyber) player. It outputs (c) estimated probabilities for every move in every position by such a player, and (d) projected error bars on associated statistics, which appear to be within a factor of 1.15 (best-move choice frequency) to 1.4 (error frequency) of “true” in field tests done to date. The former error-bar test was first reported in our blog here. This work is based on over 20 million pages (figuring 2K/page) of data on human decision-making under pressure. It is significant that it’s all from actual competition, not a simulation where there can be nagging doubt about true incentive as in some economics/social-choice studies. Ken and student helpers have written Perl code to collate the data and C++ code to analyze it, almost 200 dense single-spaced pages of code. In my opinion Ken’s work is quite non-trivial and could have applications to other human decision problems. In any event Ken has been working on it for the last several years. And now his work is good enough to be used in actual cheating cases and more. This is in the Times article. Way to go Ken. Open Problems Will a time come soon when we will “cheat” in doing mathematics by using a computer? 1. March 19, 2012 11:19 pm “Will a time come soon when we will “cheat” in doing mathematics by using a computer?” Aren’t we there already? At the problem solving level, we use tools such as maxima, matlab and mathematica to do things faster (and I am not talking about numerical analysis per se but more about letting the computer do the algebra/calculus for us). And there are also the people who work in automated theorem proving. :) Does it matter if we “cheat” a little, if that leads to new insights? □ March 19, 2012 11:43 pm Moral issues associated to medical “cheating” are complex… but as a purely factual matter, it is beyond doubt the case, that everyone in the STEM, chess, and athletic communities increasingly competes / collaborates / teaches / learns from / is evaluated by colleagues who are doping. ☆ March 19, 2012 11:47 pm I see your point. Thanks for the nice reference. ☆ March 19, 2012 11:55 pm A review of the literature and a summary ethical guidance for physician-patient decision-making is here. But what are a teacher’s responsibilities, to students who inquire: “Many other students dope … should I / must I dope too?” These issues are far tougher for drug enhancement, than for computer enhancement. 2. March 19, 2012 11:22 pm Was it Kasparov who suggested generalizing Chess by randomizing the placement of pieces in the first row? There are also variants on 10×10 boards, “fairy” chess pieces like the Knightrider, Grasshopper, Camel, etc. Do you believe that moving to these variants and destroying known openings and endgames would put humans ahead of computers again? Or are computers getting good enough (or at least fast enough) to handle these variants as well? □ March 19, 2012 11:30 pm “Do you believe that moving to these variants and destroying known openings and endgames would put humans ahead of computers again?” In my opinion, humans would be on top this way … but only for a little while, until the chess programmers catch up with those too. ☆ March 20, 2012 12:09 am It is an interesting question whether humans do better in the long run in these randomized games. One advantage of a computer would be simply avoiding rookie mistakes, like letting your Grasshopper and your Camel get forked by a Goblin. And humans would be deprived of opening libraries and end-game analysis just like the computers. Although I guess if the rules varied wildly enough, then computers might start having a lot of trouble just with static analyses of position, like in Go. □ March 20, 2012 10:09 am Thanks everyone. I’m on record at Hans Bodlaender’s great Chess Variants website as favoring a “non-random” version of Fischer’s chess, between it and David Bronstein’s proposal, here. I believe something like this will need to be adopted by 2040 or 2050. Computers already seem to be equally imposing at Fischer’s chess. My one attempt so far to make a chess-like game that favors human-type thinking is to start each side off with 8 “tandem pawns”. A tandem pawn moves and captures like an ordinary pawn, and can also decouple itself by moving just one of the two pawns in it. Pawns cannot (re-)couple back into tandems—thus for every standard chess position, the game rules are the same. I’m considering also having one special “Rocket Move” where on any square in ranks 2-6 (not just the 2nd rank), one pawn of a tandem may shoot forward 2 squares while the other one “recoils” backwards one square, even promoting if the tandem is on the 6th rank. This would allow having pawns on the first rank, without disturbing the property of being a “conservative extension” with regard to legal positions in standard chess. The idea is to emphasize pawn structure and make plans longer-term, which I take to favor humans. ☆ March 26, 2012 12:10 am That sounds fun! Have you played it much? Does it feel strategically very different from conventional chess? ☆ April 3, 2012 2:51 pm Have you looked into Arimaa? ☆ April 3, 2012 3:04 pm Not personally (Arimaa), beyond one article describing the rules and strategy, though I know it was conceived expressly to be a game where humans could compete with computers. Do we? □ April 12, 2012 9:48 am If you want to have a chance against computers just switch to Go (baduk, weichi). 3. March 20, 2012 12:10 am I’m okey with all that,cause playing politics is the greatest cheater in human history ! 4. March 20, 2012 3:33 am Actually, I personally already “cheated” in doing mathematics, if it is cheating at all. In a nutshell, I had a conjecture that some function in a high-dimensional combinatorial space is smooth in that sense that from any point we can go to a global minimum monotonously decreasing the function. The conjecture came from a computer simulation: whatever was the starting point, a random walk forced not to increase the function always ended in a global minimum. What is more interesting, when we wanted to prove that the conjecture is true, we asked the computer to show such monotonously decreasing paths for some cases that we found the hardest ones. By analyzing these paths we got the clue how to prove the conjecture, what is now a theorem. Is it a cheating? 5. March 20, 2012 3:57 am Chess is a game with perfect information, this means that there is optimal strategy solution. In contrast with automated theorem proving which proving is not a game with perfect information (in general case). 6. March 20, 2012 5:11 am In mathematics, what is mainly considered as “cheating” is not trying to have a computer producing a proof but rather trying to understand “what is going on” without proving, based on heuristic arguments, numerical calculations and other computer experimentation, and even by setting an array of conjectures and meta-conjectures. This is a source of tension between pure and applied mathematics and also at times with other scientific areas that uses mathematical formalism. Congratulations, Ken, for this recognition. Using statistics to detect various kind of cheating, and identifying human imperfect decision making (which is a sort of Turing test in reverse) are very exciting areas, and so is, of course, chess, human- and computerized. 7. March 20, 2012 6:02 am That wasn’t Kasparov. That was Fischer. Regarding mathematicians cheating with computer – isn’t use of Mathematica and/or Maple considered as such? Also Shalosh.B.Ekhad is publishing papers with Doron Zeilberger for some time already 8. March 20, 2012 3:10 pm Why not let players use computers? This ups the ante in professional chess to force the top players to develop/acquire better software/hardware to improve their game if they want to remain competitive. This favors people like Ken who deeply understand both the game and the algorithmics of chess. Maybe chess will become like auto racing with the player/driver controlling a computer/ car that gets money from sponsors. Regarding variations that supposedly favor humans, I think machines can come up to speed on those much faster than humans. Humans acquire an advantage from meta-level intuition which derives from experience, but that only comes with time. It also reaches a limit (Kasparav?) imposed by our biology. In other words, machines become competent faster and eventually surpass humans, but there’s a sweet spot when humans tend to dominate. □ March 20, 2012 5:28 pm Allowing players to use computers would take away one of the pleasures a chess spectator has; seeing an expert player’s eyes bulge after blundering away a piece or the whole game. A computer simply isn’t going to forget that a piece is en prise or fail to see a knight fork a few ply ahead because the position is complex. Watching humans + computers play chess is basically the same as watching computers play chess… mostly interesting only to chess programmers. 9. March 21, 2012 1:37 am Regarding the “singularity” in doing mathematics and science, here is a recent article about it (featuring Zeilberger and Gowers on the mathematics part) and also discussing a program ‘Eureka’ for automatic mathematical modeling for scientific phenomena http://www.haaretz.com/weekend/magazine/an-israeli-professor-s-eureqa-moment-1.410881: □ March 22, 2012 8:29 pm Eureqa is cute but hardly brings us any closer to the singularity. □ April 4, 2012 4:01 am Regarding the chess singularity, just this week Chessbase continued that magazine’s longstanding tradition of breaking uniquely interesting stories by reporting the solving of the Kings’s Gambit (a famous chess opening). Rajlich: Busting the King’s Gambit, this time for sure The computer’s tree isn’t especially insightful – it tells you what needs to be played in each position, but does not tell you why. “Because it is in my database of positions” is not an answer a human player is seeking. But you can try out lines to your heart’s content: after 3.Be2, Black has 30 legal replies, and 27 of them lead to a draw. As the article reports, plans are now underway to use the spare cycles of Google’s server farm to solve chess completely during the coming year, similar to the way Jonathan Schaeffer and his colleagues recently solved the game of checkers with the search engine called Chinook. Google expects the search to be complete by April 1, 2013. Kudos to author Vasik Rajlich for his incredible account. ☆ April 11, 2012 3:37 pm Heh, I just read your link and was fooled by it, very clever April’s fool joke (as is explained by a later chessbase article). 10. March 21, 2012 7:22 am We’ve already been using computers to find proofs: http://en.wikipedia.org/wiki/Four_color_theorem 11. March 23, 2012 9:07 am hey Dick and Ken, when can we expect you guys to write on the late Turing award winner and his work? □ March 23, 2012 9:08 am ☆ March 23, 2012 10:26 am We congratulated him at the end of this post, and have discussed doing more. 12. September 4, 2012 2:33 pm I do not think much thought should be given to computers being better than humans in chess. A computer does one particular task well, and all computers are invented by humans. Also we don’t seem to mind that machines can run faster than Usain Bolt, or that gorillas can beat up any American football player. So why bother if something else is better than humans at a particular task. Recent Comments John Sidles on Multiple-Credit Tests KWRegan on Multiple-Credit Tests John Sidles on Multiple-Credit Tests John Sidles on Multiple-Credit Tests Leonid Gurvits on Counting Is Sometimes Eas… Cristopher Moore on Multiple-Credit Tests Multiple-Credit Test… on Wait Wait… Don’t F… Amanda on Counting Is Sometimes Eas… matrix 15 year anniv… on The Evil Genius Phil on Wait Wait… Don’t F… Sam Slogren on Counting Is Sometimes Eas… Dustin Yoder on Can We Solve Chess One Da… Amir Ben-Amram on Counting Is Sometimes Eas… Istvan on Counting Is Sometimes Eas… Istvan on Counting Is Sometimes Eas…
{"url":"http://rjlipton.wordpress.com/2012/03/19/the-singularity-is-here-in-chess/","timestamp":"2014-04-16T07:22:30Z","content_type":null,"content_length":"112835","record_id":"<urn:uuid:e7ece158-fee1-4e90-8e7a-5dc78878246d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
A bug or design error that limits a program's capabilities, and which is sufficiently egregious that nobody can quite work up enough nerve to describe it as a feature. Often used (especially by marketroid types) to make it sound as though some crippling bogosity had been intended by the designers all along, or was forced upon them by arcane technical constraints of a nature no mere user could possibly comprehend (these claims are almost invariably false). Old-time hacker Joseph M. Newcomer advises that whenever choosing a quantifiable but arbitrary restriction, you should make it either a power of 2 or a power of 2 minus 1. If you impose a limit of 17 items in a list, everyone will know it is a random number - on the other hand, a limit of 15 or 16 suggests some deep reason (involving 0- or 1-based indexing in binary) and you will get less flamage for it. Limits which are round numbers in base 10 are always especially suspect. Try this search on Wikipedia, OneLook, Google Nearby terms: resource fork « Resource Reservation Protocol « Restricted EPL « restriction » Restructured EXtended eXecutor » restructuring » retcon Copyright Denis Howe 1985
{"url":"http://foldoc.org/restriction","timestamp":"2014-04-17T00:51:45Z","content_type":null,"content_length":"5545","record_id":"<urn:uuid:e7d1d507-4dfe-4ce2-8a4e-77b9220b3515>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Argo, IL Algebra 2 Tutor Find an Argo, IL Algebra 2 Tutor ...They seem to march on, concerned not with whether they have left you far behind. Rather, I find it much more efficient to aide learning with conversation, marching side by side and never to lead too far ahead. I do my best to pause and retrace our route so that when you inevitably ask the same ... 7 Subjects: including algebra 2, calculus, physics, geometry ...I began tutoring my cousin who was struggling in math while I was still in high school. I found then that I have a natural ability to break concepts down to those who might tend to struggle with the challenging mathematics content. Over the past few years I have worked with students ranging from elementary to college-age as well as non-traditional students. 20 Subjects: including algebra 2, physics, SAT math, trigonometry Hi! I attended Dominican University and am majoring in Mathematics with a concentration of Secondary Education. I plan to get my certificate in Special Education as well. 19 Subjects: including algebra 2, reading, calculus, geometry I am certified math teacher. Currently, I work as a substitute teacher at Elmwood Park School District and Morton High Schools in Cicero. I have been tutoring students since 2008 and preparing them for ACT. I have BA in Mathematics and Secondary Education from Northeastern Illinois University. 12 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I enjoy helping students understand the subject and realize that Math can be fun and not stressful.Algebra 1 is the basis of all other Math courses in the future and is used in many professions. Topics include: simplifying expressions, algebraic notation, number systems, understanding and solvin... 11 Subjects: including algebra 2, calculus, geometry, algebra 1 Related Argo, IL Tutors Argo, IL Accounting Tutors Argo, IL ACT Tutors Argo, IL Algebra Tutors Argo, IL Algebra 2 Tutors Argo, IL Calculus Tutors Argo, IL Geometry Tutors Argo, IL Math Tutors Argo, IL Prealgebra Tutors Argo, IL Precalculus Tutors Argo, IL SAT Tutors Argo, IL SAT Math Tutors Argo, IL Science Tutors Argo, IL Statistics Tutors Argo, IL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Bedford Park algebra 2 Tutors Bridgeview algebra 2 Tutors Brookfield, IL algebra 2 Tutors Countryside, IL algebra 2 Tutors Forest View, IL algebra 2 Tutors Hodgkins, IL algebra 2 Tutors Justice, IL algebra 2 Tutors La Grange Park algebra 2 Tutors Lyons, IL algebra 2 Tutors Mc Cook, IL algebra 2 Tutors Mccook, IL algebra 2 Tutors Riverside, IL algebra 2 Tutors Stickney, IL algebra 2 Tutors Summit Argo algebra 2 Tutors Summit, IL algebra 2 Tutors
{"url":"http://www.purplemath.com/Argo_IL_Algebra_2_tutors.php","timestamp":"2014-04-21T14:54:13Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:9525b5d9-5e13-497e-ab65-8e3658b1b2f9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex modulus of Z[sqrt(-10)] March 5th 2012, 01:48 AM #1 Junior Member Mar 2011 Complex modulus of Z[sqrt(-10)] My lecture notes has the following. Consider the integral domain $\mathbb{Z}[\sqrt{-10}] = \{ x + y \sqrt{-10} : x,y \in \mathbb{Z} \}$ and the equation $7 = (x + y\sqrt{-10}) (u + v \sqrt{-10})$ for some $x,y,u,v \in \mathbb{Z}$. Taking the modulus (in $\mathbb{C}$) of both sides and squaring we get $49 = (x^2 + 10 y^2) (u^2 + 10 v^2)$. My question is: how do we get the above equation by taking modulus in $\mathbb{C}$? Can anyone pls show me the steps? Re: Complex modulus of Z[sqrt(-10)] I am sorry for this dumb question, I haven't played with complex numbers for a while. $x + y\sqrt{-10} = x + i y \sqrt{10}$ March 5th 2012, 01:53 AM #2 Junior Member Mar 2011
{"url":"http://mathhelpforum.com/number-theory/195617-complex-modulus-z-sqrt-10-a.html","timestamp":"2014-04-20T02:16:33Z","content_type":null,"content_length":"32719","record_id":"<urn:uuid:c8c15ee5-698b-4abb-9706-6dac36d178aa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
12.6.4 Self-labelling Next: 12.6.5 Global Equivalencing Up: 12.6 Cluster Algorithms for Previous: 12.6.3 Parallel Cluster Algorithms We shall refer to this algorithm as ``self-labelling,'' since each site figures out which cluster it is in by itself from local information. This method has also been referred to as ``local label propagation'' [Brower:91a], [Flanigan:92a]. We begin by assigning each site, i, a unique cluster label, n, which has a different cluster label, However, the SIMD nature of these computers leads to very poor load balancing. Most processors end up waiting for the few in the largest cluster which are the last to finish. We implemented this on the AMT DAP and obtained only about 20% efficiency. We can improve this method on a MIMD machine by using a faster sequential algorithm, such as ``ants in the labyrinth,'' to label the clusters in the sublattice on each processor, and then just use self-labelling on the sites at the edges of each processor to eventually arrive at the global cluster labels [Baillie:91a], [Coddington:90a], [Flanigan:92a]. The number of steps required to do the self-labelling will depend on the largest cluster which, at the phase transition, will generally span the entire lattice. The number of self-labelling steps will therefore be of the order of the maximum distance between processors, which for a square array of P processors is just L for an L is substantially greater than the number of processors, we can expect to obtain a reasonable speedup. Of course, this algorithm suffers from the same type of load imbalance as the SIMD version. However, in this case, it is much less severe since most of the work is done with ``ants in the labyrinth,'' which is well load balanced. The speedups obtained on the Symult 2010, for a variety of lattice sizes, are shown in Figure 12.27. The dashed line indicates perfect speedup (i.e., 100% efficiency). The lattice sizes for which we actually need large numbers of processors are of the order of Figure 12.27: Speedups for Self-Labelling Algorithm Next: 12.6.5 Global Equivalencing Up: 12.6 Cluster Algorithms for Previous: 12.6.3 Parallel Cluster Algorithms Guy Robinson Wed Mar 1 10:19:35 EST 1995
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node294.html","timestamp":"2014-04-19T06:55:02Z","content_type":null,"content_length":"6572","record_id":"<urn:uuid:0ccded82-6102-4220-a8b5-eafc0778a35b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Topological space - definition and basic concepts - ProvenMath - Apronus.com TOPOLOGICAL SPACE Definition T.1 - Topological Space (X,G) is a topological space if and only if the following conditions hold. T.1.A0 X, G are sets and G c P(X) T.1.A1 O :- G and X :- G T.1.A2 /\(A,B:-G) AnB :- G T.1.A3 /\(McG) u(M) :- G Definition T.2 - Open Set Let (X,G) be a topological space. We define that A is an open subset of the topological space (X,G) if and only if A :- G. Remark T.3 Whenever the context is clear we will simply write "A is an open set" or "A is open". Definition T.4 - Interior Let (X,G) be a topological space and let A c X. We define that int(A) = u({U | U c A and U is open}). Theorem T.5 If (X,G) is a topological space and A c X then int(A) c A. Take any x:-int(A). We get an open set U such that x:-U and U c A. Hence x:-A. We showed that int(A) c A. Theorem T.6 If (X,G) is a topological space and A c X then int(A) is open. Let M = {U | U c A and U is open}. Notice that M c G. By T.1.A3 u(M) :- G. But int(A) = u(M). So int(A) :- G. Theorem T.7 If (X,G) is a topological space and A c X then A is open <=> A = int(A). Since A = int(A), A is open by Theorem T.6. Take any x:-A. Since A is open, x:-u({E | E c A and E is open}). So x:-int(A). We showed that A c int(A). By Theorem T.5 int(A) c A, so we have that A = int(A). Theorem T.8 If (X,G) is a topological space and A,B c X then A c B => int(A) c int(B). By Theorem T.5 we have int(A) c A c B. Take any x:-int(A). By Theorem T.6 int(A) is open and we have int(A) c B. Hence x:-u({E | E c B and E is open}). So x:-int(B). We showed that int(A) c int(B). Theorem T.9 If (X,G) is a topological space and A,B c X then int(A n B) = int(A) n int(B). Take any x:-int(A n B). We get an open set U such that x:-U and U c A n B. Then U c A and U c B. Hence x:-u({E | E c A and E is open}). So x:-int(A). Analogously, x:-int(B). Thus x :- int(A) n int(B). We showed that int(A n B) c int(A) n int(B). Take any x :- int(A) n int(B). We have an open set U such that x:-U and U c A. We have an open set V such that x:-V and V c B. By T.1.A2 U n V is open. Since x :- U n V, and U n V c A n B, we have that x :- int(A n B). We showed that int(A) n int(B) c int(A n B). Definition T.10 - Closed Set Let (X,G) be a topological space. We define that A is a closed subset of the topological space (X,G) if and only if A c X and X\A :- G. Remark T.11 Whenever the context is clear we will simply write "A is a closed set" or "A is closed". Theorem T.12 If (X,G) is a topological space then O and X are closed. By T.1.A01 O:-G and X:-G. Hence X\O is closed and X\X is closed. Thus X and O are closed. Theorem T.13 If (X,G) is a topological space and A,B are closed then A u B is closed. We have that X\A :- G and X\B :- G. By T.1.A2 X\A n X\B :- G. Hence X \ (A u B) = X\A n X\B :- G. So A u B is closed. Theorem T.14 If (X,G) is a topological space, M c {E : E is closed} and M!=0 then n(M) is closed. Notice that n(M) = n({E | E:-M}). By De Morgan's Law Theorem S.IS.3 we have that X \ n({E | E:-M}) = u({X\E | E:-M}). Now, {X\E | E:-M} = {X\E | E is closed and E:-M} = {X\E : X\E is open and E:-M} = {U : U is open and X\U:-M} c G. Hence by T.1.A3 u({X\E | E:-M}) = u({U : U is open and X\U:-M}) :- G. So X\n(M) = X\n({E | E:-M}) = u({X\E | E:-M}) :- G. Thus n(M) is closed. Definition T.15 - Closure Let (X,G) be a topological space and let A c X. We define that clo(A) = n({F | A c F and F is closed}). Theorem T.15.1 If (X,G) is a topological space, A c X and x:-X then x:-clo(A) <=> /\(F) (AcF and F is closed => x:-F). Let M = {F | AcF and F is closed}. Since X is closed, X:-M, and thus M!=O. By Theorem S.IS.5 we have that n(M) = {x:-X | /\(A:-M) x:-A}. Since n(M) = clo(A) we have that x:-clo(A) <=> /\(A:-M) x:-A. Hence x:-clo(A) <=> /\(F) (F:-M => x:-F). Thus x:-clo(A) <=> /\(F) (AcF and F is closed => x:-F). Theorem T.16 If (X,G) is a topological space and A c X then A c clo(A). Take any x:-A. Take any F such that F is closed and A c F. Then x:-F. We showed that /\(F) (F is closed and A c F => x:-F ). Hence by Theorem T.15.1 x:-clo(A). We showed that A c clo(A). Theorem T.17 If (X,G) is a topological space and A c X then clo(A) is closed. Let M = {F | A c F and F is closed}. Since X:-M, by Theorem T.14 n(M) is closed. Since clo(A) = n(M), clo(A) is closed. Theorem T.18 If (X,G) is a topological space and A c X then A is closed <=> A = clo(A) Since A = clo(A), A is closed by Theorem T.17. Take any x:-clo(A). Then /\(E) (E is closed and AcE => x:-E). Since A is closed, x:-A. We showed that clo(A) c A. By Theorem T.16, A c clo(A) so we have that A = clo(A). Theorem T.19 If (X,G) is a topological space and A c X then X \ clo(A) = int(X \ A). By De Morgan's Law Theorem S.IS.2 we have that X\u({U | U c X\A and U is open}) = n({X\U | U c X\A and U is open}). Follow the calculations below. clo(A) = n({F | A c F and F is closed}) = n({F | X\F c X\A and X\F is open}) = n({X\U | U c X\A and U is open}) = X \ u({U | U c X\A and U is open}) = X \ int(X\A). Hence int(X\A) = X \ clo(A). Theorem T.20 If (X,G) is a topological space and A c X then X \ int(A) = clo(X\A). By Theorem T.19 we have X \ clo(X\A) = int(X\(X\A)) = int(A). Hence X \ int(A) = clo(X\A). Theorem T.21 If (X,G) is a topological space and A,B c X then A c B => clo(A) c clo(B). Since A c B, we have X\B c X\A. By Theorem T.8 we have that int(X\B) c int(X\A). By Theorem T.19 we have that int(X\B) = X \ clo(B) and int(X\A) = X \ clo(A). Hence X \ clo(B) c X \ clo(A). Thus clo(A) c clo(B). Theorem T.22 If (X,G) is a topological space and A,B c X then clo(A u B) = clo(A) u clo(B). Apply Theorem T.19 and Theorem T.9 X \ clo(A u B) = int(X\(AuB)) = int(X\A n X\B) = int(X\A) n int(X\B) = X\clo(A) n X\clo(B) = X \ (clo(A) u clo(B)). Hence clo(A u B) = clo(A) u clo(B).
{"url":"http://www.apronus.com/provenmath/top_space.htm","timestamp":"2014-04-20T13:19:27Z","content_type":null,"content_length":"8184","record_id":"<urn:uuid:dff5e5ff-37a0-41cb-9085-bd17425d9e93>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
IVRITEX Archives -- November 2009 (#1) ┃ ┃ Date: Thu, 19 Nov 2009 00:02:02 +0200 Reply-To: Hebrew TeX list <[log in to unmask]> Sender: Hebrew TeX list <[log in to unmask]> From: Zaar Hai <[log in to unmask]> Subject: Correct brackets in \eqref Content-Type: text/plain; charset=UTF-8 Good day dear list. I'm using LaTeX on ubuntu to write one-page Hebrew doc. I know there is exist a problem in code like this: $$\label{eq:1} 3y^{2}y'+x^3+xy^3=0$$ Then using \eqref{eq:1} produces )1( instead of (1). The proposed solution is to use \L{eqref{eq:1}}, but this is not so pretty since reference are rendered using English font and not Hebrew one. Are there any better workaround for this? Back to: Top of message | Previous page | Main IVRITEX page ┃LISTSERV.TAU.AC.IL ┃ Date: Thu, 19 Nov 2009 00:02:02 +0200 Reply-To: Hebrew TeX list <[log in to unmask]> Sender: Hebrew TeX list <[log in to unmask]> From: Zaar Hai <[log in to unmask]> Subject: Correct brackets in \eqref Content-Type: text/plain; charset=UTF-8 Good day dear list. I'm using LaTeX on ubuntu to write one-page Hebrew doc. I know there is exist a problem in code like this: $$\label{eq:1} 3y^{2}y'+x^3+xy^3=0$$ Then using \eqref{eq:1} produces )1( instead of (1). The proposed solution is to use \L{eqref{eq:1}}, but this is not so pretty since reference are rendered using English font and not Hebrew one. Are there any better workaround for this? Back to: Top of message | Previous page | Main IVRITEX page Date: Thu, 19 Nov 2009 00:02:02 +0200 Reply-To: Hebrew TeX list <[log in to unmask]> Sender: Hebrew TeX list <[log in to unmask]> From: Zaar Hai <[log in to unmask]> Subject: Correct brackets in \ eqref Content-Type: text/plain; charset=UTF-8 Good day dear list. I'm using LaTeX on ubuntu to write one-page Hebrew doc. I know there is exist a problem in code like this: $$\label{eq:1} 3y^{2}y'+x^ 3+xy^3=0$$ Then using \eqref{eq:1} produces )1( instead of (1). The proposed solution is to use \L{eqref{eq:1}}, but this is not so pretty since reference are rendered using English font and not Hebrew one. Are there any better workaround for this? Thanks, -- Zaar Good day dear list. I'm using LaTeX on ubuntu to write one-page Hebrew doc. I know there is exist a problem in code like this: $$\label{eq:1} 3y^{2}y'+x^3+xy^3=0$$ Then using \eqref{eq:1} produces )1 ( instead of (1). The proposed solution is to use \L{eqref{eq:1}}, but this is not so pretty since reference are rendered using English font and not Hebrew one. Are there any better workaround for this? Thanks, -- Zaar
{"url":"http://listserv.tau.ac.il/cgi-bin/wa?A2=ind0911&L=ivritex&T=0&P=69","timestamp":"2014-04-20T03:12:12Z","content_type":null,"content_length":"12399","record_id":"<urn:uuid:8fbab9b1-8ac3-4802-a8a4-dc0fedf13513>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding Electrical Resistance In Series With Bypass Capacitors Using Annular Resistors - Patent 6727780 United States Patent: 6727780 ( 1 of 1 ) United States Patent , &nbsp; et al. April 27, 2004 Adding electrical resistance in series with bypass capacitors using annular A method for achieving a desired value of electrical impedance between conductors of an electrical power distribution structure by electrically coupling multiple bypass capacitors and corresponding electrical resistance elements in series between the conductors. The resistance elements may be annular resistors, and may provide the designer a greater degree of control of the system ESR. The annular resistors may comprise a first terminal, an annular resistor, and a second terminal. The second terminal may be located within the confines of the annular resistor. The annular resistors may be printed onto a conductive plane (e.g. a power plane or a ground plane), or may be a discrete component. Novak; Istvan (Maynard, MA), St.Cyr; Valerie (Lincoln, MA), Freda; Michael C. (Morgan Hill, CA), Tetreault; Merle (Tyngsboro, MA) Sun Microsystems, Inc. (Santa Clara, Appl. No.: October 24, 2001 Current U.S. Class: 333/136 ; 333/172; 333/32 Current International Class: H05K 1/02&nbsp(20060101); H05K 1/16&nbsp(20060101); H05K 1/11&nbsp(20060101); H03H 007/38&nbsp() Field of Search: References Cited [Referenced By] U.S. Patent Documents January 1988 September 1994 Howard et al. June 1996 Tonogai et al. June 1996 February 1997 Howard et al. January 1998 January 1998 Howard et al. July 1999 Chase et al. August 2000 February 2001 Bloom et al. April 2001 April 2001 Novak et al. May 2001 Dunn et al. February 2003 Novak et al. Primary Examiner: Pascal; Robert Assistant Examiner: Takaoka; Dean Attorney, Agent or Firm: Meyertons Hood Kivlin Kowert & Goetzel, P.C. Kivlin; B. Noel What is claimed is: 1. An electrical power distribution structure comprising: a first conductor and a second conductor; a capacitor having a first terminal and a second terminal, the first terminal being electrically coupled to the first conductor; and an annular resistor electrically coupled in series between the second terminal of the capacitor and the second conductor; wherein a sun of a resistance value of the annular resistor and an equivalent series resistance (ESR) value of the capacitor is substantially equal to a required mounted resistance of the capacitor. 2. The electrical power distribution structure as recited in claim 1, wherein the annular resistor includes a first periphery and a second periphery, wherein the first periphery is electrically coupled to the second conductor, and the second periphery is electrically coupled to the second terminal of the capacitor. 3. The electrical power distribution structure as recited in claim 2, wherein the annular resistor is printed on the second conductor. 4. The electrical power distribution structure as recited in claim 1, wherein one of the first conductor and the second conductor is a ground plane and wherein the other conductor is a power plane. 5. The electrical power distribution structure as recited in claim 1, wherein at least one of the first conductor and the second conductor is planar. 6. The electrical power distribution structure as recited in claim 1, wherein the annular resistor is substantially circular in shape. 7. The electrical power distribution structure as recited in claim 2, wherein the first periphery of the annular resistor is an outer periphery, and the second periphery of the annular resistor is an inner periphery. 8. The electrical power distribution structure as recited in claim 1, wherein the required mounted resistance of the capacitor is determined by a formula R.sub.m-req =n.multidot.Z.sub.t, wherein R.sub.m-req is the required mounted resistance of the capacitor, n is a quantity of capacitors in the electrical power distribution structure, and Z.sub.t is a target impedance of the electrical power distribution structure. 9. A method for achieving a target electrical impedance in an electrical power distribution structure, the method comprising: providing a first conductor and a second conductor; electrically coupling a first terminal of a capacitor to the first conductor; selecting an annular resistor such that a sum of a resistance value of the annular resistor and an equivalent series resistance (ESR) value of the capacitor is substantially equal to a required mounted resistance of the capacitor; and electrically coupling an annular resistor in series with a second terminal of the capacitor and the second conductor. 10. The method as recited in claim 9, wherein the first conductor is a ground plane and the second conductor is a power plane. 11. The method as recited in claim 9, wherein at least one of the first conductor and the second conductor is planar. 12. The method as recited in claim 9, wherein the annular resistor is substantially circular in shape. 13. The method as recited in claim 9, wherein the annular resistor is printed on the second conductor. 14. The method as recited in claim 9, wherein the required mounted resistance of the capacitor is determined by a formula R.sub.m-req =n.multidot.Z.sub.t, wherein R.sub.m-req is the required mounted resistance of the capacitor, n is a quantity of capacitors in the electrical power distribution structure, and Z.sub.t is a target impedance of the electrical power distribution structure. 15. An electrical power distribution structure comprising: a first conductor and a second conductor; an annular resistor having a first periphery electrically coupled to the first conductor; an electrically conductive via coupled to a second periphery of the annular resistor; and a capacitor coupled in series with the annular resistor, the annular resistor being interposed between the via and the second conductor; wherein a sum of a resistance value of the annular resistor and an equivalent series resistance (ESR) value of the capacitor is substantially equal to a required mounted resistance of the capacitor. 16. The electrical power distribution structure as recited in claim 15, wherein the annular resistor is formed in a region that is coplanar with the first conductor. 17. The electrical power distribution structure as recited in claim 15, wherein the annular resistor is formed within a void defined by the first conductor. 18. The electrical power distribution structure as recited in claim 15, wherein one of the first conductor and the second conductor is a power plane and the other conductor is a ground plane. 19. The electrical power distribution structure as recited in claim 15, wherein the capacitor has a first terminal and a second terminal, the first terminal being electrically coupled to the second periphery of the annular resistor and the second terminal being electrically coupled to second conductor. 20. The electrical power distribution structure as recited in claim 15, wherein at least one of the first conductor and the second conductor is planar. 21. The electrical power distribution structure as recited in claim 15, wherein the annular resistor is substantially circular in shape. 22. The electrical power distribution structure as recited in claim 15, wherein the first periphery of the annular resistor is an outer periphery and the second periphery of the annular resistor is an inner periphery. 23. The electrical power distribution structure as recited in claim 15, wherein the via is electrically coupled to the second periphery of the annular resistor. 24. The electrical power distribution structure as recited in claim 15, wherein the first periphery of the annular resistor is electrically coupled to the first conductor. 25. The electrical power distribution structure as recited in claim 15, wherein the required mounted resistance of the capacitor is determined by a formula R.sub.m-req =n.multidot.Z.sub.t, wherein R.sub.m-req is the required mounted resistance of the capacitor, n is a quantity of capacitors in the electrical power distribution structure, and Z.sub.t is a target impedance of the electrical power distribution structure. 26. A method for decoupling a power distribution system comprising first and second conductors, the method comprising: placing a first periphery of an annular resistor in contact with the first conductor; placing an electrically conductive via in contact with a second periphery of the annular resistor, wherein a sum of a resistance value of the annular resistor and an equivalent series resistance (ESR) value of the capacitor is substantially equal to a required mounted resistance of the capacitor; and coupling a capacitor in series with the annular resistor, wherein the annular resistor is interposed between the via and the second conductor. 27. The method as recited in claim 26, wherein the annular resistor is formed within a void defined by the first conductor. 28. The method as recited in claim 26, wherein the first conductor is a power plane and the second conductor is a ground plane. 29. The method as recited in claim 26, wherein the capacitor has a first terminal and a second terminal, the first terminal being electrically coupled to the second periphery of the annular resistor and the second terminal being connected to the second conductor. 30. The method as recited in claim 26, wherein at least one of the first conductor and the second conductor is planar. 31. The method as recited in claim 26, wherein the annular resistor is substantially circular in shape. 32. The method as recited in claim 26, wherein the first periphery of the annular resistor is an outer periphery, and the second periphery of the annular resistor is an inner periphery. 33. The method as recited in claim 26, wherein the via is electrically coupled to the second periphery of the annular resistor. 34. The method as recited in claim 26, wherein the first periphery of the annular resistor is electrically coupled to the first conductor. 35. The method as recited in claim 26, wherein the annular resistor is formed in a region that is coplanar with the first conductor. 36. The method as recited in claim 26, wherein the required mounted resistance of the capacitor is determined by a formula R.sub.m-req =n.multidot.Z.sub.t, wherein R.sub.m-req is the required mounted resistance of the capacitor, n is a quantity of capacitors in the electrical power distribution structure, and Z.sub.t is a target impedance of the electrical power distribution structure. Description 1. Field of the Invention This invention relates to electronic systems, and more particularly to electrical interconnecting apparatus forming electrical power distribution structures. 2. Description of the Related Art A power distribution network of a typical printed circuit board (PCB) includes several capacitors coupled between conductors used to convey direct current (d.c.) electrical power voltages and ground conductors. For example, the power distribution network of a digital PCB typically includes a bulk decoupling or "power entry" capacitor located at a point where electrical power enters the PCB from an external power supply. The power distribution network also typically includes a decoupling capacitor positioned near each of several digital switching circuits (e.g., digital integrated circuits coupled to the PCB). The digital switching circuits dissipate electrical power during switching times (e.g., clock pulse transitions). Each decoupling capacitor typically has a capacitance sufficient to supply electrical current to the corresponding switching circuit during switching times such that the d.c. electrical voltage supplied to the switching circuit remains substantially constant. The power entry capacitor may, for example, have a capacitance greater than or equal to the sum of the capacitances of the decoupling capacitors. In addition to supplying electrical current to the corresponding switching circuits during switching times, decoupling capacitors also provide low impedance paths to the ground electrical potential for alternating current (a.c.) voltages. Decoupling capacitors thus shunt or "bypass" unwanted a.c. voltages present on d.c. power trace conductors to the ground electrical potential. For this reason, the terms "decoupling capacitor" and "bypass capacitor" are often used synonymously. As used herein, the term "bypass capacitor" is used to describe any capacitor coupled between a d.c. voltage conductor and a ground conductor, thus providing a low impedance path to the ground electrical potential for a.c. voltages. A typical bypass capacitor is a two-terminal electrical component. FIG. 1 is a diagram of an electrical model 10 of a capacitor (e.g., a bypass capacitor) valid over a range of frequencies including a resonant frequency f.sub.res of the capacitor. Electrical model 10 includes an ideal capacitor, an ideal resistor, and an ideal inductor in series between the two terminals of the capacitor. The ideal capacitor has a value C equal to a capacitance of the capacitor. The ideal resistor has a value equal to an equivalent series resistance (ESR) of the capacitor, and the ideal inductor has a value equal to an equivalent series inductance (ESL) of the capacitor. The series combination of the capacitance (C) and the inductance (ESL) of the capacitor results in series resonance and a resonant frequency f.sub.res given by: ##EQU1## FIG. 2 is a graph of the logarithm of the magnitude of the electrical impedance (Z) between the terminals of electrical model 10 versus the logarithm of frequency f. At frequencies f lower than resonant frequency f.sub.res, the impedance of electrical model 10 is dominated by the capacitance, and the magnitude of Z decreases with increasing frequency f. At the resonant frequency f.sub.res of the capacitor, the magnitude of Z is a minimum and equal to the ESR of the capacitor. Within a range of frequencies centered about resonant frequency f.sub.res, the impedance of electrical model 10 is dominated by the resistance, and the magnitude of Z is substantially equal to the ESR of the capacitor. At frequencies f greater than resonant frequency f.sub.res, the impedance of electrical model 10 is dominated by the inductance, and the magnitude of Z increases with increasing frequency f. When a desired electrical impedance between a d.c. voltage conductor and a ground conductor is less than the ESR of a single capacitor, it is common to couple more than one of the capacitors in parallel between the d.c. voltage conductor and the ground conductor. In this situation, all of the capacitors have substantially the same resonant frequency f.sub.res, and the desired electrical impedance is achieved over a range of frequencies including the resonant frequency f.sub.res. When the desired electrical impedance is to be achieved over a range of frequencies broader than a single capacitor can provide, it is common to couple multiple capacitors having different resonant frequencies between the d.c. voltage conductor and the ground conductor. The ESRs and resonant frequencies of the capacitors are selected such that each of the capacitors achieves the desired electrical impedance over a different portion of the range of frequencies. In parallel combination, the multiple capacitors achieve the desired electrical impedance over the entire range of frequencies. A digital signal alternating between high and low voltage levels includes contributions from a fundamental sinusoidal frequency (i.e., a first harmonic) and integer multiples of the first harmonic. As the rise and fall times of a digital signal decrease, the magnitudes of a greater number of the integer multiples of the first harmonic become significant. As a general rule, the frequency content of a digital signal extends to a frequency equal to the reciprocal of .pi. times the transition time (i.e., rise or fall time) of the signal. For example, a digital signal with a 1 nanosecond transition time has a frequency content extending up to about 318 MHz. All conductors have a certain amount of electrical inductance. The voltage across the inductance of a conductor is directly proportional to the rate of change of current through the conductor. At the high frequencies present in conductors carrying digital signals having short transition times, a significant voltage drop occurs across a conductor having even a small inductance. Transient switching currents flowing through electrical impedances of d.c. power conductors cause power supply voltage perturbations (e.g., power supply "droop" and ground "bounce"). As signal frequencies increase, continuous power supply planes (e.g., power planes and ground planes) having relatively low electrical inductances are being used more and more. The parallel power and ground planes are commonly placed in close proximity to one another in order to further reduce the inductances of the planes. When choosing capacitors for bypassing a power distribution system, a designer may typically specify capacitance of each of the chosen capacitors. However, it may not be possible to specify the resistance and inductance values of the capacitor. Inductance values may depend on the interconnection technology used for the capacitor, and may be influenced somewhat. Resistance values are typically not user definable, and thus, it may be difficult for the designer of the power distribution system to control the ESR. Several methods are presented for achieving a desired value of electrical impedance between conductors of an electrical power distribution structure by electrically coupling multiple bypass capacitors and corresponding electrical resistance elements in series between the conductors. The resistance elements may be annular resistors, and may provide the designer a greater degree of control of the system ESR. The annular resistors may comprise a resistive ring having an outer periphery and an inner periphery. The outer periphery may be considered to be a first terminal, while the inner periphery may be considered to be a second terminal. The outer periphery may be electrically coupled to a conductive plane, such as a power plane, while the inner periphery may be coupled to a terminal of a capacitor. The annular resistors may be printed onto a conductive plane (e.g. a power plane or a ground plane), or may be implemented as discrete components, which may be placed into a void of a conductive plane. The methods include bypass capacitor selection criteria and electrical resistance determination criteria based upon simulation results. An exemplary electrical power distribution structure produced by one of the methods includes at least one pair of parallel planar conductors separated by a dielectric layer, n discrete electrical capacitors, and n electrical resistance elements, where n.gtoreq.2. Each of the n discrete electrical resistance elements is coupled in series with a corresponding one of the n discrete electrical capacitors between the planar conductors. The n capacitors have substantially the same capacitance C, mounted resistance R.sub.m, mounted inductance L.sub.m, and mounted resonant frequency f.sub.m-res. The mounted resistance R.sub.m of each of the n capacitors includes an electrical resistance of the corresponding electrical resistance element. The electrical power distribution structure achieves an electrical impedance Z at the resonant frequency f.sub.m-res of the capacitors. In order to achieve the desired value of electrical impedance, the mounted resistance R.sub.m of each of the n capacitors is substantially equal to (n.multidot.Z). In order to reduce variations in the electrical impedance with frequency, the mounted inductance L.sub.m of each of the n capacitors is less than or equal to (0.2.multidot.n.multidot..mu..sub.0.multidot.h), where .mu..sub.0 is the permeability of free space, and h is a distance between the planar conductors. It is noted that dielectric materials used to form dielectric layers are typically non-magnetic, and thus the relative permeability .mu..sub.r of the dielectric layer is assumed to be unity. The mounted resistance R.sub.m of each of the n capacitors may be, for example, the sum of an equivalent series resistance (ESR) of the capacitor, the electrical resistance of the corresponding electrical resistance element, and the electrical resistances of all conductors coupling the capacitor between the planar conductors. The mounted inductance L.sub.m of each of the n capacitors may be the electrical inductance resulting from the coupling of the capacitor between the planar conductors. For example, each of the n capacitors may have a body. In this situation, the mounted resistance R.sub.m of each of the n capacitors may be the sum of the ESR of the capacitor body, the electrical resistance of the corresponding electrical resistance element, and the electrical resistances of all conductors (e.g., solder lands and vias) coupling the capacitor body between the planar conductors. Similarly, the mounted inductance L.sub.m of each of the n capacitors may be the electrical inductance resulting from the coupling of the capacitor body between the planar conductors. The mounted resonant frequency f.sub.m-res resulting from capacitance C and mounted inductance L.sub.m may be given by: ##EQU2## The n discrete capacitors may or may not be used to suppress electrical resonances between the planar conductors. Where the n discrete capacitors are not used to suppress the electrical resonances, the n discrete capacitors may be located upon, and distributed about, one or more surfaces of the planar conductors. On the other hand, when the n discrete capacitors are used to suppress the electrical resonances, the n discrete capacitors may be positioned along at least a portion of corresponding outer edges of the planar conductors. In this situation, adjacent capacitors may be separated by substantially equal spacing distances. Several embodiments of an electrical power distribution structure are presented including an electrical resistance element coupled in series with a capacitor between a pair of parallel conductive planes separated by a dielectric layer (e.g., between a power plane and a ground plane). In the embodiments, the electrical resistance elements are incorporated in ways which do not appreciably increase physical dimensions of current loops coupling the capacitor between the pair of parallel conductive planes. As a result, the mounted inductance L.sub.m of the capacitor is not changed substantially over a corresponding conventional structure. A first method for achieving a target electrical impedance Z.sub.t in an electrical power distribution structure including a pair of parallel planar conductors separated by a dielectric layer may be useful where bypass capacitors will not be used to suppress plane resonances. In this situation, the bypass capacitors may be distributed about a surface of at least one of the planar conductors. The first method includes determining a required number n of a selected type of discrete electrical capacitor dependent upon an inductance of the electrical power distribution structure L.sub.p and a mounted inductance L.sub.m of a representative one of the selected type of discrete electrical capacitor when electrically coupled between the planar conductors, wherein n.gtoreq.2. The required number n of the selected type of capacitor may be determined using: ##EQU3## The target electrical impedance Z.sub.t is used to determine a required value of mounted resistance R.sub.m-req for the n discrete electrical capacitors. The required value of mounted resistance R.sub.m-req may be determined using: The required number n of the selected type of discrete electrical capacitor may be selected such that each of the n capacitors has an equivalent series resistance (ESR) which is less than the required value of mounted resistance R.sub.m-req. The mounted resistance R.sub.m of a representative one of the n capacitors may be determined when the representative capacitor is coupled between the pair of parallel planar conductors and when the electrical resistance of a corresponding electrical resistance element is zero. The electrical resistance of each of n electrical resistance elements may be determined by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. The n discrete electrical capacitors and the n electrical resistance elements may be electrically coupled between the planar conductors such that each of the n discrete electrical capacitors is coupled in series with a corresponding one of the n electrical resistance elements. The first method may also include determining a separation distance h between the parallel planar conductors required to achieve the target electrical impedance Z.sub.t. The separation distance h may be determined using: ##EQU4## where .di-elect cons..sub.r is the relative permittivity of the dielectric layer and d.sub.p is a distance around an outer perimeter of the electrical power distribution structure. Separation distance h is in milli-inches (hereinafter "mils") when the target electrical impedance Z.sub.t is in ohms and distance d.sub.p is in inches. A thickness t for the dielectric layer may be selected such that the thickness t is less than or equal to the required separation distance h. Thickness t may be used to determine the inductance of the electrical power distribution structure L.sub.p. The inductance of the electrical power distribution structure L.sub.p may be determined using: wherein .mu..sub.0 is the permeability of free space. The type of discrete electrical capacitor may be selected, wherein capacitors of the selected type have at least one substantially identical physical dimension (e.g., a length of the capacitor package between terminals) upon which the mounted inductance of the capacitors is dependent. The physical dimension may be used to determine the mounted inductance L.sub.m of the representative capacitor. A second method for achieving a target electrical impedance Z.sub.t in an electrical power distribution structure including a pair of parallel planar conductors separated by a dielectric layer may be useful where the bypass capacitors will be used to suppress plane resonances. In this situation, at least a portion of the bypass capacitors will be electrically coupled between the planar conductors along an outer edge of the planar conductors. The second method includes determining a first required number n.sub.1 of discrete electrical capacitors dependent upon an inductance of the electrical power distribution structure L.sub.p and a mounted inductance L.sub.m of each of the discrete electrical capacitors when electrically coupled between the planar conductors, where n.sub.1.gtoreq.2. The first required number n.sub.1 of the discrete electrical capacitors may be determined using: ##EQU5## A second required number n.sub.2 of the discrete electrical capacitors is determined dependent upon a distance d.sub.p around an outer perimeter of the electrical power distribution structure (i.e., the parallel planar conductors) and a spacing distance S between adjacent discrete electrical capacitors, where n.sub.2.gtoreq.2. The second required number n.sub.2 of the discrete electrical capacitors may be determined using: ##EQU6## Spacing distance S may be less than or equal to a maximum spacing distance S.sub.max between adjacent electrical capacitors. The electrical power distribution structure may be, for example, part of an electrical interconnecting apparatus, and electrical signals may be conveyed within the electrical interconnecting apparatus. The electrical signals may have an associated frequency range, and maximum spacing distance S.sub.max may be a fraction of a wavelength of a maximum frequency f.sub.max of the frequency range of the electrical signals. Maximum spacing distance S.sub.max may be given by: ##EQU7## wherein c is the speed of light in a vacuum, .di-elect cons..sub.r is the relative permittivity (i.e., the dielectric constant) of the dielectric layer, and f.sub.max is the maximum frequency of the frequency range of the electrical signals. If n.sub.2.gtoreq.n.sub.1, the following steps may be performed. A required value of mounted resistance R.sub.m-req may be determined for n.sub.2 of the discrete electrical capacitors dependent upon the target electrical impedance Z.sub.t. The required value of mounted resistance R.sub.m-req for the n.sub.2 capacitors may be determined using: The number n.sub.2 of the discrete electrical capacitors may be selected wherein each of the n.sub.2 capacitors has an equivalent series resistance (ESR) which is less than the required value of mounted resistance R.sub.m-req. The mounted resistance R.sub.m of a representative one of the n.sub.2 capacitors may be determined when the representative capacitor is coupled between the pair of parallel planar conductors and when the electrical resistance of a corresponding electrical resistance element is zero. The electrical resistance of each of n.sub.2 electrical resistance elements may be determined by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. The n.sub.2 discrete electrical capacitors and the n.sub.2 electrical resistance elements may be electrically coupled between the planar conductors along the outer perimeter of the parallel planar conductors such that each of the n.sub.2 discrete electrical capacitors is coupled in series with a corresponding one of the n.sub.2 electrical resistance elements. The second method may also include the determining of a separation distance h between the parallel planar conductors required to achieve the target electrical impedance Z.sub.t as described above. A thickness t for the dielectric layer may be selected such that the thickness t is less than or equal to the required separation distance h. Thickness t may be used to determine the inductance of the electrical power distribution structure L.sub.p as described above. The type of discrete electrical capacitor may be selected, wherein capacitors of the selected type have at least one substantially identical physical dimension (e.g., a length of the capacitor package between terminals) upon which the mounted inductance of the capacitors is dependent. The physical dimension may be used to determine the mounted inductance L.sub.m of the representative capacitor. If n.sub.1 &gt;n.sub.2, the following steps may be performed. The target electrical impedance Z.sub.t may be used to determine a required value of mounted resistance R.sub.m-req for n.sub.1 of the discrete electrical capacitors. The required value of mounted resistance R.sub.m-req for the n.sub.1 capacitors may be determined using: The number n.sub.1 of the discrete electrical capacitors may be selected, wherein each of the n.sub.1 capacitors has an equivalent series resistance (ESR) which is less than the required value of mounted resistance R.sub.m-req. The mounted resistance R.sub.m of a representative one of the n.sub.1 capacitors may be determined when the representative capacitor is coupled between the pair of parallel planar conductors and when the electrical resistance of a corresponding electrical resistance element is zero. The electrical resistance of each of n.sub.1 electrical resistance elements may be determined by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. The n.sub.1 discrete electrical capacitors and the n.sub.1 electrical resistance elements may be electrically coupled between the planar conductors such that: (i) each of the n.sub.1 discrete electrical capacitors is coupled in series with a corresponding one of the n.sub.1 electrical resistance elements, (ii) n.sub.2 of the discrete electrical capacitors and the corresponding electrical resistance elements are positioned along an outer perimeter of the planar conductors, and (iii) the remaining (n.sub.1 -n.sub.2) capacitors and the corresponding electrical resistance elements are dispersed across a surface of at least one of the planar conductors. Regarding distance d.sub.p around the outer edges (i.e., the outer perimeter) of the electrical power distribution structure, the electrical power distribution structure may have, for example, four sides arranged as two pairs of opposite sides. The sides forming one of the pairs of opposite sides may have equal lengths x, and the other two opposite sides may have equal lengths y. In this situation, the distance d.sub.p around the outer perimeter of the electrical power distribution structure is equal to 2.multidot.(x+y). Other aspects of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which: FIG. 1 is a diagram of an electrical model of a capacitor (e.g., a bypass capacitor) valid over a range of frequencies including a resonant frequency f.sub.res of the capacitor, wherein the electrical model includes an ideal capacitor, an ideal resistor, and an ideal inductor in series between two terminals of the capacitor, and wherein the ideal capacitor has a value C equal to a capacitance of the capacitor, and wherein the ideal resistor has a value equal to an equivalent series resistance (ESR) of the capacitor, and wherein the ideal inductor has a value equal to an equivalent series inductance (ESL) of the capacitor; FIG. 2 is a graph of the logarithm of the magnitude of the electrical impedance (Z) between the terminals of the electrical model of FIG. 1 versus the logarithm of frequency f; FIG. 3 is a perspective view of a structure including a pair of 10 in..times.10 in. square conductive planes separated by a dielectric layer having a dimension or height h between the conductive planes; FIG. 4 is a graph of the simulated magnitude of electrical impedance (Z) of the structure of FIG. 3 between the pair of rectangular conductive planes versus frequency; FIG. 5 is a cross sectional view of a portion of one embodiment of an electrical interconnecting apparatus including a power distribution structure having two different pairs of conductive power planes, wherein the interconnecting apparatus includes two signal planes between the pairs of power planes; FIG. 6 is a cross sectional view of a portion of one embodiment of an electrical interconnecting apparatus including a power distribution structure having three different pairs of conductive power planes, wherein the interconnecting apparatus includes two signal planes between a first and a second of the three pairs of power planes, and two more signal planes between the second and the third of the three pairs of power planes; FIG. 7 is a perspective view of a portion of an electrical power distribution structure including a capacitor (e.g., an interdigitated capacitor) mounted upon an upper surface of an interconnecting apparatus and electrically coupled between an electrical power (i.e., power) conductor layer and an electrical ground (i.e., ground) conductor layer of the interconnecting apparatus; FIG. 8 is a top plan view of one embodiment of the power conductor layer of FIG. 7 following a process (e.g., an etch process) during which a portion of an electrically conductive material (e.g., a metal) forming the power conductor layer is removed from an isolation region, thereby forming an island electrically isolated from a remainder of the power conductor layer; FIG. 9 is a top plan view of the embodiment of the power conductor layer of FIG. 7 following a process during which two resistive stripes are formed between the island and the remainder of the power conductor layer on opposite sides of the island, wherein the capacitor of FIG. 7 and an electrical resistance offered by the two resistive stripes of FIG. 9 are coupled in series between the power conductor layer and the ground conductor layer of the interconnecting apparatus of FIG. 7; FIG. 10 is a cross sectional view of a portion of an electrical power distribution structure wherein vias with relatively high electrical resistances are used to electrically couple a capacitor (e.g., a multilayer ceramic capacitor) between a planar power conductor (i.e., a power plane) and a planar ground conductor (i.e., a ground plane) of an interconnecting apparatus; FIG. 11 is a cross sectional view of a portion of an electrical power distribution structure wherein an electrically resistive adhesive material is used to electrically couple a capacitor between a power plane and a ground plane of an interconnecting apparatus; FIG. 12 is a cross sectional view of a portion of an electrical power distribution structure wherein a resistive coupon is positioned between a capacitor and an interconnecting apparatus, and wherein an electrical resistance offered by the resistive coupon is electrically coupled in series with the capacitor between a power plane and a ground plane of the interconnecting apparatus; FIG. 13 is a cross sectional view of a portion of an electrical power distribution structure wherein a capacitor is electrically coupled between a power plane and a ground plane of an interconnecting apparatus, and wherein the capacitor includes a single electrical resistance element in series with a capacitance element; FIG. 14A is a top view of one embodiment of an annular resistor; FIG. 14B is a schematic representation of one embodiment of an annular resistor; FIG. 15 is a side view of a one embodiment of a multi-terminal capacitor mounted on a printed circuit board, wherein some of the capacitor terminals are electrically connected to a ground plane, and some of the terminals are electrically connected to annular resistors which are printed into a power plane; FIG. 16 is a top plan view of one embodiment of a multi-terminal capacitor, wherein a plurality of the capacitor terminals are connected to annular resistors; FIG. 17 is a side view of a second embodiment of a four-terminal capacitor mounted on a printed circuit board, wherein some of the capacitor terminals are electrically connected to a ground plane, and some of the terminals are electrically connected to annular resistors which are printed into a power plane; FIG. 18 is a top plan view of the second embodiment of a four-terminal capacitor, wherein two of the capacitor terminals are connected to annular resistors; FIG. 19 is a side view of a third embodiment of a two-terminal capacitor mounted on a printed circuit board, wherein one of the capacitor terminals is electrically connected to a ground plane, and the other terminal is electrically connected to an annular resistor which is printed into a power plane; FIG. 20 is a top plan view of the third embodiment of a two-terminal capacitor, wherein one of the capacitor terminals is connected to an annular resistor; FIG. 21 is a side view of a fourth embodiment of a two-terminal capacitor mounted on a printed circuit board, wherein one of the capacitor terminals is electrically connected to a ground plane, and the other terminal is electrically connected to three annular resistors which are printed into a power plane; FIG. 22 is a top plan view of a fourth embodiment of a two-terminal capacitor, wherein one of the capacitor terminals is connected to three annular resistors; FIGS. 23A-23C in combination form a flow chart of one embodiment of a first method for achieving a target electrical impedance Z.sub.t in an electrical power distribution structure including a pair of parallel planar conductors separated by a dielectric layer; and FIGS. 24A-24F in combination form a flow chart of one embodiment of a second method for achieving a target electrical impedance Z.sub.t in an electrical power distribution structure including a pair of parallel planar conductors separated by a dielectric layer; While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. FIG. 3 is a perspective view of a structure 20 including a pair of 10 in..times.10 in. square conductive planes 22 separated by a fiberglass-epoxy composite dielectric layer 24 having a height h. Each conductive plane 22 is made of copper and is about 0.0014 in. thick. Dielectric layer 24 is made of FR4 dielectric material having a dielectric constant of about 4.0, and height h is approximately 0.002 in. FIG. 4 is a graph of the simulated magnitude of electrical impedance (Z) of structure 20 of FIG. 3 between the pair of rectangular conductive planes 22 versus frequency. The graph was created by modeling each half-inch square of the pair of conductive planes 22 as a matrix of transmission lines. The impedance value was computed by simulating the application of a 1 ampere constant current between the centers of planes 22, varying the frequency of the current, and determining the magnitude of the steady state voltage between the centers of planes 22. As shown in FIG. 4, the magnitude of the electrical impedance between conductive planes 22 of FIG. 3 varies widely at frequencies above about 500 MHz. Conductive planes 22 exhibit multiple electrical resonances at frequencies between about 150 MHz and 1 GHz, resulting in alternating high and low impedance values. Conductive planes 22 would be poor candidates for power and ground planes of an electrical interconnecting apparatus (e.g., a PCB) conveying signals having significant frequency content above 500 MHz as the high impedance values of conductive planes 22 at frequencies above 500 MHz would cause relatively large power supply voltage perturbations. FIGS. 5 and 6 will now be used to illustrate exemplary interconnect apparatus and how an effective distance (e.g., height) h may be calculated for power distribution structures of the interconnect apparatus. FIG. 5 is a cross sectional view of a portion of one embodiment of an electrical interconnecting apparatus 120 including a power distribution structure having two different pairs of conductive power planes. Interconnecting apparatus 120 includes a GROUND1 plane 122 and a POWER1 plane 124 forming one of the pairs of conductive power planes, a SIGNAL1 plane 126, a SIGNAL2 plane 128, and a POWER2 plane 130 and a GROUND2 plane 132 forming the other pair of conductive power planes. POWER1 plane 124 and POWER2 plane 130 are coupled by a via 134, and GROUND1 plane 122 and GROUND2 plane 132 are coupled by a via 136. SIGNAL1 plane 126 and SIGNAL2 plane 128 are used to convey electrical signals within interconnecting apparatus 120. As shown in FIG. 5, GROUND1 plane 122 and POWER1 plane 124 are separated by a height h.sub.1, and POWER2 plane 130 and GROUND2 plane 132 are separated by a height h.sub.2. For interconnecting apparatus 120 of FIG. 5, h for use in the above equation for calculating the impedance of the power distribution structure is given by: ##EQU8## where h.sub.1 and h.sub.2 are in mils. It is noted that if h.sub.1 =h.sub.2 =h.sub.x, then h=h.sub.x /2. FIG. 6 is a cross sectional view of a portion of one embodiment of an electrical interconnecting apparatus 140 including a power distribution structure having three different pairs of conductive power planes. Interconnecting apparatus 140 includes a GROUND1 plane 142 and a POWER1 plane 144 forming a first of the three pairs of conductive power planes, a SIGNAL1 plane 146, a SIGNAL2 plane 148, a GROUND2 plane 150 and a POWER2 plane 152 forming a second of three pairs of conductive power planes, a SIGNAL3 plane 154, a SIGNAL4 plane 156, and a GROUND3 plane 158 and a POWER3 plane 160 forming the third pair of conductive power planes. POWER1 plane 144, POWER2 plane 152, and POWER3 plane 160 are coupled by a via 162, and GROUND1 plane 142, GROUND2 plane 150, and GROUND3 plane 158 are coupled by a via 164. SIGNAL1 plane 146, SIGNAL2 plane 148, SIGNAL3 plane 154, and SIGNAL4 plane 156 are used to convey electrical signals within interconnecting apparatus 140. As shown in FIG. 6, GROUND1 plane 142 and POWER1 plane 144 are separated by a height h.sub.3, POWER2 plane 152 and GROUND2 plane 150 are separated by a height h.sub.4, and POWER3 plane 160 and GROUND3 plane 158 are separated by a height h.sub.5. For interconnecting apparatus 140 of FIG. 6, h for use in the above equation for calculating the impedance of the power distribution structure is given by: ##EQU9## where h.sub.3, h.sub.4, and h.sub.5 are in mils. It is noted that if h.sub.3 =h.sub.4 =h.sub.5 =h.sub.y, then h=h.sub.y /3. The smoothest impedance curve for a pair of parallel conductive planes separated by a dielectric layer may be achieved when the parallel resultant of the ESR values of all n bypass capacitors (ESR/n) coupled between the pair of parallel conductive planes is equal to the characteristic impedance of the pair of parallel conductive planes. As described above, a separation distance h between the parallel conductive planes may be determined in order to achieve a target electrical impedance Z.sub.t. The target electrical impedance Z.sub.t may then be used to determine a required value of mounted resistance R.sub.m-req for n discrete electrical capacitors (e.g., bypass capacitors): The n discrete electrical capacitors may be selected such that the n capacitors each have an equivalent series resistance (ESR) which is less than or equal to the required value of mounted resistance R.sub.m-req. Where the ESR of the n capacitors is less than the required value of mounted resistance R.sub.m-req, an electrical resistance element may be placed in series with each of the n capacitors. In this situation, the mounted resistance R.sub.m of a given one of the n capacitors may include the ESR of the capacitor, an electrical resistance of a corresponding electrical resistance element in series with the capacitor, and the electrical resistances of all conductors coupling the capacitor between the pair of parallel conductive The electrical resistance value for each of the n electrical resistance elements may be selected such that the mounted resistance R.sub.m of each of the n capacitors is equal to the required value of mounted resistance R.sub.m-req. This may be accomplished by determining the mounted resistance R.sub.m of a representative one of the n capacitors when coupled between the planar conductors and when the electrical resistance of the corresponding electrical resistance element is zero. In this situation, the mounted resistance R.sub.m of the representative capacitor may be equal to the sum of the ESR of the representative capacitor and the electrical resistances of all conductors coupling the capacitor between the planar conductors. The electrical resistance of each of the n electrical resistance elements may be determined by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. FIGS. 7-13 will now be used to illustrate several embodiments of an electrical power distribution structure including an electrical resistance element coupled in series with a capacitor between a pair of parallel conductive planes separated by a dielectric layer (e.g., between a power plane and a ground plane). In the embodiments of FIGS. 7-13, electrical resistance elements are incorporated in ways which do not appreciably increase physical dimensions of current loops coupling the capacitor between the pair of parallel conductive planes. As a result, the mounted inductance L.sub.m of the capacitor is not changed substantially over a corresponding conventional structure. FIG. 7 is a perspective view of a portion 170 of an electrical power distribution structure including a capacitor 172 (e.g., an interdigitated capacitor) mounted upon an upper surface of an interconnecting apparatus 174. Interconnecting apparatus 174 may be, for example, a PCB, a component of a semiconductor device package, or formed upon a surface of an integrated circuit substrate. Interconnecting apparatus 174 includes a signal conductor layer 176, an electrical ground (i.e., ground) conductor layer 178, and an electrical power (i.e., power) conductor layer 180. Capacitor 172 has a body and multiple power and ground terminals positioned along opposite side surfaces of the body. The power and ground terminals alternate along the sides of the body. A total of 8 vias are used to couple capacitor 172 between power conductor layer 180 and ground conductor layer 178. Vias 182A and 182B of FIG. 7 are used to connect corresponding power terminals of capacitor 172 to a portion of power conductor layer 180. Vias 184A and 184B of FIG. 7 are used to connect corresponding ground terminals of capacitor 172 to ground conductor layer 178. Two other vias on a side of capacitor 172 opposite vias 182A, 182B, 184A, and 184B are used to couple corresponding power terminals of capacitor 172 to power conductor layer 180. An additional two vias on the opposite side of capacitor 172 are used to connect corresponding ground terminals of capacitor 172 to ground conductor layer 178. The multiple parallel current paths formed between power conductor layer 180 and ground conductor layer 178 through capacitor 172 reduce a mounted inductance of capacitor 172. FIG. 8 is a top plan view of one embodiment of power conductor layer 180 of FIG. 7 following a process (e.g., an etch process) during which a portion of an electrically conductive material (e.g., a metal) forming power conductor layer 180 is removed from an isolation region 190, thereby forming an island 192 electrically isolated from a remainder 194 of power conductor layer 180. FIG. 9 is a top plan view of the embodiment of power conductor layer 180 of FIG. 8 following a process during which two resistive stripes 196A and 196B are formed between island 192 and remainder 194 of power conductor layer 180. In the embodiment of FIG. 9, resistive stripes 196A and 196B are formed in portions of isolation region 190 on opposite sides of island 192. During use of interconnecting apparatus 174 (FIG. 7), an electrical power supply voltage is impressed between remainder 194 of power conductor layer 180 and ground conductor layer 178. Connected between island 192 and ground conductor layer 178 by the eight vias, capacitor 172 presents an electrical capacitance between island 192 and ground conductor layer 178. Resistive stripes 196A and 196B resistively couple island 192 to remainder 194 of power conductor layer 180. Resistive stripes 196A and 196B, electrically in parallel between island 192 and remainder 194 of power conductor layer 180, present a single value of resistance between island 192 and remainder 194 of power conductor layer 180. The electrical resistance presented by resistive stripes 196A and 196B and the electrical capacitance of capacitor 172 are coupled in series between remainder 194 of power conductor layer 180 and ground conductor layer 178, forming a series resistance-capacitance (RC) network between remainder 194 of power conductor layer 180 and ground conductor layer 178. Resistive stripes 196A and 196B are formed from electrically resistive materials (e.g., resistive inks). Resistive stripes 196A and 196B each present an electrical resistance between island 192 and remainder 194 of power conductor layer 180. The magnitudes of the resistances presented by resistive stripes 196A and 196B depend upon the physical dimensions of respective resistive stripes 196A and 196B. The magnitudes of the resistances presented by resistive stripes 196A and 196B are also dependent upon the electrical resistivities of the electrically resistive materials used to form respective resistive stripes 196A and 196B. In the embodiment of FIG. 9, multiple anchor regions 198 exist in power conductor layer 180 along perimeters of remainder 194 and island 192 adjacent to the portions of isolation region 190 where resistive stripes 196A and 196B are formed. Each anchor region includes a protrusion extending outwardly from a perimeter of remainder 194 toward island 192 and a correspondingly-shaped recess in an adjacent perimeter of island 192. Anchor regions 198 help keep resistive stripes 196A and 196B in place despite any lateral shear forces which may be exerted upon resistive stripes 196A and 196B during assembly of interconnecting apparatus 174. In the embodiment of FIG. 9, resistive stripes 196A and 196B are formed from a material having an electrical resistivity higher than that of the electrically conductive material (e.g., a metal) removed from power conductor layer 180 to form isolation region 190. As a result, a mounted resistance R.sub.m of capacitor 172 is increased over a corresponding conventional structure. A mounted inductance L.sub.m of capacitor 172 would not be expected to change substantially over the corresponding conventional structure as the physical dimensions of the current path through capacitor 172 are substantially unchanged over the corresponding conventional structure. Capacitor 172 may be, for example, one of n capacitors coupled between power conductor layer 180 and ground conductor layer 178 to stabilize the electrical impedance of the electrical distribution structure including power conductor layer 180 and ground conductor layer 178. A target electrical impedance Z.sub.t may be used to determine a required value of mounted resistance R.sub.m-req for the n capacitors according to: Capacitor 172 may have an ESR which is less than the required value of mounted resistance R.sub.m-req. In this situation, the electrical resistance value offered by resistive stripes 196A and 196B in parallel may be selected such that the mounted resistance R.sub.m of capacitor 172 is equal to the required value of mounted resistance R.sub.m-req. This may be accomplished by determining the mounted resistance R.sub.m of capacitor 172 when the electrical resistance value offered by resistive stripes 196A and 196B in parallel is zero. The mounted resistance R.sub.m of capacitor 172 when the electrical resistance value offered by resistive stripes 196A and 196B in parallel is zero may be equal to the sum of the ESR of capacitor 172 and the electrical resistances of all conductors coupling capacitor 172 between the planar conductors. The electrical resistance value offered by resistive stripes 196A and 196B in parallel may be determined by subtracting the mounted resistance R.sub.m of capacitor 172 when the electrical resistance value offered by resistive stripes 196A and 196B in parallel is zero from the required value of mounted resistance R.sub.m-req. FIG. 10 is a cross sectional view of a portion 200 of an electrical power distribution structure wherein vias with relatively high electrical resistances are used to electrically couple a capacitor 202 (e.g., a multilayer ceramic capacitor) between a planar power conductor (i.e., a power plane) 204 and a planar ground conductor (i.e., a ground plane) 206 of an interconnecting apparatus 203. Capacitor 202 may be, for example, a bypass capacitor. Interconnecting apparatus 203 may be, for example, a PCB, a component of a semiconductor device package, or formed upon a surface of an integrated circuit substrate. Interconnecting apparatus 203 includes multiple layers of planar electrical conductors separated by dielectric layers. In the embodiment of FIG. 10, capacitor 202 has two terminals 210 and 212 on opposite ends of a body or package. Terminal 210 is electrically connected to a first solder land 214 by a solder fillet 216. Solder land 214 is electrically coupled to ground plane 206 by a via 218. Terminal 212 is electrically connected to a second solder land 220 by a solder fillet 222. Solder land 220 is electrically coupled to power plane 204 by a via 224. Solder lands 214 and 220 are formed within a signal plane 208 of interconnecting apparatus 203. Signal plane 208 includes multiple signal lines (i.e., interconnects or traces) used to convey signals within interconnecting apparatus 203. During use of interconnecting apparatus 203, power plane 204 is connected to a power terminal of an electrical power supply at a power entry point of interconnecting apparatus 203, and ground plane 206 is connected to a ground terminal of the power supply at the power entry point. Power plane 204 and ground plane 206 provide electrical power to electronic devices connected between power plane 204 and ground plane 206. In the embodiment of FIG. 10, vias 218 and 224 are formed from a material having an electrical resistivity higher than that of conventional via-forming materials in order to increase a mounted resistance R.sub.m of capacitor 202. It is noted that the mounted inductance L.sub.m of capacitor 202 would not be expected to change substantially over a corresponding conventional structure as only vias 218 and 224 are modified, and the physical dimensions of the current loop coupling capacitor 202 between power plane 204 and ground plane 206 are not increased substantially over the corresponding conventional structure. Capacitor 202 may be, for example, one of n capacitors coupled between power plane 204 and ground plane 206 to stabilize the electrical impedance of the electrical distribution structure including power plane 204 and ground plane 206. A target electrical impedance Z.sub.t may be used to determine a required value of mounted resistance R.sub.m-req for the n capacitors according to: Capacitor 202 may have an ESR which is less than the required value of mounted resistance R.sub.m-req. In this situation, the combined electrical resistance values of vias 218 and 224 may be selected such that the mounted resistance R.sub.m of capacitor 202 is equal to the required value of mounted resistance R.sub.m-req. This may be accomplished by determining the mounted resistance R.sub.m of capacitor 202 when the electrical resistances of vias 218 and 224 are both zero. The mounted resistance R.sub.m of capacitor 202 when the electrical resistances of vias 218 and 224 are both zero may be equal to the sum of the ESR of capacitor 202 and the electrical resistances of all conductors coupling the capacitor between the planar conductors (e.g., the electrical resistances of solder lands 214 and 220, and power plane 204 between via 218 and via 224 due to the length of the capacitor 202 package). The combined electrical resistances of vias 218 and 224 may be determined by subtracting the mounted resistance R.sub.m of capacitor 202 when the electrical resistances of vias 218 and 224 are both zero from the required value of mounted resistance R.sub.m-req. FIG. 11 is a cross sectional view of a portion 230 of an electrical power distribution structure wherein an electrically resistive adhesive material is used to electrically couple capacitor 202 between power plane 204 and ground plane 206 of interconnecting apparatus 203. Components of the electrical power distribution structure shown in FIG. 10 and described above are labeled similarly in FIG. 11. In the embodiment of FIG. 11, terminal 210 of capacitor 202 is electrically connected to first solder land 214 by a first amount of an electrically resistive adhesive material 232. Solder land 214 is electrically coupled to ground plane 206 by a via 234. Terminal 212 is electrically connected to a second solder land 220 by a second amount of the electrically resistive adhesive material 236. Solder land 220 is electrically coupled to power plane 204 by a via 238. In the embodiment of FIG. 11, the first amount of the electrically resistive adhesive material 232 and the second amount of the electrically resistive adhesive material 236 have electrical resistivities higher than that of conventional solder fillets in order to increase mounted resistance R.sub.m of capacitor 202. It is noted that the mounted inductance L.sub.m of capacitor 202 would not be expected to change substantially over a corresponding conventional structure as only the mechanisms for attaching terminals 210 and 212 of capacitor 202 to respective solder lands 214 and 220 are modified, and the physical dimensions of the current loop coupling capacitor 202 between power plane 204 and ground plane 206 are not increased substantially over the corresponding conventional structure. In the embodiment of FIG. 11, capacitor 202 may be one of n capacitors coupled between power plane 204 and ground plane 206 to stabilize the electrical impedance of the electrical distribution structure including power plane 204 and ground plane 206. A target electrical impedance Z.sub.t may be used to determine a required value of mounted resistance R.sub.m-req for the n capacitors according to: Capacitor 202 may have an ESR which is less than the required value of mounted resistance R.sub.m-req. In this situation, the combined electrical resistance values of the first amount of the electrically resistive adhesive material 232 and the second amount of the electrically resistive adhesive material 236 may be selected such that the mounted resistance R.sub.m of capacitor 202 is equal to the required value of mounted resistance R.sub.m-req. This may be accomplished by determining the mounted resistance R.sub.m of capacitor 202 when the electrical resistances of the first amount of the electrically resistive adhesive material 232 and the second amount of the electrically resistive adhesive material 236 are both zero. The mounted resistance R.sub.m of capacitor 202 when the electrical resistances of the first amount of the electrically resistive adhesive material 232 and the second amount of the electrically resistive adhesive material 236 are both zero may be equal to the sum of the ESR of capacitor 202 and the electrical resistances of all conductors coupling the capacitor between the planar conductors (e.g., the electrical resistances of solder lands 214 and 220, and power plane 204 between via 234 and via 238 due to the length of the capacitor 202 package). The combined electrical resistances of the first amount of the electrically resistive adhesive material 232 and the second amount of the electrically resistive adhesive material 236 may be determined by subtracting the mounted resistance R.sub.m of capacitor 202 when the electrical resistances of the first amount of the electrically resistive adhesive material 232 and the second amount of the electrically resistive adhesive material 236 are both zero from the required value of mounted resistance R.sub.m-req. FIG. 12 is a cross sectional view of a portion 240 of an electrical power distribution structure wherein a resistive coupon 242 is positioned between capacitor 202 and interconnecting apparatus 203, and wherein an electrical resistance offered by resistive coupon 242 is electrically coupled in series with capacitor 202 between power plane 204 and ground plane 206 of interconnecting apparatus 203. Components of the electrical power distribution structure shown in FIGS. 10-11 and described above are labeled similarly in FIG. 12. In the embodiment of FIG. 12, terminal 210 of capacitor 202 is electrically connected to a solder land 244 on an upper surface of resistive coupon 242 by a solder fillet 246. Solder land 244 is electrically coupled to a side terminal 248 on a side surface of resistive coupon 242 via a first resistive region 250 of resistive coupon 242. Side terminal 248 of resistive coupon 242 is electrically connected to a solder land 252 of interconnecting apparatus 203 by a solder fillet 254. Solder land 252 of interconnecting apparatus 203 is electrically connected to ground plane 206 by a via 256. Terminal 212 of capacitor 202 is electrically connected to a solder land 258 on the upper surface of resistive coupon 243 by a solder fillet 260. Solder land 258 is electrically coupled to a side terminal 262, on a side surface of resistive coupon 243 opposite side terminal 248, via a second resistive region 264 of resistive coupon 243. Side terminal 262 of resistive coupon 243 is electrically connected to a solder land 266 of interconnecting apparatus 203 by a solder fillet 268. Solder land 266 of interconnecting apparatus 203 is electrically connected to power plane 204 by a via 270. In the embodiment of FIG. 12, the first resistive region 250 and the second resistive region 264 of the respective resistive coupons 242 and 243 have electrical resistivities higher than that of conventional solder fillets in order to increase mounted resistance R.sub.m of capacitor 202. It is noted that the mounted inductance L.sub.m of capacitor 202 would not be expected to change substantially over a corresponding conventional structure as physical dimensions of the resistive coupons 242 and 243 may be relatively small, and thus the physical dimensions of the current loop coupling capacitor 202 between power plane 204 and ground plane 206 may not be increased substantially over the corresponding conventional structure. In the embodiment of FIG. 12, capacitor 202 may be one of n capacitors coupled between power plane 204 and ground plane 206 to stabilize the electrical impedance of the electrical distribution structure including power plane 204 and ground plane 206. As described above, a target electrical impedance Z.sub.t may be used to determine a required value of mounted resistance R.sub.m-req for the n capacitors according to: Capacitor 202 may have an ESR which is less than the required value of mounted resistance R.sub.m-req. In this situation, the combined electrical resistance values of the first resistive region 250 and the second resistive region 264 of the respective resistive coupons 242 and 243 may be selected such that the mounted resistance R.sub.m of capacitor 202 is equal to the required value of mounted resistance R.sub.m-req. This may be accomplished by determining the mounted resistance R.sub.m of capacitor 202 when the electrical resistances of the first resistive region 250 and the second resistive region 264 are both zero. The mounted resistance R.sub.m of capacitor 202 when the electrical resistances of the first resistive region 250 and the second resistive region 264 are both zero may be equal to the sum of the ESR of capacitor 202 and the electrical resistances of all conductors coupling the capacitor between the planar conductors (e.g., the electrical resistances of solder lands 252 and 266, and power plane 204 between via 256 and via 270 due to the length of the capacitor 202 package). The combined electrical resistances of the first resistive region 250 and the second resistive region 264 may be determined by subtracting the mounted resistance R.sub.m of capacitor 202 when the electrical resistances of the first resistive region 250 and the second resistive region 264 are both zero from the required value of mounted resistance R.sub.m-req. FIG. 13 is a cross sectional view of a portion 280 of an electrical power distribution structure wherein capacitor 202 is electrically coupled between power plane 204 and ground plane 206 of interconnecting apparatus 203, and wherein capacitor 202 includes an electrical resistance element 282 in series with a capacitance element. Components of the electrical power distribution structure shown in FIGS. 22-24 and described above are labeled similarly in FIG. 25A. In the embodiment of FIG. 13, in addition to terminals 210 and 212, capacitor 202 includes two interleaved sets of conductive plates arranged in parallel and separated by a dielectric. One of the two sets of conductive plates is electrically connected to terminal 212. The other set of conductive plates is electrically coupled to terminal 210 via internal electrical resistance element 282. Terminal 210 is electrically connected to first solder land 214 by solder fillet 216. Solder land 214 is electrically coupled to ground plane 206 by a via 284. Terminal 212 is electrically connected to second solder land 220 by solder fillet 222. Solder land 220 is electrically coupled to power plane 204 by a via 286. In the embodiment of FIG. 13, electrical resistance element 282 is formed from a material having a relatively high electrical resistivity (e.g., higher than that of a metal conductor) in order to increase mounted resistance R.sub.m of capacitor 202. It is noted that the mounted inductance L.sub.m of capacitor 202 would not be expected to change substantially over a corresponding conventional structure as the physical the length of the capacitor 202 package may not be increased significantly. Accordingly, the physical dimensions of the current loop coupling capacitor 202 between power plane 204 and ground plane 206 may not be increased substantially over the corresponding conventional structure. Capacitor 202 may be one of n capacitors coupled between power plane 204 and ground plane 206 to stabilize the electrical impedance of the electrical distribution structure including power plane 204 and ground plane 206. As described above, a target electrical impedance Z.sub.t may be used to determine a required value of mounted resistance R.sub.m-req for the n capacitors according to: Capacitor 202 may have an ESR which is less than the required value of mounted resistance R.sub.m-req. In this situation, the electrical resistance of electrical resistance element 282 may be selected such that the mounted resistance R.sub.m of capacitor 202 is equal to the required value of mounted resistance R.sub.m-req. This may be accomplished by determining the mounted resistance R.sub.m of capacitor 202 when the electrical resistance of electrical resistance element 282 is zero. The mounted resistance R.sub.m of capacitor 202 when the electrical resistance of electrical resistance element 282 is zero may be equal to the sum of the ESR of capacitor 202 and the electrical resistances of all conductors coupling the capacitor between the planar conductors (e.g., the electrical resistances of solder lands 214 and 220, and power plane 204 between via 284 and via 286 due to the length of the capacitor 202 package). The electrical resistance of electrical resistance element 282 may be determined by subtracting the mounted resistance R.sub.m of capacitor 202 when the electrical resistance of electrical resistance element 282 is zero from the required value of mounted resistance R.sub.m-req. The n capacitors may then be selected having internal electrical resistance elements 282 with electrical resistances substantially equal to the determined value of electrical resistance. FIGS. 14A and 14B are top views of one embodiment of an annular resistor that may be used to provide the resistance in lieu of the resistive elements described above. Annular resistor 400, in one embodiment, is a circularly shaped resistor. In some embodiments, annular resistor 400 may be printed into a planar conductor (e.g. a power plane), while in other embodiments, annular resistor 400 may be a discrete component which may be mounted on a printed circuit board. Annular resistor 400 may also be placed into a void in a planar conductor. Annular resistor 400 includes a first terminal 402 and a second terminal 406. An annular resistor 404 may be arranged between first terminal 402 and second terminal 406. In various embodiments, the first terminal may be the outer periphery of the annular resistor 404. Similarly, in some embodiments, the second terminal may be the inner periphery of the annular resistor 404. As shown in FIG. 14B, the annular resistor may be considered, from an electrical model point of view, to be a plurality of resistors connected in parallel between the first terminal 402 and the second terminal 406. Turning now to FIG. 15 a side view of one embodiment of a multi-terminal capacitor, for example an eight terminal capacitor, mounted on a printed circuit board, wherein some of the capacitor terminals are electrically connected to a ground plane, and some of the terminals are electrically connected to annular resistors which are printed into a power plane. Multi-terminal capacitor 172 is mounted upon printed circuit board (PCB) 171. PCB 171 includes a pair of planar conductors, ground plane 178 and power plane 180, both of which are part of an electrical power distribution structure. The pair of planar conductors are separated by dielectric material 179. PCB 171 also includes a plurality of pads 177 located on surface layer 171 for mounting multi-terminal capacitor 172. A plurality of first terminals 173A of multi-terminal capacitor 172 is electrically connected to ground plane 178 through a plurality of first pads 177A and vias 184. A plurality of second terminals 173B is connected to a second terminal 406 of an annular resistor 400 through a plurality of second pads 177B and vias 182. Multi-terminal capacitor 172 may include a plurality of individual capacitors, or may include a single capacitor which is electrically connected to the plurality of first terminals 173A and the plurality of second terminals 173B. In either case, by connecting the plurality of second terminals 173B to annular resistors 400, a series RC circuit is formed between ground plane 178 and power plane 180, as the first terminals 402 of annular resistors 400 are electrically connected to power plane 180. Moving now to FIG. 16, a top plan view of one embodiment of a multi-terminal capacitor, wherein a plurality of the capacitor terminals are connected to annular resistors is shown. Multi-terminal capacitor 172 includes a plurality of first leads 173A which are electrically coupled to a ground plane. Each of a plurality of second leads 173B are electrically connected to a terminal of an annular resistor 400. As shown in FIG. 15, annular resistors 400, in this embodiment, are printed into a power plane in such a pattern to allow them to be placed in series with a bypass capacitor that is coupled to the ground plane. In other embodiments, the annular resistors may be arranged in different patterns provided there is sufficient space between Turning now to FIG. 17 a side view of a second embodiment of an exemplary four-terminal capacitor mounted on a printed circuit board, wherein some of the capacitor terminals are electrically connected to a ground plane, and some of the terminals are electrically connected to annular resistors which are printed into a power plane. Four-terminal capacitor 500 is mounted upon printed circuit board (PCB) 502. PCB 502 includes a pair of planar conductors, ground plane 504 and power plane 506, both of which are part of an electrical power distribution structure. The pair of planar conductors are separated by dielectric material 508. PCB 502 also includes a plurality of pads 510 located on surface layer 512 for mounting the four-terminal capacitor 500. Two first terminals 514A of the four-terminal capacitor 500 are electrically connected to the ground plane 504 through a plurality of first pads 510A and vias 516. Two second terminals 514B are connected to a second terminal 518 of respective annular resistors 520 through a plurality of second pads 510B and vias 521. The four-terminal capacitor 500 may include a plurality of individual capacitors, or may include a single capacitor which is electrically connected to the two first terminals 514A and the two second terminals 514B. In either case, by connecting the two second terminals 514B to respective annular resistors 520, a series RC circuit is formed between the ground plane 504 and the power plane 506, as first terminals 522 of annular resistors 520 are electrically connected to the power plane 506. Moving now to FIG. 18, a top plan view is shown of the second embodiment of the four-terminal capacitor, wherein two of the capacitor terminals are connected to annular resistors. The four-terminal capacitor 500 includes the two first terminals 514A which are electrically coupled to the ground plane 504. Each of the two second terminals 514B are electrically connected to the terminal second terminal 518 of a respective annular resistor 520. As shown in FIG. 17, the annular resistors 520, in this embodiment, are printed into the power plane 506 in such a pattern to allow them to be placed in series with a bypass capacitor that is coupled to the ground plane. In other embodiments, the annular resistors may be arranged in different patterns providing there is sufficient space between them. Turning now to FIG. 19 a side view of a third embodiment of a two-terminal capacitor mounted on a printed circuit board, wherein one of the capacitor terminals is electrically connected to a ground plane, and the other terminal is electrically connected to an annular resistor which is printed into a power plane. Two-terminal capacitor 600 is mounted upon printed circuit board (PCB) 602. PCB 602 includes a pair of planar conductors, ground plane 604 and power plane 606, both of which are part of an electrical power distribution structure. The pair of planar conductors are separated by dielectric material 608. PCB 602 also includes a plurality of pads 610 located on surface layer 612 for mounting the two-terminal capacitor 600. A first terminal 614A of the two-terminal capacitor 600 is electrically connected to the ground plane 604 through a first pad 610A and via 616. A second terminal 614B is connected to a second terminal 618 of an annular resistor 620 through a second pad 610B and via 621. The two-terminal capacitor 600 is electrically connected to the first terminal 614A and the second terminal 614B. By connecting the second terminal 614B to an annular resistor 620, a series RC circuit is formed between the ground plane 604 and the power plane 606, as the first terminal 622 of the annular resistor 620 is electrically connected to the power plane 606. Moving now to FIG. 20, a top plan view is shown of the third embodiment of the two-terminal capacitor, wherein one of the capacitor terminals is connected to an annular resistor. The two-terminal capacitor 600 includes the first terminal 614A which is electrically coupled to the ground plane 604. The second terminal 614B is electrically connected to the second terminal 618 of the annular resistor 620. As shown in FIG. 18, the annular resistor 620, in this embodiment, is printed into the power plane 606 in such a pattern to allow the annular resistor to be placed in series with a bypass capacitor that is coupled to the ground plane. Turning now to FIG. 21 a side view is shown of a fourth embodiment of a two-terminal capacitor mounted on a printed circuit board, wherein one of the capacitor terminals is electrically connected to a ground plane, and the other terminal is electrically connected to three annular resistors which are printed into a power plane. Two-terminal capacitor 700 is mounted upon printed circuit board (PCB) 702. PCB 702 includes a pair of planar conductors, ground plane 704 and power plane 706, both of which are part of an electrical power distribution structure. The pair of planar conductors are separated by dielectric material 708. PCB 702 also includes a plurality of pads 710 located on surface layer 712 for mounting the two-terminal capacitor 700. A first terminal 714A of the two-terminal capacitor 700 is electrically connected to the ground plane 704 through a first pad 710A and via 716. A second terminal 714B is connected to a second terminal 718 of an annular resistor 720 through a second pad 710B and via 721. The two-terminal capacitor 700 is electrically connected to the first terminal 714A and the second terminal 714B. By connecting the second terminal 714B to three annular resistors 720, a series RC circuit is formed between the ground plane 704 and the power plane 706, as the first terminals 722 of the annular resistors 720 are electrically connected to the power plane 706. Moving now to FIG. 22, a top plan view is shown of the fourth embodiment of the two-terminal capacitor, wherein one of the capacitor terminals is connected to three annular resistors. The two-terminal capacitor 700 includes the first terminal 714A which is electrically coupled to the ground plane 704. Each of three second terminals 714B are electrically connected to the second terminal 718 of a respective one of the three annular resistors 720. As shown in FIG. 20, the annular resistors 720, in this embodiment, are printed into the power plane 706 in such a pattern to allow them to be placed in series with a bypass capacitor that is coupled to the ground plane. In other embodiments, the annular resistors 720 may be arranged in different patterns providing there is sufficient space between them. While the annular resistors 720 have been shown as being substantially symmetric and annular circles, it is clear that the annular resistors can take on many different forms. These forms can include squares, rectangles, ellipses and the like, in addition to non-symmetrical shapes. The choice of the shape of the annular resistor is based on the design constraints for a particular PCB design including, but not limited to, fabrication tolerances, resistive value requirements and cost. In addition, the placement of the via connecting the capacitor terminal to the inner periphery of the annular resistor, although shown (in some Figures) as being substantially in the center, is not meant to be limiting. Just as the shape of the annular resistor may vary depending on different factors, the positioning of the via with respect to the inner periphery may also move. As long as the via electrically couples the inner periphery of the annular resistor to the terminal of the capacitor, the location of the via within the inner periphery can be moved to meet PCB design requirements. Further, while the annular resistor has been shown to be co-planar with the power plane, it is clear that the annular resistor can be manufactured according to other known techniques. These techniques include forming the annular resistor in a void defined by either a ground plane, power plane or signal plane or forming the annular resistor on a respective plane. Still further, the annular resistor can be formed within and in the plane. Examples of such annular resistors can be found in U.S. Pat. No. 5,708,569 issued to Howard, et al. on Jan. 13, 1998 and which is herein fully incorporated by reference. FIGS. 23A-23C in combination form a flow chart of one embodiment of a first method 300 for achieving a target electrical impedance Z.sub.t in an electrical power distribution structure including a pair of parallel planar conductors separated by a dielectric layer. During a step 302, a distance d.sub.p around the outer edges (i.e., the outer perimeter) of the electrical power distribution structure is determined (e.g., measured) as described above. A separation distance h between the parallel planar conductors required to achieve the target electrical impedance Z.sub.t is determined during a step 304 using distance d.sub.p and the relative dielectric constant .di-elect cons..sub.r of the dielectric layer. The following equation, based on the above empirical formula for the electrical impedance Zp, may be used to determine separation distance h: ##EQU10## where impedance Z.sub.t is in ohms and distance d.sub.p is in inches. During a step 306, a thickness t is selected for the dielectric layer, where t.ltoreq.h. Step 306 reflects the fact that thicknesses of dielectric layers between electrically conductive layers (e.g., copper sheets) of commercially available multi-layer printed circuit boards are typically selected from a range of available thicknesses. It is very likely that the above empirical formula for h will yield a required separation distance which lies between two available thickness within the range of available thicknesses. Assume, for example, that the above empirical formula for h yields a required separation distance which lies between a first available thickness and second available thickness, where the first available thickness is greater than the second available thickness. In this situation, selected thickness t may be the second available thickness such that t.ltoreq.h. During a step 308, the selected dielectric layer thickness t is used to determine the inductance L.sub.p of the electrical power distribution structure. The following equation may be used to calculate inductance L.sub.p : wherein .mu..sub.0 is the permeability of free space. It is noted that the dielectric material used to form the dielectric layer is assumed to be non-magnetic such that the relative permeability .mu..sub.r of the dielectric layer is substantially unity. A type of discrete electrical capacitor is selected during a step 310, wherein capacitors of the selected type have at least one substantially identical physical dimension (e.g., a length of the capacitor package between terminals) upon which a mounted inductance of the capacitors is dependent. During a step 312, the at least one substantially identical physical dimension is used to determine a mounted inductance L.sub.m of a representative one of the selected type of discrete electrical capacitor when the representative capacitor is electrically coupled between the planar conductors. The mounted inductance L.sub.m of the representative discrete electrical capacitor is the electrical inductance resulting from the coupling of the capacitor between the planar conductors. During a step 314, a required number n of the selected type of discrete electrical capacitor is determined dependent upon the inductance of the electrical power distribution structure L.sub.p and the mounted inductance L.sub.m, wherein n.gtoreq.2. The required number n of the selected type of discrete electrical capacitor may be determined using: ##EQU11## The target electrical impedance Z.sub.t is used during a step 316 to determine a required value of mounted resistance R.sub.m-req for the n discrete electrical capacitors. The required value of mounted resistance R.sub.m-req may be determined During a step 318, the required number n of the selected type of discrete electrical capacitor are selected, wherein each of the n capacitors has an equivalent series resistance (ESR) which is less than the required value of mounted resistance R.sub.m-req. During a step 320, a mounted resistance R.sub.m of a representative one of the n discrete electrical capacitors is determined when an electrical resistance of a corresponding electrical resistance element is zero. The electrical resistance of each of n electrical resistance elements is determined during a step 322 by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. During a step 324, the n discrete electrical capacitors and the n electrical resistance elements are electrically coupled between the planar conductors such that each of the n discrete electrical capacitors is coupled in series with a corresponding one of the n electrical resistance It is noted that during step 306, it is possible that the above empirical formula for h will yield a required separation distance which is less than a minimum available thickness. For example, a minimum thickness of dielectric layers for manufactured printed circuit boards may be 2 mils. If the above empirical formula for h yields a required separation distance which is less than 2 mils, it is possible to add additional pairs of parallel planar conductors to the electrical power distribution structure such that an equivalent thickness t between a representative single pair of parallel planar conductors is achieved. In general, for a structure having n pairs of parallel planar conductors separated by dielectric layers: ##EQU12## where t.sub.i is the thickness of the dielectric layer between the ith pair of the n pairs. The thickness of the dielectric layer between the n pairs of parallel planar conductors may be selected from the range of available thicknesses such that the resulting value of t is less than or equal to h. FIGS. 24A-24F in combination form a flow chart of one embodiment of a second method 330 for achieving a target electrical impedance Z.sub.t in an electrical power distribution structure including a pair of parallel planar conductors separated by a dielectric layer. During a step 332, a distance d.sub.p around the outer edges (i.e., the outer perimeter) of the electrical power distribution structure is determined (e.g., measured) as described above. A separation distance h between the parallel planar conductors required to achieve the target electrical impedance Z.sub.t is determined during a step 334 using distance d.sub.p and the relative dielectric constant .di-elect cons..sub.r of the dielectric layer. The following equation, based on the above empirical formula for electrical impedance Zp, may be used to determine separation distance h: ##EQU13## where impedance Z.sub.t is in ohms and distance d.sub.p is in inches. During a step 336, a thickness t is selected for the dielectric layer, where t.ltoreq.h. Step 336 reflects the fact that thicknesses of dielectric layers between electrically conductive layers (e.g., copper sheets) of commercially available multi-layer printed circuit boards are typically selected from a range of available thicknesses. As described above, where the empirical formula for h above yields a required separation distance which lies between a first available thickness and second available thickness, and the first available thickness is greater than the second available thickness, selected thickness t may be the second available thickness such that t.ltoreq.h. During a step 338, the selected dielectric layer thickness t is used to determine the inductance L.sub.p of the electrical power distribution structure. The following equation may be used to calculate inductance L.sub.p : wherein .mu..sub.0 is the permeability of free space. Again, it is noted that the dielectric material used to form the dielectric layer is assumed to be non-magnetic such that the relative permeability .mu..sub.r of the dielectric layer is substantially unity. A type of discrete electrical capacitor is selected during a step 340, wherein capacitors of the selected type have at least one substantially identical physical dimension (e.g., a length of the capacitor package between terminals) upon which a mounted inductance of the capacitors is dependent. During a step 342, the at least one substantially identical physical dimension is used to determine a mounted inductance L.sub.m of a representative one of the selected type of discrete electrical capacitors when the representative capacitor is electrically coupled between the planar conductors. Again, the mounted inductance L.sub.m of the representative discrete electrical capacitor is the electrical inductance resulting from the coupling of the capacitor between the planar conductors. During a step 344, a first required number n.sub.1 of discrete electrical capacitors is determined dependent upon the inductance of the electrical power distribution structure L.sub.p and the mounted inductance L.sub.m of the selected type of discrete electrical capacitor when electrically coupled between the planar conductors, wherein n.sub.1.ltoreq.2. The first required number n.sub.1 may be determined using: ##EQU14## A second required number n.sub.2 of the selected type of discrete electrical capacitor is determined during a step 346 dependent upon distance d.sub.p and a spacing distance S between adjacent discrete electrical capacitors, wherein n.sub.2.gtoreq.2. The second required number n.sub.2 may be determined using: ##EQU15## The electrical power distribution structure may be part of an electrical interconnecting apparatus (e.g., a printed circuit board). In this situation, spacing distance S may be less than or equal to a maximum spacing distance S.sub.max, where S.sub.max is a fraction of a wavelength of a maximum frequency f.sub.max of a frequency range of electrical signals conveyed within the electrical interconnecting apparatus. During a decision step 348, the first and second required numbers n.sub.1 and n.sub.2 are compared. If n.sub.2.gtoreq.n.sub.1, step 350 is performed next. On the other hand, if n.sub.1 &gt;n.sub.2, step 360 is performed next. During step 350, the target electrical impedance Z.sub.t is used to determine a required value of mounted resistance R.sub.m-req for n.sub.2 of the discrete electrical capacitors. The required value of mounted resistance R.sub.m-req for the n.sub.2 capacitors may be determined using: The number n.sub.2 of the discrete electrical capacitors are selected during step 352, wherein each of the n.sub.2 capacitors has an equivalent series resistance (ESR) which is less than the value of required mounted resistance R.sub.m-req. During a step 354, a mounted resistance R.sub.m of a representative one of the n.sub.2 capacitors is determined when the representative capacitor is coupled between the pair of parallel planar conductors and when an electrical resistance of a corresponding electrical resistance element is zero. The electrical resistance of each of n.sub.2 electrical resistance elements is determined during a step 356 by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. During a step 358, the n.sub.2 discrete electrical capacitors and the n.sub.2 electrical resistance elements are electrically coupled between the planar conductors along an outer perimeter of the parallel planar conductors such that each of the n.sub.2 discrete electrical capacitors is coupled in series with a corresponding one of the n.sub.2 electrical resistance elements. During step 360, the target electrical impedance Z.sub.t is used to determine a required value of mounted resistance R.sub.m-req for n.sub.1 of the discrete electrical capacitors dependent upon. The required value of mounted resistance R.sub.m-req for the n.sub.1 capacitors may be determined using: The number n.sub.1 of the discrete electrical capacitors are selected during a step 362, wherein each of the n.sub.1 capacitors has an equivalent series resistance (ESR) which is greater than the required value of mounted resistance R.sub.m-req. During a step 364, a mounted resistance R.sub.m of a representative one of the n.sub.1 capacitors is determined when the representative capacitor is coupled between the pair of parallel planar conductors and when an electrical resistance of a corresponding electrical resistance element is zero. The electrical resistance of each of n.sub.1 electrical resistance elements is determined during a step 366 by subtracting the mounted resistance R.sub.m of the representative capacitor from the required value of mounted resistance R.sub.m-req. During a step 368, the n.sub.1 discrete electrical capacitors and the n.sub.1 electrical resistance elements are electrically coupled between the planar conductors such that: (i) each of the n.sub.1 discrete electrical capacitors is coupled in series with a corresponding one of the n.sub.1 electrical resistance elements, (ii) n.sub.2 of the discrete electrical capacitors and the corresponding electrical resistance elements are positioned along an outer perimeter of the planar conductors, and (iii) the remaining (n.sub.1 -n.sub.2) capacitors and the corresponding electrical resistance elements are dispersed across a surface of at least one of the planar conductors. It is noted that during step 336, it is possible that the above empirical formula for h will yield a required separation distance which is less than a minimum available thickness. For example, a minimum thickness of dielectric layers for manufactured printed circuit boards may be 2 mils. If the above empirical formula for h yields a required separation distance which is less than 2 mils, it is possible to add additional pairs of parallel planar conductors to the electrical power distribution structure such that an equivalent thickness t between a representative single pair of parallel planar conductors is achieved. In general, for a structure having n pairs of parallel planar conductors separated by dielectric layers: ##EQU16## where t.sub.1, is the thickness of the dielectric layer between the ith pair of the n pairs. The thickness of the dielectric layer between the n pairs of parallel planar conductors may be selected from the range of available thicknesses such that the resulting value of t is less than or equal to h. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and description thereto are not intended to limit the invention to the particular form disclosed, but, on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling with the spirit and scope of the present invention as defined by the appended claims. * * * * *
{"url":"http://www.docstoc.com/docs/52961906/Adding-Electrical-Resistance-In-Series-With-Bypass-Capacitors-Using-Annular-Resistors---Patent-6727780","timestamp":"2014-04-24T23:32:43Z","content_type":null,"content_length":"164511","record_id":"<urn:uuid:78468aef-9b6f-4df9-a4cd-5422eb1ce15a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: petawatt ›› Measurement unit: petawatt Full name: petawatt Plural form: petawatts Category type: power Scale factor: 1.0E+15 ›› SI unit: watt The SI derived unit for power is the watt. 1 watt is equal to 1.0E-15 petawatt. Valid units must be of the power type. You can use this form to select from known units: I'm feeling lucky, show me some random units ›› Definition: Petawatt The SI prefix "peta" represents a factor of 10^15, or in exponential notation, 1E15. So 1 petawatt = 10^15 watts. The definition of a watt is as follows: The watt (symbol: W) is the SI derived unit for power. It is equivalent to one joule per second (1 J/s), or in electrical units, one volt ampere (1 V·A). ›› Sample conversions: petawatt petawatt to hectowatt petawatt to kilogram-force meter/hour petawatt to attowatt petawatt to dyne centimeter/second petawatt to joule/hour petawatt to horsepower [international] petawatt to cheval vapeur petawatt to yoctowatt petawatt to exawatt petawatt to gigawatt
{"url":"http://www.convertunits.com/info/petawatt","timestamp":"2014-04-20T18:25:05Z","content_type":null,"content_length":"22748","record_id":"<urn:uuid:1dee5699-614e-4136-b890-8a317e8c4a35>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Things visible and invisible Via Ben Goldacre's MiniBlog, “homeopath adopts victim posture,” I found a posting which links to “Towards a New Model of the Homeopathic Process Based on Quantum Field Theory” by Lionel R. Milgrom, Forschende Komplementärmedizin 13 (3) 174-183 (2006). This was blogged at the time but I missed it. It's worth picking up on it again now though, especially as I've managed to read the full text without chewing my own foot off. From the summary: “Disease manifestation by the Vital Force (Vf) could be an event similar to spontaneous symmetry breaking in QFT: the curative remedy acting to restore the broken symmetry of the Vf field. Entanglement between patient, practitioner, and remedy might be representable as Feynman-like diagrams.” I'll start by pointing out how short the decoherence time is in a complicated system like a human. Tegmark [Phys. Rev. E 61 (4) 4194-4206 (2000)] estimates 0.0000000000001 s at most: he says, “We find that the decoherence time scales (∼10^-13—10^-20 s) are typically much shorter than the relevant dynamical time scales (∼10^-3—10^-1 s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way.” Those who do believe that the brain is a quantum computer [for example, Hagan et. al. Phys. Rev. E 65 061901 (2002)] calculate a decoherence time more like 0.0001 s, which is a milliard times longer, but still somewhat shorter than a homeopathic consultation. Anyway, the whole idea of a “Vital force” is based on naïve biological intuition, and biology has come on quite a long way in the past couple of hundred years or so. So, following the introduction there's a section on quantum field theory, which “draw[s] heavily for exposition on the writings of" (i.e. is copied out of the books of) Dr John Gribbon (sic.) and Prof. Sunny Y. Auyung (sic.).” The former is a popular science book on quantum field theory, the latter “presents a philosophical analysis of QFT.” He spells these two authors' names incorrectly, though not always. He writes, “In classical physics, fields, e.g., electromagnetic and gravitational, are imagined as attached to and emanating from sources... However in quantum physics, fields are intrinsic and irreducible parts of a relativistic 4-D space-time continuum. And because of the concept of wave-particle duality, this means that ripples in a field may also be described in terms of force-carrying particles, known as bosons, which are exchanged between other quantum entities called fermions. This is known as the first quantisation.” Wrong: first quantization means that we treat our particles as quantum objects moving in classical potentials. Fermions are particles with half-integer spin, and their statistics means that there can only be one fermion in any given state - therefore, fermions make solid matter. He continues, “This idea can be taken further by describing matter particles (e.g., electrons) in terms of waves, which are ripples in another kind of field depending on the type of particle. Thus the particles themselves may also be described in terms of field quanta, and this has been called the second quantisation.” Second quantization is where the classical potential is replaced by the exchange of virtual bosons. In the case of the electromagnetic field these are photons. A bit later, he is talking about symmetry breaking and the Higgs field but he appears to garble this with the concept of zero-point energy slightly. And then, “The Higgs field is what is called a scalar field, which means that it is the same everywhere, hence extremely hard to detect.” A classical scalar field is just something which can be described by a single numerical value for each point in space. (The pressure of the earth's atmosphere, for example, is a scalar field, while the wind makes a vector field.) In Quantum field theory a scalar field is one whose force-carrying bosons have zero spin - so it's true that the Higgs field certainly would be one of these, and the Higgs field would also have the same vacuum expectation value everywhere. He then goes on to talk about Feynman diagrams without, of course, any references to Feynman's scientific publications but rather to his popular science account. At least by not misspelling Feynman's name he avoids a five point penalty. The next section is entitled “Quantum Field Theory as a Metaphor for the Homeopathic Process.” This is a trick which Sokal & Bricmont flagged - there's a “strong” interpretation of this paper which says that homeopathy works by quantum entanglement, but when someone points out that this is nonsense he can claim the “weak” interpretation where it's only a metaphor, albeit one which is used to confuse and impress the audience rather than enlighten them. In any case there are references all over the place to something called “Weak Quantum Theory” - Harald Atmanspacher, Hartmann Römer, and Harald Walach. “Weak Quantum Theory: Complementarity and Entanglement in Physics and Beyond.” Found. Phys. 32 (3) 379-406 (2002). The last of these three authors also turns up here and here, by the way. I can't mine all of this rich seam alone, read for yourself if you are able. (There are some comments at the JREF forum.) So anyway, his summary of quantum field theory contains quite a few mistakes: He gets “first quantisation” and “second quantisation” the wrong way round, he seems to confuse zero-point energy and the Higgs field slightly, he states that “the Higgs field is what is called a scalar field, which means that it is the same everywhere,” which is not what “scalar field” actually means, and he misspells the names of the authors (John Gribbin and Sunny Auyang) whose books he is getting this from. He introduces the “Mexican-hat potential” and then modifies it as if “different energy states of the Vital Force, Vf” correspond to different states of ill health and the state in the centre (where the symmetry is unbroken) represents health. It's not obvious what he thinks the x-axis of his graph is - he then takes a picture which he previous used in the paper I'm going to get to in a minute (which once did useful service as a schematic representation of localized and delocalized electrons in a crystalline conductor but has already been ruined by changing it to be about localized and delocalized energy states of the Vital force) and bungs in his new potential as if a state of chronic ill health moved a person a little bit to one side. Greater (and more tractable) abuses of quantum theory seem to take place in “Patient-Practitioner-Remedy (PPR) Entanglement, Part 7: A Gyroscopic Metaphor for the Vital Force and Its Use to Illustrate Some of the Empirical Laws of Homeopathy” by Lionel R. Milgrom, Forschende Komplementärmedizin 11 (4) 212-223 (2004). “It can be argued that homeopathy might be better ‘explained’ within the conventional scientific paradigm... if it were to draw on certain of the more modern ideas and concepts that have been developed within the physical sciences, particularly physics...” Well you should start by drawing on some biology. And if you want to draw on “modern ideas” you could also learn that taking the square of a complex number [ z^2=(x+iy)^2=x^2+2ixy-y^2 ] isn't the same as taking its modulus squared [ |z^2=|x+iy|^2=(x+iy)×(x-iy)=z.z^*=x^2-y^2 ]. But anyway, most of the actual quantum mechanics seems to have been copied out accurately in this paper even if he gets the units of Planck's constant wrong; he writes Js^-1, not Js. Given that he's going to spend the whole paper talking about angular momentum he'd have been better off by defining ħ=h/2π anyway. The main body of this paper is the gyroscope metaphor for the Vital force. He notices that the faster a gyroscope spins, the slower it precesses if it's not standing vertically (and therefore has gravity trying to pull it into a horizontal position) and he decides that this is the same as having a strong Vital force resisting the effects of “dis-ease” (sic.). He goes through (i.e. copies out) a derivation ending up with simple harmonic motion and then tries to use this to “predict” (i.e. postdict) some “empirical laws of homeopathy” (as if such things exist) although he keeps needing to fudge it. (And I don't find any actual testable, falsifiable predictions yet.) Arnst-Schulz Law as refined by Koetschau: “Every drug has a stimulating effect in a small dose, while larger doses inhibit, and much larger doses kill” - the figure given here is much better than the one in the paper, but it's still not accurate and doesn't show exactly the things which Milgrom tries to show. Koetschau's refinement says that small doses have a stimulating effect, moderate doses at first stimulate and then depress and then the patient returns to normal, and that large doses cause a large stimulation followed by a depression large enough that it leads to death. Milgrom has at hand an equation which says that the Vital force oscillates sinusoidally, and although he's talking about the time dependence of effect now there's no point at which he puts time into his equation. His independent variable is “S[2]” and his wavenumber is “k[2]”; on previous pages he defined “k[2]Σσ[2]” as “the totality of secondary symptoms,” apparently, and then S[2] is the integral (over what variable, is not specified) of Σσ[2] with the k[2] not in it anymore - “Σσ[2] represents the totality of secondard symptoms exhibited by Vf. However, just as the pixels of a television screen are integrated by the brain into an image, which is more that the sum of its pixelated parts, so ∫Σσ[2] represents the overall image of an individual's Vf, integrated over and out of the sum of the secondary symptoms presented to the practitioner.” So following this hand-waving he ends up with Vf=Ae^ik [2]S[2]+Be^-ik[2]S[2] which, if anything (and it's a big if), describes the shape of the wavefunction, and it doesn't change with time because it's a stationary state (i.e. an energy eigenstate); to understand what happens to his Vital force gyroscope under the influence of a “drug” he'd actually need time-dependent peturbation theory - I look forward to seeing this in a future paper. So his attempt to apply this to the variation of effects with time is already kind of knackered and I can't help feeling that I'm trying far too hard to find meaning in it all when there really isn't any, and his misuse of concepts, formalism and terminology is sapping my will to live - he appears to plot a line with an imaginary gradient, calls a system which has a large negative response to a positive stimulus “over-damped” (you actually need feedback for that to happen - an overdamped system would just slowly return to equilibrium) and magically has his sinusoidal oscillation stop after exactly one cycle. As an exercise for the reader, set yourself up a damped harmonic oscillator (in a spreadsheet or something) and work out what really happens in the case of small, medium and large offsets at time zero. Then there's a part where he copies a schematic diagram of the localized and delocalized electronic energy levels of a (1-dimensional) crystalline conductor out of a solid-state physics text book and then relabels “atomic nuclei regularly spaced within a lattice” as “individual Vf's of provers and their associates” and “electronic energy states” as “Vf energy states.” This is to somehow illustrate the concept that “there can be synchronous effects among provers and between provers and those closely related to them who are not otherwise involved in the proving.” - a clause inserted to excuse the placebo effect, I expect. This is the same diagram, with different labels, as the one in the other paper where he's talking about disease as a broken symmetry or something. Also, there's a caveat at the end of the paper about the Vf being a nonphysical entity and needing its own therapeutic `state-space' a bit like a Hilbert space - something which he turns to in the other paper. A more recent article is “Journeys in the country of the blind: Entanglement theory and the effects of blinding on trials of homeopathy and homeopathic provings” by Lionel R. Milgrom, Evidence-based Complementary and Alternative Medicine 4 (1) 7-16 (2007). It's rubbish on many levels, from misspelling the names of authors whose work he refers to (again) and getting the units of Planck's constant wrong (he writes Js^-1, not Js, again) to typographical errors in equations and the fundamental misconception that when a wavefunction collapses it becomes zero. It actually becomes an eigenstate of the operator which collapsed it, but it keeps on being a wavefunction. (See S. Dürr et. al. regarding the Double-Slit experiment, by the way.) He freely exchanges terms like “metaphor,” “model,” and “analogy” when describing the relationship between quantum physics and homeopathy, such that it's not obvious how seriously anything should be taken. (Roughly, a model is usually considered to be a mathematical description of a physical system where the trick is to make it complicated enough that it reproduces the important physical phenomena but simple enough that we can still tell what's going on it in and therefore gain some Insight; a metaphor is a way of describing a complicated physical problem in normal language which we know isn't rigorous; and an analogy is a comparison between a familiar system and an unfamiliar one so that you get a head start understanding the unfamiliar one.) But it's nonsense to suggest that there really is quantum entanglement between humans in verum and control groups during a double-blind trial, and the metaphorical interpretation is equally useless if it doesn't help in understanding any real phenomenon. You certainly can't expect to use the physics of a metaphor (even if you understand it, which he doesn't) to bring new information to the physical thing it's supposed to be a metaphor for. It's like saying, “an electron feels a force,” and then wondering what other emotions it can experience. And he writes, “... in order to comply with implicit assumptions inherent in the DBRCT methodology, homeopathic practioners are expected to engage in a highly questionable (and ultimately confusing) form of self-deception that would be utterly unthinkable in a real therapeutic situation.” I just felt a dip in the world irony level. It's idea that in a double-blind randomized controlled trial of homeopathic remedy versus placebo, where trials demonstrate that homeopathy works no better than a placebo, it's either because the homeopathic practitioners can't bring themselves to deceive their patients (subconcious self-deception obviously coming much more easily) or because the control group is entangled in some quantum-mechanical-but-not-really-resonance way with the patients who get the treatment; so I'd like to see how it would turn out if we had a trial of a homeopathic remedy versus a conventional one. Then we'll see if the entanglement between the two groups still works (so that the homeopathic group performs as well as the conventional one, not just as well as a placebo) and what excuses they come up with when it doesn't. This paper also contains the pseudoscience buzzword "non-linear", because he wants to explain that the practitioner is both part of the entangled patient-practitioner-remedy wavefunction and part of the homeopathic operator which operates on that wavefunction to produce the change in symptoms. This would be nonsense if it actually meant anything. Why bother? Well, this is apparently The Principle That Makes Homeopathy Scientifically Possible: “In a heroic series of articles [11, 12, 13, 14, 15], Milgrom derives many known aspects of homeopathic medicine from his intuition that the TAI [therapeutically active ingredient] is a quantum wave function.” Note: “is a quantum wave function” not “behaves like” or “can be imagined to share some of the spooky properties of” or such like. So there really are people out there who actually think this is going to make homeopathy scientifically respectable. I'm not really expecting to make the blindest bit of difference but somebody has to read this stuff and point out that it's nonsense. Edit: how could I have forgotten this? • 1 comment • 1 comment
{"url":"http://shpalman.livejournal.com/2016.html","timestamp":"2014-04-21T14:47:40Z","content_type":null,"content_length":"84989","record_id":"<urn:uuid:a9329002-b868-4fd8-a7ab-47ec97a2a9c7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
The transactional interpretation of quantum mechanics An entry in Ars Mathematica has alerted me to John Cramer’s Transactional Interpretation of Quantum Mechanics [see also Wikipedia]. It feels exactly right. The trouble with quantum mechanics has always been that it makes accurate predictions but doesn’t make sense. People make a virtue of this. It shows how far above our heads the whole theory is. “My thoughts are not your thoughts, my ways are not your ways, says the Lord”. This is self-indulgent obscurantism and it leads to such New Age loopiness as The Dancing Wu Li Masters (in which, among other delights, every chapter is called Chapter One). The transactional interpretation is solidly and sensibly based on mathematics – specifically, on a bit of mathematics that has mostly been ignored because it’s embarrassing. The equations that describe the propagation of an electromagnetic wave (such as light) have two solutions. One describes a wave that is carrying energy into the future; the other describes a wave that carries negative energy into the past. It is this second solution that people don’t like much and consequently ignore. It turns out that if you don’t just look at the particle that’s sending out the radiation but also look at the one that’s absorbing it, the second particle’s backward wave reinforces the first particle’s forward wave between the particles, but the second particle’s backward wave cancels out the first particle’s backward wave, which is why you don’t see a backward wave before the first particle; and the second particle’s forward wave cancels out the first particle’s forward wave, which is why you don’t see a forward wave after the second particle has absorbed it. That was rather an involved sentence, but it amounts to this: if you embrace all the solutions of your equation instead of cowering away from the one you don’t like, the result you get makes perfect More than sense, in fact: because we have both a wave going from the transmitter T to the receiver R and a wave going backwards in time from R to T, the two particles can interact in the way called for by quantum physics – the “collapse of the wave packet” and so on – without requiring an act of measurement by an external observer. There really can be a world “out there” independent of us. We have got our real universe back. This is all really nice but what makes it smell right to me is something about the mathematics of quantum measurements. When you make a measurement, the famous “collapse” yields an eigenvalue of the measurement operator and puts the system into a state represented by the corresponding eigenvector. This reminds me of what happens when you multiply a vector repeatedly by a matrix: gradually, as you multiply more and more times, you end up with the results growing like powers of the largest eigenvalue of the matrix, and the vector itself turns more and more into a multiple of the corresponding eigenvector. So the eigenvalue behaviour of quantum measurement makes me think “an operator is being repeated infinitely often”. The way I always thought this would work was if time were circular on a sufficiently small scale; but the advanced-and-retarded-wave scenario does the same thing more economically. An additional advantage of the transactional interpretation seems to be that time is relegated from being some grand causal factor to being just another co-ordinate. This parallels what happens in classical dynamics – where, since everything is determined by everything else, the entire behaviour of the universe can be portrayed as a static unchanging configuration in 3-plus-1-dimensional space and the entire notion of “cause” disappears. This precedent seems to say that a physical theory that requires concepts of causality is in some way flawed. If the transactional interpretation of quantum mechanics really does remove that flaw then it is something we have been waiting for for a long time. 2 thoughts on “The transactional interpretation of quantum mechanics” 1. Pingback: Reference: Quantum Time « Mike Cane 2008 2. Someone once complained that the transactional interpretation of quantum mechanics has a vague notion that a photon has to scope out in all directions before it can “find” a partner with which it can share its transaction. I agree that there is a beautiful symmetry in the notion that our universe is “a static unchanging configuration in 3- + 1- dimensional space”. I would go further to conjecture that our universe is part of a multiverse that is not static and unchanging. In this view, our universe will merge with a larger structure as it completes its evolution, yet at the same time our universe could be only one wave pattern existing for an instant (of “hypertime”) in the larger 3- + 1- dimensional space. Imagine that all the photon pairs created in our lifetimes are the result of a simpler wave activity in a portion of the larger 3- + 1- dimensional space. The philosophy given above is akin to the concept of Islam (or surrender) to the will of God or the Calvinist concept of predestination. After sending this message I will play the violin and will enjoy playing it regardless of whether this activity will have been determined at the time of the unfolding of our universe. Does “spirituality” put its feelers beyond the 3- + 1- dimensional space? I know that I would go crazy (with “information overload”) if I were to have access to energies throughout the 3- +1- dimensional space, but I have this feeling that “spirituality” offers me a level of communion that is independent of the transactional interpretation. Having offered this brief acknowledgment of spirituality, I still do continue to share your curiosity of (and desire to understand) how quantum mechanics works. Perhaps if enough people think about it there will be the learning of it within the cosmic realm (with no one individual responsible for the full understanding, but rather the collective consciousness).
{"url":"http://nugae.wordpress.com/2007/07/23/the-transactional-interpretation-of-quantum-mechanics/","timestamp":"2014-04-20T18:59:22Z","content_type":null,"content_length":"30600","record_id":"<urn:uuid:e830c24b-3326-43be-8f68-1d90dc512b03>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Infimum over all vector-valued L^2 spaces up vote 5 down vote favorite Suppose I have a Banach space $E$ (which may be finite dimensional if you wish), a Hilbert space $H$ and a tensor $\tau \in H\otimes E$ in the algebraic tensor product. There are lots of ways to choose a measure $\mu$ and an isometry (not assumed surjective) $\theta:H\rightarrow L^2(\mu)$. Then $(\theta\otimes\iota)\tau \in L^2(\mu)\otimes E \subseteq L^2(\mu;E)$, and so I can compute the norm of $(\theta\otimes\iota)\tau$ in the vector-valued space $L^2(\mu;E)$. Is there an intrinsic (or simple, etc.) characterisation of the infimum (over all choices of $\theta$ and $\mu$) of this norm? (The infimum is non-zero, assuming $\tau\not=0$, as it's always larger than the injective tensor norm. But it's not obvious to me that you actually get a norm on $H\otimes E$ from this). If $E$ is a Hilbert space, then the norm is independent of the choice of $\mu$ and $\theta$; you just get the Hilbert space tensor product norm. But what if, say, $E$ is a finite-dimensional $\ell^\ infty$ space? banach-spaces fa.functional-analysis hilbert-spaces Hi Matt! Are you placing any kind of restriction on $\mu$? Are you allowing any measure space at all? It's rather a long shot, but you know the Nagy-Foias functional model for contraction operators between Hilbert spaces? That might change your question into an equivalent question about Hankel and Toeplitz operators - but it probably will be equally difficult. – Zen Harper Jun 7 '11 at 1:10 @Zen: Well, I did originally mean any measure. But as Pietro correctly says, you can obviously approximate and work just in $\ell^2$... – Matthew Daws Jun 7 '11 at 9:17 add comment 1 Answer active oldest votes Here is a first step. Let $\tau=\sum_{i=1}^n h_i\otimes u_i$ with $h_1,\dots h_n\in H$ and $u_1,\dots u_n\in E$, where w.l.o.g. $h_i$ are orthonormal. Then, in your infimum, you may fix $\mu$ to be the counting measure on $\mathbb{N}$, so that $L^2(\mu)=\ell^2$ (this follows from a simple argument using the density of simple functions and the Gram-Schmidt orthonormalization process), and the infimum writes up vote 1 down $$\inf\Bigg( \, \sum_{k=0}^\infty \, \Bigg \| \, \sum_{i=1}^n \lambda_{k,i}u_i \Bigg\|_E^2 \, \Bigg)^{1/2}$$ taken over all $\lambda_{k,i}$ with $\sum_{k=0}^\infty \lambda_{k,i} \lambda_{k,j}=\delta_{i,j} $. Then, it is not clear to me how to make a further reduction, even in the case of $n= 2$ vectors. add comment Not the answer you're looking for? Browse other questions tagged banach-spaces fa.functional-analysis hilbert-spaces or ask your own question.
{"url":"http://mathoverflow.net/questions/66904/infimum-over-all-vector-valued-l2-spaces?sort=votes","timestamp":"2014-04-21T16:06:08Z","content_type":null,"content_length":"53629","record_id":"<urn:uuid:67614f9a-947f-437d-b94b-cd0c145dd6e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Millbury, MA Calculus Tutor Find a Millbury, MA Calculus Tutor ...I'm that geeky math-loving girl, that was also a cheerleader, so I pride myself in being smart and fun!! I was an Actuarial Mathematics major at Worcester Polytechnic Institute (WPI), and worked in the actuarial field for about 3.5 years after college. Since then I have been a nanny and a tutor ... 17 Subjects: including calculus, geometry, statistics, linear algebra ...I have taught all age groups from kindergartner to graduate/professional students during my own teaching career of almost 30 years. I have also taught students who were not able to perform well in math and science while in primary and middle school. I am consistent and patient. 11 Subjects: including calculus, geometry, biology, Japanese ...In most cases, if I don't know it, I can teach myself and help you improve. My experience is that learning is both simple and pleasurable when you approach it with the right mind. I have a B.S. in Psychological Science from WPI (located in Worcester, MA) which is essentially an undergraduate degree in designing and analyzing scientific experiments. 24 Subjects: including calculus, chemistry, English, reading I currently teach Mathematics, Statistics and Macroeconomics at Quinsigamond Community College. In my spare time, I would like to help students better themselves and their grade by tutoring. With my deep academic and professional experience I believe I can be an asset for your child's future. 11 Subjects: including calculus, French, geometry, statistics ...I enjoy helping others understand the logic and rules that govern our writing, interpretation, and speech. I have almost six months' experience tutoring in English half-time, including grammar. I have a masters degree in math, but have not lost sight of the difficulties encountered in elementary math. 29 Subjects: including calculus, reading, English, geometry Related Millbury, MA Tutors Millbury, MA Accounting Tutors Millbury, MA ACT Tutors Millbury, MA Algebra Tutors Millbury, MA Algebra 2 Tutors Millbury, MA Calculus Tutors Millbury, MA Geometry Tutors Millbury, MA Math Tutors Millbury, MA Prealgebra Tutors Millbury, MA Precalculus Tutors Millbury, MA SAT Tutors Millbury, MA SAT Math Tutors Millbury, MA Science Tutors Millbury, MA Statistics Tutors Millbury, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Millbury_MA_Calculus_tutors.php","timestamp":"2014-04-18T21:20:04Z","content_type":null,"content_length":"24103","record_id":"<urn:uuid:72a56a76-b343-45d3-af89-aa5cc30edf5a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Core Creations Pin It 3.OA.4 Determine the unknown whole number in a multiplication or division quotation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 × ? = 48, 5 = _ ÷ 3, 6 × 6 = ? Unpacking this standard, I found the following I CAN statements. • I can write a FACT FAMILY using multiplication and division. • I can label the positions in a FACT FAMILY. • I can identify the PARTS and WHOLE of an equation. • I can use an array to show related facts. • I can identify symbols used for missing numbers. • I can find the missing number in an equation using the FACT FAMILY. • I can write an equation (number sentence) to match a word problem. • I can write a story problem to match a fact family. The familiar FACT FAMILIES are the backbone of this standard. However, some 3rd grade students will need support to understand the vocabulary of the standard. Jenn at Finally in First offers an engaging activity with pictures of family members labeled with sticky note numbers to help students develop the meanings of "relate" to "relative" to "relation." Using this as a jumping off point, I made a graphic to illustrate the connection between operations and size of units. This flash resource from Teacher Network (requires log-in with FREE registration) is part of a downloadable lesson packet that can be used with a projection device to practice creating multiplication and division equations with a set of numbers. Simply insert the three numbers and drag the circles to the proper position within the equation blanks. Update: 8.4.2012 I found a set of multiplication strategy posters at the math learning center. 0 comments:
{"url":"http://commoncorecreations.blogspot.com/2012/07/3oa4.html","timestamp":"2014-04-19T07:24:15Z","content_type":null,"content_length":"87312","record_id":"<urn:uuid:dbe2e006-979b-489a-b0be-8b57b19f51ba>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Censored & Nonnormal data Anonymous posted on Wednesday, September 12, 2001 - 11:53 am Using Mplus, what is the best way to deal with Censored & Nonnormal data? Do you recommend any estimation method? I found from a stat book that WLSMV in Mplus is appropriate for nonnormal data. Is it also good for censored data? Linda K. Muthen posted on Thursday, September 13, 2001 - 9:40 am For continuous censored variables, I would recommend the Mplus estimator MLM. For categorical outcomes with floor or ceiling effects, I would recommend the Mplus estimator WLSMV. You might also consider that the censoring or floor and ceiling effects is the result of a mixture of subpopulations and consider mixture modeling. Anonymous posted on Thursday, September 13, 2001 - 10:29 am Thanks, Linda. Is it possible to use simultaneously missing data technique and MLM or WLSMV? Linda K. Muthen posted on Thursday, September 13, 2001 - 11:49 am No, missing is available only for ML. There's a table on page 38 of the Mplus User's Guide which shows which estimators are available for various situations. You may find this helpful. Anonymous posted on Sunday, October 26, 2003 - 10:43 am Does the newest version of Mplus have bootstrapping capabilities? Linda K. Muthen posted on Sunday, October 26, 2003 - 10:57 am Mplus Version 3 will have bootstrapping of standard errors and confidence intervals. Liesbeth posted on Wednesday, September 29, 2004 - 4:00 am Is there a single technique in MPLUS that can deal with missing values while the assumption of normality is violated? Linda K. Muthen posted on Wednesday, September 29, 2004 - 3:48 pm There is no general theory for non-normality robust MAR missing data handling. See Web Note 2. Dustin posted on Thursday, December 30, 2004 - 1:40 pm I am attempting to perform a four factor CFA, with each factor consisting of approximately 8-items. I have some missing data, and would also like to compare nested models to compare different factor structures. The problem is that the items are from a measure that uses a likert rating from 0-2. Here is the question: Can I treat the items as non-normal continuous rather than ordinal and use MLR in Mplus 3? From what I understand, I can not do nested model chi square tests using WLSMV for ordinal variables. Are there any articles on when is it appropriate to treat likert items as continuous non-normal rather than ordinal? Linda K. Muthen posted on Thursday, December 30, 2004 - 1:44 pm The DIFFTEST option of Version 3 allows chi-square difference testing with WLSMV. Dustin posted on Friday, December 31, 2004 - 4:55 am That is great. Out of curiosity, do you have any opinion on when it is appropriate to treat likert scale items as continous non-normal rather than ordinal. There seems to be a dearth of research on the subject, so references in this regard would also be useful. Thanks again. I look forward to attending some of your courses in the future. bmuthen posted on Friday, December 31, 2004 - 5:49 am See the two Muthen-Kaplan references in the Mplus Reference section under SEM. Dustin posted on Friday, December 31, 2004 - 7:33 am Last question regarding this issue. The first step of my study involves a CFA with ordinal item indicators (0-2) to test a hypothesized four factor solution. The second step involves relating these factors to longitudinal data (probably a growth curve model) regarding the development of delinquency over 6 follow-up periods. While WLSMV seems approriate to determine the factor structutre, ML estimation seems more appropriate for handeling the missing data present on the outcome variable over time (especially since WLSMV uses pairwise deletion). Would it be appropriate to first run the CFA with WLSMV and save the factor scores. Then run a ML growth model using the factor scores as predictors. Any other suggestions would be extremely helpful. bmuthen posted on Friday, December 31, 2004 - 7:47 am It sounds like you have a factor as one of the time-invariant covariates of a growth process. What you suggests is reasonable. Here are some thoughts on alternatives. Although doable, ML for categorical outcomes leads to heavy computations here given at least 3 latent variables (1 for the factor and 2 more for a growth intercept and slope). And, as you say, WLSMV would work with pairwise deletion. One question is what predicts the missingness on the growth outcome. Is it predicted by observed covariates, the factor covariate, or by the outcome at time 1? If the former, WLSMV might still be ok since WLSMV allows MAR wrt covariates. If one of the latter, WLSMV is not ok. Dustin posted on Friday, December 31, 2004 - 8:03 am There are actually four factors that are assessed prior to the growth process, not just one. As a result, ML for categorical outcomes will not produce a chi-square statistic for testing nested model fit (Mplus says is too complex to estimate). The issue of covariates being associated with missingness in the growth process is an interesting one. I am planning on cotrol for a prior history (lifetime) of deliqnuency at Time 1, while the intercept and slope of the growth factor will be assessed at times 2-7 (delinquency over the last 6 months). It seems like you are saying that this may help adjust for the fact that missingness may be related to delinquency. Thanks for your helpful comments. As I loyal user of Mplus, this website is great. Anonymous posted on Friday, March 18, 2005 - 1:24 pm When doing poisson regression models in Mplus, is there a way to correct the standard errors for overdispersion? Linda K. Muthen posted on Friday, March 18, 2005 - 3:54 pm Although we have not studied this yet using a simulation study, we think that the MLR standard errors do this. Do you have a reference for a correction for overdisperson that you are thinking about? bmuthen posted on Saturday, March 19, 2005 - 4:57 am Perhaps you are referring to zero-inflated Poisson (ZIP) modeling when you say "overdispersion". If so, yes Mplus can do ZIP modeling and therefore gets the correct SEs. Anonymous posted on Monday, May 30, 2005 - 1:02 pm If I have non-normal data but a very large sample size (>9000) am I ok if using MLE? bmuthen posted on Monday, May 30, 2005 - 3:39 pm You don't need a very large sample for MLE to give non-normality robust point estimates (and non-normality robust SEs when using the Mplus MLR estimator). But your title mentions "censored" which implies that you have observed variables with a floor or ceiling effect, in which case a standard linear model is probably not appropriate (and large sample size gives no advantage) - in such situations it is better to switch to a non-linear model, such as a censored-normal model, a zero-inflated model, or a two-part model (see the Mplus Version 3 User's Guide). Henri Bonnabau posted on Thursday, November 17, 2005 - 3:41 am could you give me some papers or books to consult for nonnormality data with hight skweness and kurtosis estimation. Thanks in advance bmuthen posted on Thursday, November 17, 2005 - 5:15 am Search for articles by Mardia. Annonymous posted on Monday, January 30, 2006 - 7:57 am If I can reduce skewness in my dependent variable from 2.03 to 0.056 without having to remove any outliers, is there any advantage to doing this? In other words, is there a degree of skewness after which WLS is no longer an appropriate estimator? Linda K. Muthen posted on Monday, January 30, 2006 - 8:22 am What is the scale of your dependent variable? And how did you reduce the skewness? Annonymous posted on Monday, January 30, 2006 - 8:29 am it is continuous, and the skewness was reduced by transforming the data in SAS with a macro for the box-cox approach to transformation. Linda K. Muthen posted on Monday, January 30, 2006 - 9:18 am I would not recommend WLS which with continuous outcomes is ADF unless you have a very small model and very large sample. I also would not transform to avoid skewness unless there is another reason to do so, for example, a substantive reason. Instead I would use the MLR estimator. Matt Diemer posted on Wednesday, February 15, 2006 - 12:06 pm To add a follow-up question to this thread: How would you all recommend addressing skewness/kurtosis in a complex sample design data set using categorical indicators/variables? [also some missing data] My intent was to use WLSMV (because of the categorical indicators) and have reviewed Yu's (2002) dissertation re: some of these issues. any suggestions/recommendations for references would be much appreciated. Thank you, bmuthen posted on Thursday, February 16, 2006 - 6:13 am With categorical variables, the skewness/kurtosis is not a problem for model assumptions as it is with continuous outcomes where normality is violated. The only issue is the possibility of zero cells in bivariate tables, which can be problematic in that information on correlations between variables is limited. The additional feature of complex sample design is incorporated in Mplus. I have a 1989 Soc Meth & Research article on skewed binary outcomes that might be relevant - see our web site under References for categorical data. Nina Zuna posted on Sunday, August 20, 2006 - 2:38 pm Dear Drs. Muthén and Muthén, I was reading an older book chapter entitled SEM with Non-normal variables: Problems and Remedies by West, Finch, & Curran, 1995; the authors noted the use of a CVM estimator by Muthén. 1. Am I correct to assume that CVM at that time only referred to WLS, but now there are several estimators available in Mplus to handle non-normal data (e.g., MLR, MLM, WLS)? Secondly, obviously extra caution should be extended when using Likert scales with <10 response options, particularly with Multiple Group Measurement Invariance testing (Lubke & Muthén,2004). 2. I am doing a multiple grp invariance test, have missing data, and am including the means in my model. I noticed I couldn't use MLM or MLMV with missing data. Is it OK to use MLR for Multiple grp Invariance in this situation (my scale is 1-5) or will my means by biased since not addressed in the MLR estimator (or are they)? What are my options? 3. Is a Likert scale considered categorical or continuous, non-normal? If one assumes a Likert scale is categorical as opposed to continuous, is there any need to do the test for multivariate normality or is this test reserved for only continuous variables? Thank you kindly for your recommendations. Bengt O. Muthen posted on Monday, August 21, 2006 - 6:37 am 1. CVM referred to categorical variable methodology, which is represented by WLS, WLSM, and WLSMV, where the latter is the current Mplus default. Mplus now also does CVM using ML and MLR. MLM refers to analysis with continuous outcomes using non-normality robust ML. 2. Multiple-group analysis with categorical outcomes can be done by WLSMV or by ML(R), where the latter needs to use KNOWNCLASS for the multiple groups. If you declare the dependent variables as categorical, the correct model will be used (irrespective of estimator) and therefore no problems with the estimates. 3. Likert scales can be considered categorical, continuous-normal, or continuous normal - it is your choice. Only if they are strongly skewed with pronounced floor or ceiling effects would I use CVM. Normality tests are only for continuous outcomes. Scott posted on Tuesday, August 21, 2007 - 11:07 am I am conducting LGMM on data with a cohort-sequential design. I have missing data (not missing at random but due to the design). The DVs are continuous (index of self report delinquency). 1) Given the missingness, is there a way to test for nonnormality, beside the SK tests (outlined in Muthen, 2003)? 2) Instead, are the MLM and MLR options robust enough to handle most deviations from normality? 3) Also, what is the difference between MLM and MLR for the estimator options? 4) Which should I use in my case? Linda K. Muthen posted on Thursday, August 23, 2007 - 12:10 pm 1. I don't know of any way. You could run with ML and also with MLM or MLR and see if results differ thereby deducing whether there is non-normality. 2. Yes. 3. MLM is described in TechnicalAppendix 4. MLR is described in Technical Appendix 8. MLR uses a sandwich estimator. 4. We use MLR as the default. You should get very close results using either. Sofia Diamantopoulou posted on Monday, April 27, 2009 - 6:39 am Dear Drs Muthen, I am estimating a path model using MLR estimator and I wonder why chi-square is not included in the results for the tests of model fit. Thank you in advance. Linda K. Muthen posted on Monday, April 27, 2009 - 7:45 am You must be using this with outcomes that are not continuous. In this case, means, variances, and covariances are not sufficient statistics for model estimation and chi-square and related fit statistics are not available. Michin Hong posted on Wednesday, December 29, 2010 - 10:01 am Dear Drs. Muthen, I am pretty new to Mplus and working on a SEM model using a data set(n=1837). For some reasons, I got the following warning messages for 8 different variables. They are all countinous variables with more than 5 categories and seem to be okay in terms of normality (skewness and kurtosis are all less than 2). I alreday have 5 categorical variables out of 13 variables in the model including the oridnal endogenous variable with 4 categories. So, it seems that the program tries to treat all my variables FYI, I used WLSMV estimator. Thank you. Linda K. Muthen posted on Wednesday, December 29, 2010 - 10:43 am Please send you input, data, output, and license number to support@statmodel.com. It sounds like you are not reading the data correctly. Grant Bickerton posted on Thursday, February 24, 2011 - 1:07 pm I am new to MPlus, and looking to use the MLM estimator to help manage some kurtosis among in some variables. HOwever, I just cannot seem to get it to work. I have put ESTIMATOR = ML; in the analysis command, as well as have LISTWISE=ON; in the data command, but then I get the following error still and the analysis proceds but using the ML estimator. *** WARNING in ANALYSIS command Starting with Version 5, TYPE=MISSING is the default for all analyses. To obtain listwise deletion, use LISTWISE=ON in the DATA command. 1 WARNING(S) FOUND IN THE INPUT INSTRUCTIONS I am sorry for such a basic question, but is there something very obvious that I am not including? Many thanks. Linda K. Muthen posted on Thursday, February 24, 2011 - 2:44 pm The warning is just to inform you of a change starting with Version 5. The MLM estimator can be used only with complete data. Just put ESTIMATOR=MLM; in the ANALYSIS command to get MLM. You won't get that with ML. The MLR estimator is also robust to non-normality and can be used with incomplete data. anja koen posted on Tuesday, March 08, 2011 - 3:58 pm Is the chi-square calculated by the MLR estimator a Satorra-Bentler scaled chi-square? Thanks! Bengt O. Muthen posted on Tuesday, March 08, 2011 - 6:36 pm No. Satorra-Bentler is the MLM chi-2. The MLR chi-2 is asymptotically the same as the Yuan-Bentler (2000) T2* version (see V6 UG, page 533). Cameron Hopkin posted on Monday, October 24, 2011 - 12:13 pm I'm a relative novice in regards to both M Plus and SEM. I've got a skewed count variable -- drug use in the last 30 days -- as my outcome variable, and would like to use zero-inflated poisson modeling to account for that non-normality, but I've got an interaction variable that is important to my analysis. As I understand it, ZIP modeling introduces a "structural zero" parameter -- how would that parameter function alongside interaction effects? Would I need to include it as a possible interactor as well? I'm not sure if I'm even thinking about this in the right way. Any pointers you could give would be appreciated. Bengt O. Muthen posted on Monday, October 24, 2011 - 6:25 pm You can include the interaction, and other covariates, also in the prediction of the binary part of the ZIP (zero-inflated Poisson), that is, the prediction of being at zero or not. Another approach is to specify the outcome as negative binomial where the inflation part is "built in" and doesn't need referring to. Cameron Hopkin posted on Saturday, October 29, 2011 - 7:00 am Thank you. Speaking of the same model, what is the best way to deal with centering the indicators for the latent interaction variable? My advisor is under the impression that M Plus has some centering scheme by default for interactions, but all my data manipulation, including the construction of the AxB indicators, was done in a different program. If, as I suspect, there is no default centering, would you recommend centering all indicators and including a mean structure, or using residual-centering with no mean structure as advocated in (Little, T. D., Bovaird, J. A., & Widaman, K. F. (2006). On the merits of orthogonalizing powered and product terms: Implications for modeling interactions among latent variables. Structural Equation Modeling, 13, 497–519)? Does the fact that this is a ZIP model have any bearing on that decision? Thank you for the help! Bengt O. Muthen posted on Saturday, October 29, 2011 - 8:33 am It sounds like you are interested in a latent variable interaction as a predictor. Note that Mplus offers the XWITH approach to that so that it is not necessary to use products of factor indicators. The latent variables entering the XWITH interaction are in typical models centered, that is, have zero means. Cameron Hopkin posted on Saturday, October 29, 2011 - 12:13 pm In attempting to use the XWITH option I seem to have run astray, and am getting a "fatal error: reciprocal interaction problem" message. I think I've reproduced the syntax from the manual's example (5.13) as it applies to my situation (as described directly above), but perhaps I'm wrong. The model follows: names = ..... usevar = ual30 Imp1 Imp2 Imp3 SS1 SS2 SS3; missing = blank; count = ual30 (i); type = random; Imp by Imp1 Imp2 Imp3; SS by SS1 SS2 SS3; IMPxSS | Imp XWITH SS ual30 on Imp SS; ual30 on IMPxSS; ual30#1 on Imp SS; ual30#1 on IMPxSS; Output: TECH1, TECH8; Cameron Hopkin posted on Monday, October 31, 2011 - 7:58 am Never mind. It was a missing semi-colon on the XWITH statement. Pshh. Don't I feel silly. Tracey LaPierre posted on Sunday, November 06, 2011 - 3:08 pm Dear Drs Muthen, I am running a path analysis with depressive symptoms as my dependent variable, modeled as a continuous latent variable with 12 observed ordinal indicators that are a count of the number of days in the past week each symptom was experienced. I also have three latent variable mediators with categorical indicators, a categorical independent variable and a number of control variables. I am using WLSMV as my estimator, theta parameterization and calculating bcboot confidence intervals. l have good model fit (measurement & structural). I interpreted the coefficients for the continuous latent dependent variable as regular OLS coefficients. I am trying to respond to a reviewers concern about the non-normality of my dependent variable (and some of the latent mediators with the same issue). In previous work using Stata I have created a single indicator by summing the 12 indicators and logged it to reduce nonnormality, then tested for violations of the normality assumption. 1) Is there a corresponding assumption of normality for continuous latent variables in SEM? 2) How does one test if this assumption is violated and what are the consequences? 3) Can you transform a latent variable to normalize it? 4) Treating my indicators as categorical produces better model fit than treating them as continuous. Should I be logging them and treating them as continuous instead, even if it results is worse model fit? Bengt O. Muthen posted on Sunday, November 06, 2011 - 5:58 pm The fact that your ordinal indicators have skewed distributions does not mean that the factor behind them has a non-normal distribution. You can have a normal factor influencing ordinal indicators and the reason they are skewed is that you are recording a rare event. The strong non-normality of the indicators is really only a potential problem if you treat them as continuous instead of categorical (or counts) (1) Normality is typically assumed for latent variables in SEM. (2) It is hard to test if this holds. (3) I would not transform the indicators. If they are counts you can also treat them as counts instead of categorical. Dennis Föste posted on Wednesday, June 19, 2013 - 6:12 am Dear Drs. Muthén and Muthén, is there any equivalent to the Stata SSC censornb (Hilbe, 2005) in MPLUS for a survival parameterization of censored negative binomial regression? The survival parameterization of censoring allows censoring to take place anywhere in the data, not only at cut points (Hilbe 2012: 406). Thank you in advance! Bengt O. Muthen posted on Wednesday, June 19, 2013 - 11:57 am We have truncated and hurdle negbin, but not survival-parameterized censored negbin. Eveline Hoeben posted on Tuesday, July 30, 2013 - 6:59 am Dear Drs. Muthén and Muthén, I want to estimate a mediation model with a count dependent variable (negative binomial). The paper ‘Muthén, B. (2011). Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus.’ was very helpful and I was able to run the example inputfiles in tables 52-54. However, if I apply these syntaxes to my own data I get the following error message:'Unknown group name 1 specified in group-specific MODEL command.’ The message shows for count = y(p) as well as for count = y(nb). Below is (part of) my inputfile, can you tell me what I am doing wrong? I use Mplus version 7. Many thanks, TYPE = RANDOM; ESTIMATOR = ML; y ON x*.116(beta2); beta1 | y ON m1; beta1 ON x*-.009(beta3); m1 ON x*.182(gamma1); Linda K. Muthen posted on Tuesday, July 30, 2013 - 7:24 am Please send the output and your license number to support@statmodel.com. Matthew Clement posted on Friday, September 27, 2013 - 6:35 am I'm hoping to use poisson with a log-link and robust standard errors to model a non-negative, positively skewed, continuous dependent variable. Is this possible in Mplus? If so, how is it done? I can't find any information on the discussion board about this. Linda K. Muthen posted on Friday, September 27, 2013 - 6:50 am This is available in Mplus using the COUNT option. This is appropriate for count variables not for continuous variables. Matthew Clement posted on Friday, September 27, 2013 - 7:38 am Thanks for the response. But, why can't poisson be employed to model continuous variables in Mplus? There is a use for it, and several other statistical software packages do it: http://www.stata.com/ If you have any solution, it would be greatly appreciated. I'm a grad student, and I've turned to Mplus because of its superior SEM modeling capabilities. Tihomir Asparouhov posted on Friday, September 27, 2013 - 10:33 am You can use the following example as a building block to accommodate GLM with log link. data: file is 1.dat; variable: names=x y; constrain=x; usevar=y; model: [y] (mean); model constraint: new(b); mean=exp(b*x); Example 5.23 in the user's guide also illustrates some of these features but that example is unrelated to GLM. Matthew Clement posted on Friday, September 27, 2013 - 1:24 pm Thanks for the response, Tihomir. I've never tried to incorporate a model constraint in Mplus before; I don't know what you're doing. What's "x"? Etc. Could you elaborate or point me to another source that talks about how to estimate generalized linear models in Mplus? Thanks, Matt Tihomir Asparouhov posted on Friday, September 27, 2013 - 2:39 pm X is the predictor variable that comes from the data. In the above model Log(E(Y|X))=b*X which is the GLM model with log link function, i.e, Matthew Clement posted on Sunday, September 29, 2013 - 7:35 am Thanks again for your help. I have the following CONTINUOUS variables: y1 (poisson, with log link) y2 (gaussian, with identity link) Ultimately, I would like to estimate the following nonrecursive model with robust standard errors: y2 = y1 + x1 + x2 y1 = y2 + x1 How is that done in Mplus? (I don't see how the poisson family of GLMs is specified in your above example.) Tihomir Asparouhov posted on Monday, September 30, 2013 - 11:27 am This should work names=y1 y2 x1 x2; usevar=y1 x2 y2d x1d; define: y2d=y2; x1d=x1; constrain=y2 x1; model: [y1] (mean); y2d on y1 x1d x2; model constraint: new(a b1 b2); mean=exp(a+b1*y2+b2*x1); I am doubling the variables that need to be in the constraint and the model statement with the input but you can alternatively do that outside of Mplus. Also I forgot the intercept in my previous Bengt O. Muthen posted on Monday, September 30, 2013 - 12:38 pm A FAQ on this has now been posted: GLM with log link and Poisson regression for continuous variables Matthew Clement posted on Monday, September 30, 2013 - 4:49 pm As suggested, I ran the following: names=y1 y2 x1 x2; usevar=y1 x2 y2d x1d; define: y2d=y2; x1d=x1; constrain=y2 x1; model: [y1] (mean); y2d on y1 x1d x2; model constraint: new(a b1 b2); mean=exp(a+b1*y2+b2*x1); But now I get the error: *** ERROR An internal error has occurred. This may be caused by an error in the DEFINE command or in the USEOBSERVATIONS option of the VARIABLE command. Check these statements in your input. Tihomir Asparouhov posted on Tuesday, October 01, 2013 - 9:50 am Ops sorry - switch rows 4 and 5 Matthew Clement posted on Wednesday, October 02, 2013 - 5:43 am One last question... If I wanted to add a nonrecursive relationship to this model, what would the model and constraint statements look like? Here are the variables again: y1 (poisson, with log link) y2 (gaussian, with identity link) The model looking like this: y2 = x1 + x2 y1 = x1 y2 <-> y1 Tihomir Asparouhov posted on Wednesday, October 02, 2013 - 8:35 am names=y1 y2 x1 x2; usevar=y1 y2 x2 x1d; define: x1d=x1; model: [y1] (mean); y2 on x1d x2; y1 with y2; model constraint: new(a b1); mean=exp(a+b1*x1); Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=11&page=143","timestamp":"2014-04-16T19:33:36Z","content_type":null,"content_length":"109399","record_id":"<urn:uuid:d556c6c1-1645-42b9-a622-91a6c2dd88e2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] portable doctests despite floating points numbers Sébastien Barthélemy barthelemy@crans.... Fri Oct 15 03:12:28 CDT 2010 Hello all, I use doctest for examples and tests in a program which relies heavily on numpy. As floating point calculations differs slightly across computers (32/64 bits), I have troubles writing portable doctests. The doctest documentation [1] advises to use numbers in the form int/2**n. This is quite restrictive. Using numpy.set_printoptions(suppress=True) also helps, but I still have problems with numbers around 0. On some platform a result prints as 0., on others as -0. Is there a workaround? [1] http://docs.python.org/library/doctest.html#warnings More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-October/053334.html","timestamp":"2014-04-20T09:39:13Z","content_type":null,"content_length":"3317","record_id":"<urn:uuid:ce5b62fc-50eb-4a9b-84d3-4a391c2a4df7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement is assigning a real number to some physical quantity such as length, width, height, volume, mass, brightness, etc. A physical apparatus suitable for the measurement to be made is used to make the measurement. Measurement involves error, so it is appropriate to discuss accuracy and precision at some point. A yard stick measures length in yards. A meter stick measures length in meters. A ruler measures length in feet. A thermometer measures temperature in degrees F or C. Using these tools we can measure length and width to find area, and if we measure height, we can then find volume. With yardsticks and rulers we deal heavily with fractions and conversions between these fractions. With meter sticks we deal with decimal points, moving them left or right as we convert, no fractions Below is part of a ruler, exaggerated in size for clarity: The top scale is in inches (in), that means the major marks, labeled with numbers, represent 1 inch in length. We use the double quote " to mean inches, so each major mark represents 1". Counting all the marks between 0" and 1' we find 16 altogether that are equally spaced between 0" and 1'. So, each mark represents 1/16". Continuing, taking 2 of these marks at a time we see there are 8 such pairs between 0" and 1". As you can see this mark represents 2 of the 1/16" marks and there are 8 of them, so each of these 8 marks represents 1/8". Likewise, the 3rd larger mark groups 2 of these 1/8" marks and 4 of the 1/16" marks and there are 4 of these groups, so this mark must represent 1/4". Finally, the next larger mark represents 2 of these 1/4" marks, 4 of the 1/8" marks and 8 of the 1/16" marks, and 2 of these marks represents 1", so one mark would represent 1/2". Moving the other way, each mark in order from larger to smaller represents 1/2 of the previous mark. Starting with 1", then next largest mark is 1/2 of 1" or 1/2". Now moving to the next mark, that mark is 1/2 of the 1/2" mark, so it represents 1/4". Continuing, the next mark is 1/2 of 1/4" so it represents 1/8". And finally the smallest mark is 1/2 of 1/8" which is 1/16". We say that this scale is calibrated to the nearest 1/16". This means that it's precision is 1/16". But, since we cannot measure to 1/32" or smaller, the best we can say is a measurement made with this ruler is accurate to 1/16" ± 1/32". The error is expected to be 1/2 above or below the smallest unit of measurement with any instrument. One cannot speak of an exact measurement, only of an approximation with an error bound. Now, let's investigate the metric side of that ruler. Each major mark represents 1cm. There are 10 smallest marks in equally spaced each of these marks. So each of these marks would represent 1/10 of 1cm. 1/10 of 1cm is 1mm. Notice there are marks that group 5 of these smallest marks, these next larger marks would represent 1/2 of 1cm which is 0.5cm. In the English system we use powers of 2 to measure lengths; In the metric system we use powers of 10. The metric system is easier since powers of ten require only movement of the decimal left or right. The English system requires a multiplication or division by some power of 2 then a conversion to a decimal. Example, say we measured a length of 1 and 5/16" and wanted to express this in feet. We know there are 12" in a foot. So we need to divide 1 5/16" by 12". Let's see, 1 5/16" is 21/16"; now divide this by 12 and we get (21/(16*12)) = 21 / 192 ft = (I need my calculator here...) = 0.0625 ft. Hmm, looking at the ruler above, 1 5/16" looks to be approximately 32mm. 1mm is 1/1000 of a meter, that is 1mm = 0.001m. If we have 32 of these mm then that must be 0.032m. And we're done. Let's look at this a bit closer. 1m = 10dm (dm = decimeter, dec --> 10) 1m = 100cm (cm = centimeter, cent --> 100) 1m = 1000mm (mm = millimeter, mil --> 1000) We rarely speak of decimeters. So let's focus on cm and mm. So we measured 32 mm. We want this in meters. 32.0mm a mm is smaller than 1m by 3 decimal places look, 1m = 1000 mm, divide by 1000, and we get 1/1000 m = 1 mm. Dividing any number by 1000 involves shifting the decimal to the left by 3, or multiplying by 3 factors of 10 (1000 = 10 * 10 * 10, 3 tens, 3 decimals) So. 3.20 (1 shift) 0.320 (2 shifts) 0.0320 (3 shifts) 0.0320 m Let's try another. What is this measurement in cm? 32.0 mm = 3.20 cm Ok, since 1m = 100cm and 1m = 1000mm a meter is a meter so 100cm = 1000mm, dividing by 100 we get 1cm = 10mm (which we saw in the ruler above) So, 1mm = 1/10 cm, and we have 32 mm, dividing by 10 moves the decimal to the left by 1. 32.0 mm = 3.20 cm. One last example. Say we have 1.32 m. How may cm is this? 1.32 m = 132 cm 1m = 100cm. we multiply by 100 that is, shift the decimal to the right by 2. 1.32m = 13.2 dm (1 shift) = 132 cm (2 shifts) Converting between the English system and the metric system requires a calculator. For length we have 2.54 cm = 1" (this should be committed to memory) all other length conversions can then be done. For example: 12in * 2.54 cm/in = 30.48cm (0.3048 m, 3048 mm) 1 yard = 3 ft, so 1 yard = 3 * 30.48cm = 91.44cm (0.9144m) (so a yard stick is slightly smaller than a meter stick. ) A 100 yard football field would be 91.44m. 1 mile is 5280ft/mi * (0.3048 m / ft) = 1609.3m We can 1km = 1000m, so we have 1609.3/1000 = 1.6093km 1 mile = 1.6093 km. If you look at the speedometer 60 MPH is just under 100 KPH. Other metric scales follow this moving decimal pattern where the units are all powers of ten and all use the same prefixes. Ok, one more example. 1) How many football fields one after the next are required to measure out one mile? A football field is 330 yards long. A mile is 1760 yards. By dividing 1760 by 330 we get 5.86666.... fields, i.e., almost 6 fields 2) How many football fields make up one quarter of a mile? About 6 / 4 = 1½ fields
{"url":"http://www.k12math.com/math-concepts/Measurement.htm","timestamp":"2014-04-19T04:22:58Z","content_type":null,"content_length":"30068","record_id":"<urn:uuid:14199a05-0200-475c-96b8-2a15c9b97a34>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 610 – 100 = ? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5083eb69e4b0dab2a5ec3400","timestamp":"2014-04-19T07:17:24Z","content_type":null,"content_length":"76427","record_id":"<urn:uuid:2b40b44f-8e26-455c-bca4-215c31a74d3e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
A number long considered to have strange, mystic properties. A phrase in a book written during the Dark Ages gave rise to the superstition that cats have nine lives. English author and satirist William Baldwin wrote in his Beware the Cat, "It is permitted for a witch to take her cat's body nine times." There were nine Muses, nine rivers of Hades, and nine heads on the Hydra. It took nine days for Vulcan to fall from the heavens. The phrase "nine days' wonder" comes from the proverb "a wonder lasts nine days and then the puppy's eyes are open." A cat-o'-nine-tails is a whip, usually made of nine knotted lines or cords fastened to a handle that produces scars like the scratches of a cat. Being on "cloud nine" may have its origin in Dante's ninth heaven of Paradise, whose inhabitants are blissful because they are closest to God. The term "the whole 9 yards" came from World War II fighter pilots in the Pacific. When arming their planes on the ground, the .50-caliber machine gun ammo belts measured exactly 27 feet, before being loaded into the fuselage. If the pilots fired all their ammo at a target, it got "the whole 9 yards." Less certain – though there is no shortage of theories – is the sources of the expression "dressed to the nines." Nine is the largest single-digit number and the one that occurs least frequently in most situations; an exception is the tendency of businesses to set prices that end with one or more 9's. Because 9 is one less than the base of our number system, it is easy to see if a number is divisible by 9 by adding the digits (and repeating on the result if necessary). This process is sometimes called casting out nines. Similar processes can be developed for divisibility by 99, 999, etc. or any number that divides one of these numbers. Nine has many other interesting properties. For example, write down a number containing as many digits as you like, add these digits together, and deduct the sum from the first number. The sum of the digits of this new number will always be a multiple of nine. Related category
{"url":"http://www.daviddarling.info/encyclopedia/N/nine.html","timestamp":"2014-04-16T07:17:49Z","content_type":null,"content_length":"8202","record_id":"<urn:uuid:7fdf2c06-afa9-4ea8-a4d6-52aff310d9b5>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
(This article was first published on Rmetrics blogs, and kindly contributed to R-bloggers) To leave a comment for the author, please follow the link and comment on his blog: Rmetrics blogs. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web... Variable selection using automatic methods When we have a set of data with a small number of variables we can easily use a manual approach to identifying a good set of variables and the form they take in our statistical model. In other situations we may have a large number of potentially important variables and it soon becomes a time Helping the blind use R – by exporting R console to Word Preface – R seems a natural fit for the blind statistician For blind people who wish to do statistics, R can be ideal. R command line interface offers straight forward statistical scripting in the form of question (what is the mean of x) followed by an answer (0.2). That is, instead of point-and-click dialog boxes with jumping windows of... Using R for Introductory Statistics, 3.1 Pairs of categorical data The grades data.frame holds two columns of letter grades, giving pairs of categorical data, like so:prev grade 1 B+ B+ 2 A- A- 3 B+ A- ... 122 B BThis type... Because it’s Friday: Ash I lived for 10 years within sight (on a clear day, anyway) of Mount St. Helens, and had seen and heard a lot about the devastation caused by the eruption and pyroclastic flow 30 years ago. But I'd heard relatively little about the effects of the ash cloud settling on land. I was surprised to learn about the effect... Color choosing in R made easy I don’t know about you, but when I want to make a graph in R, I handpick the colors, line widths etc… to produce awesome output. A lot of my time is spent on color choosing, I had to find a more convenient way of doing so. Earl F. Glynn’s “Chart of R colors” posted [&hellip R 2.11.1 scheduled for May 31 As announced by the R Core Team, the next update to R will be 2.11.1 to be released on May 31. Despite being a minor-minor version increment, this release is expected to sport at least one new feature: BIC (in package stats4) will work with multiple fitted models, like AIC does. There will also be some improvements to the... Tip of the day: Keep the console active in R Productivity Environment Jared Lander on Twitter asks: When I alt-tab into R Enterprise how do I make the #rstats console the active window by default. Now it goes to solution explorer. Revolution's crack Support engineer Stephen Weller offers this solution: The best way to do this is to right-click on the R-Console window and make it a 'Tabbed Document'.... Random sudokus [p-values] I reran the program checking the distribution of the digits over 9 “diagonals” (obtained by acceptable permutations of rows and column) and this test again results in mostly small p-values. Over a million iterations, and the nine (dependent) diagonals, four p-values were below 0.01, three were below 0.1, and two were above (0.21 and 0.42).
{"url":"http://www.r-bloggers.com/2010/05/page/7/","timestamp":"2014-04-16T16:35:46Z","content_type":null,"content_length":"36875","record_id":"<urn:uuid:fb87ecfa-223b-41f3-843f-6cb0a2b93abf>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
phi(n) and sigma(n) Numbers, phi(n) and sigma(n) phi(n) and sigma(n) For any positive integer n, the function div(n) is the number of positive divisors of n, including 1 and n. The function φ(n) is the number of elements relatively prime to n. This is also called the Euler totient function. The function σ(n) is the sum of all the divisors of n. A number is perfect if σ(n) is exactly twice n. The number is deficient if σ(n) is less than 2n, and abundant if σ(n) exceeds 2n. N is "almost perfect" if σ(n) = 2n-1, and "quasi perfect" if σ(n) = 2n+1. If n is a power of 2, such as 16, σ(n) is 1+2+4+8+16 = 31, making n almost perfect. No quasi perfect numbers are known. Once n is factored, div, φ, and σ are easily calculated. Consider 45 = 3 squared times 5. Each factor has zero one or two instances of the prime 3, and zero or 1 instances of the prime 5. That's 3 possibilities cross 2 possibilities, or 6 distinct factors. The factors are 1, 3, 9, 5, 15, and 45. If the factorization of n includes j primes, having exponents e1 e2 … ej, add one to all the exponents and multiply them together to get div(n). Compute φ(n) by counting the numbers below n that are coprime to n. Verify that c is coprime to n iff it is coprime to all the prime factors of n. Thanks to the chinese remainder theorem, the values of c mod n are determined by the values of c mod the prime factors of n. Thus we can count the values coprime to pe, for each p dividing n, and multiply these quantities together to obtain the number of values coprime to n. The values coprime to pe are precisely the numbers not divisible by p, namely pe-pe-1. Returning to φ(45), multiply 9-3 by 5-1 to obtain 24. To compute σ(45), realize that the divisors can be partitioned into those that are not divisible by 3, those that have one factor of 3, and those that have 2 factors of 3. In each case, the powers of 3, if any, are applied to 1 and 5. Thus σ(45) is (1+3+9) × (1+5), or 13×6, or 78. Verify this by adding 1+3+9+5+15+45. since 78 is less than 90, 45 is deficient, as are most numbers. Like div(n) and φ(n), σ(n) is a product of expressions, one expression for each distinct prime dividing n. In this case the expression associated with pe is the sum of the powers of p: 1+p+p2+p3+…+pe . For instance, σ(168) = (1+2+4+8) × (1+3) times (1+7) = 480, an abundant number. Verify that σ(s×t) is σ(s) × σ(t), provided s and t are relatively prime. This is called a multiplicative function. Note that div(n) and φ(n) are also multiplicative functions. If x = σ(n), and k is coprime to n, σ(kn) > kx. In other words, bringing in more factors increases σ faster than n. This is because each prime factor p in k contributes at least p+1 to σ. Similarly, increasing the exponent on a prime in n increases σ faster than n. If σ includes the factor 1+p, and we bring in another p, n is multiplied by p, but σ(n) is multiplied by (1+p+p2)/(1+p), which is a ratio larger than p. Once a number is abundant, all multiples of that number are abundant.
{"url":"http://www.mathreference.com/num,phi.html","timestamp":"2014-04-18T08:44:59Z","content_type":null,"content_length":"5021","record_id":"<urn:uuid:cdd44b3f-50d1-4b8f-af4a-678c36f5ffd2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
multiplication of binary variable and semicontinuous variables Hi all, I am trying to linearize Xy where X is binary variable and y is a semi-continuous variable. whenever X=0, M<=y<=2M (M is big M), if X=1 then 1<=y<=5. any hint to help me to linearize X*y would be appreciated. Thank you Hesam asked 22 Feb '13, 15:22 accept rate: 0% If you want y between either 1 and 5 or M and 2M, depending on x, then y is not semicontinuous. A semicontinuous variable is either 0 or falls within a specified range (such as [1, 5]). So which do you mean? Another question: What is y representing and why does it make sense to have y between two effectively arbitrary large values in the one case? (I have to say, that sounds like the sort of logic puzzle that might come up in a homework problem on logical conditions in mixed integer programming class.) Hi Paul, sorry I used a wrong term for y. y is not a semi-continuous variable, it is between either 1 and 5 or M and 2M, depending on x. @ Matthew Saltzman: X represents a binary variable deciding to/not to select an activity in a resource constrained project scheduling problem. y is the start time of one activity. If X=0, then y should be very large (it means that the activity would start at very much far from now, then no resource would be used by this activity in the project life time). that's why y is between M and 2M. Right now I think I have already linearized Xy. This is not a homework or part of a homework, it is part of my research to formulate an underground mine scheduling problem. Thanks for clarifying. It does seem that simply y >= M would serve the purpose of the first constraint, with any upper bound being arbitrary. @ Matthew Saltzman: No problem. Yes any arbitrary upper bound more than M would work. \(1 + (M-1)(1-x)\le y \le 5 + (2M-5)(1-x)\) answered 25 Feb '13, 08:40 Paul Rubin ♦ accept rate: 17% Or just 1 + M(1-x) <= y <= 5 + 2M(1-x) as M is arbitrary. True. I decided to give the overly precise answer in case someone else stumbled on this and had a less arbitrary alternative interval in mind. Plus, this "M-1, M, what the heck" attitude is how we get such large budget deficits. :-) I write here what I have, I am not pretty sure if it is true, but it may give some idea to other people: define WP=X*y. Upper bound and lower bound of y are as follows: UPPER BOUND =U= 2M(1-X)+5X LOWER BOUND =L= M(1-X)+1X Since (1-X)*(1-X)=(1-X), (1-X)*X=0, and X*X=X, then above two equations are rearranged as follows: y-2M.(1-X)≤WP≤y-M.(1-X) (∀i=1…N)
{"url":"https://www.or-exchange.org/questions/7498/multiplication-of-binary-variable-and-semicontinuous-variables","timestamp":"2014-04-23T12:45:04Z","content_type":null,"content_length":"37829","record_id":"<urn:uuid:550981d6-0662-4b7d-a09e-056fcf4f5532>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
On time-bounded incompressibility of compressible strings Edgar G. Daylight, Wouter M. Koolen and Paul M.B. Vitányi Information Processing Letters 2008. For every total recursive time bound t, a constant fraction of all compressible (low Kolmogorov complexity) strings is t-bounded incompressible (high time-bounded Kolmogorov complexity); there are infinitely many infinite sequences of which every initial segment is compressible yet t-bounded incompressible; and there is a recursive infinite sequence of which every initial segment is t-bounded
{"url":"http://eprints.pascal-network.org/archive/00004948/","timestamp":"2014-04-16T07:29:12Z","content_type":null,"content_length":"5500","record_id":"<urn:uuid:1e3590b1-0d86-4430-a5e3-da7fd61bae10>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Harvey's effective number theorists Timothy Y. Chow tchow at alum.mit.edu Thu Apr 13 16:49:11 EDT 2006 Gabriel Stolzenberg writes: > Finally, I'd like to thank Harvey's first unnamed number theorist for > his comments and invite him to explain what makes the question of > getting an "effective" version of Falting's theorem that yields an > "effective" algorithm" for finding all rational points a "fundamental" > problem? Mathematicians whose own research doesn't involve hard analysis or other techniques requiring explicit quantitative calculations on a daily basis sometimes find it hard to understand why (for example) analysts sometimes get so excited about reducing an exponent from 3/4 to 2/3 or something like that. Part of it, of course, is that numbers like that provide a handy, quotable benchmark for progress, and what people are really excited about are the new techniques that enable one to break through what seemed to be a tough barrier. However, things also work in the opposite direction. That is, sometimes if you keep pushing on a bound then you'll eventually cross some kind of threshold that suddenly opens up a qualitatively new realm of knowledge that you couldn't touch before. For an analogy from a different area of math, consider the classification of finite simple groups. The classification theorem would be significantly weaker if we could only say, "there are finitely many sporadic groups," while having no idea *how* many. If you have an explicit list then you can prove all kinds of previously inaccessible theorems just by checking all the groups. This isn't usually the most satisfying type of proof, but at least it's a proof, and you might have no other proof available. Similarly, in number theory, Tijdeman showed that Catalan's conjecture could have only finitely many exceptions, but until Mihailescu's work, we couldn't actually assert Catalan's conjecture as a theorem. I don't know of any applications of Catalan's conjecture, but there are many other cases in number theory that are analogous to the simple group situation, where you push a bound low enough for explicit computations and thereby allow proofs of qualitatively new results. After enough experience with this sort of thing, one learns to respect the value of passing from no bound to some bound to a good bound just in general, knowing that this represents increased knowledge and power, as well as increased chances of crossing thresholds into new, uncharted territory. In some cases, of course, this optimistic viewpoint may turn out to be unfounded, just as any kind of study "for its own sake" may not yield the results that a hard-headed applications-oriented person wants to see. But I'm sure I don't need to teach FOM readers how to respond to someone who asks, "But is this work going to lead to applications?" More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-April/010401.html","timestamp":"2014-04-19T11:58:36Z","content_type":null,"content_length":"5397","record_id":"<urn:uuid:2ff402d9-1f39-4e9e-a2d9-3c73ba364d88>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
MetalTabs.com Forum - A guide to help you set up a Solid State rig Okay, so I wrote sort of a little guide regarding all this SS head/ohms/cabs/wiring stuff. I certainly didn't know any of this stuff before I decided I wanted a VH140c, and alot of you dudes helped me out in understanding what the deal is, so for future reference I hope this helps anyone interested so more people won't be discouraged from getting a killer early 90's Crate or Ampeg, Randall Cyclone or any similar solid-state head. Hopes this helps you as well Peter. A guide to matching impedances/setting up a soild-state amp rig: 1) I'm looking at the back of my amp head, what the hell does all this stuff mean? The back of the amp will tell you what the lowest resistance that the amp wants to see hooked up to it. So using a Crate GX-130 head as an example, it says: 65W RMS @ 8 Ohms 4 ohm Min. Load The RMS and ohm information applies to each poweramp of the head. Each poweramp in the head dishes out 65 watts for a combined wattage of 130. Each poweramp has 2 outputs, and each one is rated at 4 ohms (hence the "@ 8 ohms" label.) The minimum load is the load where the amp puts out the most power. So what this is saying is that the head wants no lower than a 4 ohm load. You don't want to go below the minimum that the amp tells you. 2) Understanding the outputs: These soild-state heads we're talking about have stereo outputs. Usually the outs look like: The Ampeg VH140c's outs look a bit different. Basically we've got two left outs and two right outs regardless of the amp. As you can see from the pic of the VH140c's back panel, the RMS and ohm information applies to each stereo side of the head. So using the Crate GX-130 as an example, each two pairs of outs is rated at 65 watts RMS with a minimum 4 ohm load (the VH140c's is the same actually, just add 5 watts.) Stereo heads like these need stereo cabs. Stereo cabs have two mono inputs. So, you'll need two cables for your setup, one going from one of the head's LEFT channels into one of the stereo cab's inputs and one from the RIGHT going into the other input on the stereo cab. 3) Okay, now onto the cab wiring: Here's where things get odd, and I'll try to make it as easy as possible. Remember when I said that the minimum load is the load where the amp puts out the most power? This is why you need to make sure you're getting the right cab for your soild state head. The GX-130 and VH140c's back tell you that the minimum ohms for each side are 4. Now let's look at the back of a MESA Rectifier 4 X12. We see there are two mono inputs rated at 4 ohms each, it also tells us that these are split for stereo. If the back of our amp heads tell us each sides minimum ohm rating is 4, and we have this cab wired stereo for 4 ohms, we have found the best cab for use with our head, because it matches the minimum ohm rating for the head. Now you can kind of see what is going on. - 4 OHM LEFT OUTPUT ------------------ AMP 4 OHM INPUT -4 OHM RIGHT OUTPUT ----------------- OTHER 4 OHM AMP INPUT We're basically playing a matching game. All is happy and brutal if each side of our stereo solid state head's minimum ohm requirement is equal to the inputs of our cab. 4) What if I want to use two cabs? Say you want to use your SS head with TWO cabs instead of one. These heads usually have two poweramps, when combined, will give you the total wattage the head can push out. This is why you need either one cab with stereo inputs OR two regular mono cabs. Why two regular mono cabs? Lets just say your head was rated at a minimum of 8 ohms a side. In that case, you would need two 16 ohm cabs, because two 16 ohm cabs makes an 8 ohm load to the amp, which satisfies the requirement of most power. Most SS amp heads like the GX-130 or the VH140c are rated minimum 4 ohms a side. If we'd want to use two cabs with them, we'd need to get two regular 8 ohm cabs because two 8 ohm cabs together makes a 4 ohm load to the amp. We would not need them to be stereo because we've got cables going from each side (left + right) into each of the two cabs' single input. As you can see from the good 'ol MESA pic, this cab has got an 8 ohm input. If we'd want to run our solid state head with two cabs we'd need to get two cabs with that same kind of input (most regular guitar cabs come with only one input rated @ 8 ohms anyway. Perfect example being the Marshall 1960 4 X 12.) 5) That doesn't make sense! If each output (4 outputs in total) on my Crate GX-130C is 4 ohms, wouldn't I want one left output of 4 ohms going to a 4 ohm cab and the right output of 4 ohms going to the 4 ohm second cab? Why 8 ohm cabs when each output (1 left and 1 right) is only 4 ohms? Chalk this answer up to the laws of physics & electronics. Two 8 ohm loads are the same as one 4 ohm load. Just like: one 8 ohm cab = two 16 ohm cabs one 16 ohm cab = two 8 ohm cabs If you added two 16 ohm cabs together you do NOT get a 32 ohm load, you get an 8 ohm load. It sounds ridiculous, but it's true, it's just how impedance works. The "matching game" that we saw work in one-cab situations does not apply to two cabs. 6) What if I only want to get one cab now but I want to add another later? Most solid state amps will allow you to go above the lowest ohm load but usually not below the lowest ohm load. Check with the amps owners manual or contact the amp company for advice for their particular amp. If your amp has a minimum 4 ohm load, and you want to get one cab now and leave the door open to expand and add another cab later, then get one 8 ohm cab now and another 8 ohm cab later. When you run a single 8 ohm cab on an amp that can go down to 4 ohm, you lose a little power that only translates to about a 3 db drop in volume which is just barely perceivable, but you can expand and add another 8 ohm cab later which will not only give you more speakers to move more air, but your amp will then be at optimum power at the 4 ohm total load. (from www.avatarspeakers.com) 7) How does all this ohmage and stuff apply to tube amps?: With tube amps it's really best to try to match the amps exact load requirement, because a tube amp with the wrong speaker ohm load connected to it could sound fine but could be stressing the tubes and transformer causing them to prematurely wear out. 8) What are some well-known albums that feature solid-state amp distortion? I'm making this list because alot of people ask, and it was a big persuasion for me into exploring the world of solid-state amps. Crate GX-130 : Jack Owen from Cannibal Corpse was known to use these. Ampeg VH140c: Suffocation - Effigy Of The Forgotten & Pierced From Within. Gorguts - Considered Dead, Erosion Of Sanity and Obscura. Immolation - Dawn Of Possesion. Cynic - Focus. Dying Fetus, Assück, Misery Index, and Internal Bleeding used this particular amp on mostly all their recordings. Thanks to Soeru, the_bleeding, Bloodsoaked666, Josh, Adam Quick ( Vader Cabinets Avatar Speakers and anyone else that helped or contributed to this guide.
{"url":"http://metaltabs.com/forum/printthread.php?t=36993","timestamp":"2014-04-18T03:15:56Z","content_type":null,"content_length":"21034","record_id":"<urn:uuid:fbf3a128-ff18-47cc-b08a-181141b4f896>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Mattapan SAT Math Tutor Find a Mattapan SAT Math Tutor ...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. 16 Subjects: including SAT math, French, calculus, algebra 1 ...I have a good grasp of algorithms, data structures, and object-oriented programming principles, as well as general proficiency in software implementation. I am familiar with a few Java IDEs as well, so I am able to tutor from a versatile standpoint. I received excellent scores in all areas on my first and only attempt at the SAT. 38 Subjects: including SAT math, reading, English, physics I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College. 13 Subjects: including SAT math, chemistry, calculus, geometry ...I focus on helping my student develop both improved content knowledge as well as learning helpful problem-solving/learning strategies. My interest is in helping my students improve their analytical thinking and attain a deeper understanding of underlying concepts that will help them succeed not ... 33 Subjects: including SAT math, reading, English, GRE ...English for speakers of other languages is best taught in an immersive environment where the student is encouraged to read, write, and speak in English with as many opportunities as possible. The approach to tutoring should be interactive and tailored to the needs of the individual student. Above all, continual practice with the language is what will ensure increased proficiency and 63 Subjects: including SAT math, English, physics, reading
{"url":"http://www.purplemath.com/Mattapan_SAT_math_tutors.php","timestamp":"2014-04-16T22:20:11Z","content_type":null,"content_length":"24069","record_id":"<urn:uuid:a53b52dd-ef53-4e72-9888-6da5d84d325d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Some myths concerning statistical hypothesis testing Glen M. Sizemore gmsizemore2 at yahoo.com Fri Nov 8 10:28:55 EST 2002 RV: Glenn, your posts are couched in a somewhat hostile manner which does not encourage others to join in or to ask for clarification. GS: Not a word about the way Mat treated me? Why? Perhaps I escalated a bit, but that is how one responds to an ad hominem. You'll notice that I am civil to you, although we disagree. I have a long history with Mat. His favorite tactics are saying "rubbish" and not responding to what one wrote, as well as simply calling me stupid or uninformed. > 1.) a p-value is a conditional probability of the form p(A/B) where A is > observation and B is the truth of the null hypothesis. > 2.) you don't know if B is true or false. > Conclusion: whatever a p-value is, it cannot be a quantitative assessment > the truth of B because the meaning of the p-value is dependent on B and > don't know what B is. Now attack the premises or the conclusion. I dare RV: So p is the probability of B given A. GS: No, it is the probability of A given B. That's the whole point. RV: I am not sure where truth comes into it. But the quantitative assessment is the conditional probability. I know what B is (typically, that the two results are sampled from a common population that is normally distributed, homoscedastic etc), I just don't know if it is true. GS: The meaning of the p-value is quantitatively accurate only if the conditional portion is "there." > > Marc does not say what follows in his paper, but this > misconception > produced a state of affairs in which a great deal of > importance is > attached to findings before it is clear that the finding is > reliable. > result is that there is, all things being equal, a great deal > of > discrepant results in the various scientific literatures that rely on > > statistical significance testing. In contrast, for sciences in which the > > reliability is demonstrated in each subject (usually repeatedly), or > > "subject" if the preparation is not a whole animal, there is far less > > failure to replicate (this is because such data are published only when > > there have been numerous demonstrations of reliability within and across > > subjects). For an example of how this is done, you may examine my paper: > > Effects of Acutely Administered Cocaine on Responding Maintained by a > > Progressive-ratio Schedule of Food Presentation, which is in press in > > Behavioural Pharmacology. Or, you may examine virtually any paper in the > > Journal of the Experimental Analysis of Behavior. Or you may obtain a copy > of Sidman's Tactics of Scientific Research, or even Claude Bernard's > > book. > > Mat: doh! you are doing the very same as the people you chastise! by > repeating the experiments you are increasing your n, such that if there is > true difference it should become apparent. > GS: Nonsense. What I am doing, and what others like me do is directly > demonstrating the reliability. That's why it is not unheard of to publish > data collected and analyzed "individual-subject style" with 3 subjects. > such data are, as I explained, generally proven to be reliable through > direct and systematic replication. What "thinkers" like you do is increase > the N because doing so will almost always result in differences even if > "effect" is virtually nonexistent (see below). RV: Glenn, it is vey dependendent upon what you work on. I record single neurons. They just don't hang around long enough to do a lot of repeated GS: You mean in vitro? Are you saying you can't get a baseline, introduce a variable, and then withdraw the variable and then introduce it again? I don' t understand. RV: I also can't see why you prefer 3 people tested 5 times to 15 people tested once, unless you need trained subjects, or you want to look at intra and inter-subject variability which might be important for some things. For many clinical trials the patient gets better with treatment, and it is not ethical to make them sick again ;-) GS: Because I am interested in directly demonstrating the reliability of the effect within and across subjects. In the paper I just got published 4 of the 5 rats showed increases in breakpoint (the rats must "pay" leverpresses for food and the number that they must pay increases after each food delivery - after some point they stop responding and the "price" they "paid" is the breakpoint) at some dose of cocaine every time it was administered. So if each dose was given three times, and 4 of the 5 rats showed a consistent increase at some dose, the fact that cocaine can increase breakpoint was replicated many times before the end of the experiment. I can guarantee you that if you arrange such schedules with rats (and no doubt many other species) you will be very likely to find some dose that increases breakpoint in almost every subject. And indeed, I have replicated the result in an off-hand probe, and have also observed similar effects with tropane analogs. If I gave 15 rats one dose, I would not know if the increases were reliable at all. Indeed, in the rat that did not show the effect reliably, I did produce increases with the first administration of, I think, 30 mg/kg, but could not produce any increases after that. Incidentally, that rat was similar to the others on a couple of different measures. I wasn't talking about ethics, or even medical research in general (until Mat brought it up) but since you bring it up, why is it any less ethical to temporarily stop treatment than to give sick people placebo? That way everybody gets the drug, gets better, gets temporarily sicker when you withdraw the drug and start injecting vehicle, and then gets better again when you determine that they are getting worse and you reintroduce the drug. > averaged together. And if, say, only two subjects showed the effect in > question, I wouldn't publish the data, but I would strongly suspect that > there was something worth pursuing, and I might try to figure out why I > the effect in only two of the animals. RV: Surely this depends on what the effect is. Aren't there a small proportion of people who are HIV positve but never develop AIDS. Even if they were 2 out of 100, they would be worth investigating. This is really to do with being a good scientist, not a stats abuser. I don't think anyone is disagreeing with this. This is a very different situation from a controlled randomized trial where you are not exploring, but simply testing a simple hypothesis. GS: It is common for researchers to simply add subjects until significance is reached. Abuse of stats is exactly what I am complaining about. > GS: No, it doesn't. It tells you that IF THE NULL HYPOTHESIS IS TRUE > you don't know) there is a 5% or 1% chance of obtaining the data again. > Since you don't know if the null hypothesis is true or not, you have no > quantitative assessment of the likelihood of obtaining the observation, RV: But you're not interested in the likelihood of getting the observation - you already have it. GS: What one should be interested in is the reliability and generality of the finding. Repeatedly testing a few subjects can directly demonstrate the reliability as do direct replications in other laboratories, and systematic replication directly demonstrates the generality. Through this means the facts uncovered by the experimental analysis of behavior are among the most highly replicable in psychology. There are many, many effects that are very large and obtainable in virtually every subject. Some are pretty reliable but known to fail in the occasional subject. This is true of the cocaine-induced increases in breakpoint, as well as a couple of other cocaine effects, as well as, I'm sure, a few others. One might be able to track down why they are different (as I explained with respect to the monkey experiment - in this case what was at issue was not why a few subjects didn' t show the effect but, rather, why most monkeys did not show an effect that is quite reliable in rats and pigeons, but the principles are the same. In any event, what I am complaining about is the rather widespread notion that a small p-value is a quantitative estimate of the reliability of the RV: The issue is that if the likelihood of getting the data was small given that the null hypothesis is true, we choose to take a punt and say the null hypothesis is likely not true. GS: I'm well aware of that. > GS: Think about this: if you have a drug that produces large effects in > of the sample, and no effect in the other 60%, one could obtain > significance if one increased the N enough. So now we have an effect that > works in only 40% of the population and it is deemed important and > If you are dying, you might want to try it, but only an insipid idiot > call it reliable. Yet this is, apparently, your version of "modern > But, of course, in most experiments, not even the researcher may know how > many of his subjects actually showed an effect. All he or she may know > (because that is all they are paying attention to) is that p<.01. And > certainly the reader usually has no clue as to how many of the subjects > actually "had" the "effect." In medical research, fortunately, there is > pressure to pay close attention to the individual effects (BTW, Mat, if it > is possible to judge an effect in an individual, what do you need > for?) . However, I argue, and occasionally some enlightened MD argues, > significance testing is dangerous. Sometimes you have nothing else but > often you do. RV: Aren't we all on the same page? You plot the data. You look for sub- groups and weird effects. You can test for some of these properties. if everything looks like a homogeneous group then you can do some inferential stats on them. In your example, the data would have two peaks (at 0 and +x% effect) and would not be normally distibuted. Anyone testing this without caution is an idiot, but it does not make the statistical tests wrong. GS: I doubt we're all on the same page. Anyway, I am not attacking statistical theory, I am attacking the unreasoned and nearly ubiquitous reliance on significance testing, as well as misconcetions. > GS: Usually the null hypothesis is, in the simplest case, that there is no > difference between the control group (or control condition as in the > t-test, which is the simplest form of repeated-measures ANOVA; hehehe) and > experimental group. So, yes, if you are doing ANYTHING it is likely to > SOME effect, and if you throw enough subjects at it, you will eventually > reach a point where you "obtain statistical significance." This is, in > usually what happens in the sort of "science" you are talking about. BTW, > physics and many other sciences, what functions as the null hypothesis is, > in fact, the scientist's own prediction! That is, the scientist does > everything in his or her power to reject their own prediction, and when > does not occur they begin to assert the importance of their hypothesis. In > contrast, "scientists" like you do everything in their power to reject the > stawman notion that there is no effect which, as I have pointed out, is > almost certain to be false. RV: Come on Glenn, I don't think that too many papers are pointing out a 5% difference even if it is significant at p<0.001. GS: Oh really? RV: Maybe you've had a bad experience lately you want to share? Clinical significance involves the idea that the effect is worth risking a change in therapy and so must be a substantial improvement (not 5%) as well as a statisically significant GS: In the basic laboratory in a lot of sciences, something gets published if significance is obtained, and almost never gets published if no significance is obtained, no matter how large the effect appears visually. This is well known. "Richard Vickery" <Richard.Vickery at unsw.edu.au> wrote in message More information about the Neur-sci mailing list
{"url":"http://www.bio.net/bionet/mm/neur-sci/2002-November/052628.html","timestamp":"2014-04-18T22:00:02Z","content_type":null,"content_length":"15456","record_id":"<urn:uuid:d2ce411e-0ea4-43f1-91cf-0c816ef4f34e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving sudoku using PostgreSQL PL/pgSQL Posted: Wednesday, August 19th, 2009 at 3:09 pm • Updated: Wednesday, August 19th, 2009 at 3:25 pm I have no idea why I would want to solve sudoku in PostgreSQL PL/pgSQL. My guess would be just for the fun of it. I’m also hoping that it can serve as a tutorial example in programming PostgreSQL PL/ So what is sudoku? It is basically a number puzzle where the objective is to fill 9×9 grid so that the columns, rows, and the smaller 3×3 squares, called blocks, doesn’t have the same number repeated. If you’re not familiar with sudoku, this Wikipedia page on sudoku may be helpful. Now that you’re familiar with sudoku, there are many algorithms for solving sudoku. For this solution, I’ve decided to use brute force algorithm. If you’re just interested in the end result, please jump to last page. Table structure for sudoku The table to represent sudoku is pretty straight forward. It’s basically 4 columns that describe: • The x and y coordinate for each sudoku cell • The value of the cell • A marker to identify whether the value is given. I call this is_permanent CREATE TABLE sudoku ( x_col integer NOT NULL, y_col integer NOT NULL, val integer, is_permanent boolean NOT NULL DEFAULT false, CONSTRAINT sudoku_pkey PRIMARY KEY (x_col, y_col) ) WITHOUT OIDS; In addition to sudoku table, I I’ll need to get a list of possible values. So rather than typing it every time in query string, I thought it would be convenient to have a table with a list of legal CREATE TABLE nums ( num integer NOT NULL DEFAULT 1, CONSTRAINT nums_pkey PRIMARY KEY (num) ) WITHOUT OIDS; INSERT INTO nums VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9); Test case sudoku puzzle For the purpose of testing the solution, let’s start with a rather easy to solve by hand. The puzzle and the solution is as below: Populating sudoku table Now that we have the puzzle to solve, let’s insert the known values into our sudoku table. INSERT INTO sudoku (x_col, y_col, val, is_permanent) VALUES ('1','3','5', TRUE), ('1','4','1', TRUE), ('1','6','9', TRUE), ('1','7','4', TRUE), ('1','8','7', TRUE), ('1','9','3', TRUE), ('2','3','9', TRUE), ('2','4','5', TRUE), ('2','7','8', TRUE), ('2','9','6', TRUE), ('3','1','1', TRUE), ('3','2','4', TRUE), ('3','4','8', TRUE), ('3','7','2', TRUE), ('4','1','4', TRUE), ('4','8','6', TRUE), ('5','3','6', TRUE), ('5','4','7', TRUE), ('5','6','2', TRUE), ('5','7','5', TRUE), ('6','2','8', TRUE), ('6','9','1', TRUE), ('7','3','4', TRUE), ('7','6','1', TRUE), ('7','8','2', TRUE), ('7','9','8', TRUE), ('8','1','5', TRUE), ('8','3','2', TRUE), ('8','6','8', TRUE), ('8','7','6', TRUE), ('9','1','3', TRUE), ('9','2','9', TRUE), ('9','3','8', TRUE), ('9','4','2', TRUE), ('9','6','7', TRUE), ('9','7','1', TRUE) On the next page, we’ll discuss on how to determine valid values for a given cell. WildWezyr Says: March 13th, 2010 at 3:00 pm It seems that your solution is very slow. I\’ve executed it to find solution for Easter Monster and it is now running for over 2 660 seconds with no result :-(. For comparison: my old JavaScript solution yields result after 4 seconds on Chrome 4. Look could at it here: http://wildwezyr-sudoku-solver.blogspot.com/ To facilitate population of sudoku table before execution of your code, I use this function: CREATE OR REPLACE FUNCTION func_full_solve_sudoku(input_data text) RETURNS varchar(81) as x int; y int; inp varchar(81); d int; c varchar(1); inp := regexp_replace(regexp_replace(input_data, \’[\\n\\r]\’, \’\', \’g\’), \’[^1-9]\’, \’ \’, \’g\’); truncate table sudoku; for y in 0..8 loop for x in 0..8 loop d := null; c := substring(inp, y * 9 + x + 1, 1); if c is not null and length(c) = 1 and c <> \’ \’ then d := cast(c as int); end if; if d is not null then insert into sudoku (x_col, y_col, val, is_permanent) values (y + 1, x + 1, d, true); end if; end loop; end loop; perform func_solve_sudoku(); inp := \’\'; for y in 0..8 loop for x in 0..8 loop select cast(val as varchar(1)) into c from sudoku where x_col = y + 1 and y_col = x + 1; inp := inp || coalesce(c, \’?\’); end loop; end loop; return inp; LANGUAGE \’plpgsql\’ VOLATILE; Then for Easter Monster I just run: select func_full_solve_sudoku(\’1…….2.9.4…5…6…7…5.9.3…….7…….85..4.7…..6…3…9.8…2…..1\’) If you want to see how is my solution performing for this puzzle, use this matrix in textarea on the left and click [import->] and then [solve]. WildWezyr Says: March 14th, 2010 at 4:33 am This is my second comment (first is still awaiting moderation): take a look at this: sudoku solver in just one select statement: it is PostgreSQL select query and is quite fast – 120 seconds for Easter Monster compared to 4 seconds of my JavaScript solver and indeterminate amount of time of yours ;-). Maresa Says: March 15th, 2010 at 8:39 am Yeah .. it is actually very slow. The link that you sent uses the new WITH RECURSIVE support of PostgreSQL 8.4 which was released after my posting. WildWezyr Says: March 18th, 2010 at 3:19 am @Maresa: additional pl/pgsql language constructs may lead to shorter solutions, but not that huge difference in terms of efficiency. you could do the same algorithm as with “WITH RECURSIVE” using plain old postgres functions and it should work aproximately as good as single query or even better.
{"url":"http://www.microshell.com/database/postgresql/solving-sudoku-using-postgresql-plpgsql/","timestamp":"2014-04-17T06:41:53Z","content_type":null,"content_length":"23906","record_id":"<urn:uuid:dbfce4a3-c0f4-4c8d-bde2-a22c379268c8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Millbury, MA Calculus Tutor Find a Millbury, MA Calculus Tutor ...I'm that geeky math-loving girl, that was also a cheerleader, so I pride myself in being smart and fun!! I was an Actuarial Mathematics major at Worcester Polytechnic Institute (WPI), and worked in the actuarial field for about 3.5 years after college. Since then I have been a nanny and a tutor ... 17 Subjects: including calculus, geometry, statistics, linear algebra ...I have taught all age groups from kindergartner to graduate/professional students during my own teaching career of almost 30 years. I have also taught students who were not able to perform well in math and science while in primary and middle school. I am consistent and patient. 11 Subjects: including calculus, geometry, biology, Japanese ...In most cases, if I don't know it, I can teach myself and help you improve. My experience is that learning is both simple and pleasurable when you approach it with the right mind. I have a B.S. in Psychological Science from WPI (located in Worcester, MA) which is essentially an undergraduate degree in designing and analyzing scientific experiments. 24 Subjects: including calculus, chemistry, English, reading I currently teach Mathematics, Statistics and Macroeconomics at Quinsigamond Community College. In my spare time, I would like to help students better themselves and their grade by tutoring. With my deep academic and professional experience I believe I can be an asset for your child's future. 11 Subjects: including calculus, French, geometry, statistics ...I enjoy helping others understand the logic and rules that govern our writing, interpretation, and speech. I have almost six months' experience tutoring in English half-time, including grammar. I have a masters degree in math, but have not lost sight of the difficulties encountered in elementary math. 29 Subjects: including calculus, reading, English, geometry Related Millbury, MA Tutors Millbury, MA Accounting Tutors Millbury, MA ACT Tutors Millbury, MA Algebra Tutors Millbury, MA Algebra 2 Tutors Millbury, MA Calculus Tutors Millbury, MA Geometry Tutors Millbury, MA Math Tutors Millbury, MA Prealgebra Tutors Millbury, MA Precalculus Tutors Millbury, MA SAT Tutors Millbury, MA SAT Math Tutors Millbury, MA Science Tutors Millbury, MA Statistics Tutors Millbury, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Millbury_MA_Calculus_tutors.php","timestamp":"2014-04-18T21:20:04Z","content_type":null,"content_length":"24103","record_id":"<urn:uuid:72a56a76-b343-45d3-af89-aa5cc30edf5a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Strange behavior related to value / reference Terry Reedy tjreedy at udel.edu Wed Oct 28 04:45:03 CET 2009 Lambda wrote: > I defined a function to raise a 2x2 matrix to nth power: There is a much faster way to raise x to a count power n than the definitional but naive method of multipling 1 by x n times. It is based on the binary representation of n. Example: x**29 = x**(16+8+4+1) = x**16 * x**8 * x**4 * x * 1 So square x 4 times and do 4 more multiplies (total 8) instead of 28! General algorithm is something like (untested): def expo(x, n): # assume n is a count if n%2: res = x else: res = 1 # or the identity for type(x) base = x n //= 2 # if n < 2, we are done while n: base *= base # or appropriate mul function if n%2: res *= base # ditto n //= 2 return res Terry Jan Reedy More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2009-October/556057.html","timestamp":"2014-04-19T18:47:11Z","content_type":null,"content_length":"3316","record_id":"<urn:uuid:5b48fa20-8c3a-41bc-9271-2e4133e1cded>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
The Voltage Divider If you've ever studied electronics, then you've likely studied and solved the voltage divider equations. Refer to the schematic for the L pad. In class, they probably stacked the resistors vertically, one above the other. You can analyze this circuit two different ways, getting the same answer at the end. Generally, the solution method taught for voltage dividers was based on Ohm's law, which works every time, but sometimes isn't the easiest way to wrap your brain around the problem. For the following discussion, R1 = 10k, R2 = 1k, Ein is 1V. From the look of things, you might intuit that the input voltage is divided by 10 to arrive at the output voltage. But is it really? Using Ohm's law, calculate the total current in the circuit: It = E/Rt = 1/(10k+1k) = 1/11k = 90.9 microamps. In a series circuit, the current through the circuit is the same at any node (Kirchoff's law). The voltage across any of the resistors in the loop is equal to the current through the resistor times the resistance: ER1 = ItR1 = 90.9ľA * 10k = .9091V ER2 = ItR2 = 90.9ľA * 10k = .09091V Finally, as a check, Kirchoff's law also says that the sum of the voltage drops in a circuit equals the applied voltage: ER1 + ER2 = .9091 + .09091 = 1V Now look at the circuit from the standpoint of the ratios of the elements. The applied input voltage will be divided by a factor of 1 plus the ratio of the elements. You can visualize this easily with a simple 1:1 divider; two equal-value resistors. By inspection you may already know that the input voltage will be divided by two. The ratio of the two resistors is 1. The attenuation ratio is the resistor ratio 1 plus 1 = 2. Revisiting the 10:1 voltage divider: Eo = Ein / k, where k is the attenuation ratio, or 1+(R1/R2). k = 1+(10k/1k) = 11 Eo = 1/11 = .0901V Vin/Vout = (R1+R2)/R2 = R1/R2 + R2/R2 = 1+(R1/R2) finally, solving for Vout: Vout = Vin/(1+(R1/R2)) As you can see, both methods arrived at the same answer; the output voltage is 90.1mV. You can also see that although the resistor values had a 10:1 ratio, the input:output voltage ratio was 11:1. A 10:1 input:output voltage ratio requires that the resistor values have a 9:1 ratio. Last modified 11/16/2012. 12:53:24
{"url":"http://www.uneeda-audio.com/pads/math.htm","timestamp":"2014-04-18T23:15:08Z","content_type":null,"content_length":"3647","record_id":"<urn:uuid:e85dab22-ae47-43cb-b6fa-75d84f80efbd>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Map Software I'm about to start a project. This will be a Simple Map Software that shows the map of our campus with a pathfinder that will plot shortest path. I've looked through useful data structures and eventually found Graph ADT. I'm stucked in choosing which is the best representation for the graph, is an "Edge List Structure", "Adjacency List Structure", or "Adjacency Matrix Structure"? Since the vertices will be places in our campus (e.g. canteen, library) and this will be many. Plus, can you provide a walkthrough on how will a simple map be? im really confused. Any HELP will be appreciated. Thanks.
{"url":"http://www.dreamincode.net/forums/topic/294016-poll-graph-implementation-of-a-simple-map-software/","timestamp":"2014-04-19T04:05:19Z","content_type":null,"content_length":"97640","record_id":"<urn:uuid:76bf27d4-747a-4ca2-86f3-0c110dc5dd1e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Glenbrook, CT Statistics Tutor Find a Glenbrook, CT Statistics Tutor ...History, World History, English Literature, English Language, and Statistics. Additionally, I have taken both the ACT and SAT, scores of which can be provided upon request. I have tutored in a variety of subjects across the board for students ranging from middle school through college level. 45 Subjects: including statistics, English, chemistry, calculus ...I am an MIT Engineer (BSEE and Meng EECS) with experience teaching and tutoring, especially in math. I have spent 250+ hours working with students in Cambridge, MA. I enjoy finding different ways to see a problem and helping each student understand in their own way. 36 Subjects: including statistics, Spanish, reading, writing ...I have teaching experience from tutoring my peers and teaching English in China.I am going to represent Fairfield University with other 5 teammates to participate in the Rotman International Trading Competition in Toronto in February 2014. I am a Chinese international student in Fairfield, Connecticut. Chinese is my first language. 8 Subjects: including statistics, geometry, Chinese, algebra 1 ...Do you have a student whose progress in reading is slower than expected or uneven with unexpected weaknesses, such as reading comprehension? Does your child have difficulty with spelling? Do you have a student who has difficulties with writing such as generating or getting ideas onto paper, organizing writing and grammatical problems? 30 Subjects: including statistics, English, piano, reading ...I try to simplify the topic, provide practice materials for the topic, so the students can have a better understanding. When a student does well, I feel like I have done well. Some of the students have told me that they wish I was their teacher. 47 Subjects: including statistics, chemistry, reading, accounting Related Glenbrook, CT Tutors Glenbrook, CT Accounting Tutors Glenbrook, CT ACT Tutors Glenbrook, CT Algebra Tutors Glenbrook, CT Algebra 2 Tutors Glenbrook, CT Calculus Tutors Glenbrook, CT Geometry Tutors Glenbrook, CT Math Tutors Glenbrook, CT Prealgebra Tutors Glenbrook, CT Precalculus Tutors Glenbrook, CT SAT Tutors Glenbrook, CT SAT Math Tutors Glenbrook, CT Science Tutors Glenbrook, CT Statistics Tutors Glenbrook, CT Trigonometry Tutors Nearby Cities With statistics Tutor Belle Haven, CT statistics Tutors East Norwalk, CT statistics Tutors Glenville, CT statistics Tutors Hillside, NY statistics Tutors Lewisboro, NY statistics Tutors Noroton Heights, CT statistics Tutors Noroton, CT statistics Tutors Ridgeway, CT statistics Tutors Rowayton, CT statistics Tutors Saugatuck, CT statistics Tutors Scotts Corners, NY statistics Tutors Springdale, CT statistics Tutors Stamford, CT statistics Tutors Tokeneke, CT statistics Tutors West Brentwood, NY statistics Tutors
{"url":"http://www.purplemath.com/Glenbrook_CT_Statistics_tutors.php","timestamp":"2014-04-20T19:14:00Z","content_type":null,"content_length":"24299","record_id":"<urn:uuid:cd84aa61-03ba-4b77-a935-1cc69dc69a5a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Manuscript Submission Papers will be posted electronically on this site as soon as accepted. What kind of papers are appropriate for SS? We publish papers in all areas of statistics: theory, computation, methodology, and applications. How should papers be written? Papers should be written in the same style as in typical statistical journals such as Statistical Science, the International Statistical Review, discussion papers in JRSS Series B, the Annals of Statistics, or the Journal of the American Statistical Association. Since the range of topics is wide, make sure that the abstract and Introduction are clear and can be read by a diverse audience. How are papers submitted? Papers must be submitted electronically at http://www.e-publications.org/ims/submission/index.php/. You can suggest potential referees or Associate Editors for your paper. Manuscripts should be written in Latex. Please see the LaTeX support page for IMS publications to use the IMS recommended template. How are papers handled? Your paper will be assigned to the Editor who will then assign the paper to an Associate Editor. Authors are encouraged to make available algorithms or code for carrying out the analyses presented in a paper. Such material will also be posted on Statlib.
{"url":"http://www.imstat.org/ss/mansub.html","timestamp":"2014-04-18T10:35:43Z","content_type":null,"content_length":"10061","record_id":"<urn:uuid:d916dfa5-3110-4e5e-8fc6-75ef0bdfd3ff>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
BANDWIDTH OF AN AMPLIFIER - 14180_55 2-3 Figure 2-1. Frequency response curve of audio amplifier. Notice in the figure that the lower frequency limit is labeled f[1] and the upper frequency limit is labeled f[2]. Note also the portion inside the frequency-response curve marked "BANDWIDTH." You may be wondering just what a "bandwidth" is. BANDWIDTH OF AN AMPLIFIER The bandwidth represents the amount or "width" of frequencies, or the "band of frequencies," that the amplifier is MOST effective in amplifying. However, the bandwidth is NOT the same as the band of frequencies that is amplified. The bandwidth (BW) of an amplifier is the difference between the frequency limits of the amplifier. For example, the band of frequencies for an amplifier may be from 10 kilohertz (10 kHz) to 30 kilohertz (30 kHz). In this case, the bandwidth would be 20 kilohertz (20 kHz). As another example, if an amplifier is designed to amplify frequencies between 15 hertz (15 Hz) and 20 kilohertz (20 kHz), the bandwidth will be equal to 20 kilohertz minus 15 hertz or 19,985 hertz (19,985 Hz). This is shown in figure 2-1. Mathematically: You should notice on the figure that the frequency-response curve shows output voltage (or current) against frequency. The lower and upper frequency limits (f[1] and f[2]) are also known as HALF-POWER POINTS. The half-power points are the points at which the output voltage (or current) is 70.7 percent of the maximum output voltage (or current). Any frequency that produces less than 70.7 percent of the maximum output voltage (or current) is outside the bandwidth and, in most cases, is not considered a useable output of the amplifier. The reason these points are called "half-power points" is that the true output power will be half (50 percent) of the maximum true output power when the output voltage (or current) is 70.7 percent of the maximum output voltage (or current), as shown below. (All calculations are rounded off to two decimal places.) As you learned in NEETS, Module 2, in an a.c. circuit true power is calculated using the resistance (R) of the circuit, NOT the impedance (Z). If the circuit produces a maximum output voltage of 10 volts across a 50-ohm load, then:
{"url":"http://electriciantraining.tpub.com/14180/css/14180_55.htm","timestamp":"2014-04-21T09:38:13Z","content_type":null,"content_length":"29240","record_id":"<urn:uuid:c23bd912-b7a1-40e3-bf67-0b40a29d533d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
The Strength of Multilinear Proofs (Joint work with Ran Raz) We discuss an algebraic proof system that manipulates multilinear arithmetic formulas. We show this proof system to be fairly strong even when restricted to multilinear arithmetic formulas of a very small depth. Specifically, algebraic proofs manipulating depth 3 multilinear arithmetic formulas are strictly stronger than Resolution, Polynomial Calculus (and Polynomial Calculus with Resolution); and (over characteristic 0) admit polynomial-size proofs of the (functional) pigeonhole principle. Finally, we illustrate a connection between lower bounds on multilinear proofs and lower bounds on multilinear arithmetic circuits.
{"url":"http://www.newton.ac.uk/programmes/LAA/Abstract4/tzameret.html","timestamp":"2014-04-17T21:37:34Z","content_type":null,"content_length":"2670","record_id":"<urn:uuid:69bee5b1-6d37-4c19-91d6-0e0752c22512>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Feature Article - Interpreting Time Series Data (This article was published in the March quarter 2002 issue of Western Australian Statistical Indicators (ABS Catalogue Number 1367.5)) Users of statistics regularly analyse time series data in an attempt to understand real world dynamics. For example, changes in the characteristics of the population, cyclical movements in economic markets, etc. This type of analysis is difficult when examining the original data as there can be seasonal or other influences masking the true direction of the series. For this reason, the ABS publishes seasonally adjusted and trend data for many of its series. While publication of these additional series can be extremely useful to the experienced analyst, it may result in some confusion for the general user in terms of understanding what each series is indicating. This article aims to explain the basic concepts of time series analysis, discuss issues users should be aware of, and provide an indication of the most appropriate series to use in different circumstances. WHAT IS A TIME SERIES? A time series is a collection of well-defined data points that have been measured at regular intervals of time. For example, the number of litres of milk produced each quarter would be a time series because a litre of milk is a well-defined concept and each measurement is taken over a period of three months. Data which are collected irregularly or only once cannot be defined as a time series. For example, a one-off count of the total number of persons who received the government's $14,000 First Home Owner Grant is not a time series. Time series can be classified as being either a stock or a flow series, depending on the type of measurements being taken. Stock series are measures, or counts, taken at a point in time. For example, the number of bicycles in a store on a particular day. This figure will change from day to day depending on the amount of stock received that day and the number of bicycles sold. Similarly, the Labour Force Survey takes stock of the number of people employed in a particular reference week and is therefore considered to be a stock series. Flow series are measures of activity over a given period of time. For example, the number of bicycles sold by a store in a particular month. This figure will change day by day, depending on the number of bicycles sold each day. At the end of the month, the total number of sales can be calculated. Similarly, the number of new motor vehicle sales each month is the sum of all new motor vehicles sold during each day of the month. The main difference between a stock and a flow series is that a flow series can be affected by trading day effects (see for Trading Day Effect section for further information). Apart from this, both stock and flow series are treated in much the same way in the time series analysis process. A time series can be thought of as comprising three separate components: ● any calendar related effects, and The trend component is a measure of the underlying behaviour of the series over time. That is, whether the series is generally increasing, decreasing or remaining stable over time. This underlying behaviour could be due to influences such as population growth, price inflation or general economic development, and can often be hidden in the original time series data by the calendar related and/ or residual effects. For example, consider the original data in the figure below. A superficial examination of the data at the current end of the series would suggest that the number of employed persons in WA has taken a downward turn in January 2002. However, upon further examination, it can be seen that there is also a downward turn for the previous three Januarys, which would indicate that there may be a seasonal factor influencing the original data. The fact that January appears to be a low seasonal month could be caused, for example, by a high number of employees ending their contracts in January after working over the Christmas period. An examination of the underlying behaviour of the series shows that the number of employed persons in WA has actually remained relatively stable over most of 2001 and, if anything, the series seems to be slowly increasing, not decreasing. Calendar related effects are systematic influences on the source data. They are predictable and persistent, and are sometimes referred to as 'seasonal effects' even though they encompass more than just seasonality. The four main types of calendar related effects are: ● moving holiday effects; and ● other systematic effects. Seasonal effects are factors which recur one or more times per year. They are reasonably stable with respect to annual timing, consistent direction and predictable magnitude. They can be due to natural factors (eg. seasons, harvests), administrative or legal matters (eg. tax payments) or social traditions (eg. Christmas). For example, the following figure shows large increases in the December retail turnover figures over the last five years. These increases are most likely due to increased Christmas spending in The presence of seasonal effects can also be seen in the following gas production graph. There are distinct increases each winter, when gas heating is in high demand, and marked decreases over the summer months. A trading day effect is caused by the number of high and low activity days in a given month. That is, since each month in the year has 28 days, plus one, two or three extra days, time series data can be affected by whether these extra days are high or low activity days. For example, in a 31 day month, if the three extra days were Sunday, Monday and Tuesday, then it would be expected that less retail sales would be recorded than if the three extra days were Thursday, Friday and Saturday, since there is generally a higher level of retail activity towards the end of the week. Series are also affected by the varying number of extra days in the month. For example, suppose that a factory's average production of jelly beans has the following distribution. AVERAGE JELLY BEAN PRODUCTION Day of Week Number Sunday 0 Monday 4,000 Tuesday 6,000 Wednesday 6,000 Thursday 5,000 Friday 3,000 Saturday 0 Weekly Total 24,000 If the above distribution remains consistent from year to year, then the only difference between the production of jelly beans in the same month across different years will be due to the activity on the extra days. As shown below, the number of working days in March 1999, March 2000, March 2001 and March 2002 were 23, 23, 22 and 21 respectively, and the extra working days were Monday, Tuesday & Wednesday in 1999, Wednesday, Thursday & Friday in 2000, Thursday & Friday in 2001, and Friday in 2002. March 1999 March 2000 Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa March 2001 March 2002 Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Knowing the distribution of the factory's jelly bean production, the average number of jelly beans produced in each full four week period would be 24,000 x 4 = 96,000 and, hence, the total production of jelly beans in each March would have been: TOTAL JELLY BEAN PRODUCTION March Total number of jelly beans produced 1999 96,000 + 4,000 (Mo) + 6,000 (Tu) + 6,000 (We) = 112,000 2000 96,000 + 6,000 (We) + 5,000 (Th) + 3,000 (Fr) = 110,000 2001 96,000 + 5,000 (Th) + 3,000 (Fr) = 104,000 2002 96,000 + 3,000 (Fr) = 99,000 If no consideration was given to the trading day effect, it would appear as though jelly bean production had declined over the last four years, from 112,000 jelly beans in March 1999 to 99,000 jelly beans in March 2002. In reality, the production has remained constant and it is only the working days in March that have changed. Moving holiday effects are caused by regular holidays which do not occur at the same time each year. For example, both Easter and Chinese New Year occur once a year but, since they follow the cycles of the moon, the exact month in which they occur can vary. In most years, Easter falls in April, but can occur in March or March/April. The effects of Easter can be expected to be seen in confectionery production figures and tourism series as many people travel over the Easter holidays. Similarly, Chinese New Year normally occurs in February, but will sometimes fall in late January. Effects from this holiday are evident in Overseas Arrivals and Departure series from some Asian countries as many people travel over this holiday period. For example, the following graph shows short term visitor arrivals from China. Chinese New Year started on 16 February in 1999, 5 February in 2000, 24 January in 2001 and 12 February in 2002. Correspondingly, sharp increases in visitor arrivals can be seen in February 1999, February 2000, January 2001 and February 2002. Moving holidays can also affect data for months or quarters adjacent to the one where the holiday falls. This is called a proximity effect and will occur if the holiday falls close to the beginning or end of the month or quarter of interest. For example, the Retail Trade series is sometimes adjusted for an Easter proximity effect, depending on whether Easter falls in late March or early April. Other systematic effects can have an impact on time series. For example, government social security payments are typically paid fortnightly. In some months, this will result in two payments and in other months there will be three. A series measuring the total monthly government outlays on, say, the Age Pension, would be affected by this systematic effect. Residual effects (sometimes referred to as 'irregulars') are short term fluctuations in the data which are generally not systematic or predictable with regards to timing, duration and degree of impact. These random fluctuations are typically caused by sampling and non-sampling errors in the data. Sampling errors are found in data collected through sample surveys and exist as a result of not enumerating the entire population. Non-sampling errors are all other errors in the data (such as reporting errors, processing errors, coverage errors, etc) and can affect collections regardless of whether or not they are sample surveys. Aside from these random fluctuations, large impacts can sometimes be observed in the residual effect. For example, the effect of a flood on agricultural production data, or the effect of The New Tax System (TNTS) on retail turnover figures. As it is not possible to identify the cause, timing or magnitude of most irregular effects, they cannot normally be individually removed from the series (except for some large irregular effects). Instead, the ABS uses a generalised statistical procedure known as filtering, or smoothing, to remove the short term residuals from the series, as described further in the section Calculating the Seasonal adjustment is the process by which calendar related effects are removed from the original series. A seasonally adjusted series, then, will be the combination of the underlying trend of the series and the irregular factors. Whether the seasonally adjusted series is a good estimate of the trend will depend on the strength of the irregulars in the series. For example, as discussed above, the Monthly Retail Turnover series has strong seasonal factors (there are large spikes each December due to Christmas trading). When the series is seasonally adjusted, these factors are removed, as shown below. The seasonally adjusted series can be seen to be quite similar to the underlying trend of the series. This is because the strength of the irregulars is generally small relative to that of the trend component (except in mid-2000 where a strong GST-related irregular can be observed). In comparison, the seasonally adjusted Unemployed Females series shown below is relatively more volatile than retail sales and is therefore not as clear an indicator of the underlying direction of the series. The actual process for removing the calendar related effects is complex and will not be discussed in this article. Users who are interested in a technical explanation are referred to Information An Introductory Course on Time Series Analysis (Cat. no. 1346.0.55.001). In general, there are two approaches to the seasonal adjustment process: ● forward factors - where seasonal factors are estimated once a year and then kept fixed for a 12 month period; or ● concurrent adjustment - where the seasonal factors are re-estimated each time there is new data available. Most ABS series use the forward factor approach. The ABS recommends that at least seven years of data be used to ensure that the results of the seasonal adjustment process are reliable, as it can take some time for seasonal patterns to evolve. Experimental estimates are possible with fewer observations, although a minimum of five years of data is preferable. Once the original data has been seasonally adjusted, the underlying trend of that series can be estimated by removing the irregular effects. This can be done by applying a moving average to the seasonally adjusted series. The ABS uses a Henderson moving average because it is able to dampen the irregular component without distorting the timing of turning points, it is relatively reliable and is easy to produce. A 7-term Henderson moving average is generally used to smooth quarterly series while a 13-term is used for monthly series. This means that there are seven and thirteen data points respectively used to calculate the smoothed figure. The Henderson moving average is described as being 'centred' because the resulting values are placed in the centre of the series. For example, in the case of the 7-term moving average, the smoothed figure at time t is calculated using three past data points (up to time t-3), the data point at time t, and three future data points (up to time t+3), and the resulting moving average value is placed at time t. The mathematical formula for the 7-term Henderson moving average is: where A is the smoothed data at time t (trend); are the weights; and are the seasonally adjusted data points. The weights assign an importance to each data point in the calculation. There are specific techniques for deriving weights for different moving averages. For the 7-term symmetric Henderson moving average, the weighting pattern is: (-0.059, 0.059, 0.294, 0.412, 0.294, 0.059, -0.059) That is, the trend figure at the time is calculated as: A[t] = - 0.059x[t-3] + 0.059x[t-2] + 0.294x[t-1] + 0.412x[t] + 0.294x[t+1] + 0.059x[t+2] - 0.059x[t+3] For example, suppose the following hypothetical data corresponds to seasonally adjusted quarterly jelly bean production data from the factory discussed in an earlier example. HYPOTHETICAL JELLY BEAN PRODUCTION ('000) Sep Dec Mar Jun Sep Dec Mar Jun Sep Dec Mar Jun Sep Dec Mar Seasonally Adjusted 291.2 300.3 313.7 318.4 320.0 309.7 298.2 298.1 283.9 285.5 286.2 283.0 277.1 285.2 295.6 Henderson Moving Average (Trend) 318.7 317.3 310.5 301.6 293.3 288.1 285.6 283.7 281.7 The trend figure for June 1999 would be calculated as: A[t] = - (0.059 × 291.2) + (0.059 × 300.3) + (0.294 × 313.7) + (0.412 × 318.4) + (0.294 × 320.0) + (0.059 × 309.7) - (0.059 × 298.2) = 318.7 The trend series can only be calculated using this formula for the middle time periods because there are insufficient data points available at the ends of the series. That is, the above table shows that the latest time period for which trend data are available is June 2001 (281.7). To calculate a trend figure for September 2001 would require data for June 2002, which is yet to be collected. This is known as the end point problem and can be overcome by using asymmetric Henderson moving averages. That is, instead of using the symmetric weights provided above, asymmetric weighting patterns (which do not require the three future data points) are used. The asymmetric weighting patterns vary for each time period and across data series, hence have not been included here. The appropriate asymmetric weighting patterns have been used to calculate a trend figure for September 2001, December 2001 and March 2002 in the above table and the following graph shows the full jelly bean production series. It can be seen that the seasonally adjusted jelly bean production data is relatively stable with respect to the trend series, and that the factory's production of jelly beans is slowly starting to increase after declining since September 1999. When analysing seasonally adjusted or trend data, there are a number of important issues that users need to be aware of. These are described below. Revisions to the seasonally adjusted and trend data are common and can occur for a number of reasons. One of the major reasons for trend data revision is the 'end point problem' discussed earlier. That is, since there are insufficient data points available toward the ends of the series to use the standard smoothing technique, asymmetric Henderson moving averages are used. When the next data point becomes available, the type of moving average used (i.e. symmetric or asymmetric) is shifted across to the next time period, which results in changes to the trend estimates. For example, the following table shows that when data for the March 2002 reference period is released, the September 2001, December 2001 and March 2002 trend estimates are calculated using asymmetric Henderson moving averages. When data for the June 2002 reference period become available, the September 2001 trend estimates are re-calculated using the standard symmetric moving average. Furthermore, the availability of a new data point affects the values calculated in December 2001 and March 2002, which are also revised. END POINT PROBLEM, Timing of Symmetric and Asymmetric Moving Averages Trend Estimates Reference Period Jun 2000 Sep 2000 Dec 2000 Mar 2001 Jun 2001 Sep 2001 Dec 2001 Mar 2002 Jun 2002 Sep 2002 Mar 2002 Sym Sym Sym Sym Sym A - Sym A - Sym A - Sym Jun 2002 Sym Sym Sym Sym Sym Sym A - Sym A - Sym A - Sym Sep 2002 Sym Sym Sym Sym Sym Sym Sym A - Sym A - Sym A - Sym As a result of the end point problem, the most current trend estimate can be revised up to three times in a quarterly series and up to six times in a monthly series. Typically, the largest trend revisions occur the first time new data are available and are generally negligible after the first revision for quarterly series and after the third revision for monthly series. Revisions can also be made to the seasonally adjusted series as a result of evolving seasonal patterns and/or trading day effects. Unlike trend revisions, which typically affect the last few data points, the method used to revise seasonal factors results in a minimum of five years worth of seasonally adjusted data being affected. Any revisions which are made to the seasonally adjusted data will flow through to trend series revisions (although they have a small impact on the trend data). Similarly, any amendments made to the original data will flow through to both the seasonally adjusted and trend series. Generally, the degree of revision of the seasonally adjusted and trend data depends on the irregularity of the original series. Long spans of time series data are rarely consistent. They are prone to the effects of structural changes, such as changes in data item definitions, changes in the coverage of the collection, changes in administrative practices, technological innovation and social changes. Such changes can result in an abrupt discontinuity in the underlying level of the original series. This effect is generally referred to as a 'trend break'. For example, consider the new motor vehicle sales series shown below. There is a clear and abrupt increase in the underlying level of the series between June and July 2000 due to the introduction of TNTS. A 'seasonal break' can occur when the seasonal behaviour of the series abruptly changes from one year to the next. For example, consider the Commonwealth Government benefit payments series below. This series includes education and training payments such as Austudy. The mild seasonal pattern which can be observed from 1990 to 1995 changes abruptly in 1996 when the timing of Austudy payments changed. The seasonal pattern changed again in 1998 when the timing of fortnightly government payments were changed to be made on any day of the week. The series also shows a trend break in 2000 due to the Sole Parents Pension being taken over by Centrelink (and the corresponding data being included in another series). Time series data can be subject to large, one-off effects. These effects will remain in the seasonally adjusted series and can distort the trend path if they are not corrected during the trending For example, the following graph shows extremes in the number of Commonwealth wage and salary earners during the conduct of the 1986 Census, the 1987 Federal election, the 1988 Referendum, the 1991 Census and the 1993 Federal election, due to the employment of additional temporary staff. More recent elections and censuses have possibly used different employment arrangements which do not appear as large extremes. If these extremes were not taken into consideration during the trending process, the trend line would be distorted, as shown below using the 1991 Census as an example. The original, seasonally adjusted and trend series are all useful measures for time series analysts. They do, however, serve different purposes and it is important to be able to distinguish which is the most appropriate series to use under different circumstances. Often, users are interested in analysing the underlying direction of the series, unobscured by any seasonal or irregular effects, and in detecting possible turning points in the series. In such circumstances, the trend series would be the most appropriate to use as all seasonal and irregular effects have been removed. While the trend series provides useful information about the underlying direction of the data, it does not provide any information about the seasonal patterns in the data. Some users may be interested in, for example, the relative magnitudes of the seasonal peaks and troughs from year to year, or how the seasonal effects have evolved over the years. In this case, the original data, which has not had the seasonal effects removed, would be the most appropriate. Users who are interested in comparing one month to the next may find the seasonally adjusted data more useful than the original as it is not obscured by seasonal patterns. Some users may be interested in which months are the most or least irregular, or how much the irregularity is changing over time. Since the irregularity is removed from the trend series, the user would be interested in analysing the seasonally adjusted data. Other users may be interested in measuring the magnitude of the irregular so as to line it up with economic events or a change in government policy. For example, users may be interested in the magnitude of the impact of the Goods and Services Tax on retail turnover figures. Again, seasonally adjusted data would be the most appropriate for such purposes. Time series data are collected by a wide range of government and non-government organisations and the concepts described above, regarding the analysis of such data, are not solely applicable to ABS This article describes basic time series analysis concepts. It does not explain the complex statistical techniques actually used. ABS statistical consultants are available to assist external organisations with analysis of non-ABS time series. For further information and advice, contact the manager of Statistical Consultancy on (08) 9360 5144. Information Paper: A Guide to Smoothing Time Series - Estimates of "Trend" (Cat. no. 1316.0) Information Paper: Time Series Decomposition - An Overview (Cat. no. 1317.0) Information Paper: An Introductory Course on Time Series Analysis (Cat. no. 1346.0) Information Paper: A Guide to Interpreting Time Series - Monitoring "Trends" An Overview (Cat. no. 1348.0) Australian Economic Indicators, April 1991 (Cat. no. 1350.0) - Article titled "Picking Turning Points in the Economy" Australian Economic Indicators, March 1992 (Cat. no. 1350.0) - Article titled "Smarter Data Use" Australian Economic Indicators, January 1995 (Cat. no. 1350.0) - Article titled "A Guide to Interpreting Time Series" ││ │ ││ │ ││ (188Kb) │ This page last updated 10 September 2007
{"url":"http://www.abs.gov.au/AUSSTATS/abs@.nsf/featurearticlesbyReleaseDate/CFA19371D1BFAB40CA256F2A000FEB10?OpenDocument","timestamp":"2014-04-20T11:55:04Z","content_type":null,"content_length":"85860","record_id":"<urn:uuid:ea400c84-846e-42fd-a1db-d634d5a88010>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
lowest weight representation of loop groups up vote 6 down vote favorite I am trying to understand lowest representations of loop groups as developed in Pressley and Segal's book. Specifically I want to be able to compute the weight spaces that appear in a lowest weight representation. I realize there is a formula for this -my question is along the lines of how to apply the formula correctly. I tried to do a small example with $LSL_3$ (actually $\mathbb{C}^\times_{\theta} \ltimes \tilde LSL_3$) and something fishy happened so I was hoping someone could point out my mistake. The maximal torus is $\mathbb{C}^\times_\theta \times T \times \mathbb{C}^\times$ where $\mathbb{C}^\times_\theta$ is the loop rotations and the other $\mathbb{C}^\times$ is central. The fundamental weights are $w_0 = (0,0,1)$, $w_1 = (0,-\omega_1,1)$, $w_2 = (0, -\omega_2,1)$. The positive roots are $(0,\alpha_1,0), (0,\alpha_2,0),(1,-\alpha_3,0)$. Where $\omega_i,\alpha_i$ are fundamental weights and positive roots of $SL_3$. Pressley and Segal normalize the Killing form so $\langle H_{\alpha_i},H_{\alpha_i}\rangle = 2$. Choosing coordinates $H_{\alpha_1} = [1\ \ 0]^T$, $H_{\alpha_2} = [0\ \ 1]^T$, $\alpha_1 = [2 \ \ -1]$, $\alpha_2 = [-1 \ \ 2]$, $\omega_1 = [1\ \ 0]$, $\omega_2 = [0\ \ 1]$ and the restriction of the Killing form to the torus is just the Cartan matrix $(B_{11} = B_{22} = 2, B_{12} = B_{21} = -1)$. I'm interested in the representation $V_{\tilde \lambda}$ of lowest weight $\tilde\lambda = (0, - \alpha_3,3) = w_0 + w_1 + w_2$. Let $\tilde \mu = (m,\mu, 3)$ be a weight of $V_{\tilde \lambda}$. According to Loop Groups (11.1.1) it is the case that $\tilde \mu - \tilde \lambda = (m,\mu +\alpha_3, 0)$ is a sum of positive roots. Viewing $B$ as a map from co-characters to characters and noting that $\alpha_i = BH_{\alpha_i}$ it follows that we can write $\mu = B[a\ \ b]^T$ for some $a,b$. According to (9.3.7) on pg 180 of Loop Groups the $\tilde\mu =(m, \mu,3)$ which satisfy $3\langle \mu,\mu\rangle - 6m = 6 = 3\langle -\alpha_3,-\alpha_3\rangle$ appear among the weight of $V_{\tilde \lambda}$. This says $m = {1 \over 2}\langle\mu,\mu\rangle-1 = {1\over 2}[a\ \ b]B[a\ \ b]^T - 1 = a^2 + b^2- ab -1$. Taking $a,b = 0$ produces the weight $\tilde \mu = (-1, 0, 3)$ but then $\tilde \mu - \tilde \lambda = -(1,-\alpha_3,0)$. Which is certainly not a sum of positive roots. So what gives? rt.representation-theory lie-groups 1 I'm tempted in a situation like this to suggest that you try contacting one or both authors by email. That doesn't invariably get a helpful response (or any response), but on the other hand the authors of books have a vested interest in clarifying things for their readers. – Jim Humphreys May 5 '11 at 14:32 Why is the equation (9.3.7) as you have it? I would have thought $||\mu||^2-2mh=||\widetilde \lambda||^2$ should be $\langle \mu, \mu \rangle - 6m = \langle -\alpha_3, -\alpha_3 \rangle=2$ – charris May 6 '11 at 0:06 @charris My original thinking was that for a level $h$ representation you have to take $h$ times the standard pairing. But perhaps its as you say; that would certainly prevent negative values of $m$. – solbap May 6 '11 at 19:51 @charris actually that business of taking $h$ times the standard pairing is something I confused with representation of loop groups of tori (9.5.10) so I think you are absolutely right. – solbap May 6 '11 at 20:31 add comment 1 Answer active oldest votes The formula for the invariant bilinear form is given in $(4.9.3)$ on page 64 $$\langle (x_1,\xi_1, y_1),(x_2,\xi_2,y_2) \rangle=\langle \xi_1, \xi_2 \rangle - x_1 y_2-y_1x_2$$ As I mentioned in the comments, $(9.3.7)$ becomes then $||\mu||^2-6m=2$. So your last equation would be $m=\frac{1}{3}(a^2-ab+b^2)-\frac{1}{3}$. As you said, there's no more worries about up vote 4 negative $m$ and as a consistency check, for $m=0$, the solutions for $[a \ \ b]$ are $[\pm 1 \ \ 0]$, $[0 \ \ \pm 1]$, $[1 \ \ 1 ]$, and $[-1 \ \ -1]$. Applying, $B$ gives you the six down vote weights in the Weyl orbit of $-\alpha_3$ (the roots). add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory lie-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/63978/lowest-weight-representation-of-loop-groups?sort=oldest","timestamp":"2014-04-18T19:05:51Z","content_type":null,"content_length":"57951","record_id":"<urn:uuid:0d4b989e-0a42-4bdf-a07a-a0911f8bb061>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Simulations of Emission and Bistatic Scattering from Soils with Rough Surfaces of Exponential Correlation Functions. Conference Proceeding Numerical Simulations of Emission and Bistatic Scattering from Soils with Rough Surfaces of Exponential Correlation Functions. 01/2008; DOI:10.1109/IGARSS.2008.4780041 In proceeding of: IEEE International Geoscience & Remote Sensing Symposium, IGARSS 2008, July 8-11, 2008, Boston, Massachusetts, USA, Proceedings ABSTRACT In this paper, we report on the polarimetric active and passive microwave remote signatures for exponential correlation function surfaces. Applications are in soil moisture problems at the frequencies L, C and X band. We use the same physical parameters of rms heights and correlation lengths at the three frequencies. Results for 2D case with rms height up to 2 wavelengths at X band are shown. The hybrid UV-SMCG method for RWG basis is also used to accelerate MoM solution. Comparisons are made with SPM, KA and AIEM predictions. We also compare backscattering between horizontal and vertical polarization cases at different rms heights with exponential correlation. At small rms height, the backscattering for vertical polarization case is larger than that for horizontal polarization case. On the other hand, at large rms height, the backscattering for horizontal polarization case is larger. [show abstract] [hide abstract] ABSTRACT: 1] A UV multilevel partitioning method (UV-MLP) is developed to solve scalar wave three-dimensional (3-D) scattering problem. The method consists of setting up a table of transmitting and receiving block size and their separation using fast coarse-coarse sampling. For a specific scattering problem with given geometry, the scattering structure is partitioned into multilevel blocks. By looking up the rank in the static problem, the impedance matrix for a given transmitting and receiving block is expressed into a product of U and V matrix. In this paper the method is illustrated by applying to a 3-D scattering problem of random nonpenetrable rough surface. The cases of Dirichelt and Neumann boundary conditions are treated. Numerical simulation results are illustrated. For 65,536 boundary unknowns on a rough surface, and using a single processor of 2.66 GHz, it takes about 34 CPU min and 1.8 Gb of memory to compute the solution using conjugate gradient iterations and multilevel UV to accelerate the matrix-column vector multiplication. (2004), Wave scattering with UV multilevel partitioning method: 2. Three-dimensional problem of nonpenetrable surface scattering, Radio Sci., 39, RS5011, doi:10.1029/2003RS003010. Radio Science 01/2004; 39(5). · 1.00 Impact Factor [show abstract] [hide abstract] ABSTRACT: This paper presents a model of microwave emissions from rough surfaces. We derive a more complete expression of the single-scattering terms in the integral equation method (IEM) surface scattering model. The complementary components for the scattered fields are rederived, based on the removal of a simplifying assumption in the spectral representation of Green's function. In addition, new but compact expressions for the complementary field coefficients can be obtained after quite lengthy mathematical manipulations. Three-dimensional Monte Carlo simulations of surface emission from Gaussian rough surfaces were used to examine the validity of the model. The results based on the new version (advanced IEM) indicate that significant improvements for emissivity prediction may be obtained for a wide range of roughness scales, in particular in the intermediate roughness regions. It is also shown that the original IEM produces larger errors that lead to tens of Kelvins in brightness temperature, which are unacceptable for passive remote sensing. IEEE Transactions on Geoscience and Remote Sensing 02/2003; · 3.47 Impact Factor [show abstract] [hide abstract] ABSTRACT: In the numerical Maxwell-equation model (NMM3D) of rough-surface scattering, we solve Maxwell equations in three dimensions to calculate emissivities for applications in passive microwave remote sensing of soil and ocean surfaces. The difficult cases for soil surfaces are with exponential correlation functions when the surfaces have fine-scale structures of large slopes. The difficulty for ocean surfaces is that because the emissivities are close to that of a flat surface, the emissivities have to be calculated accurately to correctly assess the rough-surface effects. In this paper, the accuracies of emissivity calculations are improved by using Rao-Wilton-Glisson basis functions. We further use sparse matrix canonical method to solve the matrix equation of Poggio-Miller-Chang-Harrington-Wu integral equations. Energy conservation checks are provided for the simulations. Comparisons are made with results from the pulse basis function. Numerical results are illustrated for soil and ocean surfaces respectively with exponential correlation function and ocean spectrum. The emissivities of soil are illustrated at both L- and C-bands and at multiple incidence angles for the same physical roughness parameters. The brightness temperatures for ocean surfaces are illustrated for cases with various wind speeds. We compare results with those from the sparse matrix methods. Comparisons are also made with experimental emissivity measurements of soil surfaces. Parallel computation is also implemented. Lookup tables of emissivities based on NMM3D are provided. IEEE Transactions on Geoscience and Remote Sensing 09/2004; · 3.47 Impact Factor Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
{"url":"http://www.researchgate.net/publication/220823477_Numerical_Simulations_of_Emission_and_Bistatic_Scattering_from_Soils_with_Rough_Surfaces_of_Exponential_Correlation_Functions","timestamp":"2014-04-23T10:18:31Z","content_type":null,"content_length":"151567","record_id":"<urn:uuid:b807ca6e-e3c9-40c2-918d-c47794126a05>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Sampling the energy landscape: thermodynamics and rates Stationary points of the potential energy surface provide a natural way to coarse-grain calculations of thermodynamics and kinetics, as well as a framework for basin-hopping global optimisation. Thermodynamic properties can be obtained from samples of local minima using the basin-sampling approach, and kinetic information can be extracted if the samples are extended to include transition states. Using statistical rate theory a minimum-to-minimum rate constant can be associated with each transition state, and phenomenological rates between sets of local minima that define thermodynamic states of interest can be calculated using a new graph transformation approach. Since the number of stationary points grows exponentially with system size a sampling scheme is required to produce representative pathways. The discrete path sampling approach provides a systematic way to achieve this objective once a single connected path between products and reactants has been located. In large systems such paths may involve dozens of stationary points of the potential energy surface. New algorithms have been developed for both geometry optimisation and making connections between distant local minima, which have enabled rates to be calculated for a wide variety of systems.
{"url":"http://www.newton.ac.uk/programmes/SCB/abstract1/wales.html","timestamp":"2014-04-16T11:23:20Z","content_type":null,"content_length":"3232","record_id":"<urn:uuid:a56b7b6b-4a3a-4987-8b3e-4fc93f699a2c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
A vending machine randomly dispenses four different types of Author Message A vending machine randomly dispenses four different types of [#permalink] 21 Mar 2008, 22:18 5% (low) Question Stats: prasannar (00:00) correct Director 100% (01:50) Joined: 05 Jan 2008 wrong Posts: 709 based on 1 sessions Followers: 2 A vending machine randomly dispenses four different types of fruit Kudos [?]: 74 [0], candy. There are twice as many apple candies as orange candies, given: 0 twice as many strawberry candies as grape candies, and twice as many apple candies as strawberry candies. If each candy cost $0.25, and there are exactly 90 candies, what is the minimum amount of money required to guarantee that you would buy at least three of each type of candy? A $3.00 B $20.75 C $22.50 D $42.75 E $45.00 Re: PS: Vending Machine [#permalink] 21 Mar 2008, 23:04 Expert's post from: "There are twice as many apple candies as orange candies, Joined: 17 Nov 2007 twice as many strawberry candies as grape candies, and twice as Posts: 3599 many apple candies as strawberry candies." Entrepreneurship, we can conclude following ratio: 4:2:2:1 or 40:20:20:10 To satisfy the condition of buying at least three of Schools: Chicago (Booth) - Class of each type of candy, we have to buy N=40+20+1=61 GMAT 1: 750 Q50 V40 So, I picked B. Followers: 324 BTW, 90*0.25$=22.5$, Therefore, C,D,E are out. Kudos [?]: 1573 [0], given: 354 A is too small. So B remains NEW! GMAT ToolKit 2 (iOS) / GMAT ToolKit (Android) - The must have GMAT prep app | PrepGame Re: PS: Vending Machine [#permalink] 22 Mar 2008, 01:30 prasannar wrote: A vending machine randomly dispenses four different types of fruit SVP candy. There are twice as many apple candies as orange candies, twice as many strawberry candies as grape candies, and twice as Joined: 04 May 2006 many apple candies as strawberry candies. If each candy cost $0.25, and there are exactly 90 candies, what is the minimum amount of Posts: 1943 money required to guarantee that you would buy at least three of each type of candy? Schools: CBS, Kellogg A $3.00 B $20.75 Followers: 14 C $22.50 D $42.75 E $45.00 I confused this: "three of each type of candy", if it means "each type three" I think A win. What do you think? CEO Re: PS: Vending Machine [#permalink] 22 Mar 2008, 01:43 Joined: 17 Nov 2007 Expert's post Posts: 3599 I've found my mistake Concentration: at least three of each type of candy: 3 apple, 3 orange, 3 strawberry, and 3 grape candies. Other N=40+20+20+3=83 Schools: Chicago P=83*0.25=20.75$ (Booth) - Class of 2011 _________________ GMAT 1: 750 Q50 V40 NEW! GMAT ToolKit 2 (iOS) / GMAT ToolKit (Android) - The must have GMAT prep app | PrepGame Followers: 324 Kudos [?]: 1573 [0], given: 354 Re: PS: Vending Machine [#permalink] 22 Mar 2008, 18:41 walker wrote: I've found my mistake Joined: 04 May 2006 at least three of each type of candy: 3 apple, 3 orange, 3 strawberry, and 3 grape candies. Posts: 1943 Schools: CBS, P=83*0.25=20.75$ Followers: 14 Why not N = 10 + 20 + 20 + 3 ? You are deserved to 51! Re: PS: Vending Machine [#permalink] 22 Mar 2008, 19:46 sondenso wrote: walker wrote: I've found my mistake prasannar at least three of each type of candy: 3 apple, 3 orange, 3 strawberry, and 3 grape candies. Director N=40+20+20+3=83 Joined: 05 Jan 2008 Posts: 709 Why not N = 10 + 20 + 20 + 3 ? Followers: 2 You are deserved to 51! Kudos [?]: 74 [0], given: 0 Sondenso, What is the guarantee that the last 3 are Grape Candies, they could be very well Apple[since Orange and Strawberry are completely used as we picked 20,20 of them] thus to make sure there are 3 of each, we need to make sure, we pick up ALL of the rest of the candies but for the least number variant. Hope this helps Re: PS: Vending Machine [#permalink] 22 Mar 2008, 21:20 prasannar wrote: sondenso wrote: walker wrote: sondenso I've found my mistake SVP at least three of each type of candy: 3 apple, 3 orange, 3 strawberry, and 3 grape candies. Joined: 04 May 2006 N=40+20+20+3=83 Posts: 1943 Schools: CBS, Kellogg Why not N = 10 + 20 + 20 + 3 ? Followers: 14 You are deserved to 51! What is the guarantee that the last 3 are Grape Candies, they could be very well Apple[since Orange and Strawberry are completely used as we picked 20,20 of them] thus to make sure there are 3 of each, we need to make sure, we pick up ALL of the rest of the candies but for the least number variant. Hope this helps Hey, this concept have just come up to me. I tried to understand the logic. But Honestly, I did not understand. Do you guys mind writting it in more detail? Many thanks. Re: PS: Vending Machine [#permalink] 22 Mar 2008, 21:56 This post received Expert's post walker we have: CEO the number of apple candies = Na=40 Joined: 17 Nov 2007 the number of orange candies = No=20 Posts: 3599 the number of strawberry candies= Ns = 20 Concentration: the number of grape candies = Ng = 10 Other our question: "what is the minimum amount of money Schools: Chicago required to guarantee (Booth) - Class of 2011 that you would buy at least three of each type of candy?" GMAT 1: 750 Q50 V40 If we get 40 candies, we can get only 40 apple candies and therefore N=40 does not guaranty that we will have always 3 of each type. Followers: 324 If we get 60 candies, we may get 40 apple candies and 20 orange candies - no guarantees Kudos [?]: 1573 [1] If we get 80 candies, we may get 40 apple candies, and 20 orange candies, and 20 strawberry - no guarantees , given: 354 If we get 82 candies, we may get 40 apple candies, and 20 orange candies, 20 strawberry, and 2 grape candies - no guarantees If we get 83 candies, we will get at least three of each type of candy: 3 apple, 3 orange, 3 strawberry, and 3 grape candies, that is, we will get 3 of each type regardless chance. NEW! GMAT ToolKit 2 (iOS) / GMAT ToolKit (Android) - The must have GMAT prep app | PrepGame Re: PS: Vending Machine [#permalink] 22 Mar 2008, 23:11 walker wrote: sondenso we have: the number of apple candies = Na=40 SVP the number of orange candies = No=20 the number of strawberry candies= Ns = 20 Joined: 04 May 2006 the number of grape candies = Ng = 10 Posts: 1943 our question: "what is the minimum amount of money required to guarantee that you would buy at least three of each type of candy?" Schools: CBS, If we get 40 candies, we can get only 40 apple candies and therefore N=40 does not guaranty that we will have always 3 of each type. If we get 60 candies, we may get 40 apple candies and 20 orange candies - no guarantees Followers: 14 If we get 80 candies, we may get 40 apple candies, and 20 orange candies, and 20 strawberry - no guarantees If we get 82 candies, we may get 40 apple candies, and 20 orange candies, 20 strawberry, and 2 grape candies - no guarantees If we get 83 candies, we will always get at least three of each type of candy: 3 apple, 3 orange, 3 strawberry, and 3 grape candies, that is, we will get 3 of each type regardless chance. Walker, thanh you Probabilty and Ratios [#permalink] 15 Sep 2011, 15:02 A vending machine randomly dispenses four different types of fruit candy. There are twice as many apple candies as orange candies, twice as many strawberry candies as grape Joined: 10 Aug 2011 candies, and twice as many apple candies as strawberry candies. If each candy cost $0.25, and there are exactly 90 candies in the machine, what is the minimum amount of money required to guarantee that you would buy at least three of each type of candy? Posts: 4 A $3.00 Location: United B $20.75 States C $22.50 D $42.75 GPA: 3.83 E $45.00 Followers: 0 Kudos [?]: 5 [0], given: 1 Re: Probabilty and Ratios [#permalink] 15 Sep 2011, 15:19 This post received This question isn't that tough when you break it all down. We are given the following ratios: pike A O S G Current Student 2 1 Joined: 08 Jan 2009 Consolidate these ratios: Posts: 335 GMAT 1: 770 Q50 V46 A O S G Followers: 21 4 2 2 1 Kudos [?]: 75 [1] , We have 90 items, so using the ratio above we have these pieces: given: 7 A O S G Now we need at least three of each type. Think about the worst case, if we buy 40, we might get all of A. If we buy 60, we might get all of A and O, if we buy 80 we might get A, O and S, so we need to buy another three and now we have guaranteed we have A,O,S and G. 83 * $0.25 = $20.75 My Debrief Re: Probabilty and Ratios [#permalink] 15 Sep 2011, 21:16 Expert's post sameer1986 wrote: A vending machine randomly dispenses four different types of fruit candy. There are twice as many apple candies as orange candies, twice as many strawberry candies as grape candies, and twice as many apple candies as strawberry candies. If each candy cost $0.25, and there are exactly 90 candies in the machine, what is the minimum amount of money required to guarantee that you would buy at least three of each type of candy? A $3.00 B $20.75 C $22.50 D $42.75 VeritasPrepKarishma E $45.00 Veritas Prep GMAT Given: Apple:Orange = 2:1 Joined: 16 Oct 2010 Strawberry:Grape = 2:1 Posts: 4170 Apple: Strawberry = 2:1 Location: Pune, India So if we have 1 Grape candy, we have 2 strawberry ones and 4 apple ones, which means we have 2 Orange candies. Followers: 894 So in all we would have 1+2+4+2 = 9 candies Kudos [?]: 3785 [0], Since we actually have 90 candies, we must have 10 Grape, 20 Strawberry, 40 Apple and 20 Orange candies. given: 148 If we need at least three of each type and the machine dispenses them randomly, we have to take the worst case scenario (we will have to buy maximum number of candies). In the worst case, we will get the grape candies at the end. We will end up buying all other 80 candies and then get 3 grape candies because only grape candies will be left. So we will need to buy 83 ($20.75) candies. Veritas Prep | GMAT Instructor My Blog Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Manager Re: PS: Vending Machine [#permalink] 18 Sep 2011, 11:57 Joined: 08 Sep 2011 B. (40+20+20+3)* .25 = 20.75 Posts: 77 I also missed the wording 3 of each kind. Finance, Strategy Followers: 3 Kudos [?]: 1 [0], given: 5 gmatclubot Re: PS: Vending Machine [#permalink] 18 Sep 2011, 11:57
{"url":"http://gmatclub.com/forum/a-vending-machine-randomly-dispenses-four-different-types-of-61628.html","timestamp":"2014-04-17T07:28:58Z","content_type":null,"content_length":"199370","record_id":"<urn:uuid:37ff5d7b-6785-45d1-a922-3a7db8910550>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Stolarsky Approximations to Popular Means The Stolarsky mean provides approximations to popular means, which shows that it generalizes some of them and that these means are strictly ordered by strict inequality over on . The Stolarsky mean is is derived from the mean value theorem and its form depends upon the independent parameter . As the parameter is varied, approximations can be made to popular means. Most approximations become exact for some value of ; the Stolarsky is a true generalization for those means. Strict ordering of the Pythagorean Means is the basis of many proofs in number theory. Note that, although we consider means of two positive numbers, we only need to consider for , because each of these means is a homogeneous function: where . Like the Hölder mean, the Stolarsky mean parameter will approximate, match exactly, or match in the limit many popular means. The following table contrasts these two generalized forms. Bookmarks provide the most uncluttered way to examine the estimation error for a given mean. Additional reference can be obtained by clicking a mean's name next to its checkbox.
{"url":"http://demonstrations.wolfram.com/StolarskyApproximationsToPopularMeans/","timestamp":"2014-04-18T03:01:22Z","content_type":null,"content_length":"45279","record_id":"<urn:uuid:80a38a81-aae7-4807-a72e-3d5aae71ab7c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Are Christoffel symbols measurable? Me: "The Rieman tensor already is an observable--masny of its components correspond to the componen6ts of the tidal force." Ben: "Once you choose a frame, you can measure its components in that frame, which are scalars" The components of the Riemann tensor are not scalars. Perhaps you think scalars means "single numbers". That is not what it means. Scalars do not change values under coordinate transformations. The components of the Riemann tensor can change values under coordinate transformations. If I choose some vector fields, call them [itex]a^\mu, b^\mu, c^\mu[/itex], then the quantity [tex]R_{\mu\nu\rho\sigma} a^\mu b^\nu c^\rho b^\sigma[/tex] is certainly a scalar, and it measures the components of the Riemann tensor along the given vector fields. This is analogous to computing matrix elements in quantum mechanics, if you've done that. Matrix elements are numbers, not operators; but they tell you how to construct an operator in a given basis.
{"url":"http://www.physicsforums.com/showthread.php?p=3768216","timestamp":"2014-04-20T14:24:22Z","content_type":null,"content_length":"78420","record_id":"<urn:uuid:cbf052cc-7c08-41d1-9e11-a6b2338cf893>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Example (from Program LA_HPGVX_EXAMPLE) Next: LA_SBGV / LA_SBGVD / Up: Generalized Symmetric Eigenvalue Problems Previous: Arguments &nbsp Contents &nbsp Index The results below are computed with Matrices LA_HPGV. The call: CALL LA_HPGVX( AP, BP, W, 2, Z= Z, IL=4, IU=5, M= M, & IFAIL=IFAIL, ABSTOL=1.0E-3_wp ) Note: wp is a work precision; wp ::= KIND(1.0) 1.0D0) W, M, IFAIL and Z on exit: The last two eigenvalues of the problem The two eigenvectors converged successfully and are: Susan Blackford 2001-08-19
{"url":"http://www.netlib.org/lapack95/lug95/node289.html","timestamp":"2014-04-18T23:34:57Z","content_type":null,"content_length":"7099","record_id":"<urn:uuid:bd58e1ef-c538-4824-96fb-d1cefe9067c3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Western Illinois University Western Illinois University: Macomb Campus Audience Menu Web Tools and Search Bar Top Navigation Department of Mathematics undergraduate research-math 444 Name: Seyfi Turkelli Interest: Number Theory, Arithmetic Algebraic Geometry Description: I welcome all the students who enjoy learning mathematics and solving problems. My areas of interest are number theory and arithmetic algebraic geometry. In short, number theory is the study of integers and the (algebraic or analytic) objects that is made out of them. An example of a well known problem in number theory is the following: Are there infinitely many pairs of prime numbers that differ by 2? Very recently, it has been proven that there are infinitely many pairs of prime whose difference is less than 70,000,000. One wants to bring this 70-million bound all the way down to 2! Arithmetic algebraic geometry is the study of polynomial equations and their solutions in integers. For example, we know from high school math classes that we can find all non-zero integers x, y, z such that x^2 + y^2 = z^2-- that's to say this polynomial equation has infinitely many nontrivial solutions in integers--. Can you think of one? In 1637, Pierre de Fermat conjectured one cannot find ANY triple of all non-zero integers x, y, z such that x^n + y^n = z^n for n>2. This problem was solved by Andrew Wiles in 1993-- almost 400 years later! If these problems sound interesting to you, or if you'd simply like to talk about mathematics, please stop by my office. Maybe, we can find a problem for you!
{"url":"http://www.wiu.edu/cas/math/academic/ur_turkelli.php","timestamp":"2014-04-17T12:32:25Z","content_type":null,"content_length":"13376","record_id":"<urn:uuid:e6542d8f-1c3c-4947-a973-435a72dcfbcc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00227-ip-10-147-4-33.ec2.internal.warc.gz"}