Calculating Glass Thickness for Aquariums

Author: Warren Stilwell
First published in Aquarium World, February 2001

For too long now the thickness of glass required to make an aquarium has been a mystery. There are various tables and guidelines that specify the thickness of glass for a given size of aquarium, but the major drawback with this information is that it gives no indication of the safety factor behind the specified thickness, or of how the suggested thickness was calculated. This article is intended to help those who are serious about aquarium design to calculate the correct glass thickness based on whatever safety factor is acceptable to them. Several other points beyond the formula itself are also covered. This information is intended as a guide only, and is in no way a guaranteed formula for success. It is based solely on proven stress calculation methods and does not account for manufacturing defects or construction faults.

The Nature of Glass

Glass is a totally brittle substance. It will bend a very small amount, but unlike most metals it has no capacity to deform: it bends to a point and then breaks. It is this bending stress that is the focus when calculating thickness. Glass also has a wide variability in strength; testing samples of uniform manufacture has proved this (see the glass specifications below: tensile strength 19.3 to 28.4 MPa). Glass is weak in tension, is elastic up to its breaking point, and has no ductility. It cannot be permanently deformed, and it gives no pre-warning of impending failure by showing a permanent set after an excessive load has been removed. An important characteristic is its ability to carry an impulse load of approximately twice its rated load (i.e. banging the aquarium quite hard with your hand). This is inevitably what saves many aquariums when they are accidentally knocked.
The variability in the strength of glass, due to limitations of the manufacturing process, means a suitable safety factor must be used when calculating glass thickness. The factor commonly used is 3.8. While not a perfect guarantee, it will remove all risk apart from that of damaged or very poor quality glass. The main damage that causes failures is scratches and chips. A point load on the glass surface will also cause it to fail; for this reason a soft packer like polystyrene is used under aquariums to prevent point loading from dirt and grit. Likewise, when manufacturing an aquarium, the joining compound (commonly silicone) must have a minimum thickness (0.5-1mm) to allow for irregularities along the glass edge, since cut glass is not flat along its edge unless it has been specially ground. It is possible to use a lower safety factor if the glass is of excellent quality and has no internal stress, but lowering the safety factor is at the designer's risk.

Toughened glass is considerably stronger than standard glass. It cannot, however, be cut. If toughened glass is to be used, it must first be cut to size, have its edges finished, and then be sent away for toughening. The thermal resistance properties of glass are also improved by toughening: standard 6mm glass will rupture if plunged into water at 21°C when the glass is more than 55°C hotter or colder, whereas toughened glass will rupture only at approximately a 250°C difference. Toughened glass also has a tensile strength greater than 5 times that of standard glass. Standard glass has one very important advantage when used on aquariums: it tends to fail in a non-spectacular manner, typically a vertical or diagonal crack. Toughened glass will fail completely, much like an old-style car windscreen (100% shattering).

Glass has a much lower coefficient of linear expansion than most metals. This is important if a metal frame is to be used as part of the structure of the aquarium.
If so, the aquarium should be built and stored at a temperature similar to that at which it will run. The length of the aquarium determines how much movement the sealing compound will need to accommodate. Silicone rubber is the most common sealing compound today. The thickness of the sealing layer needs to increase as the seal length increases; a general rule of thumb is to allow 2-3mm per metre of joint length. This allows the silicone to take up the stretching forces between the glass and the steel.

Glass Physical Characteristics:

Density: approx 2.5 at 21°C
Coefficient of linear expansion: 86 x 10^-7 /°C
Softening point: 730°C
Modulus of elasticity: 69 GPa (69 x 10^9 Pa)
Poisson's ratio: float glass 0.22 to 0.23
Compressive strength (25mm cube): 248 MPa (248 x 10^6 Pa)
Tensile strength: 19.3 to 28.4 MPa for sustained loading
Tensile strength (toughened glass): 175 MPa

Design Considerations:

The calculations that follow expect the glass to be supported around its perimeter on all four sides. The calculation is the same regardless of whether the perimeter join is in compression or tension. Typical all-glass aquariums have all their joins in tension, shear, or both. This method of construction relies 100% on the strength of the silicone holding it together, and is also the weakest join type when using silicone. Steel-frame aquariums have the silicone under compression; the silicone is not required to have any strength for this type of aquarium and serves only as a sealer and packer.

The thickness of the bottom glass is covered by the second set of calculations, which do not apply to an aquarium whose bottom glass is well supported from below in an even, uniform manner. For such support the surface must be very level. On very large aquariums this can be difficult to achieve, and self-levelling filler may be needed between the polystyrene and the base.
This should be applied just prior to fitting the aquarium to the base, so that the aquarium's weight levels out imperfections. Significant time must be allowed for the filler to fully cure before the aquarium is filled. If the bottom glass is only supported along its four edges, then use the second set of calculations. The same thickness of glass can also be used on a uniformly supported bottom, and this will significantly improve the safety factor. If the aquarium is to be supported from below in a uniform, distributed manner, then the same thickness of glass as used for the largest side panel may be used. Doing so requires the supporting base to carry part of the load, so it must be VERY strong.

NOTE: The calculations only consider the water up to the top edge of the glass. If the glass is a window below the surface then it is outside the scope of this article.

Terms Used:

Length in mm (L): The length of the aquarium.
Width in mm (W): The width of the aquarium from front to back.
Height in mm (H): The overall depth of water that is in contact with the glass, not exceeding its upper edge.
Thickness in mm (t): The thickness of the glass.
Water pressure (p): The force in Newtons (N).
Allowed bending stress (B): Tensile strength / safety factor.
Modulus of elasticity (E): Elastic strength.

The length to height ratio affects the strength of the glass. The table below lists the alpha and beta constants to be used, based on the length to height ratio.

Table of Alpha and Beta Constants used in the Calculations

              For Side Panels     For Bottom Panels
Ratio of L/H  Alpha    Beta       Alpha     Beta
0.5           0.003    0.085      0.0085    0.1156
1.0           0.022    0.16       0.077     0.453
1.5           0.042    0.26       0.0906    0.5172
2.0           0.056    0.32       0.1017    0.5688
2.5           0.063    0.35       0.111     0.6102
3.0           0.067    0.37       0.1335    0.7134

When the ratio is less than 0.5, use the alpha and beta values for 0.5. When the ratio is greater than 3, use the alpha and beta values for 3. Note: for the bottom panel, use the length to width ratio (L/W).
The water pressure (p) is directly proportional to the height (H) times the acceleration of gravity (approx 10; 9.81 for those who want to be exact).

p = H x 10 in N/m^2 (with H in mm)

The allowed bending stress (B) is the tensile strength of glass divided by the safety factor:

B = 19.2 / 3.8 = 5.05 N/mm^2 (safety factor = 3.8)

Calculations for Front and Side Glass Panels:

The thickness of the glass (t) is the square root of (width factor (beta) x height (H) cubed x 0.00001 / allowable bending stress (B)), so:

t = SQR (beta x H^3 x 0.00001 / 5.05) in mm

Select beta and alpha from the previous chart based on the length to height ratio. The deflection of the glass is (alpha x water pressure (p) x 0.000001 x height^4) divided by (modulus of elasticity (E) x thickness (t) cubed):

Deflection = (alpha x p x 0.000001 x H^4) / (69000 x t^3) in mm

Example (Warren's new tank):
Aquarium length = 3000mm
Aquarium height = 950mm
Safety factor = 3.8
L/H > 3, therefore beta = 0.37 and alpha = 0.067
p = 950 x 10 = 9500 N/m^2
Side thickness: t = SQR (0.37 x 950^3 x 0.00001 / 5.05) = 25.06mm
Deflection = (0.067 x 9500 x 0.000001 x 950^4) / (69000 x 25^3) = 0.48mm

Calculations for the Bottom Glass Panel:

There is a small difference when calculating the bottom panel thickness. Beta is now selected from the Length/Width ratio (where the length L is the larger dimension, so L/W is always >= 1). The height is still used to calculate the pressure. Be sure to use the bottom panel alpha/beta values. The thickness of the bottom glass (t) is the square root of (width factor (beta) x height (H) cubed x 0.00001 / allowable bending stress (B)), the same as for the side panels:

t = SQR (beta x H^3 x 0.00001 / 5.05) in mm

Select beta and alpha from the previous chart based on the length to width ratio. The deflection of the glass is (alpha x water pressure (p) x 0.000001 x height^4) divided by (modulus of elasticity (E) x thickness (t) cubed):
Deflection = (alpha x p x 0.000001 x H^4) / (69000 x t^3) in mm

Example (Warren's new tank):
Aquarium length = 3000mm
Aquarium width = 900mm
Aquarium height = 950mm
Safety factor = 3.8
L/W > 3, therefore beta = 0.7134 and alpha = 0.1335
p = 950 x 10 = 9500 N/m^2
Bottom thickness: t = SQR (0.7134 x 950^3 x 0.00001 / 5.05) = 34.8mm
Deflection = (0.1335 x 9500 x 0.000001 x 950^4) / (69000 x 34.83^3) = 0.355mm

© This item may not be reproduced without written permission
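The side- and bottom-panel arithmetic above is mechanical enough to script. The sketch below is not from the article; the lookup rule for ratios between tabulated rows (round down to the nearest row) is my assumption, since the article only specifies what to do below 0.5 and above 3.

```python
import math

# Alpha/beta constants from the article's table.
SIDE = {0.5: (0.003, 0.085), 1.0: (0.022, 0.16), 1.5: (0.042, 0.26),
        2.0: (0.056, 0.32), 2.5: (0.063, 0.35), 3.0: (0.067, 0.37)}
BOTTOM = {1.0: (0.077, 0.453), 1.5: (0.0906, 0.5172), 2.0: (0.1017, 0.5688),
          2.5: (0.111, 0.6102), 3.0: (0.1335, 0.7134)}

def lookup(table, ratio):
    """Clamp the ratio to the table's range, then round down to a tabulated row."""
    ratio = min(max(ratio, min(table)), max(table))
    key = max(r for r in table if r <= ratio + 1e-9)
    return table[key]

def panel(table, ratio, height_mm, safety=3.8, tensile=19.2):
    """Thickness (mm) and deflection (mm) per the article's formulas."""
    alpha, beta = lookup(table, ratio)
    b = tensile / safety                            # allowed stress B, N/mm^2
    t = math.sqrt(beta * height_mm**3 * 1e-5 / b)   # thickness, mm
    p = height_mm * 10                              # water pressure, N/m^2
    defl = alpha * p * 1e-6 * height_mm**4 / (69000 * t**3)
    return t, defl

# Warren's tank: 3000 x 900 x 950mm.
side_t, side_d = panel(SIDE, 3000 / 950, 950)      # sides use L/H
bottom_t, bottom_d = panel(BOTTOM, 3000 / 900, 950)  # bottom uses L/W, height for pressure
print(side_t, bottom_t)  # about 25.06 and 34.8, matching the worked examples
```

Note the deflection comes out slightly below the article's 0.48mm for the side panel because the script keeps the unrounded thickness (25.06mm) where the article rounds to 25mm.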
Yahoo Groups: Thistlethwaite, human version

Over the past few years, I have been exploring new ways of solving the cube. I pretty much stopped speed cubing during this period, and just continued experimenting until I found a system I was willing to commit to. This is not it, but Ron suggested that I describe it to you all anyway. I have a habit of only revealing the systems that I have no intention of using (for instance, "tripod"). After all, I don't want to give my competitors the advantage :-)

As I said, this system requires no thinking. All solutions to all cases can be memorised and applied, with an end result of maybe 13 seconds average. Maybe someone is interested in doing that. Personally, I find it hard to call that puzzle solving - it's more like running the 100. (Btw, I am not competing in this year's championships, so people have suggested that I reveal the system I actually use - I agree. Stay tuned.)

----- Forwarded message from Ryan Heise -----

From: Ryan Heise
Date: Sun, 22 Jun 2003 11:30:52 +1000
To: Ron van Bruchem
Subject: Thistlethwaite, human version

On Sat, Jun 21, 2003 at 06:13:28PM +0200, Ron van Bruchem wrote:
> Hi Ryan,
> I am very interested in the ideas you have.
> Please tell me something about the systems you came up with, and how many
> algorithms you need per stage.

Phase 1 -> <U,D,L,R,F2,B2> group
  - simple, no algorithms
Phase 2 -> <U,D,L2,R2,F2,B2> group
  - Direct up/down edges to up/down face (simple, no algs)
  - Direct corners to up/down face (between 8 and 60 algs)
Phase 3 -> <U2,D2,L2,R2,F2,B2> group
  - Corners (between 1 and 2 algs)
  - Edges (between 1 and 4 algs)
Phase 4 -> place pieces
  - Corners (intuitive)
  - Edges (intuitive)

DETAILS OF STEPS

* PHASE 1

This is solved in 4.6 moves on average.

* PHASE 2 EDGES

This is rather simple. You can learn all 20-30 cases if you wish. I forget the exact number. This can be solved in an average of 4 moves.
* PHASE 2 CORNERS

I used a method similar to Gaetan - first get 3 corners oriented on one side, and then apply one of 8 algorithms. It is possible to directly learn all 60 cases if you want (I can't remember the exact number). I think they have an average of 8.5 moves.

* PHASE 3 CORNERS

In phase 3, it is important to do the corners first, because it is difficult to see whether they have made it into the <U2,D2,L2,R2,F2,B2> group. Just getting opposite colours on each side isn't enough. The algorithms you learn to fix this are shorter when you don't have to worry about the

Here, I'll just describe the simplest technique that requires two algorithms, but is very quick for the fingers and brain:

First, separate up/down colours (one colour on each side). Average 3.2 moves. There should be, for example, all red corners on top, and all orange corners on bottom.

Now, pairs of adjacent corners will either match or mismatch. Our goal is to make them either all match, or all mismatch. So, in this step, we find the odd pairs out (whether they're matching or mismatching), and fix them so they match/mismatch like all the rest. There are 4 pairs. Either one pair is the odd one out, or two pairs are the odd ones out.

For one pair: hold the pair at UF, and do R'FR'B2RF'R. It's a modification of the corner mover that doesn't care about the exact positions of corners.

Two pairs: hold two pairs on F (you may need to move them there), and do R2UF2U2R2U. (If you needed to move them there first, there's also a trick to get it to work...)

I looked for a long time to find other methods here that used fewer moves. I found some, but this way was definitely by far the quickest to perform.

* PHASE 3 EDGES

4 cases - simple (2, 4, 6 or 8 bad edges). Average 6.1 moves.

Total moves so far: 33.4. Obviously, fewer moves are necessary to achieve an average of 40 moves overall.
I worked out some shortcuts, but I don't think they're worth it, because I could perform the longer way

* PHASE 4 (the end game)

I think you already have a strategy for this. Corners, then edges. I think it's possible to learn all cases for the edges (about 150 I think, but easy to memorise). A downside is the number of double turns, which are more difficult to perform. But I tried a few algorithms and they are possible to do quickly enough.

I think the main benefit of this method is fast reaction time and no thinking. Another benefit is that it looks cool when you solve it. None of the pieces are placed until the very end.

Above, I listed each individual step with no shortcuts. It is possible to combine steps, or do steps in different orders depending on opportunities. The basic method above, if you learnt all cases for each exact step, should give an average of 45.7 moves.

----- End forwarded message -----

Just a note I'd like to add: it is not necessary to stick to only moves within each group. For example, in phase 4, it is not necessary to stick to double turns. In fact, the shortest solutions for most cases involve single turns. The first half (phases 1-2) already has a lot of freedom of movement. I suppose it could be solved in 5 to 6 seconds.

Maybe someone has a better way to finish phases 3-4? For example, one idea is to first get all red on top, and all orange on bottom, then permute. Another idea is to build up blocks like Fridrich and Petrus (more
References on Taylor series expansion of the Riemann xi function

I am looking for references on the Taylor series expansion of the Riemann xi function at $\frac{1}{2}$:
$$ \xi(s)=\sum_{n=0}^{\infty}a_{2n}\left(s-\frac{1}{2}\right)^{2n}$$
where
$$a_{2n}=4\int_1^{\infty}\frac{d[x^{3/2}\psi'(x)]}{dx}\frac{(\frac{1}{2}\ln x)^{2n}}{(2n)!}x^{-1/4}\,dx$$
and
$$\psi(x)=\sum_{m=1}^{\infty}e^{-m^2\pi x}=\frac{1}{2}[\theta_3(0,e^{-\pi x})-1].$$
Specifically, I would like to know how fast $a_{2n}$ goes to zero. Has anyone proved that
$$a_0>a_2>a_4>\dots>a_{2n}>\dots>a_{\infty}=0\,?$$
Thanks a lot!

Tags: riemann-zeta-function, taylor-series
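The first few coefficients are easy to check numerically. A sketch of my own (not from the question) using mpmath, computing $\xi$ via the standard completed-zeta formula $\xi(s)=\frac{s(s-1)}{2}\pi^{-s/2}\Gamma(s/2)\zeta(s)$ and reading off Taylor coefficients about $s=\frac{1}{2}$ by numerical differentiation:

```python
from mpmath import mp, mpf, pi, gamma, zeta, taylor

mp.dps = 30  # working precision

def xi(s):
    # Completed zeta function: xi(s) = s(s-1)/2 * pi^(-s/2) * Gamma(s/2) * zeta(s)
    return s * (s - 1) / 2 * pi**(-s / 2) * gamma(s / 2) * zeta(s)

# Taylor coefficients about s = 1/2; odd coefficients vanish since xi(s) = xi(1-s).
coeffs = taylor(xi, mpf(1) / 2, 4)
a0, a2, a4 = coeffs[0], coeffs[2], coeffs[4]
# Numerically a0 ~ 0.4971, a2 ~ 0.0115, a4 ~ 1.2e-4, consistent with the
# conjectured strict decrease a0 > a2 > a4 > ... > 0 (this checks only the
# first few terms, of course, and proves nothing about the tail).
```

This only probes small $n$; the question about the asymptotic rate of decay of $a_{2n}$ remains.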
Probability of rain
Posted September 8th 2012, 11:09 AM

Please help with this: suppose in recent years it has only rained an average of 5 days a year. The weather forecast, which has an accuracy of 90%, predicts rain for tomorrow. What is the probability of rain tomorrow?

A: < 1%
B: between 5 and 15%
C: between 15 and 80%
D: between 80 and 90%
E: > 90%

Thank you!

Last edited by wh88; September 8th 2012 at 08:39 PM.
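This is a Bayes' rule exercise. A short sketch (my interpretation, not from the thread): take "accuracy 90%" to mean the forecast is right 90% of the time on both rainy and dry days, with a base rate of 5 rainy days in 365.

```python
# Posterior probability of rain given a rain forecast, via Bayes' rule.
p_rain = 5 / 365                 # base rate: 5 rainy days a year
p_pred_given_rain = 0.9          # forecast says rain when it rains
p_pred_given_dry = 0.1           # false-alarm rate on dry days

p_pred = p_pred_given_rain * p_rain + p_pred_given_dry * (1 - p_rain)
posterior = p_pred_given_rain * p_rain / p_pred
print(posterior)  # 4.5/40.5 = 1/9, about 11.1% -> answer B
```

The low base rate drags the 90%-accurate forecast down to roughly an 11% chance of rain, which is the point of the exercise.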
Russell Wilson: 2012's Most Valuable Player

By Kevin Meers

This post analyzes who deserves the NFL's Most Valuable Player award (spoiler alert: don't read the title) based on some different criteria than most people have used. I want to focus on the word "valuable." There are many ways to interpret "valuable," and you can reach very different conclusions based on how you define it. For me, valuable doesn't mean best, most impressive, or most memorable. Value implies worth - the same production could be really valuable or worthless depending on how much it cost. An NFL team's most precious commodity is its salary cap, so in any analysis of who has been the most valuable player, we have to account for how much salary cap each player cost his team.

We also must understand the replacement level of each position. Think back to your last fantasy football draft. Kickers score more points than many players, but they have relatively little value, partly because they all score about the same number of points. This idea holds true in the NFL: the best player at a replaceable position is not particularly valuable. Applying these factors to the 2012 MVP race, we can better understand who really had the most valuable performances this season.

To quantify how much each player produced, I use Expected Points Added (EPA) from Advanced NFL Stats. Unfortunately, this method excludes individual offensive linemen, a limitation without a good solution. Sometime soon (I hope), we'll have an accurate way to give offensive linemen their due credit. For now, however, I have to exclude the phenomenal seasons enjoyed by Joe Staley, Evan Mathis, John Sullivan, and many others.

To account for the replacement level, I first ranked each player by EPA within each position.
I defined "replacement level" as the marginal starter at each position: the 33rd quarterback, running back, and tight end; the 65th wide receiver, defensive end, cornerback, and safety; the 49th defensive tackle; and the 113th linebacker (assuming teams run a 4-3 and a 3-4 defense each half of the time). Subtracting these values from every player's EPA, we get their EPA Over Replacement (EPAOR). All that's left now is to divide each player's EPAOR by his 2012 salary cap hit, provided by Spotrac. Here are the top 5 players in EPAOR per $100,000 of cap hit from this regular season:

│ Name              │ Position │ Team │ EPAOR per $100,000 │
│ Russell Wilson    │ QB       │ SEA  │ 22.23              │
│ Danario Alexander │ WR       │ SD   │ 12.08              │
│ Eric Decker       │ WR       │ DEN  │ 8.47               │
│ Randall Cobb      │ WR       │ GB   │ 7.87               │
│ Stevie Brown      │ S        │ NYG  │ 7.78               │

Russell Wilson dominates, and it's not really close. He was twice as valuable as every other player in 2012 except for Danario Alexander, and Wilson was 84% more valuable than him. What about the two guys that might actually win the 2012 MVP? Peyton Manning and Adrian Peterson finished the regular season with 1.02 and 0.36 EPAOR/$100,000 (ranked 172nd and 304th respectively). In terms of raw EPA, Manning did produce more than Wilson. However, that is the wrong way to approach the MVP award. We have to consider how replaceable production at a position is (in Peterson's case, very replaceable) and cap hit ($544,850 for Wilson, $18,000,000 for Manning). Wilson contributed more to his team above the replacement-level player per dollar than anyone else, making him the MVP of the 2012 season.

14 Responses to Russell Wilson: 2012's Most Valuable Player

1. Another way to measure value would be to use Value minus Salary, rather than divided. That would generate the actual value to the team, in dollars. To get value, you'd have to sum up all EPAOR and all Salary for those positions to get a multiplier to convert EPAOR to Salary Value produced.
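As a quick sanity check, the headline metric is just EPAOR divided by cap hit in units of $100,000. The cap hits below are the ones quoted in the post; the raw EPAOR figures are backed out of the reported per-$100k values, so treat them as approximate rather than the post's actual inputs.

```python
# Reproducing the post's metric from its own quoted numbers.
def epaor_per_100k(epaor, cap_hit):
    """EPA Over Replacement per $100,000 of salary-cap hit."""
    return epaor / (cap_hit / 100_000)

wilson = epaor_per_100k(121.1, 544_850)      # roughly 22.2, tops the table
manning = epaor_per_100k(183.6, 18_000_000)  # roughly 1.02 despite higher raw EPAOR
```

The comparison makes the post's point explicit: Manning's raw EPAOR is higher, but dividing by cap hit flips the ranking decisively.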
□ Hey Dan — I like this idea a lot, but I think it assumes that teams spend the "right" amount of money on each position, which is not necessarily true. I haven't seen any research that shows how much a marginal EPAOR is worth to a team's record (basically moving towards football's WAR), which is what we really need to determine the dollar value of on-field production. I went for highest efficiency of EPAOR/dollar over raw total to avoid the issue, but agree that Value-Salary would be the best way to do it. – Kevin

2. The award doesn't look at salary. You've defined valuable based upon dollar amount. The MVP trophy looks at value as only on-the-field production. The award you've given to Russell Wilson, deservingly so, should be called "The player with the most dollar value."

3. "The player with the most value to his team" would be accurate. Value to the team definitely hinges on contract.

4. Cool idea. One thing you might want to incorporate the next time around is that the goal of an NFL team is not to spend 50% of the cap and go 9-7, but to spend 100% of the cap and be as good as possible. One of my favorite Brian posts was this one, noting that draft picks are more like gladiators than bricklayers (http://www.advancednflstats.com/2009/04/draft-picks-bricklayers-or-gladiators.html). This article seems to consider players as bricklayers. Even at $20M, by being the best QB in the league, Peyton Manning was pretty darn valuable.

□ Definitely true. If Manning is actually worth $40 million, that $22 million surplus value is incredible for Denver. As I mentioned in replying to Dan (above), until we find a way to accurately measure that "true value", our analysis will be limited.

5. Kevin - Nice post and similar to one I wrote a few weeks ago when someone mentioned the idea to me on Twitter: http://fantasydouche.com/2012/12/the-nfls-most-valuable-player-is-russell-wilson/

After I wrote the post though, I did sort of wonder whether calling a player "most valuable" based on his salary might be giving credit to the wrong person. After all, a player is always going to want to earn as much as possible, so Russell Wilson's low salary isn't really anything he can control. It's actually the GM that is responsible for collecting good players at low cap numbers.

To get to Chase's point, teams all have to work under the salary cap, and the cap now has mandatory minimums (I think that was part of the new CBA), so teams are all interested in maximizing their

I think another point in Kevin's favor as it relates to this post is that if you were going to put both Manning and Wilson on the trading block at their current cap numbers, I suspect that Wilson would draw more in a trade. I don't know, that's just a guess and I'm sure there would be people who would disagree with me.

I think another thing to consider is that if you tried to isolate value based on whose receivers were better, Wilson was throwing to a receiving corps that might be in the bottom half of the league, while Manning was throwing to probably a top 5 (or so) receiving corps.

6. As an add-on to the point that I was getting at re: QB/WR interaction, I did write this post last week: http://fantasydouche.com/2013/01/

□ Frank — didn't see that article until you posted it here. Looks like we agree. I like your point that giving Russell the MVP is almost like saying "Congratulations, you are the most underpaid player in the NFL!" I liked looking at this fairly static debate from a different perspective though, and thought it was a useful way to rethink what we mean by "value".

7. Pingback: Harvard Sports MVP: Russell Wilson

8. I think the comments above have a point. I'm guessing you are an Econ concentrator so you've studied Net Present Value and Internal Rate of Return. Your measure is basically a variant on IRR: it tells you how much value you get for each dollar you invest. The alternative metrics are like NPV: they tell you how much value you netted by subtracting out the opportunity cost.

There's a great example people like to use to illustrate why NPV is generally preferred. Let's set the discount rate to 3%, the rate we'll say you'd get on long-term riskless bonds. Suppose you could choose between two investments: one that costs $1 million today and returns $1.5 million next year, and one that costs $100,000 today and returns $500,000 next year.

IRR for project 1: 50%
IRR for project 2: 400%

Project 2 blows project 1 out of the water. It's like Russell Wilson. But consider how much money you would have in the end if you started with $1,000,000 and chose between option 1 and option 2, investing the leftovers in bonds.

Project 1: $1.5 million
Project 2: $1.427 million

So most people focus on NPV-like measures instead of IRR-like measures, because the objective is to make money (win games), not to get a high rate of return on each $ invested (win lots of games per $ invested). Also, the EPA per $1 million in cap number is <1, so the "bias" from leaving it out in discussions of value isn't that big. Even in the extreme case of Manning vs Wilson you're talking about giving Wilson a +18 handicap. Just ranking players by EPA pretty much gives you the same list as using net EPA.

□ Steve — Thanks for your thoughts. I think to do something like NPV, you'd need to find how much a marginal EPA is worth in cap dollars. I know Brian has done some work on this problem (http://www.advancednflstats.com/2012/01/how-much-does-win-cost.html), but this measures the market value of a marginal EPA instead of the true impact of EPA on winning football games. There's been enough interest from everyone here that I'll take a crack at converting EPA to wins to cap hit. If anyone has seen analysis of this, please post it!

☆ I looked at it using the "uncapped" year of 2010 since, in years where the cap is in place, it's not clear where the variation in cap numbers comes from and how to interpret it. Even for 2010, though, I think the way bonuses are amortized into cap numbers is misleading, so these data are noisy, which will bias down the coefficients. With the small sample size, regressions of cap number on EPA, WPA, and GWP (from AdvancedNFLStats) are, unsurprisingly, insignificant, but they're all positive. Each million in cap number is worth 0.77 EPA and 0.02 wins (using either WPA or GWP) over the season. When you throw out Washington, which is a major outlier for some reason (Albert Haynesworth?), those numbers rise to 1.43 and 0.035. The 95% CI on EPA goes up to 2.6 and 3.5 (w/o Washington), so it's possible that the extra $17.5 million Wilson saved for his team helped by 45 points (in expectation), but it's probably more like 13 points.

9. One last thought here – given that football is a violent sport played by rosters of 53, I think there is an argument to be made for using your salary cap broadly across your value-producing positions (yes QB, yes CB, yes DE, no to RB), rather than having it concentrated in a single player. For instance, while Manning's money was not guaranteed this year (my recollection, not 100% sure) and it worked out fine, maybe if you play the 2012 season 100 or 1000 times, his neck or arm strength becomes an issue in a percentage of those instances. Then in the alternate realities where an injury becomes an issue for the player you've invested so much in, your season is a total loss (see the Colts' 2011 season). Basically what I'm getting at here is a risk concentration issue. Large salaries mean you have a lot of the value of your team tied up in a single player, which is, to steal a term from Taleb, a fragile way to construct your team.

This entry was posted in Business, NFL Football and tagged Advanced NFL Stats, MVP.
Beeler, M., Gosper, R.W., and Schroeppel, R. HAKMEM. MIT AI Memo 239, Feb. 29, 1972. Retyped and converted to html (Web browser format) by Henry Baker, April, 1995.

WARNING: Numbers in this section are octal (and occasionally binary) unless followed by a decimal point. 105=69.. (And 105.=69 hexadecimal.)

ITEM 145 (Gosper):
Proving that short programs are neither trivial nor exhausted yet, there is the following:

0/ TLCA 1,1(1)
1/ see below
2/ ROT 1,9
3/ JRST 0

This is a display hack (that is, it makes pretty patterns) with the low 9 bits = Y and the 9 next higher = X; also, it makes interesting, related noises with a stereo amplifier hooked to the X and Y signals. Recommended variations include:

CHANGE:        GOOD INITIAL CONTENTS OF 1:
none           377767,,377767; 757777,,757757; etc.
TLC 1,2(1)     373777,,0; 300000,,0
TLC 1,3(1)     -2,,-2; -5,,-1; -6,,-1
ROT 1,1        7,,7; A0000B,,A0000B
ROTC 1,11      ;Can't use TLCA over data.
AOJA 1,0

ITEM 146: MUNCHING SQUARES
Another simple display program. It is thought that this was discovered by Jackson Wright on the RLE PDP-1 circa 1962.

DATAI 2
ADDB 1,2
ROTC 2,-22
XOR 1,2
JRST .-4

2=X, 3=Y. Try things like 1001002 in data switches. This also does interesting things with operations other than XOR, and rotations other than -22. (Try IOR; AND; TSC; FADR; FDV(!); ROT -14, -9, -20.)

ITEM 147 (Schroeppel):
Munching squares is just views of the graph Y = X XOR T for consecutive values of T = time.

ITEM 148 (Cohen, Beeler):
A modification to munching squares which reveals them in frozen states through opening and closing curtains: insert FADR 2,1 before the XOR. Try data switches =

4000,,4
1000,,2002
2000,,4
0,,1002

(Notation: <left half>,,<right half>) Also try the FADR after the XOR, switches = 1001,,1.

ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:

NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X

This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.

The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.

ITEM 150 (Schroeppel):
PROBLEM: Although the reason for the circle algorithm's stability is unclear, what is the number of distinct sets of radii? (Note: the algorithm is invertible, so all points have predecessors.)

ITEM 151 (Gosper):
Separating X from Y in the above recurrence,

X(N+1) = (2 - epsilon^2) * X(N) - X(N-1)
Y(N+1) = (2 - epsilon^2) * Y(N) - Y(N-1)

These are just the Chebychev recurrence with cos theta (the angular increment) = 1 - epsilon^2/2. Thus X(N) and Y(N) are expressible in the form R cos(N theta + phi). The phi's and R for X(N) and Y(N) can be found from N=0,1. The phi's will differ by less than pi/2 so that the curve is not really a circle. The algorithm is useful nevertheless, because it needs no sine or square root function, even to get started. X(N) and Y(N) are also expressible in closed form in the algebra of ordered pairs described under linear recurrences, but they lack the remarkable numerical stability of the "simultaneous" form of the recurrence.

ITEM 152 (Salamin):
With exact arithmetic, the circle algorithm is stable iff |epsilon| < 2. In this case, all points lie on the ellipse

X^2 - epsilon X Y + Y^2 = constant,

where the constant is determined by the initial point.
This ellipse has its major axis at 45 degrees (if epsilon > 0) or 135 degrees (if epsilon < 0) and has eccentricity sqrt(epsilon/(1 + epsilon/2)).

ITEM 153 (Minsky): To portray a 3-dimensional solid on a 2-dimensional display, we can use a single circle algorithm to compute orbits for the corners to follow. The (positive or negative) radius of each orbit is determined by the distance (forward or backward) from some origin to that corner. The solid will appear to wobble rigidly about the origin, instead of simply rotating.

ITEM 154 (Gosper): The myth that any given programming language is machine independent is easily exploded by computing the sum of powers of 2.

• If the result loops with period = 1 with sign +, you are on a sign-magnitude machine.
• If the result loops with period = 1 at -1, you are on a twos-complement machine.
• If the result loops with period > 1, including the beginning, you are on a ones-complement machine.
• If the result loops with period > 1, not including the beginning, your machine isn't binary -- the pattern should tell you the base.
• If you run out of memory, you are on a string or Bignum system.
• If arithmetic overflow is a fatal error, some fascist pig with a read-only mind is trying to enforce machine independence. But the very ability to trap overflow is machine dependent.

By this strategy, consider the universe, or, more precisely, algebra:

    let X = the sum of many powers of two = ...111111
    now add X to itself: X + X = ...111110
    thus, 2X = X - 1
    so X = -1

therefore algebra is run on a machine (the universe) which is twos-complement.
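ITEM 149's recurrence translates almost verbatim into a modern language. A minimal Python sketch (the function name and the invariant check are mine, not HAKMEM's):

```python
def minsky_circle(x, y, eps, steps):
    """Iterate ITEM 149's almost-circle.  Note that the y update
    deliberately uses the NEW x -- that is what keeps the orbit stable."""
    points = []
    for _ in range(steps):
        x = x - eps * y
        y = y + eps * x   # NEW(!) x, per the hack
        points.append((x, y))
    return points

# ITEM 152's invariant: every point lies on x^2 - eps*x*y + y^2 = constant.
orbit = minsky_circle(100.0, 0.0, 1 / 16, 1000)
radii = [(px * px + py * py) ** 0.5 for px, py in orbit]
```

With eps a power of two (here 1/16) the multiply can be a shift, exactly as the item boasts.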
ITEM 155 (Liknaitzky): To subtract the right half of an accumulator from the left (as in restarting an AOBJN counter):

    IMUL A,[377777,,1]

ITEM 156 (Mitchell): To make an AOBJN pointer when the origin is fixed and the length is a variable in A:

    HRLOI A,-1(A)
    EQVI A,ORIGIN

ITEM 157 (Freiberg): If instead, A is a pointer to the last word:

    HRLOI A,-ORIGIN(A)
    EQVI A,ORIGIN

Slightly faster: change the HRLOIs to MOVSIs and the EQVI addresses to -ORIGIN-1. These two routines are clearly adjustable for BLKOs and other fenceposts.

ITEM 158 (Gosper, Salamin, Schroeppel): A miniature (recursive) sine and cosine routine follows.

    COS: FADR A,[1.57079632679] ;pi/2
    SIN: MOVM B,A               ;ARGUMENT IN A
         CAMG B,[.00017]        ; <= 3^(1/3) / 2^13
         POPJ P,                ;sin X = X, within 27. bits
         FDVRI A,(-3.0)
         PUSHJ P,SIN            ;sin -X/3
         FMPR B,B
         FSC B,2
         FADRI B,(-3.0)
         FMPRB A,B              ;sin X = 4(sin -X/3)^3 - 3(sin -X/3)
         POPJ P,                ;sin in A, sin or |sin| in B
         ;|sin| in B occurs when angle is smaller than end test

Changing both -3.0's to +3.0's gives sinh: sinh X = 3 sinh X/3 + 4 (sinh X/3)^3. Changing the first -3.0 to a +9.0, then inserting PUSHJ P,.+1 after PUSHJ P,SIN gains about 20% in speed and uses half the pushdown space (< 5 levels in the first 4 quadrants). PUSHJ P,.+1 is a nice way to have something happen twice. Other useful angle multiplying formulas are

    tanh X = (2 tanh X/2) / (1 + (tanh X/2)^2)
    tan X = (2 tan X/2) / (1 - (tan X/2)^2), if infinity is handled correctly.

For cos and cosh, one can use cos X = 1 - 2 (sin X/2)^2, cosh X = 1 + 2 (sinh X/2)^2. In general, to compute functions like e^X, cos X, elliptic functions, etc. by iterated application of double and triple argument formulas, it is necessary to subtract out the constant in the Taylor series and transform the range reduction formula accordingly.
Thus:

    F(X) = cos(X) - 1    F(2X) = 2 F*(F+2)    F(epsilon) = -epsilon^2/2
    G(X) = e^X - 1       G(2X) = G*(G+2)      G(epsilon) = epsilon

This is to prevent the destruction of the information in the range-reduced argument by the addition of a quantity near 1 upon the success of the epsilon test. The addition of such a quantity in the actual recurrences is OK since the information is restored by the multiply. In fact, a cheap and dirty test for F(epsilon) sufficiently small is to see if the addition step has no effect. People lucky enough to have a square root instruction can get natural log by iterating X <- X/(sqrt(1+X) + 1) until 1+X = 1. Then multiply by 2^(number of iterations). Here, a LSH or FSC would work.

ITEM 159 (Gosper, Schroeppel): (Numbers herein are decimal.) The correct epsilon test in such functions as the foregoing SIN are generally the largest argument for which addition of the second term has no effect on the first. In SIN, the first term is x and the second is -x^3/6, so the answer is roughly the x which makes the ratio of those terms 1/2^27; so x = sqrt(3) / 2^13. But this is not exact, since the precise cutoff is where the neglected term is the power of 2 whose 1 bit coincides with the first neglected (28th) bit of the fraction. Thus, x^3/6 = 1/2^27 * 1/2^13, so x = 3^(1/3) / 2^13.

ITEM 160 (Gosper): Here is a way to get log base 2. A and B are consecutive. Call by PUSHJ P,LOG2 with a floating point argument in A.

    LOG2: LSHC A,-33
          MOVSI C,-201(A)
          TLC C,211000          ;Speciner's bum
          MOVI A,200            ;exponent and sign sentinel
    LOGL: LSH B,-9
          REPEAT 7, FMPR B,B    ;moby flunderflo
          LSH B,2
          LSHC A,7
          SOJG A,LOGL           ;fails on 4th try
          LSH A,-1
          FADR A,C
          POPJ P,               ;answer in A

Basically you just square seven times and use the low seven bits of the exponent as the next seven bits of the log.

ITEM 161 (Gosper): To swap the contents of two locations in memory:

    EXCH A,LOC1
    EXCH A,LOC2
    EXCH A,LOC1

Note: LOC1 must not equal LOC2! If this can happen use MOVE-EXCH-MOVEM, clobbering A.
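ITEM 158's shrink-and-rebuild scheme is easy to restate outside assembly. A Python sketch (my transcription, using the ITEM 159 cutoff 3^(1/3)/2^13):

```python
def sin_hakmem(x, eps=3 ** (1 / 3) / 2 ** 13):
    """Recursive sine per ITEM 158: divide the argument by -3 until
    sin x ~= x holds to working precision, then rebuild with the
    triple-angle identity sin X = 4*(sin -X/3)**3 - 3*(sin -X/3)."""
    if abs(x) <= eps:
        return x          # below the cutoff, sin x = x to full precision
    s = sin_hakmem(x / -3)
    return 4 * s ** 3 - 3 * s
```

Flipping both signs gives sinh via sinh X = 3 sinh(X/3) + 4 (sinh(X/3))^3, just as the item notes.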
ITEM 162 (Gosper): To swap two bits in an accumulator:

    TRCE A,BITS
    TRCE A,BITS
    TRCE A,BITS

Note (Nelson): last TRCE never skips, and used to be a TRC, but TRCE is less forgettable. Also, use TLCE or TDCE if the bits are not in the right half.

ITEM 163 (Sussman): To exchange two variables in LISP without using a third variable:

    (SETQ X (PROG2 0 Y (SETQ Y X)))

ITEM 164 (Samson): To take MAX in A of two byte pointers (where A and B are consecutive accumulators):

    ROTC A,6
    CAMG A,B
    EXCH A,B
    ROTC A,-6

ITEM 165 (Freiberg): A byte pointer can be converted to a character address < 2^18 by MULI A,<# bytes/word> followed by SUBI B,1-<# b/w>(A). To get full word character address, use SUB into a magic table.

ITEM 166 (Gosper, Liknaitzky): To rotate three consecutive accumulators N < 37. places:

    ROTC A,N
    ROT B,-N
    ROTC B,N

Thus M AC's can be ROTC'ed in 2M-3 instructions. (Stallman): For 73. > N > 35.:

    ROTC A,N-36.
    EXCH A,C
    ROT B,36.-N
    ROTC A,N-72.

ITEM 167 (Gosper, Freiberg):

    ;B gets 7 bit character in A with even parity
    IMUL A,[2010040201]    ;5 adjacent copies
    AND A,[21042104377]    ;every 4th bit of left 4 copies + right copy
    IDIVI A,17<-7          ;casting out 15.'s in hexadecimal shifted 7

    ;odd parity on 7 bits (Schroeppel)
    IMUL A,[10040201]      ;4 adjacent copies
    IOR A,[7555555400]     ;leaves every 3rd bit+offset+right copy
    IDIVI A,9<-7           ;powers of 2^3 are +-1 mod 9
    ;changing 7555555400 to 27555555400 gives even parity

    ;if A is a 9 bit quantity, B gets number of 1's (Schroeppel)
    IMUL A,[1001001001]    ;4 copies
    AND A,[42104210421]    ;every 4th bit
    IDIVI A,17             ;casting out 15.'s in hexadecimal

    ;if A is 6 bit quantity, B gets 6 bits reversed (Schroeppel)
    IMUL A,[2020202]       ;4 copies shifted
    AND A,[104422010]      ;where bits coincide with reverse repeated base 2^8
    IDIVI A,377            ;casting out 2^8 - 1's

    ;reverse 7 bits (Schroeppel)
    IMUL A,[10004002001]   ;4 copies sep by 000's base 2 (may set arith. o'flow)
    AND A,[210210210010]   ;where bits coincide with reverse repeated base 2^8
    IDIVI A,377            ;casting out 377's

    ;reverse 8 bits (Schroeppel)
    MUL A,[100200401002]   ;5 copies in A and B
    AND B,[20420420020]    ;where bits coincide with reverse repeated base 2^10
    ANDI A,41              ;"
    DIVI A,1777            ;casting out 2^10 - 1's

ITEM 168 (PDP-1 hackers):

    foo, lat       /DATAI switches
         adm a     /ADDB
         and (707070
         adm b
         iot 14    /output AC sign bit to a music flip-flop
         jmp foo

Makes startling chords, arpeggios, and slides, with just the sign of the AC. This translates to the PDP-6 (roughly) as:

    FOO: DATAI 2
         ADDB 1,2
         AND 2,[707070707070]  ;or 171717171717, 363636363636, 454545454545, ...
         ADDB 2,3
         LDB 0,[360600,,2]
         JRST FOO

Listen to the square waves from the low bits of 0.

ITEM 169 (in order of one-ups-manship: Gosper, Mann, Lenard, [Root and Mann]): To count the ones in a PDP-6/10 word:

    LDB B,[014300,,A]      ;or MOVE B,A then LSH B,-1
    AND B,[333333,,333333]
    SUB A,B
    LSH B,-1
    AND B,[333333,,333333]
    SUBB A,B               ;each octal digit is replaced by number of 1's in it
    LSH B,-3
    ADD A,B
    AND A,[070707,,070707]
    IDIVI A,77             ;casting out 63.'s

These ten instructions, with constants extended, would work on word lengths up to 62.; eleven suffice up to 254..

ITEM 170 (Jensen): Useful strings of non-digits and zeros can arise when carefully chosen negative numbers are fed to unsuspecting decimal print routines. Different sets arise from different methods of character-to-digit conversion. Example (Gosper):

    DPT: IDIVI F,12
         HRLM G,(P)           ;tuck remainder on pushdown list
         SKIPE F
         PUSHJ P,DPT
         LDB G,[220600,,(P)]  ;retrieve low 6 bits of remainder
         TRCE G,"0            ;convert digit to character
         SETOM CCT            ;that was no digit!
    TYO: .IOT TYOCHN,G        ;or DATAO or IDPB ...
         AOS G,CCT
         POPJ P,

This is the standard recursive decimal print of the positive number in F, but with a LDB instead of a HLRZ. It falls into the typeout routine which returns in G the number of characters since the last carriage return.
When called with a -36., DPT types carriage return, line feed, and resets CCT, the character position counter.

ITEM 171 (Gosper): Since integer division can never produce a larger quotient than dividend, doubling the dividend and divisor beforehand will distinguish division by zero from division by 1 or anything else, in situations where division by zero does nothing.

ITEM 172 (Gosper): The fundamental operation for building list structure, called CONS, is defined to: find a free cell in memory, store the argument in it, remove it from the set of free cells, return a pointer to it, and call the garbage collector when the set is empty. This can be done in two instructions:

    CONS: EXCH A,[EXCH A,[...[PUSHJ P,GC]...]]
          EXCH A,CONS

Of course, the address-linked chain of EXCH's indicated by the nested brackets is concocted by the garbage collector. This method has the additional advantage of not constraining an accumulator for the free storage pointer.

    UNCONS: HRLI A,(EXCH A,)
            EXCH A,CONS
            EXCH A,@CONS

Returns cell addressed by A to free storage list; returns former cell contents in A.

ITEM 173 (Gosper): The incantation to fix a floating number is usually

    MULI A,400             ;exponent to A, fraction to A+1
    TSC A,A                ;1's complement magnitude of excess 200 exponent
    ASH A+1,-200-27.-8(A)  ;answer in A+1

If number is known positive, you can omit the TSC. On the PDP-10:

    UFA A,[+-233000,,]     ;not in PDP-6 repertoire
    TLC A+1,233000         ;if those bits really bother you

When you know the sign of A, and |A| < 2^26, you can

    FAD A,[+-233400,,]     ;or FADR for rounded fix!
    TLC A,233400           ;if those bits are relevant

where the sign of the constant must match A's. This works on both machines and doesn't involve A+1. On the 10, FADRI saves a cycle and a constant, and rounds.

ITEM 174 (Gosper, Nelson): 21963283741. = 243507216435 is a fixed point of the float function on the PDP-6/10, i.e., it is the only positive number whose floating point representation equals its fixed.

ITEM 175 (Gosper): To get the next higher number (in A) with the same number of 1 bits: (A, B, C, D do not have to be consecutive)

    MOVE B,A
    MOVN C,B
    AND C,B
    ADD A,C
    MOVE D,A
    XOR D,B
    LSH D,-2
    IDIVM D,C
    IOR A,C
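ITEM 169's digit-folding trick carries over directly to any language with integers. A Python sketch for a 36-bit word (my transcription; the octal masks are the ones in the listing):

```python
def popcount36(a):
    """Count 1 bits per ITEM 169: replace each octal digit by its own
    bit count (n - n//2 - n//4 per 3-bit digit), pair adjacent digits,
    then sum the 6-bit fields by casting out 63s."""
    b = (a >> 1) & 0o333333333333
    a = a - b
    b = (b >> 1) & 0o333333333333
    a = a - b                            # octal digits now hold counts
    a = (a + (a >> 3)) & 0o070707070707  # pair the digits into 6-bit fields
    return a % 63                        # 64 == 1 (mod 63) sums the fields
```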
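ITEM 175 is the trick now widely known as Gosper's hack. A Python equivalent of the nine instructions (my transcription):

```python
def next_same_popcount(a):
    """Smallest integer greater than a with the same number of 1 bits
    (ITEM 175).  c isolates the low set bit; adding it ripples the
    carry; the XOR/shift/divide right-adjusts the displaced run of 1s."""
    c = a & -a
    r = a + c
    return r | (((a ^ r) >> 2) // c)
```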
{"url":"http://www.jjj.de/hakmem/hacks.html","timestamp":"2014-04-19T20:07:01Z","content_type":null,"content_length":"19930","record_id":"<urn:uuid:513b49f6-a3c4-4aaa-8f8a-afab49ea79a8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Compound Interest

Interest is an amount charged for the use of money. There are different methods for calculating this charge. A common method is "compound interest"; this method adds the interest from prior periods to the amount invested or borrowed (the "principal") before the current period interest is calculated. In other words, prior periods' interest is considered to be reinvested, and current period interest is calculated on the principal plus prior periods' interest.

Using Tables to Learn Compound Interest

Although compound interest solutions can be quickly and precisely obtained by using hand-held calculators, software applications, and Internet websites, compound interest tables are also still widely used. Tables are especially helpful for learning how to solve various compound interest problems such as determining future values, present values, payment amounts, interest rates, and loan balances. There are several reasons for this:

1. Table values can be interpreted as multiples or ratios (to present and future single amounts, and payments), which enhances understanding of the results of calculations.
2. The effects of changes in an interest rate or number of periods are easily viewed by looking at the changes in the multiples - sometimes these are quite dramatic.
3. By looking across rows and down columns, the direction of change in the multiples provides an intuitive understanding of the effects of compounding and discounting.

Compound Interest Tables

Other Common Methods

Simple Interest: Simple interest calculates interest only on the principal. Prior periods' interest is not assumed to be reinvested; therefore, current interest is not earned on prior interest. The formula for simple interest is P x i x n, where P is the principal amount, i is the interest rate, and n is the number of periods. For example, if $10,000 were invested for 9 months at 10% annual interest, the total interest would be $10,000 x .10 x 9/12 = $750.
(The interest rate is usually annual, so time is expressed in years or parts of a year.)

Discount Method: Interest is calculated on a principal amount and subtracted from the principal at the time of a loan. The borrower receives the difference. For example, suppose that you borrowed the same $10,000 as in the above example. The $750 would be subtracted from the principal, and you would have received $9,250 and be required to repay $10,000. Notice that the true interest rate is higher than the stated 10% because you have the use of only $9,250, not $10,000: $750 / $9,250 x 12/9 = 10.81% annual rate.

Rule of 78: The rule of 78 is an older method used for relatively short-term consumer loans. A pre-computed finance charge is calculated and added to the principal, and then paid off in equal monthly installments. For example, assume that you buy an appliance for $3,000 and a $300 finance charge is calculated. The loan term is for one year, so the equal payments would be $3,300 / 12 = $275. The amount of the finance charge earned by the lender is calculated by using a declining fraction. The numerator of the fraction is the most recent month, and the denominator is the sum of the months in the loan term. The denominator can be determined by the formula (n + n^2)/2, where n is the number of months. In the twelve-month example above, (12 + 12^2)/2 = 78 (that's the source of the name). The lender earns $300 x 12/78 = $46.15 in the first month. In the second month, the lender earns $300 x 11/78 = $42.31, and so on, until the last month, when the lender earns 1/78 of the finance charge. Making early or additional payments does not save interest. Only a full prepayment will save interest (sometimes called a 'rebate'). Even with a full loan payoff, the lender earns a disproportionate amount of interest in the early months, to the disadvantage of the borrower.
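The three methods above reduce to a few lines of arithmetic. A Python sketch reproducing the article's worked examples (the function names are mine):

```python
def simple_interest(principal, rate, years):
    """P x i x n."""
    return principal * rate * years

def discount_effective_rate(principal, rate, years):
    """True annual rate under the discount method, where interest is
    deducted from the principal up front."""
    interest = simple_interest(principal, rate, years)
    return interest / (principal - interest) / years

def rule_of_78_earned(finance_charge, month, n_months):
    """Lender's share of the finance charge in a given month (1-based)
    under the rule of 78."""
    denominator = (n_months + n_months ** 2) // 2   # 78 when n = 12
    return finance_charge * (n_months - month + 1) / denominator
```

simple_interest(10000, 0.10, 9/12) gives the $750 of the example; discount_effective_rate on the same figures gives the 10.81% true rate; rule_of_78_earned(300, 1, 12) gives the first month's $46.15, and the twelve monthly shares sum back to the full $300 charge.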
{"url":"http://www.worthyjames.com/info-interest-tables.html","timestamp":"2014-04-19T04:20:41Z","content_type":null,"content_length":"17572","record_id":"<urn:uuid:c6fc595a-f144-4c9d-955f-0a80030ef4ee>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Splittings of skeletal homotopy modules

This thesis is devoted to determining structure results on a group relative to a subgroup, using information about the kernel of the boundary map of associated free resolutions. If Y is a CW-complex with homotopy type K(G,1), then for n ≥ 2 the nth skeletal homotopy module, hn(Y) = πn(Y(n)), is a kernel of the nth boundary homomorphism of a free resolution of Z by free ZG-modules. By passing to the universal cover and using cellular homology, analogous descriptions in dimensions zero and one are available.

Let (Y,X) be a pair of connected, aspherical CW complexes of type K(G,1) and type K(S,1) respectively. If the map on fundamental groups induced by the topological inclusion is injective, then S can be seen as a subgroup of G, and the induced skeletal homotopy module for X, ZG ⊗S hn(X), naturally injects into the nth skeletal homotopy module of Y. We define three conditions on this injection of the induced module. When it is split injective over ZS we say SumZS(G,S) holds, when it is split over ZG we say SumZG(G,S) holds, and when it is split injective with ZG-projective cokernel we say PSumZG(G,S) holds.

When PSumZG(G,S) holds, a theorem of Serre [Hue79] implies that every finite subgroup of G is determined by S. When SumZG(G,S) holds, a theorem of Howie and Schneebeli [HS81] implies that the intersection of S with its conjugates is torsion free. When SumZS(G,S) holds, results of Bogley and Dyer [BD93] are generalized in this thesis to show that either S is self-normalizing in G or S has cohomological dimension less than or equal to n+1.

We also study the relationships amongst these conditions. Our main result along these lines is that each of SumZS(G,S), SumZG(G,S) and PSumZG(G,S) implies the same at dimension n+1 and hence all higher dimensions. Meanwhile we provide an example to show that not even PSumZG(G,S) implies SumZS(G,S) at dimension n−1. Moreover, clearly PSumZG(G,S) implies SumZG(G,S), which implies SumZS(G,S), but we provide examples to show neither converse holds in general.

We apply these splitting results to cyclically presented groups on n generators. We show that if SumZG(G,S) holds for the semi-direct product of a cyclically presented group on n generators with a cyclic group of order n, then the shift automorphism has order n. Using work of [BP92] we provide a family of cyclically presented groups whose shift automorphism has order n and apply a theorem of [CRS05] to determine that these groups cannot be the fundamental group of any hyperbolic 3-orbifold of finite volume.
{"url":"http://ir.library.oregonstate.edu/xmlui/handle/1957/20860","timestamp":"2014-04-18T03:58:08Z","content_type":null,"content_length":"26658","record_id":"<urn:uuid:188aa60a-4b00-4df9-9296-f982481db7c2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US4884021 - Digital power metering

This application is a continuation of application Ser. No. 042,306, filed Apr. 24, 1987, now abandoned.

Electrical utilities and principal power consuming industries have employed a variety of metering approaches to provide quantitative and qualitative evaluation of electrical power. The outputs provided by such metering systems vary to suit the particular needs of the user, selection of read-outs generally being made from the parameters including volthours, volt^2 hours, watthours, Qhours, varhours, and VA hours. These quantities are designated as in or out depending upon the direction of current flow, the term "out" representing delivery to the user and the term "in" representing return of power to the generating entity.

Typically, a metering system monitors power supplied through isolation and scaling components to derive polyphase input representations of voltage and current. These basic inputs then are selectively treated to derive units of power and the like as above listed. The most extensively employed technique has been the measurement of watthours through the use of an electromechanical induction meter. However, such devices are limited and thus, there have developed electronic analog techniques for carrying out multiplication and phase adjustment to achieve higher accuracies and a multitude of outputs.

Early analog approaches taken to provide power parameter outputs initially involved the use of thermally responsive coil elements and the like, the temperatures of which could be converted to outputs corresponding with power values. A lack of convenience and accuracy with such techniques led to interest in the utilization of Hall effect devices as multipliers, wherein voltage-proportional generated magnetic field and current were associated to provide a voltage output proportional to the product of current and voltage.
Other devices have been developed which utilize an electronic arrangement serving to capitalize on the exponential transfer characteristic of solid-state devices to carry out multiplication. In general, these early analog multiplication techniques were somewhat unsatisfactory, exhibiting accuracies lower than desired as well as instabilities.

Another analog multiplier technique currently popular in the industry utilizes the system concept of time division multiplication. For example, the multiplier produces a pulse waveform whose amplitude is proportional to one variable, whose length relative to period is a function of another variable, and whose average value is proportional to the product of the two values. A variety of improvements in such time division multiplier circuits have been developed with respect to controlling phase and phase derived inaccuracies. Such improvements, for example, are described in U.S. Pat. Nos. 4,356,446, issued October 26, 1982; and 4,408,283, issued October 4, 1983; both assigned to the Assignee of this invention.

Analog approaches to electrical parameter monitoring and multiplication techniques physically are beset with problems in achieving desired output accuracy. Accurate drift-free analog multipliers are somewhat expensive and generally exhibit undesirable drift and component variations from device to device. Accordingly, a considerable amount of technical effort is required in their production and maintenance to provide for adjustment for these various inadequacies. As a consequence of these deficiencies, other approaches have been contemplated by investigators. For example, should the line inputs be purely sinusoidal, then straightforward peak detection techniques associated with mathematical evaluation would be available.
However, the line inputs experienced worldwide, while basically resembling sinusoids, exhibit substantial variations representing high and low frequency noise, high energy transients and a multitude of variations. These variations generally are caused by any of a number of external phenomena, for example, rapidly changing loads developed from solid-state controllers such as silicon controlled rectifier driven devices. In effect, portions of the waveform may be essentially missing due to high speed switching at loads.

Purely digital approaches to measuring electric power have been contemplated as ideal. With such an arrangement, for example, high rates of sampling may be employed and the instantaneous sample values then may be converted or digitized as binary values. This ideal approach generally has been considered to require very high speed systems either unavailable or of such cost and complexity as to preclude utilization for the instant purpose. However, this idealized approach promises to avoid degradation of accuracy occurring due to component variations, drift phenomena, and environmental conditions.

As a compromise to the above ideal high speed sampling, relatively slow sampling techniques, i.e. on the order of each 45° of a cycle, have been proposed. To regain accuracy, such sampling is randomized. When using such randomized data, the approaches then must employ an averaging of the sampled values and thus, the advantages of high sample rate, instantaneous evaluation of waveform are not achieved, nor can the systems distinguish discrete vagaries in distorted sinusoids. A typical randomizing approach is described, for example, in U.S. Pat. No. 4,077,061, issued February 28, 1978.

A further design aspect which has impeded the development of practical digital multiplication circuits resides in the somewhat limited range output of analog-to-digital conversion devices.
Those available at practical cost, for example, provide a 12-bit output which generally will be found to be inadequate to achieve the scale of accuracy desired by industry. This particularly is true for those portions of a given sinusoid cycle which are of relatively lower amplitude as the cycle approaches cross-over. It is important that these lower level amplitudes be evaluated at high resolution accuracies for the approach to be practical. Some techniques for improving evaluation accuracies at lower amplitude have employed compressed scales to maximize resolution at lower levels. However, the full range bit resolution for such approaches remains unsatisfactory, and complex and time demanding software overhead generally is consumed to accommodate to the compressed scaling.

Notwithstanding the foregoing, should a practical digital approach with high speed sampling be achieved, such system still must be capable of measuring all of the above-listed electrical parameters. Further, the technique must have reasonable accuracy, such that the complete system, including all scaling components such as transformers and resistors, operates within an allowable error of ±0.09% of input, ±0.005% of rated input. Further, the multiplier electronics should be capable of performance within ±0.06% of input, ±0.005% of rated input. Thus, an allowable error of ±0.03% would be available for the input analog or scaling portion of any such device. Further, these systems should exhibit a reasonable dynamic operating range such as ±20% nominal voltage input, 0.025 to 200% of nominal current input and any power factor.

Additionally, such system should be operable in conjunction with either single or polyphase power systems. This requires an approach involving a single phase metering technique such that single or polyphase calibration procedures may be employed. Thus, such system may not rely on the 120° phase separation of three phase systems.
The present invention is addressed to a method and apparatus for metering power supplies wherein advantageous high speed sampling of the electrical parameters of the current and voltage is carried out at regular intervals. With the approach, the current and voltage parameters are determined for each degree of the 360° of a sampled cycle. Employing conventional and thus practical analog-to-digital converting devices having, for example, 12-bit outputs, the technique of the metering approach still permits very high accuracies of read-out.

This high accuracy is achieved through a dual sampling technique wherein each 1° sample is first submitted to conversion to binary form for the purpose of developing a scaling evaluation and a scaling factor. The scaling evaluation is utilized to selectively adjust the gain of an amplification stage to which the electrical parameter for the sampled 1° is submitted prior to a second conversion. This second conversion then provides a data read-out which is multiplied at high speed by the scaling factor to provide an expanded digital data value corresponding with the electrical parameters of voltage and current. The expanded values may, for example, have as high as 21 significant bits in conjunction with a sign bit. These expanded data then are selectively multiplied to develop digital representations for 12 power parameters, such multiplication being carried out for each of 360° of a sampled cycle.

The metering apparatus responds to cross-over events of the sampled cycles to commence sampling on a degree-by-degree basis. Thus, the method is capable of metering both single-phase and poly-phase systems. Because of the responsiveness of the apparatus to cross-over locations, such otherwise evasive electrical quantities as volt amperes are readily developed through the approach of measuring a cycle of voltage and a corresponding cycle of current, the sampling of each such cycle being commenced with the detection of a cross-over.
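The dual-conversion idea can be sketched numerically. In the toy model below (the gain steps, full-scale value, and 12-bit width are my assumptions, not figures from the patent), a first conversion of the raw sample selects an amplifier gain, a second conversion digitizes the amplified sample, and multiplying by the reciprocal scale factor yields the expanded value:

```python
def autorange_convert(v, full_scale=10.0, bits=12, gains=(1, 8, 64, 512)):
    """Two-pass conversion sketch: ranging pass, gain selection,
    data pass, then expansion by the scaling factor."""
    lsb = full_scale / (1 << (bits - 1))          # signed converter step
    coarse = round(v / lsb)                       # first (ranging) pass
    # largest gain that keeps the amplified sample on scale
    usable = [g for g in gains if abs(coarse) * g < (1 << (bits - 1))]
    gain = max(usable) if usable else 1
    data = round(v * gain / lsb)                  # second (data) pass
    return data * lsb / gain                      # expanded digital value
```

Near cross-over a 1 mV sample lands on the highest gain and is resolved to roughly 2^20 levels of full scale, which is how 12-bit hardware can approach the ~21 significant bits the patent describes.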
By submitting the first of the parameters to memory and subsequently carrying out multiplication, not only are the noted volt ampere quantities available with the method of the invention, but through selective delay techniques, Q and var quantities readily are determined.

Because front end analog components necessarily are employed to provide step-down functions as well as a part of the analog-to-digital conversion function, the apparatus of the invention incorporates a zero offset evaluation technique which is carried out periodically during the operation of the metering function, for example, following the measuring of a cycle of each of three phases of power. In one embodiment of the invention, implementation of the conversion and multiplication procedures of the apparatus is provided through employment of a synchronous state machine operating in concert with a data signal processing device. With these components, the advantageous very high sampling rates are achieved. The digital approach applied with the method and apparatus also permits a digital calibration technique to be employed wherein calibration quantities provided by the manufacturer are retained in non-volatile memory and are employed as multiplication factors in the course of processing sampled data.

Another feature of the invention is to provide an apparatus for metering an electrical power supply which includes a step-down arrangement connectable with the supply for deriving first and second electrical parameter sample signals of given waveform amplitudes during corresponding given cycles of the supply. An amplifier arrangement is provided having gain characteristics which are controllable in response to a gain control input for selectively amplifying the first and second sample signals to derive corresponding first and second scaled signals.
A converter is actuable to convert the first and second sample signals to corresponding first and second range digital values and subsequently actuable to convert the first and second scaled signals to corresponding first and second data digital values and a control is provided for actuating the converter to derive the first and second range digital values and corresponding first and second scaling factors. The control further is responsive to the first and second range digital values for providing the gain control input at predetermined scaling levels and for subsequently actuating the converter to derive the first and second data digital values. This control further multiplies the first and second data digital values with respective first and second scaling factors to provide first and second expanded data digital values. The expanded data digital values are within a range extending to at least about 21 binary bits, the control means being responsive to a predetermined commencement location of the waveform of the first electrical parameter sample signal for commencing the actuation of the converter and subsequently effecting the actuation at predetermined, regular intervals. The control includes a parameter memory for selectively retaining the first expanded data digital values and is responsive to effect a multiplication of each parameter memory retained first expanded data digital value with a second expanded data digital value from the sequence thereof developed following a delay selected to derive predetermined power parameter metering output data. Another feature of the invention is to provide apparatus for metering a polyphase power supply of waveform exhibiting voltage and current electrical parameters of given amplitudes within cycles defined by cycle envelopes, which includes a step-down arrangement connectable with the power supply for deriving first and second electrical parameter sample signals of amplitudes corresponding with said given amplitudes.
A conversion arrangement is responsive to the first and second electrical parameter sample signals and is actuable to derive respective first and second digital data values corresponding with the given amplitudes and having an extent within a range extending to at least 21 binary bits. A sampling control is provided for actuating the conversion arrangement to effect the derivation of first and second digital data values at a predetermined, regular sampling rate commencing at a predetermined commencement location of the cycle envelope of the first electrical parameter sample signal and including memory for selectively retaining the first digital data values derived with each actuation. The sampling control is responsive to effect a multiplication of the memory retained first digital data values and the second digital data values following selective power parameter defining delays to provide power parameter digital values with respect to each conversion arrangement actuation. Finally, a processing arrangement is provided which is responsive to effect integration of a sequence of the power parameter digital values for deriving meter output signals. Another feature of the invention is to provide a method for metering a power supply of waveform exhibiting current and voltage electrical parameters of given amplitudes within cycles defined by cycle envelopes which comprises the sequence of steps of: monitoring the source to provide first and second electrical parameter signals of amplitudes corresponding with the given amplitudes; converting the first and second electrical parameter signals to respective first and second binary range values at a predetermined regular sampling rate commencing upon the occurrence of predetermined commencement locations of select cycle envelopes; deriving first and second scaling factors for each respective first and second binary range values; amplifying the first and second electrical parameter signals at gains corresponding with respective first and
second binary range values; converting the amplified first and second electrical parameter signals to respective first and second binary data values at the predetermined regular sampling rate; multiplying the first and second binary data values with respective first and second scaling factors to derive corresponding first and second expanded binary data values within a range extending to at least about 21 binary significant bits; and selectively multiplying the first and second expanded binary data values together to derive predetermined metering outputs. Another feature of the invention is to provide a method of metering a power supply of waveform exhibiting current and voltage electrical parameters of given amplitudes within cycles defined by cycle envelopes which comprises the steps of: monitoring the supply to provide first and second electrical parameter sample signals of amplitudes corresponding with the given amplitudes; converting the first and second electrical parameter sample signals, commencing upon the occurrence of a predetermined commencement location of the cycle envelope of the first parameter sample signals, to respective first and second binary data values each exhibiting a range of significant bits extending to at least about 21, at a predetermined regular sample rate; retaining the first binary data values in memory for a delay interval selected for deriving said power parameter digital values; multiplying concurrently developed first and second binary data values to derive watt digital values; multiplying the second binary data values with the memory retained first binary data values following a predetermined delay to derive a select power parameter digital value; accumulating a sequence of watt digital values to derive watthour meter output signals; and accumulating a sequence of the select power parameter digital values to derive corresponding select meter output signals.
Other objects of the invention will, in part, be obvious and will, in part, appear hereinafter. The invention, accordingly, comprises the apparatus and method possessing the construction, combination of elements, arrangement of parts and steps which are exemplified in the following disclosure. For a fuller understanding of the nature and objects of the invention, reference should be had to the following detailed description taken in connection with the accompanying drawings. FIG. 1 is a block diagrammatic representation of the metering apparatus of the invention; FIGS. 2A and 2B combine to represent a data flow block diagrammatic representation of the circuits employed for sampling and multiplying techniques according to the invention; FIGS. 3A and 3B combine as labelled to provide a diagram of a circuit structure for deriving the sampling and control according to the invention and including digital multiplication and digital signal processing functions; FIGS. 4A-4C are a program flow chart for the synchronous state machine components of the circuit of FIG. 3A; FIGS. 5A-5G combine to provide a flow chart representing the program for the digital signal processor of the circuit employed with the apparatus of the invention; FIG. 6 is a diagrammatic time representation of the activities of the synchronous state machine with respect to the digital signal processor of the circuit of the apparatus of the invention; and FIG. 7 is a block diagrammatic representation of a version of the apparatus of the invention employing two high speed digital signal processors. In its general aspects, the apparatus and method of the invention involve a highly enhanced sampling of the polyphase sinusoid input of a utility or the like. With the approach, sampling of a given sinusoid cycle may be carried out in successive 1° increments. 
Each of this relatively high number (360) of samples per cycle is converted or digitized to a digital value using a practical 12-bit analog-to-digital converter. Because such practical converters will provide a range from lowest sampled amplitudes to peak sampled amplitudes of 2^12 or 4096 increments, without more, evaluation at lower sample amplitudes would be ineffective in terms of their significant bit accuracy. However, conversion is carried out in two steps, the first being a range conversion wherein the sample amplitude is evaluated with respect to 11 possible ranges of amplitude or scaling factors. That range data then are stored and the sample then is amplified in accordance with a desired range code to again be submitted to the analog-to-digital converter to provide a data conversion. The product of these latter data and the range data is then found to, in effect, achieve an output having a significant bit range extending to 2^21 or 2,097,152 increments. Thus, a great improvement in accuracy of reading for each of the one degree samples is developed. This same multiplication function also is employed, where called for, to develop the 12 possible electrical parameters of the system with respect to each sample obtained. Because the system is digitized essentially from the point of front end digital conversion, calibrating corrections can be provided in digital form as opposed to the otherwise time consuming requirements of adjusting potentiometers and the like. Further, the digital technique permits an ongoing evaluation of any ambient effects upon the front end analog circuitry on a relatively rapid basis. Looking to FIG. 1, a representation of the metering approach of the invention is represented generally at 10. Device 10 is coupled typically to a polyphase line input and employs conventional step-down networks as represented at blocks 12 and 14 to provide respectively voltage and current related inputs, for example, for three phases: A, B, and C.
In the latter regard, the phase A-C voltage input signals are provided at three-line grouping 16 while voltage signals corresponding with a current developed by current transformers are provided at three-line grouping 18. Line groupings 16 and 18 are directed to the sampling input of a high speed digital control stage represented at block 20. This control stage 20 includes the dual conversion components for range and data with respect to each sample, as well as multiplication components. The stage further includes a processing network for treating the parameters derived for each sample and developing pulse outputs which can be employed for readouts and the like as are conventionally used in industry. To achieve the speeds required for this latter processing, a general purpose digital signal processor (DSP) is employed. Outputs for six selected electrical parameters which always will include watthours are provided by the latter processing function as represented by the six-line grouping 22. The pulse carrying outputs at line grouping 22 are employed in typical fashion to provide KYZ relay outputs as represented at tapping line grouping 24 and also to provide the inputs to a microprocessor driven electronic register represented within dashed boundary 26. Register 26 is controlled from a conventional microprocessor represented at block 28, the input ports of which are coupled to receive line grouping 22. In conventional fashion, the microprocessor 28 operates in conjunction with random access memory (RAM) as represented at block 30 as well as in conjunction with a program contained in read only memory (ROM) as well as electronically erasable read only memory (EEPROM) as shown at block 32. The electronically erasable read only memory as represented at block 32 functions to carry calibrating information which is submitted to the digital signal processor (DSP) function at block 20 at such time as the device 10 is powered up.
This dual directional serial communication is represented by line 34. To maintain the data developed as outputs at array 22, a back-up battery is employed with the register 26 as represented at block 36 and line 38. The microprocessor 28 functions to treat the data received from line grouping 22 and provide a visual display, preferably through a liquid crystal (LCD) display represented at block 40. To permit the device 10 to be programmed remotely, a modem as represented at block 42 is provided which functions to permit carrying out of programming and communication via a telephone link as represented at line 44. Similarly, it is desirable to provide for on-site programming, for example, through an IR communications or optical link. This is provided through a serial port represented at block 46 and line 48. Also conventional, serial data communication may be provided, through the port 46 as represented at line 50. To achieve the processing speeds requisite to carrying out a sampling each degree of a conventional power cycle, for one embodiment of the invention, a synchronous state machine approach is employed. With such an approach, decisional software overhead and the like commonly encountered with microcomputers is avoided and a full development of requisite electrical parameters commencing with watthours is achieved for each sample degree or about each 46 microseconds. For example, operating at a 5.4 MHz clock speed, the synchronous state machine carries out 128 steps to process a 1° sample. As a prelude to considering the architecture of the circuitry for the sampling and multiplying technique, reference is made to FIGS. 2A-2B where the operation of the system is illustrated in data flow block diagrammatic fashion. The figures should be considered in an orientation corresponding with their associative labeling. FIG.
2A shows that data flow as established for three phases, A-C, and it may be observed that the components of the figure are identical and thus identically labeled. Accordingly, the same numeration is employed to describe corresponding components from phase to phase along with prime notations for phase B and double prime notations for phase C. Data flow is shown to commence with the insertion of voltage analog signals represented at arrow 60 to an analog-to-digital conversion function represented at block 62. These voltage analog signals will be provided for phases A-C as well as a zeroing or ground value employed for periodic adjustment of values of the system. In similar fashion, the corresponding current analog signals are provided as phase designated voltages as represented at arrow 64 shown being directed to an analog-to-digital function represented at block 66. Preferably, the inputs 60 and 64 are multiplexed in the sequence phase A-phase C in the noted 1° sampling intervals for a full cycle of 360°. In the order of sampling, first phase A is sampled, then phase B and then phase C following which a zeroing measurement is taken. Thus, any of the given phase cycles are measured approximately every third cycle. In the latter regard, 540° are used for each cycle in order to carry out multiplication to develop such parameters as var, Q, and volt amps. With the arrangement, the system is capable of operating in conjunction with single or polyphase inputs, and, in this regard, will be seen to react in conjunction with cross-over events to detect the commencement of the initial phase under sampling. The output of the A/D function 62 is shown being directed in data flow fashion to a voltage scaler function as represented at block 72.
In effect, two analog-to-digital conversions are taken with the system, one to provide the scaler data represented at block 72 in which the 12 bits of digital information representing the amplitude of the 1° sample and a sign bit are employed to establish 11 scaling levels of amplitude from 0 to peak amplitude. The initial digital conversion is for this scaling function and, as represented by flow lines 74, 76 and block 78, this initial value of the amplitude of the sample is used to access a look-up table in random access memory (RAM) to determine an 11-bit scaling value or factor which is used as a multiplier. This 11-bit scaler then is provided as represented at lines 80 and 82 as an input to a multiplication step represented at circle 84. The voltage scaler value 72 additionally is used to provide an input to an amplification or treatment stage which amplifies the voltage sample input 60 prior to a next conversion by analog-to-digital function 62. Thus the conversion now represented along flow lines 68 and 70 and block 86 is one of voltage data of 11 data bits plus a sign bit. As represented at flow line 88, these 11 bits then are directed to the multiplication function 84, whereupon a scale adjusted valuation or expanded data digital value is developed of enhanced significant bits which may have an extent of 21 data bits plus a sign bit for a highest scale level and this enhanced and highly accurate representation of the amplitude of voltage for the sample degree then is available as represented at flow line 90. The current samples as described at line 64 are converted in similar fashion as represented at block 66 such that, initially, a scaler current valuation is made, as represented at flow lines 92 and 94 leading to the scaler function represented at block 96. This current scaler function, as before, provides an input as represented at lines 74 and 76 to a look-up table of 11 values in random access memory as represented at block 78.
The resultant scaler or scaling factor, as before, is then provided to a multiplication function via flow lines 80 and 82. However, this same scaling value also is utilized to adjust the gain of an input amplification stage to the conversion function 66 such that a next data conversion then provides a digital current data signal as represented at block 98 having 11 data bits plus a sign bit. As before, as represented at line 100, this current data digital value then is submitted for multiplication as represented at circle 102 with the RAM contained scaling factor represented as being asserted from flow line 82. The resultant product, as represented at flow line 104, will be a highly accurate representation of the amplitude of the current sample having as many as 21 significant digital bits of information plus a sign bit. The sampling and digital multiplication function now has highly enhanced valuations of voltage and current for the given 1° sample. Returning to flow line 90, the voltage sample is adjusted or corrected for gain and phase errors. These errors will occur at the front end of the system, where analog components such as transformers and scaling resistors are employed and conversion functions are carried out. Additionally, phase or time error can occur in consequence of the transforming as well as conversion. To correct for these normally encountered vagaries, each meter is tested in the course of its assembly, for example, in conjunction with a standard. Correction of the output of the meter under test with the standard then is carried out by providing a correction factor for each sample degree of any given cycle and such data are positioned in random access memory (RAM) at power-up. The look-up of the correction factor is represented at block 106 and the 21-bit data output thereof is represented at flow line 108 extending to a multiplication function represented at circle 110.
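The per-degree digital calibration multiply can be sketched as below. The fixed-point format (Q1.15 factors in a 360-entry table) and all names are assumptions for illustration, not details taken from the text.

```python
# Hypothetical fixed-point calibration: per-degree correction factors stored as
# Q1.15 integers in a 360-entry table loaded into RAM at power-up.
CAL_ONE = 1 << 15   # fixed-point representation of a unity correction factor


def corrected(sample, degree, cal_table):
    """Multiply the expanded sample by the correction factor for its degree."""
    return (sample * cal_table[degree % 360]) >> 15


cal = [CAL_ONE] * 360             # start from unity calibration
cal[10] = int(1.01 * CAL_ONE)     # e.g. a +1% gain error measured at 10 degrees
```

In place of potentiometer trims, the factory-measured factors are simply written into the table, so recalibration is a data update rather than a hardware adjustment.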
A resultant corrected voltage sample digital representation which may have as many as 21 data bits plus a sign bit then is directed as represented by flow lines 112 and 114 to temporary storage in random access memory as represented at block 116. The voltage value for the given sample at line 112 is employed to develop a volthour parameter and thus, the flow line for the value is seen to progress, as represented by lines 118 and 120 for further treatment. However, as represented at line 122 and a multiplication function represented by circle 124, a volts^2 multiplication may be carried out to provide a volts^2 valuation for processing as represented at flow line 126. A watt valuation for the 1° sample is provided by a multiplication represented at circle 128 which provides the product of volts at line 112 with the corresponding current valuation from line 104 as represented at line 130. This product then is submitted for further processing as represented by flow line 132. The parameter, Q, represents a lag in phase of 60° with respect to watts. Accordingly, a multiplication function is provided as represented at circle 134 which carries out multiplication of the current digital value as represented at flow lines 104 and 136 with voltage only after a delay of 60°. Thus, the volt data are withdrawn from RAM memory function 116 as zero valuation represented at line 138 until after a 60° delay occurs to develop an output for the Q parameter as represented at flow line 140. Similarly, the var parameter is one representing a 90° delay. Thus, as represented by the multiplication function at circle 142, the current values for the given sample at lines 104 and 144 are multiplied by zero voltage digital values until after a 90° delay. Accordingly, the voltage digital valuations are active participants in the multiplication activity represented at circle 142 only after a 90° delay and the products of the multiplication extend in the diagram along line 143. 
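A minimal sketch of the delayed products for watts, Q, and var, together with the crossing-aligned volt-ampere product treated in the next passage, using idealized per-degree samples (floating point rather than fixed point; all function names are illustrative):

```python
import math


def sampled_cycle(amplitude, phase_deg, degrees=540):
    """Per-degree samples; 540 degrees cover the worst-case 180-degree
    displacement between the voltage and current envelopes."""
    return [amplitude * math.sin(math.radians(d - phase_deg)) for d in range(degrees)]


def delayed_products(v, i, delay):
    """Multiply current by voltage delayed by 'delay' degrees; the voltage
    participates as zero until the delay has elapsed, as in the data flow."""
    return [(v[n - delay] if n >= delay else 0.0) * i[n] for n in range(360)]


def crossing(samples):
    """First negative-to-positive zero crossing (the commencement location)."""
    for n in range(1, len(samples)):
        if samples[n - 1] < 0.0 <= samples[n]:
            return n
    return 0


def volt_amperes(v, i):
    """Commence each envelope at its own crossing before multiplying, so the
    average product approaches apparent power regardless of displacement."""
    nv, ni = crossing(v), crossing(i)
    return sum(v[nv + d] * i[ni + d] for d in range(360)) / 360.0


v = sampled_cycle(120.0, 5.0)        # voltage, arbitrary 5-degree offset
i = sampled_cycle(5.0, 35.0)         # current lagging a further 30 degrees
watt = delayed_products(v, i, 0)
q = delayed_products(v, i, 60)       # Q: voltage delayed 60 degrees
var = delayed_products(v, i, 90)     # var: voltage delayed 90 degrees
```

The zero-valued entries at the head of the Q and var sequences correspond to the zero valuations withdrawn from RAM before the delay has elapsed.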
A determination of the parameter volt amperes (VA), in essence, requires an alignment of the voltage and current cycle envelopes. Inasmuch as the current component may be spaced from the voltage component by as much as 180°, zero crossings or suitable predetermined commencement locations are monitored for this function and, as the zero crossing of the current component is detected, then the voltage values stored earlier in RAM 116 are engaged in a multiplication function. The latter function is represented at circle 148 shown accessing the current valuation from line 104 and RAM contained voltage digital information from line 150. A resultant VA evaluation for the given sample then is submitted for further processing as represented at flow line 152. From the above, it may be apparent that, with a maximum possible delay of 180° to develop the VA output, the total number of sample degrees for each cycle evaluated will be 540°. The resultant outputs of all three phases A-C then are seen to be combined at earlier-described lines 120, 126, 132, 140, 143, and 152. Looking to FIG. 2B, the above designated output flow lines are seen directed to a data processing function which, as discussed above, is controlled by a digital signal processor (DSP). However, one further value is added to the products which are made from the conversion functions 62 and 66. At the conclusion of sampling a full cycle of phases A, B, and C, a zero valuation is asserted to the conversion function such that any offset values may be detected for summing correction in the processing procedure. Looking to the process, it may be observed that the volt digital values for each degree sample for each of phases A-C flow as data represented by lines 120 and 154 through a zero correction function represented at block 156. The resultant corrected valuation, which may be as high as 21 significant data bits, is then submitted as represented at line 158 to an accumulating register represented at block 160.
This register accumulates the values and provides, in effect, an integrating function which, upon reaching a predetermined value, develops a signal as represented at line 162 which is directed to an overflow register represented at block 164. Register 164 provides a pulsed output representative of volthours as depicted by flow line 166. Generally, the number of pulses corresponding with a given parameter valuation is determined by the end user. The data as represented at line 166 flows to a parameter selection function represented at block 168 for outputting as one of six channels of data represented at line grouping 170. These six channels correspond with line grouping 22 as described in conjunction with FIG. 1. In similar fashion, the volt^2 parameter data are shown flowing via line 126 and, as represented by line 172 and block 174, are corrected for zero offset, whereupon the data bits which may be as high as 21 are directed to an accumulating register as represented by line 176 and block 178. As before, the values accumulate for each phase and, at some predetermined overflow value, are submitted to an overflow register as represented by line 180 and block 182. A resultant pulse output is developed from the register function 182 as indicated by line 184 representing an integrated valuation for volt^2 hour which then flows to the selection procedure at block 168 for possible election as an output at six line grouping 170. Data flow representing the electrical parameter, watt, is shown flowing via line 132 and, as represented at line 186 and block 188, such data are adjusted for zero offset and submitted as represented by data flow line 190 and a selection function represented by switch S1 to either of two accumulating register functions represented at blocks 192 and 194 via respective lines 196 and 198.
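The accumulate-and-overflow arrangement behaves like a pulse-emitting integrator. A minimal sketch, with an arbitrary threshold standing in for the user-chosen pulse weight (the class name is illustrative):

```python
class PulseIntegrator:
    """Accumulating register plus overflow register: each time the running sum
    crosses the threshold, one output pulse is emitted and the remainder is
    carried forward so no fraction of a pulse is lost."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.acc = 0        # accumulating register
        self.pulses = 0     # overflow register pulse count

    def add(self, value):
        self.acc += value
        while self.acc >= self.threshold:
            self.acc -= self.threshold
            self.pulses += 1


meter = PulseIntegrator(threshold=1000)
for _ in range(360):
    meter.add(7)            # e.g. a constant per-degree watt valuation
```

Integration thus reduces to counting overflow pulses, which is exactly the form the KYZ outputs and the electronic register consume.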
The register function represented at block 192 collects data corresponding with a positive or "watts out" cycle for integration, while a corresponding "watts in" compilation is evolved in conjunction with the accumulating register function 194. With the instant digital approach, a determination as to the appropriate polarity for a given sampled phase cycle is provided on a historical basis wherein the polarity then available at a polarity detector function represented at block 200 controls the orientation of the selection represented by switch S1. This control is represented by dashed line 202. The polarity detector function at block 200 may be implemented as an up/down counter performing in conjunction with the earlier-discussed sign bit of the converted data. This sign bit input to the register is represented by flow line 204 extending from flow line 190. Because of the vagaries of the system and slight phase deviations which will be encountered during sampling, the polarity detector will be incremented upwardly with positive sign bit inputs and, conversely, incremented downwardly with the input of negative bits. However, the overall history of signage for any given number of samplings, for example 360, will determine control over the switching function S1, i.e. that indication as to whether the information is with respect to watthours out or watthours in. As before, the accumulated valuations in register function 192 will be provided as outputs as represented at line 206 for a given threshold, which information is directed to an overflow register function represented at block 208. A pulse designated output occurs from register 208 as represented by flow line 210 which is directed to the selection function at block 168 for outputting as a channel at line grouping 170. 
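The polarity detector controlling switch S1 can be sketched as a sign-history counter whose net sign selects the "watts out" or "watts in" accumulator; the counter model follows the up/down description above, and the names are illustrative:

```python
def polarity(samples):
    """Up/down counter over the sign bits of one cycle of samples: incremented
    for positive signs, decremented for negative; the net sign is the verdict."""
    count = 0
    for s in samples:
        count += 1 if s >= 0 else -1
    return 1 if count >= 0 else -1


# Mostly positive watt products with a few noise-induced negative signs:
history = [5] * 300 + [-1] * 60
```

Because the decision rests on the accumulated history of an entire cycle rather than on any single sample, slight phase deviations and noise do not flip the routing.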
Similarly, the watthour in data developed in accumulating register 194 is outputted, as represented by flow line 212 to an overflow register function represented at block 214 for presentation as pulse data, as represented at line 216 to the selection function at block 168. Q valuations, as represented by the data flow path at line 140, are shown being corrected for zero offset, as represented by flow line 218 and block 220, whereupon the data bits for this parameter are submitted to a selection function represented by a switch S2 via lines 224 and 226 to respective accumulating register functions represented at blocks 228 and 230. Block 228 functions to provide an integrated valuation for Qhours out, while block 230 provides the corresponding valuation for Qhours in. As before, the general flow of power, as developed by the historic accumulation of the polarity detector 200 determines the selection represented by switching function S2. Where Qhours out are at hand, then the overflow of the accumulating register function 228, as represented at line 232 is directed to an overflow register function represented at block 234. The resultant, pulse categorized data representing Qhours out are developed and submitted as represented by line 236 to the selection function represented at block 168 and, if selected, are provided at six line grouping 170. Correspondingly, the Qhour in integrated valuation evolved at the register 230 provides an overflow output at a predetermined level as represented at flow line 238 which is directed to an overflow register function represented at block 240. The resultant pulse designated Qhour in data then are submitted as represented by flow line 242 to the selection function represented at block 168 and thence, if selected, to an output at six line grouping 170. The flow of var data, as represented at line 143 from the three phases is shown directed via line 244 to the earlier-described zero offset correction function represented at block 246.
Upon correction, this data flow then is submitted to a dual selection logic to evolve four quadrant varhour metering. In this regard, as represented by data flow line 248 and sign bit flow line 250, the signage for each data sample is submitted to a polarity detector function represented at block 252 which, as before, may be implemented as an up/down counter. The polar sense of this counter, i.e. + for lag and - for lead will be determined on an historic basis as before, being an accumulation of, for example, 360 sign bit components. Thus, the valuation representing that history controls a selection function represented by switch S3, such control being represented by dashed line 254. With the lag and lead characteristic thus selected by the function represented at S3, the data are then distributed in accordance with overall power flow as represented by line 256 extending to a selection function represented by switch S4 leading, in turn, to lines 258 and 260. Selection function S4 is controlled, as above, from the polarity detector 200, as represented at dashed line 202. Line 258 extends to an accumulating register 262 which collects data valuations for varhour lag out and the overflow representing integrated increments thereof is represented, as presented at line 264, to an overflow register function represented at block 266. Pulse categorized outputs of function 266 are represented by data flow line 268 extending to the selection function represented at block 168 and, if elected, to six line grouping 170. An oppositely-disposed power flow selected by the function represented at switch S4 shows a data flow via line 260 through an accumulating data register function represented at block 270. 
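The two sign histories together resolve four quadrants: the reactive sense (lag/lead, switch S3) and the power direction (out/in, switches S4 and S5) route each var sample into one of four accumulators. A minimal sketch of that routing (the dictionary keys and function name are illustrative):

```python
def route_var(value, power_sign, reactive_sign, accumulators):
    """Distribute a var sample by the two sign histories: power direction
    (out/in) and reactive sense (lag/lead)."""
    key = ('out' if power_sign > 0 else 'in',
           'lag' if reactive_sign > 0 else 'lead')
    accumulators[key] += value


acc = {(p, r): 0 for p in ('out', 'in') for r in ('lag', 'lead')}
route_var(10, power_sign=+1, reactive_sign=+1, accumulators=acc)  # varhour lag out
route_var(4, power_sign=-1, reactive_sign=-1, accumulators=acc)   # varhour lead in
```

Each of the four accumulators then feeds its own overflow register, yielding the four pulse-categorized varhour outputs described above.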
Thus, for this direction of power flow, the overflow representing an integrated valuation of the register function 270 is provided at line 272 extending in data flow fashion to an overflow register function represented by block 274 and shown having a pulse categorized data output represented at line 276 corresponding with varhour lead in data. Where the polarity detection history represented at block 252 shows a positive valuation or lag condition, then the selection function represented by switch S3 will elect a data flow represented by path line 278 leading to the selection function represented by a switch S5. For power flow out conditions, then as represented by line 280 showing data flow to an accumulating register function represented at block 282, an integration occurs providing an overflow data flow at line 284 directed to an overflow register represented at block 286, which, in turn, provides a pulse categorized output data flow represented at line 288 corresponding with varhour lead out. The latter data are submitted to the selection function at block 168 and, if selected, will appear at the six line output grouping 170. Where the selection represented at switch S5 is a power flow to the utility, then the data flow is represented by line 290 as extending to an accumulating register represented at block 292 wherein the values of the sampled inputs are collected. The resultant integration provides an overflow as represented by the data flow line 294 extending to an overflow register represented at block 296. A pulse categorized output then is provided, as represented by data flow line 298 corresponding with varhour lag in data which is directed to the selection function represented at block 168 for presentation, if selected, to the six-line grouping 170. VA data flow is represented at line 152 and is shown flowing via line 290 to a zero offset correction function represented at block 292.
The corrected data then flow to an accumulating register function through a selection feature represented by switch S6 controlled from the polarity detector 200 via line 202. For a positive history represented at the detector function at block 200, then the data flow is represented as along line 296 leading, in turn, to the accumulating register represented at block 298. An integration form of treatment ensues providing an overflow represented at flow path line 300 directed to an overflow register function represented at block 302. A resultant pulse categorized signal output representing VA hour out data then is directed to the selection function represented at block 168 for presentation, if selected, to a channel of the six-line output grouping 170. In the event the power flow is toward the utility, then the data flow from the selection function represented at switch S6 is along line 306 directed to an accumulating register function represented at block 308 for a value accumulation amounting to an integration. The overflow is then directed as represented by line 310 to an overflow register function represented at block 312 to provide a pulse categorized output data flow represented at line 314 corresponding with the data VA hour in. Flow line 314 is directed to the selection function represented at block 168 and, if the subject data are selected, they will be outputted at one channel of six-line grouping 170. Referring to FIGS. 3A-3B, the circuit structuring for deriving the sampling, control including digital multiplication and digital signal processing (DSP) functions described in conjunction with block 20 in FIG. 1 is revealed. In FIG. 3A, the analog networks for treating incoming three-phase power are represented at blocks 320 and 322. These step down functions will include conventional voltage and current transformers along with resistor and capacitive components suited for appropriate scaling and conversion of current to voltage. 
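Each of the hour quantities above follows the same accumulate-and-overflow pattern, which can be sketched minimally as follows. The threshold value is an assumed parameter; the patent does not state one.

```python
# Minimal sketch of the accumulating-register/overflow-register pairs (e.g.
# blocks 262/266 or 308/312): sample values collect in an accumulator, and
# each time the running total crosses a threshold one output pulse is
# categorized. The threshold is an assumption.

class PulseAccumulator:
    def __init__(self, threshold):
        self.threshold = threshold   # accumulated quantity per output pulse
        self.total = 0
        self.pulses = 0              # pulse categorized output count

    def add(self, sample):
        self.total += sample
        while self.total >= self.threshold:   # overflow: one integrated increment
            self.total -= self.threshold
            self.pulses += 1
```

Accumulating samples until a fixed quantum overflows is what converts the per-degree products into the pulse-per-unit-energy outputs selected at block 168.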
The resultant voltage analog signals are presented via three-line grouping 324 as labeled VA, VB, and VC to corresponding inputs of a phase multiplexer represented at block 326. Additionally provided as an input to the multiplexer 326 is a line labeled VZ representing a zero input for the earlier-described zero offset measurements. In similar fashion, the analog current signals for phases A, B and C are provided at three-line grouping 330 as labeled IA, IB and IC. Additionally, a line 332 labeled IZ and coupled to ground provides the noted zero offset input to the multiplexer for zero offset correction in conjunction with current. For the embodiment shown, the requisite speeds or operational rates for the components shown in FIG. 3A are derived employing a synchronous state machine form of control. This control provides a sampling rate such that the phase A (VA) input is sampled for example 360 times followed by the remaining voltage phases and a zero offset measurement at line 328. Selection of phase at multiplexer 326 is represented at line 334. Additionally, the current phases are sampled commencing with phase A (IA) and these current inputs are sampled for example 360 times per cycle. Commencement of the sampling procedure will be seen to be regulated in conjunction with the detection of zero cross-overs of the pertinent sinusoids. Looking momentarily to the counting components of this synchronous state machine, it may be observed that a state counter is provided at 336, the clock input to which is provided, for example, at 5.4 MHz at line 338. Reset from line 340 and having a carry output at line 342, the counter 336 provides a 7-bit output at 7-bit bus 344 which functions to sequentially address an EPROM program memory 346 so as to provide corresponding sequence of 128 instructions at the 16-bit output data bus 348. 
Three sets of these 128 instructions will be seen to be employed, one set as a zero cross routine; one set as a multiply routine and one set as a zero routine. Bus 348 extends, in turn, to a state data expander 350 which functions to provide the requisite number of control output lines, for example about 30 required for exerting control from the synchronous state network. These control outputs are represented at a line grouping represented generally at 352. The carry output, representing 128 events, having been completed, for example, for treating a 1° sample, is directed via line 354 to the clock input of a degree counter 356. Thus, with each clock input, the counter 356 will provide a progressive count presented at 9-bit output bus 358. Counter 356 is reset from lines 340 and 360. Bus 358 transmits the degree count information to a 90 count or 90° decoder 362, a 60° or 60 count decoder 364, and a 540° or 540 count decoder 366. The carry out terminal of 540° counter 366, in turn, is directed via line 368 to the clock input of a phase counter 370. Reset from lines 340 and 372, the phase counter 370 provides outputs corresponding with the completion of a full cycle sampling for each of phases A, B and C via earlier-described line input 334 to the multiplexer 326. Thus, the multiplexer 326 proceeds through the sequence of phases A, B and C for both voltage and current in addition to the earlier-noted zero offset measurement. The phase designated voltage output of phase multiplexer 326 is provided at line 374 for introduction to the input of a variable gain amplifier stage 376. Having a gain control represented at line 378 and an output at line 380, stage 376 provides the ranging input and subsequent scaled data input at line 380 to the input of a voltage analog-to-digital converter (A/D) 382. 
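The counter cascade just described can be modeled arithmetically. This is a hypothetical simplification: the 7-bit state counter (336) steps through 128 instructions per one-degree sample, its carry clocks the degree counter (356), and the 540-count decode clocks the phase counter (370) through phases A, B and C.

```python
# Hypothetical model of the synchronous state machine counter cascade:
# 128 clock ticks per one-degree sample, 540 degree counts per phase,
# three phases. Derives the cascade position from a raw tick count.

def cascade_position(clock_ticks):
    """Return (phase, degree, state) after the given number of 5.4 MHz ticks."""
    state = clock_ticks % 128            # position within the 128-step routine
    degrees = clock_ticks // 128         # completed one-degree samples
    degree = degrees % 540               # degree counter, reset by the 540 decode
    phase = (degrees // 540) % 3         # 0 = A, 1 = B, 2 = C
    return phase, degree, state
```

The 540-count span per phase, rather than 360, reflects the extra samples needed to complete the 90-degree-delayed var and current-aligned VA evaluations described later.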
In similar fashion, the sequence of phases A-C for current samples are provided at line 384 by the phase multiplexer 326 for presentation to the input of a variable gain amplification stage 386. Gain control to stage 386 is represented at line 388 and the output thereof at line 390 is directed to the input of a current analog-to-digital converter (A/D) 392. Converters 382 and 392, respectively, are controlled from line grouping 352 of the state data expander as represented at lines 394 and 396 so as to perform two conversions for each degree of amplitude data. This conversion occurs at a rate adequate to achieve the noted 360 samples per phase cycle. Accordingly, the converters 382 and 392, which provide 12-bit outputs to a corresponding 12-bit bus 398 should carry out a conversion within about 5 microseconds. The converters may be provided, for example, as type AD7572 ADC converters marketed by Analog Devices of Norwood, Massachusetts. Activation of the converters for the commencement of any given operation commences with the carrying out of a zero crossing routine controlled from program memory 346. This 128 step routine is outputted at 16-bit bus 348, expanded at expander 350 and presented from line grouping 352 as controls to converters 382 and 392 via respective lines 394 and 396. Thus sampling occurs during this routine specifically with respect to the voltage inputs for phase A, conversions thereof at zero gain input from lines 378 being presented to 12-bit bus 398. Bus 398, in turn, extends to a zero cross-over detector network 400 which responds to the noted sign bits output of converter 382 to detect a change of polarity and thus a zero cross-over for any given sinusoid. Detector 400 is enabled for this search from line 399 extending from State Data Expander 350. 
When this cross-over or other suitable commencement location for voltage is detected, an output is provided at lines 402 and 404 which is directed, inter alia, to the program memory 346 to cause it to enter into a multiply routine which is another 128 steps in extent. At the commencement of this multiply routine, a range latch 406 is controlled via line grouping 352 of the synchronous state machine as represented at line 408 to provide a zero gain output control via lines 378 and 388 to respective gain stages 376 and 386. Stage 376 then provides a sample input for the first degree of sampling at line 380 which is converted by converter 382 to a 12-bit range digital value at bus 398 representing 11 bits of range data plus a sign bit. This information is provided simultaneously to the range latch 406 via bus 398 as well as to a range address latch 408. Latch 408 is controlled by the synchronous state machine from line grouping 352 as at line 410. Responding to the range data presented from bus 398, range latch 406 then adjusts the voltage gain at amplification stage 376 in accordance therewith. Thus, where the ranging value is higher, the gain is correspondingly set lower. Generally, eleven values of gain are provided representing eleven amplitude scaling regions. Voltage converter 382 then provides a data conversion which is presented at bus 398 to a bus driver 412 carrying the 11 bits of data digital values and sign bit. Controlled under the multiply routine from program memory 346 via line grouping 352 and specifically line 414, the bus driver responds at an appropriate time to present the 12 bits of data digital values to 12 of the 24 bits of bus 416. 24-bit bus 416 extends to a 24×24 bit multiplier 418. 
Controlled by the synchronous state machine from output line grouping 352 and specifically as represented at line 420, the multiplier 418 preferably is a high speed, low power 24×24-bit parallel multiplier fabricated in 1.5 micron CMOS and marketed as a type ADSP-1024A by Analog Devices, Norwood, Massachusetts. The ranging data as presented to range address latch 408 are employed via 8-bit bus 422 to address a random access memory (RAM) 424 to find one of 11 scaling factors corresponding with the range data procured from converter 382. Upon being accessed from memory, this scale multiplier then is submitted via 24-bit bus 416 to the multiplier 418 for multiplication with the second conversion data digital values. The resultant accurate evaluation of voltage expanded data for the 1° sample at hand is returned via bus 416 to memory 424 for temporary storage. Very shortly (1 microsecond) following the sampling of data for 1° of voltage at line 374, current is sampled from line 384, being directed through variable gain amplifier 386 at a zero level of gain to be presented via line 390 to converter 392 to provide a range conversion. As before, this range digital value, present as 12 bits including one sign bit, is submitted both to the range latch 406 and to the range address latch 408. At range latch 406, the data are employed to select an appropriate gain value of 11 levels for adjusting the gain at variable gain amplifier 386. With lower amplitudes, higher gain values are asserted. A next conversion by this amplified value at line 390 then is undertaken by converter 392 and presented at bus 398 as 12 bits of data digital values including a sign bit and is directed to bus driver 412, whereupon it is presented to multiplier 418 via 24-bit bus 416. 
Correspondingly, the range information supplied to the range address latch 408 is submitted via 8-bit bus 422 to a look-up table in RAM memory 424 to provide an appropriate multiplier or scaling factor corresponding with the scale level determination for submittal to multiplier 418. The resultant product for the 1° current sample will have as many as 21 significant bits plus a sign bit. Thus with very accurate digital representations for a sampled 1° of voltage and the corresponding sampled 1° of current of the phase A cycle detected, the synchronous state machine then proceeds to carry out necessary multiplications. As a first aspect of this procedure, the accurate voltage data now retained in RAM 424 are submitted via bus 416, bus driver 426 (FIG. 3B) and 24-bit bus component 428 to the data input of a digital signal processor (DSP) 430. During this interval of time, the 24-bit bus at 416 is under the control of the synchronous-state machine and thus, the DSP 430 is caused to respond to the asserted data via a data ready signal from the synchronous state machine developed from line grouping 352 and specifically shown presented via line 432. Processor 430 may, for example, be provided as a type TMS 32010 Digital Signal Processor which is a 16/32-bit single-chip microcomputer combining the flexibility of a high speed controller with the numerical capability of an array processor. The device offers an alternative to multi-chip bit/processors and is marketed by Texas Instruments, Inc., Houston, Texas. Device 430 functions with the synchronous state machine to provide the earlier-described 5.4 MHz clock output at line 338 and performs in conjunction with a program retained in a programmable read only memory (EPROM) 434. Device 434 carries the program control for DSP 430 and is shown coupled with 24-bit bus 428 as well as with a 16-bit address bus 436 in common with DSP 430. 
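The two-conversion auto-ranging scheme described above can be sketched as follows. Binary gain steps are an assumption made here for concreteness; the patent states only that eleven gain levels and eleven corresponding scale factors are provided.

```python
# Hedged sketch of the two-conversion auto-ranging scheme: a first conversion
# at unity gain picks one of eleven scaling regions, a second conversion is
# taken at the selected gain, and the matching scale factor (the RAM 424
# look-up) restores true magnitude. Power-of-two gains are assumed.

GAINS = [2 ** n for n in range(11)]      # assumed eleven gain levels
FULL_SCALE = 2047                        # 11 data bits plus sign

def select_range(raw):
    """Highest gain index that keeps the amplified sample within full scale."""
    for idx in reversed(range(11)):
        if abs(raw) * GAINS[idx] <= FULL_SCALE:
            return idx
    return 0

def expanded_value(sample):
    idx = select_range(sample)           # first (ranging) conversion
    amplified = sample * GAINS[idx]      # second conversion at the chosen gain
    return amplified / GAINS[idx]        # scale factor restores true magnitude
```

In floating point the round trip is trivial, but in the hardware the amplification before conversion is what preserves resolution: a small sample occupies the converter's full 12-bit span, and the stored scale factor recovers its true magnitude with up to 21 significant bits.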
To provide further control, the 16-bit bus 436 is shown extending at 438 to a control expander 440 to provide a control input via line 442 to memory 434. Line 444 is the DSP 430 control to expander 440. Control additionally is asserted via line 446 to a bus driver 448 representing an interface with 8-bit bus 422. Line 450 is shown extending to control the bus driver 426, while adjacent line 452 represents a control input to RAM memory 424 through OR gate 454 and line 456 to provide control thereover during those minor portions of the cycle wherein the DSP 430 has control over the major bus structure. During such control, for example at power up, DSP 430 functions to convey the earlier described data used for retaining magnitude/phase correction values developed during calibration to RAM 424 from processor 28 (FIG. 1). The opposite input to gate 454 emanates from line 458 representing a control from the line grouping 352 of the synchronous state machine. Finally, a control represented by line 460 extends from the control expander 440 to an output latching function represented at 462. This latching function 462 is coupled with 24-bit bus 428 and functions to develop the six channels of selected output described in conjunction with FIG. 1 at 22 and represented herein by the same numeration. Serial communication with the electronic register 26, shown in FIG. 1 at line 34, is represented with the same general numeration in FIG. 3B at lines 464-466. Lines 464-466 carry, respectively, reset data, received serial data and transmitted serial data. Returning to FIG. 3A, following the submission of voltage values as multiplied by the gain and phase correction value, the corrected voltage value is retained in RAM 424 and a DATA READY signal is submitted via line 432 to DSP 430 for submittal of that information thereto. 
The synchronous state machine then recalls the corrected voltage data from RAM 424 and again submits it twice to the multiplier 418 for developing a voltage^2 value. Again, the DATA READY signal is provided as represented at line 432 to DSP 430 for submitting the voltage^2 data thereto. The voltage data again are read from RAM 424 and are multiplied by the then-available current data to provide a watt valuation for the sampled degree. Accordingly, a DATA READY signal again is provided at line 432 to DSP 430 such that it might receive this information. The address to RAM 424 for submitting the corrected voltage data to multiplication in developing this watt value is developed from a watt degree counter 468 having an output coupled with bus 422 leading, in turn, to RAM 424. Watt degree counter 468 develops a succession of 360 addresses to RAM 424 in correspondence with a clock input thereto for each degree developed at line 342 and extending to the counter via line 470. Simultaneously with the commencement of the first address from the counter 468 to RAM 424, a 360 degree decoder function represented at block 472 is activated from line 473 of grouping 352 for a watt monitoring function under control from bus 422. At the termination of 360 degrees of watt evaluation, the watt degree counter 468 will be reset from decoder function 472 as represented at line 474. Clock enablement to the watt degree counter 468 is provided from line grouping 352 and specifically represented at line 476, while output enablement of the address devised by the counter 468 is provided from the same line grouping as represented at line 478. Line 476 extends from line 402 and the voltage zero crossing from zero cross detector 400. Thus, the watt degree counter is initially activated from this zero crossing or other suitable commencement location. The determination of Q valuations for each sample is determined with respect to a delay representing a phase difference of 60°. 
Accordingly, Q determinations are not made until 60 samples have been developed. To provide this feature, a Q degree counter 480 is provided which, for 60 samples, provides an address output at bus 422 serving to assert a zero voltage valuation from RAM 424 to the multiplication function 418. Thus, for those first 60 samples, the Q valuation will be zero. However, upon the 60th sample, the Q counter then functions to submit the corrected voltage valuations from the single degree sampling in sequence by addressing RAM 424. These values then are multiplied at multiplier 418 by the then instantaneous valuation for current to provide a Q valuation. At the commencement of counting following the 60° lag, 360° decoder 472 commences to count through 360° and to provide a reset to the Q degree counter 480 as represented at line 482 to determine the end of a Q evaluation. Counter 480 is enabled from the line grouping 352 as specifically represented at line 484 by assertion of a clock enable signal thereto and its output is enabled as above discussed from the earlier-described 60° decoder 364 via line 486. One degree clocking to the counter 480 is provided from earlier-described line 342 through line 488. Var valuations are characterized by a 90° phase variation. Thus, a var degree counter 490 is provided which functions to address the RAM 424 via bus 422 to output a zero voltage value for the first 90 samples or 90°. A determination of the 90th degree is provided by the earlier-described 90 degree decoder 362 and the information corresponding thereto is provided at line 492 for assertion at the clock enable input of counter 490. The output enable for counter 490 is provided from line grouping 352 as represented at line 494 while the clock input thereto derives from earlier-described line 342 and line 496. 
As before, counter 490 further is monitored by the 360° decoder function 472 such that upon the 91st sample or degree, 360 samples are decoded following which the counter 490 is reset by an input from decoder 472 as represented at line 496. As noted earlier, the development of a volt ampere (VA) quantity requires, in effect, a coincidence of the envelopes of the voltage and current sinusoids for a given cycle. Accordingly, a VA degree counter is provided at 498 which is activated at its clock enable input by a zero crossing of the current signal as detected by detector 400 or suitable commencement location corresponding with that of the voltage signal, and asserted to the counter from line 500. Counter 498 is clocked from earlier-described line 342 and its output is enabled from the synchronous state machine line grouping 352 as specifically represented at line 502. The counter 498 is monitored by the 360 degree decoder function 472 such that it is reset following 360 degrees of counting and addressing memory 424 via line 504. All of the above determinations are made throughout a span of 540 degrees or 540 clock counts. Accordingly, at the termination of devising volts, volt^2, watt, Q, var and VA values, the 540 degree decoder 366 provides a clock input to phase counter 370 via line 368. Counter 370 then provides an output via lines 334 to phase multiplexer 326 to commence with the evaluation of phase B of the input. Control with respect to recommencing a search for a zero crossing of the voltage B phase is provided to function 400 as represented at line 506 and earlier-noted enablement line 399. Following an initial zero cross routine, the synchronous state machine essentially repeats the above detailed procedure through each of the phases A, B and C of the input. It then enters a zero or offset determining routine wherein the sample inputs essentially are brought to a zero value and introduced to the phase multiplexer 326 as applied from lines 328 and 332. 
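The delayed-voltage multiplications underlying the Q and var determinations above reduce to a simple pattern: for the first `delay` samples the degree counter addresses a zero voltage from RAM, after which current is multiplied by voltage delayed 60 samples (Q) or 90 samples (var). A sketch, with illustrative names:

```python
# Sketch of the delayed-voltage multiplications behind the Q and var
# determinations: zero is asserted for the first `delay` samples, then each
# current sample is multiplied by the voltage sample `delay` degrees earlier.

def delayed_products(volts, amps, delay):
    """Per-degree products of current with voltage delayed by `delay` samples."""
    return [
        (volts[n - delay] if n >= delay else 0.0) * amps[n]
        for n in range(len(amps))
    ]

# q_samples   = delayed_products(volts, amps, 60)
# var_samples = delayed_products(volts, amps, 90)
```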
Under the control of the synchronous state machine via line grouping 352, the device carries out an initial ranging input for voltage and current through respective amplifiers 376 and 386 following which, range information is supplied to the latch 406 as well as to the address latch 408. Latch 406 then adjusts the gains of amplifiers 376 and 386 in accordance with the evaluated range and data sampling then takes place for the first of 360 samples. The range codes in RAM 424 then are multiplied with the data to provide an enhanced voltage or current evaluation and, for each of the 360 samples, the DSP 430 is interrupted by a DATA READY signal at line 432 such that zero interrupt data may be provided. These data are stored in onboard random access memory. At the termination of this zero routine on the part of the synchronous state machine, then an end of program signal is developed at line 508 extending from line grouping 352 of the state data expander 350. Line 508 is seen to extend to earlier-described line 340 which functions to reset the counters 336, 356 and 370 as well as to provide an end of program input pulse to the DSP 430. The synchronous state machine then enters the noted zero cross routine for commencing a next three phase and offset or zero setting evaluation. Referring to FIGS. 4A-4C, a program flow chart for the synchronous state machine components as discussed in conjunction with FIG. 3A is set forth. This program also may be employed to operate a high speed general purpose digital signal processor as an alternative to the synchronous state machine approach described above. In general, the synchronous state components as described in conjunction with FIG. 3A operate in relative independence from the DSP 430 driven processing described in conjunction with FIG. 3B. A communication between these two functions occurs when the synchronous state machine indicates a DATA READY condition as described in FIG. 3A in conjunction with line 432. 
The DSP 430 functions to load calibration constants into RAM 424, whereupon the synchronous state machine is permitted to perform. At such time, the synchronous state machine is stopped so that the DSP 430 can take control of the data bus components 426-428. FIG. 4A shows a zero cross routine region represented by vertical line 520. This initial portion of the routine is shown commencing at line 522 leading to the instructions represented at block 524. At this position, the synchronous state machine waits for a stop command of the DSP 430. In the event such a stop occurs for calibration constant loading into RAM 424, then at some point dictated by the DSP 430, the instant program recommences, as represented at line 526 and block 528 to carry out voltage conversion, as described in conjunction with phase A and analog-to-digital converter 382. This voltage conversion continues until such time as the zero cross detector network 400 detects a voltage phase A zero cross-over. Thus, the program proceeds as represented at line 530 and block 532 to provide the query as to whether a volt zero crossing has occurred. If it has not, then as represented by loop line 534, the program waits until such volt zero crossing has occurred. Where such crossing does occur as detected by the detector network 400, then as represented at line 536 the zero cross routine is exited and a multiply routine commences. The extent of this multiply routine is represented in the figures by vertical line 538. Line 536 is seen to lead to the instructions at block 540. At this position, the volt and current range conversion is carried out by respective converters 382 and 392 to determine the scaling or range digital values as 12 bit outputs, including a sign bit. Accordingly, as represented at line 542 and block 544, these range values are stored, for example, in range latch 406 and range address latch 408. From range address latch 408, scaling factors are addressed and accessed from RAM 424. 
Upon completing such storage, as represented at line 546 and block 548, the A/D converters are ranged by applying appropriate amplification gain input to amplifiers 376 and 386. Upon completion of ranging, as represented at line 550 and block 552, volt and current data conversions are carried out to provide 12 bits of data from each converter, the latter incorporating a sign bit. Following the conversion of volt and current data, as represented by line 554 and block 556, the range code or scaling factor is multiplied by the subsequently obtained amplitude (digital) values to derive an accurate, expanded voltage data valuation for the degree being sampled, which, for example, may have an extent as high as 21 significant bits. Then, as represented by line 558 and block 560 the resultant voltage data are multiplied by a gain and phase correction value again at the multiplier 418. Such values were inserted as calibration constants in RAM 424. The program then proceeds, as represented at line 562 and block 564, wherein the gain and phase corrected voltage data are stored in RAM 424 and DSP 430 is interrupted with a DATA READY signal as described at line 432. Volt data then are made available for processing by DSP 430 in the manner thus far described in connection with FIG. 2B. Following such volt data submission, as represented at line 566 and block 568 the corrected volt data are withdrawn from RAM 424 and multiplied in squaring fashion at multiplier 418 to derive volt^2 data. Such data are submitted to the DSP 430 in conjunction with a DATA READY signal as provided from line 432. Thus, at this juncture, the DSP 430 is carrying out development of volthour data and volt^2 hour data. As represented at line 570 and block 572, the current range code is drawn from RAM 424 and multiplied with the second data conversion for current as provided from converter 392 to develop expanded current value data of high accuracy having a possible extent of 21 bits. 
Line 574 then shows the program leading to instructions for reading volt data from RAM 424 and multiplying it by the noted current data to provide watt data as shown at block 576. These watt data then are submitted to the DSP 430 in conjunction with a DATA READY signal from line 432. Line 578 shows the multiply routine then leading to the instructions of block 580 providing for the reading of volt data and multiplying it by current data under the conditions asserted by the var degree counter 490, providing for a 90 degree or sampling step delay. At the conclusion of the determination of var data, the DSP 430 is interrupted with a DATA READY signal from line 432 and the var data are read into it for the instant one degree sample. Line 582 shows the multiply routine continuing to the instructions at block 584 for developing Q data as a multiplication of current data by volt data delayed by 60 degrees or sampling steps to achieve a Q data valuation. These Q data are read into DSP 430 in conjunction with a DATA READY signal 432 and the multiply routine continues as represented at line 586. Line 586 leads to the instructions at block 588 providing for the reading of volt data and multiplying it by current data which, as described above, are developed only following the detection of a current zero cross-over by network 400. Upon completion of the multiplication, DSP 430 is provided these VA data in conjunction with a DATA READY signal at line 432 and the program continues as represented at line 590. Line 590 leads to a query as to whether 540 degrees have been sampled as represented at block 592. In the event of a negative determination, then the given phase of phases A, B or C has not been fully sampled 540 times and, as represented by loop line 594, the program returns to line 536 to await completion of the computation of all electrical parameters for a given phase full cycle of 360°. 
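The per-degree multiply routine traced above can be consolidated into a single hypothetical function. The VA term here is simplified to |v|·|i|; the patent instead re-addresses the voltage envelope from the detected current zero crossing.

```python
# Hypothetical consolidation of the multiply routine: for each one-degree
# sample the machine hands the DSP volt, volt^2, watt, var (90-sample delay),
# Q (60-sample delay) and VA data. The VA term is a simplification; the patent
# aligns the voltage envelope to the current zero crossing instead.

def degree_quantities(volts, amps, n):
    v, i = volts[n], amps[n]
    return {
        "volt":  v,
        "volt2": v * v,
        "watt":  v * i,
        "var":   (volts[n - 90] if n >= 90 else 0.0) * i,
        "q":     (volts[n - 60] if n >= 60 else 0.0) * i,
        "va":    abs(v) * abs(i),   # simplification; see lead-in
    }
```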
In the event the determination of the inquiry at block 592 is in the affirmative, then as represented at line 596 and block 598, a determination is made as to whether three phases, A, B, and C have been evaluated to the extent of a full 540° cycle each. If that is not the case, then as represented by loop line 600 the program returns to line 522 (FIG. 4A) to again carry out the zero cross and multiply routines. Where the inquiry at block 598 is in the affirmative, then as represented at line 602, the program enters the zero routine within the flow diagram region represented by vertical line 604. Line 602 leads to the instructions at block 606 wherein a watchdog synchronizing register is set. DSP 430 responds to this register to effect a synchronization by self-adjustment and the zero routine continues as represented at line 608 and block 610. Generally, the zero routine repeats the procedural steps which are carried out in performing a watt calculation. In this regard, block 610 shows that voltage and current range conversions are carried out at respective analog-to-digital converters 382 and 392. Thus, the scaling range data are obtained and, as represented at line 612 and block 614 the range codes are stored, following which, as represented at line 616 and block 618, the A-to-D converters 382 and 392 are appropriately ranged in consonance with the determined range codes by appropriate gain adjustments of respective amplifiers 376 and 386. Voltage and current data conversion then are carried out as represented at line 620 and block 622, whereupon, as represented at line 624 and block 626 the range code or scaling factors accessed from RAM 424 are multiplied by the data to achieve an accurate representation for the zeroing determination, which may have as many as 21 bits of voltage data. Similarly, as represented at line 628 and block 630 the same multiplication approach is carried out with respect to current data. 
The zero routine then continues as represented at line 632 and block 634 to provide for the multiplication of volt data by current data, following which the DSP 430 is interrupted with a DATA READY signal at line 432 to provide for storage of the zero offset data. As these data are stored, as represented at line 636, block 638 and line 640 an end-of-program enablement signal is passed to DSP 430 via lines 508 and 340. It may be recalled in conjunction with FIG. 3A that this signal also functions to reset the state counter 336, the degree counter 356, and the phase counter 370. The synchronous state machine then continues to repeat the program as represented by line 640 extending to line 522 (FIG. 4A). Looking to FIGS. 5A-5G, a flow chart is revealed representing the program for the digital signal processor 430 as retained in memory 434. These figures should be considered in a mutual vertical orientation in the order of their alphabetical suffixes. As represented by the vertical region lines 650-653, the instant program is comprised of four component parts, a communications routine at 650, a read zero data routine at 651, a read six measured quantity data routine 652, and a process measured quantities and outputs routine 653. FIG. 5A shows entry of the program with the communications routine 650 as commencing with a reset input from the electronic register 26 (FIG. 1) as asserted as described in FIG. 3B at line 464. This reset is shown entering the program at line 656 and is seen to function to provide a stop synchronous state machine instruction at block 658. The latter command shows a position in the program where the synchronous state machine can be halted such that control of the bus components falls under DSP 430. This stop command corresponds with the wait for stop instruction at block 524 in FIG. 4A. 
Once control of the bus components is established by the DSP 430, then as represented at line 660 and block 662, serial communication is established with the processor register 26 (FIG. 1) and magnitude and phase correction values are loaded by DSP 430 into the RAM 424 of the synchronous-state machine. It may be recalled that these correctional data are maintained in the register 26 on a non-volatile basis due to the use of an EPROM 32. Following the loading of requisite constants into RAM 424, as represented at line 664 and block 666, the synchronous state machine is released and, as set forth in FIG. 4A at block 528, voltage conversion activities ensue and the synchronous state machine proceeds to block 606 at the commencement of its zero routine to set a watchdog synchronizing register. The DSP 430 awaits this position in the program as represented at line 668 and block 670 wherein a query is made as to whether the watchdog synchronizing register has been set. In the event it has not been set, then as represented at loop line 672, the instant program awaits such activity prior to entering a read zero data routine. By so operating the synchronous state machine and ignoring the output, the machine in effect is cleared of spurious data and the like to assure accuracy at such time as viable readings commence to be taken. When the watchdog register is set as represented at block 606 in FIG. 4C, the synchronous state machine program commences its zero routine represented at vertical line 604 while, simultaneously, the instant program commences to read the outputs of that routine. Thus, with an affirmative determination at block 670, as represented at line 672 and block 674, the program enables its interrupt for purposes of processing zero data as available and, in the meantime, as represented at line 676 and block 678 any output routines are serviced. 
However, when zero data are ready, the DSP 430 is interrupted with a DATA READY input as described in connection with line 432. Such an interrupt is shown at line 680 leading to block 682 providing for the reading of zero data as provided in conjunction with block 634 (FIG. 4C). Upon reading such zero data, as represented at line 684 and block 686, the interrupt register is set and, as represented at line 688, the interrupt routine returns. Prior to the interrupt, the service output routine as represented at block 678 continues as represented at line 690 and block 692 until such time as the above-discussed interrupt register is set. Until such time, as represented at line 694, the service routine loops awaiting the interrupt. Following the setting of the interrupt register, as represented at line 696 and block 698, the program changes the interrupt vector as it enters the read six measured quantity data routine. During this routine, the data comprised of volts, volts^2, watt, Q, var and volt amperes (VA) are read by DSP 430. Accordingly, as each of these interrupts occurs as described at blocks 564, 568, 576, 580, 584 and 588 in FIGS. 4A-4B, the measured quantity of data is read as represented at lines 700 and 702 leading to block 704 describing the reading of measured quantity data followed, as represented at line 706 and block 708, by the incrementing of the interrupt counter. Following such incrementation, as represented at line 710 and block 712, the interrupt is enabled and the routine returns as represented at line 714. Line 700 is shown leading to block 716 which functions to determine whether or not six interrupts have been received. In the event they have not, then as represented at loop line 718, the program dwells until such sixth interrupt occurs. 
Upon the occurrence of the sixth interrupt, as represented at line 720 and block 722 the program enters the processing of measured quantity output routine having now completed the reading of computed quantities. Looking momentarily to FIG. 6, a time representation of the activities of the synchronous state machine (SSM) with respect to the digital signal processor (DSP) 430 is represented. When sampling a 60 Hz signal, each degree or sample will persist for 46.3 microseconds as labelled above time line 724 in the figure. Above line 724 as labelled "SSM", as an example, a sequence of degrees, ranging from degree 9 through degree 12, is depicted. It may be observed that about one-half of the elapsed time of the sample degree interval will be taken up with the earlier-described sampling procedures as labelled "S". The remaining portion of the given degree interval will be involved with the earlier-described multiplication procedures of the SSM. DSP 430 commences the above-described reading procedure as represented at vertical line region 652 for about a period of time corresponding with the multiplication procedures of the same degree under analysis as labelled "RD". There then ensues the instant processing routine for the remainder of that sampled degree. Returning to FIG. 5B, block 722 shows that the processing procedure carried out by DSP 430 commences with the adding of zero data to watt data following which, as represented at line 726 and block 728, a determination is made as to whether the sign bit of the watt valuation is 1 or 0. In the event that it is a 1, then as represented at line 730 and block 732 a watt polarity register is incremented and the program continues as represented at line 734. On the other hand, in the event the sign bit determination at block 728 shows a zero value, then as represented at line 736 and block 738 a decrementation of the watt polarity register is made and the program continues as represented at line 734. 
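The 46.3 microsecond figure quoted for FIG. 6 follows from simple arithmetic on the sampling scheme; a quick check, assuming (as the text states) one sample per electrical degree of a 60 Hz signal:

```python
# Duration of one electrical degree when sampling a 60 Hz line signal,
# one sample per degree as described for the synchronous state machine.
line_freq_hz = 60.0
degrees_per_cycle = 360

cycle_period_s = 1.0 / line_freq_hz                 # 16.667 ms per full cycle
degree_interval_s = cycle_period_s / degrees_per_cycle

print(f"{degree_interval_s * 1e6:.1f} microseconds per degree")  # → 46.3
```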
Generally, three register functions will be seen to be involved in the program, one serving to evolve the history of polarity as described in conjunction with blocks 732 and 738; a second register function accumulating data values, one for plus and one for minus in that regard; and a third register function that provides the earlier-described pulse forming overflow accounting. Line 734 is seen extending to the input of block 740 wherein a query is made as to whether the watt polarity register is less than zero. In effect, this register is an up/down counter such that, as it is incremented or decremented, it moves about a neutral zero value in either a positive or negative direction. A positive direction is one considered to be a history of 360 samples indicating that the power flow is "out" in the accepted commercial sense, while a corresponding history representing a negative valuation is considered a power flow in the "in" convention. Thus, where the query at block 740 shows that the polarity is not less than zero, then as represented at line 742 and block 744 the watt data are added to the watthour out output register and the program proceeds as represented at line 746. On the other hand, an affirmative response to the query at block 740 provides, as represented at line 748 and block 750, that the watt data are added to the watthour in output register and the program proceeds as represented at line 746. Line 746 then is seen leading to block 752 representing a servicing of the watthour out output routine for developing the earlier-noted pulsed output quantities. This is carried out by adding the data to an output register and, when that compiled data are above a predetermined threshold, pulses are outputted corresponding with watthours. The routine then proceeds as represented at line 752 and block 756 to determine whether or not the watthour in output function has been enabled. 
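The polarity-history mechanism just described can be sketched in a few lines. This is a loose illustration, not the patent's firmware: the sign conventions are simplified, and the class and field names are invented for the sketch.

```python
# Loose sketch of the polarity-history idea: an up/down counter
# integrates the sign of successive watt samples so that brief sign
# flips do not toggle the energy direction, and the sign of that
# history (not of the instantaneous sample) selects the "out" or "in"
# accumulator register (cf. blocks 728-750).
class WattAccumulator:
    def __init__(self):
        self.polarity = 0        # up/down counter, moves about zero
        self.watthour_out = 0.0  # accumulated energy, "out" convention
        self.watthour_in = 0.0   # accumulated energy, "in" convention

    def add_sample(self, watt):
        # History update (sign convention simplified for the sketch)
        self.polarity += 1 if watt >= 0 else -1
        # Routing by the history, not the sample sign (cf. block 740)
        if self.polarity >= 0:
            self.watthour_out += watt
        else:
            self.watthour_in += watt

acc = WattAccumulator()
for w in [5.0, 4.0, -0.5, 6.0]:   # one negative blip amid forward flow
    acc.add_sample(w)
print(acc.watthour_out, acc.watthour_in)   # → 14.5 0.0
```

Note how the single negative sample is still booked against the "out" register, because the history remains positive.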
Such enablement will be at the election of the user, a total of six output channels being elected with the instant circuit architecture. In the event watthours have been enabled, then as represented at line 758 and block 760 the watthour in output routine is serviced as described in conjunction with watthour out at block 752. The routine then proceeds as represented at line 762. In the event of a negative determination at block 756, then as represented at line 764, this servicing is ignored and the routine progresses to the inquiry at block 766 for a determination as to whether the Qhour out feature has been enabled. In the event that it has, then as represented at line 768 and block 770, the zero offset data is added to the Q data and, as represented at line 772 and block 774 a determination is made as to whether the watt polarity register is less than zero. As before, this provides a historical determination as to the direction of power flow. In the event the determination at block 774 shows the polarity to be less than zero, then as represented at line 776 and block 778, the Q data are added to the Qhour out output register and the program proceeds as at line 780. On the other hand, where the inquiry at block 774 indicates that the polarity is greater than zero, then as represented at line 782 and block 784, the Q data are added to the Q hour in output register and the routine proceeds as at line 780 to the instructions at block 786. These instructions provide for the servicing of the Qhour out output routine in the manner described in conjunction with block 752 above. The routine then continues as represented at line 788 to the inquiry at block 790 wherein a determination is made as to whether the Qhour in feature has been enabled at the behest of the user. In the event that it has, then as represented at line 792 and block 794 the Qhour in output routine is serviced as above-described and the program proceeds as at line 796. 
In the event the determination at block 790 is in the negative, then as represented at line 798 the routine proceeds to the inquiry at block 800. Returning to the inquiry at block 766, in the event the Qhour out feature of the system is not elected by the user, then as represented at line 802 the routine skips to the input to the instantly considered inquiry at block 800 determining whether or not a varhour lag out feature has been enabled in conjunction with election by the user. In the event that it has, then as represented at line 804 and block 806 zero offset correction data are added to the var data and the routine continues as represented at line 808 to the inquiry at block 810. Thus, four quadrant varhour metering procedures are undertaken. In this regard, the inquiry at block 810 determines whether the var sign bit is a 1 or a 0. In the event it is a 1, then as represented at line 812 and block 814, the var polarity register is incremented and the program proceeds as represented at line 816. On the other hand, where the var sign bit is a zero, then as represented at line 818 and block 820, the var polarity register is decremented and the program proceeds via line 816 to the inquiry represented at block 822. At block 822, a determination is made as to whether the var polarity register is valued below zero. In the event that it is not, then as represented at line 824 and block 826, power flow is determined with respect to the condition of the watt polarity register. Where that condition is not less than zero, then as represented at line 828 and block 830, the var data are added to the varhour lag out output register and the routine continues as represented at line 832. Where the determination at block 826 is in the affirmative, then as represented at line 834 and block 836, the var data are added to the varhour lead in output register and the routine continues as at line 832. 
Returning to block 822, where the var polarity register is less than zero, then as represented at line 838 and block 840, a determination is made again as to whether the watt polarity register is less than zero. In the event of a negative determination, as represented at line 842 and block 844, the var data are added to the varhour lead out output register and the routine continues as at line 832. On the other hand, where the determination at block 840 shows an affirmative determination, then as represented at line 846 and block 848, the var data are added to the varhour lag in output register and the routine continues as at line 832. Line 832 is seen then leading to block 850 representing a servicing of the varhour lag out output routine, that parameter having been determined to be enabled earlier in conjunction with block 800. As before, this servicing involves the determination as to whether quantity numerical values have reached a threshold value in register so as to evolve a pulse output representing a time based integration. The routine then continues as represented at line 852 and block 854 to a determination as to whether varhour lead out has been enabled. In the event that it has, then as indicated at line 856 and block 858, the varhour lead out output routine is serviced in the manner described in conjunction with block 850. The routine then continues as represented at line 860. In the event the determination at block 854 is in the negative, then as represented by line 862 the program continues to the determination at block 864. At the latter block, a determination is made as to whether the varhour lag in feature has been enabled in consonance with the desires of the user. In the event that it has, then as represented at line 866 and block 868, the varhour lag in output routine is serviced as described in the above service procedures. The routine then proceeds as represented at line 870. 
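Taken together, the nested polarity tests at blocks 822, 826 and 840 amount to a four-quadrant classification: the signs of the watt and var polarity histories jointly select one of the four varhour registers. The sign conventions in the text are ambiguous in places, so the sketch below fixes one consistent reading; the function name and the boolean conventions are assumptions, not the patent's.

```python
# One consistent reading of the four-quadrant var routing: the two
# polarity histories (each an up/down counter, "negative" meaning a
# register value below zero) select among the four varhour registers.
# Register names follow the text; the exact quadrant assignment is an
# assumption chosen to mirror the branch structure symmetrically.
def var_register(watt_polarity, var_polarity):
    if var_polarity >= 0:                # block 822: not below zero
        if watt_polarity >= 0:
            return "varhour_lag_out"     # forward flow, lagging vars
        return "varhour_lead_in"         # reverse flow, leading vars
    else:                                # var history below zero
        if watt_polarity >= 0:
            return "varhour_lead_out"    # forward flow, leading vars
        return "varhour_lag_in"          # reverse flow, lagging vars

for wp, vp in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
    print(wp, vp, var_register(wp, vp))
```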
Where the determination at block 864 is in the negative, then as represented by line 872, the routine continues to the determination at block 874 querying whether the varhour lead in feature has been enabled in consequence of the user requirement. Where that is the case, then as represented at line 876 and block 888, the varhour lead in output routine is serviced to provide a pulse categorized data output and the routine continues as at line 890. Where the determination at block 874 is in the negative, then as represented at line 892, the routine continues without servicing procedures. Returning to FIG. 5D, where the determination at block 800 is that varhour lag out features have not been enabled, then it is the design of the program that the user will not have requested further var data. Accordingly, as represented at line 894, the routine jumps to line 890 and the next inquiry at block 896 wherein a determination as to whether VAhour out has been enabled is made. Where that is the case, then as represented at line 898 and block 900, the zero offset data are added to VA data and the routine continues as represented by line 902 to the inquiry represented at block 904. At this position, a determination is made as to whether the watt polarity register is less than zero. In the event it is not, then as represented at line 906 and block 908, the VA data are added to the VAhour out output register and the routine proceeds as indicated at line 910. 
On the other hand, where the watt polarity register indicates a value less than zero representing a power flow towards the utility, then as represented at line 912 and block 914, the VA data are added to the VAhour in output register and the routine proceeds as at line 910 to the instructions at block 916 providing for the servicing of the VAhour out output routine in the above-discussed manner wherein values are added to a cumulative register and the overflow above a given threshold therein is employed to produce a pulse categorized output. The routine then proceeds as represented at line 918 to the inquiry at block 920 wherein a determination is made as to whether the VAhour in output category has been elected by the user. In the event that it has, then as represented at line 922 and block 924, the VAhour in output routine is serviced in the manner above-discussed and the routine continues as at line 926. Where the determination is in the negative at block 920, then as represented at line 928, the routine continues to a determination as represented at block 930 concerning volthours. Returning momentarily to FIG. 5E, it may be observed that where the determination at block 896 has been made that the VAhour out parameter has not been enabled, then as represented at line 932 the routine skips to the volthour determination routine commencing with block 930 querying as to whether volthour parameters have been elected by the user as by enablement in the program. Where that is the case, then as represented at line 934 and block 936, the volt data are added to the volthour output register and, as represented at line 938 and block 940 the volthour output routine is serviced to generate a pulse categorized output signal and the routine continues as at line 942. 
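Each "service ... output routine" step (blocks 752, 850, 916 and the like) follows the same accumulate-and-threshold pattern: quantity data build up in a register, and each time the total crosses a threshold one pulse is emitted and that threshold's worth of accumulation is retired. A hedged sketch, with an illustrative threshold and invented names:

```python
# Sketch of the pulse-output servicing pattern: the pulse count is a
# time-based integration of the quantity, one pulse per threshold's
# worth of accumulated data (e.g., one pulse per watthour). The
# threshold value and units here are illustrative, not the patent's.
class PulseOutput:
    def __init__(self, threshold):
        self.threshold = threshold
        self.register = 0.0
        self.pulses = 0

    def service(self, data):
        self.register += data
        while self.register >= self.threshold:
            self.register -= self.threshold   # retire one threshold's worth
            self.pulses += 1                  # emit one output pulse

ch = PulseOutput(threshold=10.0)
for sample in [4.0, 4.0, 4.0, 4.0, 4.0]:      # 20 units accumulated in all
    ch.service(sample)
print(ch.pulses, ch.register)   # → 2 0.0
```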
Where the determination at block 930 is in the negative, then the routine jumps as represented by line 944 to line 942 and the query at block 946 determining whether the volt^2 hour parameter has been enabled in accordance with user desire. Where it has, then as represented at line 948 and block 950 the volt^2 data are added to the volt^2 hour output register and, as represented at line 954 and block 955 the volt^2 hour output routine is serviced to generate a pulse categorized output for the register 26. The routine then continues as represented at line 956. Where the determination at block 946 is in the negative, then as represented at line 958, the program path, in turn, loops as represented at line 956 to the inquiry at block 960 shown in FIG. 5A determining whether or not the watchdog synchronizing register has been set. In the event that it has not, then, as represented at line 962, the DSP program continues with the reading of six more measured quantities of data and proceeds to process such data. On the other hand, where the watchdog synchronizing register has been set, then as represented at line 964, the program returns to change the interrupt vector as represented at block 965. The program then returns to enable the interrupt as represented by line 967 leading to block 674 and continues as above-described. The implementation of the programs described in conjunction with FIGS. 5 and 6 can be provided with other computational or processing devices than those heretofore described. For example, as more advanced general purpose digital signal processors become available, they can be substituted, particularly for the synchronous state machine heretofore described. One such device, identified as a 24-bit general purpose DSP marketed under Model No. DSP56001 by Motorola, Inc., features 512 words of full speed on-chip program RAM memory, two pre-programmed data ROMs and special on-chip bootstrap hardware to permit convenient loading of user programs into the program RAM. 
A further feature of the device is the provision of 10.25 million instructions per second (MIPS). Looking to FIG. 7, the implementation of such a device is portrayed in general at 970 in block schematic form. One such DSP device which substitutes for the synchronous state machine earlier described is represented at block 972. The DSP 972 receives the 11 bits of data and a sign bit along bus 974 and provides control outputs as represented in general at line grouping 976. One such control is represented as lines 978 and 980 extending as controls to a voltage phase multiplexer 982 and a current phase multiplexer 984. Note that voltage analog inputs for phases A, B and C as well as a zeroing input, Z, are introduced via line grouping 986 to multiplexer 982, while, correspondingly, scaled current analog inputs for phases A, B and C along with a zero input, Z, are provided along line grouping 988 to multiplexer 984. Phase control is provided to the multiplexers 982 and 984 via respective lines 980 and 978 and the analog signals are presented therefrom via respective lines 990 and 992 to respective variable gain amplification stages 994 and 996. As before, an initial ranging unity gain setting is provided at amplifiers 994 and 996 via controls represented at respective lines 998 and 1000 emanating from line grouping 976. The initial ranging conversion then is provided from stage 994 via line 1002 to analog-to-digital converter stage 1004 while, correspondingly, this initial ranging conversion for current is provided from stage 996 via line 1006 to analog-to-digital conversion stage 1008. Control to converters 1004 and 1008 is provided from the DSP 972 as represented by respective lines 1010 and 1012. The outputs of these converters are shown being directed via lines 1014 and 974 to the RAM components of DSP 972 as 11 data bits plus a sign bit. 
The DSP 972 then functions to alter the gain values based upon the amplitude range of the sampled signals by control asserted to stages 994 and 996 from respective lines 998 and 1000. A second conversion then takes place, with selectively amplified inputs from stages 994 and 996 to provide data conversions at converters 1004 and 1008. These data, again provided as 11 data bits and a sign bit, are submitted to device 972 via lines 1014 and 974. The DSP 972 carries out cross-over detection, control and multiplication functions as described in conjunction with the synchronous state machine 20 to provide read-outs having as many as 21 significant data bits plus sign bits for processing by a similar DSP shown at 1016 via 24-bit bus 1018. Control between DSP 972 and DSP 1016 is represented at line 1020, while the corresponding six channel output of the DSP 1016 representing pulse data is shown generally at line grouping 1022. These six line groupings provide KYZ relay outputs as represented at corresponding line grouping 1024 and are seen to be directed to an electronic register represented at block 1026. Register 1026 communicates in serial communications transfer relationship with DSP 1016 as represented at line 1028 and provides the earlier-noted features described in conjunction with electronic register 26 of FIG. 1. In this regard, a telephone linkage through an appropriate modem is represented at line 1030; an optical port for providing serial data exchange is provided as represented at line 1032; and a data transfer port such as an RS232 variety is represented at line 1034. Since certain changes may be made in the above apparatus and method without departing from the scope of the invention herein involved, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
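The two-pass ranging described for stages 994/996 and converters 1004/1008 can be sketched as follows. This is an idealized model, not the patent's hardware: the gain ladder, the full-scale handling, and the function names are assumptions made for the sketch.

```python
# Idealized two-pass auto-ranging conversion: a first conversion at
# unity gain picks a gain code, the input is re-converted at that gain
# to use more of the converter's span, and the result is divided back
# out so the reading keeps extra effective resolution.
FULL_SCALE = 2047          # 11 data bits plus a sign bit
GAINS = [1, 2, 4, 8, 16]   # hypothetical programmable-amplifier steps

def convert(analog, gain=1):
    """Idealized A/D conversion at a given amplifier gain."""
    code = round(analog * gain)
    return max(-FULL_SCALE, min(FULL_SCALE, code))

def ranged_sample(analog):
    coarse = convert(analog, gain=1)          # first pass, unity gain
    # Pick the largest gain that keeps the amplified signal on scale.
    gain = max(g for g in GAINS if abs(coarse) * g <= FULL_SCALE)
    fine = convert(analog, gain=gain)         # second pass, re-ranged
    return fine / gain                        # scale factor divided out

print(ranged_sample(3.2))   # small signal, amplified before conversion
```

A small input such as 3.2 is amplified 16x before the second conversion, so the quantization step shrinks from 1 unit to 1/16 unit.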
connection on a bundle

A connection on a bundle $P \to X$ – a principal bundle or an associated bundle like a vector bundle – is a rule that identifies fibers of the bundle along paths in the base space $X$. There are several different but equivalent formalizations of this idea:

• as a parallel transport functor,

• as a rule for a covariant derivative,

• as a distribution (field) of horizontal subspaces – an Ehresmann connection – and via a connection $1$-form which annihilates the distribution of horizontal subspaces.

The connection in that sense induces a smooth version of a Hurewicz connection. The usual textbook convention is to say just connection for the distribution of horizontal subspaces, and the objects of the other three approaches one calls more specifically covariant derivative, connection $1$-form and parallel transport. In the remainder of this Idea-section we discuss a bit more how to understand connections in terms of parallel transport. 
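As an informal warm-up (a toy sketch, not part of the article): for a vector bundle the parallel transport maps $tra(\gamma)$ are invertible linear maps between fibers, and functoriality says that transport along a composite path is the composite of the transports. With fibers modeled as $\mathbb{R}^2$ and transports as 2x2 matrices:

```python
# Toy model of parallel transport for a rank-2 vector bundle: each
# tra(gamma) is an invertible linear map between fibers, and the
# functoriality condition tra(gamma' . gamma) = tra(gamma') o tra(gamma)
# is matrix multiplication in path order.
def compose(tra_gamma, tra_gamma_prime):
    """Transport along the composite path: apply tra(gamma) first."""
    a, b = tra_gamma, tra_gamma_prime
    return [[sum(b[i][k] * a[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tra_gamma = [[0.0, -1.0], [1.0, 0.0]]        # rotate the fiber by 90 degrees
tra_gamma_prime = [[2.0, 0.0], [0.0, 2.0]]   # scale the fiber by 2

tra_composite = compose(tra_gamma, tra_gamma_prime)
print(tra_composite)   # → [[0.0, -2.0], [2.0, 0.0]]
```

The example matrices are arbitrary; any invertible maps would do.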
Given a smooth bundle $P \to X$, for instance a $G$-principal bundle or a vector bundle, a connection on $P$ is a prescription to associate with each path $\gamma : x \to y$ in $X$ (which is a morphism in the path groupoid $\mathbf{P}_1(X)$) a morphism $tra(\gamma)$ between the fibers of $P$ over these points $\array{ P_x &\stackrel{tra(\gamma)}{\to}& P_y \\ x &\stackrel{\gamma}{\to}& y }$ such that

• this assignment respects the structure on the fibers $P_x$ (for instance is $G$-equivariant in the case that $P$ is a $G$-bundle or is linear in the case that $P$ is a vector bundle);

• this assignment is smooth in a suitable sense;

• this assignment is functorial in that for all pairs $x \stackrel{\gamma}{\to} y$, $y \stackrel{\gamma'}{\to} z$ of composable paths in $X$ we have $\array{ P_x &\stackrel{tra(\gamma)}{\to}& P_y &\stackrel{tra(\gamma')}{\to}& P_z \\ x &\stackrel{\gamma}{\to}& y &\stackrel{\gamma'}{\to}& z } \;\;\; = \;\;\; \array{ P_x &\stackrel{tra(\gamma' \circ \gamma)}{\to}& P_z \\ x &\stackrel{\gamma'\circ \gamma}{\to}& z }$

In other words, a connection on $P$ is a functor $tra : \mathbf{P}_1(X) \to At''(P)$ from the path groupoid of $X$ to the Atiyah Lie groupoid of $P$ that is smooth in a suitable sense and splits the Atiyah sequence in that $\mathbf{P}_1(X) \stackrel{tra}{\to} At''(P) \to \mathbf{P}_1 (X)$ (see the notation at Atiyah Lie groupoid). The functor $tra$ is called the parallel transport of the connection. This terminology comes from looking at the orbits of all points in $P$ under $tra$ (i.e. from looking at the category of elements of $tra$): these trace out paths in $P$ sitting over paths in $X$ and one thinks of the image of a point $p \in P_x$ under $tra(\gamma)$ as the result of propagating $p$ parallel along these curves in $P$.

Flat connections

It may happen that the assignment $tra : \gamma \mapsto tra(\gamma)$ only depends on the homotopy class of the path $\gamma$ relative to its endpoints $x, y$. 
In other words: that $tra$ factors through the functor $P_1(X) \to \Pi_1(X)$ from the path groupoid to the fundamental groupoid of $X$. In that case the connection is called a flat connection.

More concrete picture

By Lie differentiation of the functor $tra$, i.e. by looking at what it does to very short pieces of paths, one obtains from it a splitting of the Atiyah Lie algebroid sequence, which is a morphism $\nabla : T X \to at(P)$ of vector bundles. Locally on $X$ – meaning: when everything is pulled back to a cover $Y \to X$ of $X$ – this is a $Lie(G)$-valued 1-form $A \in \Omega^1(Y, Lie(G))$ with certain special properties. In particular, since every $G$-principal bundle canonically trivializes when pulled back to its own total space $P$, a connection in this case gives rise to a 1-form $A \in \Omega^1(P)$ satisfying two conditions. This data is called an Ehresmann connection. If instead $P = E$ is a vector bundle, then the two conditions satisfied by $A$ imply that it defines a linear map $\nabla : \Gamma(E) \to \Omega^1(X) \otimes \Gamma(E)$ from the space $\Gamma(E)$ of sections of $E$ that satisfies the properties of a covariant derivative. If again the connection is flat, then this is manifestly the datum of a Lie infinity-algebroid representation of the tangent Lie algebroid $T X$ of $X$ on $E$: it defines the action Lie algebroid which is the Lie version of the Lie groupoid classified by the parallel transport functor.

More abstract picture

Recall from the discussion at $G$-principal bundle that the $G$-bundle $P \to X$ is encoded in a suitable morphism $X \to \mathbf{B}G$ (namely a morphism in the right (infinity,1)-category which may be represented by an anafunctor). It turns out that similarly suitable morphisms $\mathbf{P}_1(X) \to \mathbf{B}G$ encode in one step the $G$-bundle together with its parallel transport functor. This is described in great detail in the reference by Schreiber–Waldorf below. (…am running out of time… )

Let $G$ be a Lie group. 
We recall briefly the following discussion of $G$-principal bundles. For an in-depth discussion see Smooth∞Grpd. Write $\mathbf{B}G : U \mapsto ( Hom_{Diff}(U,G) \stackrel{\to}{\to} *)$ for the functor that sends a Cartesian space $U$ to the delooping groupoid of the group of $G$-valued smooth functions on $U$: the groupoid with a single object and the group $Hom_{Diff}(U,G)$ of maps as its set of morphisms. This is a groupoid-valued sheaf on the site CartSp${}_{smooth}$ and in fact is a (2,1)-sheaf/stack. For $X$ a paracompact smooth manifold, we may also regard it as a (2,1)-sheaf on CartSp in an evident way. A detailed discussion of this is at Smooth∞Grpd in the section on Lie groups. Now write $\mathfrak{g}$ for the Lie algebra of $G$. Then consider the functor $\mathbf{B} G_{conn} : U \mapsto [\mathbf{P}_1(U),\mathbf{B}G] = \left\{ A \stackrel{g}{\to} (g^{-1} A g + g^{-1} d g) | A \in \Omega^1(U,\mathfrak{g})\,, g \in C^\infty(U,G) \right\}$ that sends a Cartesian space $U$ to the groupoid of Lie-algebra valued 1-forms over $U$. There is an evident morphism of (2,1)-sheaves $\mathbf{B}G_{conn} \to \mathbf{B}G$ that forgets the 1-forms on each object $U$. A connection on a smooth $G$-principal bundle $g : X \to \mathbf{B}G$ is a lift $\nabla$ to $\mathbf{B}G_{conn}$ $\array{ && \mathbf{B}G_{conn} \\ & {}^{\mathllap{\nabla}}\nearrow & \downarrow \\ X &\stackrel{g}{\to}& \mathbf{B}G } \,.$ The groupoid of $G$-principal bundles with connection on $X$ is $G Bund_\nabla(X) := Hom(X,\mathbf{B}G_{conn}) \,.$

Explicitly, a morphism $g : X \to \mathbf{B}G$ is a nonabelian Cech cohomology cocycle on $X$ with values in $G$:

1. a choice of good open cover $\{U_i \to X\}$ of $X$;

2. a collection of smooth functions $(g_{i j} \in C^\infty(U_i \cap U_j, G))$ such that on $U_i \cap U_j \cap U_k$ the equation $g_{i j} g_{j k} = g_{i k}$ holds.

A lift $\nabla : X \to \mathbf{B}G_{conn}$ of this is in addition

1. 
a choice of Lie-algebra valued 1-forms $(A_i \in \Omega^1(U_i, \mathfrak{g}))$ such that on $U_i \cap U_j$ the equation $A_j = g^{-1} A_i g + g^{-1} d g$ holds, where on the right we have the pullback $g^* \theta$ of the Maurer-Cartan form on $G$ (see there).

Existence of connections

Proposition (existence of connections). Every $G$-principal bundle admits a connection. In other words, the forgetful functor $Hom(X, \mathbf{B}G_{conn}) \to Hom(X,\mathbf{B}G)$ is essentially surjective.

Proof. Choose a partition of unity $(\rho_i \in C^\infty(X,\mathbb{R}))$ subordinate to the good open cover $\{U_i \to X\}$ with respect to which a given cocycle $g : X \to \mathbf{B}G$ is expressed. Then set $A_i := \sum_{i_0} \rho_{i_0}|_{U_{i_0}} g_{i_0 i}|^{-1}_{U_{i_0}} d g_{i_0 i}|_{U_{i_0}} \,.$ By slight abuse of notation we shall write this and similar expressions simply as $A_i := \sum_{i_0} \rho_{i_0}(g_{i_0 i}^{-1} d_{dR} g_{i_0 i}) \,.$ Using the fact that $(g_{i j})$ satisfies its cocycle condition, one checks that this satisfies the cocycle condition for the 1-forms: \begin{aligned} A_j - g_{i j}^{-1} A_i g_{i j} &= \sum_{i_0} \rho_{i_0} ( g_{i_0 j}^{-1} d g_{i_0 j} - ( g_{i_0 i} g_{i j}) ^{-1} (d g_{i_0 i}) g_{i j} ) \\ & = \sum_{i_0} \rho_{i_0} ( g_{i j}^{-1} d g_{i j} ) \\ & = g_{i j}^{-1} d g_{i j} \end{aligned} \,.

Special cases

Connections on the tangent bundle

Connections on tangent bundles are also called affine connections; on a Riemannian or pseudo-Riemannian manifold the canonical metric-compatible, torsion-free choice is the Levi-Civita connection. They play a central role for instance on Riemannian manifolds and pseudo-Riemannian manifolds. From the end of the 19th century and the beginning of the 20th century originates a language to talk about these in terms of Christoffel symbols.

Connections in physics

In physics connections on bundles model gauge fields. For more on this see higher category theory and physics. Generalizing the parallel transport definition from ordinary manifolds to supermanifolds yields the notion of superconnection. 
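The cocycle computation in the existence proof above rests on the product-rule identity $(g h)^{-1} d(g h) = h^{-1}(g^{-1} d g) h + h^{-1} d h$ for matrix-group-valued functions, applied to $g_{i_0 j} = g_{i_0 i} g_{i j}$. A numerical sanity check of that identity by finite differences (an informal sketch, not part of the proof; the sample curves are arbitrary):

```python
import math

# 2x2 matrix helpers (plain nested lists, no external dependencies)
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

def d(f, t, eps=1e-6):
    """Central finite-difference derivative of a matrix-valued function."""
    fp, fm = f(t + eps), f(t - eps)
    return [[(fp[i][j] - fm[i][j]) / (2 * eps) for j in range(2)]
            for i in range(2)]

# Two arbitrary smooth GL(2)-valued curves, standing in for transition
# functions restricted to a path in a double overlap.
def g(t):
    c, s = math.cos(t), math.sin(t)
    return [[c + 0.1 * t, -s], [s, c + 0.1 * t]]

def h(t):
    return [[math.exp(0.2 * t), t], [0.0, 1.0]]

t = 0.7
gh = lambda s: mat_mul(g(s), h(s))

lhs = mat_mul(mat_inv(gh(t)), d(gh, t))          # (gh)^{-1} d(gh)
rhs_conj = mat_mul(mat_inv(h(t)),
                   mat_mul(mat_mul(mat_inv(g(t)), d(g, t)), h(t)))
rhs_mc = mat_mul(mat_inv(h(t)), d(h, t))
rhs = [[rhs_conj[i][j] + rhs_mc[i][j] for j in range(2)] for i in range(2)]

max_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(max_err < 1e-6)   # → True
```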
Simons-Sullivan structured bundles

When the notion of connection on a principal bundle is slightly coarsened, i.e. when more connections are regarded as being isomorphic than usual, one arrives at a structure called a Simons-Sullivan structured bundle. This has the special property that for $G = U$ the unitary group, the corresponding Grothendieck group of such bundles is a model for differential K-theory.

Connections on a principal $\infty$-bundle

See connection on a principal ∞-bundle.

References

A classical textbook reference is

References for the description of connections in terms of their parallel transport are collected at

Basic facts about connections, such as the existence proof in terms of Cech cocycles, are collected in the brief lecture note
endomorphism operad The endomorphism operad of a monoidal category $C$ – also called the multicategory represented by $C$ – is an operad whose $n$-ary operations are the morphisms out of $n$-fold tensor products in $C$, $End(C)_n(c_1, \cdots, c_n,c) := Hom_C(c_1\otimes \cdots \otimes c_n, c) \,.$ Endomorphism operads come in two flavors, one being a planar operad, the other a symmetric operad. Mostly the discussion of both cases proceeds in parallel. We first give the simple pedestrian definition in terms of explicit components, and then a more abstract definition, which is useful for studying some general properties of endomorphism operads. In terms of components For $(C,\otimes, I)$ a (symmetric) monoidal category, the endomorphism operad of $C$ is the symmetric operad/planar operad whose colors are the objects of $C$, and whose objects of $n$-ary operations are the hom objects $End_C(X)(c_1, \cdots, c_n ; c) := C(c_1 \otimes \cdots \otimes c_n,\; c) \,.$ This comes with the obvious composition operation induced from the composition in $C$. Moreover, in the symmetric case there is a canonical action of the symmetric group induced. For $S \subset Obj(C)$ any subset of objects, the $S$-colored endomorphism operad of $C$ is the restriction of the endomorphism operad just defined to the set of colors being $S$. In particular, the endomorphism operad of a single object $c \in C$, often denoted $End(c)$, is the single-colored operad whose $n$-ary operations are the morphisms $c^{\otimes n}\to c$ in $C$. In terms of Cartesian monads Let $T : Set \to Set$ be the free monoid monad.
Notice, from the discussion at multicategory, that a planar operad $P$ over Set with set of colors $C$ is equivalently a monad in the bicategory of $T$-spans $\array{ && P \\ & \swarrow && \searrow \\ T C && && C } \,.$ In this language, for $C$ a (strict) monoidal category, the corresponding endomorphism operad is given by the $T$-span $\array{ && & & T Obj(C) \times_{Obj(C)} Mor(C) \\ && & \swarrow && \searrow \\ && T Obj(C) && && Mor(C) \\ & {}^{\mathllap{id}}\swarrow && \searrow^{\mathrlap{\otimes}} && {}^{\mathllap{s}}\swarrow && \searrow^{\mathrlap{t}} \\ T Obj(C) &&&& Obj(C) &&&& Obj(C) } \,,$ where $\otimes : T Obj(C) \to Obj(C)$ denotes the iterated tensor product in $C$, and where the top square is defined to be the pullback, as indicated. The structure of an algebra over the operad $P$ on an object $A \in C$ is equivalently a morphism of operads $\rho : P \to End(A)$. Relation to categories of operators To every operad $P$ is associated its category of operators $P^{\otimes}$, which is a monoidal category. With that suitably defined, forming endomorphism operads is right 2-adjoint to forming categories of operators. See (Hermida, theorem 7.3) for a precise statement in the context of non-symmetric operads and strict monoidal categories. The basic definition of symmetric endomorphism operads is for instance in section 1 of A general account of the definition of representable multicategories is in section 3.3 of The notion of representable multicategory is due to • Claudio Hermida, Representable multicategories, Adv. Math. 151 (2000), no. 2, 164-225 (pdf) Discussion of the 2-adjunction with the category of operators-construction is around theorem 7.3 there. Characterization of representable multicategories by fibrations of multicategories is in • Claudio Hermida, Fibrations for abstract multicategories, Fields Institute Communications, Volume 43 (2004) (pdf) and in section 9 of Discussion in the context of generalized multicategories is in section 9 of • G.
Cruttwell, Mike Shulman, A unified framework for generalized multicategories Theory and Applications of Categories, Vol. 24, 2010, No. 21, pp 580-655. (TAC)
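The components definition above is easy to make concrete when $C = (Set, \times)$: an $n$-ary operation of $End(X)$ is just a function $X^n \to X$, and operadic composition is substitution of functions into the slots of another. A small illustrative sketch (the representation of operations as arity/function pairs is my own, not from the article):

```python
# An n-ary operation on a set X is represented as a pair (arity, function-on-tuples).
def compose(f_op, g_ops):
    """Operadic composition: plug the operations g_ops into the slots of f_op."""
    f_ar, f = f_op
    assert f_ar == len(g_ops)           # one operation per input slot of f
    total = sum(ar for ar, _ in g_ops)  # arity of the composite
    def h(args):
        pieces, i = [], 0
        for ar, g in g_ops:             # feed each g its consecutive block of args
            pieces.append(g(args[i:i + ar]))
            i += ar
        return f(tuple(pieces))
    return total, h

# Some operations in End(X) for X a set of numbers:
f_max = (2, lambda t: max(t))   # binary
ident = (1, lambda t: t[0])     # unary identity
f_min = (2, lambda t: min(t))   # binary

arity, h = compose(f_max, [ident, f_min])   # h(x, y, z) = max(x, min(y, z))
print(arity, h((0, 1, 0)))                  # 3 0
```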
[FOM] FOM Digest, Vol 66, Issue 5, Predicativity in CZF (Daniel Méhkeri) rathjen@maths.leeds.ac.uk rathjen at maths.leeds.ac.uk Tue Jun 10 17:33:01 EDT 2008 Predicativity of CZF is usually not argued for directly. There is a constructive notion of set which can be formalized in Martin-Löf type theory (MLTT). Peter Aczel has given an interpretation of CZF based on this notion in MLTT. The argument for CZF's constructivity can thus be based on that of MLTT. The brand of predicativity adhered to in MLTT is often called "generalized predicative" as it allows for the formation of inductively defined sets other than the natural numbers. Constructive set theory originated with John Myhill. In his 1975 JSL paper he presented a formal theory CST (Constructive Set Theory). One of the axioms of CST is exponentiation, that is the statement that given two sets A and B, the set of all functions from A to B, A->B, forms a set. The subset collection axiom of CZF is a generalization of exponentiation. Myhill rejects the power set axiom but accepts exponentiation: ``Power set seems especially nonconstructive and impredicative compared with the other axioms .... One could make the admittedly vague objection to the existence of the set A->B of mappings of A into B but I do not think the situation is parallel - a mapping or function is a rule, a finite object which can actually be given ...'' Michael Rathjen More information about the FOM mailing list
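For finite sets the function set A->B that Myhill's exponentiation axiom posits is concretely enumerable, which is one intuition for why exponentiation feels more constructive than full power set. A small illustration (the particular sets A and B are my own choices):

```python
from itertools import product

A, B = ['a0', 'a1'], [0, 1, 2]

# The exponential B^A: every function A -> B, tabulated as a dict.
functions = [dict(zip(A, values)) for values in product(B, repeat=len(A))]

print(len(functions))   # 9, i.e. len(B) ** len(A)
print(functions[0])     # {'a0': 0, 'a1': 0}
```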
Here's the question you clicked on: find the second derivative of Best Response: Basically, you just need to take an implicit derivative. Since you have a fraction, you need to remember to use the quotient rule. So you get \[y\prime\prime=\frac{y(1)-(x+1)y\prime}{y^2}\]Now you just have to substitute \(y\prime\) for the expression given to you, and you should be golden.
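The quoted answer can be sanity-checked numerically. The original function is not shown in the thread, so the check below assumes the given first derivative is y' = (x+1)/y (consistent with the quotient-rule expression above) and picks the sample curve y = sqrt((x+1)^2 + 1), which satisfies that equation; both the curve and the sample point are my choices.

```python
# Check y'' = (y - (x+1)*y') / y**2 against a finite-difference second derivative.
y = lambda x: ((x + 1)**2 + 1) ** 0.5               # sample curve with y' = (x+1)/y
yp = lambda x: (x + 1) / y(x)                       # the given first derivative
ypp = lambda x: (y(x) - (x + 1) * yp(x)) / y(x)**2  # the quotient-rule result

x, h = 0.5, 1e-4
fd = (y(x + h) - 2 * y(x) + y(x - h)) / h**2        # central second difference
print(abs(fd - ypp(x)))                             # close to 0
```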
Problem using the difference quotient July 17th 2012, 07:29 AM #1 Problem using the difference quotient I'm trying to differentiate a sum of functions by means of the difference quotient from a question in my book. The derivative is a given, but I can't end up at the final result (I can, but only if I use $anx^{n -1}$). I feel like I'm close, but I now need assistance. $f(x) = \frac{7}{2} x^\frac{1}{2} + \frac{3}{x^2} - 9$ $f(x + \Delta x) = \frac{7}{2} (x + \Delta x)^\frac{1}{2} + \frac{3}{(x + \Delta x)^2} - 9$ $f(x + \Delta x) - f(x) = \left[\frac{7}{2} (x + \Delta x)^\frac{1}{2} + \frac{3}{(x + \Delta x)^2} - 9\right] - \left[\frac{7}{2} x^\frac{1}{2} + \frac{3}{x^2} - 9\right]$ $\frac{f(x + \Delta x) - f(x)}{\Delta x} = \frac{\frac{7}{2}\left[(x +\Delta x)^\frac{1}{2} - x^\frac{1}{2}\right] + 3\left[ \frac{1}{(x + \Delta x)^2} - \frac{1}{x^2}\right]}{\Delta x}$ Now this is where I'm stuck. If I assume I can eliminate $\left[(x +\Delta x)^\frac{1}{2} - x^\frac{1}{2}\right]$ by way of the difference of two squares, then I can multiply both the numerator and denominator by $\left[(x +\Delta x)^\frac{1}{2} + x^\frac{1}{2}\right]$ in order to replace it with $(x + \Delta x) - x$ in the numerator. As for $\left[\frac{1}{(x + \Delta x)^2} - \frac{1}{x^2}\right]$, again I think I can eliminate the squares to reach an easier form by way of the difference of two squares, but I don't know how. Thank you for your attention. Re: Problem using the difference quotient On the left side, rationalize the numerator (this is a strategy that is often helpful in situations like this), and on the right, combine fractions and expand.
$\lim_{\Delta x\to0}\frac{f(x+\Delta x) - f(x)}{\Delta x} = \lim_{\Delta x\to0}\frac{7(x+\Delta x)^{1/2}-7x^{1/2}}{2\Delta x} + \lim_{\Delta x\to0}\frac1{\Delta x}\left[\frac3{(x+\Delta x)^2} - \frac3{x^2}\right]$ $= \frac72\lim_{\Delta x\to0}\frac{(x+\Delta x)^{1/2}-x^{1/2}}{\Delta x} + 3\lim_{\Delta x\to0}\frac{x^2 - (x+\Delta x)^2}{x^2\Delta x(x+\Delta x)^2}$ $= \frac72\lim_{\Delta x\to0}\frac{\left[(x+\Delta x)^{1/2}-x^{1/2}\right]\left[(x+\Delta x)^{1/2}+x^{1/2}\right]}{\Delta x\left[(x+\Delta x)^{1/2}+x^{1/2}\right]} + 3\lim_{\Delta x\to0}\frac{x^2 - \left(x^2+2x\Delta x+\Delta x^2\right)}{x^2\Delta x(x+\Delta x)^2}$ $= \frac72\lim_{\Delta x\to0}\frac{(x+\Delta x)-x}{\Delta x\left[(x+\Delta x)^{1/2}+x^{1/2}\right]} - 3\lim_{\Delta x\to0}\frac{2x\Delta x+\Delta x^2}{x^2\Delta x(x+\Delta x)^2}$ $= \frac72\lim_{\Delta x\to0}\frac{\Delta x}{\Delta x\left[(x+\Delta x)^{1/2}+x^{1/2}\right]} - 3\lim_{\Delta x\to0}\frac{\Delta x\left(2x+\Delta x\right)}{x^2\Delta x(x+\Delta x)^2}$ $= \frac72\lim_{\Delta x\to0}\frac1{(x+\Delta x)^{1/2}+x^{1/2}} - 3\lim_{\Delta x\to0}\frac{2x+\Delta x}{x^2(x+\Delta x)^2}$ $= \frac7{2\left(x^{1/2}+x^{1/2}\right)} - \frac{3(2x)}{x^2\cdot x^2}$ $= \frac7{4\sqrt x} - \frac6{x^3}$ Re: Problem using the difference quotient Beautiful! Thanks!
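The closed form derived in the thread can also be checked against the difference quotient directly at a sample point (the point and the step size below are arbitrary choices):

```python
# f(x) = (7/2) x^(1/2) + 3/x^2 - 9 and its derivative f'(x) = 7/(4 sqrt(x)) - 6/x^3
f = lambda x: 3.5 * x**0.5 + 3 / x**2 - 9
fprime = lambda x: 7 / (4 * x**0.5) - 6 / x**3

x, h = 2.0, 1e-6
dq = (f(x + h) - f(x)) / h      # the difference quotient at a small step
print(dq, fprime(x))            # both approximately 0.4874
```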
Lower Bounds on the Obstacle Number of Graphs Given a graph $G$, an obstacle representation of $G$ is a set of points in the plane representing the vertices of $G$, together with a set of connected obstacles such that two vertices of $G$ are joined by an edge if and only if the corresponding points can be connected by a segment which avoids all obstacles. The obstacle number of $G$ is the minimum number of obstacles in an obstacle representation of $G$. It is shown that there are graphs on $n$ vertices with obstacle number at least $\Omega({n}/{\log n})$. Obstacle number; visibility graph; graph representation
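The visibility condition in the definition (the segment between two points must avoid every obstacle) reduces, for segment-shaped obstacles, to standard segment-intersection tests. A minimal sketch using the usual orientation test; the obstacle and the sample points are illustrative, and a full obstacle representation would also need to handle collinear and endpoint cases:

```python
def ccw(a, b, c):
    # signed area test: >0 if a,b,c turn counterclockwise, <0 if clockwise
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, r, s):
    # strict crossing of open segments pq and rs (ignores touching/collinear cases)
    return ccw(p, q, r) * ccw(p, q, s) < 0 and ccw(r, s, p) * ccw(r, s, q) < 0

obstacle = ((1, -1), (1, 1))                     # one vertical segment obstacle
print(segments_cross((0, 0), (2, 0), *obstacle))  # True: the sightline is blocked
print(segments_cross((0, 0), (0, 2), *obstacle))  # False: the two points see each other
```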
AMS Chelsea Publishing 1969; 141 pp; hardcover Volume: 353 Reprint/Revision History: reprinted 2005 ISBN-10: 0-8218-3887-3 ISBN-13: 978-0-8218-3887-7 List Price: US$30 Member Price: US$27 Order Code: CHEL/353.H This little book is a brilliant introduction to an important boundary field between the theory of probability and differential equations. --E. B. Dynkin, Mathematical Reviews This well-written book has been used for many years to learn about stochastic integrals. The book starts with the presentation of Brownian motion, then deals with stochastic integrals and differentials, including the famous Itô lemma. The rest of the book is devoted to various topics of stochastic integral equations, including those on smooth manifolds. Originally published in 1969, this classic book is ideal for supplementary reading or independent study. It is suitable for graduate students and researchers interested in probability, stochastic processes, and their applications. Graduate students and research mathematicians interested in probability, stochastic processes, and their applications. • Brownian motion • Stochastic integrals and differentials • Stochastic integral equations \((d=1)\) • Stochastic integral equations \((d\geq2)\) • References • Subject index • Errata
Math Tools Discussion: All Topics, Algebra Balance Scales Discussion: All Topics Topic: Algebra Balance Scales Related Item: http://mathforum.org/mathtools/tool/458/ Subject: RE: Problems with negative numbers Author: djw Date: Jun 17 2005 17 June 2005 did you post the applet using negative numbers? On Feb 17 2004, Joel Duffin wrote: > On Feb 14, 2004, Julee wrote: One limitation I found with this > applet is that the problems I encountered all involved positive > numbers. Most students don't have a problem with algebra equations > where they simply take away or divide. Also, we used balloons > tied to our scales to represent negative numbers or anti-weights. > This applet would have been more useful in helping students > visualize the negative numbers. >>> My reply We have worked > on a version of this applet that includes problems involving > negative numbers and uses balloons to represent them. I will see if > I can dig it up and post it.
Math Goes Pop! If you read about math and enjoy the internet, chances are you saw this op-ed in the New York Times over the weekend. The piece, titled “Is Algebra Necessary?,” argues that math requirements, algebra in particular, are prohibitively difficult for many people, and may be contributing to high school and college dropout rates. Instead of imposing an algebra restriction, author Andrew Hacker suggests restructuring the curriculum around “citizen statistics” and “quantitative reasoning.” Despite the jargon-y names, he insists courses like this could be developed without sacrificing rigor or dumbing down the curriculum. As might be expected, the piece has furrowed quite a few brows. A few friends have asked me for my opinion, but I’m a little late to the game, and there are a number of people who have expressed my views in their own words quite well. I’ll briefly add my own two cents, peppered with links throughout. . . . → Read More: Asking the right questions A couple of days ago I watched a video that really depressed me. Here's a link to a local news story from Ankeny, Iowa – I’d encourage you to take a look at the news clip there (unfortunately, I can’t embed it here). The story concerns a 6th grade student who has memorized the decimal expansion of pi to 340 or so digits. In and of itself, this might not seem like a particularly newsworthy achievement – as any Pi Day aficionado can tell you, there are people who have memorized more digits. Perhaps what makes it newsworthy is the fact that the student is only twelve years old, or, more perversely, the fact that his accomplishment came in response to the challenge of his math teacher, who asked his students to memorize as many digits of pi as possible. By far the most depressing part of the video is . . . → Read More: Pi, I Shake My Fist at You A couple of weeks ago, the Washington Post ran an op-ed written by G. V.
Ramanathan, emeritus Professor in mathematics, statistics, and computer science, entitled “How much math do we really need?” As the title suggests, Ramanathan uses his space in the paper to argue against the grain of conventional wisdom when it comes to mathematics education; his point is that American students are actually receiving too MUCH math, rather than not enough. It’s an appealing thesis, especially for those looking for an excuse to embrace their own math phobia, but ultimately I find it to be less than responsible. Consider, for example, the following passage: How much math do you really need in everyday life? Ask yourself that — and also the next 10 people you meet, say, your plumber, your lawyer, your grocer, your mechanic, your physician or even a math teacher. Unlike literature, history, politics and music, . . . → Read More: A Sufficient Mathematical Background Late last month there was apparently a bit of a ruckus over whether or not California should adopt new national education standards as part of a competition among the states dubbed “Race to the Top.” Although Race to the Top (the brain child of education secretary Arne Duncan) hasn’t received much media attention, it was one of the many byproducts of last year’s economic stimulus act. Recently, though, it’s been the subject of more discussion – a relatively detailed article on the program was published over the weekend, for example. For Californians (and residents of other states, I’m sure), participation in Race to the Top has been met with some controversy. The latest debate, as I mentioned above, has been about education standards. Race to the Top comes with its own set of national education standards, and adopting those standards helps a state’s odds of winning some federal education funding. . . . → Read More: Race to Where? Last year, I remarked on a TED talk from mathemagician Arthur Benjamin, who argued for the displacement of Calculus by Statistics in the hierarchy of high school mathematics.
This year, TED has sponsored a talk by high school math teacher Dan Meyer, who discusses what, in his view, are the major problems with the way mathematics is currently taught to kids, and what can be done to fix them. His opening is spot on: “I teach high school math. I sell a product to a market that doesn’t want it, but is forced by law to buy it.” He goes on to argue that the problem with math education, a problem exacerbated by most textbooks, is that it discourages what he terms patient problem solving. Problems in textbooks rarely reflect the types of problems one encounters in real life: textbook problems usually supply you with just . . . → Read More: Patient Problem Solving Late last year, a study was published in Proceedings of the National Academy of Sciences which tried to pin down origins for the gender gap in mathematics education. As I’ve discussed before, the gender gap in math education is shrinking, and has been shown to be less about biology and more about culture – in cultures where gender equality is weaker, the gender gap is stronger. Nevertheless, even in American culture, the gender gap still persists, and this study by Sian Beilock and others has tried to figure out how, if the gender gap is culturally based, it comes about in young students. The original study can be found here, while a discussion of the study that was featured in the news can be found here. Professor Beilock and her colleagues tried to correlate young students’ math anxiety with the math anxiety of their teachers. In particular, they looked . . .
→ Read More: Gender Gap Genesis Earlier this month, Wired published an article written by Daniel Roth, enticingly titled “Making Geeks Cool Could Reform Education.” It serves as an interesting counterpoint to the commonly used argument that the best way to reform education is to better integrate it with the most current technology, so that going to school feels less like going to school and more like playing video games (family friendly ones, of course). The essay in Wired takes a slightly different approach – it profiles schools that have successfully channeled the inner geeks of their students, the argument being that the geek subculture rewards intelligence with popularity. To do this, schools must make learning seem cool. This is a feat which is easier said than done, because, as we all know, there’s no better way to convince a teenager that something . . . → Read More: Reforming Education through Geek Chic Let me begin by saying that, in response to the question Why is 9/09/09 so special?, my response is simple: it’s not. In fact, I would argue that 09/08/09 is much more interesting. This claim has nothing to do with numerology, and everything to do with President Obama’s speech to the youth of America on the value of education. The speech made very clear the importance of taking education seriously, and hopefully convinced students that a good education benefits not only themselves, but also society at large. In case you missed the speech, the transcript can be found here. Although the speech was about education in general, mathematics got a little bit of love too. Here’s one such example: What you make of your education will decide nothing less than the future of this country. What you’re learning in school today will determine whether we as a nation . . . → Read More: Make Money Money, Make Money Money Money! (and Learn Math, too)
Remains of the Day It would be rather useless for me to simply list the talks for the rest of Tuesday, as that information is already available. Still, as a sort of Rorschach test, it’s sorta useful to group them thematically. Ooguri, Kachru and — to a lesser extent — Douglas focussed on supersymmetry-breaking. The big breakthrough, of course, was the realization that long-lived metastable SUSY-breaking vacua are good enough for our purposes, and these are much more readily available than theories with no SUSY vacua. The most interesting part of Kachru’s talk was his discussion of the realization of conformal sequestering in a stringy context. Probably, a proper discussion of that will require another post, starting with a review of Schmaltz and Sundrum. Beisert and Zarembo talked about integrability in large-$N$ $\mathcal{N}=4$ Super Yang Mills, and in the dual AdS string theory. Beisert’s talk featured considerable progress towards an all-orders conjecture for the scaling dimensions of operators in the CFT, and completely illegible formulæ on his slides. Roberto Emparan gave a very nice review of the status of rotating black holes in higher dimensions. In 4 dimensions, there are theorems which give a unique steady-state configuration for given blackhole mass, $M$ and angular momentum, $J$, and an upper bound on $J$, for fixed $M$. Already in 5 dimensions, there are black rings (horizon topology $S^1\times S^2$ instead of $S^3$). So that, for some range of $M,J$, there are three configurations (two stable) for given $M,J$. When you go to $d\geq 6$, things get even worse: there’s no upper bound on $J$ (this results from the obvious observation that the centrifugal term in $-\frac{G M}{r^{d-3}}+\frac{J^2}{M^2 r^2}$ wins for $d\geq 6$) and it’s not even known what the range of possible horizon topologies is. And then there was Riccioni’s talk on $E_{11}$. Dunno what to make of that … Posted by distler at June 27, 2007 1:44 AM
ISSUE-2083 (Paced-anim-complex): Paced animation and complex types [Last Call: SVG 1.2 Tiny ] From: SVG Working Group Issue Tracker <sysbot+tracker@w3.org> Date: Wed, 1 Oct 2008 14:15:01 +0000 (GMT) To: public-svg-wg@w3.org Message-Id: <20081001141501.52B006B62B@tibor.w3.org> ISSUE-2083 (Paced-anim-complex): Paced animation and complex types [Last Call: SVG 1.2 Tiny ] Raised by: Doug Schepers On product: Last Call: SVG 1.2 Tiny Dr. Olaf Hoffmann 16.2.7 Paced animation and complex types notes now for <list-of-points>, that 'There is no defined formula to pace a list of points. The request to pace should be ignored and the value of linear used instead.' This is understandable and fine because 'Distance is defined for types which can be expressed as a list of values, where each value is a vector of scalars in an n-dimensional space. For example, an angle value is a list of one value in a 1-dimensional space and a color is a list of 1 value in a 3-dimensional space.' and even more relevant: Defines interpolation to produce an even pace of change across the animation. This is only supported for values that define a linear numeric range, and for which some notion of "distance" between points can be calculated (e.g. position, width, height, etc.)." And a list is not always a vector (a vector has an absolute value and a direction) - <list-of-points> - no vector, no formula for a distance, therefore no paced animation defined in general. And there is no way to get an interpolation with an even pace for such a list. 1. Surprisingly there are (still) formulas given for <list-of-lengths>, <list-of-coordinates>, <list-of-numbers>, <path-data>. These lists are no vectors either, <path-data> is not even a list which can be confused with a vector. 
Especially <list-of-points> is equivalent to a subset of <path-data> - and if the equivalence of a subset of <path-data> has no formula for a distance as identified already correctly, obviously the currently given formula for <path-data> results in nonsense and in general not into an interpolation with an even pace. The same applies for the other lists, because they do not represent vectors with one absolute value and one direction. Therefore there should be no formula too and authors should be discouraged to use calcMode paced with such types, because if there are already some formulas implemented due to SVG 1.1 or previous drafts, this will result in nonsense anyway. Note, that the related sample(s) in the test suite need to be fixed/removed too, for example animate-elem-53 (calcMode paced for points, <list-of-points> as already excluded in the current draft) 2. transform type scale is pretty fine now (according to the current formula it is more one 2-dimensional value, not two 1-dimensional values) as translate is and skewX/Y. rotate is more critical, but the new formula is already a big improvement compared to previous attempts. If the rotation center is not changed within the animation, this results indeed in a paced animation ;o) But it should be at least noted for authors, that in general no paced animation exists, if the rotation center changes within the animation, neither with the given formula nor with any other or whatever is already implemented in previous viewer versions or noted in SVG 1.1. Note, that in the test suite animate-elem-82 still describes the (wrong) behaviour of SVG 1.1 for paced animateTransform, and paced translate animateTransform is still insensitive to implemented timing and therefore useless (see long discussion from last year about this test including a provided alternative).
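For value types where a distance is defined (plain scalars are the simplest case), the paced rule the issue appeals to is straightforward: the time spent between consecutive values is proportional to the distance between them. A sketch of that computation (the function name and sample values are mine, not from the specification):

```python
# Paced calcMode for scalar values: the implicit keyTimes are proportional
# to the cumulative distance |v[i+1] - v[i]| along the value list.
def paced_key_times(values):
    gaps = [abs(b - a) for a, b in zip(values, values[1:])]
    total = sum(gaps)
    times, acc = [0.0], 0.0
    for g in gaps:
        acc += g
        times.append(acc / total)
    return times

print(paced_key_times([0, 10, 40, 50]))  # [0.0, 0.2, 0.8, 1.0]
```

For a <list-of-points>, by contrast, there is no single scalar "gap" between successive values, which is exactly why the issue argues the paced request should fall back to linear there.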
Natural logarithm, element-wise. The natural logarithm log is the inverse of the exponential function, so that log(exp(x)) = x. The natural logarithm is logarithm in base e.

Parameters
----------
x : array_like
    Input value.

Returns
-------
y : ndarray
    The natural logarithm of x, element-wise.

Notes
-----
Logarithm is a multivalued function: for each x there is an infinite number of z such that exp(z) = x. The convention is to return the z whose imaginary part lies in [-pi, pi]. For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag. For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.

References
----------
[R41] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
[R42] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm

Examples
--------
>>> np.log([1, np.e, np.e**2, 0])
array([ 0., 1., 2., -Inf])
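The behavior described in the Notes can be demonstrated directly: a real negative input cannot be expressed as a real number, so it yields nan (and raises the invalid floating-point error), while a complex input follows the principal branch, which is continuous from above on the cut:

```python
import numpy as np

with np.errstate(invalid='ignore'):   # silence the invalid-value warning
    r = np.log(-1.0)                  # real input: yields nan
z = np.log(-1 + 0j)                   # complex input: principal branch

print(r)   # nan
print(z)   # approximately 3.14159j, i.e. i*pi
```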
Here's the question you clicked on: I just want to make sure that I graphed this correctly: y=4x+6 Can someone please help me?
Apps in Education Pi Cubed: $9.99 A visual math application designed from the ground up for a touch-based interface. Unlike traditional calculators, Pi Cubed lets you construct, typeset, and instantly evaluate mathematical expressions using an interactive menu system. Quick Graph: FREE This is a powerful graphic calculator that takes full advantage of the multitouch display and the powerful graphic capabilities of the iPad and iPhone, both in 2D and 3D. A simple, yet intuitive interface that makes it easy to enter and edit equations and visualize them. Equation Genius: FREE Equation Genius will help you do your math in seconds. It supports: 1st degree equations (ax + b = c), Quadratic (2nd degree) equations (ax^2 + bx + c = 0), Cubic (3rd degree) equations (ax^3 + bx^2 + cx + d = 0), System of two unknowns (2 variables), System of three unknowns (3 variables). MyCalculator: $1.99 A scientific calculator for iPad and iPhone that solves as you type! MyCalculator also features an innovative memory system to store and recall answers. Simply touch answers to store them in multiple memory slots. Scientific Calculator: $1.99 Solve is a touch calculator replacement designed with simplicity, usability, and beauty in mind... Once you try Solve you'll never go back. Access to 4 different calculators: 1- Handwriting calculator, 2- Scientific calculator, 3- Quadratic equation solver, 4- Linear equations. Maths Ref: $2.99 With over 250 equations it won't be tough to find the one you need. In 3.1, we added a super smart search bar to aid your searching. Just pull down the list to reveal the bar and start typing. If you have sinh() in mind, type sinh or hyperbolic. You will be amazed how useful it is. iFormulas: FREE A clean, simple, easy to use mathematical formula application. Provides the basics to survive your Algebra, Calculus, Geometry or Trigonometry class. It does not give you answers but provides an easy navigational guide.
Over 330 different formulas, definitions, laws, properties, etc. Formulus: $1.99 A perfect study tool. It is a simple, easy to use, easy to navigate collection of the most important formulas and topics for high school and college students taking Calculus and Differential Equation Solver: FREE Math Homework Solver (MHS) is making math easy. Get solutions to dozens of problems in various topics such as: Arithmetic or Geometric sequence, Line problems, Equations, Proportions, Percentages, Complex Numbers, Inequalities, Vectors, etc. Fractions Calculator: $0.99 Just type in the fractions you want using the keypad. Select any of the editable fields and the keypad appears. The field being edited is shown with a yellow background. You can change more than one number at a time by selecting each field and typing the number. When you have set the fractions as you want, select 'OK'. DiaMath: $0.99 Configurable to quiz you with 10, 20 or 30 problems. All the problems are generated by the iPhone and randomized so there is a virtually unlimited set of possibilities. DiaMath provides three different levels of difficulty as well as allowing you to choose a range of numbers from 1 to 99. Additionally, DiaMath has a timer. Talking Calculator: $0.99 Hear all of your numbers and results announced as they appear on the screen! Great for the visually impaired or for anyone who wants to verify accuracy without looking back at the screen. Also a wonderful educational app for children who are learning math because it gives them audible reinforcement. Trigonometry Calculator: $2.99 Interactive Trigonometry Calculator contains all trigonometry functions and formulas. Users can solve any triangle, circle and unit-circle problem very easily using this calculator. No more trying to grab your calculator or trying to recall the sin, cos & tan rules. This is the only trigonometry calculator you will ever need.
Mathsmagic: $0.99 Amaze and delight others as you multiply, divide, and square at lightning fast speed. Learn and practice the tricks of mental math calculation in a fun and engaging application. Study any of the math tricks and then practice them as you progress through various levels of proficiency. Maths Plus: $0.99 By presenting you with randomly selected simple maths problems that require combining basic arithmetic knowledge, it helps you develop your problem-solving skills, and by allowing you to play with others, it adds a level of fun and challenge to this process. TanZen: $1.99 Tangrams have been challenging minds for centuries, with the deceptively simple goal of combining seven geometric pieces into a shape. Choose a puzzle to solve, and try to fit all seven game pieces within the shaded puzzle area without overlapping. TanZen will recognize when the puzzle is finished. Symbolic Calculator: $0.99 With full-featured algebra capabilities on par with high-end scientific calculators, this application lets you perform all kinds of operations, from the simplest multiplications to computing the integrals of the most complicated functions. Algebra Solver: $0.99 This program is NOT just a flash card application. Algebra Solver SOLVES math formulas and equations; after you solve them, you can even EMAIL them to others using the built-in email button. From simple formulas to more complex physics problems. My Formulas: $0.99 Fully customizable formula calculator. Keeping ease of use in mind, myFormulas provides a new and intuitive way of creating, editing and calculating diverse formulas. You can make a wide variety of formulas which can be as simple as a tip calculator and a unit converter, or as sophisticated as an electronic engineering calculator. The Mathmaster: $0.99 Need more help with your math facts? Chances are yes, and that's exactly why there is MathMaster.
MathMaster also includes a powerful settings page allowing you to fully customize which problems it generates. MathMaster supports a variety of mathematical operations. • 3D Plotter XL • 4D Spin • 600 Formulas • Abakus • AceKids Maths Game HD • Algebra Champ • Algebra Helper • Algebra Pro • Algebra Touch • All-in-One Maths • Alien Equation • Angles Calculator • Arithmetric • Arithfit • Basic maths • Calc It • Calculus Pro • Calculator XL • Calculator AXL • Calculator LCD HD PRO • Calcbot • CalCul • CalcTraining • Converter for iPad • Conversion Calculator • Couch Calculator • Digits Calculator for iPad • Dimension Calculator • Dr Pi • Easel Algebra • eSolver • Equation • Equations • Everyday Mathematics • Fastcalc • Flash Maths • FlowMath • Formula Pro • Fractions App • Fractals • Fractional Editor • Free Graphic Calculator • Fractions Helper • fx Pad • fx Integer • Geometry • Geometry Combat • Geometry Wars • Geometry Stash • Globe Convert • Grafly • Graph It • Graphing Calculator • Graphing Calculator PLUS • Graphulator • Gravity • GroupCalc • HiCalc • iArithmetric • iAttractor • iFractual • iFactor • iFactor Quadratics • Interactive Trigonometry Calculator • Integral Calculator • Intro to Maths • iMath • iMaths • iMathsLab • iMathematica • iMathematics 9 in 1 • iMeasure • IQ Gym • iTrig • Lemonade Stand • Long Division • MaCalc • MandelBrot • MathBoard • Math Bingo • Maths Cards • Math Drills • MathGirl Number Garden • Math Magic • Math Ninja HD • Mathematical Formulas • Mathination • Mathomatic • Maths Equation Solver • Maths Flash Cards • Maths Formulas • Maths Homework Solver • Maths Lab • Maths PRO • Math Ref • Maths Trivia Quiz • Maths Tricks • MeStudying Algebra • MightyMaths • Motion Maths • Multimeasures HD • MyAttractor • NineGaps - Puzzle Game • NoteCalc • Number Line • Oxford Dictionary of Mathematics • P183 Graphing Calculator • Pad Maths • PCalc Lite • Pi Cubed • Polar Sweep • Polysolve • PopMaths • PopMaths Basic Maths • Powerone Calculator • 
Probability PRO • Quadratic Master • Quadratic Equation Solver • Quick Multiplication • Quick Protactor • Rec Polar Note Calc HD • Rocket Maths • Ruler • Scientific Calculator • Shady Puzzles • Smart Convertor • Smart Maths • Solve24 • SpaceTime • Sumstacker • Statistics 1 • Statistic Toolkit • Statistics Visualiser • StatsMate • Talkulator • TallyZoo • Timestables • TouchPlot • Touchy Math • Trade First Subtraction • Trigger • UslideRule • Vectorama • Vector Calculator • Writeanswer • Wolfram Algebra Course
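Many of the solver apps listed above handle the quadratic case (ax^2 + bx + c = 0), which is only a few lines in any language via the standard quadratic formula; a minimal sketch (`solve_quadratic` is a hypothetical helper, not any app's API):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0, smallest first."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

print(solve_quadratic(1, -5, 6))  # (2.0, 3.0): x**2 - 5x + 6 = (x-2)(x-3)
```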
{"url":"http://appsineducation.blogspot.ca/p/maths-ipad-apps.html","timestamp":"2014-04-20T05:43:57Z","content_type":null,"content_length":"120255","record_id":"<urn:uuid:18727bff-9699-49c3-8d6a-f5b5cb812d64>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
3) Andre Candess manages an office supply store. One product in the store is computer paper. Andre knows that 10,000 boxes will be sold this year at a constant rate throughout the year. There are 250 working days per year and the lead-time is 3 days. The cost of placing an order is $30, while the holding cost is $15 per box per year. If Andre orders 500 boxes each time he orders from his supplier, what would his total annual inventory cost be (holding cost plus ordering cost)? B) $3,075 C) $3,750 D) $4,350 E) none of the above
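The page withholds the answer, but the arithmetic is standard: with annual demand D, order quantity Q, ordering cost S per order, and holding cost H per box per year, the total annual inventory cost is (D/Q)·S + (Q/2)·H. A quick check with the stated numbers:

```python
D = 10_000  # boxes sold per year
Q = 500     # boxes per order
S = 30      # cost per order, $
H = 15      # holding cost per box per year, $

ordering = (D / Q) * S  # 20 orders  x $30 = $600
holding = (Q / 2) * H   # 250 boxes avg x $15 = $3,750
print(ordering + holding)  # 4350.0, i.e. option D
```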
{"url":"http://expresshelpline.com/question.php?qid=2570","timestamp":"2014-04-18T16:15:36Z","content_type":null,"content_length":"15774","record_id":"<urn:uuid:55f652d5-226c-4844-ad2f-d0e5009ba9ae>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximate closest-point queries in high dimensions Results 1 - 10 of 37 - ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 1994 "... Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any po ..." Cited by 786 (31 self) Add to MetaCart Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ε, a data point p is a (1 + ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d, and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time. , 1998 "... The nearest neighbor problem is the following: Given a set of n points P = {p_1, ..., p_n} in some metric space X, preprocess P so as to efficiently answer queries which require finding the point in P closest to a query point q ∈ X. We focus on the particularly interesting case of the d-dimens ..."
Cited by 715 (33 self) Add to MetaCart The nearest neighbor problem is the following: Given a set of n points P = {p_1, ..., p_n} in some metric space X, preprocess P so as to efficiently answer queries which require finding the point in P closest to a query point q ∈ X. We focus on the particularly interesting case of the d-dimensional Euclidean space where X = R^d under some l_p norm. Despite decades of effort, the current solutions are far from satisfactory; in fact, for large d, in theory or in practice, they provide little improvement over the brute-force algorithm which compares the query point to each data point. Of late, there has been some interest in the approximate nearest neighbors problem, which is: Find a point p ∈ P that is an ε-approximate nearest neighbor of the query q in that for all p' ∈ P, d(p, q) ≤ (1 + ε)d(p', q). We present two algorithmic results for the approximate version that significantly improve the known bounds: (a) preprocessing cost polynomial in n and d, and a trul... - In Int. Conf. on Database Theory, 1999 "... We explore the effect of dimensionality on the "nearest neighbor" problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the fa ..." Cited by 292 (1 self) Add to MetaCart We explore the effect of dimensionality on the "nearest neighbor" problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions.
These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus , 1999 "... Two different techniques of browsing through a collection of spatial objects stored in an R-tree spatial data structure on the basis of their distances from an arbitrary spatial query object are compared. The conventional approach is one that makes use of a k-nearest neighbor algorithm where k is kn ..." Cited by 291 (19 self) Add to MetaCart Two different techniques of browsing through a collection of spatial objects stored in an R-tree spatial data structure on the basis of their distances from an arbitrary spatial query object are compared. The conventional approach is one that makes use of a k-nearest neighbor algorithm where k is known prior to the invocation of the algorithm. Thus if m > k neighbors are needed, the k-nearest neighbor algorithm needs to be reinvoked for m neighbors, thereby possibly performing some redundant computations. The second approach is incremental in the sense that having obtained the k nearest neighbors, the (k+1)-st neighbor can be obtained without having to calculate the k+1 nearest neighbors from scratch. The incremental approach finds use when processing complex queries where one of the conditions involves spatial proximity (e.g., the nearest city to Chicago with population greater than a million), in which case a query engine can make use of a pipelined strategy. A general incremental nearest neighbor algorithm is presented that is applicable to a large class of hierarchical spatial data structures.
This algorithm is adapted to the R-tree and its performance is compared to an existing k-nearest neighbor algorithm for R-trees [45]. Experiments show that the incremental nearest neighbor algorithm significantly outperforms the k-nearest neighbor algorithm for distance browsing queries in a spatial database that uses the R-tree as a spatial index. Moreover, the incremental nearest neighbor algorithm also usually outperforms the k-nearest neighbor algorithm when applied to the k-nearest neighbor problem for the R-tree, although the improvement is not nearly as large as for distance browsing queries. In fact, we prove informally that, at any step in its execution, the incremental... - in Proc. 11th Annu. ACM Sympos. Comput. Geom, 1995 "... The range searching problem is a fundamental problem in computational geometry, with numerous important applications. Most research has focused on solving this problem exactly, but lower bounds show that if linear space is assumed, the problem cannot be solved in polylogarithmic time, except for the ..." Cited by 86 (20 self) Add to MetaCart The range searching problem is a fundamental problem in computational geometry, with numerous important applications. Most research has focused on solving this problem exactly, but lower bounds show that if linear space is assumed, the problem cannot be solved in polylogarithmic time, except for the case of orthogonal ranges. In this paper we show that if one is willing to allow approximate ranges, then it is possible to do much better. In particular, given a bounded range Q of diameter w and ε > 0, an approximate range query treats the range as a fuzzy object, meaning that points lying within distance εw of the boundary of Q either may or may not be counted. We show that in any fixed dimension d, a set of n points in R^d can be preprocessed in O(n log n) time and O(n) space, such that approximate queries can be answered in O(log n + (1/ε)^d) time.
The only assumption we make about ranges is that the intersection of a range and a d-dimensional cube can be answered in const... - PAMI , 2003 "... Complex data types—such as images, documents, DNA sequences, etc.—are becoming increasingly important in modern database applications. A typical query in many of these applications seeks to find objects that are similar to some target object, where (dis)similarity is defined by some distance functi ..." Cited by 80 (4 self) Add to MetaCart Complex data types—such as images, documents, DNA sequences, etc.—are becoming increasingly important in modern database applications. A typical query in many of these applications seeks to find objects that are similar to some target object, where (dis)similarity is defined by some distance function. Often, the cost of evaluating the distance between two objects is very high. Thus, the number of distance evaluations should be kept at a minimum, while (ideally) maintaining the quality of the result. One way to approach this goal is to embed the data objects in a vector space so that the distances of the embedded objects approximates the actual distances. Thus, queries can be performed (for the most part) on the embedded objects. In this paper, we are especially interested in examining the issue of whether or not the embedding methods will ensure that no relevant objects are left out (i.e., there are no false dismissals and, hence, the correct result is reported). Particular attention is paid to the SparseMap, FastMap, and MetricMap embedding methods. SparseMap is a variant of Lipschitz embeddings, while FastMap and MetricMap are inspired by dimension reduction methods for Euclidean spaces (using KLT or the related PCA and SVD). We show that, in general, none of these embedding methods guarantee that queries on the embedded objects have no false dismissals, while also demonstrating the limited cases in which the guarantee does hold. 
Moreover, we describe a variant of SparseMap that allows queries with no false dismissals. In addition, we show that with FastMap and MetricMap, the distances of the embedded objects can be much greater than the actual distances. This makes it impossible (or at least impractical) to modify FastMap and MetricMap to guarantee no false dismissals. , 1997 "... This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data structures for proximity problems on point sets in R^D. In particular, the closest pair problem, th ..." Cited by 65 (14 self) Add to MetaCart This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data structures for proximity problems on point sets in R^D. In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are discussed in detail. Contents: 1 Introduction; 2 The static closest pair problem; 2.1 Preliminary remarks; 2.2 Algorithms that are optimal in the algebraic computation tree model; 2.2.1 An algorithm based on the Voronoi diagram; 2.2.2 A divide-and-conquer algorithm; 2.2.3 A plane sweep algorithm; 2.3 A deterministic algorithm that uses indirect addressing; 2.3.1 The degraded grid ... , 1998 "... This paper proposes new methods to answer approximate nearest neighbor queries on a set of n points in d-dimensional Euclidean space.
For any fixed constant d, a data structure with O(ε^{(1-d)/2} n log n) preprocessing time and O(ε^{(1-d)/2} log n) query time achieves approximation factor ..." Cited by 57 (3 self) Add to MetaCart This paper proposes new methods to answer approximate nearest neighbor queries on a set of n points in d-dimensional Euclidean space. For any fixed constant d, a data structure with O(ε^{(1-d)/2} n log n) preprocessing time and O(ε^{(1-d)/2} log n) query time achieves approximation factor 1 + ε for any given 0 < ε < 1; a variant reduces the ε-dependence by a factor of ε^{-1/2}. For any arbitrary d, a data structure with O(d^2 n log n) preprocessing time and O(d^2 log n) query time achieves approximation factor O(d^{3/2}). Applications to various proximity problems are discussed. 1 Introduction Let P be a set of n point sites in d-dimensional space R^d. In the well-known post office problem, we want to preprocess P into a data structure so that a site closest to a given query point q (called the nearest neighbor of q) can be found efficiently. Distances are measured under the Euclidean metric. The post office problem has many applications within computational... - IN PROC. 10TH ANNU. ACM SYMPOS. COMPUT. GEOM, 1994 "... Ray (segment) shooting is the problem of determining the first intersection between a ray (directed line segment) and a collection of polygonal or polyhedral obstacles. In order to process queries efficiently, the set of obstacle polyhedra is usually preprocessed into a data structure. In this pa ..." Cited by 48 (10 self) Add to MetaCart Ray (segment) shooting is the problem of determining the first intersection between a ray (directed line segment) and a collection of polygonal or polyhedral obstacles. In order to process queries efficiently, the set of obstacle polyhedra is usually preprocessed into a data structure.
In this paper, we propose a query-sensitive data structure for ray shooting, which means that the performance of our data structure depends on the "local" geometry of obstacles near the query segment. We measure the complexity of the local geometry near the segment by a parameter called the simple cover complexity , denoted by scc(s) for a segment s. Our data structure consists of a subdivision that partitions the space into a collection of polyhedral cells of O(1) complexity. We answer a segment shooting query by walking along the segment through the subdivision. Our first result is that, for any fixed dimension d, there exists a simple hierarchical subdivision in which no query segment s , 1999 "... In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [38, 37, 40 ..." Cited by 47 (2 self) Add to MetaCart In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [38, 37, 40] show that in some sense it is possible to avoid the curse of dimensionality for the approximate nearest neighbor search problem. But must the exact nearest neighbor search problem suffer this curse? We provide some evidence in support of the curse. Specifically we investigate the exact nearest neighbor search problem and the related problem of exact partial match within the asymmetric communication model first used by Miltersen [43] to study data structure problems. We derive non-trivial asymptotic lower bounds for the exact problem that stand in contrast to known algorithms for approximate nearest neighbor search. 1
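The (1 + ε)-approximation guarantee that recurs in these abstracts is easy to state in code. The toy check below is not any of the cited data structures (whose whole point is to avoid the brute-force scan); it merely tests the definition against an exhaustive search:

```python
import math
import random

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_approx_nn(p, q, points, eps):
    """p is a (1+eps)-approximate nearest neighbor of q if its distance
    to q is within a factor (1+eps) of the true nearest distance."""
    best = min(dist(x, q) for x in points)
    return dist(p, q) <= (1 + eps) * best

random.seed(0)
pts = [tuple(random.random() for _ in range(5)) for _ in range(100)]
q = tuple(random.random() for _ in range(5))
exact = min(pts, key=lambda x: dist(x, q))
assert is_approx_nn(exact, q, pts, eps=0.1)  # the exact NN always qualifies
```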
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=97433","timestamp":"2014-04-19T18:06:38Z","content_type":null,"content_length":"41762","record_id":"<urn:uuid:f1c02ca3-53e8-4e49-a32a-24c9c20f13e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
PostgreSQL Extension Network
godel_logic 1.1.0 lukasiewicz_logic 1.1.0 product_logic 1.1.0
Fuzzy logic
This extension provides basic logical operators (conjunction, disjunction, implication and negation) for three basic fuzzy logics - Łukasiewicz, Gödel and product. For the Łukasiewicz logic, there are also operators for weak conjunction and disjunction. Technically, there are three extensions - one for each logic. You have to choose just one of them, as all of them define the same operators.
• godel_logic - Gödel logic
• lukasiewicz_logic - Łukasiewicz logic
• product_logic - product logic
So let's say you've chosen Łukasiewicz logic, and therefore want to install the lukasiewicz_logic extension. If you're on 9.1 (or newer), all you need to do to install it is
$ make install
and then
db=# CREATE EXTENSION lukasiewicz_logic;
This should create a fuzzy_boolean data type (technically a FLOAT domain) and four basic logical operators (shared by all three extensions):
• & - conjunction (AND)
• | - disjunction (OR)
• ! - negation (NOT)
• -> - implication
and two logical operators (just for Łukasiewicz logic):
• && - weak conjunction
• || - weak disjunction
Now that the logic is installed, let's use it. Using the extension is quite straightforward - get a fuzzy boolean value from somewhere and apply the operators to it. E.g. you can do this
db=# SELECT (0.5 & 0.5) -> (!0.3 | 0.3);
or you may create a table with a fuzzy_boolean column. Or you can define predicates - functions returning fuzzy_boolean values - and then use them like this
db=# SELECT is_fast(speed) & (! is_expensive(price)) FROM cars;
and so on. The first thing to realize is that with fuzzy logic the world is not just black and white anymore. There's not just perfect truth and falsehood - there are many degrees of truth. The unpleasant consequence is that indexing does not work as efficiently as with plain boolean values.
You can make it work with simple conditions like these
db=# SELECT * FROM cars WHERE is_fast > 0.8
or with a predicate and an expression index
db=# SELECT * FROM cars WHERE is_fast(speed) > 0.8
But once you start combining the conditions, the indexing does not work. Consider for example this query
db=# SELECT * FROM cars WHERE is_fast & (! is_expensive) > 0.75
With plain boolean conditions, it could be evaluated using a bitmap index scan, but with fuzzy logic that's not possible.
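The README does not spell out the operator semantics. Assuming the extension follows the standard textbook definitions of the Łukasiewicz connectives (worth confirming against its source), they can be sketched in Python:

```python
# Textbook Łukasiewicz semantics on truth values in [0, 1]
# (an assumption about what the extension's &, |, !, -> implement).
def l_and(a, b):   # strong conjunction, &
    return max(0.0, a + b - 1.0)

def l_or(a, b):    # strong disjunction, |
    return min(1.0, a + b)

def l_not(a):      # negation, !
    return 1.0 - a

def l_impl(a, b):  # implication, ->
    return min(1.0, 1.0 - a + b)

# Weak conjunction (&&) and disjunction (||) are plain min and max.
# The README's example, (0.5 & 0.5) -> (!0.3 | 0.3):
print(l_impl(l_and(0.5, 0.5), l_or(l_not(0.3), 0.3)))  # 1.0
```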
{"url":"http://www.pgxn.org/dist/fuzzy_logic/1.1.0/","timestamp":"2014-04-19T04:36:47Z","content_type":null,"content_length":"10728","record_id":"<urn:uuid:8e1c4ee4-d51c-47b8-bdd8-e91524b6e56c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: You are planning to use a ceramic tile design in your new bathroom. The tiles are blue-and-white equilateral triangles. You decide to arrange the blue tiles in a hexagonal shape as shown. If the side of each tile measures 7 centimeters, what will be the exact area of each hexagonal shape? (1 point) 21 cm² 73.5√3 cm² 98√3 cm² 1,029 cm²
Best Response: I found the answer nevermind :)
Best Response: Oh, ok :), lol, I was going to try and solve it
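For anyone checking the arithmetic: a regular hexagon of side s decomposes into six equilateral triangles of area (√3/4)s², so with s = 7 cm the exact area is 6 · (√3/4) · 49 = 73.5√3 cm² (about 127.3 cm²), i.e. the second answer choice. A quick check:

```python
import math

s = 7                                  # triangle side, cm
triangle = (math.sqrt(3) / 4) * s**2   # area of one equilateral triangle
hexagon = 6 * triangle                 # a regular hexagon is six such triangles
assert abs(hexagon - 73.5 * math.sqrt(3)) < 1e-9
print(round(hexagon, 1))               # 127.3
```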
{"url":"http://openstudy.com/updates/51606a1ae4b06dc163b98751","timestamp":"2014-04-16T17:26:17Z","content_type":null,"content_length":"33566","record_id":"<urn:uuid:fcab23b4-c21a-4d6c-ad82-be07edc8c01d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
AP Calculus In this section, you'll find study notes for AP calculus AB/BC. Each study guide comes complete with practice problems and thorough solution explanations. There are also helpful study strategies on how to tactfully answer multiple-choice and free-response questions on the AP exam. Browse through this AP calculus information center and you'll be one step closer to getting a 5 on your AP exam.
{"url":"http://www.education.com/study-help/ap-notes-calculus/","timestamp":"2014-04-19T04:39:42Z","content_type":null,"content_length":"101798","record_id":"<urn:uuid:d4af6e63-3cca-4e6f-8a2e-0529d6bad195>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
FIG. 1. Measures of the chain structure: (a) Radius of gyration plotted against N. The line with slope +1 is a guide to the eye to show agreement with the scaling law. (b) Mean square internal distance d(s) for the chain length N = 100 scaled by the bead distance s. Empty symbols illustrate the system without slip-springs, i.e., classical DPD simulation. Filled symbols show the behavior of the system with slip-springs.
FIG. 2. Mean square displacement of the central bead g_{1,mid}(t) scaled by t^{1/2} for chains without (dashed curves) and with slip-springs (solid curves). The chain length N varies from 10 to 100. The line is a guide to the eye and shows the t^{1} regime.
FIG. 3. Mean square displacement of the central bead g_{1,mid}(t) scaled by t^{1/4} for chains without (dashed curves) and with slip-springs (solid curves). The chain length N varies from 10 to 100. The two lines are guides to the eye and show the t^{1} and t^{1/2} regime, respectively.
FIG. 4. Anisotropy coefficient A_{cm}(t) for chains without (empty symbols) and with slip-springs (filled symbols). The chain length N = 100 is considered either completely (circles) or without the last 20 beads at both chain ends (squares).
FIG. 5. Zero shear relaxation modulus G(t) for chains without (dashed curves) and with slip-springs (solid curves). The chain length N ranges from 40 to 100. The straight line is a guide to the eye and demonstrates the Rouse scaling with t^{-1/2}.
FIG. 6. Zero shear relaxation modulus G(t) for chains without (dashed curves) and with slip-springs (solid curves) multiplied by t^{1/2} to point out Rouse behavior as a horizontal line. The chain length N ranges from 40 to 100.
FIG. 7.
Diffusion coefficient of the center of mass D [ com ] scaled by 6N as a function of chain length for chains without (empty symbols) and with slip-springs (filled symbols). The two lines are guides to the eye and demonstrate the diffusion power law for reptation dynamics (−2) and the experimentally observed scaling behavior (−2.3). FIG. 8. Rotational relaxation time τ [ rot ] scaled by N ^−2 against the chain length N for chains without (filled symbols) and with slip-springs (empty symbols). The two lines are guides to the eye and show the power 3 scaling law for reptation dynamics and power 3.4 scaling behavior observed from experiments. FIG. 9. Relaxation modulus G(t) in comparison with KG simulations to evaluate computational efficiency of the DPD slip-spring model. The KG chains are shown by symbols and their length N = 50, 100 and 200 from left to right. The DPD slip-spring chains are shown by solid curves with the length N = 8, 15, and 30 from left to right. FIG. 10. Mean square displacement of the central bead g [1,mid ](t) scaled by t ^1/4 for systems with different MC sequence lengths. The DPD sequence length is same for all systems, i.e., n[DPD] = 500. Hereby, n[MC] = 500 is the system on which the dynamical analysis from the Results section was carried out. FIG. 11. Mean square displacement of the central bead g [1,mid ](t) scaled by t ^1/4 for systems with different DPD and MC sequence lengths. The ratio n[DPD]/n[MC] is unity for all systems. Hereby, n[DPD] = n [MC] = 500 refers to the system on which the dynamical analysis from the Results section was carried out. FIG. 12. Mean square displacement of the central bead g [1,mid ](t) scaled by t ^1/2 to amplify the onset of the disengagement time τ [ d ] for systems with different DPD and MC sequence lengths. The ratio n [DPD]/n[MC] is unity for all systems. Hereby, n[DPD] = n[MC] = 500 refers to the system on which the dynamical analysis from the Results section was carried out. Article metrics loading...
{"url":"http://scitation.aip.org/content/aip/journal/jcp/138/10/10.1063/1.4794156","timestamp":"2014-04-19T00:36:03Z","content_type":null,"content_length":"85140","record_id":"<urn:uuid:a45e5af0-11b3-4681-988c-aeea543968db>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Deflections Due To Seismic Loads
haynewp (Structural) 6 Mar 04 12:46
I was under the impression that for single-story buildings there was no drift limit (IBC 2000, Note "a" under Table 1617.3). However, if you are worried about how finishes will respond to the actual deflection in an earthquake, E should not be divided by 1.4 (1617.4.6.1). UBC 97 looks to me, in 1630.9.1, to reference the Section 1612.2 combinations when using ASD; 1612.2 uses 1.0E, not E/1.4.
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=89118","timestamp":"2014-04-19T14:29:27Z","content_type":null,"content_length":"29077","record_id":"<urn:uuid:a171be13-c7c0-4594-82dc-ab2a86b91aa9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
How does the distribution of Erdős number evolve over time? How to build a model to fit the real data?
Let $E(n,t)$ be the number of mathematicians with finite positive Erdős number $n$ at time $t$. As old mathematicians leave and new mathematicians come, how does $E(n,t)$ change over time? We can consider only the collaborations between two mathematicians, and assume that every year the numbers of new articles and new mathematicians are both constant. We can fix the length of the career of every mathematician. Feel free to add any other assumptions. Some obvious facts are
• for every $n>0$, there exists a time $T(n)$ such that $E(n,t)$ is constant for $t>T(n)$;
• the average Erdős number will increase over time, but is it linear?
• ...
It would be nice to find a model for the growth of the network that can fit the real data. But since we are not far from Erdős ($T(1)\leq$ year 1996), no data is available for large $t$, unfortunately. The distribution seems to be converging towards low Erdős numbers at this time.
random-graphs graph-theory stochastic-processes
– I would reformulate the question like this: how to build the model such that it will fit real data? (It is difficult but theoretically possible to get the actual value of E(n,t) from the real world. It would be nice then to compare it with theory.) – Alexander Chervov Jun 20 '12 at 9:13
– Agree. Updated. – Hao CHEN Jun 20 '12 at 9:23
– PS Good question. I somewhat envy that it did not come to my mind :):) (Joking). I always keep in my mind the question of what kind of mathematically precise questions we ask about social networks. That is one of them. – Alexander Chervov Jun 20 '12 at 9:53
– We can ask the same about the "mathoverflow number" (and the answer can be much easier to check in practice). I mean, let's say two users of MO are "coauthors" if they contribute to the same question. Let us select one user (someone like Erdős, e.g. someone who has many coauthors) and form the "mathoverflow number" as the distance to this user. Then we can ask the same question, and actually more: does the evolution depend on the initial user? Does the distribution depend on the initial user? How much statistics do we need to get stable results? – Alexander Chervov Jun 20 '12 at 10:07
– A related MO question, with references: "The diameter of the Erdös component of the collaboration graph" mathoverflow.net/questions/45586 – Joseph O'Rourke Jun 20 '12 at 10:20
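As a rough illustration of the kind of model the question asks for, the sketch below grows a collaboration graph by a constant number of new authors per year, each coauthoring with one uniformly random earlier author, and reads off the Erdős-number distribution as BFS distances from vertex 0. All names, the arrival rate, and the attachment rule are my own assumptions, not anything proposed in the thread.

```java
// Toy model for E(n,t): constant arrivals per year, one random coauthor each.
// Distances to vertex 0 ("Erdős") give the Erdős-number distribution.
import java.util.*;

public class ErdosModel {
    // BFS distances from vertex 0 over an undirected adjacency list.
    public static int[] distances(List<List<Integer>> adj) {
        int[] d = new int[adj.size()];
        Arrays.fill(d, -1);
        d[0] = 0;
        ArrayDeque<Integer> q = new ArrayDeque<>();
        q.add(0);
        while (!q.isEmpty()) {
            int u = q.poll();
            for (int v : adj.get(u)) if (d[v] < 0) { d[v] = d[u] + 1; q.add(v); }
        }
        return d;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        List<List<Integer>> adj = new ArrayList<>();
        adj.add(new ArrayList<>());                   // vertex 0 = Erdős
        for (int year = 0; year < 50; year++) {       // constant arrivals per year
            for (int k = 0; k < 20; k++) {
                int v = adj.size();
                adj.add(new ArrayList<>());
                int u = rng.nextInt(v);               // coauthor with a random elder
                adj.get(v).add(u);
                adj.get(u).add(v);
            }
        }
        int[] d = distances(adj);
        double mean = Arrays.stream(d).average().getAsDouble();
        System.out.println("mean Erdős number after 50 years: " + mean);
    }
}
```

Running this repeatedly with different horizons gives one crude way to probe whether the mean distance grows linearly in t under a given attachment rule; fitting real collaboration data would of course require a more realistic rule.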
{"url":"http://mathoverflow.net/questions/100099/how-does-the-distribution-of-erdos-number-evolve-over-time-how-to-build-a-mode","timestamp":"2014-04-24T19:13:51Z","content_type":null,"content_length":"54759","record_id":"<urn:uuid:50e0a871-9fec-4bf0-bce5-fd9b01e610d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
SparkNotes: SAT Physics: Circuits
14.1 Voltage
14.2 Current
14.3 Resistance
14.4 Energy, Power, and Heat
14.5 Circuits
14.6 Capacitors
14.7 Key Formulas
14.8 Practice Questions
14.9 Explanations
Most SAT II Physics questions on circuits will show you a circuit diagram and ask you questions about the current, resistance, or voltage at different points in the circuit. These circuits will usually consist of a power source and one or more resistors arranged in parallel or in series. You will occasionally encounter other circuit elements, such as a voltmeter, an ammeter, a fuse, or a capacitor. Reading the diagrams is not difficult, but since there will be a number of questions on the test that rely on diagrams, it’s important that you master this skill.
Here’s a very simple circuit diagram: Zigzags represent resistors, and a pair of parallel, unequal lines represents a battery cell. The longer line is the positive terminal and the shorter line is the negative terminal. That means the current flows from the longer line around the circuit to the shorter line. In the diagram above, the current flows counterclockwise. Often, more than one set of unequal parallel lines are arranged together; this just signifies a number of battery cells arranged in series.
In the diagram above, a six-volt battery drives a 12-ohm resistor. You don’t really need to refer to the diagram in order to solve this problem. As long as you know that there’s a circuit with a six-volt battery and a 12-ohm resistor, you need only apply Ohm’s Law and the formula for power. Since I = V/R, the current is I = (6 V)/(12 Ω) = 0.5 A. The power is P = IV = (0.5 A)(6 V) = 3 W.
Resistors in Series
Two resistors are in series when they are arranged one after another on the circuit, as in the diagram below. The same amount of current flows first through one resistor and then the other, since the current does not change over the length of a circuit.
However, each resistor causes a voltage drop, and if there is more than one resistor in the circuit, the sum of the voltage drops across each resistor in the circuit is equal to the total voltage drop in the circuit. The total resistance in a circuit with two or more resistors in series is equal to the sum of the resistances of all the resistors: a circuit would have the same resistance if it had three resistors in series, or just one big resistor with the resistance of the original three resistors put together. In equation form, this principle is quite simple: in a circuit with two resistors in series, R_total = R1 + R2.
In the figure above, a battery supplies 30 V to a circuit with a 10 Ω resistor and a 20 Ω resistor in series. What is the current in the circuit?
We can determine the current in the circuit by applying Ohm’s Law: I = V/R. We know what V is, but we need to calculate the total resistance in the circuit by adding together the individual resistances of the two resistors in series: R_total = 10 Ω + 20 Ω = 30 Ω. When we know the total resistance in the circuit, we can determine the current through the circuit with a simple application of Ohm’s Law: I = (30 V)/(30 Ω) = 1 A.
What is the voltage drop across each resistor?
Determining the voltage drop across an individual resistor in a series of resistors simply requires a reapplication of Ohm’s Law. We know the current through the circuit, and we know the resistance of that individual resistor, so the voltage drop across that resistor is simply the product of the current and the resistance: the drops are (1 A)(10 Ω) = 10 V and (1 A)(20 Ω) = 20 V. Note that the voltage drop across the two resistors is 10 V + 20 V = 30 V, which is the total voltage drop across the circuit.
Resistors in Parallel
Two resistors are in parallel when the circuit splits in two and one resistor is placed on each of the two branches. In this circumstance, it is often useful to calculate the equivalent resistance as if there were only one resistor, rather than deal with each resistor individually.
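The series arithmetic in the example above can be checked with a few lines of code. This Java sketch uses 10 Ω and 20 Ω, the resistor values implied by the stated 10 V and 20 V drops; the class and method names are my own.

```java
// Series-resistor check: total resistance is the plain sum, then Ohm's law gives I.
public class SeriesCircuit {
    // Total resistance of resistors in series: R1 + R2 + ...
    public static double seriesResistance(double... rs) {
        double total = 0;
        for (double r : rs) total += r;
        return total;
    }

    public static void main(String[] args) {
        double v = 30.0, r1 = 10.0, r2 = 20.0;       // values from the worked example
        double rTotal = seriesResistance(r1, r2);    // 30 ohms
        double i = v / rTotal;                       // I = V/R -> 1 A
        System.out.println("I = " + i + " A, drops = "
                + i * r1 + " V and " + i * r2 + " V");
    }
}
```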
Calculating the equivalent resistance of two or more resistors in parallel is a little more complicated than calculating the total resistance of two or more resistors in series. Given two resistors, R1 and R2, the equivalent resistance satisfies 1/R_eq = 1/R1 + 1/R2. When a circuit splits in two, the current is divided between the two branches, though the current through each resistor will not necessarily be the same. The voltage drop must be the same across both resistors, so the current will be stronger for a weaker resistor, and vice versa.
What is the total resistance in the circuit?
Answering this question is just a matter of plugging numbers into the formula for resistors in parallel. So R_eq = 4 Ω.
What is the current running through R1 and R2?
We know that the total voltage drop is 12 V, and since the voltage drop is the same across all the branches of a set of resistors in parallel, we know that the voltage drop across both resistors will be 12 V. That means we just need to apply Ohm’s Law twice, once to each resistor: I1 = (12 V)/R1 and I2 = (12 V)/R2. If we apply Ohm’s Law to the total resistance in the system, we find that the total current is (12 V)/(4 Ω) = 3 A.
What is the power dissipated in the resistors?
Recalling that P = I^2R, we can solve for the power dissipated through each resistor individually, and in the circuit as a whole. Let P1 and P2 be the power dissipated in each branch. Note that P1 + P2 equals the total power dissipated in the circuit.
Circuits with Resistors in Parallel and in Series
Now that you know how to deal with resistors in parallel and resistors in series, you have all the tools to approach a circuit that has resistors both in parallel and in series. Let’s take a look at an example of such a circuit, and follow two important steps to determine the total resistance of the circuit.
1. Determine the equivalent resistance of the resistors in parallel. We’ve already learned to make such calculations. This one is no different: the equivalent resistance is 6 Ω.
2. Treating the equivalent resistance of the resistors in parallel as a single resistor, calculate the total resistance by adding resistors in series. The diagram above gives us two resistors in series.
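The parallel formula can be checked the same way. The 6 Ω and 12 Ω pair below is an assumption of mine, chosen only because it reproduces the 4 Ω equivalent resistance and 12 V supply of the example.

```java
// Parallel-resistor check: 1/R_eq = 1/R1 + 1/R2, equal voltage on both branches.
public class ParallelCircuit {
    public static double parallelResistance(double r1, double r2) {
        return 1.0 / (1.0 / r1 + 1.0 / r2);
    }

    public static void main(String[] args) {
        double v = 12.0, r1 = 6.0, r2 = 12.0;        // r1, r2 are assumed values
        double rEq = parallelResistance(r1, r2);     // ~4 ohms
        System.out.println("R_eq = " + rEq + " ohms, branch currents = "
                + v / r1 + " A and " + v / r2 + " A"); // weaker resistor carries more
    }
}
```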
Calculating the total resistance of the circuit couldn’t be easier: just add the resistance of the series resistor to the 6 Ω equivalent resistance of the parallel pair. Now that you’ve looked at this two-step technique for dealing with circuits in parallel and in series, you should have no problem answering a range of other questions. Consider again the circuit whose total resistance we have calculated. What is the current through each resistor? What is the power dissipated in each resistor?
What is the current running through each resistor?
We know that resistors in series do not affect the current, so the current through the series resistor is the same as the total current in the circuit, which works out to 3 A. But be careful before you calculate the current through the resistors in parallel. The battery supplies 30 V, and the sum of the voltage drops across the series resistor and the parallel pair must equal 30 V, so the voltage drop across just the resistors in parallel is less than 30 V. If we treat the resistors in parallel as a single equivalent resistor of 6 Ω, the drop across the pair is (3 A)(6 Ω) = 18 V. Now, recalling that current is divided unevenly between the branches of a set of resistors in parallel, we can calculate the current through each branch by applying Ohm’s Law with this drop.
What is the power dissipated across each resistor?
Now that we know the current across each resistor, calculating the power dissipated is a straightforward application of the formula P = I^2R.
Common Devices in Circuits
In real life (and on SAT II Physics) it is possible to hook devices up to a circuit that will read off the potential difference or current at a certain point in the circuit. These devices provide SAT II Physics with a handy means of testing your knowledge of circuits.
Voltmeters and Ammeters
A voltmeter measures the voltage across a wire. It is connected in parallel with the stretch of wire whose voltage is being measured, since an equal voltage crosses both branches of two wires connected in parallel. An ammeter is connected in series. It measures the current passing through that point on the circuit.
What does the ammeter read?
Since the ammeter is not connected in parallel with any other branch in the circuit, the reading on the ammeter will be the total current in the circuit.
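The two-step reduction can be scripted as well. The 4 Ω series resistor below is my own inference rather than a stated value, chosen because it is the one consistent with a 30 V supply, a 3 A total current, and the 6 Ω parallel equivalent.

```java
// Two-step reduction: collapse the parallel pair, then add the series resistor.
public class CombinedCircuit {
    public static double parallel(double r1, double r2) {
        return 1.0 / (1.0 / r1 + 1.0 / r2);
    }

    public static void main(String[] args) {
        double v = 30.0;
        double rParallelEq = 6.0;               // step 1 result, as in the text
        double rSeries = 4.0;                   // assumed; consistent with 30 V, 3 A
        double rTotal = rSeries + rParallelEq;  // step 2: series addition -> 10 ohms
        double i = v / rTotal;                  // 3 A through the series resistor
        double vParallel = i * rParallelEq;     // 18 V across the parallel pair
        System.out.println(rTotal + " ohms total, " + i + " A, "
                + vParallel + " V across the parallel pair");
    }
}
```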
We can use Ohm’s Law to determine the total current in the circuit, but only if we first determine the total resistance in the circuit. This circuit consists of resistors in parallel and in series, an arrangement we have looked at before. Following the same two steps as we did last time, we can calculate the total resistance in the circuit:
1. Determine the equivalent resistance of the resistors in parallel. We can conclude that the parallel pair is equivalent to a single 4 Ω resistor.
2. Treating the equivalent resistance of the resistors in parallel as a single resistor, calculate the total resistance by adding resistors in series.
Given that the total resistance is 9 Ω and the battery supplies 9 V, Ohm’s Law tells us that the total current is (9 V)/(9 Ω) = 1 A. The ammeter will read 1 A.
What does the voltmeter read?
The voltmeter is connected in parallel with the pair of resistors in parallel. We know that the total voltage drop across the circuit is 9 V. Some of this voltage drop will take place across the series resistor; subtracting that drop from 9 V, we will have the voltage drop across the resistors in parallel, which is what the voltmeter measures. If the voltage drop across the series resistor is 5 V, then the voltage drop across the resistors in parallel is 9 V – 5 V = 4 V. This is what the voltmeter reads.
A fuse burns out if the current in a circuit is too large. This prevents the equipment connected to the circuit from being damaged by the excess current. For example, if the ammeter in the previous problem were replaced by a half-ampere fuse, the fuse would blow and the circuit would be interrupted. Fuses rarely come up on SAT II Physics. If a question involving fuses appears, it will probably ask you whether or not the fuse in a given circuit will blow under certain circumstances.
Kirchhoff’s Rules
Gustav Robert Kirchhoff came up with two simple rules that simplify many complicated circuit problems. The junction rule helps us to calculate the current through resistors in parallel and other points where a circuit breaks into several branches, and the loop rule helps us to calculate the voltage at any point in a circuit.
Let’s study Kirchhoff’s Rules in the context of the circuit represented below. Before we can apply Kirchhoff’s Rules, we have to draw arrows on the diagram to denote the direction in which we will follow the current. You can draw these arrows in any direction you please—they don’t have to denote the actual direction of the current. As you’ll see, so long as we apply Kirchhoff’s Rules correctly, it doesn’t matter in what directions the arrows point. Let’s draw in arrows and label the six vertices of the circuit. We repeat, these arrows do not point in the actual direction of the current. For instance, we have drawn the current flowing into the positive terminal and out of the negative terminal of the battery.
The Junction Rule
The junction rule deals with “junctions,” where a circuit splits into more than one branch, or where several branches reunite to form a single wire. The rule states: The current coming into a junction equals the current coming out.
This rule comes from the conservation of charge: the charge per unit time going into the junction must equal the charge per unit time coming out. In other words, when a circuit separates into more than one branch—as with resistors in parallel—the total current is split between the different branches. The junction rule tells us how to deal with resistors in parallel and other cases of circuits branching in two or more directions. If we encounter three resistors in parallel, we know that the sum of the current through all three resistors is equal to the current in the wire before it divides into three parallel branches.
Let’s apply the junction rule to the junction at B in the diagram we looked at earlier. According to the arrows we’ve drawn, the current flows from A into B and splits at B into two branches: one toward E and the other toward C. According to the junction rule, the current flowing into B must equal the current flowing out of B.
If we label the current going into B as I1, the current from B toward E as I2, and the current from B toward C as I3, then the junction rule at B tells us that I1 = I2 + I3.
The Loop Rule
The loop rule addresses the voltage drop of any closed loop in the circuit. It states: The sum of the voltage drops around a closed loop is zero.
This is actually a statement of conservation of energy: every increase in potential energy, such as from a battery, must be balanced by a decrease, such as across a resistor. In other words, the voltage drop across all the resistors in a closed loop is equal to the voltage of the batteries in that loop. In a normal circuit, we know that when the current crosses a resistor, R, the voltage drops by IR, and when the current crosses a battery, V, the voltage rises by V. When we trace a loop—we can choose to do so in the clockwise direction or the counterclockwise direction—we may sometimes find ourselves tracing the loop against the direction of the arrows we drew. If we cross a resistor against the direction of the arrows, the voltage rises by IR. Further, if our loop crosses a battery in the wrong direction—entering in the positive terminal and coming out the negative terminal—the voltage drops by V. To summarize:
• Voltage drops by IR when the loop crosses a resistor in the direction of the current arrows.
• Voltage rises by IR when the loop crosses a resistor against the direction of the current arrows.
• Voltage rises by V when the loop crosses a battery from the negative terminal to the positive terminal.
• Voltage drops by V when the loop crosses a battery from the positive terminal to the negative terminal.
Let’s now put the loop rule to work in sorting out the current that passes through each of the three resistors in the diagram we looked at earlier. When we looked at the junction rule, we found that we could express the current from A to B—and hence the current from E to D to A—as I1, the current from B to E as I2, and the current from B to C—and hence the current from C to F to E—as I3. We begin with the loop ABED.
Remember that we’ve labeled the current between A and B as I1 and the current between B and E as I2. Since the current flowing from E to A is the same as that flowing from A to B, we know this part of the circuit also has a current of I1.
Tracing the loop ABED clockwise from A, the current crosses the resistors in that loop, dropping the voltage by IR at each one, and crosses the battery from its negative to its positive terminal, raising the voltage by 12 V. The loop rule tells us that the net change in voltage is zero across the loop. We can express these changes in voltage as an equation and then substitute in the values we know, giving one equation relating I1 and I2.
Now let’s apply the loop rule to the loop described by BCFE. Tracing the loop clockwise from B, the arrows cross the resistors in this loop and a battery that raises the voltage by 8 V, giving a second equation. Plugging the junction-rule relation into these two loop equations and solving, we get that the current across one of the branches is 28/13 A. With that in mind, we can determine the current across the remaining resistors. The negative value for the current across one of them simply means the true current flows opposite to the arrow we drew.
It doesn’t matter how you draw the current arrows on the diagram, because if you apply Kirchhoff’s Rules correctly, you will come up with negative values for current wherever your current arrows point in the opposite direction of the true current. Once you have done all the math in accordance with Kirchhoff’s Rules, you will quickly be able to determine the true direction of the current.
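The procedure just described, one loop-rule equation per loop followed by elimination, can be carried out mechanically. This Java sketch solves a simple two-loop circuit with illustrative values of my own (a 12 V source and 2 Ω, 4 Ω, 4 Ω resistors), not the circuit in the figure:

```java
// Mesh-current solution of a two-loop circuit using Kirchhoff's loop rule.
// Component values are illustrative only.
public class KirchhoffDemo {
    // Solves the 2x2 system  a*i1 + b*i2 = e,  c*i1 + d*i2 = f  by Cramer's rule.
    public static double[] solve2x2(double a, double b, double c, double d,
                                    double e, double f) {
        double det = a * d - b * c;
        return new double[] { (e * d - b * f) / det, (a * f - e * c) / det };
    }

    public static void main(String[] args) {
        // Loop 1: 12 - 2*i1 - 4*(i1 - i2) = 0  ->   6*i1 - 4*i2 = 12
        // Loop 2:    - 4*i2 - 4*(i2 - i1) = 0  ->  -4*i1 + 8*i2 = 0
        double[] i = solve2x2(6, -4, -4, 8, 12, 0);
        System.out.println("i1 = " + i[0] + " A, i2 = " + i[1] + " A"); // 3.0 and 1.5
    }
}
```

A negative mesh current coming out of such a solve would mean exactly what the text says: the true current runs opposite to the arrow chosen for that loop.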
{"url":"http://www.sparknotes.com/testprep/books/sat2/physics/chapter14section5.rhtml","timestamp":"2014-04-17T10:02:37Z","content_type":null,"content_length":"88343","record_id":"<urn:uuid:da205a76-9f74-442d-8188-dbc5c07a28d4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
iDEA: Drexel E-repository and Archives: A local limit theorem in the theory of overpartitions
Please use this identifier to cite or link to this item: http://hdl.handle.net/1860/1634
Title: A local limit theorem in the theory of overpartitions
Authors: Corteel, Sylvie; Goh, William M.Y.; Hitczenko, Pawel
Keywords: Partitions; Combinatorial probability; Local limit theorem; Asymptotic analysis
Issue Date: 2006
Publisher: Springer Verlag
Citation: Algorithmica, 46(3-4): pp. 329-343.
Abstract: An overpartition of an integer n is a partition where the last occurrence of a part can be overlined. We study the weight of the overlined parts of an overpartition, counted with or without their multiplicities. This is a continuation of work by Corteel and Hitczenko, where it was shown that the expected weight of the overlined parts is asymptotic to n/3 as n → ∞ and that the expected weight of the overlined parts counted with multiplicity is n/2. Here we refine these results. We first compute the asymptotics of the variance of the weight of the overlined parts counted with multiplicity. We then asymptotically evaluate the probability that the weight of the overlined parts is n/3 ± k for k = o(n) and the probability that the weight of the overlined parts counted with multiplicity is n/2 ± k for k = o(n). The first computation is straightforward and uses known asymptotics of partitions. The second one is more involved and requires a sieve argument and the application of the saddle point method. From that we can directly evaluate the probability that two random partitions of n do not share a part.
URI: http://www.doi.org/10.1007/s00453-006-0102-z
Appears in: Faculty Research and Publications (Mathematics)
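The object defined in the abstract is easy to enumerate for small n. This side-computation of mine (not part of the paper) counts overpartitions via the generating function ∏(1+q^k)/(1-q^k): each part size contributes an unlimited ordinary factor and an at-most-once overlined factor.

```java
// Counts overpartitions p(n) up to a bound by dynamic programming on the
// generating function  prod_k (1 + q^k) / (1 - q^k).
public class Overpartitions {
    public static long[] count(int max) {
        long[] c = new long[max + 1];
        c[0] = 1;
        for (int k = 1; k <= max; k++) {
            for (int j = k; j <= max; j++) c[j] += c[j - k]; // 1/(1-q^k): unlimited parts
            for (int j = max; j >= k; j--) c[j] += c[j - k]; // (1+q^k): one overlined part
        }
        return c;
    }

    public static void main(String[] args) {
        for (long v : count(6)) System.out.print(v + " "); // 1 2 4 8 14 24 40
    }
}
```

The printed values match the standard overpartition counts 1, 2, 4, 8, 14, 24, 40 for n = 0 through 6.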
{"url":"http://idea.library.drexel.edu/handle/1860/1634","timestamp":"2014-04-16T13:03:16Z","content_type":null,"content_length":"18301","record_id":"<urn:uuid:38ac701e-5db2-4c8c-b695-c4a801085ce1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
How do CAS evaluate derivatives
From what I heard, a CAS stores the expression as a directed graph. In Mathematica you can use the FullForm command to see it directly. It then has rules for how to manipulate these objects. So the derivative operator D (I'm assuming with respect to x) interacts with Plus via the rule
D[Plus[f,g]] = Plus[D[f],D[g]]
Mathematica knows that 3 is constant and so D[3] = 0. It then reduces Plus[0,?] to just ?.
It applies its chain rule and is programmed so that D[Sin] = Cos.
And it knows that the derivative of Power[x,2] is Multiply[2,x].
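The rule-driven tree walk described in the post can be imitated in a few dozen lines. This toy differentiator (in Java, with names of my own; it is not how Mathematica is actually implemented) stores an expression as a graph of nodes and rewrites it with the constant, sum, product, and chain rules:

```java
// Toy symbolic differentiator: each node knows how to evaluate itself and
// how to build its derivative node, mirroring the CAS rewrite rules above.
public class TinyDiff {
    public interface Expr {
        double eval(double x);
        Expr d();              // derivative with respect to x
    }

    public static Expr num(double c) { return new Expr() {
        public double eval(double x) { return c; }
        public Expr d() { return num(0); } }; }          // D[const] = 0

    public static final Expr X = new Expr() {
        public double eval(double x) { return x; }
        public Expr d() { return num(1); } };            // D[x] = 1

    public static Expr plus(Expr f, Expr g) { return new Expr() {
        public double eval(double x) { return f.eval(x) + g.eval(x); }
        public Expr d() { return plus(f.d(), g.d()); } }; }   // sum rule

    public static Expr times(Expr f, Expr g) { return new Expr() {
        public double eval(double x) { return f.eval(x) * g.eval(x); }
        public Expr d() { return plus(times(f.d(), g), times(f, g.d())); } }; } // product rule

    public static Expr sin(Expr f) { return new Expr() {
        public double eval(double x) { return Math.sin(f.eval(x)); }
        public Expr d() { return times(cos(f), f.d()); } }; }  // D[Sin] = Cos, chained

    public static Expr cos(Expr f) { return new Expr() {
        public double eval(double x) { return Math.cos(f.eval(x)); }
        public Expr d() { return times(num(-1), times(sin(f), f.d())); } }; }

    public static void main(String[] args) {
        Expr e = plus(num(3), sin(times(X, X)));   // 3 + sin(x^2)
        System.out.println(e.d().eval(1.0));       // derivative at 1 is 2*cos(1)
    }
}
```

A real CAS adds a simplification pass on top of this (collapsing Plus[0,?] to ?, and so on), which is exactly the reduction step the post mentions.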
{"url":"http://www.physicsforums.com/showthread.php?p=3773414","timestamp":"2014-04-17T21:32:59Z","content_type":null,"content_length":"30280","record_id":"<urn:uuid:b4949ae1-7d1c-4c75-8851-4ab0eebc7c3a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Edison, NJ Algebra 2 Tutor Find an Edison, NJ Algebra 2 Tutor ...Financial experience includes Fixed Income, Asset & Wealth Management, Foreign Exchange Trading System, Equity Trading and 24x7 Production Support. My work experience includes employment/ consulting with such companies as "Priceline.com", "Merrill-Lynch", "JPMorgan-Chase", "CIBC World Markets" and "Verizon Wireless". Thank you. 45 Subjects: including algebra 2, reading, English, geometry ...This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of explanation. I do hold high expectations on both parties, and do understand that this is a process that evolves as a deeper relationship is formed. Currently I am employed as the Physical Science and Physics teacher at St. 9 Subjects: including algebra 2, calculus, physics, algebra 1 ...I also have extensive experience applying mathematics to real problems arising in the oil, aerospace and investment management businesses. My primary focus is calculus, but I cover other areas at lower or higher levels as well. I will help students gain an advantage in preparing for college entrance and advanced placement exams. 11 Subjects: including algebra 2, calculus, algebra 1, SAT math ...My instruction begins with a brief discussion of how this math relates to everyday life. After the student grasps the meaning, then we can work together to obtain the knowledge and review the steps necessary to successfully solve the problem. This is a process that will result in better grades. 14 Subjects: including algebra 2, calculus, geometry, statistics ...I took several classes in Anthropology and achieved at least a B in any Anthropology class that I've taken. I recently sat for and passed the New York and New Jersey Bar Exams on the first attempt. I developed my own study plan which I plan to utilize with students. 
34 Subjects: including algebra 2, English, reading, writing
{"url":"http://www.purplemath.com/Edison_NJ_Algebra_2_tutors.php","timestamp":"2014-04-19T04:49:56Z","content_type":null,"content_length":"24151","record_id":"<urn:uuid:1e272146-89c1-451b-be32-2f4bc20fa9d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Investment Calc
02-28-2010, 10:25 AM
Spent hours on this. Help please. Have an assignment as follows: Write a program that prompts the user for an initial investment amount and a goal investment amount and calculate how many years it will take to grow from the initial amount to the goal amount with a fixed interest rate (ie: 5%). (use the for loop) Prompt the user if they would like to run the program with other amounts or quit. (use the while loop). Output the initial amount, the goal amount and the number of years to reach that amount.
Cannot get it to work. Below is what I have. Any suggestions would be greatly appreciated.
import static java.lang.System.out;
import java.util.Scanner;
import java.io.*;
public class Project_5 {//main
public static void main(String[] args) {
double principle = 0;//initial amount investing
double interest = 0;
double rate = 0.05;//the fixed interest amount
double years = 0;//amount of years it will take to achieve goal
double goal = 0;//amount wanting to acquire
Scanner myScanner = new Scanner(System.in);
System.out.println("*************************************** ");
System.out.println("* Welcome to the Investment Calculator * ");
System.out.println("*************************************** ");
System.out.println ("Enter your initial investment amount: if you want to exit enter 0.");
inputNumber = myScanner.nextInt();
principle = inputNumber;
if (inputNumber == 0){//if num = 0 exit class
System.out.println ("Enter your goal investment amount: ");
inputNumber2 = myScanner.nextInt ();
goal = InputNumber2;
System.out.println ("The fixed interest rate is 5%");
for (years = 0; years < goal; years++){
interest = principal * rate;
sum = sum + years;
System.out.print("The number of years you must invest after giving $ " + (goal);
System.out.println("is ") + (years) + (" years");
02-28-2010, 11:40 AM
This doesn't make sense; suppose I want to have 1000 bucks somewhere in the future, i.e. goal == 1000.
It doesn't make sense to loop 1000 years ...
Just do your maths: my initial deposit is 'principal' bucks and the interest rate is 'interest' (say 5%), so after one year my total amount of money is:
total= (1+interest)*principal
so after two years my amount of money is: total= (1+interest)*total, etc. etc. You should keep on looping until total >= goal.
kind regards,
02-28-2010, 12:00 PM
for loop
I see....so how and where do I place this in the program. Do I replace the for loop?.... Thanks for your help..
02-28-2010, 01:12 PM
Of course, because the current one is incorrect. I read your private message; let's solve this step by step. First we want to make our amount of money grow until we've reached a goal:
total= principal; // start with our initial amount of money
for (; total < goal; total= (1+interest)*total);
This loop simply goes on and on until the 'goal' amount of money is reached. The number of years that are needed equals the number of times the (now) empty body of the loop executes; we have to count those:
int years= 0; // not looped yet
total= principal; // start with our initial amount of money
for (; total < goal; total= (1+interest)*total)
years++; // add one more loop iteration
There, that's all there is to it.
kind regards,
02-28-2010, 02:19 PM
@OP: I read your private message again; please post your problems in here and not through private messages.
kind regards,
02-28-2010, 02:29 PM
Sorry about that. Kind of new to this so didn't know. I finally got it to compile and got it to run. It lets me input the initial investment amount and the goal investment amount but then stops running and doesn't go through the loop. Here is what prints:
* Welcome to the Investment Calculator *
Enter your initial investment amount: if you want to exit enter 0.
Enter your goal investment amount: The fixed interest rate is 5%
Something is wrong with this section of the program:
total= principal;
for (; total < goal; total= (1+interest)*total);
//interest = principal * rate;
//sum = sum + years;
System.out.print("The number of years you must invest after giving $ ");
System.out.print(" is");
System.out.println(" years");
Thanks for all of your help.
02-28-2010, 02:37 PM
02-28-2010, 03:23 PM
still stuck. Looks like the for loop format might not be right. The setup should look something like this format, right?
for (initialization; termination; increment)
would this be right..
for (years=0; total < goal; years++)
02-28-2010, 03:45 PM
That is correct; my version is also correct; all three of the for header parts are optional. Ultimately: ... would be correct as well. Can you show us your code as it is now?
kind regards,
02-28-2010, 05:55 PM
Still having issues with this....I get so close and then have another issue. The program got all the way to the for loop and it begins an infinite loop. The loop kept running until it crashed... The program looked like this...
* Welcome to the Investment Calculator *
Enter your initial investment amount: if you want to exit enter 0.
Enter your goal investment amount: The fixed interest rate is 5% The number of years you must invest after giving $1000 is 3361 years. You could see it running through the loop up to that number and then freezing. Let me know what you think. I think that I want to see your code. kind regards, 02-28-2010, 06:07 PM Here is what my entire code looks like: import static java.lang.System.out; import java.util.Scanner; import java.io.*; public class Project_5 {//main public static void main(String[] args) {//begins body double principal = 0;//initial amount investing double interest = 0; double rate = 0.05;//the fixed interest amount int years = 0;//amout of years it will take to achieve goal double goal = 0; double total = 0; Scanner myScanner = new Scanner(System.in); System.out.println("*************************************** "); System.out.println("* Welcome to the Investment Calculator * "); System.out.println("*************************************** "); System.out.println ("Enter your initial investment amount: if you want to exit enter 0."); int inputNumber = myScanner.nextInt(); principal = inputNumber; if (inputNumber == 0){//if num = 0 exit class System.out.println ("Enter your goal investment amount: "); int inputNumber2 = myScanner.nextInt (); goal = inputNumber2; System.out.println ("The fixed interest rate is 5%"); total= principal; total= (1+interest)*total; for (years=0; total < goal; years++) //interest = principal * rate; //sum = sum + years; { System.out.print("The number of years you must invest to meet your goal of $ "); System.out.print(" is"); System.out.println(" years"); Moderator edit: color tags changed to code tags 02-28-2010, 06:21 PM Moderator edit above: color tags changed to code tags. For more on this, click on the link in my signature below. Thanks and good luck. 02-28-2010, 06:21 PM Still that loop doesn't make any sense, reread my previous replies; also check your 'fixed' interest rate. It is zero in your code instead of 5%. 
kind regards,
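Putting the thread's advice together, a corrected version of the program can be sketched as follows. This is a minimal, hypothetical rewrite (the class name InvestmentCalculator and the helper yearsToGoal are illustrative, not from the thread); it applies the two fixes the replies point out: the interest variable must actually hold the 5% rate rather than zero, and the loop body must count the years of compounding.

```java
// Minimal corrected sketch of the investment calculator discussed above.
// Class and method names are illustrative, not from the original thread.
public class InvestmentCalculator {

    // Number of whole years of compounding needed for 'principal'
    // to grow to at least 'goal' at the given interest rate.
    static int yearsToGoal(double principal, double goal, double interest) {
        double total = principal;   // start with our initial amount of money
        int years = 0;              // not looped yet
        for (; total < goal; total = (1 + interest) * total)
            years++;                // one more year of compounding
        return years;
    }

    public static void main(String[] args) {
        double principal = 1000.0;
        double goal = 2000.0;
        double interest = 0.05;     // the fixed 5% rate -- NOT 0, the bug in the posted code

        System.out.println("The number of years you must invest after giving $"
                + (int) principal + " is "
                + yearsToGoal(principal, goal, interest) + " years");
    }
}
```

With principal = 1000, goal = 2000 and 5% interest this prints 15 years, since 1.05^14 ≈ 1.98 < 2 ≤ 1.05^15 ≈ 2.08; the loop count agrees with the closed form ceil(ln(goal/principal) / ln(1 + interest)).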
[Numpy-discussion] Core math library in numpy
Charles R Harris charlesr.harris@gmail....
Tue Feb 24 12:26:47 CST 2009

On Tue, Feb 24, 2009 at 11:09 AM, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:

> In fact, the __inline is not helpful. It's the static keyword that
> enables the compiler to inline the function if the function is small
> enough. As the static indicates that the function will not be seen
> from the outside, it can do this.

Good point. However, most of the ufuncs involving standard functions like sin, cos, etc. are implemented as generic loops that are passed a function pointer, and for such functions the call overhead is probably not significant in the absence of intrinsic hardware implementations. The complex functions could probably use some inlining, as they call other functions. That could be implemented by using some local static functions in the library code that could be inlined when the library is compiled.

I think that the first priority should be correctness and portability. Speed optimizations can come later.
Edison, NJ Algebra 2 Tutor

Find an Edison, NJ Algebra 2 Tutor

...Financial experience includes Fixed Income, Asset & Wealth Management, Foreign Exchange Trading System, Equity Trading and 24x7 Production Support. My work experience includes employment/consulting with such companies as "Priceline.com", "Merrill-Lynch", "JPMorgan-Chase", "CIBC World Markets" and "Verizon Wireless". Thank you.
45 Subjects: including algebra 2, reading, English, geometry

...This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of explanation. I do hold high expectations on both parties, and do understand that this is a process that evolves as a deeper relationship is formed. Currently I am employed as the Physical Science and Physics teacher at St.
9 Subjects: including algebra 2, calculus, physics, algebra 1

...I also have extensive experience applying mathematics to real problems arising in the oil, aerospace and investment management businesses. My primary focus is calculus, but I cover other areas at lower or higher levels as well. I will help students gain an advantage in preparing for college entrance and advanced placement exams.
11 Subjects: including algebra 2, calculus, algebra 1, SAT math

...My instruction begins with a brief discussion of how this math relates to everyday life. After the student grasps the meaning, then we can work together to obtain the knowledge and review the steps necessary to successfully solve the problem. This is a process that will result in better grades.
14 Subjects: including algebra 2, calculus, geometry, statistics

...I took several classes in Anthropology and achieved at least a B in any Anthropology class that I've taken. I recently sat for and passed the New York and New Jersey Bar Exams on the first attempt. I developed my own study plan which I plan to utilize with students.
34 Subjects: including algebra 2, English, reading, writing
Thomas Anberrée
Division of Computer Science, The University of Nottingham Ningbo China
Room 438, Science and Engineering Building, 199 Taikang East Road, 315100 Ningbo, China
Tel. +86 (0)574 8818 0217

I am a Lecturer at the School of Computer Science of the University of Nottingham (Ningbo Campus) and a member of the Functional Programming Laboratory. I obtained an MSc in foundations of mathematics at the University of Paris VII and then did a PhD in theoretical computer science under the supervision of M. Escardó at the University of Birmingham. I then took up my present position as a Lecturer at UNNC in spring 2008.

Current teaching
• AE1MCS - Mathematics for Computer Science [Local Only]
• AE1FUN - Functional Programming

Current Research Grants (as principal investigator)
• National Natural Science Foundation of China (NSFC): 300 000 RMB (full funding) granted for the project "Reducing programming errors: development of PiSigma, a novel, fast and high-level dependently-typed programming language, based on a certified kernel."
• Ningbo Municipal Natural Science Foundation, China (NBNSF): 30 000 RMB granted for a project on quotient types in dependent type programming
• See also: grants as co-investigator

PhD Students
• Li Nuo is doing his PhD at the Functional Programming Laboratory under my supervision and Dr. Thorsten Altenkirch's.

Research Fellows
• Dr Zhou Mianlai has recently joined our team to work on the implementation of a dependently typed language under my supervision.

Conference presentations
• July 2009, CiE 2009, Heidelberg. First-Order Universality for Real Programs
• April 2008, TAMC 2008, Xi'an. A Denotational Semantics for Total Correctness of Sequential Exact Real Programs.
• April 2007, MFPS XXIII, New Orleans. On the non-sequential nature of domain models of real-number computation.
• Sept. 2007, Domains VIII, Novosibirsk. Total Correctness for Sequential Real Programs.
Administrative responsibilities • UNNC Examinations Officer for the School of Computer Science • UNNC Information Services Representative for the School of Computer Science • UNNC Academic Committee Member
HSG-C Circles

Understand and apply theorems about circles

HSG-C.1 Prove that all circles are similar.

HSG-C.2 Identify and describe relationships among inscribed angles, radii, and chords. Include the relationship between central, inscribed, and circumscribed angles; inscribed angles on a diameter are right angles; the radius of a circle is perpendicular to the tangent where the radius intersects the circle.

HSG-C.3 Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle.

HSG-C.4 (+) Construct a tangent line from a point outside a given circle to the circle.

Find arc lengths and areas of sectors of circles

HSG-C.5 Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector.
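The derivation sketched in HSG-C.5 can be written out in a few lines (standard notation; this worked form is an addition, not part of the original standards text). Since all circles are similar, the arc length s cut off by a fixed central angle scales linearly with the radius r, so the ratio s/r depends on the angle alone:

```latex
\theta \;=\; \frac{s}{r}\quad(\text{radian measure, the constant of proportionality})
\qquad\Longrightarrow\qquad s \;=\; r\theta,
\qquad
A_{\text{sector}} \;=\; \frac{\theta}{2\pi}\cdot \pi r^{2} \;=\; \tfrac{1}{2}\,r^{2}\theta .
```

The sector-area formula follows because a sector with central angle θ is the fraction θ/2π of the whole disk.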
Results 1 - 25 of 408 1. CMB Online first Free Locally Convex Spaces and the $k$-space Property Let $L(X)$ be the free locally convex space over a Tychonoff space $X$. Then $L(X)$ is a $k$-space if and only if $X$ is a countable discrete space. We prove also that $L(D)$ has uncountable tightness for every uncountable discrete space $D$. Keywords:free locally convex space, $k$-space, countable tightness Categories:46A03, 54D50, 54A25 2. CMB Online first Characters on $C(X)$ The precise condition on a completely regular space $X$ for every character on $C(X)$ to be an evaluation at some point in $X$ is that $X$ be realcompact. Usually, this classical result is obtained relying heavily on involved (and even nonconstructive) extension arguments. This note provides a direct proof that is accessible to a large audience. Keywords:characters, realcompact, evaluation, real-valued continuous functions Categories:54C30, 46E25 3. CMB Online first Fourier Coefficients of Vector-valued Modular Forms of Dimension $2$ We prove the following Theorem. Suppose that $F=(f_1, f_2)$ is a $2$-dimensional vector-valued modular form on $\operatorname{SL}_2(\mathbb{Z})$ whose component functions $f_1, f_2$ have rational Fourier coefficients with bounded denominators. Then $f_1$ and $f_2$ are classical modular forms on a congruence subgroup of the modular group. Keywords:vector-valued modular form, modular group, bounded denominators Categories:11F41, 11G99 4. CMB Online first On an Exponential Functional Inequality and its Distributional Version Let $G$ be a group and $\mathbb K=\mathbb C$ or $\mathbb R$.
In this article, as a generalization of the result of Albert and Baker, we investigate the behavior of bounded and unbounded functions $f\colon G\to \mathbb K$ satisfying the inequality $ \Bigl|f \Bigl(\sum_{k=1}^n x_k \Bigr)-\prod_{k=1}^n f(x_k) \Bigr|\le \phi(x_2, \dots, x_n),\quad \forall\, x_1, \dots, x_n\in G, $ where $\phi\colon G^{n-1}\to [0, \infty)$. Also, as a distributional version of the above inequality we consider the stability of the functional equation \begin{equation*} u\circ S - \overbrace{u\otimes \cdots \otimes u}^{n\text{-times}}=0, \end{equation*} where $u$ is a Schwartz distribution or Gelfand hyperfunction, $\circ$ and $\otimes$ are the pullback and tensor product of distributions, respectively, and $S(x_1, \dots, x_n)=x_1+ \dots +x_n$. Keywords:distribution, exponential functional equation, Gelfand hyperfunction, stability Categories:46F99, 39B82 5. CMB Online first On Set Theoretically and Cohomologically Complete Intersection Ideals Let $(R,\mathfrak m)$ be a local ring and $\mathfrak a$ be an ideal of $R$. The inequalities \[ \operatorname{ht}(\mathfrak a) \leq \operatorname{cd}(\mathfrak a,R) \leq \operatorname{ara}(\mathfrak a) \leq l(\mathfrak a) \leq \mu(\mathfrak a) \] are known. It is an interesting and long-standing problem to find out the cases giving equality. Thanks to the formal grade we give conditions under which the above inequalities become equalities. Keywords:set-theoretically and cohomologically complete intersection ideals, analytic spread, monomials, formal grade, depth of powers of ideals Categories:13D45, 13C14 6. CMB Online first Measures of Noncompactness in Regular Spaces Previous results by the author on the connection between three measures of non-compactness obtained for $L_p$ are extended to regular spaces of measurable functions. An example is given showing the advantage, in some cases, of one of them in comparison with another. Geometric characteristics of regular spaces are determined.
New theorems for $(k,\beta)$-boundedness of partially additive operators are proved. Keywords:measure of non-compactness, condensing map, partially additive operator, regular space, ideal space Categories:47H08, 46E30, 47H99, 47G10 7. CMB Online first Limited Sets and Bibasic Sequences Bibasic sequences are used to study relative weak compactness and relative norm compactness of limited sets. Keywords:limited sets, $L$-sets, bibasic sequences, the Dunford-Pettis property Categories:46B20, 46B28, 28B05 8. CMB Online first On the ${\mathcal F}{\Phi}$-Hypercentre of Finite Groups Let $G$ be a finite group, $\mathcal F$ a class of groups. Then $Z_{{\mathcal F}{\Phi}}(G)$ is the ${\mathcal F}{\Phi}$-hypercentre of $G$, which is the product of all normal subgroups of $G$ whose non-Frattini $G$-chief factors are $\mathcal F$-central in $G$. A subgroup $H$ is called $\mathcal M$-supplemented in a finite group $G$ if there exists a subgroup $B$ of $G$ such that $G=HB$ and $H_1B$ is a proper subgroup of $G$ for any maximal subgroup $H_1$ of $H$. The main purpose of this paper is to prove: Let $E$ be a normal subgroup of a group $G$. Suppose that every noncyclic Sylow subgroup $P$ of $F^{*}(E)$ has a subgroup $D$ such that $1\lt |D|\lt |P|$ and every subgroup $H$ of $P$ with order $|H|=|D|$ is $\mathcal M$-supplemented in $G$; then $E\leq Z_{{\mathcal U}{\Phi}}(G)$. Keywords:${\mathcal F}{\Phi}$-hypercentre, Sylow subgroups, $\mathcal M$-supplemented subgroups, formation Categories:20D10, 20D20 9. CMB Online first Orbits of Geometric Descent We prove that quasiconvex functions always admit descent trajectories bypassing all non-minimizing critical points. Keywords:differential inclusion, quasiconvex function, self-contracted curve, sweeping process Categories:34A60, 49J99 10.
CMB Online first Restriction Operators Acting on Radial Functions on Vector Spaces Over Finite Fields We study $L^p-L^r$ restriction estimates for algebraic varieties $V$ in the case when restriction operators act on radial functions in the finite field setting. We show that if the varieties $V$ lie in odd dimensional vector spaces over finite fields, then the conjectured restriction estimates are possible for all radial test functions. In addition, assuming that the varieties $V$ are defined in even dimensional spaces and have few intersection points with the sphere of zero radius, we also obtain the conjectured exponents for all radial test functions. Keywords:finite fields, radial functions, restriction operators Categories:42B05, 43A32, 43A15 11. CMB Online first Property T and Amenable Transformation Group $C^*$-algebras It is well known that a discrete group which is both amenable and has Kazhdan's Property T must be finite. In this note we generalize the above statement to the case of transformation groups. We show that if $G$ is a discrete amenable group acting on a compact Hausdorff space $X$, then the transformation group $C^*$-algebra $C^*(X, G)$ has Property T if and only if both $X$ and $G$ are finite. Our approach does not rely on the use of tracial states on $C^*(X, G)$. Keywords:Property T, $C^*$-algebras, transformation group, amenable Categories:46L55, 46L05 12. CMB Online first Classification of Integral Modular Categories of Frobenius--Perron Dimension $pq^4$ and $p^2q^2$ We classify integral modular categories of dimension $pq^4$ and $p^2q^2$, where $p$ and $q$ are distinct primes. We show that such categories are always group-theoretical except for categories of dimension $4q^2$. In these cases there are well-known examples of non-group-theoretical categories, coming from centers of Tambara-Yamagami categories and quantum groups. 
We show that a non-group-theoretical integral modular category of dimension $4q^2$ is equivalent to either one of these well-known examples or is of dimension $36$ and is twist-equivalent to fusion categories arising from a certain quantum group. Keywords:modular categories, fusion categories 13. CMB 2014 (vol 57 pp. 431) The Rasmussen Invariant, Four-genus and Three-genus of an Almost Positive Knot Are Equal An oriented link is positive if it has a link diagram whose crossings are all positive. An oriented link is almost positive if it is not positive and has a link diagram with exactly one negative crossing. It is known that the Rasmussen invariant, $4$-genus and $3$-genus of a positive knot are equal. In this paper, we prove that the Rasmussen invariant, $4$-genus and $3$-genus of an almost positive knot are equal. Moreover, we determine the Rasmussen invariant of an almost positive knot in terms of its almost positive knot diagram. As corollaries, we prove that all almost positive knots are not homogeneous, and there is no almost positive knot of $4$-genus one. Keywords:almost positive knot, four-genus, Rasmussen invariant Categories:57M27, 57M25 14. CMB 2014 (vol 57 pp. 264) On Semisimple Hopf Algebras of Dimension $pq^n$ Let $p,q$ be prime numbers with $p^2\lt q$, $n\in \mathbb{N}$, and $H$ a semisimple Hopf algebra of dimension $pq^n$ over an algebraically closed field of characteristic $0$. This paper proves that $H$ must possess one of the following structures: (1) $H$ is semisolvable; (2) $H$ is a Radford biproduct $R\# kG$, where $kG$ is the group algebra of group $G$ of order $p$, and $R$ is a semisimple Yetter--Drinfeld Hopf algebra in ${}^{kG}_{kG}\mathcal{YD}$ of dimension $q^n$. Keywords:semisimple Hopf algebra, semisolvability, Radford biproduct, Drinfeld double 15. 
CMB Online first Helicoidal Minimal Surfaces in a Finsler Space of Randers Type We consider the Finsler space $(\bar{M}^3, \bar{F})$ obtained by perturbing the Euclidean metric of $\mathbb{R}^3$ by a rotation. It is the open region of $\mathbb{R}^3$ bounded by a cylinder with a Randers metric. Using the Busemann-Hausdorff volume form, we obtain the differential equation that characterizes the helicoidal minimal surfaces in $\bar{M}^3$. We prove that the helicoid is a minimal surface in $\bar{M}^3$, only if the axis of the helicoid is the axis of the cylinder. Moreover, we prove that, in the Randers space $(\bar{M}^3, \bar{F})$, the only minimal surfaces in the Bonnet family, with fixed axis $O\bar{x}^3$, are the catenoids and the helicoids. Keywords:minimal surfaces, helicoidal surfaces, Finsler space, Randers space Categories:53A10, 53B40 16. CMB Online first On the Hereditary Paracompactness of Locally Compact, Hereditarily Normal Spaces We establish that if it is consistent that there is a supercompact cardinal, then it is consistent that every locally compact, hereditarily normal space which does not include a perfect pre-image of $\omega_1$ is hereditarily paracompact. Keywords:locally compact, hereditarily normal, paracompact, Axiom R, PFA$^{++}$ Categories:54D35, 54D15, 54D20, 54D45, 03E65, 03E35 17. CMB Online first Strong Asymptotic Freeness for Free Orthogonal Quantum Groups It is known that the normalized standard generators of the free orthogonal quantum group $O_N^+$ converge in distribution to a free semicircular system as $N \to \infty$. In this note, we substantially improve this convergence result by proving that, in addition to distributional convergence, the operator norm of any non-commutative polynomial in the normalized standard generators of $O_N^+$ converges as $N \to \infty$ to the operator norm of the corresponding non-commutative polynomial in a standard free semicircular system. 
Analogous strong convergence results are obtained for the generators of free unitary quantum groups. As applications of these results, we obtain a matrix-coefficient version of our strong convergence theorem, and we recover a well known $L^2$-$L^\infty$ norm equivalence for non-commutative polynomials in free semicircular systems. Keywords:quantum groups, free probability, asymptotic free independence, strong convergence, property of rapid decay Categories:46L54, 20G42, 46L65 18. CMB 2014 (vol 57 pp. 231) On the Multiplicities of Characters in Table Algebras In this paper we show that every module of a table algebra can be considered as a faithful module of some quotient table algebra. Also we prove that every faithful module of a table algebra determines a closed subset which is a cyclic group. As a main result we give some information about multiplicities of characters in table algebras. Keywords:table algebra, faithful module, multiplicity of character Categories:20C99, 16G30 19. CMB Online first A short note on short pants It is a theorem of Bers that any closed hyperbolic surface admits a pants decomposition consisting of curves of bounded length where the bound only depends on the topology of the surface. The question of the quantification of the optimal constants has been well studied and the best upper bounds to date are linear in genus, a theorem of Buser and Seppälä. The goal of this note is to give a short proof of a linear upper bound which slightly improves the best known bound. Keywords:hyperbolic surfaces, geodesics, pants decompositions Categories:30F10, 32G15, 53C22 20. CMB Online first Factorisation of Two-variable $p$-adic $L$-functions Let $f$ be a modular form which is non-ordinary at $p$. Loeffler has recently constructed four two-variable $p$-adic $L$-functions associated to $f$. In the case where $a_p=0$, he showed that, as in the one-variable case, Pollack's plus and minus splitting applies to these new objects.
In this article, we show that such a splitting can be generalised to the case where $a_p\ne0$ using Sprung's logarithmic matrix. Keywords:modular forms, p-adic L-functions, supersingular primes Categories:11S40, 11S80 21. CMB Online first Constructive Proof of Carpenter's Theorem We give a constructive proof of Carpenter's Theorem due to Kadison. Unlike the original proof our approach also yields the real case of this theorem. Keywords:diagonals of projections, the Schur-Horn theorem, the Pythagorean theorem, the Carpenter theorem, spectral theory Categories:42C15, 47B15, 46C05 22. CMB Online first Short Probabilistic Proof of the Brascamp-Lieb and Barthe Theorems We give a short proof of the Brascamp-Lieb theorem, which asserts that a certain general form of Young's convolution inequality is saturated by Gaussian functions. The argument is inspired by Borell's stochastic proof of the Prékopa-Leindler inequality and applies also to the reversed Brascamp-Lieb inequality, due to Barthe. Keywords:functional inequalities, Brownian motion Categories:39B62, 60J65 23. CMB Online first On $3$-manifolds with Torus- or Klein-bottle Category Two A subset $W$ of a closed manifold $M$ is $K$-contractible, where $K$ is a torus or Kleinbottle, if the inclusion $W\rightarrow M$ factors homotopically through a map to $K$. The image of $\pi_1 (W) $ (for any base point) is a subgroup of $\pi_1 (M)$ that is isomorphic to a subgroup of a quotient group of $\pi_1 (K)$. Subsets of $M$ with this latter property are called $\mathcal{G} _K$-contractible. We obtain a list of the closed $3$-manifolds that can be covered by two open $\mathcal{G}_K$-contractible subsets. This is applied to obtain a list of the possible closed prime $3$-manifolds that can be covered by two open $K$-contractible subsets. Keywords:Lusternik--Schnirelmann category, coverings of $3$-manifolds by open $K$-contractible sets Categories:57N10, 55M30, 57M27, 57N16 24. CMB 2013 (vol 57 pp. 
119) Splitting Families and Complete Separability We answer a question from Raghavan and Steprāns by showing that $\mathfrak{s} = {\mathfrak{s}}_{\omega, \omega}$. Then we use this to construct a completely separable maximal almost disjoint family under $\mathfrak{s} \leq \mathfrak{a}$, partially answering a question of Shelah. Keywords:maximal almost disjoint family, cardinal invariants Categories:03E05, 03E17, 03E65 25. CMB Online first Left-orderable fundamental group and Dehn surgery on the knot $5_2$ We show that the manifold resulting from $r$-surgery on the knot $5_2$, which is the two-bridge knot corresponding to the rational number $3/7$, has left-orderable fundamental group if the slope $r$ satisfies $0\le r \le 4$. Keywords:left-ordering, Dehn surgery Categories:57M25, 06F15
ShareMe - free Making Math Symbols In Microsoft download Asan Download Hercules Deluxe Webcam Internet Explorer Wont Work Thesaurus Plugin For Indesign Premiere Shine Plug In Toolbar Net Simplified Mri Proxy Download For Facook In Chinak Wmv Joiner File Acer Tablet Pc Sync German Nazi Flags Emulator Emurayden Ps2 Program Huawei 320x240 Dj Promixer Copyto Cenon 1200 Javascript Remove Spaces Making Math Symbols In Microsoft From Title 1. symbols & Runes - Games/Puzzle & Word ... This is a very simple game that allows you to develop your visual memory. The goal of the game is to find and delete pair symbols from the playing board by selecting these symbols. The game contains 4 levels: 2 levels with 2x2 boards and 2 levels with 6x6 boards. ... 2. Max symbols - Utilities ... Max symbols 1.02 is a magnificent free collection of all types of symbols that you could need, to copy and paste into any document or text you want.There are classic ASCII symbols and others that are very curious and practical for any text you might write. ... 3. Atoms, symbols and Equations - Home & Personal/Misc ... Unique interactive multimedia Chemistry teaching software that tests students as they learn. Topics covered include: elements, atoms and molecules, word equations, chemical symbols, Periodic Table and chemical formulas. Plus interactive simulations to teach the balancing of chemical equations and the formulas of ionic compounds. As far as possible, skills are taught through familiar examples, to help reinforce general chemical knowledge. Written by a teacher for use in schools, but an excellent ... 4. Weld symbols VisualWeld - Multimedia & Design/Graphic & Design ... The program for adding Weld symbols in AutoCAD drawings. Advanced interface and options of this program will help you in your work. Almost all possible weld symbols can be designed in form and added in drawing. ... 5. ALL symbols Label Generator - Business & Productivity Tools/Inventory Systems ... 
ALL symbols Label Generator is a powerful Barcode editor and Printer, which can generate all types of in common used 1D barcode, can support any paper of sales on the market, can import data from every database provider, completely WYSIWYG(What You See Is What You Get), absolutely easy to use.ALL symbols Label Generator works as a stand-alone program, and you can export Barcode images to other graphics formats such as GIF, JPG, BMP etc. Using copy/paste function, you can paste the Barcode image ... 6. Yahoo Stock symbols - Utilities/Mac Utilities ... Yahoo Stock symbols is a python script that pulls over 34,000 stock symbols from the yahoo finance website. ... 7. PDSYMS DWG symbols Library - Multimedia & Design/Illustration ... PDSYMS v2.0 is an Architectural/Interiors DWG-FORMAT symbols Library Add-on for all CADD packages running on all platforms and Operating Systems, including but notlimited to:IntelliCAD (2000 and higher), Microstation, eCADlite, CADvance, TurboCAD, GenericCADD,DataCAD, AutoSketch, QuickCADD, VisualCADD, LinuxCAD, VersaCAD, CADintosh,QCad, DenebaCAD, PCDraft, MacDraft, RealCADD, CADStd, FreeDraft, VDraft,PowerCADD, Vellum/Draft, FastCAD, EasyCAD, DynaCADD, DesignCAD, GizaCAD,as well as full ... 8. PDSYMS DXF symbols Library - Multimedia & Design/Graphic & Design ... PDSYMS v2.0 is an Architectural/Interiors DXF-FORMAT symbols Library Add-on for all CADD packages running on all platforms and Operating Systems, including but notlimited to:IntelliCAD (2000 and higher), Microstation, eCADlite, CADvance, TurboCAD, GenericCADD,DataCAD, AutoSketch, QuickCADD, VisualCADD, LinuxCAD, VersaCAD, CADintosh,QCad, DenebaCAD, PCDraft, MacDraft, RealCADD, CADStd, FreeDraft, VDraft,PowerCADD, Vellum/Draft, FastCAD, EasyCAD, DynaCADD, DesignCAD, GizaCAD,as well as full ... 9. Chinese symbols for Words screensaver - Desktop Enhancements/Screen Savers ... 
This screensaver includes 35 cool Chinese symbols with several calligraphic styles, together with inspiring background music.you can also enjoy the beautiful chinese handwriting. ... 10. making PDF - Business & Productivity Tools/Office Suites & Tools ... Make PDF documents and fillable PDF forms with making PDFLearn how to create PDF files using a variety of programs and techniques. The PDF tutorials in this section cover both basic PDF creation and ways to enhance your PDF files with links, bookmarks, and other options.. ... Making Math Symbols In Microsoft From Short Description 1. LangPad - math & Currency Characters - Utilities/System Utilities ... LangPad - math & Currency Characters provides an easy way to insert math & Currency characters and symbols into your WordPad and Notepad text. Click the mouse on a character or symbol in the chart, and it will be inserted into your text. ... 2. LangPad - International Characters - Utilities/System Utilities ... LangPad - International Characters provides an easy way to insert foreign language characters, math, currency, and literary symbols into your WordPad and Notepad text. Click the mouse on a character or symbol in the chart, and it will be inserted into your text. ... 3. MathCards - Educational/Mathematics ... math Cards helps students reinforce their math skills by building math equations or verifying facts.Children and teenagers can play against the computer or a partner in this mathematical facts game.Reinforce whole numbers, integers, fractions, roots, percents, exponents, equations and roman numerals. ... 4. Mad math - Utilities/Other Utilities ... Mad math is a math API for Java that is aimed at adding needed math functions and formulas that are not in the regular Java math API, such as fibonacci sequence, is prime, area formulas, greatest common factor, least common multiple etc. ... 5. Task Light - Educational/Mathematics ... 
Free multilingual test authoring mathematics software enables math teachers and tutors to easily prepare math tests, quizzes and homework from a repository of more than 500 solved math problems in arithmetic, pre-algebra, algebra, trigonometry and hyperbolic trigonometry, and to develop numerous variant tests around each prepared test. All tests, with or without the solutions, can be printed out. The software includes basic math problems and advanced tasks, such as solutions of linear, quadratic, ...
6. EMTask Light - Educational/Teaching Tools ... Free multilingual test authoring mathematics software with the same repository of more than 500 solved math problems and the same test-generation features described above. ...
7. PPT Create - Multimedia & Design/Animation ... PPT Create (making presentations). The program is intended for making presentations in Microsoft PowerPoint format. Its primary task is to help in quickly making presentations, complimentary messages, and facetious assemblies. The program does 95 percent of the work, so making a presentation takes seconds (for instance, 20 sheets of presentation take 0.4 sec), while manually this can take up to ???????. Order of functioning: unpack in any directory to start the program ...
8. WizFlow Flowcharter - Multimedia & Design/Graphic & Design ... WizFlow helps you create professional flowcharts and similar diagrams. It is uniquely tailored for very easy use with little or no training required. WizFlow allows you to work with a single object or a group of objects, drawing boxes or symbols of many shapes and connecting them with lines of various types.
You can enter explanatory text at any location, using a grid that helps you keep your work symmetrical and aligned. WizFlow handles the hard parts of drawing a flowchart and leaves you ...
9. Math ODF Recovery - Utilities/Other Utilities ... Kernel for Math, an easily available file repair tool for OpenOffice Math files. Free to download and easy to understand are the main attractions of this ODF file repair software. ...
10. Zozo's Flying Math - Utilities/Other Utilities ... Zozo's Flying Math is a math flashcards program appropriate for grade schoolers. It allows parents to configure the types of math problems presented, and then drills the student using a very simple and clean interface. ...

Making Math Symbols In Microsoft From Long Description
1. Rapid-Pi - Business & Productivity Tools/Word Processing ... Rapid-Pi is an add-on for Microsoft Word (and other word processors) that will transform the way you enter mathematical formulae, equations and expressions into documents. Rapid-Pi was designed with a single purpose in mind: to save you time when editing math in documents. Rapid-Pi's text-based input is simply a faster way to input math. Most equation editing programs require you to click on toolbar buttons or go through menus in order to insert symbols and expressions. This process ...
2. Math Coloring Book: Kindergarten - Games/Kids ... Solve kindergarten-level math problems while coloring these 50 pages. Discover a hidden picture on each page by coloring by numbers, shapes, counting, or other math concepts. The hidden pictures are revealed as the pages are colored correctly. The pages in this program focus on essential math skills children need to succeed in kindergarten. Skills include number recognition, shapes, counting, and math symbols. Learn while having fun! Download a free trial version of the Math Coloring Book for ...
3. The Math Quizzer - Educational/Mathematics ...
The Math Quizzer is freeware designed to help students improve their math skills through random math quizzes. The application is designed for all ages and has a number of difficulty levels. There is also a math game. The Math Quizzer is also connected to a web site in order to provide math learning content online. ...
4. Mathpad - Educational/Mathematics ... Mathpad is an easy-to-use text editor for mathematics. You can mix ordinary text with any mathematical expression. Ideal for math teachers creating quizzes, tests and handouts. You can also save the formatted text as an image. With most equation editors you choose a template with the mouse, type a few keystrokes and repeat the process. Mathpad doesn't work this way; mainly, you just type characters at the keyboard. Most people find this to be easier and faster. Mathpad has some unusual ...
5. Buildbug Math for Kids - Educational/Kids ... Buildbug kids math online game. Offers free math lessons and homework help, with an emphasis on geometry, algebra, statistics, and calculus. Also provides calculators and games. Due to heavy traffic this site has been experiencing some delays. The Math Forum's Internet Math Library is a comprehensive catalog of Web sites and Web pages relating to the study of mathematics. Every week I try to incorporate a cooperative lesson into our math class. Take math grades once a week instead of daily. ...
6. FlowBreeze Standard Flowchart Software - Business & Productivity Tools/Presentation Tools ... FlowBreeze Flowchart Generator for Microsoft Excel ... the fastest, easiest way to create professional-looking flow charts. ... Make flowcharts just by typing text - FlowBreeze automatically does the rest ... Over 100 built-in flowchart templates ... Text-To-Flowchart Wizard creates flowcharts from existing text in just a few clicks ... It generates the flowchart symbols for you, based on the words you use ...
It automatically applies the formatting you choose, adds flow lines, and aligns the flow ...
7. OpenSAL - Utilities/Other Utilities ... OpenSAL is a vector math scientific algorithm library and API designed to abstract the complexities of math libraries from the application developer. A C-language reference design is provided for over 400 math functions. ...
8. LiteralMath - Educational/Mathematics ... A text editor with the additional capabilities of math notation and hypertext, aimed at the high school / college environment. Uses the RTF format known to WordPad and Word. Generates HTML, so that math notation can be displayed by popular browsers. Users may specify links to other web documents without knowledge of HTML. Math notation is based on special fonts, thereby allowing browsers to render math at the speed of text. Very useful for math communication over the web, with clarity, simplicity, ...
9. Math Wizard - Utilities/Other Utilities ... Math Wizard is a group of math equation solvers. The programs included with Math Wizard are coded by Connor Smith. ...
10. Science Teacher's Helper - Educational/Science ... Science Teacher's Helper is an add-on for Microsoft Word. It was designed with a single purpose in mind: to save you time when editing math, chemistry and physics in documents. You can easily add 1200 functions, graphs and charts of physics, chemistry and math into your MS Word document. ...
Papers Published

1. S. Tlupova and J. T. Beale, Nearly singular integrals in 3D Stokes flow, Commun. Comput. Phys., vol. 14 (2013), pp. 1207-27 [pdf]
2. W. Ying and J. T. Beale, A fast accurate boundary integral method for potentials on closely packed cells, Commun. Comput. Phys., vol. 14 (2013), pp. 1073-93 [pdf]
3. A. T. Layton and J. T. Beale, A partially implicit hybrid method for computing interface motion in Stokes flow, Discrete and Continuous Dynamical Systems B, vol. 17 (2012), pp. 1139-53 [pdf]
4. J. T. Beale, Partially implicit motion of a sharp interface in Navier-Stokes flow, J. Comput. Phys., vol. 231 (2012), pp. 6159-72 [pdf]
5. J. T. Beale, Smoothing properties of implicit finite difference methods for a diffusion equation in maximum norm, SIAM J. Numer. Anal., vol. 47 (2009), pp. 2476-95 [pdf]
6. J. T. Beale and A. T. Layton, A velocity decomposition approach for moving interfaces in viscous fluids, J. Comput. Phys., vol. 228 (2009), pp. 3358-67 [pdf]
7. J. T. Beale, D. Chopp, R. LeVeque, and Z. Li, Correction to the article "A comparison of the extended finite element method with the immersed interface method for elliptic equations with discontinuous coefficients and singular sources" by Vaughan et al., Commun. Appl. Math. Comput. Sci., vol. 3 (2008), pp. 95-100 [pdf]
8. J. T. Beale, A proof that a discrete delta function is second-order accurate, J. Comput. Phys., vol. 227 (2008), pp. 2195-97 [pdf]
9. J. T. Beale and J. Strain, Locally corrected semi-Lagrangian methods for Stokes flow with moving elastic interfaces, J. Comput. Phys., vol. 227 (2008), pp. 3896-3920 [pdf]
10. J. T. Beale and A. T. Layton, On the accuracy of finite difference methods for elliptic problems with interfaces, Commun. Appl. Math. Comput. Sci., vol. 1 (2006), pp. 91-119 [pdf]
11. G. R. Baker and J. T. Beale, Vortex blob methods applied to interfacial motion, J. Comput. Phys., vol. 196 (2004), pp. 233-58 [pdf]
12. J. T. Beale, A grid-based boundary integral method for elliptic problems in three dimensions, SIAM J. Numer. Anal., vol. 42 (2004), pp. 599-620 [pdf]
13. J. T. Beale, Methods for computing singular and nearly singular integrals, J. Turbulence, vol. 3 (2002), article 041 (4 pp.) [pdf]
14. J. T. Beale, Discretization of layer potentials and numerical methods for water waves, Proc. of Workshop on Kato's Method and Principle for Evolution Equations in Mathematical Physics, H. Fujita, S. T. Kuroda, and H. Okamoto, eds., Univ. of Tokyo Press, pp. 18-26.
15. J. T. Beale and M.-C. Lai, A method for computing nearly singular integrals, SIAM J. Numer. Anal., vol. 38 (2001), pp. 1902-25 [ps]
16. J. T. Beale, A convergent boundary integral method for three-dimensional water waves, Math. Comp., vol. 70 (2001), pp. 977-1029 [ps]
17. J. T. Beale, Boundary integral methods for three-dimensional water waves, Equadiff 99, Proceedings of the International Conference on Differential Equations, Vol. 2, pp. 1369-78 [ps]
18. J. T. Beale, T. Y. Hou, and J. S. Lowengrub, Stability of boundary integral methods for water waves, Nonlinear Evolutionary Partial Differential Equations, X. X. Ding and T. P. Liu, eds., A.M.S., 1997.
19. J. T. Beale, T. Y. Hou, and J. S. Lowengrub, Convergence of a boundary integral method for water waves, SIAM J. Numer. Anal., vol. 33 (1996), pp. 1797-1843.
20. J. T. Beale, A. Lifschitz, and W. H. Suters, The onset of instability in exact vortex rings with swirl, J. Comput. Phys., vol. 129 (1996), pp. 8-29.
21. J. T. Beale, T. Y. Hou, and J. S. Lowengrub, Stability of boundary integral methods for water waves, Advances in Multi-Fluid Flows, Y. Renardy et al., eds., pp. 241-45, SIAM, Philadelphia, 1996.
22. J. T. Beale, A. Lifschitz, and W. H. Suters, A numerical and analytical study of vortex rings with swirl, Vortex Flows and Related Numerical Methods II, ESAIM Proc. 1, pp. 565-75, Soc. Math. Appl. Indust., Paris, 1996.
23. J. T. Beale, Analytical and numerical aspects of fluid interfaces, Proc. International Congress of Mathematicians 1994, S. Chatterji, ed., Vol. II, pp. 1055-64, Birkhauser, Basel, 1995.
24. J. T. Beale and C. Greengard, Convergence of Euler-Stokes splitting of the Navier-Stokes equations, Comm. Pure Appl. Math., vol. 47 (1994), pp. 1083-1115.
25. J. T. Beale, T. Y. Hou, J. S. Lowengrub, and M. Shelley, Spatial and temporal stability issues for interfacial flows with surface tension, Mathl. Comput. Modelling, vol. 20 (1994), no. 10/11, pp. 1-27.
26. A. Bourgeois and J. T. Beale, Validity of the quasigeostrophic model for large scale flow in the atmosphere and ocean, SIAM J. Math. Anal., vol. 25 (1994), pp. 1023-68.
27. J. T. Beale, T. Y. Hou, and J. S. Lowengrub, Growth rates for the linearized motion of fluid interfaces away from equilibrium, Comm. Pure Appl. Math., vol. 46 (1993), pp. 1269-1301.
28. J. T. Beale, T. Y. Hou, and J. S. Lowengrub, On the well-posedness of two-fluid interfacial flows with surface tension, Singularities in Fluids, Plasmas, and Optics, R. Caflisch et al., eds., pp. 11-38, NATO ASI Series, Kluwer, 1993.
29. J. T. Beale, E. Thomann, and C. Greengard, Operator splitting for Navier-Stokes and the Chorin-Marsden product formula, Vortex Flows and Related Numerical Methods, J. T. Beale et al., eds., pp. 27-38, NATO ASI Series, Kluwer, 1993.
30. J. T. Beale, The approximation of weak solutions to the Euler equations by vortex elements, Multidimensional Hyperbolic Problems and Computations, J. Glimm et al., eds., pp. 23-37, Springer-Verlag, New York, 1991.
31. J. T. Beale, Exact solitary water waves with capillary ripples at infinity, Comm. Pure Appl. Math., vol. 44 (1991), pp. 211-257.
32. J. T. Beale, A. Eydeland, and B. Turkington, Numerical tests of 3-D vortex methods using a vortex ring with swirl, Vortex Dynamics and Vortex Methods, C. Anderson and C. Greengard, eds., pp. 1-9, A.M.S., 1991.
33. J. T. Beale, Solitary water waves with ripples beyond all orders, Asymptotics beyond All Orders, H. Segur et al., eds., pp. 293-98, NATO ASI Series, Plenum, 1991.
34. J. T. Beale, Large-time behavior of model gases with a discrete set of velocities, Mathematics Applied to Science, J. Goldstein et al., eds., pp. 1-12, Academic Press, Orlando, 1988.
35. J. T. Beale, On the accuracy of vortex methods at large times, Computational Fluid Dynamics and Reacting Gas Flows, B. Engquist et al., eds., pp. 19-32, Springer-Verlag, New York, 1988.
36. J. T. Beale and D. Schaeffer, Nonlinear behavior of model equations which are linearly ill-posed, Comm. P.D.E., vol. 13 (1988), pp. 423-67.
37. J. T. Beale, Existence, regularity, and decay of viscous surface waves, Nonlinear Systems of Partial Differential Equations in Applied Mathematics, Part 2, Lectures in Applied Mathematics, Vol. 23, A.M.S., Providence, 1986, pp. 137-48.
38. J. T. Beale, A convergent three-dimensional vortex method with grid-free stretching, Math. Comp., vol. 46 (1986), pp. 401-24 and S15-S20.
39. J. T. Beale, Large-time behavior of discrete velocity Boltzmann equations, Comm. Math. Phys., vol. 106 (1986), pp. 659-78.
40. J. T. Beale and A. Majda, High order accurate vortex methods with explicit velocity kernels, J. Comput. Phys., vol. 58 (1985), pp. 188-208.
41. J. T. Beale and T. Nishida, Large-time behavior of viscous surface waves, North-Holland Mathematics Studies, vol. 128 (1985), pp. 1-14.
42. J. T. Beale, Large-time behavior of the Broadwell model of a discrete velocity gas, Comm. Math. Phys., vol. 102 (1985), pp. 217-35.
43. J. T. Beale, Large-time regularity of viscous surface waves, Arch. Rational Mech. Anal., vol. 84 (1984), pp. 307-52.
44. J. T. Beale, T. Kato, and A. Majda, Remarks on the breakdown of smooth solutions for the 3-D Euler equations, Comm. Math. Phys., vol. 94 (1984), pp. 61-66.
45. J. T. Beale and A. Majda, Vortex methods for fluid flow in two or three dimensions, Contemp. Math., vol. 28 (1984), pp. 221-29.
46. J. T. Beale, Large-time regularity of viscous surface waves, Contemp. Math., vol. 17 (1983), pp. 31-33.
47. J. T. Beale and A. Majda, Vortex methods I: Convergence in three dimensions, Math. Comp., vol. 39 (1982), pp. 1-27.
48. J. T. Beale and A. Majda, Vortex methods II: Higher order accuracy in two and three dimensions, Math. Comp., vol. 39 (1982), pp. 29-52.
49. J. T. Beale and A. Majda, The design and numerical analysis of vortex methods, Transonic, Shock, and Multidimensional Flows, R. E. Meyer, ed., Academic Press, New York, 1982.
50. J. T. Beale, The initial value problem for the Navier-Stokes equations with a free surface, Comm. Pure Appl. Math., vol. 34 (1981), pp. 359-392.
51. J. T. Beale and A. Majda, Rates of convergence for viscous splitting of the Navier-Stokes equations, Math. Comp., vol. 37 (1981), pp. 243-259.
52. J. T. Beale, Water waves generated by a pressure disturbance on a steady stream, Duke Math. J., vol. 47 (1980), pp. 297-323.
53. J. T. Beale, The existence of cnoidal water waves with surface tension, J. Differential Eqns., vol. 31 (1979), pp. 230-263.
54. J. T. Beale, Acoustic scattering from locally reacting surfaces, Indiana Univ. Math. J., vol. 26 (1977), pp. 199-222.
55. J. T. Beale, Eigenfunction expansions for objects floating in an open sea, Comm. Pure Appl. Math., vol. 30 (1977), pp. 283-313.
56. J. T. Beale, The existence of solitary water waves, Comm. Pure Appl. Math., vol. 30 (1977), pp. 373-389.
57. J. T. Beale, Spectral properties of an acoustic boundary condition, Indiana Univ. Math. J., vol. 25 (1976), pp. 895-917.
58. J. T. Beale, Purely imaginary scattering frequencies for exterior domains, Duke Math. J., vol. 41 (1974), pp. 607-637.
59. J. T. Beale and S. I. Rosencrans, Acoustic boundary conditions, Bull. Amer. Math. Soc., vol. 80 (1974), pp. 1276-1278.
60. J. T. Beale, Scattering frequencies of resonators, Comm. Pure Appl. Math., vol. 26 (1973), pp. 549-563.
Mathematics and Computer Science

Chair: James P. Brumbaugh-Smith; Timothy M. Brauch, Young S. Lee, Robin R. Mitchell, Andrew F. Rich, Eva G. Sagan

The Department of Mathematics & Computer Science seeks to graduate students who can: appropriately analyze a wide variety of mathematical and computing problems, understand and apply relevant theory and technology to solve real-world problems, develop and implement insightful and efficient solutions, and effectively communicate both abstract ideas and practical solutions.

Entering students take a placement test in mathematics prior to enrolling in courses. The test results, in conjunction with other criteria, are used to place students in an appropriate mathematics course. Advanced placement credit in calculus and statistics is possible for students who have an especially strong mathematical background.

Courses in mathematics are designed for students who want to: acquire cultural knowledge of mathematics and its applications, apply mathematical principles of analysis and modeling in the natural and social sciences and also in industry, prepare for graduate studies in mathematics or related disciplines, and become teachers of mathematics at the precollege and college levels.

Baccalaureate Degree

Courses listed in parentheses are prerequisites.

Major in mathematics, 43 hours: MATH 121, 122, 130, 231, 240, 251, 421, 433; three hours of 499; nine hours of approved electives, selected from: MATH 233, 245, 306, 330, 340, 380 or 480, 385 or 485; (PHYS 210, 220) PHYS 301 or (CHEM 211 and PHYS 210, 220) CHEM 341; (ECON 221) ECON 350; (CPTR 205) CPTR 310, 499. Majors must successfully complete the senior comprehensive evaluation prior to graduation. Details are available from the department chair.
Minor in mathematics, 25 hours: MATH 121, 130; 17 hours of electives selected from: MATH 122, 231, (CPTR 105) 233, MATH 240, 245, 251, 306, 330, 340, 421, 433, 380 or 480, 385 or 485; (PHYS 210, 220) PHYS 301 or (CHEM 211 and PHYS 210, 112) CHEM 341; (CPTR 205) CPTR 310; (ECON 221) ECON 350.

Certificate in Scientific Computing (Jim Brumbaugh-Smith, coordinator): CPTR 105; MATH 121, 233; successful completion of an applied experience approved by the coordinator.

Requirements for teaching majors are available in the Office of Teacher Education.

100 BASIC MATHEMATICS - 2 hours
A review of topics in arithmetic including: fractions, decimals, proportions and percents, signed numbers, order of operations, approximation and rounding, unit conversion, exponents, small and large numbers, and scientific notation. Fall.

105 BASIC ALGEBRA - 2 hours
A review of topics in elementary algebra including: inequalities; graphing of equations; problem solving using linear, quadratic and exponential equations; solving equations involving exponents and roots. Prerequisite: MATH 100 or placement. Fall. January or Spring.

107 MATHEMATICS FOR ELEMENTARY TEACHERS - 3 hours
A course designed especially for the teacher of elementary school mathematics. Topics include: sets, logic, problem solving, functions, intuitive geometry, transformational geometry and measurement. Prerequisite: MATH 105 or placement. Fall. Spring.

112 COLLEGE ALGEBRA - 3 hours
Topics include: exponents and radicals, factoring, linear and quadratic equations, linear inequalities, graphs and functions, polynomials, exponential and logarithmic functions, and systems of linear equations. Prerequisite: placement.

113 QUANTITATIVE REASONING - 3 hours
A survey of skills for understanding quantitative data in modern life.
This course focuses on: interpretation (and misinterpretation) of percentages, probabilities and statistics in contemporary decision-making; understanding of survey and experimental results as reported in mass media; and making logical and persuasive quantitative arguments. The course is designed primarily for students seeking the B.A. degree and does not satisfy the quantitative requirement for B.S. students. This course may not be taken by students who have previous credit for (or are concurrently enrolled in) MATH 115, 210 or 240. Prerequisite: MATH 105 or placement. Spring. C-1Q.

115 ELEMENTARY PROBABILITY AND STATISTICS - 3 hours
A course focusing on problem-solving and decision-making skills using the tools of probability and statistics. Topics include: basic and conditional probabilities, probability trees, expected value, normal distributions, application of randomization to sampling and experimentation, graphical and numerical summaries of data, uses and abuses of statistical data, and an introduction to confidence intervals, hypothesis testing and regression models. This course satisfies the Q requirement for both B.A. and B.S. students. This course may not be taken by students who have previous credit for (or are concurrently enrolled in) MATH 210 or 240. Prerequisite: MATH 105 or placement. Fall. Spring. C-1Q.

120 PRECALCULUS - 3 hours
Topics include: graphs and functions, polynomials and their zeros, complex numbers, exponential and logarithmic functions, trigonometry (functions, graphs and identities) and applications. Prerequisite: MATH 105 or placement. Fall. January.

121 CALCULUS I - 4 hours
An introduction to calculus including limits, continuity, derivatives and their applications, curve sketching, integrals and the Fundamental Theorem of Calculus. Trigonometric, exponential and logarithmic functions are included. Graphing calculators will be used. Prerequisite: MATH 120 or placement. Fall. Spring. C-1Q.
122 CALCULUS II - 4 hours
Topics include: numerical integration, applications of integration, techniques of integration, inverse trigonometric functions, an introduction to differential equations, improper integrals, sequences and series, and Taylor's Theorem. A computer-algebra system will be used. Prerequisite: MATH 121. Fall. Spring.

130 DISCRETE MATHEMATICS - 4 hours
An introduction to discrete methods used in mathematics and computer science. Principal topics covered are: logic, sets, algorithms, number theory, reasoning and proof, recursion, combinatorics, relations and graph theory. Prerequisite: MATH 120. Spring.

210 STATISTICAL ANALYSIS - 4 hours
An introduction to statistical techniques used in the social and natural sciences. Topics include: graphical and numerical summaries of data; sampling and experimental design; elementary probability; binomial, uniform, normal, Student's t, and chi-squared distributions; hypothesis tests and confidence intervals for means and proportions; ANOVA; and linear regression. Statistical software is introduced during weekly lab sessions. Students are expected to be proficient in using computer applications and the campus network. This course satisfies the Q requirement for both B.A. and B.S. students. This course may not be taken by students who have previous credit for (or are concurrently enrolled in) MATH 240. Prerequisite: MATH 105 or placement. Fall. January. Spring. C-1Q.

214 HISTORY OF MATHEMATICS - 3 hours
An overview of aspects of the history of mathematics from ancient times through the development of abstraction in the nineteenth century. The course will consider both the growth of mathematical ideas and the context in which these ideas developed in various civilizations. Prerequisites: MATH 121, 130.
231 MULTIVARIABLE CALCULUS - 4 hours
Topics include: vector analysis in two- and three-dimensional spaces, polar and spherical coordinates, curves in space; multivariable functions and their derivatives, multiple integrals, line integrals, and Green's and Stokes' Theorems. Prerequisites: MATH 122, 251. Spring.

233 SCIENTIFIC COMPUTING - 3 hours
A study of computational issues and methods used in applied mathematics and scientific computing. Topics include: computation errors; interpolation; convergence of numerical methods; approximate integration; numerical solution of ordinary differential equations; and numerical solution of systems of linear and non-linear equations. The course is oriented toward machine computation and involves programming of various solution techniques for problems in science, technology, engineering, and mathematics. Prerequisite: MATH 121.

240 MATHEMATICAL STATISTICS - 4 hours
Basic concepts of probability; expectation, variance, covariance, distribution functions; bivariate, marginal and conditional distributions. Treatment of experimental data; normal sampling theory; confidence intervals and tests of hypotheses; introduction to regression and to analysis of variance. Prerequisite: MATH 122. Fall, odd years.

245 ORDINARY DIFFERENTIAL EQUATIONS - 3 hours
Topics include: classification of differential equations; methods of solving first order equations, second and higher order linear equations, and systems of linear equations; series solutions; and existence theorems. Prerequisite: MATH 122. Spring, even years.

251 LINEAR ALGEBRA I - 4 hours
Solution of linear systems, matrices and determinants, eigenvalues and eigenvectors, vector algebra, representation of lines and planes in Rn, linear transformations and mathematical models using matrix algebra. Prerequisites: MATH 121, 130. Fall.
303 MATHEMATICS CURRICULUM AND METHODS - 3 hours
The study of curriculum, methodology, computer applications, materials, and assessment appropriate for early childhood and elementary school (preK-6th grades) mathematics programs. Field experience is a required component. Taken as part of the Elementary Methods Block. Prerequisites: MATH 107 and EDUC 340.

306 GEOMETRY - 3 hours
A study of the logical structure and content of both Euclidean and non-Euclidean geometries. The approach to Euclidean geometry is via Hilbert's axioms. Prerequisite: MATH 251. Fall, even years.

330 OPERATIONS RESEARCH MODELS - 3 hours
Introduction to mathematical modeling processes; allocation models involving linear programming; simplex algorithm; dynamic programming; transportation models; network models; graph theory; Markov chain models; queuing theory and game theory. Prerequisite: MATH 130 or 251. January or Spring, even years.

340 LINEAR ALGEBRA II - 3 hours
Numerical methods for solving linear systems, the four fundamental subspaces and applications, orthogonality and approximation, eigenvectors, eigenvalues, and diagonalization of matrices and applications. Prerequisite: MATH 251. Spring, odd years.

421 REAL ANALYSIS - 3 hours
Topics include: the completeness of the real number system; sequences and their limits; elementary point-set topology; and continuity and uniform continuity. The theory of series, the derivative and the Riemann integral will be treated as time permits. Prerequisites: MATH 130, 231. Fall, even years.

433 ALGEBRAIC STRUCTURES - 4 hours
Basic properties of groups, rings, factor groups, ideals, quotient rings, integral domains, fields, polynomials and elementary number theory. Prerequisite: MATH 251. Fall, odd years.

440 SECONDARY MATHEMATICS METHODS (W) - 3 hours
The study of standards, curriculum, teaching methods and assessment appropriate for middle and secondary school (5-12) mathematics programs.
Topics will include appropriate use of mathematical technology, history of mathematics, approaches to problem solving and modes of mathematical understanding. Prerequisites: EDUC 111, EDUC 230, MATH 130, MATH 240. Enrollment in MATH 240 may be concurrent. Fall, alternate years.

475 INTERNSHIP IN MATHEMATICS - 1-3 hours
Students work in business, industry, government or other agencies applying mathematical tools (e.g., probability, statistics, optimization) to real-world problems. Students are supervised by a professional with significant experience in such applications and also by a faculty member. A written report describing the overall project and the student's contribution will complete the course. Students must formally enroll in this course prior to beginning their work experience. The course may be repeated once for a maximum of four hours credit. Prerequisites: MATH 130, 122; permission of the department chair.

499 SENIOR PROJECT (W) - 1-3 hours
An in-depth study of some area of mathematics under the guidance of a primary and a secondary faculty advisor. Students will write a thesis and give an oral presentation based on the thesis. Students will enroll either once or twice for a total of three hours credit. Prerequisites: ENG 111; permission of the department chair.

380 or 480 SPECIAL PROBLEMS - 1-4 hours
A student who has demonstrated ability to work independently may propose a course and pursue it with a qualified and willing professor. The department chair and the vice president and dean for academic affairs must also approve. A set of guidelines is available at the Office of the Registrar.

385 or 485 SEMINAR - 1-4 hours
An in-depth consideration of a significant scholarly problem or issue.
Students pursue a supervised, independent inquiry on an aspect of the topic and exchange results through reports and ...

Courses in computer science are designed for students who want to: acquire a conceptual foundation for understanding and working with computers in a continuously changing field, learn practical skills in programming and software development, prepare for careers in computing in business and industry, and prepare for further study in computer science or information systems. Emphasis is placed on working with a variety of industries and software companies to provide students with real-world software experience through classroom projects, internships and senior research.

Baccalaureate Degree

Courses listed in parentheses are prerequisites.

Major in computer science, 43-44 hours: CPTR 105, 205, 225, 308, 310, 314, 331, 333; three hours of CPTR 475 or 499; MATH 121, 130, 251; one course selected from: CPTR 121, 324, 410, 415; MATH 233. Majors must successfully complete the senior comprehensive evaluation prior to graduation. Details are available from the department chair.

Minor in computer science, 22-24 hours: CPTR 105, 205; MATH 130; four courses selected from: CPTR 121, 225, 308, 310, 314, 324, 331, 333, 410, 415; MATH 233.

Minor in information systems, 27-28 hours: ACCT 211, BUS 111, 310; CPTR 105, 205; (MATH 120) MATH 130; two hours of BUS 106 on different topics; one course selected from: CPTR 121, 225, 308, 314.

Associate of Arts Degree

Major in computer applications, 23-25 hours: CPTR 105, 205; MATH 130, 120 or 121; three hours of BUS 106 on different topics; two courses selected from: CPTR 121, 221, 225, 308, 314.

Courses

CPTR 105 COMPUTER PROGRAMMING I - 3 hours
A first course in computer programming. Students will learn how to conceptualize, write and run programs. Programming topics include variables and types, methods, decision structures, loops, arrays, classes and objects.
In addition to the syntax and semantics of programming, debugging, documentation, and programming aesthetics are also emphasized. Prerequisite: MATH 105 or higher mathematics placement. Fall. Spring.

121 WEB DEVELOPMENT - 3 hours
An introductory course in developing applications for the Web. The student will develop the analytical, technical, and design skills necessary for building interactive, functional, and usable websites using cutting-edge tools. Topics will include: creating static web pages, client-side scripting, server management, dynamic websites using databases, graphic design, version control, typography, usability, and accessibility. Technologies used will include: HTML/XHTML, XML, RSS, JavaScript, CSS, PHP, MySQL, Apache, and Subversion. Prerequisite: CPTR 105. Spring, even years.

205 COMPUTER PROGRAMMING II - 3 hours
A course focusing on advanced programming concepts emphasizing object-oriented programming. Topics include data abstraction, polymorphism and file I/O. Basic algorithmic analysis and use of data structures are also introduced. Students will write several large programs and gain an overall understanding of software development. Prerequisite: CPTR 105. Spring.

221 SOFTWARE DEVELOPMENT - 4 hours
Combines a range of material related to the design, implementation and testing of software systems with the practical experience of implementing such a system as a member of a programming team. The course covers software process models, requirements, specification, design, documentation, validation and project management. In addition, it includes discussion of professional and ethical responsibilities in software development. Prerequisite: CPTR 205. Spring, odd years.

225 DATABASE PROGRAMMING - 3 hours
This course introduces the fundamental topics in database design and database-backed application development. The overall focus is on building applications with the efficient use of databases.
Topics will include the relational model, SQL, dependencies, normalization, XML, JDBC, and Web programming. Prerequisites: CPTR 205; MATH 130. Spring, odd years.

308 COMPUTER ARCHITECTURE - 3 hours
An introduction to the organization of computers. Topics include: information representation, assembly language programming, registers, linkage, I/O and device handlers, and architectural performance. Prerequisites: CPTR 205; MATH 130. Fall, even years.

310 ALGORITHMS AND DATA STRUCTURES - 3 hours
This course explores the mathematical modeling of problems in computing. We will study the algorithms and data structures used for common tasks such as searching, sorting, and solving graph and geometric problems. The course will rely heavily on programming as the means for presenting the solutions. The emphasis will be on constructing correct and efficient algorithms and on analyzing their performance. Prerequisites: CPTR 205; MATH 130. Fall, odd years.

314 OPERATING SYSTEMS AND NETWORKS - 4 hours
An overview of the key components and functions of computer operating systems and local-area networks. Topics include: file systems, system processes (including issues of concurrency, synchronization and deadlock), scheduling, memory management, data communications and networks. Prerequisites: CPTR 205; MATH 130. Spring, even years.

324 COMPUTER GRAPHICS - 3 hours
An introduction to the theory of three-dimensional (3D) computer graphics and the development of graphical applications. The student will learn concepts and techniques that form the backbone of modern computer graphics. The course will be focused on using free or open-source tools such as Processing, Blender, and the OpenGL library. Topics include: graphics hardware and software, vision, light and shading, object modeling techniques, curves and curved surfaces, textures, and shadows. Prerequisites: CPTR 105; MATH 251. Spring, even years.

331 SOFTWARE DEVELOPMENT I - 3 hours
Covers the design, implementation and testing of software systems.
The course will introduce the current technology and tools used for software development. Topics include software process models, requirements, specification, design, documentation, validation and project management. Professional and ethical responsibilities in software development will also be included. Prerequisites: CPTR 205; MATH 130. Fall, even years.

333 SOFTWARE DEVELOPMENT II - 3 hours
This course focuses on putting software engineering theory into practice. Students will work in a team on a semester-length project for a real customer, while applying a chosen software process model to their software development. Emphasis will be placed on structured engineering, design and usability, testing, team management, version control, customer relations and meeting project milestones. Prerequisite: CPTR 331. Spring, odd years.

410 TOPICS IN COMPUTER SCIENCE - 3 or 4 hours
This course will be offered based on sufficient interest of students and faculty in particular areas of computer science. Possible topics include: artificial intelligence, numerical computation, computer graphics, expert systems, real-time systems, simulation, telecommunications, resource utilization, coding theory, UNIX and compiler design. This course requires significant independent work including a major research or programming project. Prerequisite: varies depending on topic.

415 PRINCIPLES OF PROGRAMMING LANGUAGES - 4 hours
A course on the design and implementation of programming languages. Major areas are: language syntax (lexical properties, Backus-Naur form, parsing), language representations (data structures, control structures, binding, execution environment, formal semantic models) and language styles (procedural, functional and object-oriented languages). Prerequisites: CPTR 205, 310. Fall, odd years.

475 INTERNSHIP IN COMPUTER SCIENCE (W) - 1-3 hours
Students work in the computer field in the development of software or hardware algorithms or applications.
Students are supervised by a computer science professional and a faculty member. A written report describing the overall project and the student's contribution will complete the course. Students must formally enroll in this course prior to beginning the work experience. Students may enroll twice for up to four hours credit. Prerequisites: two courses beyond CPTR 205; permission of the department chair.

499 SENIOR PROJECT (W) - 1-3 hours
Students will conduct a significant research project to consist of the development, analysis and/or implementation of an algorithm or software system, or an in-depth study in some area of computer science. A formal paper as well as an oral presentation will be required. Course may be repeated once for a maximum of three hours credit. Prerequisite: permission of the department chair.

380 or 480 SPECIAL PROBLEMS - 1-4 hours
A student who has demonstrated ability to work independently may propose a course and pursue it with a qualified and willing professor. The department chair and the vice president and dean for academic affairs must also approve. A set of guidelines is available at the Office of the Registrar.

385 or 485 SEMINAR - 1-4 hours
An in-depth consideration of a significant scholarly problem or issue. Students pursue a supervised, independent inquiry on an aspect of the topic and exchange results through reports and
Finite Model Theory/Model Theory
From Wikibooks, open books for an open world

Many important theorems of Model Theory do not hold when restricted to the finite case, like Gödel's completeness theorem or the compactness theorem.

Failure of the Compactness Theorem

Consider the following sentence σ3:

$\sigma_3 = \exists x \, \exists y \, \exists z \, (x \neq y \land y \neq z \land x \neq z)$

It says that there are at least 3 different elements in a universe. One can expand σ3 easily for n other than 3. So, let Σ = {σ1, σ2, σ3, ...} be the infinite set of all these sentences. Now Σ is obviously not satisfiable by a finite model, although every finite subset of Σ is.

Ok, but why does that matter? One of the most useful tools in general Model Theory is the Compactness Theorem, stating: "Let Σ be a set of FO sentences. If every finite subset of Σ is satisfiable, then Σ is satisfiable." But as just shown this doesn't hold for the finite case, thus there is no Compactness Theorem in Finite Model Theory!
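For general n, the sentence σn saying "there are at least n different elements" can be written out explicitly; this is a standard formulation filling in the generalisation the text alludes to:

```latex
\sigma_n \;=\; \exists x_1 \, \exists x_2 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
```

Every model of σn has at least n elements, so no finite structure can satisfy all of Σ = {σ1, σ2, σ3, ...} at once, while any finite subset of Σ holds in a sufficiently large finite structure.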
Solving fourth order differential equation - URGENT

Differential Equations Question

Originally Posted by sasikanth
I have two second order differential equations which need to be solved.
x1''(t) = 8 x2(t)
x2''(t) = 2 x1(t)

Your equations are linear with constant coefficients. I would handle them in either of the following ways:
1. Apply the Laplace transform to the equations - this will transform the problem into solving an algebraic system of equations, or
2. Let the solution be x1 = q1 e^(rt) and x2 = q2 e^(rt). Substitute this assumption and you can determine r, q1 and q2 from an eigenvalue-eigenvector equation.
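The second suggestion can be checked quickly: differentiating x1'' = 8 x2 twice and substituting x2'' = 2 x1 gives x1'''' = 16 x1, so the trial solution e^(rt) leads to the characteristic equation r^4 - 16 = 0. A numerical sketch (not part of the original thread):

```python
import numpy as np

# Characteristic polynomial r^4 - 16 = 0, obtained from x1'''' = 16 * x1.
roots = np.roots([1, 0, 0, 0, -16])

# The four characteristic exponents should be 2, -2, 2i and -2i;
# each value of r pairs with an eigenvector (q1, q2).
for r in sorted(roots, key=lambda z: (z.real, z.imag)):
    print(np.round(r, 6))
```

The general solution is then a linear combination of e^(2t), e^(-2t), cos(2t) and sin(2t), with the q1/q2 ratio fixed by either original equation.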
Math Forum Discussions

Topic: Spherical Geometry - is it non-Euclidean
Replies: 3   Last Post: Feb 16, 1998 8:21 AM

Re: Spherical Geometry - is it non-Euclidean
Posted: Feb 15, 1998 9:46 PM

David A. wrote:
> I've seen a lot about spherical geometry and whether it's non-Euclidean
> or not, and I don't want to resurrect the whole debate. I just have
> one question. One of Euclid's postulates was that any two points
> determine a line. If you take the globe, and pick two points on the
> same line of latitude, do they determine a line? I had thought that
> lines on a sphere were defined as great circles.
> Thanks,
> David

Yes, they do determine a great circle (line). To find the great circle, intersect the plane that passes through the two points and the center of the globe with the surface of the globe. Of course, in most cases this great circle will be different from the common line of latitude (that is, except when the points are on the equator).

By the way, the shortest distance between two points on the globe lies on a great circle, so this shows that the shortest distance between two points on the same line of latitude is generally NOT along the common line of latitude. The shortest route from New York to Tokyo passes close to the north pole, I believe.

Date       Subject                                        Author
2/15/98    Spherical Geometry - is it non-Euclidean        David A.
2/15/98    Re: Spherical Geometry - is it non-Euclidean    Peter Ash
2/16/98    Re: Spherical Geometry - is it non-Euclidean    Lou Talman
2/16/98    Re: Spherical Geometry - is it non-Euclidean    LCrand2228
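The claim about great circles being shorter can be checked numerically. Below is a sketch (the latitude and longitude difference are arbitrary example values) comparing the distance along a parallel of latitude with the great-circle distance between the same two points:

```python
import math

R = 6371.0                    # mean Earth radius in km (approximate)
phi = math.radians(60.0)      # common latitude of both points
dlon = math.radians(90.0)     # difference in longitude

# Distance travelling along the parallel of latitude (a small circle).
along_latitude = R * math.cos(phi) * dlon

# Great-circle distance, via the spherical law of cosines applied to
# two points at the same latitude.
sigma = math.acos(math.sin(phi) ** 2 + math.cos(phi) ** 2 * math.cos(dlon))
great_circle = R * sigma

print(round(along_latitude, 1), round(great_circle, 1))
```

The great-circle route comes out shorter, which is why the shortest New York to Tokyo route bends toward the pole.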
Understanding Monads Via Python List Comprehensions

An attempt to explain monads and their usefulness in Haskell, assuming only some simple knowledge of Python.

List comprehensions are a great Python feature that have been borrowed from Haskell/ML, and ultimately from set theory. As you probably know, they provide a convenient syntax for building lists from other lists or sequences:

>>> lst = [1, 2, 3]
>>> [x*2 for x in lst]
[2, 4, 6]

The alternative to the list comprehension is something like this:

>>> newlist = []
>>> for x in lst:
...     newlist.append(x*2)

List comprehensions remove this annoying bit of boilerplate, helping to give Python the conciseness we love. The list comprehension knows all about creating an empty initial list and appending values etc, so we don't have to do that each time.

While learning Haskell, I found list comprehensions a reassuring plateau of familiarity amidst the otherwise rather treacherous, alien landscape, but they also provide a way in to the impenetrable forest of 'Monads'. In Haskell, the above Python code looks very similar (at a GHCi interactive prompt, where Prelude> is the standard prompt):

Prelude> let lst = [1,2,3]
Prelude> [ x*2 | x <- lst ]
[2,4,6]

That should be pretty easy to understand if you know Python, but here is a full 'translation' of the list comprehension into English: unpack each value in the list 'lst' in turn, give it the label 'x', and for each one return 'x * 2' as a value in a new list.

Haskell has another way of implementing the same thing, though -- using its 'do' notation:

Prelude> do { x <- lst; return (x*2) }
[2,4,6]

The similarity with the above should be pretty obvious, and in fact it does exactly the same thing. You'll notice that the order is reversed so it is more like our English version, and matches an imperative style of thinking (do this, then that), whereas the list comprehension syntax reflects its origins in the mathematical notations of set theory. However, there is one difference.
While list comprehensions are specific to creating lists, 'do' notation has no knowledge of lists at all. In fact, the only list in this expression is the input value 'lst'. So, to prove a point, let's eliminate even that bit, by creating a function out of this expression:

Prelude> let double val = do { x <- val; return (x*2) }

This is a definition of the function 'double', that takes a single value 'val'. (At the interactive prompt, we have to add the keyword 'let', which wouldn't be present in normal Haskell source code). We can now use our function as before:

Prelude> double [0,1,2]
[0,2,4]

GHCi doesn't complain about any of this, even though Haskell is statically typed and we haven't told it what type of thing val is. So what type of thing is val? We can ask GHCi using :type, but we'll have to do it indirectly, by asking it the type of our function double:

[WARNING: the next few paragraphs are a bit tricky, but it gets easier again soon - hang in there!]

Prelude> :type double
double :: (Monad m, Num a) => m a -> m a

Woah! What on earth does all that mean? Well, let's ignore the bit in brackets, and look at the end first:

m a -> m a

The arrow tells us we have a function (GHC has at least got that bit right), and it takes values of type m a and returns values of type m a. m a is a 'parameterised type' -- as a simplification, we can say that 'm' is a placeholder for a container type, and 'a' is a placeholder for the type of thing it contains. In the case of our input lst, m is a list (represented as [] usually) and a is an integer. This is normally written [Integer] (a list of integers), rather than [] Integer. In our instance, double took a list of integers and returned a list of integers. But GHC knew that our function was more generic than simply operating on lists of integers. From the 'do' notation, and the lack of anything about lists, it deduced that 'm' can actually be any 'Monad'.
From the use of multiplication in '* 2', it deduced that 'a' has to be restricted to numbers -- more specifically, a type which implements the 'Num' interface (which includes integers, fractional numbers etc). The (Monad m, Num a) => bit is just indicating these restrictions.

So, how did our 'double' function achieve the feat of unpacking the data from the list, doubling each item, and returning the data in a new list, all without knowing anything about lists? Well, by using the 'do' notation, it was implicitly using a set of methods that are defined in the 'Monad' interface. Since lists are instances of the Monad interface, and define all the methods correctly, it just worked.

Are there other Monads that we could use instead of lists? Well, of course, or no-one would have bothered to create an interface if there was only a single instance of it. One example would be the Maybe monad, that contains either Nothing, or an actual value, written Just somevalue. The Maybe monad encapsulates a value that might be there, but might not, and the logic that if it contains Nothing, then any function operating on it will have to return Nothing too. So we can now do this:

Prelude> double Nothing
Nothing
Prelude> double (Just 1.5)
Just 3.0

Magic! Trying to do Nothing * 2 will give you a type error, but by using the Maybe monad, and our function that was generic over monads, we did it easily, with no extra work. Impressed?

In other languages you can create functions that work with different types of collections, by using, for example, the iterator protocol in Python, or the IEnumerable interface in C#. But here we have taken it to a higher level -- the Monad interface is like an abstraction of any kind of container. Alternatively, Maybe and lists can both be thought of as encapsulating different strategies for computing values. Maybe handles the case of 0 or 1 values, while list can handle any number of values, and it tries them all.
This in turn leads to the concept that a monadic value represents a computation -- a method for computing a value, bound together with its input value. This becomes especially important when you move on to 'State transformation' monads, such as the famous IO monad. In these cases, the container is actually a function (but just don't think about that if it hurts your head).

So, in Haskell, Monads are simply an interface for a very generic container, but an interface that is so important that it has special syntactic support in the language, similarly to how the iterator protocol and lists in Python have various bits of syntactic support (such as for, in, list comprehensions, etc). The interface is more abstract than that of collections, which makes it more difficult to understand, but more powerful too, and the syntactic sugar all the more sweet, as it is much more broadly applicable. For instance, monads were used to write the Parsec parser library, which ends up with a syntax that allows a pretty direct translation from BNF, and is in fact more readable. The parser monads know all about applying constraints, backtracking etc, in the same way that the list monad knows how to take each element and apply the function to each one. Writing monads is hard, but it pays off as using them in Haskell is surprisingly easy, and allows you to do some very powerful things. I hope that begins to explain why monads are useful.

The next difficulty is understanding the methods that make up the Monad interface, which is beyond the scope of this article really, but I'll try to give an introduction. You can already guess something about the Monad methods. One of them you have seen explicitly -- it's the 'return' method, responsible for packing things up into the monad. The other is called 'bind' or '>>=', and it does the 'unpacking' involved with the <- arrow in the do notation. Actually, the 'bind' method doesn't really unpack and return the data.
Instead, it is defined in such a way that it handles all unpacking 'internally', and you have to provide functions that always have to return data inside the monad. Why is this important? Well, some uses of monads, especially the IO monad, need to ensure that data can't 'escape', in order to be able to make certain guarantees that keep your program working as expected. Monads like Maybe and list are much less possessive -- they are quite happy for you to get the data out. But by defining the Monad interface in this way, it can handle both cases, and it turns out to be quite convenient for both.

What is the <- in the do notation then? It is simply some syntactic sugar that allows you to define the right kind of functions easily and painlessly. It looks very much like 'unpack this data from the monad so I can use it', so it helps conceptually. In fact, together with the rest of the body of the 'do' block it forms an anonymous lambda function, and we could write our double function something pretty much like this in Python:

def double(val):
    return val.bind(lambda x: val.return_(x*2))

(I've had to use return_ to avoid the clash with Python's 'return' keyword). Haskell's do notation eliminates the explicit call to bind, and the lambda, making it quite a bit easier to use. This becomes especially important when you have long 'do' blocks, and functions with multiple monadic input values. One final thing to say - you will be pleased to learn that in Haskell you can actually use whitespace (newlines and indentation) instead of semi-colons and braces! (OK, OK, calm down you Pythonistas at the back, that's enough cheering now ;-). I've done a complete example implementation of the List and Maybe monads in Python, along with the double function as above, trying to stay close to how it works in Haskell.
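As a rough sketch of what such a Maybe implementation might look like (hypothetical class layout, not the author's actual downloadable code):

```python
class Maybe:
    """A minimal Maybe monad: holds either a value, or nothing at all."""

    def __init__(self, value=None, is_nothing=False):
        self.value = value
        self.is_nothing = is_nothing

    def return_(self, value):
        # 'return' packs a plain value up into the monad.
        return Maybe(value)

    def bind(self, func):
        # 'bind' does the unpacking internally: Nothing short-circuits,
        # otherwise func is applied to the contained value and must
        # itself return a Maybe.
        if self.is_nothing:
            return self
        return func(self.value)

    def __repr__(self):
        return "Nothing" if self.is_nothing else "Just %r" % self.value


Nothing = Maybe(is_nothing=True)


def double(val):
    return val.bind(lambda x: val.return_(x * 2))
```

With this sketch, double(Maybe(1.5)) gives Just 3.0 and double(Nothing) gives Nothing, matching the Haskell session above.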
You can't really translate Haskell's type system, but Python can do a pretty good job of implementing this, partly due to the fact that non-instance methods can be polymorphic, unlike many other OOP languages. The code is also a nice example of how succinct functional style code often is -- there isn't a function more than 4 lines long.

Has this helped at all? I'd be interested in any feedback, or corrections. I'm a Haskell newbie myself, so I may have got some things wrong, in which case I'll pretend that I was simplifying things for the sake of clarity :-) .

• Why monads are important, and an alternative explanation of how monads work, by Shannon Behrens: Everything Your Professor Failed to Tell You About Functional Programming
• To understand the IO monad, I've found this IO inside article the most helpful. It starts from a completely different angle, though, so don't be surprised. Eventually the different concepts do converge :-)
• 2006-07-28: added a translation of the 'double' function into Python, to explain the do notation's implicit lambda.
• 2006-08-15: added some Python code that implements the same thing in Python.
• 2006-09-11: fixed some bugs in the downloadable Python code, and added 'lifting' examples to it.
Prealgebra for Two-Year Colleges/Appendix (procedures)/Multiplying fractions
From Wikibooks, open books for an open world

Multiply the numerators to find the numerator of the result. Multiply the denominators to find the denominator of the result. Then reduce the result. For example, $\frac{2}{21} \times \frac{35}{22} = \frac{70}{462}$. We can then reduce $\frac{70}{462} = \frac{70 {\color{Red}\div 2}}{462 {\color{Red}\div 2}} = \frac{35}{231} = \frac{35 {\color{Red}\div 7}}{231 {\color{Red}\div 7}} = \frac{5}{33}$.

An easier way to multiply the same fractions is to first find the prime factors of the numerators and denominators, then reduce, then multiply. $\frac{2}{21} \times \frac{35}{22} = \frac{2}{3 \cdot 7} \times \frac{5 \cdot 7}{2 \cdot 11} = \frac{{\color{Red}\cancel{2}}}{3 \cdot {\color{Blue}\cancel{7}}} \times \frac{5 \cdot {\color{Blue}\cancel{7}}}{{\color{Red}\cancel{2}} \cdot 11} = \frac{5}{33}$.
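The worked example can be checked in code; Python's fractions module reduces each result to lowest terms automatically (a sketch, not part of the original page):

```python
from fractions import Fraction

# Fraction reduces every result to lowest terms automatically,
# so the cancellation above happens for free.
product = Fraction(2, 21) * Fraction(35, 22)
print(product)  # 5/33, the same reduced answer as above
```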
Concrete Mixed Design Method (BS Method)

Objective of Concrete Mix Design
The main objectives of concrete mix design are:
• To determine the proportions of the concrete mix constituents: cement, fine aggregate (normally sand), coarse aggregate, and water.
• To produce concrete of the specified properties.
• To produce a satisfactory end product, such as a beam, column or slab, as economically as possible.

Theory of Mix Designs

The Process of Concrete Mix Design
The method of concrete mix design applied here is in accordance with the method published by the Department of Environment, United Kingdom (in 1988). There are two categories of initial information required:
1. Specified variables; the values that are usually found in specifications.
2. Additional information; the values normally available from the material supplier.
Reference data consisting of published figures and tables is required to determine the design values, including:
• Mix parameters such as target mean strength, water-cement ratio and concrete density.
• Unit proportions such as the weight of materials.
The design process can be divided into 5 primary stages. Each stage deals with a particular aspect of the concrete mix design.

Stage 1: Determining the Free Water/Cement Ratio
i) Specify the required characteristic strength at a specified age, f[c].
ii) Calculate the margin, M.
M = k x s ….. [ F1 ]
k = A value appropriate to the defect percentage permitted below the characteristic strength. [ k = 1.64 for 5 % defects ]
s = The standard deviation (obtained from CCS 1).
iii) Calculate the target mean strength, f[m].
f[m] = f[c] + M ….. [ F2 ]
f[m] = Target mean strength
f[c] = The specified characteristic strength
iv) Given the type of cement and aggregate, use table CCS 1 to obtain the compressive strength at the specified age that corresponds to a free water/cement ratio of 0.5.
v) In figure CCS 4, follow the 'starting line' to locate the curve which passes through that point (the compressive strength for a water/cement ratio of 0.5). To obtain the required curve representing the strength, it may be necessary to interpolate between two curves in the figure. At the target mean strength, draw a horizontal line crossing the curve. From this point the required free water/cement ratio can be determined.

Stage 2: Determining the Free-Water Content
Given the concrete slump or Vebe time, determine the free-water content from table CCS 2.

Stage 3: Determining the Cement Content
Cement Content = Free-Water Content / Free-Water/Cement Ratio ….. [ F3 ]
The resulting value should be checked against any maximum or minimum value that may be specified. If the calculated cement content from F3 is below a specified minimum, this minimum value must be adopted, resulting in a reduced water/cement ratio and hence a higher strength than the target mean strength. If the calculated cement content is higher than a specified maximum, then the specified strength and workability cannot simultaneously be met with the selected materials; try changing the type of cement, or the type and maximum size of the aggregate.

Stage 4: Determining the Total Aggregate Content
This stage requires an estimate of the density of fully compacted concrete, which is obtained from figure CCS 5. This value depends upon the free-water content and the relative density of the combined aggregate in the saturated surface-dry condition. If no information is available regarding the relative density of the aggregate, an approximation can be made by assuming a value of 2.6 for un-crushed aggregate and 2.7 for crushed aggregate. With the estimate of the density of the concrete, the total aggregate content is calculated using equation F4:
Total Aggregate Content = D – C – W …..
[ F4 ]
D = The wet density of concrete (in kg/m^3)
C = The cement content (in kg/m^3)
W = The free-water content (in kg/m^3)

Stage 5: Determining the Fine and Coarse Aggregate Contents
This stage involves deciding how much of the total aggregate should consist of material smaller than 5 mm, i.e. the sand or fine aggregate content. Figure CCS 6 shows recommended values for the proportion of fine aggregate depending on the maximum size of aggregate, the workability level, the grading of the fine aggregate (defined by the percentage passing a 600 μm sieve) and the free-water/cement ratio. The best proportion of fines to use in a given concrete mix design will depend on the shape of the particular aggregate, the grading and the usage of the concrete. The final calculation, equation F5, to determine the fine and coarse aggregate contents is made using the proportion of fine aggregate obtained from figure CCS 6 and the total aggregate content derived from Stage 4.
Fine Aggregate Content = Total Aggregate Content x Proportion of Fines ….. [ F5 ]
Coarse Aggregate Content = Total Aggregate Content – Fine Aggregate Content

Procedures of Design Mixing

Production of Trial Mix Design
1. The volume of mix needed to make three cubes of size 100 mm is calculated. The volume of mix is sufficient to produce 3 cubes and to carry out the concrete slump test.
2. The volume of mix is multiplied by the constituent contents obtained from the concrete mix design process to get the batch weights for the trial mix.
3. The mixing of concrete is carried out according to the procedures given in the laboratory guidelines.
4. Firstly, the cement, fine and coarse aggregate are mixed in a mixer for 1 minute.
5. Then, water is added and the cement, fine and coarse aggregate and water are mixed for approximately another minute.
6. When the mix is ready, the tests on the mix proceed.

Tests on Trial Mix Design
1. The slump tests are conducted to determine the workability of fresh concrete.
2.
Concrete is placed and compacted in three layers in a firmly held slump cone, each layer tamped 25 times with a tamping rod. On the removal of the cone, the difference in height between the uppermost part of the slumped concrete and the upturned cone is recorded in mm as the slump.
3. Three cubes of 100 mm x 100 mm x 100 mm are prepared. The cubes are cured before testing. The procedures for making and curing are as given in the laboratory guidelines. Thinly coat the interior surfaces of the assembled mould with mould oil to prevent adhesion of the concrete. Each mould is filled with two layers of concrete, each layer tamped 25 times with a 25 mm square steel rod. The top surface is finished with a trowel and the date of manufacture is recorded on the surface of the concrete. The cubes are stored undisturbed for 24 hours at a temperature of 18 to 22 °C and a relative humidity of not less than 90 %. The cubes are all covered with wet gunny sacks. After 24 hours, the mould is stripped and the cubes are cured further by immersing them in water at a temperature of 19 to 21 °C until the testing date.
4. Compressive strength tests are conducted on the cubes at the age of 7 days. Then, the mean compressive strengths are calculated.

The Calculations
Here is one example of a calculation from one of the concrete mix designs obtained in the laboratory. We have to fill in all particulars in the concrete mix design form with some calculations.

Firstly, we specified 30 N/mm^2 at 7 days for the characteristic strength. Then, we obtained the standard deviation, s, from figure CCS 3, so s = 8 N/mm^2. From formula F1, k = 1.64 for 5 % defects. The margin, M, is calculated as below:
M = k x s = 1.64 x 8 = 13.12 N/mm^2
With formula F2, the target mean strength, f[m], is calculated as below:
Target mean strength, f[m] = f[c] + M = 30 + 13.12 = 43.12 N/mm^2
The type of cement is Ordinary Portland Cement (OPC).
For the aggregates, the laboratory's fine aggregate is uncrushed and the coarse aggregate is crushed. Then we obtain the free-water/cement ratio from table CCS 1. For OPC (7 days) using crushed aggregate, the approximate compressive strength at a free-water/cement ratio of 0.5 is 36 N/mm^2. After that, on figure CCS 4, the curve for 42 N/mm^2 at a free-water/cement ratio of 0.5 is plotted, and the free-water/cement ratio obtained at the target mean strength of 43.12 N/mm^2 is 0.45.

Next, we specified a slump of about 20 mm, and the maximum aggregate size we used in the laboratory is 10 mm. For the values specified above, we can obtain the free-water content from table CCS 2: at a slump of 10 – 30 mm and a maximum aggregate size of 10 mm, the approximate free-water content for uncrushed aggregates is 180 kg/m^3 and for crushed aggregates is 205 kg/m^3. Because coarse and fine aggregates of different types are used, the free-water content is estimated by the expression:

Free-water Content, W = 2/3 W[f] + 1/3 W[c] = (2/3 x 180) + (1/3 x 205) = 188.33 kg/m^3

W[f] = free-water content appropriate to the type of fine aggregate
W[c] = free-water content appropriate to the type of coarse aggregate

The cement content can also be obtained by calculation with expression F3:

Cement Content, C = Free-water Content / Free-water/Cement Ratio = 188.33 / 0.45 = 418.52 kg/m^3

We assumed that the relative density of the aggregate (SSD) is 2.7. Then, from figure CCS 5 with a free-water content of 188.33 kg/m^3, we obtained a concrete density of 2450 kg/m^3. The total aggregate content can be calculated by:

Total Aggregate Content = D – C – W = 2450 – 418.52 – 188.33 = 1843.15 kg/m^3

The percentage passing the 600 μm sieve for the grading of the fine aggregate is about 60 %. The proportion of fine aggregate obtained from figure CCS 6 is 38 %.
Then the fine and coarse aggregate contents can be obtained by calculation:

Fine Aggregate Content = Total Aggregate Content x Proportion of Fines = 1843.15 x 0.38 = 700.40 kg/m^3
Coarse Aggregate Content = Total Aggregate Content – Fine Aggregate Content = 1843.15 – 700.40 = 1142.75 kg/m^3

The quantities per m^3 are thus:

Cement = 418.52 kg
Water = 188.33 kg
Fine aggregate = 700.40 kg
Coarse aggregate (10 mm) = 1142.75 kg

The volume of trial mix for 3 cubes = [(0.1 x 0.1 x 0.1) x 3] + [25 % contingency of the trial mix volume] = 0.003 + 0.00075 = 0.00375 m^3

The quantities for the trial mix of 0.00375 m^3 are:

Cement = 1.57 kg
Water = 0.71 kg
Fine aggregate = 2.63 kg
Coarse aggregate (10 mm) = 4.29 kg

The Results of the Mix Design

Slump test: true slump of 55 mm.

All 3 concrete cubes produced were then cured for 7 days. After that, the compressive cube test was carried out. The results are as follows:

Sample                          1       2       3
Compressive strength (N/mm^2)   32.37   33.54   35.70

Average = (32.37 + 33.54 + 35.70) / 3 = 33.87 N/mm^2

For cubes after 7 days of curing, the compressive strength should not be less than 2/3 of the target mean strength:

2/3 x 43.12 = 28.75 N/mm^2 < 33.87 N/mm^2

After 7 days of curing, the compressive strength of the concrete cubes produced by the mix design method passes the specified strength requirements.

Discussions Upon Concrete Mix Designs

Although our compressive strength passes the specified requirements, we still identified several factors which can reduce the compressive strength of concrete mixes produced in the experiment. The main factor is the condition of the aggregates, i.e. whether they have been exposed to sunlight or rainfall. When the free-water/cement ratio is high, the workability of the concrete is improved. However, excessive water causes a "honey-comb" effect in the concrete produced. The concrete cubes become porous, and hence their compressive strength falls well below the design value.
Other possible reasons include over-compaction, improper mixing methods and some calculation errors. A few suggestions to avoid the problems previously faced:

• All the raw materials, that is cement, aggregates, and sand, should be protected from precipitation or other elements which may affect their physical properties.
• The quantities of the ingredients may be adjusted if necessary; theoretical values are not always suitable. For example, if the aggregates are wet or saturated, less water should be added, and vice versa.
• Compaction should be done carefully, as either under- or over-compaction will have a significant negative effect on the concrete produced.

The Conclusion

1. By using the concrete mix design method, we have calculated the quantities of all ingredients, that is water, cement, fine and coarse aggregate, according to the specified proportions.
2. The concrete produced fulfilled the compressive strength requirements; nevertheless, the steps mentioned above should be taken into consideration to avoid the problems discussed.
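The worked calculation above can be reproduced in a few lines. This is a sketch only: the chart-derived values (the 0.45 free-water/cement ratio, the 2450 kg/m^3 wet density, and the 38 % proportion of fines) are taken from the text, since they come from the design charts rather than from arithmetic.

```python
# Mix-design arithmetic from the worked example (BS/DoE method).
w_uncrushed, w_crushed = 180.0, 205.0                  # table CCS 2 values
free_water = (2/3) * w_uncrushed + (1/3) * w_crushed   # mixed aggregate types
cement = free_water / 0.45                             # F3: C = W / (w/c ratio)
total_agg = 2450.0 - cement - free_water               # F4: D - C - W
fine_agg = total_agg * 0.38                            # F5: proportion of fines
coarse_agg = total_agg - fine_agg

for name, v in [("water", free_water), ("cement", cement),
                ("fine aggregate", fine_agg), ("coarse aggregate", coarse_agg)]:
    print(f"{name}: {v:.2f} kg/m^3")
# water: 188.33, cement: 418.52, fine aggregate: 700.40,
# coarse aggregate: 1142.75 -- matching the figures in the text
```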
Scientific Notation - Problem 2

Rewriting a number from standard notation into scientific notation: what we want to do when we are writing a number in scientific notation is first figure out where our decimal should go. Remember that the first term in our scientific notation has to be between 1 and 10; its absolute value has to be between 1 and 10. So, looking at our first couple of digits, I realize that my decimal place has to go between the 1 and the 4. Then all we want to do is count the number of decimal places that we have to move our decimal. Right now it's at the end, so I move it one spot, 2, 3, 4, 5, 6, 7, so I know that I have 1.47 times 10 to the 7th, and I know it's a +7 because I have a big number here; a -7 would move it to a smaller decimal.

Same idea for this one, where we're dealing with a decimal. First we want to figure out where our decimal spot should go in our scientific notation form, giving us a number between 1 and 10; here it has to go between the 8 and the 9. Then count the number of spots we need to move our decimal: 1, 2, 3, 4, 5. Therefore I know I have 8.92 times 10 to the negative 5th; this time I am going to a smaller decimal, so I need my exponent to be negative. A big number gives you a positive exponent, a small decimal gives you a negative exponent, and then just make sure your decimal is in the appropriate spot.
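The counting procedure in the transcript can be written out as a short function. This is a sketch; the two test numbers (14,700,000 and 0.0000892) are assumptions reconstructed from the worked answers 1.47 × 10^7 and 8.92 × 10^-5.

```python
def to_scientific(x):
    """Return (mantissa, exponent) with 1 <= |mantissa| < 10."""
    exponent = 0
    m = abs(x)
    while m >= 10:      # big number: move the decimal left, exponent goes up
        m /= 10
        exponent += 1
    while m < 1:        # small decimal: move the decimal right, exponent goes down
        m *= 10
        exponent -= 1
    return round(m, 2), exponent

print(to_scientific(14_700_000))  # (1.47, 7)
print(to_scientific(0.0000892))   # (8.92, -5)
```

As in the transcript, a big number ends up with a positive exponent and a small decimal with a negative one.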
What are binary, octal, and hexadecimal notations?

Binary Notation

All data in modern computers is stored as series of bits. A bit can take on one of two values. The two values are generally represented as the numbers 0 and 1. The most basic form of representing computer data, then, is to represent a piece of data as a string of 1's and 0's, one for each bit. What you end up with is a binary, or base-2, number; this is binary notation. For example, the number 42 would be represented in binary as 101010.

Interpreting Binary Notation

As with normal decimal (base-10) notation, each digit moving from right to left represents an increasing order of magnitude (or power of ten). With decimal notation each succeeding digit's contribution is ten times greater than the previous digit. Increasing the first digit by one increases the number represented by one, increasing the second digit by one increases the number by ten, the third digit increases the number by 100, and so on. The number 111 is one less than 112, ten less than 121, and one hundred less than the number 211. The concept is the same with binary notation except that each digit is a power of two greater than the preceding digit rather than a power of ten. Instead of 1's, 10's, 100's, and 1000's digits, binary numbers have 1's, 2's, 4's, and 8's. Thus, the number two in binary would be represented as a 0 in the ones place and a 1 in the twos place, i.e., 10. Three would be 11, a 1 in the ones place and a 1 in the twos place. No numeral greater than 1 is ever used in binary notation.

Octal and Hexadecimal Notation

Since binary notation can be cumbersome, two more compact notations are often used, octal and hexadecimal. Octal notation represents data as a base-8 number. Each digit in an octal number represents three bits. Similarly, hexadecimal notation uses base-16 numbers, representing four bits with each digit.
Octal numbers use only the digits 0-7, while hexadecimal numbers use all ten base-10 digits (0-9) and the letters a-f (representing the numbers 10-15). The number 42 is written in octal as 52 and in hexadecimal as 2a. It can sometimes be difficult to tell whether data is being represented as octal or hexadecimal (especially if a hexadecimal number doesn't use one of the digits 8-f), so one convention that is often used to distinguish these is to put 0x in front of hexadecimal numbers. Thus, you will often see 0x2a as another, less ambiguous, way of representing the number 42 in hexadecimal. An example of this usage can be seen here: Character set comparison chart.

Note: The term binary when used in phrases such as "binary file" or "binary attachment" has a related but slightly different meaning than the one discussed here. A binary file is one in which the eighth bit of each byte is used for data. Computers and programs can read binary files, but people cannot. Executable files, compiled programs, WordPerfect documents, SAS and SPSS system files, and spreadsheets are all examples of binary files. Files that contain machine-specific codes (i.e., processor-specific microcode) are binary files. However, not all binary files contain processor-specific codes. Some binary files contain text or data in a non-ASCII format that is unrelated to the microcode used by the processor. For example, most graphics files, all compressed files, and many other file types use all eight bits per byte, so are termed "binary".

A bit is a binary digit, the smallest increment of data on a machine. A bit can hold only one of two values: 0 or 1. Because bits are so small, you rarely work with information one bit at a time. Bits are usually assembled into a group of 8 to form a byte. A byte contains enough information to store a character, like "h". Byte is an abbreviation for "binary term". A single byte is composed of 8 consecutive bits capable of storing a single character.
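The notations above, including the 0x convention, can be checked directly with Python's built-in conversion functions:

```python
# The same value 42 in the three notations discussed in the text.
n = 42
print(bin(n))  # 0b101010
print(oct(n))  # 0o52
print(hex(n))  # 0x2a

# Parsing the strings back with an explicit base recovers the same number:
assert int("101010", 2) == int("52", 8) == int("2a", 16) == 42
```

Python prefixes octal with 0o (and binary with 0b) for the same disambiguation reason the article gives for 0x.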
The double bubble conjecture (now theorem) is that, in $\mathbb{R}^3$, the unique perimeter-minimizing double bubble enclosing and separating regions $R_1$ and $R_2$ of prescribed volumes $v_1$ and $v_2$ is the standard double bubble consisting of three spherical caps meeting along a common circle at 120-degree angles (for equal volumes, the middle cap is a flat disc). The physical fact expressed by the double bubble conjecture was observed and published by Plateau in 1873 (see JFM 06.0516.03). One of the difficulties is that existence proofs rely on allowing the regions $R_1$ and $R_2$ to be disconnected. A priori, even the exterior region complementary to $R_1$ and $R_2$ might be disconnected. In this paper, an estimate of Hutchings is used to prove that the larger region is connected. A stability argument is used to show that the smaller region has at most two components. A second crucial part of the argument involves the consideration of rotations about an axis orthogonal to the axis of symmetry of the double bubble. The proper choice of axis allows the construction of variations that respect both volume constraints. Stability implies that the variation satisfies a nice differential equation leading to sufficient information to conclude that the surface is made up of pieces of spheres.
53A10 Minimal surfaces, surfaces with prescribed mean curvature
53C42 Immersions (differential geometry)
Left Associativity

Date: 05/27/99 at 00:45:57
From: Mike
Subject: Left to right rule

I'm just wondering... When we add and subtract we usually do it from left to right. For example, 5 - 3 - 2 = ?: we do 5 - 3 = 2, then 2 - 2 = 0. Is this left-to-right thing a law of mathematics, or do we just use it for consistency?

If it isn't any sort of law for math, then in the above example the answer could be 4. Thus, as a professor once told me, this question would contain no solution.

Date: 05/27/99 at 08:47:58
From: Doctor Rick
Subject: Re: Left to right rule

Hi, Mike, thanks for your question. This rule ("left associativity") is not a law but a convention. Yes, we do this for consistency. We could have chosen to associate right, as long as we were consistent. We could also have chosen to have no such rule at all. As the professor said, _if_ the left associativity convention were annulled, expressions without parentheses would be ambiguous; they would have more than one solution, and therefore no solution. In the latter case, we would have no way to determine how to evaluate 5-3-2, and we would have to use parentheses to "disambiguate" the expression: we would write either (5-3)-2 or 5-(3-2). As it is, we do adhere to the left associativity convention, so 5-3-2 is meaningful. But you can always use parentheses to prescribe either left or right associativity explicitly. All the left-associativity rule does is to tell us where the parentheses belong by default (if they are omitted). Computer languages have precedence and associativity rules that describe exactly how the computer is to evaluate any expression you can throw at it.
For instance, my C++ programming manual has this Operators Associativity Type --------- ------------- ---- () left to right parentheses * / % left to right multiplicative + - left to right additive << >> left to right stream insertion/extraction < <= > >= left to right relational == != left to right equality = right to left assignment If you don't know C++ (or C), don't worry about what some of these operators mean; the point is that the precedence and associativity are prescribed. The "order of operations" is defined by the top-to-bottom order: multiplication is done before addition, for instance. If two operators are on the same precedence level, they are carried out either left to right or right to left; note that in C++, there is one operator that has right-to-left associativity. It's harder for people to keep all these rules straight than it is for computers, so it's good programming style to put in parentheses wherever a human reader might possibly get confused, even though the computer knows exactly what to do. The same goes double for math that is written solely for human readers: use parentheses wherever they might avoid confusion. - Doctor Rick, The Math Forum
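The convention is easy to check in any language that shares C's left-to-right rule for subtraction. A quick check (Python rather than C++, but the associativity is the same):

```python
# Left associativity: subtraction groups left to right, so 5 - 3 - 2
# is read as (5 - 3) - 2, not 5 - (3 - 2).
assert 5 - 3 - 2 == (5 - 3) - 2 == 0
assert 5 - (3 - 2) == 4

# Python's exponentiation operator is right-associative, an analogue of
# the single right-to-left entry (assignment) in the C++ table above:
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
```

Both answers from the original question (0 and 4) appear, and the parentheses make explicit which grouping the default convention picks.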
Riverton, NJ Math Tutor Find a Riverton, NJ Math Tutor ...I also have 7+ years experience teaching college-level math. I am a world-renowned expert in the Maple computer algebra system, which is used in many math, science, and engineering courses. My tutoring is guaranteed: During our first session, I will assess your situation and determine a grade that I think you can get with regular tutoring. 11 Subjects: including differential equations, logic, calculus, precalculus ...I have a B.A. in scientific illustration. I was very effective in critiques and frequently assisted the other students in my classes. I've been painting with acrylic paint for over 10 years and it is one of my primary media. 19 Subjects: including algebra 1, algebra 2, calculus, grammar I graduated from West Point with a Bachelor of Science degree in Engineering Management, and I currently teach mathematics, physics and engineering at an independent school in the Philadelphia suburbs. I have tutored middle and high school students in the areas of PSAT/SAT/ACT preparation, math (Al... 19 Subjects: including algebra 1, algebra 2, calculus, ACT Math ...In addition, I have worked as an assistant in a kindergarten classroom. As an Elementary Ed. major, I have taken and excelled in many courses on teaching Reading/Language Arts, such as Teaching Literacy and Differentiated Literacy. These courses have taught me various strategies about phonemic awareness, phonics, and decoding words. 15 Subjects: including prealgebra, algebra 1, reading, English ...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. 
13 Subjects: including algebra 1, algebra 2, calculus, geometry
Fluid Dynamics and the Navier-Stokes Equations
instead modeled using one of a number of turbulence models and coupled with a flow solver that assumes laminar flow outside the turbulent region. Turbulence usually occurs above a Reynolds number of roughly 3000. It causes increased energy loss (as heat), more drag (on the moving body), and generates sound waves (noise).
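The Reynolds-number criterion mentioned above can be sketched as a one-line formula, Re = ρvD/μ. The threshold of ~3000 follows the figure quoted in the text (the exact transition value depends on the flow geometry), and the water-in-a-pipe numbers are illustrative assumptions:

```python
def reynolds(rho, v, D, mu):
    """Reynolds number: density * velocity * characteristic length / viscosity."""
    return rho * v * D / mu

# Water (rho = 1000 kg/m^3, mu = 1e-3 Pa*s) at 1 m/s in a 5 cm pipe:
Re = reynolds(rho=1000.0, v=1.0, D=0.05, mu=1.0e-3)
print(f"{Re:.0f}")  # 50000
print("turbulent" if Re > 3000 else "laminar")  # turbulent
```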
Closedness properties of internal relations II: Bourn localization Zurab Janelidze We say that a class $\mathbb{D}$ of categories is the Bourn localization of a class $\mathbb{C}$ of categories, and we write $\mathbb{D} = \mathrm{Loc}\mathbb{C}$, if $\mathbb{D}$ is the class of all (finitely complete) categories $\mathcal{D}$ such that for each object $A$ in $\mathcal{D}$, $\mathrm{Pt}(\mathcal{D}\downarrow A) \in \mathbb{C}$, where $\mathrm{Pt}(\mathcal{D}\downarrow A)$ denotes the category of all pointed objects in the comma-category $(\mathcal{D}\downarrow A)$. As D. Bourn showed, if we take $\mathbb{D}$ to be the class of Mal'tsev categories in the sense of A. Carboni, J. Lambek, and M. C. Pedicchio, and $\mathbb{C}$ to be the class of unital categories in the sense of D. Bourn, which generalize pointed Jónsson-Tarski varieties, then $\mathbb{D} = \mathrm {Loc}(\mathbb{C})$. A similar result was obtained by the author: if $\mathbb{D}$ is as above and $\mathbb{C}$ is the class of subtractive categories, which generalize pointed subtractive varieties in the sense of A. Ursini, then $\mathbb{D} = \mathrm{Loc}(\mathbb{C})$. In the present paper we extend these results to abstract classes of categories obtained from classes of varieties. We also show that the Bourn localization of the union of the classes of unital and subtractive categories is still the class of Mal'tsev categories. Keywords: Mal'tsev, unital and subtractive categories; fibration of points 2000 MSC: 18C99, 08B05, 18A25 Theory and Applications of Categories, Vol. 16, 2006, No. 13, pp 262-282. TAC Home
Neural networks for modeling gene-gene interactions in association studies

Our aim is to investigate the ability of neural networks to model different two-locus disease models. We conduct a simulation study to compare neural networks with two standard methods, namely logistic regression models and multifactor dimensionality reduction. One hundred data sets are generated for each of six two-locus disease models, which are considered in a low and in a high risk scenario. Two models represent independence, one is a multiplicative model, and three models are epistatic. For each data set, six neural networks (with up to five hidden neurons) and five logistic regression models (the null model, three main effect models, and the full model) with two different codings for the genotype information are fitted. Additionally, the multifactor dimensionality reduction approach is applied.

The results show that neural networks are more successful in modeling the structure of the underlying disease model than logistic regression models in most of the investigated situations. In our simulation study, neither logistic regression nor multifactor dimensionality reduction is able to correctly identify biological interaction. Neural networks are a promising tool to handle complex data situations. However, further research is necessary concerning the interpretation of their parameters.

The investigation of complex diseases plays an important role in genetic epidemiology, where the identification of genetic risk factors is of great interest. Besides the study of main effects, the interplay of two or more genetic risk factors is gaining more and more attention. The identification of such biological interaction, or epistasis, however, is linked to new challenges for statistical methods. A major problem is the discrepancy between statistical and biological interaction.
Statistical interaction is commonly defined as the deviation from an additive effect of single risk factors on the outcome, respectively on the transformed outcome. In logistic regression models, for example, a multiplicative structural model is applied and an additive effect on the logit-transformed outcome implies a multiplicative effect on the untransformed outcome. Therefore, statistical interaction in a logistic regression model is understood as deviation from a multiplicative effect. On the contrary, biological interaction is present if one gene is influencing the effect of another one [1]. Both terms do not coincide as was shown for example by North et al. [2] or Foraita et al. [3]. Nevertheless, a meaningful interpretation of genetic studies requires the detection of biological interaction with statistical methods (cf. [4,5]). A variety of parametric and non-parametric methods has been proposed for modeling and detecting gene-gene interaction, e.g. support-vector machines [6], random forests [7,8], multi-factor dimensionality reduction (MDR, [9,10]), combinatorial partitioning methods [11], focused interaction testing framework [12], classification and regression trees (CART, [13]), logic regression [14], and lasso regression [15]. A useful classification is given by Musani et al. [16], who distinguish between regression-based methods, data reduction-based methods, and pattern recognition methods in their overview. Despite the wealth of these approaches, none of the proposed methods is optimal for all two-locus disease models (see e.g. [17-19]). Consequently, there is no established method for analyzing gene-gene interactions so far [20]. Since parametric methods have problems to detect interaction in the absence of main effects and non-parametric approaches are ineffective when main effects are present [16,21], it might well be that there is no single approach appropriate for all types of biological interaction. 
Currently, generalized linear models, and here logistic regression models, as well as MDR are predominantly applied (see e.g. [22-27]). Another tool that has been employed in genetic epidemiology during the last 15 years is the neural network approach (see e.g. [28-32]). Neural networks are a flexible statistical tool to model any functional relationship between covariates and response variables. Therefore, they represent a promising approach to deal with the difficulties associated with modeling biological gene-gene interactions. They have as well been successfully applied for variable selection as for example with genetic programming neural networks (GPNN, [33-36]) or grammatical evolution neural networks (GENN, [37,38]). Both approaches were developed to identify an optimal network topology. Motsinger et al. [39] successfully applied GENN to simulated genome wide association data with 500,000 Single Nucleotide Polymorphisms (SNPs) showing the general ability of neural networks to handle such large data sets. However, variable selection is not the focus of this paper. The aim of this paper is to explore the ability of neural networks to model different types of biological gene-gene interactions. For this purpose, a simulation study is conducted to investigate the behavior of neural networks in various situations. We assume a case-control study with equal numbers of cases and controls. Following the scenarios of Risch [40] and the concept of epistatic models as classified by Li and Reich [41], different theoretical types of gene-gene interactions are studied. There are exactly two loci involved, i.e. variable selection is not a problem. The results are compared with those of logistic regression models and those of MDR analyses. Finally, the advantages and disadvantages of using a neural network approach are discussed. Neural networks A feed-forward multilayer perceptron (MLP) is chosen as neural network [42]. 
The general idea of an MLP is to approximate arbitrary functional relationships between covariates and response variables. The underlying structure of an MLP is a weighted, directed graph, whose vertices are called neurons and whose edges are called synapses. The neurons are organized in layers and each layer is fully connected by synapses to the next layer. The input layer contains all considered covariates and the output layer the response variables. An arbitrary number of so-called hidden layers can be included between the input and the output layer. See Figure 1 for an example of a neural network with one hidden layer.

Figure 1. Neural network. Neural network with one hidden layer consisting of three hidden neurons.

Data passes through the neural network as signals. These signals travel along the synapses and pass the neurons, where they are processed: all incoming signals are added and the activation function σ is applied to the resulting sum. Additionally, a weight is attached to each of the synapses. A positive weight indicates an amplifying, a negative weight a repressing effect on the signal. During the training process, the weights are modified by a learning algorithm. The learning algorithm minimizes an error function that depends on the difference between the given output and the output estimated by the neural network. In general, the strength of the modification depends on a specified learning rate.

The minimal MLP without a hidden layer is equivalent to the generalized linear model [43] and computes

o(x) = σ(wᵀx),

where w denotes the weight vector including the intercept, x the input vector, and σ the activation function. Any arbitrary function can be chosen as the activation function, although most learning algorithms require a differentiable activation function.
Choosing the inverse of the link function used for the logistic regression model, σ(z) = 1/(1 + exp(-z)), the MLP without a hidden layer is algebraically equivalent to the logistic regression model and computes

o(x) = 1/(1 + exp(-wᵀx)).

In this case, all weights w_i of the MLP correspond to the regression coefficients β_i of the logistic regression model. Hidden layers can be included to increase the modeling flexibility. An MLP with one hidden layer computes the function

o(x) = σ(w_0 + Σ_j w_j σ(w_{0j} + Σ_i w_{ij} x_i))

and is capable of modeling any piecewise continuous function [44]. Here, there is a lack of interpretation of the parameters. In the present paper, we investigate MLPs with at most one hidden layer. Resilient backpropagation [45] and cross entropy are chosen as learning algorithm and error function, respectively. The latter choice guarantees equivalence of the trained weights to maximum-likelihood estimation (see e.g. [46]). The employment of resilient backpropagation as learning algorithm does not require a transformation of continuous data. It solves the problem of choosing an appropriate learning rate for each data situation.

Design of the simulation study

We conduct a simulation study, where neural network models are used to fit different two-locus disease models in a case-control design. For each of these models, one low risk and one high risk scenario is simulated. Unconditional logistic regression models are fitted to the same data sets to compare the results with an established method. For judging the ability to model the underlying disease model, the estimated penetrance matrices are compared to the theoretical penetrance matrices.

Two-locus disease models

Six different two-locus disease models are considered: three models introduced by Risch [40] and three different epistatic models. They can be distinguished by the structure of their penetrance matrices f = [f_ij]_{i,j}, where i, j ∈ {0, 1, 2} represent the genotypes at the two loci.

1. The first two-locus disease model is Risch's additivity model (ADD).
Here, the penetrance matrix is given by summing the so-called penetrance terms a_i and b_j,

f_ij = P(Y = 1 | G_A = i, G_B = j) = a_i + b_j,

where Y denotes the case-control status and G_A and G_B, G_A, G_B ∈ {0, 1, 2}, the genotypes at the two involved loci. The penetrance terms a_i and b_j are restricted to 0 ≤ a_i, b_j ≤ 1 and a_i + b_j ≤ 1. This model represents biological independence of both loci.

2. For Risch's heterogeneity model (HET), the penetrance matrix is also determined by the penetrance terms:

f_ij = a_i + b_j - a_i b_j.

Like the additivity model, the heterogeneity model describes a model of biological independence for 0 ≤ a_i, b_j ≤ 1. However, in this case no further constraints on the penetrance terms are imposed.

3. The third setting is Risch's multiplicative model (MULT). The penetrance matrix is given by the penetrance terms as follows:

f_ij = a_i b_j.

The multiplicative model represents biological interaction.

4. In the first epistatic model (EPI RR), the penetrance matrix is determined by a constant term c, denoting the baseline risk of getting the disease, and a term r, the risk increase or decrease. This model assumes that both genes have a recessive effect on the disease, since the risk deviates from the baseline only if both loci carry two mutated alleles.

5. In the second epistatic model (EPI DD), both loci are assumed to be dominant: the risk deviates from the baseline only if both loci carry at least one mutated allele.

6. The last considered scenario is a mixed epistatic model (EPI RD) with risk terms r_1 and r_2. In this situation, one gene (A) has a recessive and one gene (B) has a dominant effect on the disease.

All epistatic models represent gene-gene interaction. By choosing the parameters r, r_1, r_2 and the ratios a_1/a_0, a_2/a_0, b_1/b_0, and b_2/b_0, different risk scenarios can be generated.

Data generation

The data generation follows a two-step procedure.
As a first step, basic populations with one million observations are simulated. For the six two-locus disease models introduced above, we investigate two risk scenarios each (see Table 1). This results in 12 basic populations with two biallelic loci, A and B. The genetic information is drawn randomly with a minor allele frequency of 0.3 at both loci to ensure sufficient cell frequencies in the final case-control samples. Both loci are assumed to be in linkage equilibrium, and the Hardy-Weinberg equilibrium is assumed to hold. The case-control status is drawn according to the probabilities of the penetrance matrix for the respective disease model and risk scenario. In all 12 settings, the parameters are chosen such that the overall disease prevalence equals 0.01. The genotype information is described by a codominant coding, i.e. the genotype at each locus represents the number of mutated alleles. As a second step, 100 case-control samples with 1,000 cases and 1,000 controls are drawn randomly from each basic population, i.e. from each combination of two-locus disease model and risk scenario. Overall, this results in 12 × 100 case-control samples to be analyzed.

Modeling the data

Model-building with neural networks is done using six different network topologies, from zero neurons in the hidden layer (i.e. no hidden layer) up to five neurons in the hidden layer. Each topology is trained five times, with synaptic weights initialized with random numbers drawn from a standard normal distribution, to avoid local minima. From these fitted models, the best network topology for each data set is chosen using Akaike's Information Criterion (AIC, [47]). The following five logistic regression models are fitted to each data set: the null model (NM), three main effect models (only locus A (SiA), only locus B (SiB), both main effects (ME)), and a full model including both main effects and an interaction term (FM).
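The first step of the data generation described above, simulating a basic population under Hardy-Weinberg and linkage equilibrium and drawing the case-control status from a penetrance matrix, can be sketched as follows. This Python sketch (the study used R) takes the recessive epistatic model as an example, with the baseline risk c chosen so that the overall prevalence is 0.01:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.3                                        # minor allele frequency, both loci
hwe = np.array([(1 - q)**2, 2 * q * (1 - q), q**2])  # genotype probabilities (HWE)

# EPI RR penetrance matrix; c solves prevalence = c * (1 + (r - 1) * q**4) = 0.01.
r = 5.0
c = 0.01 / (1 + (r - 1) * q**4)
f = np.full((3, 3), c); f[2, 2] = r * c

n = 1_000_000
gA = rng.choice(3, size=n, p=hwe)              # linkage equilibrium:
gB = rng.choice(3, size=n, p=hwe)              # loci drawn independently
case = rng.random(n) < f[gA, gB]               # disease status from penetrances

print(case.mean())                             # close to the target prevalence 0.01
```

Case-control samples would then be formed by drawing 1,000 observations from the cases and 1,000 from the controls of such a population.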
The best model for each data set is chosen based on the AIC. Note that the neural network with zero neurons in the hidden layer is algebraically equivalent to the main effect model ME. In a second approach, logistic regression models are fitted to the data with two dichotomous design variables representing each locus. Instead of counting the number of mutated alleles, these two variables reflect the heterozygous genotype and the homozygous genotype with two mutated alleles, respectively. For instance, the main effect model for locus A only (SiA) is modeled with a codominant coding as

logit P(Y_k = 1) = β_0 + β_1 · G_Ak,

as opposed to

logit P(Y_k = 1) = β_0 + β_1 · 1(G_Ak = 1) + β_2 · 1(G_Ak = 2)

with design variables. The observation is indexed by k, β represents the regression coefficients, and 1 denotes an indicator function. Table 2 gives an overview of the fitted statistical models and the number of parameters needed for all considered models. These three statistical approaches deliver as output an estimate of the probability of being a case, i.e. the penetrance for each genotype-genotype combination. We compare these estimated penetrance matrices to the theoretical ones to judge the ability of the statistical methods to model the underlying two-locus disease model. A penetrance matrix derived from a case-control sample differs considerably from one derived from the basic population, since the penetrance matrix depends on the prevalence of disease in the considered data. Therefore, we have to compute the theoretical penetrance matrix of the case-control sample from the penetrance matrix of the basic population, the allele frequencies, and the prevalence in the population (see appendix for an example). Comparing this theoretical penetrance matrix with the penetrance matrices estimated by the three statistical approaches gives results that are independent of sampling error, since the theoretical penetrance matrix represents a perfectly drawn case-control sample.
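The two genotype codings can be illustrated concretely. In this minimal sketch (locus A only; locus B is handled analogously), the codominant coding uses one column counting mutated alleles, while the design-variable coding uses two dummy columns:

```python
import numpy as np

gA = np.array([0, 1, 2, 1, 0])         # genotypes: number of mutated alleles

# Codominant coding: one column per locus with values 0/1/2.
X_codominant = gA.reshape(-1, 1)

# Design-variable coding: two dichotomous columns per locus, indicating the
# heterozygous genotype and the homozygous genotype with two mutated alleles.
X_design = np.column_stack([(gA == 1).astype(int), (gA == 2).astype(int)])

print(X_design)
```

The codominant coding forces the heterozygous effect to be exactly half the homozygous effect on the logit scale, whereas the design variables estimate the two genotype effects freely, at the cost of one extra parameter per locus.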
For each of the 12 populations, the mean absolute difference between the theoretical and the estimated penetrance matrix is calculated element by element for each genotype-genotype combination over the n = 100 case-control samples:

E_ij = (1/n) ∑_{k=1}^{n} | f_ij - f̂_ij^(k) |,

where i, j ∈ {0, 1, 2}, and f_ij and f̂_ij^(k) denote the theoretical penetrance and the penetrance estimated from the kth sample, respectively. Furthermore, the sum of the mean absolute differences, ∑_{i,j} E_ij, is considered. The data generation and the statistical analyses for neural networks and logistic regression are performed using R [48]. The package for the MLP, neuralnet, was newly implemented by our group and is published on CRAN [49]. Additionally, the MDR approach is applied to the data. The analyses are conducted with the Java-based open source software MDR release 1.2.5 with default configurations [50]. In particular, the analysis configurations are specified as follows: the random seed is set to zero, the attribute count maximum is set to two, and the cross-validation count to ten. The MDR identifies the set of functional variables that is best for classifying cases and controls. Due to the number of simulated loci, the software can only select one of three sets: locus A only, locus B only, or both loci. Additionally, it provides a dendrogram to distinguish between redundant and synergistic variables based on information theory [51].

In a first step, we investigate the ability of neural networks and logistic regression models to model different two-locus disease models. Table 3 shows the results for Risch's additivity model. Here, the sum of the mean absolute differences between the estimated and the theoretical penetrance matrix is lowest for the neural networks. This is most pronounced in the high risk scenario (∑E_ij = 0.2059 for neural networks versus ∑E_ij = 0.2544 and ∑E_ij = 0.2804 for logistic regression models without and with design variables). Logistic regression models with design variables in general show higher deviations than those without design variables.
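The accuracy measure E_ij defined above can be sketched in a few lines of Python. The toy matrices here are hypothetical, serving only to show the element-wise averaging over samples:

```python
import numpy as np

def mean_abs_diff(theoretical, estimates):
    """E_ij: element-wise mean absolute difference between the theoretical
    penetrance matrix and the n estimated matrices (one per sample)."""
    estimates = np.asarray(estimates)            # shape (n, 3, 3)
    return np.abs(estimates - theoretical).mean(axis=0)

theoretical = np.full((3, 3), 0.5)               # toy theoretical matrix
estimates = [np.full((3, 3), 0.4),               # two toy estimated matrices
             np.full((3, 3), 0.6)]

E = mean_abs_diff(theoretical, estimates)
print(E.sum())                                   # sum over i, j of E_ij
```

Both E_ij itself and its sum over all nine genotype combinations are reported in the result tables.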
These results are also reflected in the element-wise comparison of the estimated matrices. For each of the risk scenarios, the neural network estimates five out of nine penetrances with the highest accuracy, i.e. with the smallest difference to the theoretical penetrance, compared to the logistic regression models. The heterogeneity model yields virtually the same results as the additivity model (results not shown). For Risch's multiplicative model (see Table 4), the logistic regression models with design variables fit the underlying data best, as reflected by the lowest mean absolute difference between the estimated and the theoretical penetrance matrix (∑E_ij = 0.1637 and ∑E_ij = 0.1833, respectively, for the two risk scenarios). This holds true for the sum as well as for the single entries in both risk scenarios. Although neural networks show worse accuracy for both risk scenarios (∑E_ij = 0.2428 and ∑E_ij = 0.2178, respectively), they mostly need two neurons in the hidden layer (results not shown), i.e. nine parameters, as opposed to the five parameters most often used in the logistic regression models with design variables. This implies that the higher number of degrees of freedom does not lead to a better fit in the situation of a multiplicative model. Furthermore, logistic regression models without design variables are not able to model this disease model (∑E_ij = 0.3965 and ∑E_ij = 0.4887, respectively). The results for the epistatic models are presented in Tables 5, 6 and 7. In the first epistatic model, the mean absolute differences between the theoretical penetrance matrices and the penetrance matrices estimated by the neural networks are generally lower (sum and single entries) than those of the logistic regression models (see Table 5). In particular, the logistic regression model without design variables performs poorly in the high risk scenario (∑E_ij = 0.6150 for logistic regression models without design variables versus ∑E_ij = 0.1410 for neural networks). Table 5.
Epistatic model - recessive (EPI RR). Table 6. Epistatic model - dominant (EPI DD). Table 7. Epistatic model - mixed (EPI RD). The results for the epistatic model with two dominant loci differ between the two risk scenarios (see Table 6). In the low risk scenario, none of the three statistical approaches is able to satisfactorily estimate the theoretical penetrance matrix of the disease model. The sum of the mean absolute differences ranges from ∑E_ij = 0.3071 to ∑E_ij = 0.3132 for the three approaches. In the high risk scenario, neural networks slightly outperform the logistic regression models with design variables, whereas the regression models without design variables completely fail to detect the characteristic structure of the underlying penetrance matrix (∑E_ij = 0.2524 for neural networks versus ∑E_ij = 0.2648 and ∑E_ij = 0.6528 for logistic regression models with and without design variables, respectively). The better fit of neural networks and logistic regression models with design variables comes at the cost of a high number of parameters: both approaches need on average about nine parameters (results not shown). The structure of the theoretical penetrance matrices given by the mixed epistatic model with one dominant and one recessive locus is again best modeled by neural networks (see Table 7). This can be observed for the sum and for the single entries of the mean absolute differences between the theoretical and the estimated penetrance matrices in both risk scenarios. The logistic regression models without design variables are again not able to identify this structure. Their mean absolute differences are much higher than those of the other approaches (e.g. ∑E_ij = 0.8658 and ∑E_ij = 0.2329 for logistic regression models without and with design variables, respectively, versus ∑E_ij = 0.1563 for neural networks in the high risk scenario).
In a second step, we investigate whether the standard methods, logistic regression and MDR, are able to detect the interaction given by the four two-locus disease models representing biological interaction. Table 8 summarizes the results of the logistic regression models with and without design variables regarding the selected models for each population. Bold numbers mark the mode of the selected models. In the upper part of the table, the two-locus disease model (ADD, HET) agrees with the statistical model when a statistical model of independence (NM, SiA, SiB, ME) is selected. In the lower part of the table, the two-locus disease model representing biological interaction (MULT, EPI RR, EPI DD, EPI RD) agrees with the statistical model when the full model (FM) is selected. Both logistic regression models yield similar results for the additivity and the heterogeneity model. In most cases, interaction terms are included in the statistical models even though the underlying data follow a disease model representing independence. This is especially true in the high risk scenario. In the low risk scenario, there is one notable exception for the heterogeneity model: in more than half of the replications, the logistic regression models with design variables contain no interaction term. Table 8. Selected logistic regression models (LRM). The different two-locus disease models representing gene-gene interaction lead to varying results when logistic regression models are applied. The logistic regression models do not include an interaction term in most replications when the multiplicative model is the underlying disease model. This means that the logistic regression models fail to detect the underlying biological interaction. The recessive and the dominant epistatic model are correctly represented by the full model in most situations.
Only in the low risk scenario of the recessive epistatic model do the logistic regression models without design variables choose a broad variety of models in a quarter of the replications. For the mixed epistatic model, the logistic regression models perform poorly: since model SiA is mostly selected, the main effect of the (dominant) locus B is not detected in more than half of the replications, and the interaction effect is included in only about 20% of the replications. Table 9 summarizes the results of the MDR analyses. It shows the selected variables for each population together with their identification as synergistic or redundant. Bold numbers again mark the mode of the selected sets in each population. Even though both main effects are present in all populations, the MDR approach often selects a set consisting of only one locus, independent of whether the underlying two-locus disease model represents independent effects or biological interaction. This holds true for the additivity and the heterogeneity model in the low risk scenario, where only locus B is selected for most of the 100 data sets, and for the mixed epistatic model, where a set consisting of locus A only is mainly selected. Apart from the mixed epistatic model, both variables are selected correctly for the disease models representing biological interaction. As for the logistic regression models, the sets of selected variables vary strongly for the recessive epistatic model. Table 9. MDR analyses: selected variables and identification as redundant or synergistic behavior. Additionally, the provided dendrogram can be used to distinguish between redundancy and synergism. These concepts are related to independence and interaction in our context [52]. Both loci are categorized as redundant for most of the investigated populations. Only the dominant epistatic model is correctly identified as a synergistic model for the majority of the data sets.
No similar statement about the agreement of disease and statistical model can be made for neural networks, since there is no equivalent to the concept of interaction terms. Neural networks with one or two neurons in the hidden layer (i.e. models with five or nine parameters) are the models most frequently selected in the simulation study.

In our simulation study, we investigated whether neural networks are able to model different types of gene-gene interaction in case-control data. For this purpose, we analyzed simulated data from six different two-locus disease models in two different risk scenarios with neural networks and compared the results to logistic regression models using two different approaches for coding the genotype information. Additionally, we investigated whether logistic regression models or the MDR approach, two methods widely used in applications, are suitable for identifying biological interaction. For the majority of the investigated situations, the theoretical penetrance matrix is estimated most accurately by neural networks as opposed to logistic regression models. The exceptions are the multiplicative model in both risk scenarios and the dominant epistatic model in the low risk scenario. Although in these situations the neural networks use two neurons in the hidden layer, i.e. nine parameters, in most replications, they are not able to exploit this flexibility to correctly represent the disease model. For the logistic regression models, it can be stated that the disease models of independence are better represented by a logistic regression model without design variables, and the disease models of interaction are better represented by a logistic regression model with design variables. In situations where interaction is present, using a logistic regression model without design variables might lead to wrong results.
Since the underlying disease model is usually not known beforehand, no recommendation can be given on whether to employ design variables or not. Both logistic regression models mostly select a main effect model to represent the multiplicative model. The inclusion of interaction terms signifies deviations from the structural model rather than from the disease model representing independence. Consequently, the underlying biological interaction represented by the multiplicative and the epistatic models cannot be read off the fitted logistic regression models. The same holds true for the MDR approach. It is not possible to correctly identify biological interaction based on the sets of selected variables or based on the dendrograms, since the additivity and the heterogeneity model as independence models cannot be distinguished from the four models representing biological interaction by either of these two criteria. The results confirm previous studies that demonstrate the excellent modeling capacities of neural networks [32]. We investigated whether the weaker performance of the neural network, especially for the multiplicative model, might be due to a wrong model selection criterion. As an alternative to the AIC, we calculated the Bayesian Information Criterion (BIC, see [53]) for all models (results not shown). However, employing the BIC for model selection does not improve the performance of the neural networks relative to the logistic regression models. In fact, the stronger performance of the logistic regression model is presumably due to the fact that the multiplicative model exactly corresponds to the structural model of the logistic regression model. It might be disputed whether the applied risk scenarios feature too large genotype relative risks to be meaningful for real-data applications. For the recessive epistatic model as the most extreme situation, alternative scenarios with smaller risks were investigated.
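The two selection criteria used here differ only in how strongly they penalize model complexity. A minimal sketch, with a hypothetical log-likelihood value, shows why the BIC penalizes the nine-parameter networks more heavily than the AIC at this sample size:

```python
import numpy as np

def aic(loglik, k):
    """Akaike's Information Criterion for a model with k parameters."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian Information Criterion; penalizes each parameter by log(n)."""
    return -2.0 * loglik + k * np.log(n)

# Hypothetical comparison: a five-parameter model versus a nine-parameter
# network on n = 2000 observations (1,000 cases + 1,000 controls).
n = 2000
delta_aic = aic(-1000.0, 9) - aic(-1000.0, 5)        # penalty gap: 2 * 4 = 8
delta_bic = bic(-1000.0, 9, n) - bic(-1000.0, 5, n)  # penalty gap: 4 * log(2000)
```

Since log(2000) ≈ 7.6 is much larger than 2, switching from AIC to BIC shifts model selection toward smaller models, but, as reported above, it did not change the relative performance of the approaches.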
All investigated approaches have difficulties detecting these smaller risks. For the logistic regression models, the null model is mostly chosen, thus neglecting the elevated penetrance when both loci carry two mutated alleles. Neural networks do not explicitly use interaction terms for modeling data. Unlike in logistic regression models, where an interaction term might or might not become significant, there is no easy way to assess whether interaction is present using a neural network. Moreover, in models with one or more hidden layers there is no direct interpretation of the estimated parameters, and the MLP is generally considered a black-box approach. This can be seen as the biggest drawback of employing neural networks for data analyses where interpretation is a major concern. However, the modeling capacities of a neural network allow it to adjust to practically any given data structure, including any interaction structure, which makes it an extremely powerful statistical tool. This advantage might be even more pronounced when modeling continuous variables, for example when modeling gene-environment interactions. The use of neural networks in applications is currently still limited because of existing research gaps. In particular, the interpretability of the estimated weights is not yet given. Nevertheless, they offer a promising tool for exploratory analyses in candidate gene studies. For instance, they can well be applied when one is interested in odds ratios for single SNPs. The estimated odds ratios are in many situations more realistic than those estimated by logistic regression models, since the estimated output of neural networks better represents the underlying population. As initially stated, we did not explore the ability of neural networks for variable selection, which is a key problem in genome-wide association (GWA) studies.
We explored the ability of neural networks to model different types of biological gene-gene interactions and compared them to logistic regression models and the MDR approach. The latter methods do not allow reading off the underlying two-locus disease models. Neural networks do not explicitly include an interaction term, but they are able to model any data structure. Even though the estimated weights are not interpretable, this flexibility makes them a powerful statistical tool. Further research should be devoted to developing a framework for interpreting the parameters estimated by a neural network, to allow a broader use of these tools.

Authors' contributions

FG planned and carried out the simulation study and drafted the manuscript. NW drafted the manuscript. KB planned the simulation study and drafted the manuscript. All authors read and approved the final manuscript.

Appendix

To illustrate the calculation of the theoretical penetrance matrix, we consider the epistatic model with two recessive loci. We assume that the two considered loci are in linkage equilibrium, i.e. they are marginally independent, and that the Hardy-Weinberg equilibrium holds. In the population, the probabilities are denoted as follows: the penetrances f_ij = P(Y = 1 | G_A = i, G_B = j), the genotype probabilities P(G_A = i) and P(G_B = j), and the prevalence K = P(Y = 1). This enables us to express the conditional probabilities of the genotypes given the case-control status as

P(G_A = i, G_B = j | Y = 1) = f_ij · P(G_A = i) · P(G_B = j) / K and
P(G_A = i, G_B = j | Y = 0) = (1 - f_ij) · P(G_A = i) · P(G_B = j) / (1 - K).

In a case-control sample (probabilities indicated by P^s), these conditional probabilities remain unchanged; only the joint probabilities of the genotypes P^s(G_A = i, G_B = j) change, because of the change of prevalence: P^s(Y = 1) = P^s(Y = 0) = 0.5.
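The appendix calculation can be sketched numerically. This Python sketch (the study used R) reproduces the recessive epistatic model in the low risk scenario with the values given in the text, a minor allele frequency of 0.3, a population prevalence of K = 0.01, and r = 5:

```python
import numpy as np

q, K, r = 0.3, 0.01, 5.0                     # MAF, population prevalence, risk
pg = np.array([(1 - q)**2, 2 * q * (1 - q), q**2])   # HWE genotype probabilities
pjoint = np.outer(pg, pg)                    # linkage equilibrium: independent loci

c = K / (1 + (r - 1) * q**4)                 # baseline risk giving prevalence K
f = np.full((3, 3), c); f[2, 2] = r * c      # population penetrance (EPI RR)

# Bayes' theorem: genotype distribution among cases and among controls in the
# population, then re-mix with P^s(Y = 1) = P^s(Y = 0) = 0.5 for the sample.
p_g_case = f * pjoint / K
p_g_ctrl = (1 - f) * pjoint / (1 - K)
p_s = 0.5 * p_g_case + 0.5 * p_g_ctrl        # joint genotype probs in the sample
f_sample = 0.5 * p_g_case / p_s              # theoretical sample penetrance

print(round(c, 6))                           # 0.009686, as in the appendix example
```

The computed c matches the value c = 0.009686 stated in the text, which serves as a consistency check on the derivation.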
The joint probabilities in the sample can be calculated as

P^s(G_A = i, G_B = j) = 0.5 · P(G_A = i, G_B = j | Y = 1) + 0.5 · P(G_A = i, G_B = j | Y = 0).

The theoretical penetrance matrix of the sample can now be calculated as

f^s_ij = P^s(Y = 1 | G_A = i, G_B = j) = 0.5 · P(G_A = i, G_B = j | Y = 1) / P^s(G_A = i, G_B = j).

For example, for the low risk scenario (r = 5) and an overall prevalence in the population of K = 0.01, the constant c can be calculated as c = 0.009686, and the theoretical penetrance matrix of the sample has the entries f^s_ij ≈ 0.492 for all genotype combinations except f^s_22 ≈ 0.834. This theoretical penetrance matrix of the sample is compared to the penetrance matrices predicted by the different models to judge the ability of neural networks and logistic regression models to model different two-locus disease models.

Acknowledgements

The authors thank Iris Pigeot for reading preliminary versions of the paper and for giving helpful comments and kind support. Additionally, we thank five anonymous reviewers for their valuable suggestions and remarks. We gratefully acknowledge the financial support of this research by the grant PI 345/3-1 from the German Research Foundation (DFG).

References

1. Cordell HJ: Epistasis: what it means, what it doesn't mean, and statistical methods to detect it in humans. Hum Mol Gen 2002, 11(20):2463-2468.
2. North B, Curtis D, Sham PC: Application of logistic regression to case-control association studies involving two causative loci. Hum Hered 2005, 59(2):79-87.
3. Foraita R, Bammann K, Pigeot I: Modeling gene-gene-interactions using graphical chain models. Hum Hered 2008, 65:47-56.
4. Wade MJ, Winther RG, Agrawal AF, Goodnight CJ: Alternative definitions of epistasis: dependence and interaction. Trends Ecol Evol 2001, 16:498-504.
5. Moore JH, Williams SM: Traversing the conceptual divide between biological and statistical epistasis: systems biology and a more modern synthesis. Bioessays 2005, 27:637-646.
6.
Chen SH, Sun J, Dimitrov L, Turner AR, Adams TS, Meyers DA, Chang BL, Zheng SL, Grönberg H, Xu J, Hsu FC: A support vector machine approach for detecting gene-gene interaction. Genet Epidemiol 2008, 32:152-167.
7. Amit Y, Geman D: Shape quantization and recognition with randomized trees. Neural Comput 1997, 9:1545-1588.
8. Breiman L: Random forests. Mach Learn 2001, 45:5-32.
9. Ritchie MD, Hahn LW, Roodi N, Bailey LR, Dupont WD, Parl FF, Moore JH: Multifactor-dimensionality reduction reveals high-order interactions among estrogen-metabolism genes in sporadic breast cancer. Am J Hum Genet 2001, 69:138-147.
10. Hahn LW, Ritchie MD, Moore JH: Multifactor dimensionality reduction for detecting gene-gene and gene-environment interactions. Bioinformatics 2003, 19:376-382.
11. Nelson MR, Kardia SLR, Ferrell RE, Sing CF: A combinatorial partitioning method to identify multilocus genotypic partitions that predict quantitative trait variation. Genome Res 2001, 11:458-470.
12. Millstein J, Conti DV, Gilliland FD, Gauderman WJ: A testing framework for identifying susceptibility genes in the presence of epistasis. Am J Hum Genet 2006, 78:15-27.
13. Cook NR, Zee RYL, Ridker PM: Tree and spline based association analysis of gene-gene interaction models for ischemic stroke. Stat Med 2004, 23:1439-1453.
14. Ruczinski I, Kooperberg C, LeBlanc M: Logic regression. J Comput Graph Stat 2003, 12(3):475-511.
15. Musani SK, Shriner D, Liu N, Feng R, Coffey CS, Yi N, Tiwari HK, Allison DB: Detection of gene × gene interactions in genome-wide association studies of human population data. Hum Hered 2007, 63:67-84.
16.
Heidema AG, Boer JMA, Nagelkerke N, Mariman ECM, van der A DL, Feskens EJM: The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases. BMC Genet 2006, 7:23.
17. Briollais L, Wang Y, Rajendram I, Onay V, Shi E, Knight J, Ozcelik H: Methodological issues in detecting gene-gene interaction in breast cancer susceptibility: a population-based study in BMC Med 2007, 5:22.
18. Milne RL, Fagerholm R, Nevanlinna H, Benítez J: The importance of replication in gene-gene interaction studies: multifactor dimensionality reduction applied to a two-stage breast cancer case-control study. Carcinogenesis 2008, 29(6):1215-1218.
19. Lanktree MB, Hegele RA: Gene-gene and gene-environment interactions: new insights into the prevention, detection and management of coronary artery disease. Genome Med 2009, 1:28.
20. Motsinger-Reif AA, Reif DM, Fanelli TJ, Ritchie MD: A comparison of analytical methods for genetic association studies. Genet Epidemiol 2008, 32:767-778.
21. Sáez ME, Grilo A, Morón FJ, Manzano L, Martínez-Larrad MT, González-Pérez A, Serrano-Hernando J, Ruiz A, Ramírez-Lorca R, Serrano-Ríos M: Interaction between Calpain 5, Peroxisome proliferator-activated receptor-gamma and Peroxisome proliferator-activated receptor-delta genes: a polygenic approach to obesity. Cardiovasc Diabetol 2008, 7:23.
22. Branicki W, Brudnik U, Wojas-Pelc A: Interactions between HERC2, OCA2 and MC1R may influence human pigmentation phenotype. Ann Hum Genet 2009, 73:160-170.
23.
Liu J, Sun K, Bai Y, Zhang W, Wang X, Wang Y, Wang H, Chen J, Song X, Xin Y, Liu Z, Hui R: Association of three-gene interaction among MTHFR, ALOX5AP and NOTCH3 with thrombotic stroke: a multicenter case-control study. Hum Genet 2009, 125:649-656.
24. Qi Y, Niu WQ, Zhu TC, Liu JL, Dong WY, Xu Y, Ding SQ, Cui CB, Pan YJ, Yu GS, Zhou WY, Qiu CC: Genetic interaction of Hsp70 family genes polymorphisms with high-altitude pulmonary edema among Chinese railway constructors at altitudes exceeding 4000 meters. Clin Chim Acta 2009, 405:17-22.
25. Broberg K, Huynh E, Schläwicke Engström K, Björk J, Albin M, Ingvar C, Olsson H, Höglund M: Association between polymorphisms in RMI1, TOP3A, and BLM and risk of cancer, a case-control study. BMC Cancer 2009, 9:140.
26. Tang X, Guo S, Sun H, Song X, Jiang Z, Sheng L, Zhou D, Hu Y, Chen D: Gene-gene interactions of CYP2A6 and MAOA polymorphisms on smoking behavior in Chinese male population. Pharmacogenet Genomics 2009, 19(5):345-352.
27. Lucek PR, Ott J: Neural network analysis of complex traits. Genet Epidemiol 1997, 14:1101-1106.
28. Ott J: Neural networks and disease association studies. Am J Med Genet 2001, 105:60-61.
29. Flouris AD, Duffy J: Applications of artificial intelligence systems in the analysis of epidemiological data. Eur J Epidemiol 2006, 21:167-170.
30. McKinney BA, Reif DM, Ritchie MD, Moore JH: Machine learning for detecting gene-gene interactions. Appl Bioinformatics 2006, 5(2):77-88.
31. Motsinger-Reif AA, Ritchie MD: Neural networks for genetic epidemiology: past, present, and future. BioData Min 2008, 1:3.
32. Koza JR, Rice JP: Genetic generation of both the weights and architecture for a neural network. In Proc Int Joint Conf Neural Netw. Volume II. IEEE Press; 1991:397-404.
33. Ritchie MD, White BC, Parker JS, Hahn LW, Moore JH: Optimization of neural network architecture using genetic programming improves detection and modeling of gene-gene interactions in studies of human diseases. BMC Bioinformatics 2003, 4:28.
34. Bush WS, Motsinger AA, Dudek SM, Ritchie MD: Can neural network constraints in GP provide power to detect genes associated with human disease?
35. Motsinger AA, Lee SL, Mellick G, Ritchie MD: GPNN: Power studies and applications of a neural network method for detecting gene-gene interactions in studies of human disease. BMC Bioinformatics 2006, 7:39.
36. Motsinger AA, Dudek SM, Hahn LW, Ritchie MD: Comparison of neural network optimization approaches for studies of human genetics. Lect Notes Comput Sc 2006, 3907:103-114.
37. Motsinger-Reif AA, Fanelli TJ, Davis AC, Ritchie MD: Power of grammatical evolution neural networks to detect gene-gene interactions in the presence of error. BMC Res Notes 2008, 1:65.
38. Motsinger-Reif AA, Dudek SM, Hahn LW, Ritchie MD: Comparison of approaches for machine-learning optimization of neural networks for detecting gene-gene interactions in genetic epidemiology. Genet Epidemiol 2008, 32:325-340.
39. Risch N: Linkage strategies for genetically complex traits. I. Multilocus models. Am J Hum Genet 1990, 46:222-228.
40. Li W, Reich J: A complete enumeration and classification of two-locus disease models. Hum Hered 2000, 50:334-349.
41. Riedmiller M: Advanced supervised learning in multi-layer perceptrons - from backpropagation to adaptive learning algorithms. Int J Comput Stand Interf 1994, 16:265-275.
42. Bammann K: Auswertung von epidemiologischen Fall-Kontroll-Studien mit künstlichen neuronalen Netzen [Analysis of epidemiological case-control studies with artificial neural networks]. PhD thesis. University of Bremen; 2001.
43. Akaike H: Information theory and an extension of the maximum likelihood principle. In Second international symposium on information theory. Edited by Petrov BN, Csaki BF. Budapest: Academiai Kiado; 1973:267-281.
44. R Development Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2008. [http://www.R-project.org] [ISBN 3-900051-07-0]
45. Fritsch S, Günther F: neuralnet: Training of neural networks. [http://cran.r-project.org/web/packages/neuralnet/index.html] [R package version 1.2]
46. Computational Genetics Laboratory, Norris-Cotton Cancer Center and Dartmouth Medical School, Lebanon, New Hampshire. [http://www.epistasis.org/]
47. Moore JH, Gilbert JC, Tsai CT, Chiang FT, Holden T, Barney N, White BC: A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility. J Theor Biol 2006, 241:252-261.
48. Schwarz G: Estimating the dimension of a model. Ann Stat 1978, 6:461-464.
Fox River Grove Math Tutor

Find a Fox River Grove Math Tutor

...I teach Elementary math, Pre-Algebra, Algebra 1, Algebra 2, Geometry, Trigonometry, PreCalculus, Calculus. If you are interested in taking home tutoring classes for your kids, and improving their grades, do not hesitate to contact me. Qualification: Masters in Computer Applications My Approac...
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...If a student is at the end of the radius, I would prefer to find a meeting place between both locations, such as a public library, etc. If this is not an option, I will drive the full distance with an increase in the hourly tutoring rate. In addition, I do have a cancellation policy.
15 Subjects: including algebra 1, algebra 2, statistics, trigonometry

...I learned patience and an appreciation for different learning styles from those students. In particular, I became acquainted with the concept of learning styles, sometimes called VARK (Visual/Auditory/Reading/Kinesthetic). My favorite way to teach is one on one and responding to questions. I str...
17 Subjects: including algebra 1, algebra 2, biology, chemistry

...They usually call me for help right before their exam. I had one friend who had trouble with 'memorizing' trigonometry last year. I helped her to learn how to draw unit circle and trig graphs which she still remembers.
6 Subjects: including algebra 1, algebra 2, geometry, precalculus

Harvard/Johns Hopkins Grad- High Impact Math/Verbal Reasoning Tutoring
I am a certified teacher who offers ACT/SAT prep for high school students, ISAT/MAP test prep for elementary school students, and GRE/GMAT prep for adults. As a graduate of Northside College Prep, I am also well versed in the s...
38 Subjects: including algebra 1, reading, trigonometry, statistics
Mplus Discussion >> Chi square significance and sample size

Peter Croy posted on Saturday, April 14, 2007 - 7:33 pm
Can someone tell me why it is that the chi square test is "almost always" significant when sample size is large? I have 2500 participants and always get a significant chi square despite good fit for other indices (e.g., CFI>0.96). What is the technical explanation for the sensitivity of chi square to sample size?

Linda K. Muthen posted on Sunday, April 15, 2007 - 8:30 am
The likelihood-ratio chi-square test of model fit for the H0 model against the H1 model is 2 times the sample size times a fitting function. See Technical Appendix 1. If you want to test whether the poor fit is actually due to the sensitivity of chi-square, you can free parameters until you get a well-fitting model according to chi-square and compare the parameter estimates from this analysis to the one with fewer parameters. If the original parameter estimates are reproduced in the less parsimonious model, then you might have a case for chi-square sensitivity.

Peter Croy posted on Sunday, April 15, 2007 - 8:11 pm
I have already correlated some residuals, but otherwise my model is based on Mplus defaults. I have three latent variables predicting a fourth LV. All LVs have at least 3 (observed) indicators. I still get a sig Chi square using ML (MLM improved/lowered chi square but it was still sig). What parameters do you suggest that I free in order to test for chi square sensitivity?

Linda K. Muthen posted on Monday, April 16, 2007 - 7:57 am
You can look at modification indices and free the parameters with the larger modification indices.

Peter Croy posted on Monday, April 16, 2007 - 11:46 pm
I have already done this to correlate residuals ... there are no further MIs of any great effect size. So, where to from here?
Do I rely on the often-cited claim that large sample size tends very strongly to produce large chi square and, on that basis, chi square tests of model fit can be ignored and, instead, indices such as CFI should be used?

Linda K. Muthen posted on Tuesday, April 17, 2007 - 8:13 am
Eventually if you free enough fixed parameters, you will obtain a well-fitting chi-square. The question then is did your original model fall apart or not. If so, you can't blame the sensitivity of chi-square. In some cases, there are no single large modification indices that reduce chi-square sufficiently but a set of moderate sized ones. This could point to a poor model. A factor analysis model is not always most appropriate for the data.

Rachel Dyane Upton posted on Monday, November 10, 2008 - 10:26 pm
Hello. I am trying to run a latent class analysis with ordinal indicators (there are 8 ordinal variables with between 4 and 5 categories each) and 2 latent classes. For many of my models I've been getting a p-value of 1 for the likelihood-ratio chi-square test, and a p-value of between .7 and .3 for the Pearson's chi-square test. I am not receiving any error messages that warn me of singularity problems, etc., so should I ignore the p-values for the likelihood-ratio chi-square test, or is it in fact an indication that something serious is wrong? Thank you.

Linda K. Muthen posted on Tuesday, November 11, 2008 - 1:06 pm
The likelihood ratio and Pearson chi-square tests work best with around 8 or fewer items. In these cases, they are trustworthy if they agree. If they do not agree, I would not use them.

Lois Downey posted on Friday, February 06, 2009 - 11:57 am
My 5-factor CFA model with 17 ordinal indicators, based on 1291 cases, has good fit except for the chi-square test. Following your instructions, I freed parameters until the chi-square test was non-significant -- in the process allowing 13 indicator pairs to have correlated residuals.
Can you tell me how close the parameter estimates in the two models must be in order for me to conclude that the misfit of the original model is due to chi-square sensitivity? All of the factor loadings in the less parsimonious model remain statistically significant, with the absolute difference between the standardized factor loadings for the two models varying between .000 and .066 (mean absolute difference = .019). However, 10 of the 13 pairs of residuals have correlations significantly different from 0, and the absolute values of some of those 10 are quite large (with the largest five falling between .30 and
Should I conclude that the original model shows unacceptable fit?
Lois Downey

Linda K. Muthen posted on Friday, February 06, 2009 - 4:02 pm
Given the number of significant residual covariances, you might want to go back to an EFA or the method described in the following paper which is available on the website:
Asparouhov, T. & Muthén, B. (2008). Exploratory structural equation modeling. Accepted for publication in Structural Equation Modeling.

Jerry Cochran posted on Wednesday, October 20, 2010 - 5:51 pm
Hi Dr. Muthen, I have a couple of questions on LCA and chi square significance:
1) Are the Pearson Chi-Square and the Likelihood Ratio Chi-Square both supposed to have p-values greater than .05 to have a good fitting model?
2) If so, what if one or both of them become less than .05 during the process of adding classes to find the optimal number of classes?

Linda K. Muthen posted on Thursday, October 21, 2010 - 2:22 pm
1. Yes.
2. These tests usually don't work well with more than 8 latent class indicators. If they are not pretty close, they should both be ignored.

s v posted on Friday, November 26, 2010 - 11:18 am
hi, I have a question relating to sample size. I have 2 categories (subject can respond to 'test' or to 'control', not both).
I’m using a Chi2 test; in order to get a confidence interval of 95% (alpha = 0.05), how large would my minimum sample size have to be? Is a sample size of 10 enough?

Félix Caballero posted on Wednesday, February 13, 2013 - 8:52 am
Hello. Is the ratio of chi-square to degrees-of-freedom also influenced by large sample sizes? This ratio is used to assess the fit of a model, with a ratio <3 being considered acceptable. I have a two-factor model with 22 free parameters and a sample size higher than 10,000, and I wonder whether I should use this criterion to assess the goodness-of-fit of the model.

Linda K. Muthen posted on Wednesday, February 13, 2013 - 10:25 am
We don't advocate the use of this ratio. You might want to post your question on SEMNET or a general discussion forum for other opinions.
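Linda Muthen's point that the statistic is "2 times the sample size times a fitting function" can be illustrated with made-up numbers. The sketch below is generic, not Mplus output; the degrees of freedom and the fitting-function value are assumed purely for illustration. Holding the misfit fixed and growing only the sample size flips the p-value across .05:

```python
from scipy.stats import chi2

df = 10        # model degrees of freedom (assumed for illustration)
F = 0.005      # minimized fitting-function value, i.e. a small fixed misfit

pvals = {}
for n in (200, 2500):
    T = 2 * n * F                 # the test statistic grows linearly with n
    pvals[n] = chi2.sf(T, df)     # upper-tail p-value
    print(f"n={n:5d}  T={T:5.1f}  p={pvals[n]:.4f}")
```

With the misfit held constant, the same model passes at n = 200 and is rejected at n = 2500, which is exactly the sensitivity discussed in the thread.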
Tangent to the curve

February 9th 2009, 12:48 PM #1
Sep 2008

I need help finding the tangent to the curve $y = \frac{3}{1 + \sqrt{x}}$ at the point (4,1). I'm really only having problems finding the gradient of the tangent; I already know the answer, but every time I try I either get the gradient as 0 or infinity, which isn't correct. I'm not meant to be using differentiation, but instead using limits.

February 11th 2009, 09:12 AM #2

Let $f(x)=\frac{3}{1+\sqrt{x}}$. You have to find $f'(4)$ in order to find the gradient. So since you have to use limits and not differentiation, remember that:

$f'(a)=\lim_{h \to 0} \frac{f(a+h)-f(a)}{h}$

So here:

$f'(4)=\lim_{h \to 0} \frac{f(4+h)-f(4)}{h}$

$=\lim_{h \to 0} \frac{\frac{3}{1+\sqrt{4+h}}-\frac{3}{1+\sqrt{4}}}{h}$

$=\lim_{h \to 0} \frac{\frac{3}{1+\sqrt{4+h}}-1}{h} \cdot \frac{1+\sqrt{4+h}}{1+\sqrt{4+h}}$

$=\lim_{h \to 0} \frac{3-(1+\sqrt{4+h})}{h(1+\sqrt{4+h})}$

$=\lim_{h \to 0} \frac{2-\sqrt{4+h}}{h(1+\sqrt{4+h})} \cdot \frac{2+\sqrt{4+h}}{2+\sqrt{4+h}}$

(multiplying by the conjugate of the numerator and then using the identity $(a-b)(a+b)=a^2-b^2$)

$=\lim_{h \to 0} \frac{4-(4+h)}{h(1+\sqrt{4+h})(2+\sqrt{4+h})}$

$=\lim_{h \to 0} \frac{-h}{h(1+\sqrt{4+h})(2+\sqrt{4+h})}$

$=\lim_{h \to 0} \frac{-1}{(1+\sqrt{4+h})(2+\sqrt{4+h})}$

And now the limit is defined...
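The final limit evaluates to $\frac{-1}{(1+\sqrt{4})(2+\sqrt{4})} = -\frac{1}{12}$, so the tangent at (4,1) is $y = 1 - \frac{1}{12}(x-4)$. A quick numeric check of that gradient using only a symmetric difference quotient (plain Python, no calculus library):

```python
from math import sqrt

def f(x):
    return 3 / (1 + sqrt(x))

h = 1e-6
slope = (f(4 + h) - f(4 - h)) / (2 * h)   # symmetric difference quotient at x = 4
print(slope)                               # close to -1/12 = -0.08333...

def tangent(x):
    return 1 - (x - 4) / 12                # line through (4, 1) with that slope
```

The symmetric quotient has error of order $h^2$, so with $h = 10^{-6}$ it agrees with $-\frac{1}{12}$ to well beyond six decimal places.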
MathGroup Archive: June 2013

Re: Rookie questions about solving for small numbers and others

• To: mathgroup at smc.vnet.net
• Subject: [mg131039] Re: Rookie questions about solving for small numbers and others
• From: Peter Klamser <klamser at googlemail.com>
• Date: Wed, 5 Jun 2013 03:30:01 -0400 (EDT)

You have two problems:

1. Finding the root of an expression in an area with very small numbers.
2. In that area with very small numbers you have a very flat gradient.

Because I can not read the expressions you posted with Mathematica 9 I can not help you further on.

2013/6/4 Samuel Mark Young <sy81 at sussex.ac.uk>:
> Hello,
> I'm trying to solve a problem involving various integrals. Essentially, I'm perturbing a gaussian distribution, and then trying to find a value for sigma (the standard deviation) for which there is a 10^-5 chance of being greater than 1 (i.e. what value of sigma gives a value of 10^-5 when the pdf is integrated from 1 to infinity). The aim is to find how sigma changes with the different perturbations. The code below is a shortened version of what I'm currently using - you may not need to know all the relevant details.
>
> Explanation: Here I'm adding a quadratic perturbation with coefficient f to the Gaussian variable x to make a new variable zeta. The aim here is to plot a graph (eventually) showing how sigma changes with f. Because I want to use this code to work with higher order equations (for which there is no analytic solution), I use a temporary value for sigma (σtemp - provided that the value of sigma found is close to σtemp, this works well) and solve numerically. Similarly, the easiest way to handle the integrations is just to find the relevant values of x for which zeta>1, and integrate the Gaussian distribution over those values. I then use FindRoot to find a value of sigma which satisfies the equation.
>
> f = 0.5;
> σtemp = 0.2;  (* The values here have been picked arbitrarily *)
> zeta = x + f (x^2 - σ^2);
> xCritical = x /. NSolve[1 == zeta /. σ -> σtemp, x, Reals];
> yCritical = xCritical/σ;  (* This calculates the critical values to integrate between *)
> σtemp = σ /.
>   FindRoot[
>     Sum[(1/Sqrt[2*Pi]) (If[(Abs[D[zeta, x]] /. x -> xCritical[[n]]) > 0, If[(D[zeta, x] /. x -> xCritical[[n]]) > 0, 1, -1], 0]) (Integrate[Exp[-(y^2)/2], {y, yCritical[[n]], ∞}]), {n, Length[xCritical]}]
>     (* The Sum[] command generates a series of ERFC's; this code will throw up errors for certain values of f, but the full code doesn't *)
>     + If[(Simplify[D[zeta, x] /. x -> xCritical[[1]]]) < 0, 1, 0] == 10^(-5), {σ, 0.00001, 2}]
>   (* FindRoot searches for a value of sigma between 0 and 2; the solution should always lie in this range, though putting in zero exactly results in divide by zero errors *)
>
> There are currently 3 problems I'm having:
> 1) Underflow occurs in the computation a lot for certain values of f.
> 2) I want to be able to solve for when the integrals equal 10^-20 (as opposed to 10^-5) - which is greater than machine precision. I've tried fiddling with settings like AccuracyGoal, PrecisionGoal and WorkingPrecision but can't find anything that makes it work reliably (instead of a smooth curve, I end up with jagged spikes). It's entirely possible, even likely, that I'm missing something obvious.
> 3) For negative values of f, there are no solutions to x + f (x^2 - σtemp^2) = 1 if σtemp is too small. The problem then is that, when σtemp is increased, unless there is remarkable fine tuning, the final value of sigma found is not similar to σtemp. The only way I've found to handle this is to very slowly increment σtemp until there are real solutions, then use FindRoot to find a value for sigma, and compare sigma to σtemp to see if they match (to 4 s.f. is fine). However, this takes a very long time when I want to do this repeatedly.
>
> Many thanks in advance for taking the time to read this, and any help is very well appreciated. I think I have included all the information needed, but please ask if you need more. Please feel free to contact me directly at sy81 at sussex.ac.uk with any questions.
> Regards,
> Sam Young
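For the unperturbed case f = 0, the poster's condition reduces to solving 0.5·erfc(1/(σ√2)) = 10⁻⁵ for σ, which needs no CAS at all. The sketch below uses a plain bisection over roughly the same bracket as the FindRoot call (the lower end is moved from 0.00001 to 0.01 to stay clear of the divide-by-zero edge); the tail function is increasing in σ, so bisection is safe:

```python
import math

TARGET = 1e-5

def tail(sigma):
    # P(X > 1) for X ~ N(0, sigma^2), via the complementary error function
    return 0.5 * math.erfc(1.0 / (sigma * math.sqrt(2.0)))

# tail(sigma) is strictly increasing in sigma, so bisect on [0.01, 2]
lo, hi = 0.01, 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if tail(mid) < TARGET:
        lo = mid
    else:
        hi = mid
sigma = 0.5 * (lo + hi)
print(sigma)   # about 0.2345, i.e. 1/sigma is the familiar 1e-5 normal quantile ~4.26
```

The perturbed cases (f ≠ 0) change the critical x values but not this basic pattern of root-finding on a monotone tail probability.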
Title page for ETD etd-06022010-020350

Random or stochastic integral equations occur frequently in the mathematical description of random phenomena in engineering, physics, biology, and oceanography. The present study is concerned with random or stochastic integral equations of the Volterra type in the form

$x(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)\, f(\tau, x(\tau;\omega))\, d\tau, \quad t > 0,$

and of the Fredholm type in the form

$x(t;\omega) = h(t;\omega) + \int_0^{\infty} k_0(t,\tau;\omega)\, e(\tau, x(\tau;\omega))\, d\tau, \quad t \ge 0,$

where $\omega \in \Omega$, the supporting set of a complete probability measure space $(\Omega, A, P)$. A random function $x(t;\omega)$ is said to be a random solution of an equation such as those above if it satisfies the equation with probability one. It is also required that $x(t;\omega)$ be a second order stochastic process. The purpose of this dissertation is to investigate the existence, uniqueness, and stochastic stability properties of a random solution of these Volterra and Fredholm stochastic integral equations using the "theory of admissibility" and probabilistic functional analysis. The techniques of successive approximations and stochastic approximation are employed to approximate the random solution of the stochastic Volterra integral equation, and the convergence of the approximations to the unique random solution in mean square and with probability one is proven. Problems in telephone traffic theory, hereditary mechanics, population growth, and stochastic control theory are formulated, and some of the results of the investigation are applied. Finally, a discrete version of the above random integral equations is given, and several theorems concerning the existence, uniqueness, and stochastic stability of a random solution of the discrete equation are proven. Approximation of the random solution of the discrete version is obtained, and its convergence to the random solution is studied. This work extends and generalizes the work done by C. P. Tsokos in Mathematical Systems Theory 3 (1969), pages 222-231, and M. W. Anderson in his Ph.D. dissertation at the University of Tennessee, 1966, among others. Extensions of this research to several areas of application are proposed.
Appendices for automated theorem proving in Euler diagram systems

Stapleton, G., Masthoff, J., Flower, J., Fish, A. and Southern, J. (2006) Appendices for automated theorem proving in Euler diagram systems. University of Brighton, Brighton, UK. (Unpublished)

This report is a series of appendices to accompany the paper Automated Theorem Proving in Euler Diagram Systems. Here we include some details omitted from that paper and some additional discussions that may be of interest. In appendix A, we give an overview of the A* search algorithm in the context of theorem proving. We establish the expressiveness of Euler diagrams in appendix B. A complete worked example showing how to calculate the restrictive heuristic is given in appendix C. The proofs of the three theorems given in the paper are included in appendix D. The notion of clutter in Euler diagrams and how our tactics steer Edith towards proofs containing diagrams with low `clutter scores' is covered in appendix E. Details on how we generated proof tasks to evaluate Edith are given in appendix F. Finally, much of our evaluation is presented in appendix G, although the main results are included in the paper.

Item Type: Other form of assessable output
Subjects: G000 Computing and Mathematical Sciences > G100 Mathematics
DOI (a stable link to the resource): VMG.06.2
Faculties: Faculty of Science and Engineering > School of Computing, Engineering and Mathematics > Visual Modelling
ID Code: 3000
Deposited By: Helen Webb
Deposited On: 10 Nov 2007
Last Modified: 13 Jul 2012 17:37
[SciPy-User] Orthogonal distance regression in 3D
Владимир draft2008@bk...
Fri Mar 2 00:02:43 CST 2012

I'm working with orthogonal distance regression (scipy.odr). I try to fit a curve to a point cloud (3d), but it doesn't work properly; it returns wrong results. For example, I want to fit the simple curve y = a*x + b*z + c to some point cloud (y_data, x_data, z_data):

import numpy as np
from scipy.odr import Model, Data, ODR

def func(p, input):
    x, z = input
    x = np.array(x)
    z = np.array(z)
    return (p[0]*x + p[1]*z + p[2])

initialGuess = [1, 1, 1]
myModel = Model(func)
myData = Data([x_data, z_data], y_data)
myOdr = ODR(myData, myModel, beta0=initialGuess)
out = myOdr.run()
print out.beta

It works perfectly in 2d dimension (2 axes), but in 3d dimension the results are not even close to real; moreover it is very sensitive to the initial guess, so it returns different results even if I change initialGuess from [1,1,1] to [0.99,1,1]. What do I do wrong? I'm not very strong in mathematics, but maybe I should specify some additional parameters such as a Jacobian matrix or a weight matrix or something else?
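For reference, the same plane fit does work with scipy.odr when the two explanatory variables are stacked into one (2, n) array. The following self-contained sketch uses synthetic, noise-free data (not the original poster's), so the fit should recover the true coefficients:

```python
import numpy as np
from scipy.odr import Model, Data, ODR

def plane(beta, xz):
    # xz has shape (2, n): first row is x, second row is z
    x, z = xz
    return beta[0] * x + beta[1] * z + beta[2]

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, 200)
z = rng.uniform(-5, 5, 200)
y = 2.0 * x - 1.0 * z + 0.5          # exact plane y = 2x - z + 0.5, no noise

odr = ODR(Data(np.vstack([x, z]), y), Model(plane), beta0=[1.0, 1.0, 1.0])
out = odr.run()
print(out.beta)                       # close to [2.0, -1.0, 0.5]
```

If the real data behave badly, the usual suspects are wildly different scales on the axes (which ODR weights equally by default) rather than a missing Jacobian: ODRPACK estimates the Jacobians numerically when they are not supplied.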
Banach Journal of Mathematical Analysis

Conjugacy of P-configurations and nonlinear solutions to a certain conditional Cauchy equation

We study the connection between conjugations of a special kind of dynamical systems, called P-configurations, and solutions to homogeneous Cauchy type functional equations. We find that any two regular P-configurations are conjugate by a homeomorphism, but cannot be conjugate by a diffeomorphism. This leads us to the following conclusion (answering an open question posed by Paneah): there exist continuous nonlinear solutions to the functional equation: $$ f(t) = f\left(\frac{t+1}{2}\right) + f\left(\frac{t-1}{2}\right) \,\, , \,\, t \in [-1,1] . $$

• J. Aczél and J. Dhombres, Functional Equations in Several Variables, Cambridge University Press, 1989.
• J. Dhombres and R. Ger, Conditional Cauchy equations, Glasnik Mat. Ser. III, 13(33) (1978), no. 1, 39–62.
• G.L. Forti, On some conditional Cauchy equations on thin sets, Boll. Un. Mat. Ital. B (6), 2 (1983), no. 1, 391–402.
• W. Jarczyk, On continuous functions which are additive on their graphs, Selected topics in functional equations (Graz, 1986), Ber. No. 292, 66 pp., Ber. Math.-Statist. Sekt. Forschungsgesellsch. Joanneum, 285–296, Forschungszentrum Graz, Graz, 1988.
• M. Kuczma, Functional equations on restricted domains, Aequationes Math., 18 (1978), no. 1-2, 1–34.
• J. Matkowski, Functions which are additive on their graphs and some generalizations, Rocznik Nauk.-Dydakt. Prace Mat. No. 13 (1993), 233–240.
• B. Paneah, On the solvability of functional equations associated with dynamical systems with two generators, (Russian) Funktsional. Anal. i Prilozhen. 37 (2003), no. 1, 55–72, 96; translation in Funct. Anal. Appl. 37 (2003), no. 1, 46–60.
• B. Paneah, Dynamic methods in the general theory of Cauchy type functional equations, Complex analysis and dynamical systems, 205–223, Contemp. Math., 364, Amer. Math. Soc., Providence, RI, 2004.
• B. Paneah, On the overdeterminedness of some functional equations, Partial differential equations and applications, Discrete Contin. Dyn. Syst. 10 (2004), no. 1-2, 497–505.
• M. Sablik, Some remarks on Cauchy equation on a curve, Demonstratio Math., 23 (1990), no. 2, 477–490.
• O.M. Shalit, Guided Dynamical Systems and Applications to Functional and Partial Differential Equations, M.Sc. thesis, available at arXiv:math/0511638v2.
• O.M. Shalit, On the overdeterminedness of a class of functional equations, Aequationes Math., 74 (2007), no. 3, 242–248.
• M. Zdun, On the uniqueness of solutions of the functional equation $\varphi(x+f(x)) =\varphi(x)+\varphi(f(x))$, Aequationes Math., 8 (1972), 229–232.
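Every linear function f(t) = ct satisfies the displayed equation identically, since c·t = c(t+1)/2 + c(t-1)/2; the abstract's point is that continuous *nonlinear* solutions also exist, which is the surprising part. A quick numerical check of the easy half (illustrative only):

```python
def residual(f, t):
    # How far f is from satisfying f(t) = f((t+1)/2) + f((t-1)/2)
    return f(t) - f((t + 1) / 2) - f((t - 1) / 2)

linear = lambda t: 3.0 * t    # any f(t) = c*t satisfies the equation
quadratic = lambda t: t * t   # a generic smooth nonlinear f does not

ts = [k / 10 - 1 for k in range(21)]   # sample points in [-1, 1]
lin_max = max(abs(residual(linear, t)) for t in ts)
quad_max = max(abs(residual(quadratic, t)) for t in ts)
print(lin_max, quad_max)   # ~0 for the linear family, clearly nonzero for t^2
```

For f(t) = t² the residual works out to t²/2 - 1/2, which vanishes only at t = ±1; this is consistent with the paper's result that nonlinear continuous solutions exist but cannot be this smooth.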
Countable ordinal mapping w/ order preserving function

October 19th 2009, 07:36 PM #1
Junior Member, Jul 2008

Here's the problem as stated: Show if $\alpha$ is any countable ordinal, then $\exists f:\alpha \rightarrow \Re$ (to the reals) where $f$ is order preserving... So $\beta \in \gamma \in \alpha \Rightarrow f(\beta) < f(\gamma)$.

Now, the only idea I have is to create a function that will map any countable ordinal to $\Re$, for instance:
$1 \rightarrow .1$
$2 \rightarrow .12$
$3 \rightarrow .123$
$\omega \rightarrow .12345...$
So this covers through $\omega$. Do I need it to cover through $\omega_{1}$, the first uncountable ordinal? If so, what is an example of something that can get that large? Maybe all the irrationals, mapped similarly as above? Any help would be appreciated. Note: I'm trying to avoid using the continuum hypothesis.

October 20th 2009, 12:51 PM #2
Can you use transfinite induction to beef up your construction, so that instead of just going up to $\omega$ it goes up to any countable ordinal $\alpha$?

October 26th 2009, 09:27 AM #3
Junior Member, Jul 2008
Transfinite induction seems to be the way to go on this one. In case anyone comes across this same sort of problem, I found the following at http://www.math.niu.edu/~rusin/known-math/00_incoming/countable_ord and it appears to have been posted by Dave Seaman of Purdue.

Lemma. Let Q(0,1) be the set of rationals in the interval (0,1). Then there is an order-preserving map f: Q -> Q(0,1).

Proof. Divide (0,1) into a countable number of subintervals such that the set of subintervals has the same order type as Z. For example, the partition points may be 1/2 +/- 1/2^k for k = 2, 3, 4, .... For each n, construct an order-preserving map of the rationals in [n,n+1) into the n-th subinterval of (0,1), where [1/2,3/4) is the zeroth subinterval.

Proposition. Let alpha be a countable ordinal. Then there is an order-preserving map f: alpha -> Q.

Proof. By transfinite induction. The case for successor ordinals follows immediately from the lemma. Suppose gamma is a countable limit ordinal, and suppose { alpha_k } is a countable sequence of ordinals with lim_k alpha_k = gamma, where for each alpha_k we have an order-preserving f_k : alpha_k -> Q. Using the lemma, we may construct an order-preserving g_k : alpha_k -> Q(k,k+1), where the range is the set of rationals in the interval (k,k+1). Then we define f: gamma -> Q such that for each beta < gamma, we let k be the least index such that beta < alpha_k, and then define f(beta) = g_k(beta). The resulting map is clearly order-preserving, Q.E.D.

Notice it does not follow from this that there is an order-preserving map defined on the union of all the countable ordinals, since that is an uncountable set and any partition of the reals or rationals into nondegenerate intervals is necessarily a countable collection. In fact omega_1, the first uncountable ordinal, is the smallest ordinal that cannot be so mapped onto the rationals (or the reals).
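The lemma's interval-juggling can also be realized by a single formula: h(t) = 1/2 + t/(2(1+|t|)) is a strictly increasing map taking every rational into a rational in (0,1). This is a different construction from the one quoted above, shown only to make the idea concrete; the check below stays in exact rational arithmetic:

```python
from fractions import Fraction

def h(t: Fraction) -> Fraction:
    # Strictly increasing, maps all of Q into the rationals of (0, 1)
    return Fraction(1, 2) + t / (2 * (1 + abs(t)))

# A sample of distinct rationals, in increasing order
sample = sorted({Fraction(p, q) for p in range(-6, 7) for q in range(1, 5)})
image = [h(t) for t in sample]

assert all(0 < v < 1 for v in image)                 # lands inside (0, 1)
assert all(a < b for a, b in zip(image, image[1:]))  # order is preserved
print(image[0], image[-1])
```

Of course a finite sample does not prove monotonicity; that follows from h being increasing on each half-line and continuous at 0, which is a two-line calculus check.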
Air at 17oC, 100 kPa flows in a duct. A stagnation tube connected to a U-tube manometer filled with mercury is placed in the duct. Using data on the figure, find the air velocity. Assume atmospheric pressure is 100 kPa.
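The figure (and with it the manometer deflection) did not survive, so any number here is an assumption; taking a hypothetical 10 mm mercury reading, the standard pitot-tube calculation runs as follows:

```python
import math

# Given state
T = 17 + 273.15          # K
P = 100e3                # Pa
R_air = 287.0            # J/(kg*K), specific gas constant of air

# Assumed manometer reading, since the figure is not available
h_hg = 0.010             # m of mercury (hypothetical)
rho_hg = 13_600.0        # kg/m^3
g = 9.81                 # m/s^2

rho_air = P / (R_air * T)            # ideal gas law, about 1.20 kg/m^3
dp = rho_hg * g * h_hg               # stagnation minus static pressure
V = math.sqrt(2 * dp / rho_air)      # Bernoulli: dp = rho_air * V^2 / 2
print(rho_air, dp, V)                # roughly 47 m/s for this reading
```

Whatever the actual deflection, the pattern is the same: density from the ideal gas law, dynamic pressure from the mercury column, velocity from Bernoulli.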
Finite Continued Fraction Question

December 7th 2010, 08:05 PM #1
Nov 2008

Show that every rational number has exactly two finite simple continued fraction expansions. (Does this have something to do with how you handle the end of the continued fraction?)

$\displaystyle \frac{225}{157}$

This can be represented as $<b_0;b_1,b_2,\cdots, b_k> \ \mbox{and} \ <b_0;b_1,b_2,\cdots, b_k - 1,1>$.

Therefore, our fraction can be represented like

$\displaystyle <1; 2,3,4,5>$

$\displaystyle 1+\frac{1}{2+\frac{1}{3+\frac{1}{4+\frac{1}{5}}}}$

$\displaystyle <1;2,3,4,4,1>$

$\displaystyle 1+\frac{1}{2+\frac{1}{3+\frac{1}{4+\frac{1}{4+\frac{1}{1}}}}}$
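Both expansions can be generated mechanically: Euclid's algorithm gives the canonical form, and rewriting the final term $b_k$ as $(b_k - 1) + \frac{1}{1}$ gives the second. A sketch in exact arithmetic:

```python
from fractions import Fraction

def cf(p, q):
    # Canonical simple continued fraction of p/q via Euclid's algorithm
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def eval_cf(terms):
    # Fold the expansion back into a single fraction, right to left
    value = Fraction(terms[-1])
    for b in reversed(terms[:-1]):
        value = b + 1 / value
    return value

canonical = cf(225, 157)                            # [1, 2, 3, 4, 5]
variant = canonical[:-1] + [canonical[-1] - 1, 1]   # [1, 2, 3, 4, 4, 1]
print(canonical, variant)
```

Running this confirms that both lists evaluate back to 225/157, matching the two expansions displayed above.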
A Generalized Nonlinear Gronwall-Bellman Inequality with Maxima in Two Variables

Journal of Applied Mathematics
Volume 2013 (2013), Article ID 853476, 10 pages

Research Article

Department of Mathematics, Sichuan University for Nationalities, Kangding, Sichuan 626001, China
Received 15 November 2012; Accepted 20 January 2013
Academic Editor: Jitao Sun

Copyright © 2013 Yong Yan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper deals with a generalized form of nonlinear retarded Gronwall-Bellman type integral inequality in which the maximum of the unknown function of two variables is involved. This form includes both a nonconstant term outside the integrals and more than one distinct nonlinear integrals. Requiring neither monotonicity nor separability of given functions, we apply a technique of monotonization to estimate the unknown function. Our result can be used to weaken conditions for some known results. We apply our result to a boundary value problem of a partial differential equation with maxima for uniqueness.

1. Introduction

The Gronwall-Bellman inequality [1, 2] plays an important role in the study of existence, uniqueness, boundedness, stability, invariant manifolds, and other qualitative properties of solutions of differential equations and integral equations. Many of its generalizations in various cases can be found in the literature (see, e.g., [3–18]). In 1956, Bihari [3] discussed the integral inequality where is a constant, is a continuous and nonnegative function, and is a continuous and nondecreasing positive function. Replacing by a function in (1), Lipovan [4] investigated the retarded integral inequality Their results were further generalized by Agarwal et al.
[5] to the inequality where the constant is replaced with a function , ’s are continuously differentiable and nondecreasing functions, and ’s are continuous and nondecreasing positive functions such that , that is, each ratio is also nondecreasing on , called in [6] "stronger nondecreasing" than . On the basis of this work, Wang [7] considered the inequality of two variables where the functions , , and are not required to be monotone, and those ’s are not required to be stronger monotone than the one after the next, as shown in (4). This inequality belongs both to the case of multivariables, to which great attention [7–11] has been paid, and to the case in which the left-hand side is a composition of the unknown function with a known function, where Ou-Iang's idea [19] was applied [11–14]. He applied a technique of monotonization to construct a sequence of functions, made each function possess stronger monotonicity than the previous one, and gave an estimate for the unknown function . On another front, many problems in control theory can be modeled in the form of differential equations with maxima of the unknown function [20–22]. In connection with the development of the theory of differential equations with maxima (see, e.g., [20, 21, 23]) and of partial differential equations with maxima [24, 25], a new type of integral inequality with maxima is required. Some results have been given for integral inequalities containing the maxima of the unknown function [23, 26–28]. Concretely, in 2012, Bohner et al. [26] discussed the following system of integral inequalities: where , ’s, ’s, , and are nonnegative continuous functions and ’s are nonnegative continuously differentiable and nondecreasing functions. They required that , is on and increasing such that for , and satisfies the following: (i) is an increasing function, and (ii) for all and .
Bainov and Hristova [23] considered the following system: where is nonnegative and nondecreasing in both of its arguments, , , and are continuous and nonnegative functions, and . In this paper, we consider the following system of integral inequalities: where , ’s, ’s, and are continuous and nonnegative functions, ’s and ’s are nonnegative continuously differentiable and nondecreasing functions, and . As required in previous works [27–29], we suppose that , , is constant. In this paper, we require neither monotonicity of , ’s, ’s, and nor . We monotonize those ’s to make a sequence of functions in which each one possesses stronger monotonicity than the previous one, so as to give an estimate of the unknown function. We can use our result to discuss inequalities (6) and (7), giving stronger results under weaker conditions. We finally apply the obtained result to a boundary value problem of a partial differential equation with maxima for uniqueness.

2. Main Result

Consider system (8) of integral inequalities with and in . Let , . Suppose that:

(H1) and , , are nondecreasing such that on , on and ;
(H2) all ’s are continuous and nonnegative functions on ;
(H3) and are continuous, and is strictly increasing such that ;
(H4) all ’s () are continuous on and positive on ;
(H5) is a continuous and nonnegative function on .

For those ’s given in (), define , , inductively by for and for , where for , if or if for , and is a given very small constant.

Theorem 1. Suppose that hold for all and satisfies the system (8) of integral inequalities. Then, for all , where is the inverse of the function , is a given constant, is defined just before the theorem, and is defined recursively by for , and are chosen such that for .

For the special choice that , , , , , , , , and , where is a nonnegative continuously differentiable and nondecreasing function, Theorem 1 gives an estimate for the unknown in the system (7). We require neither the monotonicity of nor the monotonicity of .
Obviously, Lemma 2 and Theorem 1 are applicable to more general forms than Corollary 2.3.4 in [23]. Even if is enlarged to such that (8) is changed into the form of in [29], where , our theorem gives a better estimate. For example, the system of inequalities implies that by enlarging to . Applying Theorem 1, we obtain On the other hand, Theorem 2.2 of [29] gives from (17) that Clearly, (18) is sharper than (19) for large and . In order to prove Theorem 1, we need the following lemma.

Lemma 2. Suppose that:

(C1) and are nondecreasing such that on and on and ;
(C2) , for ;
(C3) all ’s are continuous and nondecreasing on and positive on such that ;
(C4) is continuously differentiable in and , nonnegative on , and for all .

If satisfies the following system of inequalities: then for all , where is the inverse of the function , is a given constant, and is defined recursively by for , and are chosen such that for .

Proof. From (23), we see that is nondecreasing on , , and for . It implies from (20) that for all . Let . Clearly, is nondecreasing in . Then, we have From (25), (27), and (28) and the definition of on , we get Applying Theorem 1 of [7] to the case that , , , and , , we obtain (21) from (28). This completes the proof.

Proof of Theorem 1. First of all, we monotonize some given functions , , , and in the system (8) of integral inequalities. Let From (13), we see that the function is strictly increasing, and therefore its inverse is well defined, continuous, and increasing in its domain. The sequence , defined by , consists of nondecreasing nonnegative functions on and satisfies Moreover, because the ratios , , are all nondecreasing. Furthermore, let which is nondecreasing in and for each fixed and , and satisfies for all . The monotonicity of implies that for .
From (8) and the definition of , we obtain Concerning (34), we consider the auxiliary system of inequalities where and are chosen arbitrarily, and claim for all , , where , is defined inductively by for , and are chosen such that for . Notice that we may take and . In fact, and are both nondecreasing in and for fixed , . Furthermore, it is easy to check that , for . If , are replaced with , , respectively, on the left side of (39), we get from (15) that Thus, it means that we can take , . Now, we prove (36) by induction. From (33), (35), and the definitions of , , and , we obtain for all , where and are chosen arbitrarily. Since and , we have . Define a function by . Clearly, is nondecreasing in . By (41) and the definition of , we have Then, noting that is nondecreasing and is strictly increasing, from (43), we obtain It follows from (43), (44), and the definition of that In order to demonstrate the basic condition of monotonicity, let , which is clearly a continuous and nondecreasing function on . Thus, each is continuous and nondecreasing on and satisfies for . Moreover, since , is also continuous and nondecreasing on and positive on , implying that , for . By Lemma 2 and (45), for and . It follows from (43) and (46) that for and . This proves the claimed (36). Taking , , and in (36), we have for all , . It is easy to verify . Thus, (48) can be written as Since are arbitrary, replacing and with and , respectively, we get for all . This completes the proof.

3. Applications

In this section, we apply our result to prove the boundedness of solutions of a differential equation with maxima. Consider a system of partial differential equations with maxima where , , are nondecreasing such that , , and ( is a positive constant) for , , and , satisfy and , for all . Equation (51) is more general than the equation considered in Section 2.4 of [23]. The following result gives an estimate for its solutions.

Corollary 3.
Suppose that the functions and in (51) satisfy where and , . Then, any solution of (51) has the estimate for all , where and , are given as in Theorem 1, and the constants , are given

Proof. From (51), we obtain From (52) and (55), we get Set for . Noting that , from (56), we get Applying Theorem 1 to the specified , , , , and , , , , and , we obtain (53) from (57).

Next, we discuss the uniqueness of solutions of system (51).

Corollary 4. Suppose that and for all and all , where and are both nondecreasing such that , for , is also nondecreasing, and , . Then, system (51) has at most one solution on .

Proof. From (51), we get Assume that (59) has two different solutions and . From the equivalent integral equation system (55), we have for all . The continuity of the function implies that for any fixed points and there exists a point such that the inequality holds, and therefore Hence, Let Because , from (62), we obtain Applying Theorem 1 to the specified , , , , , , , , and , from (64), we
Homework Help

Posted by jasmine20 on Sunday, April 15, 2007 at 4:45pm.

How can I simplify this more? It's for the following problems.

Problem #22: solve by using the quadratic formula. This is where I am, but I do not know how to go further:
x = (4 (+/-) sqrt(-4))/(10)

Problem #23: solve by completing the square. I started using the quadratic formula, but I am up to this point:
x = (-2 (+/-) sqrt(-44))/(8)

In #22 you are right so far. Have you learned about imaginary or complex numbers? If not, at this point you would say "there is no real solution."

For #23 you are not supposed to use the quadratic formula. Use the method I showed you in the last post to you, but you have to divide all terms by 4 to get x^2 at the front:
x^2 + 1/2 x = 3/4
Now take 1/2 of the 1/2, which is 1/4, square that, and add 1/16 to both sides of the equation to keep the equality. Let me know what you got.

I don't know what you mean by "half of the half", or how you know to take that half. And where did you get the 1/16? Can you show me? Write down that step of what you refer to, please.

1/2 of 1/2 = 1/2 * 1/2 = 1/4. If you have half a pizza and you take half of that, how much pizza do you have? I had to square 1/4 ---> (1/4)(1/4) = 1/16

I need help NOW because report cards are coming out tomorrow and I have an F in math
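The tutor's completing-the-square steps can be checked numerically. A minimal sketch, assuming the underlying equation in problem #23 is 4x^2 + 2x - 3 = 0 (which is what "divide all terms by 4 to get x^2 + 1/2 x = 3/4" implies):

```python
import math

# Completing the square for 4x^2 + 2x - 3 = 0.
a, b, c = 4.0, 2.0, -3.0

half = (b / a) / 2        # half of the x-coefficient: 1/4
rhs = -c / a + half ** 2  # 3/4 + 1/16 = 13/16, added to both sides

# (x + 1/4)^2 = 13/16  =>  x = -1/4 +/- sqrt(13)/4
roots = (-half + math.sqrt(rhs), -half - math.sqrt(rhs))

for x in roots:
    # each root satisfies the original equation
    assert abs(a * x**2 + b * x + c) < 1e-9
```

Unlike problem #22, the completed square here has a positive right-hand side, so both roots are real.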
Polygon tessellation - OpenGL Discussion and Help Forums

You have to split all triangles sharing the longest edge at the same time, else you'll get T-junctions. That means, for manifold meshes, you delete 2 triangles and add 4 triangles for each edge.
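The rule above can be sketched with triangles as vertex-index triples. The function name and index layout here are illustrative assumptions, and winding order of the halves is not preserved:

```python
def split_shared_edge(tris, edge, mid):
    """Split every triangle containing `edge` (an unordered pair of vertex
    indices) at the new midpoint vertex `mid`.  On a manifold mesh the edge
    is shared by exactly two triangles, so 2 triangles are deleted and 4
    added -- splitting both at once is what avoids T-junctions."""
    a, b = edge
    out = []
    for tri in tris:
        if a in tri and b in tri:
            # apex vertex opposite the shared edge
            c = next(v for v in tri if v != a and v != b)
            out.append((a, mid, c))  # two halves replace the original triangle
            out.append((mid, b, c))
        else:
            out.append(tri)
    return out
```

Splitting only one of the two triangles would leave the other with an edge spanning the new vertex, which is exactly the T-junction the post warns about.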
Stuff from T:Formulas:Defense

Does anyone know what the damage reduction from armor is? I know it changes across levels but someone must know the formula... —The preceding unsigned comment was added by WoWWiki-Orb (talk · contr).

Damage-Reduction = armor / (armor + 85*level + 400)

Taken from http://www.worldofwar.net/guides/damagereduction.php —The preceding unsigned comment was added by Feysharnalie (talk · contr).

On the main page I submitted a more precise number for defense skill-per-rating; here, I offer the data from which I derived that number. The first column is taken directly from the character sheet with different gear combinations; the second column is taken from the "GetCombatRatingBonus(2)" API function in-game. The output of this function appears to be floating point, which only allows for a certain number of significant digits; this is why the decimal precision drops as the number of non-decimal digits increases. Please note that WoW only considers Weapon and Defense Skills as integers, which are rounded down, which means there is no change to miss, dodge, etc. from 12 to 14 Rating (for example), since both give only 5 Skill.
Rating   Bonus              Defense Skill/Rating    Rating/Skill
12       5.0731697876777    0.4227641489731420      2.365385055542000
14       5.9186980856240    0.4227641489731430      2.365385055541990
16       6.7642263835703    0.4227641489731440      2.365385055541990
19       8.0325188304897    0.4227641489731420      2.365385055542000
21       8.8780471284360    0.4227641489731430      2.365385055541990
24       10.146339575355    0.4227641489731250      2.365385055542090
26       10.991867873302    0.4227641489731540      2.365385055541930
30       12.682924469194    0.4227641489731330      2.365385055542040
35       14.796745214060    0.4227641489731430      2.365385055541990
38       16.065037660979    0.4227641489731320      2.365385055542050
45       19.024386703791    0.4227641489731330      2.365385055542040
59       24.943084789415    0.4227641489731360      2.365385055542030
71       30.016254577093    0.4227641489731410      2.365385055542000
92       38.894301705529    0.4227641489731410      2.365385055542000
94       39.739830003475    0.4227641489731380      2.365385055542020
106      44.812999791153    0.4227641489731420      2.365385055542000

--Taleden 16:19, 22 March 2007 (EDT)

Rounded down?!

Does this mean that the Shield Block rating of 15 on Andormu's Tear, which counts for +1.90% (at lvl 70), is rounded down to +1%? And what about things like Greater Inscription of Warding with its 15 Dodge rating? Doesn't that equate to some +0.8% (at lvl 70) to dodge? Is that rounded down to 0? Surely not. What about on my character sheet where it says maybe +15.5% chance to parry, is that rounded down? Or does the rounding only occur with Defense Rating conversions and Weapon Skill Rating conversions? Benser 08:44, 25 July 2007 (UTC)

Originally, there was only defense skill on gear. The game is only coded to directly translate defense skill into dodge, parry, miss and block chance. Defense rating from gear is converted into defense skill by a formula based on level, and that amount of defense skill can only be an integer. As such, any extra defense rating is rounded down and provides no benefit at that particular time.
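The per-skill constant derived from this data can be checked directly. A small sketch, using the level-70 value from the Rating/Skill column above and modelling the integer truncation described in this thread:

```python
import math

# Level-70 Rating/Skill constant taken from the table above.
RATING_PER_SKILL_70 = 2.365385055542

def defense_skill_from_rating(rating):
    """WoW truncates the fractional skill, so 12 and 14 rating both yield 5 skill."""
    return math.floor(rating / RATING_PER_SKILL_70)

# Spot-checks against rows of the table:
assert defense_skill_from_rating(12) == 5
assert defense_skill_from_rating(14) == 5
assert defense_skill_from_rating(24) == 10
assert defense_skill_from_rating(106) == 44
```

The flat spots between, e.g., 12 and 14 rating are exactly the "rounded down" behaviour discussed above.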
WoWWiki-Sakkura (talk) 15:00, 4 September 2008 (UTC)

The 'ideal' defense skill rating for Druids with 3/3 Survival of the Fittest at 415 seems to be off, as I've had defense 416 today while fighting the Curator in Karazhan and he managed to land a few crushing blows on me. Coincidence? Or is he much higher than 3 levels above me? —The preceding unsigned comment was added by VerVe (talk · contr).

+Defense can actually reduce crushing blows

Theoretically you can reduce the chance of crushing blows (indirectly) via defense, because if you have enough of it you can get your Dodge, Parry, and Block stats high enough to push crushing blows off the attack table. Although it is not via the same mechanism, it's important not to be misled into thinking there's some sort of "defense cap". —The preceding unsigned comment was added by Nathanmx (talk · contr).

shadok posting: I would like to ask a few questions about defense that I'm unable to test myself. First, the defense cap that is mentioned on different sites: is it just the defense level that prevents all possible crits from bosses, or is it a real cap where defense stops reducing damage in any way? My second question: does defense increase the glancing blow rate? Indeed, if you just take the dodge/block/parry/miss chance that defense gives, you will find that pure dodge or parry rating is better in pure damage reduction, although when you try it, it seems that defense reduces the damage much more. From my own experience I would say it is due to an increased glancing blow rate and an increased glancing blow effect; however, it is impossible to find on the net a serious test made to calculate it. And I don't see what you mean about the crushing blows: even if your parry/miss/dodge rate prevents a lot of attacks on you, the remaining attacks will always crush at the same rate. There is no reason for parry, dodge and miss to apply preferentially to a crushing blow.
—The preceding unsigned comment was added by WoWWiki-Shadok (talk · contr).

Something the article isn't too clear on: a level 73 boss vs. a level 70 player tank. Does the boss still have a 5% crit chance vs. the player, or is it increased due to his skill being 365? For a druid to become crit-immune with Survival of the Fittest (-3% chance to be crit), that leaves the NPC at 2% crit chance: (365 - 415) * 0.04 = -2, which means 415 Defense is needed by a feral druid to become crit-immune for a skull boss. Would appreciate it if someone could just confirm that for me, and I can maybe work on making the article a little more clear on this. Furiousv 17:14, 31 December 2007 (UTC)

A 73 boss starts with a 5.6% crit chance against a 70 player with max defense. In the absence of resilience, 415 is the target defense for feral druids. Thoth 19:13, 16 January 2008 (UTC)

arithmetic error

In the "2.0 changes" section we have this text: "For a level 70: 2.36 DR = 2 41/112 DR = 1 defense skill". The arithmetic is wrong: 241/112 = 2.15, while 241/102 = 2.36. Can someone dig up a URL with the formula for all levels, plus fix whichever number is wrong? (So far the 241/102 is corresponding with my test data better.) Thoth 19:13, 16 January 2008 (UTC)

Thoughts on the math: I believe they are saying it is a mixed fraction, meaning 2 + (41/112) DR = 2 + 0.36607 DR ~= 2.367 DR, or rounded down to 2.36 DR. —The preceding unsigned comment was added by Brainbit (talk · contr).

Tank rating script

/script DEFAULT_CHAT_FRAME:AddMessage(2.6-(GetCombatRatingBonus(CR_DEFENSE_SKILL)*.04+GetCombatRatingBonus(CR_CRIT_TAKEN_MELEE)),1,0.5,0)

Is that script still accurate? And does it work for all classes? Zurr T ∙ C 02:26, 22 January 2008 (UTC)
I believe 5.6% would come from: □ 5% crit chance for all mobs of equal level (from Defense page), and □ and 0.6% crit chance for the extra 15 weapon skill a lvl73 will have against a lvl70 (0.04 crit/weapon skill). --Brainbit 21:49, 26 January 2008 (UTC) Base defense value 350 or 370 The page consistently speaks about the base defense capped at character_level * 5 => 70*5 = 350 at level 70, but my level 70 draenei warrior has a defense skill of 370 (in the actual skills table). Since 490 is the aim regaantrdless, it's not a big deal, but I wonder at the difference. Kallewoof 20:58, 31 March 2008 (UTC) It's because you have the anticipation talent. Anticipation adds 20 defense to the base defense, so the base will appear as 370 in your tables. —The preceding unsigned comment was added by Georgesmith (talk · contr). Patch 3.0.2 Changes? Is anyone working on changes to Defense in the 3.0.2 patch? My specific question: In 3.0.2, the Druid Talent Survival of the Fittest now reduces chance to be critically hit by 6%, from an original 3%. Since a 73 mob (boss) only has a crit chance of 5.6%, doesn't this mean that I am crit-immune without adding any +def to my base stat? The whole thing bases on max. Weapon Skill (WS) vs. max Defense Skill (DS), which is cLvl*5. For a 80 player this would be 80*5=400 DS vs 80*5=400 WS from a 80 mob. According to this, in the old (2.4.3) rules it still would mean 5% crit chance for the mob, add 3*0.2% for a lvl 83 mob, and you'll be at 5.6%. Now, Survival of the Fittest still grants 6% crit chance reduction, so I still should be crit-immune, right? The formula for Druids on the Defense page states 2.6% (instead of the usual 5.6%), so Survival of the Fittest is taken into account there. Now if i substract 6%, I already start with a negative number (and WoW returns an error when using the formula that way, hehe)... 
Another proof we should be crit-immune now :D If my theory is correct, this would be great news for all Druid tanks, as we don't have to gimp ourselves with +def any more and can go for sta, agi, dodge, and AP instead :) —The preceding unsigned comment was added by Subworx (talk · contr).

Remove section on crushing blows?

I've updated the section on crushing blows to better reflect gameplay after patch 3.0, but the entire section can probably be moved to another article or removed completely. Crushing blows are irrelevant to raids and high-level dungeons and hence probably not even worth bringing up in an article about defense (arguably a stat that only max-level tanks care about anyway). —The preceding unsigned comment was added by Hunterforhire (talk · contr).

Self contradiction

I decided to read up on tanking stats, as I am looking into making a tank set. I noticed this quote at the beginning of the article: "It further decreases the chance of receiving critical hits from any level attacker by 0.04% per point that the target's Defense skill exceeds the attacker's Weapon Skill." And this quote later down: "Each point of defense beyond the player's base reduces the chance to be critically struck by 0.04%." They can't both be right, because a raid boss's effective weapon skill against a level 80 raider is 415, which means that the first 15 points of defense beyond the base 400 (assuming it's trained up) don't reduce the raider's chance to be critically hit. Correct me if I'm wrong... but doesn't that go completely against all of the examples of how to become crit-immune, which start counting crit chance reductions from the very first point of additional defense? Does the attacker's weapon skill have anything at all to do with the crit chance reduction given by defense? Or is the crit chance reduction purely based on every point of defense that exceeds the raider's level*5, regardless of the attacker's level or stats?
LieAfterLie (talk) 11:55, 29 August 2009 (UTC)

I know this is belated with respect to the question, but for anyone reading, here is the explanation. The first part means that every NPC in the game (unless otherwise stated) has a base crit chance of 5%. But at 400 defense against a level 83 mob (weapon skill of 415), you're "reducing his chance to crit" by (400 - 415) * 0.04 = -0.6%. This leads to a double negative: 5.0% - (-0.6%) = 5.6%, an increase of 0.6% over his base crit rate. This is what the second quote is explaining: his chance to crit against you is 5.6% from the start, so every point of defense skill that you gain reduces the 5.6% by 0.04%, requiring 140 total for a level 83 mob. Of course, the easier way to look at it if your skill is maxed at any level is to just add the 0.2% per level above you and work from there without the reverse logic, but the previous example can be used if you're not 80 yet going into new content, or if your defense hasn't yet caught up shortly after reaching 80. This serves as a warning: if you just reached level 80, without accounting for resilience, even with 140 defense from gear, your character defense skill is only 395; you're still not uncrittable for raids until your character's defense skill reaches 400. (Ravath) 30 April 2010

A lot of numbers thrown around, without a lot of explanation. "Critical Hit immunity for a level 80 player against a raid boss occurs at 540 Defense and requires a defense skill of 140 (689+ def rating) from gear to achieve. Critical hit immunity at level 80 for a heroic dungeon is 535 Defense, because mobs in a level 80 heroic 5-man are never higher than level 82." So, do I need the thing on my character sheet to read '689'? Or 540? Gear doesn't show 'defense skill', just 'defense'. Is that the same thing? Any chance of a short version?
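The arithmetic in the explanation above can be written out as a small helper. This is a sketch using only the numbers quoted in this discussion (5% base crit, +0.2% per mob level above the player, -0.04% per point of defense skill); the function name is illustrative:

```python
def extra_defense_skill_needed(mob_level_delta):
    """Defense skill beyond the level*5 base needed to cancel a mob's crit chance.
    Base crit is 5%; each level a mob has over you adds 0.2% (5 weapon skill
    x 0.04%); each point of defense skill removes 0.04%."""
    crit_pct = 5.0 + 0.2 * mob_level_delta  # 5.6% for a +3 raid boss
    # round to the nearest integer to avoid float noise in the division
    return round(crit_pct / 0.04)

assert extra_defense_skill_needed(3) == 140  # raid boss: 400 base + 140 = 540 defense
assert extra_defense_skill_needed(2) == 135  # heroic (mobs max +2): 535 defense
```

This reproduces the 540 (raid) and 535 (heroic) defense targets quoted for level 80.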
--Azaram (talk) 09:40, April 29, 2010 (UTC)

Gear has defense rating, which adds to the defense stat on your character sheet in much the same manner as critical hit rating. Your CHARACTER SHEET, in the Defenses tab, will list your defense stat, and that's what needs to say 540. To get that, you will need a total of 689 defense rating on your gear. -- Dark T Zeratul (talk) 16:10, April 29, 2010 (UTC)

I edited the Critical Hit section re: Heroic Mobs and Raid Bosses to make it more clear that 535 Defense is sufficient for Heroics, because it's not uncommon to get DPS (i.e. non-Tanks) in Heroics screaming that a Tank needs 540 to be crit-capped. Jspattison (talk) 00:20, May 4, 2010 (UTC)

Does anyone know the formula for the increase in defense rating required per skill per level, or some way to derive it? I tried briefly to work it out, but didn't get any results. I was curious about this because I want to write a script to see whether you're currently uncrittable at any level while leveling up :) (Ravath)

The best I can do is say that Combat rating#Defense skills has the values for levels 60 and 70. -- Dark T Zeratul (talk) 21:26, April 30, 2010 (UTC)
[SciPy-User] Kurtosis/Skewness Robert Kern robert.kern@gmail.... Tue Mar 30 16:21:24 CDT 2010 On Tue, Mar 30, 2010 at 16:04, Dan bole <d.boles@hotmail.com> wrote: > Hi all, > I am trying to create a series of random variables selected from a > distribution. I would like this distribution to start as a normal > distribution, but then be altered based on assumptions of skewness and > kurtosis (so I am not calculating skewness/kurtosis from a dataset, but > instead creating the probability density function from assumptions of > skewness/kurtosis). I can create a normal distribution and then pull random > variables from this, and was wondering if it is possible to create a > distribution based on assumptions of skewness and kurtosis? There are an infinite number of distributions that will have the same skewness and kurtosis. However, it is reasonable to search for the maximum entropy distribution satisfying those constraints. The normal distribution is the maximum entropy distribution for a fixed mean and The PDF will have the form: pdf(x) = c * exp(- lagrange * (x ** arange(1, 5))) c is just the normalizing constant. You will have to find the lagrange parameters that satisfy the mean, variance, skewness and kurtosis. Sampling from this distribution will be tricky, though. You will have to resort to general methods that are going to be pretty slow. Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco More information about the SciPy-User mailing list
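The "general methods" Kern alludes to can be sketched as grid-based inverse-CDF sampling of the unnormalized density he gives. The function names, grid bounds, and resolution below are illustrative assumptions, not SciPy API:

```python
import math
import random

def unnormalized_pdf(x, lam):
    """pdf(x) proportional to exp(-(lam[0]*x + lam[1]*x**2 + lam[2]*x**3 + lam[3]*x**4)),
    i.e. Kern's c * exp(-lagrange . x**arange(1, 5)) without the normalizing c."""
    return math.exp(-sum(l * x ** (k + 1) for k, l in enumerate(lam)))

def sample(lam, lo=-10.0, hi=10.0, n=4000, rng=random):
    """Draw one sample by walking a discretized CDF on [lo, hi].
    Slow but general, as the post warns; the grid must cover the bulk of
    the density, and lam must make the density decay at both ends."""
    step = (hi - lo) / n
    xs = [lo + i * step for i in range(n + 1)]
    weights = [unnormalized_pdf(x, lam) for x in xs]
    u = rng.random() * sum(weights)  # uniform position along the total mass
    acc = 0.0
    for x, w in zip(xs, weights):
        acc += w
        if acc >= u:
            return x
    return xs[-1]
```

With lam = [0, 0.5, 0, 0] the density reduces to exp(-x**2/2), a standard normal, which matches Kern's remark that the normal distribution is the maximum entropy distribution for fixed mean and variance and gives an easy sanity check on the sampler.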
Back to Direct Style

Results 1 - 10 of 35

, 1992 "... This paper investigates the transformation of λv-terms into continuation-passing style (CPS). We show that by appropriate η-expansion of Fischer and Plotkin's two-pass equational specification of the CPS transform, we can obtain a static and context-free separation of the result terms into "esse ..."
Cited by 33 (15 self) Add to MetaCart We present a system of natural deduction and associated term calculus for intuitionistic non-commutative linear logic (INCLL) as a conservative extension of intuitionistic linear logic. We prove subject reduction and the existence of canonical forms in the implicational fragment. - Implementation and Application of Functional Languages, 16th International Workshop, IFL’04, number 3474 in Lecture Notes in Computer Science , 2004 "... Abstract. Landin’s SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin’s J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corre ..." Cited by 27 (19 self) Add to MetaCart Abstract. Landin’s SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin’s J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corresponding to this extension of the SECD machine, using a series of elementary transformations (transformation into continuation-passing style (CPS) and defunctionalization, chiefly) and their left inverses (transformation into direct style and refunctionalization). To this end, we modernize the SECD machine into a bisimilar one that operates in lockstep with the original one but that (1) does not use a data stack and (2) uses the caller-save rather than the callee-save convention for environments. We also identify that the dump component of the SECD machine is managed in a callee-save way. The caller-save counterpart of the modernized SECD machine precisely corresponds to Thielecke’s doublebarrelled continuations and to Felleisen’s encoding of J in terms of call/cc. 
We then variously characterize the J operator in terms of CPS and in terms of delimited-control operators in the CPS hierarchy. As a byproduct, we also present several reduction semantics for applicative expressions - ACM Letters on Programming Languages and Systems , 1993 "... syntax of the source language ` c : ' f:::; x : ø ; :::g ` x : ø ß ` e : ø !ø ß ` fix e : ø ß [ fx : ø 1 g ` e : ø 2 ß ` x : ø 1 : e : ø 1 !ø 2 ß ` e 0 : ø 1 !ø 2 ß ` e 1 : ø 1 ß ` @ e 0 e 1 : ø 2 ß ` e 1 : ' ß ` e 2 : ø ß ` e 3 : ø ß ` if e 1 then e 2 else e 3 : ø ß ` e 0 : ø 0 ß [ fx : ø 0 g ` ..." Cited by 26 (10 self) Add to MetaCart syntax of the source language ` c : ' f:::; x : ø ; :::g ` x : ø ß ` e : ø !ø ß ` fix e : ø ß [ fx : ø 1 g ` e : ø 2 ß ` x : ø 1 : e : ø 1 !ø 2 ß ` e 0 : ø 1 !ø 2 ß ` e 1 : ø 1 ß ` @ e 0 e 1 : ø 2 ß ` e 1 : ' ß ` e 2 : ø ß ` e 3 : ø ß ` if e 1 then e 2 else e 3 : ø ß ` e 0 : ø 0 ß [ fx : ø 0 g ` e 1 : ø 1 ß ` let x = e 0 in e 1 : ø 1 ß ` e 1 : ø 1 ß ` e 2 : ø 2 ß ` pair e 1 e 2 : ø 1 \Theta ø 2 ß ` e : ø 1 \Theta ø 2 ß ` fst e : ø 1 ß ` e : ø 1 \Theta ø 2 ß ` snd e : ø 2 Fig. 2. Type-checking rules for the source language approach is used by Kesley and Hudak [11] and by Fradet and Le M'etayer [9]. Both include a CPS transformation. Fradet and Le M'etayer compile both CBN and CBV programs by using the CBN and the CBV CPS-transformation. Recently, Burn and Le M'etayer have combined this technique with a global programanalysis [2], which is comparable to our goal here. 1.4 Overview Section 2 presents the syntax of the source language and the strictness-annotated language. We , 2003 "... We present a new transformation of-terms into continuation-passing style (CPS). This transformation operates in one pass and is both compositional and first-order. Previous CPS transformations only enjoyed two out of the three properties of being first-order, one-pass, and compositional, but the new ..." 
Cited by 26 (9 self) Add to MetaCart We present a new transformation of λ-terms into continuation-passing style (CPS). This transformation operates in one pass and is both compositional and first-order. Previous CPS transformations only enjoyed two out of the three properties of being first-order, one-pass, and compositional, but the new transformation enjoys all three properties. It is proved correct directly by structural induction over source terms instead of indirectly with a colon translation, as in Plotkin’s original proof. Similarly, it makes it possible to reason about CPS-transformed terms by structural induction over source terms, directly. The new CPS transformation connects separately published approaches to the CPS transformation. It has already been used to state a new and simpler correctness proof of a direct-style transformation, and to develop a new and simpler CPS transformation of control-flow information. , 1995 "... We prove an occurrence property about formal parameters of continuations in Continuation-Passing Style (CPS) terms that have been automatically produced by CPS transformation of pure, call-by-value λ-terms. Essentially, parameters of continuations obey a stack-like discipline. This property was intro ..." Cited by 24 (18 self) Add to MetaCart We prove an occurrence property about formal parameters of continuations in Continuation-Passing Style (CPS) terms that have been automatically produced by CPS transformation of pure, call-by-value λ-terms. Essentially, parameters of continuations obey a stack-like discipline. This property was introduced, but not formally proven, in an earlier work on the Direct-Style transformation (the inverse of the CPS transformation). The proof has been implemented in Elf, a constraint logic programming language based on the logical framework LF. In fact, it was the implementation that inspired the proof. Thus this note also presents a case study of machine-assisted proof discovery.
All the programs are available in ftp.daimi.aau.dk:pub/danvy/Programs/danvy-pfenning-Elf93.tar.gz and ftp.cs.cmu.edu:user/fp/papers/cpsocc95.tar.gz Most of the research reported here was carried out while the first author visited Carnegie Mellon University in the Spring of 1993. Current address: Olivier Danvy, Ny Munkeg... , 1999 "... Higher-order program transformations raise new challenges for proving properties of their output, since they resist traditional, first-order proof techniques. In this work, we consider (1) the "one-pass" continuation-passing style (CPS) transformation, which is second-order, and (2) the occurrence ..." Cited by 22 (8 self) Add to MetaCart Higher-order program transformations raise new challenges for proving properties of their output, since they resist traditional, first-order proof techniques. In this work, we consider (1) the "one-pass" continuation-passing style (CPS) transformation, which is second-order, and (2) the occurrences of parameters of continuations in its output. To this end, we specify the one-pass CPS transformation relationally and we use the proof technique of logical relations. , 2004 "... We present a systematic construction of a reduction-free normalization function. Starting from ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.10.7604","timestamp":"2014-04-18T01:45:50Z","content_type":null,"content_length":"35243","record_id":"<urn:uuid:79a7d3a2-cc80-4df9-9cd3-804cd4d19d9e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Trigonometric Conditional Equations April 16th 2012, 10:44 AM #1 Mar 2012 Sacramento, CA Solving Trigonometric Conditional Equations tan^2x=sin2x, 0<=x<2pi cosx cannot = 0 ; x cannot = pi/2 or 3pi/2 sinx=0 or sinx-2cos^3x=0 +-sqrt(1-cos^2x) - 2cos^3x=0 +-sqrt(1-cos^2x) = 2cos^3x *square both sides to remove radical 1-cos^2x = 4cos^6x I don't know how to solve this equation, can someone please provide me with a link to read how to solve this equation? Re: Solving Trigonometric Conditional Equations tan^2x=sin2x, 0<=x<2pi cosx cannot = 0 ; x cannot = pi/2 or 3pi/2 sinx=0 or sinx-2cos^3x=0 +-sqrt(1-cos^2x) - 2cos^3x=0 +-sqrt(1-cos^2x) = 2cos^3x *square both sides to remove radical 1-cos^2x = 4cos^6x I don't know how to solve this equation, can someone please provide me with a link to read how to solve this equation? First of all, don't forget that you need to solve \displaystyle \begin{align*} \sin{x} = 0 \end{align*}. Anyway, once you get to \displaystyle \begin{align*} \sin{x} - 2\cos^3{x} = 0 \end{align*} I would do this... \displaystyle \begin{align*} \sin{x} - 2\cos^3{x} &= 0 \\ \sin{x} &= 2\cos^3{x} \\ \frac{\sin{x}}{\cos{x}} &= 2\cos^2{x} \textrm{ we can do this since it has already been stated that }\cos{x} \neq 0 \\ \tan{x} &= \frac{2}{\sec^2{x}} \\ \tan{x} &= \frac{2}{\tan^2{x} + 1} \\ \tan{x}\left(\tan^2{x} + 1\right) &= 2 \\ \tan^3{x} + \tan{x} &= 2 \\ \tan^3{x} + \tan{x} - 2 &= 0 \\ X^3 + X - 2 &= 0 \textrm{ if we let }X = \tan{x} \\ \left(X - 1\right)\left(X^2 + X + 2\right) &= 0 \end{align*} You should be able to solve this now. Also please advise if you only want real solutions or if you want complex solutions as well (they are, as the name suggests, more complex)...
Re: Solving Trigonometric Conditional Equations tanx=1 gives x=pi/4 which cannot be a solution I suggest the following sin2x/cos2x=sin2x sin2x=sin2xcos2x sin2x-sin2xcos2x =0 sin2x(1-cos2x)=0 sin2x=0 or 1-cos2x=0 cos2x=1 The first gives 2x=0 or pi or 2pi or 3pi So x= 0 or pi/2 or pi or 3pi/2 cos2x=1 gives 2x= 0 or 2pi so x= 0 or pi which we already have. Re: Solving Trigonometric Conditional Equations If we are ruling out pi/2 and 3pi/2 that of course just leaves 0 and pi Re: Solving Trigonometric Conditional Equations tanx=1 gives x=pi/4 which cannot be a solution I suggest the following sin2x/cos2x=sin2x sin2x=sin2xcos2x sin2x-sin2xcos2x =0 sin2x(1-cos2x)=0 sin2x=0 or 1-cos2x=0 cos2x=1 The first gives 2x=0 or pi or 2pi or 3pi So x= 0 or pi/2 or pi or 3pi/2 cos2x=1 gives 2x= 0 or 2pi so x= 0 or pi which we already have. Why can't \displaystyle \begin{align*} x = \frac{\pi}{4} \end{align*} be a solution? It doesn't make \displaystyle \begin{align*} \cos{x} = 0 \end{align*}... Re: Solving Trigonometric Conditional Equations The original equation was tan2x=sin2x and x=pi/4 is not a solution. Re: Solving Trigonometric Conditional Equations Many apologies, I misread the question! I've been solving tan2x=sin2x
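The disagreement in the thread can be settled numerically (a quick check of my own, not part of the original posts): evaluating the residual of tan^2x = sin2x at the candidate values shows that x = pi/4 and 5pi/4 do satisfy the equation as originally written, and the objection only applied to the misread equation tan2x = sin2x.

```python
import math

def f(x):
    # residual of the original equation tan^2(x) = sin(2x)
    return math.tan(x) ** 2 - math.sin(2 * x)

# candidates discussed in the thread, restricted to [0, 2*pi)
candidates = {"0": 0.0, "pi/4": math.pi / 4,
              "pi": math.pi, "5pi/4": 5 * math.pi / 4}
for name, x in candidates.items():
    print(name, abs(f(x)) < 1e-9)   # all four print True
```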
{"url":"http://mathhelpforum.com/trigonometry/197397-solving-trigonometric-conditional-equations.html","timestamp":"2014-04-17T21:57:03Z","content_type":null,"content_length":"57112","record_id":"<urn:uuid:4140e546-cdce-4ff6-8338-8ba1c5f8b3ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Teacher2Teacher - Q&A #19885 View entire discussion From: --- To: Teacher2Teacher Service Date: Oct 21, 2008 at 15:27:45 Subject: Geometry for "gifted" student HI. Wondering if you had any thoughts on whether it is possible to "extend" 9th grade Geometry class material for a gifted student -- extending it, say, into her areas of interest, which are Biology and Biomedical research? The problem is that she hasn't yet studied Physics, Trig or Calculus? Would it just be better for her to complete the regular 9th grade curriculum, and look for Geometry applications as she moves into Physics, Trig and Calc.? Thanks for any thoughts! I am a parent, not a teacher, trying to get my gifted daughter's math needs met, EITHER by finding some kind of appropriate YEAR-LONG approaches and materials to "extend" the reach of the course while she is in the 9th grade Geometry course, OR by obtaining permission for her to move through the course at her own pace and finish by mid-year (to fit in some other course). I'm not a "math" person, per se, hence the questions. I was hoping that someone could tell me if 9th grade high-school geometry COULD even be "meaningfully extended" to make the course beneficial and of academic value to her ALL YEAR -- IS it possible to go into fluid mechanics, for example, without having yet studied Physics -- or would she be better off just moving thru this regular 9th-grade course material at her own accelerated pace, perhaps with some "extended asides" as appropriate, and then moving on -- being done with this course, and knowing to expect to see other types of geometry later on in college or graduate studies, after she'd had more science and more math to tie into it -- It just occurred to me that you might not be aware of the typical 9th-grade Geometry course contents (and we're here in Pennsylvania, too). These statements are from the high-school's website, and will put the course in perspective for you. 
My daughter is extremely discouraged by the painfully SLOW movement through course material, and the redundancy of the presentations -- and she has the "top" teacher and the "top" level class offered. She's a very visual person and likes math; she "gets" this stuff the first time through (and sits bored thru umpteen repetitions until the others get it), and she NEEDS to either move through and be done, OR have it "extended" in some way that has value to her professional goals... but that's what I'm trying to ascertain -- CAN it be "extended" now, with her having ZERO background in Physics or Trig or Calculus or Statistics -- or does it make sense to acquire what's here and be done, for now? Thanks, Karen In this course, you will develop skills in defining terms, thinking logically, and arriving at conclusions, both geometric and non-geometric. Lines, angles, circles, triangles, quadrilaterals and other geometric figures are studied. Students become familiar with two-column, paragraph, and indirect proofs. The relationship of geometry to arithmetic, algebra, and right triangle trigonometry is emphasized. You will also learn and develop some basic concepts of solid geometry, coordinate geometry, and In Geometry, you will study basic definitions and concepts relevant in Geometry. You will learn how to use deductive structure in which conclusions are justified by means of previously assumed or proved statements. You will learn the concept of congruent angles, segments, and triangles. You will also learn the concept of similar figures, the Pythagorean Theorem, circles, area, surface area and volume of various geometric figures. Post a public discussion message Ask Teacher2Teacher a new question
{"url":"http://mathforum.org/t2t/message.taco?thread=19885&message=1","timestamp":"2014-04-16T22:10:57Z","content_type":null,"content_length":"7995","record_id":"<urn:uuid:227a6318-c7dd-4eeb-b1a0-4d3a46983608>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Traveling Salesman Problem 5: Traveling Salesman Path Problem Given a set of points, the traveling salesman path problem asks for the shortest path from one point to another going through all the other points on the way. The case in which the start and end points coincide is precisely the traveling salesman problem (TSP). This Demonstration lets you drag the points and specify the starting and ending points. The shortest path is shown with arrows. To compare it to the TSP, the periphery of the yellow polygon shows the solution to the TSP for these points.
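For a handful of points the path variant is easy to check by brute force. The sketch below is my own (the Demonstration itself is written in Mathematica): it fixes the two endpoints and tries every ordering of the interior points, which is fine for demonstration sizes but grows factorially.

```python
import itertools
import math

def path_length(points, order):
    return sum(math.dist(points[a], points[b])
               for a, b in zip(order, order[1:]))

def shortest_path(points, start, end):
    """Brute-force shortest Hamiltonian path from start to end."""
    middle = [i for i in range(len(points)) if i not in (start, end)]
    best = min(itertools.permutations(middle),
               key=lambda p: path_length(points, (start, *p, end)))
    return (start, *best, end)

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 0.5)]
print(shortest_path(pts, 0, 3))   # (0, 1, 4, 2, 3)
```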
{"url":"http://demonstrations.wolfram.com/TheTravelingSalesmanProblem5TravelingSalesmanPathProblem/","timestamp":"2014-04-17T03:52:17Z","content_type":null,"content_length":"43435","record_id":"<urn:uuid:2babe9af-ae8f-4fe7-a44e-e82902b9206e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
1 ) Sand dunes on a desert island move as sand is swept up the windward side to settl November 8th 2010, 08:00 AM #1 Senior Member Oct 2009 1 ) Sand dunes on a desert island move as sand is swept up the windward side to settle in the leeward side. Such “walking” dunes have been known to travel 20 feet in a year and can travel as much as 100 feet per year in particularly windy times. Calculate the average speed in each case in m/s. (b) Fingernails grow at the rate of drifting continents, about 10 mm/yr. Approximately how long did it take for North America to separate from Europe, a distance of about 3 000 mi? a ) 20ft = 32180m average speed = 32180/100 = 321.8 t = 3000/10 = 300 2 ) A bristlecone pine tree has been known to take 4000 years to grow to a height of 20 ft. a)Find the average speed of growth in m/s b) In contrast the faster growing plant is the giant kelp which can grow at a rate of 2 feet in one day a ) 4000 years convert to seconds now 20 ft to m = 6.1 average speed = 6.1/345.6X10^-6 = 1.7650 b) I don't understand this ? 3 ) Two cars travel in the same direction along a straight highway, one at a constant speed of 55 mi/h and the other at 70 mi/h. (a) Assuming that they start at the same point, how much sooner does the faster car arrive at a destination 10 mi away? (b) How far must the faster car travel before it has a 15-min lead on the slower car? give me the idea in this question . November 8th 2010, 08:35 AM #2
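For the idea: all three parts are unit conversions plus d = vt. One caution on the attempt above: 20 ft is about 6.1 m (32,180 m would be 20 miles, not 20 feet). A sketch of the arithmetic, with the conversion factors as my own inputs:

```python
FT = 0.3048                 # metres per foot
MILE = 1609.344             # metres per mile
YEAR = 365.25 * 24 * 3600   # seconds per year

# (1a) walking dunes: 20 ft/yr and 100 ft/yr in m/s
slow = 20 * FT / YEAR
fast = 100 * FT / YEAR

# (1b) continental drift: 3000 mi at 10 mm/yr
years = (3000 * MILE * 1000) / 10     # distance in mm over mm per year

# (3a) time gap over 10 mi at 55 vs 70 mi/h, in minutes
gap_min = (10 / 55 - 10 / 70) * 60

# (3b) distance at which the faster car leads by 15 min = 0.25 h
lead_mi = 0.25 / (1 / 55 - 1 / 70)

print(f"{slow:.2e} m/s, {fast:.2e} m/s, {years:.2e} yr, "
      f"{gap_min:.2f} min, {lead_mi:.1f} mi")
# ~1.93e-07 m/s, 9.66e-07 m/s, 4.83e+08 yr, 2.34 min, 64.2 mi
```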
{"url":"http://mathhelpforum.com/math-topics/162541-1-sand-dunes-desert-island-move-sand-swept-up-windward-side-settl.html","timestamp":"2014-04-18T13:29:26Z","content_type":null,"content_length":"34161","record_id":"<urn:uuid:1689dd83-835e-41e2-b898-2524e5bd2aef>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Physics - tchrwill, Monday, December 17, 2007 at 3:45pm The Law of Universal Gravitation states that each particle of matter attracts every other particle of matter with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. Expressed mathematically, F = GM(m)/r^2 where F is the force with which either of the particles attracts the other, M and m are the masses of two particles separated by a distance r, and G is the Universal Gravitational Constant. The product of G and, lets say, the mass of the earth, is sometimes referred to as GM or mu (the greek letter pronounced meuw as opposed to meow), the earth's gravitational constant. Thus the force of attraction exerted by the earth on any particle within, on the surface of, or above, is F = 1.40766x10^16 ft^3/sec^2(m)/r^2 where m is the mass of the object being attracted and r is the distance from the center of the earth to the mass. The gravitational constant for the earth, GM(E), is 1.40766x10^16ft^3/sec^2. The gravitational constant for the moon, GM(M), is 1.7313x10^14ft^3/sec^2. Using the average distance between the earth and moon of 239,000 miles, let the distance from the moon, to the point between the earth and moon, where the gravitational pull on a 32,200 lb. satellite is the same, be X, and the distance from the earth to this point be (239,000 - X). Therefore, the gravitational force is F = GMm/r^2 where r = X for the moon distance and r = (239000 - X) for the earth distance, and m is the mass of the satellite. At the point where the forces are equal, 1.40766x10^16(m)/(239000-X)^2 = 1.7313x10^14(m)/X^2. The m's cancel out and you are left with 81.30653X^2 = (239000 - X)^2 which results in 80.30653X^2 + 478000X - 5.7121x10^10 = 0. From the quadratic equation, you get X = 23,859 miles, roughly one tenth the distance between the two bodies from the moon. 
So the spacecraft's distance from the earth is ~215,140 miles. Subtract this from the distance between the earth and moon and you will have your answer. Checking the gravitational pull on the 32,200 lb. satellite, whose mass m = 1000 lb.sec.^2/ft. The pull of the earth is F = 1.40766x10^16(1000)/(215,140x5280)^2 = 10.91 lb. The pull of the moon is F = 1.7313x10^14(1000)/(23,859x5280)^2 = 10.91 lb. This equal-attraction point is sometimes loosely identified with the Lagrangian point L1, though the true L1 (which also accounts for the orbital motion) lies somewhat farther from the moon. There is an L5 Society which supports building a space station at the L5 point of the earth-moon system. There are five such points in space, L1 through L5, at which a small body can remain fixed relative to two very massive bodies. The points are called Lagrangian Points and are the rare cases where the relative motions of three bodies can be computed exactly. In the case of a body orbiting a much larger body, such as the moon about the earth, L1 lies on the earth-moon line between the two bodies, L2 lies on the same line just beyond the moon, and L3 lies on the line on the far side of the earth, diametrically opposite the moon. The remaining L4 and L5 points are located on the moon's orbit such that each forms an equilateral triangle with the earth and moon; only L4 and L5 are stable. Physics - tchrwill, Monday, December 17, 2007 at 7:22pm 215,140 miles = 346,217 km 390,000 - 346,217 = 43,783 km The actual mean distance between the earth and moon is 238,868 miles or 384,338 km. Most often the mean distance is quoted as 239,000 miles or 384,551 km. Then, 384,551 - 346,217 = 38,334 km, or less than using your distance of 3.9x10^5. What is the answer you are seeking?
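The quadratic in the explanation above can also be sidestepped: both sides of the balance equation are positive, so taking square roots gives a linear equation in X. A sketch reproducing the figures (constants copied from the post):

```python
import math

GM_EARTH = 1.40766e16   # ft^3/sec^2, from the post
GM_MOON = 1.7313e14     # ft^3/sec^2
D = 239_000             # mean earth-moon distance, miles

# GM_E/(D - X)^2 = GM_M/X^2  =>  (D - X)/X = sqrt(GM_E/GM_M)
k = math.sqrt(GM_EARTH / GM_MOON)
x = D / (1 + k)                  # distance from the moon, miles
print(round(x), round(D - x))    # 23859 215141
```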
{"url":"http://www.jiskha.com/display.cgi?id=1197922499","timestamp":"2014-04-19T22:15:23Z","content_type":null,"content_length":"13074","record_id":"<urn:uuid:a38fe639-fd4d-4cf2-9598-031ad84e9b2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
How Safe Is Excel? by AccountingWeb Everyone has, or knows, horror stories about Excel. With the next release of Excel allowing a worksheet to hold over a million rows, does this mean we are headed for bigger and bigger disasters? David Carter suggests that the dangers stem not from Excel itself, but from using Excel formulas. What would life be like without Excel? For most accountants it’s the most important piece of software they have - very easy to use, but incredibly versatile. Every so often, however, we hear horror stories about Excel disasters. The doom-mongers mutter darkly that this very strength – its ease of use - is Excel’s weakness. So many spreadsheets out there have simply been cobbled together by users as they go along, with no proper discipline behind the design or the data structure. And indeed we’ve all seen a forecast where somehow one of the figures has got wrongly calculated and the error has flowed right through to the bottom line. So just how safe is Excel? The question is particularly relevant when the next version of Excel will be able to hold 20 times more data in a worksheet than it does now. Will this just mean bigger and bigger disasters? To answer the question, we first need to think what Excel actually does, because over the 15 years of its life Excel has developed into a composite of three different applications. 1. Excel is a Spreadsheet After the PC arrived in the early 80s, the biggest selling application to run on it was the Lotus 1-2-3 spreadsheet. Accountants used Lotus for financial modelling and forecasting; non-accountants for more humble but very useful tasks such as printing lists. 2. Excel is a Database In the early 90s Microsoft introduced Excel. In the ensuing “feature wars” against 1-2-3, Microsoft realised that the row and columns design of a spreadsheet exactly mirrors the records and fields design of a database.
So they enhanced Excel by adding features up till then found only in database packages – features such as Autofilter, Sort, Sub-Total and Pivot Tables (“cross-tabs”, as they are known in the trade). Excel became a personal database as well as a spreadsheet. 3. Excel is a Data Analysis and Reporting Tool Finally, with the advent of Windows and WYSIWYG (“what you see is what you get”) accountants realized that if they could get their data into Excel, they could manipulate it, then format it into a final report good enough to distribute to senior management. Using the Import Wizard they could take data from somewhere else, then analyse and reformat it in Excel to produce management reports. A spreadsheet for financial modelling, a database for storing records, a tool for analysing and reporting on your company data – Excel is all of these. Where things go wrong in Excel Most Excel disasters involve large spreadsheets which have been continually added to until they get out of control. So can we conclude that big spreadsheets are bad spreadsheets? Not necessarily. Suppose, for example, that you import a million records from your ERP/accounts package into Excel and analyse them with pivot tables. This would be perfectly safe. The input data has come from the accounts program where it is stored under proper control, while the output reports are automatically calculated by the program, and the results are always correct. (You may interpret them incorrectly of course, but arithmetically they will always be right). How many formulas will you be using? So it’s perfectly safe to have monster-size spreadsheets when you are importing external data into Excel, then using pivot tables to analyse and report on it. Where things start to go wrong is where there is a lot of data in the spreadsheet, and the user is using Excel formulas to manipulate it. Quite often I’ve been to companies whose accountant has left. 
Behind him remains a spreadsheet containing all their costings, pricing, forecasts, whatever. The information is vital and as an “Excel expert” I’m supposed to be able to come in and make sense of it. I dread these spreadsheets. Often there are formulas all over the place. Changing one number here will change a dozen numbers somewhere else. It’s impossible to work out the logic of how it works at all, let alone whether it is working properly. At the end of the day all you can do is scrap it and start again. A lot of formulas = bad design At one level a formula can be a brilliant example of human ingenuity. But at another it indicates a failure in design – a short-term solution when the real need is to structure the data in a proper way for the long-term. So, for example, at the risk of offending some readers, I would argue that any spreadsheet that uses the SUMIF formula to calculate totals has been badly designed. It is more reliable to use a pivot table for these calculations. Excel and development But a lot of the time you simply have to use formulas. If you want a job to be computerised, IT developers have this irritating habit of demanding a complete specification before they begin to write a line of code. But often at the beginning you only know vaguely that something needs to be done, without having any clear idea of what the final result is going to look like. So you just sling the data into Excel, and use formulas to work out the answers as you go along. In the real world this suck it and see approach is often the only way to get any new system off the ground. But in principle, of course, the developers are right. What should happen is that, after a few weeks or months running the job in Excel, the application matures and you’ve now got a clear idea in your head of the logic of it all. Having by trial and error got yourself to the stage where you have a full understanding of the job, you can now decide on the next step. Is it simple enough to leave in Excel?
Or is it too big for Excel, with too much data being strung together by too many formulas? If the latter, it’s time to go out and search for a third party package that will do the job, or go to an IT professional, give them a spec, and get them to re-write the application properly in something like Access. It’s when this second step is not taken, and prototype systems based on Excel are simply left to get bigger and bigger, that disasters tend to occur. This article was originally published on AccountingWEB UK, our sister site. It can be found in the ExcelZone at www.accountingweb.co.uk
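The pivot-over-formulas discipline the article recommends is not Excel-specific. As an illustration (hypothetical sales data and Python/pandas, my own choices rather than anything from the article), the totals below are computed by the tool from the raw records, not by hand-placed SUMIF-style cells that can silently miss rows:

```python
import pandas as pd

# hypothetical transaction list, as if imported from an accounts package
sales = pd.DataFrame({
    "region":  ["North", "South", "North", "South", "North"],
    "product": ["A", "A", "B", "B", "A"],
    "amount":  [100.0, 150.0, 200.0, 50.0, 75.0],
})

# the pivot-table equivalent: one aggregation rule, recalculated from source
report = sales.pivot_table(index="region", columns="product",
                           values="amount", aggfunc="sum", margins=True)
print(report)
```

Excel's own PivotTables give the same guarantee: the aggregation logic lives in one place and recalculates from the source range.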
{"url":"http://www.accountingweb.com/print/135471","timestamp":"2014-04-21T11:42:51Z","content_type":null,"content_length":"15832","record_id":"<urn:uuid:9b7740db-d112-4609-834f-cf52d00e2b06>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
LRA on Carrier 5-ton Infinity / Performance 03-08-2013, 06:26 PM LRA on Carrier 5-ton Infinity / Performance I am trying to install a 20 kW Generac Guardian natural gas generator. I believe this generator is rated for 145 LRA, but my current AC unit is 10+ years old, and pulls 165 LRA. So, I'm looking to upgrade my 5-ton AC unit, likely to a Carrier model. It appears that the Performance Series 5-ton (24ACC6) has an LRA of 135 amps. Good, but it'd be better if there were some more wiggle room. See top of page 8 below: On the other hand, according to the specs that I'm able to find online, the Carrier Infinity 5-ton (24ANB7) has an LRA of 118 -- great! See top of page 5: However, my HVAC contractor informs me that the latest info available on the Infinity 5-ton lists an LRA of 159.6. That's not great. Can anyone here help out? Which is correct? Thanks very much! 03-08-2013, 08:30 PM He is correct. In early 2013 Carrier changed a few things on the 24ANB7 units, and one was the compressor. When the compressor changed, so did the electrical specs for the unit. Both the MCA and MOCP increased, as did the compressor LRA, which is now 152.9. 03-08-2013, 08:45 PM Add some insulation, get a load calc done, maybe you only need 4 tons. Look at a Greenspeed. I suspect the LRA would be a little lower. Sent from my SGPT12 using Tapatalk 2 03-08-2013, 09:42 PM Wow, at least according to this, LRA is 42 !! Will look into this, thanks much for the recommendation. 03-09-2013, 09:28 AM rundawg: Do you happen to know if the LRA on the Performance is current as well (135), or did they make changes to that compressor as well? 03-09-2013, 01:20 PM It has a digital scroll... so there's a drive and it's a permanent-magnet motor, so it makes good low-end torque and doesn't need a huge electrically generated field to start spinning. Plus the drive has large capacitors as well.
The Greenspeed also produces more heat without strips at below about 40°F temps, and if sized right will run almost continuously. Factor in its cost over a larger Sent from my SGPT12 using Tapatalk 2 03-09-2013, 03:09 PM The 25HCB6 (16 SEER) also had the compressor change in July 2012, so its LRA is 152.9. The 25HCB3 (13 SEER), and 25HCC5 (15 SEER), both have an LRA of 134.0 03-09-2013, 05:48 PM 03-09-2013, 06:31 PM Sorry about that, I was thinking heat pumps. In the latest 24ACC6 manual, the LRA is 135.
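Once the numbers in the thread are collected, the sizing question is a one-line comparison per unit. A sketch (LRA figures taken from the posts above; the 145 A figure is the generator rating quoted in the first post, and a real installation should be sized from the generator manufacturer's sizing guide and local code, not from a script like this):

```python
GEN_SURGE_A = 145          # Generac 20 kW LRA rating, per the original post

units = {
    "existing 10-yr-old 5-ton":     165.0,
    "24ACC6 (Performance)":         135.0,
    "24ANB7 (Infinity, 2013 spec)": 152.9,
    "25HCC5 (15 SEER)":             134.0,
    "Greenspeed (inverter drive)":   42.0,
}

for name, lra in units.items():
    verdict = "fits" if lra <= GEN_SURGE_A else "exceeds"
    print(f"{name}: LRA {lra} A {verdict} the {GEN_SURGE_A} A rating")
```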
{"url":"http://hvac-talk.com/vbb/printthread.php?t=1273871&pp=13&page=1","timestamp":"2014-04-19T06:39:36Z","content_type":null,"content_length":"11269","record_id":"<urn:uuid:dfc36570-ddde-4f5b-8f2e-9c23de9807a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Trisection Date: Jul 3, 2002 10:52 AM Author: John Conway Subject: Re: Trisection When I read this, I was immediately suspicious, partly because the claimed accuracy sounded incredible, and partly because the construction involves an arbitrary choice (which, so to speak, "contradicts" the particularity of the error-estimate given, which is probably just for one value of that choice). So I started to try to disprove it, but since I've now verified quite a bit of it, I'm beginning to change my mind... On 20 Jun 2002, Mark wrote: Come on Mark, tell us your full name, and how you came on this remarkable idea! > Just wanted to share this, I worked it out a couple of years ago and > was surprised by the accuracy: > Near Exact Trisection: > 1. Start with an unknown angle <90 deg., label the vertex A. > 2. Draw an arc with origin at A crossing both lines of the angle at > points B and C. > 3. Draw line BC making an isosceles triangle. > 4. Using point C as the origin, draw an arc crossing line BC and the > earlier arc somewhere between 1/4 and 1/2 way between points C and B. > Label where this new arc crosses line BC point D. > Label where this new arc crosses the first arc point E. > 5. Draw line DE and extend it well past A . If line DE passes > exactly through point A (it won't) stop, your first guess was an exact > trisection. My first "hope" was that this would be wrong (I say "hope", because if it were wrong, I'd be absolved from checking the rest). But it's entirely correct, and already intriguing, in that it leads to a nice "neusis" construction whose simplicity rivals Archimedes'. Congratulations, Mark! > 6. Extend line AC well past point A, step off 3 times length AC from > point A and label the new point F. > 7. Swing an arc of length AF with A as the origin that crosses the > extended line DE near point F. > Label the intersection G. > 8. Draw line GA and extend it to intersect the original arc from step > 2.
> Label the intersection E′. I haven't yet checked all this, but plan to. I'm already quite > Line AE′ is a good (within less than 1/1000 degree) trisection. > However this is only the start. Repeating the process from step 4 > using CE′ as the arc radius results in a trisection to within 10E-11 > degrees. Each subsequent iteration improves the trisection by several > orders of magnitude. ...and plan to study this construction very carefully. Regards, John Conway
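The numerical check Conway promises can be sketched directly. The coordinate setup below is mine (A at the origin, C at angle 0, unit first-arc radius); each loop pass runs Mark's steps 4 to 8, and further passes iterate with CE′ as the new arc radius, as the post describes:

```python
import math

def trisect_error(theta, s_frac=0.4, iters=1):
    """Error |angle(AE') - theta/3| after `iters` passes of the construction.

    A = (0,0); the first arc has radius 1, meeting the rays at C = (1, 0)
    and B = (cos theta, sin theta). s_frac fixes the arbitrary initial arc
    radius as a fraction of |CB| (step 4 says between 1/4 and 1/2)."""
    r = 1.0
    C = (r, 0.0)
    B = (r * math.cos(theta), r * math.sin(theta))
    cb = math.dist(C, B)
    s = s_frac * cb
    for _ in range(iters):
        # step 4: D on chord CB with |CD| = s; E on the arc with |CE| = s,
        # i.e. at angle phi, where the chord length is 2 r sin(phi/2)
        ux, uy = (B[0] - C[0]) / cb, (B[1] - C[1]) / cb
        D = (C[0] + s * ux, C[1] + s * uy)
        phi = 2.0 * math.asin(s / (2.0 * r))
        E = (r * math.cos(phi), r * math.sin(phi))
        # steps 6-7: G on line DE at distance 3r from A, on the F side;
        # parametrize P(t) = D + t (E - D) and take the negative root
        dx, dy = E[0] - D[0], E[1] - D[1]
        a = dx * dx + dy * dy
        b = 2.0 * (D[0] * dx + D[1] * dy)
        c = D[0] ** 2 + D[1] ** 2 - (3.0 * r) ** 2
        t = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        G = (D[0] + t * dx, D[1] + t * dy)
        # step 8: extend GA to the original arc, giving E'
        n = math.hypot(G[0], G[1])
        Ep = (-r * G[0] / n, -r * G[1] / n)
        # next pass uses CE' as the arc radius
        s = math.dist(C, Ep)
        phi_p = math.atan2(Ep[1], Ep[0])
    return abs(phi_p - theta / 3.0)

for k in (1, 2, 3):
    print(k, trisect_error(math.radians(60), iters=k))
```

In a quick run for a 60° angle, the first pass already lands within a few thousandths of a degree of 20°, and each further pass shrinks the error by several more orders of magnitude, consistent with Mark's claims.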
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=1088424","timestamp":"2014-04-17T21:36:17Z","content_type":null,"content_length":"3718","record_id":"<urn:uuid:6d1e2340-6c5c-4822-bc1c-1d334dc66348>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Hyperbolic sine: Series representations (subsection 06/01)
Generalized power series
  Expansions at z==z[0]
    For the function itself
  Expansions at z==0
    For the function itself
    For powers of the function
      For the second power
      For the third power
      For symbolical integer power
  Expansions at z==Pi i/2
    For the function itself
    For powers of the function
      For the second power
      For the third power
      For symbolical integer power
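A couple of the expansions indexed above can be reproduced with a CAS. This SymPy sketch (my tooling choice, not the Wolfram Functions site's) covers the expansion at z==0 for the function itself and for its second power:

```python
import sympy as sp

z = sp.symbols('z')

# expansion at z == 0, for the function itself:
# equals z + z**3/6 + z**5/120 + z**7/5040 up to O(z**8)
s1 = sp.series(sp.sinh(z), z, 0, 8).removeO()
print(s1)

# expansion at z == 0, for the second power:
# equals z**2 + z**4/3 + 2*z**6/45 up to O(z**8)
s2 = sp.series(sp.sinh(z) ** 2, z, 0, 8).removeO()
print(s2)
```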
{"url":"http://functions.wolfram.com/ElementaryFunctions/Sinh/06/01/ShowAll.html","timestamp":"2014-04-19T02:11:05Z","content_type":null,"content_length":"66922","record_id":"<urn:uuid:e5785a95-8a5b-4c78-b1bc-8009d7835e9d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Need some help with some linear algebra I'm having trouble doing questions 1 and 4 on the following sheet, would someone please help me get started, thanks! 1. consider the column vector ${\bold{p}}=[p_t, p_b]'$ the first element of which is the probability that the student travelled by train yesterday, and the second the probability that they travelled by bus yesterday. Then from the statement of the problem we know that: ${\bold{A}}\left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 1/3 \\ 2/3 \end{array} \right]$ So the first column of ${\bold{A}}$ is $[1/3, 2/3]'$. Similarly by considering its action on $[0,1]'$ we can conclude that the second column of ${\bold{A}}$ is $[2/5, 3/5]'$. So: ${\bold{A}} = \left[ \begin{array}{cc} 1/3 & 2/5 \\ 2/3 & 3/5 \end{array} \right]$ That's you started. RonL
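The reply's construction is easy to check numerically (a sketch of my own; NumPy is my choice, not the thread's). Column j of A holds the probabilities of today's modes given yesterday's mode j, so each column must sum to 1:

```python
import numpy as np

# transition matrix constructed in the reply
A = np.array([[1/3, 2/5],
              [2/3, 3/5]])

p_yesterday = np.array([1.0, 0.0])   # travelled by train yesterday
p_today = A @ p_yesterday
print(p_today)                        # approximately [1/3, 2/3]

# columns of a transition matrix each sum to 1
print(A.sum(axis=0))
```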
{"url":"http://mathhelpforum.com/advanced-algebra/38714-need-some-help-some-linear-algerbra-print.html","timestamp":"2014-04-16T06:23:16Z","content_type":null,"content_length":"6405","record_id":"<urn:uuid:1ba25926-ff25-48ba-b3ef-63cc3e0756e2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Differentiate, Do not simplify
November 16th 2008, 09:18 AM

How do I differentiate this problem? I tried doing it but it's hard, and I am kind of weak on fractions.

f(x) = 2x^5/9 - 5√x - 3/(x^2√x)

(the 5 is on top of the square root of x)

Please help. Thanks in advance.
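The OP's notation is ambiguous. One plausible reading (taking the 5 "on top of the square root" as the index of the root, i.e. the fifth root of x, and the last term as 3/(x^2·√x) = 3x^(-5/2)) gives f(x) = (2/9)x^5 - x^(1/5) - 3x^(-5/2), so the power rule yields f'(x) = (10/9)x^4 - (1/5)x^(-4/5) + (15/2)x^(-7/2). The sketch below is my own sanity check of that derivative under this assumed reading, not part of the thread:

```python
# Numerical check, under one reading of the problem's notation:
#   f(x) = (2/9)x^5 - x^(1/5) - 3*x^(-5/2)
# Power rule, term by term:
#   f'(x) = (10/9)x^4 - (1/5)x^(-4/5) + (15/2)x^(-7/2)

def f(x):
    return (2/9)*x**5 - x**(1/5) - 3*x**(-5/2)

def fprime(x):
    return (10/9)*x**4 - (1/5)*x**(-4/5) + (15/2)*x**(-7/2)

# Compare against a central finite difference at x = 2.
x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2*h)
print(fprime(x), numeric)   # the two values should agree closely
```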
Art and Craft of Mathematical Problem Solving

In The Art and Craft of Mathematical Problem Solving, award-winning Professor Paul Zeitz conducts you through scores of problems at all levels of difficulty. More than a bag of math tricks, these 24 lectures reveal strategies, tactics, and tools for overcoming mathematical obstacles in fields such as algebra, geometry, combinatorics, and number theory. This course is the perfect way to sharpen your mind, think more creatively, and tackle intellectual challenges you might never have imagined.
Independent Classes of Random Variables

Summary: The concept of independence for classes of events is developed in terms of a product rule. Recall that for a real random variable X, the inverse image of each reasonable subset M of the real line (i.e., the set of all outcomes which are mapped into M by X) is an event. Similarly, the inverse image of N by random variable Y is an event. We extend the notion of independence to a pair of random variables by requiring independence of the events they determine in this fashion. This condition may be stated in terms of the product rule P(X ∈ M, Y ∈ N) = P(X ∈ M)P(Y ∈ N) for all Borel sets M, N. The product rule holds for the distribution functions, F_XY(t, u) = F_X(t)F_Y(u) for all t, u, and similarly for density functions when they exist. This condition puts restrictions on the nature of the probability mass distribution on the plane. For a rectangle with sides M, N the probability mass in M × N is P(X ∈ M)P(Y ∈ N). Extension to general classes is simple and immediate.

The concept of independence for classes of events is developed in terms of a product rule. In this unit, we extend the concept to classes of random variables. Recall that for a random variable X, the inverse image X⁻¹(M) (i.e., the set of all outcomes ω ∈ Ω which are mapped into M by X) is an event for each reasonable subset M of the real line. Similarly, the inverse image Y⁻¹(N) is an event determined by random variable Y for each reasonable set N. We extend the notion of independence to a pair of random variables by requiring independence of the events they determine. More precisely:

A pair {X, Y} of random variables is (stochastically) independent iff each pair of events {X⁻¹(M), Y⁻¹(N)} is independent.
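The defining product rule P(X ∈ M, Y ∈ N) = P(X ∈ M)P(Y ∈ N) is easy to verify in a toy discrete case. The sketch below is a Python illustration of my own (the module's own examples use MATLAB tools introduced later): two independent fair dice, with one particular choice of sets M and N, checked by exhaustive enumeration.

```python
# Product rule P(X in M, Y in N) = P(X in M) P(Y in N) for an
# independent pair: X, Y are two fair dice, M = even values, N = {5, 6}.
from fractions import Fraction as F
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs
prob = F(1, 36)

M = {2, 4, 6}
N = {5, 6}

p_M  = sum(prob for x, y in outcomes if x in M)             # P(X in M)
p_N  = sum(prob for x, y in outcomes if y in N)             # P(Y in N)
p_MN = sum(prob for x, y in outcomes if x in M and y in N)  # joint

print(p_M, p_N, p_MN)   # 1/2, 1/3, 1/6
assert p_MN == p_M * p_N
```

The same check succeeds for every choice of M and N here, which is exactly what independence asserts.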
This condition may be stated in terms of the product rule

P(X ∈ M, Y ∈ N) = P(X ∈ M)P(Y ∈ N) for all Borel sets M, N.

Note that the product rule on the distribution function is equivalent to the condition that the product rule holds for the inverse images of a special class of sets {M, N} of the form M = (-∞, t] and N = (-∞, u]. An important theorem from measure theory ensures that if the product rule holds for this special class, it holds for the general class of {M, N}. Thus we may assert:

The pair {X, Y} is independent iff the product rule F_XY(t, u) = F_X(t)F_Y(u) holds for all t, u.

If there is a joint density function, then the relationship to the joint distribution function makes it clear that the pair is independent iff the product rule holds for the density. That is, the pair is independent iff

f_XY(t, u) = f_X(t)f_Y(u) for all t, u.

Example 2. Suppose the joint probability mass distribution induced by the pair {X, Y} is uniform on a rectangle with sides I1 = [a, b] and I2 = [c, d]. Since the area is (b - a)(d - c), the constant value of f_XY is 1/[(b - a)(d - c)]. Simple integration gives

f_X(t) = 1/(b - a) for a ≤ t ≤ b, and f_Y(u) = 1/(d - c) for c ≤ u ≤ d.

Thus it follows that X is uniform on [a, b], Y is uniform on [c, d], and f_XY(t, u) = f_X(t)f_Y(u) for all t, u, so that the pair {X, Y} is independent. The converse is also true: if the pair is independent with X uniform on [a, b] and Y uniform on [c, d], then the pair has uniform joint distribution on I1 × I2.

It should be apparent that the independence condition puts restrictions on the character of the joint mass distribution on the plane. In order to describe this more succinctly, we employ the following terminology. If M is a subset of the horizontal axis and N is a subset of the vertical axis, then the cartesian product M × N is the (generalized) rectangle consisting of those points (t, u) on the plane such that t ∈ M and u ∈ N.
The rectangle in Example 2 is the Cartesian product I1 × I2, consisting of all those points (t, u) such that a ≤ t ≤ b and c ≤ u ≤ d (i.e., t ∈ I1 and u ∈ I2).

Figure 1: Joint distribution for an independent pair of random variables.

We restate the product rule for independence in terms of cartesian product sets. Reference to Figure 1 illustrates the basic pattern. If M, N are intervals on the horizontal and vertical axes, respectively, then the rectangle M × N is the intersection of the vertical strip meeting the horizontal axis in M with the horizontal strip meeting the vertical axis in N. The probability X ∈ M is the portion of the joint probability mass in the vertical strip; the probability Y ∈ N is the part of the joint probability mass in the horizontal strip. The probability in the rectangle is the product of these marginal probabilities. This suggests a useful test for nonindependence, which we call the rectangle test. We illustrate with a simple example.

Figure 2: Rectangle test for nonindependence of a pair of random variables.

Suppose probability mass is uniformly distributed over the square with vertices at (1,0), (2,1), (1,2), (0,1). It is evident from Figure 2 that a value of X determines the possible values of Y and vice versa, so that we would not expect independence of the pair. To establish this, consider the small rectangle M × N shown on the figure. There is no probability mass in the region. Yet P(X ∈ M) > 0 and P(Y ∈ N) > 0, so that P(X ∈ M)P(Y ∈ N) > 0, but P((X, Y) ∈ M × N) = 0. The product rule fails; hence the pair cannot be stochastically independent.

Remark. There are nonindependent cases for which this test does not work. And it does not provide a test for independence. In spite of these limitations, it is frequently useful. Because of the information contained in the independence condition, in many cases the complete joint and marginal distributions may be obtained with appropriate partial information.
The following is a simple example. Suppose the pair {X, Y} is independent and each variable has three possible values. Four items of information are available; these values are shown in bold type on Figure 3. A combination of the product rule and the fact that the total probability mass is one is used to calculate each of the marginal and joint probabilities. For example, P(X = t1) = 0.2 and P(X = t1, Y = u2) = 1 - P(Y = u1) - P(Y = u2) = 0.3. Others are calculated similarly. There is no unique procedure for solution. And it has not seemed useful to develop MATLAB procedures to accomplish this.

A pair {X, Y} has the joint normal distribution iff the joint density is

f_XY(t, u) = (1 / (2π σ_X σ_Y √(1 - ρ²))) exp{ -(1 / (2(1 - ρ²))) [((t - μ_X)/σ_X)² - 2ρ((t - μ_X)/σ_X)((u - μ_Y)/σ_Y) + ((u - μ_Y)/σ_Y)²] }.

The marginal densities are obtained with the aid of some algebraic tricks to integrate the joint density. The result is that X ∼ N(μ_X, σ_X²) and Y ∼ N(μ_Y, σ_Y²). If the parameter ρ is set to zero, the result is

f_XY(t, u) = f_X(t)f_Y(u),

so that the pair is independent iff ρ = 0. The details are left as an exercise for the interested reader.

Remark. While it is true that every independent pair of normally distributed random variables is joint normal, not every pair of normally distributed random variables has the joint normal distribution.

We start with the distribution for a joint normal pair and derive a joint distribution for a normal pair which is not joint normal. The function

φ(t)φ(u) = (1/2π) exp(-(t² + u²)/2)

is the joint normal density for an independent pair (ρ = 0) of standardized normal random variables. Now define the joint density for a pair {X, Y} by

f_XY(t, u) = 2φ(t)φ(u) for tu > 0, and f_XY(t, u) = 0 otherwise,

that is, twice the independent standard normal density on the first and third quadrants and zero on the other two. Both X ∼ N(0, 1) and Y ∼ N(0, 1). However, they cannot be joint normal, since the joint normal distribution is positive for all (t, u).

Since independence of random variables is independence of the events determined by the random variables, extension to general classes is simple and immediate.

A class {X_i : i ∈ J} of random variables is (stochastically) independent iff the product rule holds for every finite subclass of two or more.

Remark.
The index set J in the definition may be finite or infinite. For a finite class {X_i : 1 ≤ i ≤ n}, independence is equivalent to the product rule

F_{X1,...,Xn}(t1, ..., tn) = F_{X1}(t1)F_{X2}(t2) ··· F_{Xn}(tn) for all (t1, ..., tn).

Since we may obtain the joint distribution function for any finite subclass by letting the arguments for the others be ∞ (i.e., by taking the limits as the appropriate t_i increase without bound), the single product rule suffices to account for all finite subclasses.

If a class {X_i : i ∈ J} is independent and the individual variables are absolutely continuous (i.e., have densities), then any finite subclass is jointly absolutely continuous and the product rule holds for the densities of such subclasses. Similarly, if each finite subclass is jointly absolutely continuous, then each individual variable is absolutely continuous and the product rule holds for the densities.

Frequently we deal with independent classes in which each random variable has the same marginal distribution. Such classes are referred to as iid classes (an acronym for independent, identically distributed). Examples are simple random samples from a given population, or the results of repetitive trials with the same distribution on the outcome of each component trial. A Bernoulli sequence is a simple example.

Consider a pair {X, Y} of simple random variables in canonical form

X = Σ_{i=1}^{n} t_i I_{Ai},   Y = Σ_{j=1}^{m} u_j I_{Bj}.

Since A_i = {X = t_i} and B_j = {Y = u_j}, the pair {X, Y} is independent iff each of the pairs {A_i, B_j} is independent. The joint distribution has probability mass at each point (t_i, u_j) in the range of W = (X, Y). Thus at every point on the grid,

P(X = t_i, Y = u_j) = P(X = t_i)P(Y = u_j).

According to the rectangle test, no gridpoint having one of the t_i or u_j as a coordinate has zero probability mass. The marginal distributions determine the joint distributions. If X has n distinct values and Y has m distinct values, then the n + m marginal probabilities suffice to determine the m·n joint probabilities.
Since the marginal probabilities for each variable must add to one, only (n - 1) + (m - 1) = m + n - 2 values are needed.

Suppose X and Y are in affine form. That is,

X = c_0 + Σ_{i=1}^{n} c_i I_{Ei},   Y = d_0 + Σ_{j=1}^{m} d_j I_{Fj}.

Since A_r = {X = t_r} is the union of minterms generated by the E_i, and B_s = {Y = u_s} is the union of minterms generated by the F_j, the pair {X, Y} is independent iff each pair of minterms {M_a, N_b} generated by the two classes, respectively, is independent. Independence of the minterm pairs is implied by independence of the combined class of events {E_i, F_j : 1 ≤ i ≤ n, 1 ≤ j ≤ m}.

Calculations in the joint simple case are readily handled by appropriate m-functions and m-procedures. In the general case of pairs of joint simple random variables we have the m-procedure jcalc, which uses information in matrices X, Y, and P to determine the marginal probabilities and the calculation matrices t and u. In the independent case, we need only the marginal distributions in matrices X, PX, Y, and PY to determine the joint probability matrix (hence the joint distribution) and the calculation matrices t and u. If the random variables are given in canonical form, we have the marginal distributions. If they are in affine form, we may use canonic (or the function form canonicf) to obtain the marginal distributions. Once we have both marginal distributions, we use an m-procedure we call icalc. Formation of the joint probability matrix is simply a matter of determining all the joint probabilities

P(X = t_i, Y = u_j) = P(X = t_i)P(Y = u_j).

Once these are calculated, formation of the calculation matrices t and u is achieved exactly as in jcalc.
X = [-4 -2 0 1 3];
Y = [0 1 2 4];
PX = 0.01*[12 18 27 19 24];
PY = 0.01*[15 43 31 11];
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
disp(P)                      % Optional display of the joint matrix
    0.0132    0.0198    0.0297    0.0209    0.0264
    0.0372    0.0558    0.0837    0.0589    0.0744
    0.0516    0.0774    0.1161    0.0817    0.1032
    0.0180    0.0270    0.0405    0.0285    0.0360
disp(t)                      % Calculation matrix t
    -4    -2     0     1     3
    -4    -2     0     1     3
    -4    -2     0     1     3
    -4    -2     0     1     3
disp(u)                      % Calculation matrix u
     4     4     4     4     4
     2     2     2     2     2
     1     1     1     1     1
     0     0     0     0     0
M = (t>=-3)&(t<=2);          % M = [-3, 2]
PM = total(M.*P)             % P(X in M)
PM = 0.6400
N = (u>0)&(u.^2<=15);        % N = {u: u > 0, u^2 <= 15}
PN = total(N.*P)             % P(Y in N)
PN = 0.7400
Q = M&N;                     % Rectangle MxN
PQ = total(Q.*P)             % P((X,Y) in MxN)
PQ = 0.4736
p = PM*PN
p = 0.4736                   % P((X,Y) in MxN) = P(X in M)P(Y in N)

As an example, consider again the problem of joint Bernoulli trials described in the treatment of Composite Trials.

Bill and Mary take ten basketball free throws each. We assume the two sequences of trials are independent of each other, and each is a Bernoulli sequence. What is the probability Mary makes more free throws than Bill?

Let X be the number of goals that Mary makes and Y be the number that Bill makes. Then X ∼ binomial (10, 0.8) and Y ∼ binomial (10, 0.85).

X = 0:10;
Y = 0:10;
PX = ibinom(10,0.8,X);
PY = ibinom(10,0.85,Y);
icalc
Enter row matrix of X-values  X        % Could enter 0:10
Enter row matrix of Y-values  Y        % Could enter 0:10
Enter X probabilities  PX              % Could enter ibinom(10,0.8,X)
Enter Y probabilities  PY              % Could enter ibinom(10,0.85,Y)
Use array operations on matrices X, Y, PX, PY, t, u, and P
PM = total((t>u).*P)
PM = 0.2738                  % Agrees with solution in Example 9 from "Composite Trials".
Pe = total((u==t).*P)        % Additional information is more easily
Pe = 0.2276                  % obtained than in the event formulation
Pm = total((t>=u).*P)        % of Example 9 from "Composite Trials".
Pm = 0.5014

Twelve world class sprinters in a meet are running in two heats of six persons each. Each runner has a reasonable chance of breaking the track record. We suppose results for individuals are independent. Compare the two heats for numbers who break the track record.

Let X be the number of successes in the first heat and Y be the number who are successful in the second heat. Then the pair {X, Y} is independent. We use the m-function canonicf to determine the distributions for X and for Y, then icalc to get the joint distribution.

c1 = [ones(1,6) 0];
c2 = [ones(1,6) 0];
P1 = [0.61 0.73 0.55 0.81 0.66 0.43];
P2 = [0.75 0.48 0.62 0.58 0.77 0.51];
[X,PX] = canonicf(c1,minprob(P1));
[Y,PY] = canonicf(c2,minprob(P2));
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
Pm1 = total((t>u).*P)        % Prob first heat has most
Pm1 = 0.3986
Pm2 = total((u>t).*P)        % Prob second heat has most
Pm2 = 0.3606
Peq = total((t==u).*P)       % Prob both have the same
Peq = 0.2408
Px3 = (X>=3)*PX'             % Prob first has 3 or more
Px3 = 0.8708
Py3 = (Y>=3)*PY'             % Prob second has 3 or more
Py3 = 0.8525

As in the case of jcalc, we have an m-function version icalcf. We have a related m-function idbn for obtaining the joint probability matrix from the marginal probabilities. Its formation of the joint matrix utilizes the same operations as icalc.

PX = 0.1*[3 5 2];
PY = 0.01*[20 15 40 25];
P = idbn(PX,PY)
P =
    0.0750    0.1250    0.0500
    0.1200    0.2000    0.0800
    0.0450    0.0750    0.0300
    0.0600    0.1000    0.0400

An m-procedure itest checks a joint distribution for independence. It does this by calculating the marginals, then forming an independent joint test matrix, which is compared with the original.
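For an independent pair, the joint probability matrix built by idbn is just the outer product of the marginals. The sketch below is a rough Python analogue of the idbn example above (an illustration of my own, not the m-function itself; MATLAB's display orders the rows by Y value, while here row j simply corresponds to PY[j]):

```python
# Outer-product construction of the joint matrix for an independent
# pair, mirroring the idbn example above: joint[j][i] = P(Y=u_j)*P(X=t_i).
PX = [0.3, 0.5, 0.2]            # 0.1*[3 5 2]
PY = [0.20, 0.15, 0.40, 0.25]   # 0.01*[20 15 40 25]

joint = [[py * px for px in PX] for py in PY]

for row in joint:
    print(["%.4f" % v for v in row])

# Every entry satisfies the product rule by construction,
# and the whole matrix sums to 1.
total = sum(sum(row) for row in joint)
print(total)
```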
We do not ordinarily exhibit the matrix P to be tested. However, this is a case in which the product rule holds for most of the minterms, and it would be very difficult to pick out those for which it fails. The m-procedure simply checks all of them.

idemo1                       % Joint matrix in datafile idemo1
P =
    0.0091    0.0147    0.0035    0.0049    0.0105    0.0161    0.0112
    0.0117    0.0189    0.0045    0.0063    0.0135    0.0207    0.0144
    0.0104    0.0168    0.0040    0.0056    0.0120    0.0184    0.0128
    0.0169    0.0273    0.0065    0.0091    0.0095    0.0299    0.0208
    0.0052    0.0084    0.0020    0.0028    0.0060    0.0092    0.0064
    0.0169    0.0273    0.0065    0.0091    0.0195    0.0299    0.0208
    0.0104    0.0168    0.0040    0.0056    0.0120    0.0184    0.0128
    0.0078    0.0126    0.0030    0.0042    0.0190    0.0138    0.0096
    0.0117    0.0189    0.0045    0.0063    0.0135    0.0207    0.0144
    0.0091    0.0147    0.0035    0.0049    0.0105    0.0161    0.0112
    0.0065    0.0105    0.0025    0.0035    0.0075    0.0115    0.0080
    0.0143    0.0231    0.0055    0.0077    0.0165    0.0253    0.0176
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent    % Result of test
To see where the product rule fails, call for D
disp(D)                      % Optional call for D
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     1     1     1     1     1     1     1
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     1     1     1     1     1     1     1
     0     0     0     0     0

Next, we consider an example in which the pair is known to be independent.

jdemo3                       % call for data in m-file
disp(P)                      % call to display P
    0.0132    0.0198    0.0297    0.0209    0.0264
    0.0372    0.0558    0.0837    0.0589    0.0744
    0.0516    0.0774    0.1161    0.0817    0.1032
    0.0180    0.0270    0.0405    0.0285    0.0360
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent        % Result of test

The procedure icalc can be extended to deal with an independent class of three random variables. We call the m-procedure icalc3. The following is a simple example of its use.
X = 0:4;
Y = 1:2:7;
Z = 0:3:12;
PX = 0.1*[1 3 2 3 1];
PY = 0.1*[2 2 3 3];
PZ = 0.1*[2 2 1 3 2];
icalc3
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter row matrix of Z-values  Z
Enter X probabilities  PX
Enter Y probabilities  PY
Enter Z probabilities  PZ
Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P
G = 3*t + 2*u - 4*v;         % W = 3X + 2Y - 4Z
[W,PW] = csort(G,P);         % Distribution for W
PG = total((G>0).*P)         % P(g(X,Y,Z) > 0)
PG = 0.3370
Pg = (W>0)*PW'               % P(W > 0)
Pg = 0.3370

An m-procedure icalc4 to handle an independent class of four variables is also available. Also, several variations of the m-function mgsum and the m-function diidsum are used for obtaining distributions for sums of independent random variables. We consider them in various contexts in other units.

In the study of functions of random variables, we show that an approximating simple random variable X_s of the type we use is a function of the random variable X which is approximated. Also, we show that if {X, Y} is an independent pair, so is {g(X), h(Y)} for any reasonable functions g and h. Thus if {X, Y} is an independent pair, so is any pair of approximating simple functions {X_s, Y_s} of the type considered. Now it is theoretically possible for the approximating pair {X_s, Y_s} to be independent, yet have the approximated pair {X, Y} not independent. But this is highly unlikely. For all practical purposes, we may consider {X, Y} to be independent iff {X_s, Y_s} is independent. When in doubt, consider a second pair of approximating simple functions with more subdivision points. This decreases even further the likelihood of a false indication of independence by the approximating random variables.

Since e^(-12) ≈ 6 × 10^(-6), we approximate X for values up to 4 and Y for values up to 6.
tuappr
Enter matrix [a b] of X-range endpoints  [0 4]
Enter matrix [c d] of Y-range endpoints  [0 6]
Enter number of X approximation points  200
Enter number of Y approximation points  300
Enter expression for joint density  6*exp(-(3*t + 2*u))
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent

The pair {X, Y} has joint density f_XY(t, u) = 4tu for 0 ≤ t ≤ 1, 0 ≤ u ≤ 1. It is easy enough to determine the marginals in this case. By symmetry, they are the same:

f_X(t) = ∫₀¹ 4tu du = 2t for 0 ≤ t ≤ 1, and likewise f_Y(u) = 2u for 0 ≤ u ≤ 1,

so that f_XY = f_X f_Y, which ensures the pair is independent. Consider the solution using tuappr and itest.

tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  100
Enter number of Y approximation points  100
Enter expression for joint density  4*t.*u
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent
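The grid-approximation idea behind tuappr and itest can be imitated in a few lines of Python (a rough sketch of my own, not the actual m-procedures): evaluate the density on a grid, normalize it into a joint probability matrix, form the marginals as row and column sums, and compare the joint matrix with the outer product of the marginals. For f_XY(t, u) = 4tu on the unit square the comparison succeeds, as itest reports above.

```python
# Grid approximation of the joint density f_XY(t,u) = 4tu on [0,1]^2,
# followed by an independence check in the spirit of itest.
n = 100
ts = [(i + 0.5) / n for i in range(n)]   # midpoints of X cells
us = [(j + 0.5) / n for j in range(n)]   # midpoints of Y cells

P = [[4 * t * u for t in ts] for u in us]
s = sum(sum(row) for row in P)
P = [[p / s for p in row] for row in P]  # normalize to a probability matrix

PX = [sum(P[j][i] for j in range(n)) for i in range(n)]  # column sums
PY = [sum(P[j][i] for i in range(n)) for j in range(n)]  # row sums

# Largest deviation from the product rule P[j][i] = PY[j]*PX[i]:
# essentially zero here, so the pair tests as independent.
err = max(abs(P[j][i] - PY[j] * PX[i])
          for i in range(n) for j in range(n))
print(err)
```

Because 4tu factors as (2t)(2u), the grid matrix factors exactly into its marginals; for a nonindependent density such as the diamond of the rectangle test, the deviation would be far from zero.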
Feasibility problems for recurring tasks on one processor
Results 1 - 10 of 28

ACM Transactions on Computer Systems, 1995. Cited by 54 (8 self).
This paper considers the use of lock-free shared objects within hard real-time systems. As the name suggests, lock-free shared objects are distinguished by the fact that they are not locked. As such, they do not give rise to priority inversions, a key advantage over conventional, lock-based object-sharing approaches. Despite this advantage, it is not immediately apparent that lock-free shared objects can be employed if tasks must adhere to strict timing constraints. In particular, lock-free object implementations permit concurrent operations to interfere with each other, and repeated interferences can cause a given operation to take an arbitrarily long time to complete. The main contribution of this paper is to show that such interferences can be bounded by judicious scheduling. This work pertains to periodic, hard real-time tasks that share lock-free objects on a uniprocessor. In the first part of the paper, scheduling conditions are derived for such tasks, for both static and dynamic pri...

In Proceedings of the 17th IEEE Real-Time Systems Symposium, 1996. Cited by 27 (16 self).
We present an integrated framework for developing real-time systems in which lock-free algorithms are employed to implement shared objects. There are two key objectives of our work. The first is to enable functionality for object sharing in lock-free real-time systems that is comparable to that in lock-based systems. Our main contribution toward this objective is an efficient approach for implementing multi-object lock-free operations and transactions. A second key objective of our work is to improve upon previously proposed scheduling conditions for tasks that share lock-free objects. When developing such conditions, the key issue is to bound the cost of operation "interferences". We present a general approach for doing this, based on linear programming. Most work on implementing shared objects in preemptive real-time uniprocessor systems has focused on using critical sections to ensure object consistency. The main problem that arises when using critical sections is ...

In Proceedings of the 19th IEEE Real-Time Systems Symposium, 1998. Cited by 21 (5 self).
The authors present a new scheme for implementing shared objects on a real-time system [1]. They assume that the system allocates processor time in discrete quanta, and that the quantum is large compared to the length of an object call. Under these conditions, most object calls are likely to execute without preemption, thus allowing the usage of simpler and more efficient access mechanisms.

University, 2003.

In Proceedings of EuroMicro Conference on Real-Time Systems, 2003. Cited by 18 (0 self).
We present a sufficient linear-time schedulability test for preemptable, asynchronous, periodic task systems with arbitrary relative deadlines, scheduled on a uniprocessor by an optimal scheduling algorithm. We show analytically and empirically that this test is more accurate than the commonly used density condition. We also present and discuss the results of experiments that compare the accuracy and execution time of our test with that of a pseudo-polynomial-time schedulability test presented previously for a restricted class of task systems in which utilization is strictly less than one.

Information and Computation, 1995. Cited by 17 (0 self).
We consider the problem of non-preemptively scheduling periodic and sporadic task systems on one processor using inserted idle times. For periodic task systems, we prove that the decision problem of determining whether a periodic task system is schedulable for all start times with respect to the class of algorithms using inserted idle times is NP-Hard in the strong sense, even when the deadlines are equal to the periods. We then show that if there exists a polynomial-time scheduling algorithm which correctly schedules a periodic task system T whenever T is feasible for all start times, then P = NP. We also prove that with respect to the same class of algorithms, the problem of determining whether there exist start times for which a periodic task system is feasible is also NP-Hard in the strong sense, even when the deadlines are equal to the periods. The second part of the paper concentrates on sporadic task systems and inserted idle times. It seems reasonable to suppose that to insert idl...

1997. Cited by 16 (0 self).
This work aims to establish the viability of lock-free object sharing in uniprocessor real-time systems. Naive usage of conventional lock-based object-sharing schemes in real-time systems leads to unbounded priority inversion. A priority inversion occurs when a task is blocked by a lower-priority task that is inside a critical section. Mechanisms that bound priority inversion usually entail kernel overhead that is sometimes excessive. We propose that lock-free objects offer an attractive alternative to lock-based schemes because they eliminate priority inversion and its associated problems. On the surface, lock-free objects may seem to be unsuitable for hard real-time systems because accesses to such objects are not guaranteed to complete in bounded time. Nonetheless, we present scheduling conditions that demonstrate the applicability of lock-free objects in hard real-time systems. Our scheduling conditions are applicable to schemes such as rate-monotonic scheduling and earliest-deadline-...

1999. Cited by 15 (6 self).
In this paper, we extend the determination of feasibility intervals to task sets with arbitrary deadlines, both in the synchronous and the asynchronous case. Meanwhile we also improve the arguments and results generally found in the literature.

In Proceedings of the 2nd International Conference on Knowledge Engineering and Knowledge Management (EKAW'2000), Juan-les-Pins, 2000. Cited by 9 (0 self).
Optimal online scheduling algorithms are known for sporadic task systems scheduled upon a single processor. Additionally, optimal online scheduling algorithms are also known for restricted subclasses of sporadic task systems upon an identical multiprocessor platform. The research reported in this article addresses the question of the existence of optimal online multiprocessor scheduling algorithms for general sporadic task systems. Our main result is a proof of the impossibility of optimal online scheduling for sporadic task systems upon a system comprised of two or more processors. The result is shown by finding a sporadic task system that is feasible on a multiprocessor platform but cannot be correctly scheduled by any possible online, deterministic scheduling algorithm. Since the sporadic task model is a subclass of many more general real-time task models, the nonexistence of optimal scheduling algorithms for sporadic task systems implies nonexistence for any model which generalizes the sporadic task model. The sporadic task model [18, 16] has received tremendous research attention over the years for its usefulness in modeling recurring processes for hard-real-time systems. A sporadic task τi = (ei, di, pi) is characterized by a worst-case execution requirement ei, a (relative) deadline di, and a minimum inter-arrival separation pi, which

2005. Cited by 8 (5 self).
In this report, we discuss a metric that characterizes the load of a sporadic task system. We give an exact, exponential-time algorithm that determines a task system's load by essentially simulating the execution of the task system. In addition, we also give an algorithm that can determine the load of a task system within an arbitrarily small threshold ε > 0. While the worst-case time complexity of the approximation is still possibly exponential, we have empirically observed that this algorithm generally provides a very significant reduction in the time that we must simulate the task system to obtain the load. Additionally, we provide proofs of correctness for our algorithms.
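The "density condition" mentioned in the EuroMicro 2003 abstract is a classical sufficient (not necessary) uniprocessor test for sporadic tasks in the model quoted above, τi = (ei, di, pi): the set is EDF-schedulable if Σ ei / min(di, pi) ≤ 1. The sketch below is my own illustration of that baseline test, with hypothetical function names; it is not the linear-time test of the cited paper, which the abstract says is more accurate than this condition.

```python
# Sufficient density test for preemptive EDF on one processor:
# a sporadic task set {(e_i, d_i, p_i)} is schedulable if
#   sum_i e_i / min(d_i, p_i) <= 1.
# The test is sufficient but not necessary: it can reject feasible sets.

def density(tasks):
    """tasks: list of (e, d, p) = (execution, deadline, min. separation)."""
    return sum(e / min(d, p) for e, d, p in tasks)

def passes_density_test(tasks):
    return density(tasks) <= 1.0

# Density 1/4 + 2/8 + 1/10 = 0.6 <= 1: accepted.
accepted = passes_density_test([(1, 4, 5), (2, 8, 8), (1, 10, 10)])
# Density 3/4 + 2/5 = 1.15 > 1: rejected (though it might still be feasible).
rejected = passes_density_test([(3, 4, 10), (2, 5, 10)])
print(accepted, rejected)
```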
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1163687","timestamp":"2014-04-19T21:00:08Z","content_type":null,"content_length":"38822","record_id":"<urn:uuid:cfb7ef76-a6e4-4cf7-bc17-40361c86fbe0>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler Mascheroni Constant, Curious?
I found this formula for the Euler-Mascheroni constant $\gamma$. Just wondering whether such a formula already exists in the literature? Also, wanted to know whether there are formulas that converge faster than this? $$\gamma = \sum_{k = 1}^{\infty} \frac{1}{2^k k} - \sum_{k = 1}^{\infty} \frac{\zeta \left( 2 k + 1 \right)}{2^{2 k} \left( 2 k + 1 \right)}$$ UPDATE: Thanks for your reply quid. I just came across this while doing some calculations with the zeta function. The calculations are a bit too long to be posted, but in short it derives from $$\zeta(s) = \frac{s+1}{2(s-1)} + \frac{s}{8} - \frac{s(s+1)}{2\pi^2}\int_1^\infty \frac{(\tan^{-1}\cot(\pi x))^2}{x^{s+2}}dx$$
Could you please recheck the formula you state. I have no good access to a CAS at the moment but the first sum seems to be something like 0.693 and the latter 0.177 so the diff is quite a bit smaller than 0.57. – quid Aug 14 '12 at 12:08
I am using Mathematica, and "Sum[1./(k*2^k) - Zeta[2*k + 1]/((2 k + 1)*2^(2*k)), {k, 1, 100}]" gives me 0.5772156649015329 – Roupam Ghosh Aug 14 '12 at 12:46
Sorry for the preceding comment. You are right. I used Wolfram Alpha and got the other values, but I now retried and got what you say. Although I tried to be careful, in all likelihood I just somehow made an error before; sorry again for the noise. – quid Aug 14 '12 at 12:59
@Roupam Do you have a reference or online resource for details about the integral representation of $\zeta(s)$ given above? – pbs Feb 4 at 23:35
In his 1887 paper Table des valeurs des sommes $S_k = \sum_{1}^\infty n^{-k}$ (Acta Mathematica 10 (1887), 299-302; volume available online), Stieltjes used almost exactly this formula to compute Euler's constant to 33 decimal places.
Of course as quid points out you need to know the zeta values to do this, but the main point of this paper was to compute those values, so he was just getting Euler's constant as a corollary. He uses a slight variant of the formula, with $\zeta(2k+1)-1$ in place of $\zeta(2k+1)$ for faster convergence (and a corresponding adjustment in the other term, which becomes $1+\log 2 - \log 3$). He derives the formula by taking the Taylor series expansion of $\log \Gamma(1+x)$ and using it to compute $\log \Gamma(1+1/2) - \log \Gamma(1-1/2)$.
Wow... I will look at that paper for sure! :) Thanks. – Roupam Ghosh Aug 14 '12 at 12:47
Yep... pretty close to what I got... Here's the link for others to see... archive.org/stream/actamathematica24lefgoog#page/n308/mode/2up – Roupam Ghosh Aug 14 '12 at 14:31
For the record, here is the link for Legendre's calculation using Euler-Maclaurin summation: gallica.bnf.fr/ark:/12148/bpt6k1101484/f440.image, Traité des fonctions elliptiques et des intégrales eulériennes, Paris, (1825-1828), vol. 2, p. 434 – Papiro Aug 14 '12 at 19:23
The method used for the recent record computations of Euler-Mascheroni is (a refinement of) a classical algorithm due to Brent-McMillan. This algorithm is $O(n (\log n)^3)$. Now, for your formula (btw, could you say where/how you found it?), it is not quite clear to me what you are asking. The series involves $\zeta$ values (not only elementary things)! So, if one were to use this to compute approximations of $\gamma$ one would need all kinds of $\zeta$ values at odd naturals, and the early ones to essentially the same precision as one seeks $\gamma$. And, also from a theoretical point of view, this makes it unclear what type of expressions you would admit as 'competition'. To continue on the computation bit, if one were to compute/estimate $\zeta(3)$ naively, this would already make this worse than the above-mentioned method.
(One can also compute $\zeta(3)$ faster; in the recent record computations for it, an $O(n (\log n)^3)$ algorithm was also used (though it appears to be simpler to compute $\zeta(3)$ than $\gamma$); but this is only $\zeta(3)$. And it re-raises the issue that it is unclear how to treat the $\zeta$ values when interpreting your question.) An overview of the algorithms used to calculate these and related constants to high precision in practice can also be found on that site.
+1 For the links. I am just wanting to know whether this formula is unique and is of any use from a computational perspective. – Roupam Ghosh Aug 14 '12 at 10:50
You are welcome, and thank you for the added information. – quid Aug 14 '12 at 12:23
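For what it's worth, the formula quoted in the question is easy to check numerically without a CAS. The helper `zeta` below is a throwaway Euler-Maclaurin evaluation written for this illustration (not something from the thread), and the parameters (N = 100, 60 terms) are arbitrary but comfortable for double precision:

```python
from math import fsum

def zeta(s, N=100):
    """Riemann zeta for real s > 1 via a short Euler-Maclaurin tail."""
    head = fsum(n ** -s for n in range(1, N))
    tail = (N ** (1 - s) / (s - 1) + 0.5 * N ** -s
            + s * N ** (-s - 1) / 12.0
            - s * (s + 1) * (s + 2) * N ** (-s - 3) / 720.0)
    return head + tail

def euler_gamma(terms=60):
    """gamma = sum 1/(2^k k) - sum zeta(2k+1) / (2^{2k} (2k+1))."""
    s1 = fsum(1.0 / (2 ** k * k) for k in range(1, terms + 1))
    s2 = fsum(zeta(2 * k + 1) / (4 ** k * (2 * k + 1))
              for k in range(1, terms + 1))
    return s1 - s2
```

With 60 terms the first sum's tail is below 2^-60, and `euler_gamma()` agrees with γ = 0.57721566490153... to roughly 13 decimal places, consistent with the Mathematica sum quoted in the comments.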
{"url":"https://mathoverflow.net/questions/104667/euler-mascheroni-constant-curious","timestamp":"2014-04-18T11:12:39Z","content_type":null,"content_length":"67057","record_id":"<urn:uuid:87c13bf5-9e01-442f-8059-6cdf4c3ed2f4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Dissecting a Dodecagram {12/2} into a Hexagram {6/2}
This Demonstration shows a dissection of a dodecagram {12/2} into a hexagram {6/2}. The notation {n/k} means join every k-th vertex of a regular n-gon.
G. N. Frederickson, Dissections: Plane & Fancy, New York: Cambridge Univ. Press, 2002, p. 178.
Contributed by: Izidor Hafner. Based on work by: Greg N. Frederickson.
{"url":"http://demonstrations.wolfram.com/DissectingADodecagram122IntoAHexagram62/","timestamp":"2014-04-21T02:39:36Z","content_type":null,"content_length":"45368","record_id":"<urn:uuid:3796201b-2fce-4c58-b68c-bc2dfb001ced>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Spetsakis and Yiannis Aloimonos, "Optimal Visual Motion Estimation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994. Cited by 140 (6 self): Modelling the push broom sensors commonly used in satellite imagery is quite difficult and computationally intensive due to the complicated motion of the orbiting satellite with respect to the rotating earth. In addition, the mathematical model is quite complex, involving orbital dynamics, and hence is difficult to analyze. In this paper, a simplified model of a push broom sensor (the linear push broom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting push broom model. Methods are given for solving the major standard photogrammetric problems for the linear push broom sensor. Simple non-iterative solutions are given for the following problems: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; scene reconstruction given image correspondences and ground-control points. In addition, the linear push broom model leads to theoretical insights that will be approximately valid for the full model as well. The epipolar geometry of linear push broom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the essential matrix of perspective cameras is shown to exist for linear push broom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear push broom cameras.
Keywords: push broom sensor, satellite image, essential matrix, photogrammetry, camera model. The research described in this paper has been supported by DARPA Contract #MDA972-91-C-0053. 1 Real push broom sensors are commonly used in satellite cameras, notably the SPOT satellite for the generatio... - In Proc. DARPA Image Understanding Workshop, 1993. Cited by 9 (3 self): This paper describes a pair of projectivity invariants of four lines in three-dimensional projective space, P3. Invariants are derived in both algebraic and geometric terms, and the connection between the two ways of defining the invariants is established. Since a count of the number of degrees of freedom would predict the existence of a single invariant, rather than the two that are shown to exist, an isotropy of the four lines must exist. The nature of this isotropy is investigated. It is shown that once the epipolar geometry is known, the invariants of four lines may be computed from the images of the four lines in two distinct views with uncalibrated cameras. An example with real images is computed to show that the invariants are effective in distinguishing different geometrical configurations of lines.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3641400","timestamp":"2014-04-23T09:33:23Z","content_type":null,"content_length":"16560","record_id":"<urn:uuid:e547e14a-1d12-4a46-bfb4-9d1be45d9d0c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Digital Waveguide Models
In this chapter, we summarize the basic principles of digital waveguide models. Such models are used for efficient synthesis of string and wind musical instruments (and tonal percussion, etc.), as well as for artificial reverberation. They can be further used in modal synthesis by efficiently implementing a quasi-harmonic series of modes in a single ``filtered delay loop''. We begin with the simplest case of the infinitely long ideal vibrating string, and the model is unified with that of acoustic tubes. The resulting computational model turns out to be a simple bidirectional delay line. Next we consider what happens when a finite length of ideal string (or acoustic tube) is rigidly terminated on both ends, obtaining a delay-line loop. The delay-line loop provides a basic digital-waveguide synthesis model for (highly idealized) stringed and wind musical instruments. Next we study the simplest possible excitation for a digital waveguide string model, which is to move one of its (otherwise rigid) terminations. Excitation from ``initial conditions'' is then discussed, including the ideal plucked and struck string. Next we introduce damping into the digital waveguide, which is necessary to model realistic losses during vibration. This much modeling yields musically useful results. Another linear phenomenon we need to model, especially for piano strings, is dispersion, so that is taken up next. Following that, we consider general excitation of a string or tube model at any point along its length. Methods for calibrating models from recorded data are outlined, followed by modeling of coupled waveguides, and simple memoryless nonlinearities are introduced and analyzed.
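The delay-line loop for a rigidly terminated string can be sketched in a few lines. The fragment below is a Karplus-Strong-style plucked string, the simplest instance of the digital waveguide string model described above; it is an illustrative sketch, not code from this book, and the sample rate, pitch, and loss constants are arbitrary:

```python
import random
from collections import deque

def plucked_string(f0=220.0, fs=44100, dur=0.25, loss=0.996, seed=0):
    """Rigidly terminated ideal string modeled as one delay-line loop.
    The loop length N = fs/f0 samples sets the fundamental frequency."""
    rng = random.Random(seed)
    n = int(fs / f0)
    # Excitation from "initial conditions": random displacement = idealized pluck.
    line = deque(rng.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(fs * dur)):
        first = line.popleft()
        out.append(first)
        # Two-point average = simple loop filter for frequency-dependent
        # losses; `loss` adds an overall per-sample decay.
        line.append(loss * 0.5 * (first + line[0]))
    return out
```

The two-point average inside the loop acts as the loop filter: it damps high frequencies faster than low ones, which is why the initially noisy pluck decays into a nearly sinusoidal tone.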
About the Author: Julius Orion Smith III. Julius Smith's background is in electrical engineering (BS Rice 1975, PhD Stanford 1983). He is presently Professor of Music and Associate Professor (by courtesy) of Electrical Engineering at Stanford's Center for Computer Research in Music and Acoustics (CCRMA), teaching courses and pursuing research related to signal processing applied to music and audio systems.
{"url":"http://www.dsprelated.com/dspbooks/pasp/Digital_Waveguide_Models.html","timestamp":"2014-04-18T10:35:04Z","content_type":null,"content_length":"154364","record_id":"<urn:uuid:b138f710-d5ab-4699-b791-e7382ea31a5f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
solution set calculator
Author Message
toshhol18 Posted: Saturday 30th of Dec 09:51 Hey friends, I have just completed one week of my high school, and am getting a bit upset about my solution set calculator homework. I just don't seem to grasp the topics. How can one expect me to do my homework then? Please help me.
espinxh Posted: Monday 01st of Jan 09:08 Can you be a bit more clear about solution set calculator? I could conceivably help you if I knew some more. A good quality software can help you solve your problem instead of paying for a math tutor. I have tried many math programs and guarantee that Algebrator is the best program that I have stumbled onto. This Algebrator will solve any math problem written from your book and it also clarifies every step of the solution; you can exactly write it down as your homework. However, this Algebrator should also help you to learn algebra rather than only use it to copy answers.
DoniilT (From: Norway) Posted: Wednesday 03rd of Jan 07:54 I can confirm that. Algebrator is the ultimate program for solving algebra assignments. Been using it for a while now and it keeps on amazing me. Every homework that I type in, Algebrator gives me a perfect answer to it. I have never enjoyed learning algebra homework on like denominators, function range and logarithms so much before. I would suggest it for
MichMoxon Posted: Thursday 04th of Jan 07:23 Algebrator is a user friendly software and is definitely worth a try. You will also find lots of exciting stuff there. I use it as reference software for my math problems and can say that it has made learning math more enjoyable.
GoxnFisx Posted: Friday 05th of Jan 10:50 Friends, thanks a lot for the replies that you have given. I just had a look at the Algebrator available at: http://www.linear-equation.com/using-augmented-matrices-to-solve-systems-of-linear-equations.html. The interesting part that I liked was the money back guarantee that they are extending there. I went ahead and bought Algebrator. It is really easy to handle and proves to be a noteworthy tool for Pre Algebra.
Vnode (From: New Mexico, USA) Posted: Friday 05th of Jan 12:55 You can get more details at: http://www.linear-equation.com/linear-least-squares-fit-mapping-method-for-information-retrieval-from-natural-language-texts.html. Not only are you guaranteed satisfaction but also there is a money-back guarantee if you are unhappy with the program. Of course you will get your answer and the way to go about it. Best wishes.
{"url":"http://www.linear-equation.com/linear-equation-graph/ratios/solution-set-calculator.html","timestamp":"2014-04-19T03:05:37Z","content_type":null,"content_length":"38352","record_id":"<urn:uuid:e62d0b02-a504-4152-b89c-ae8e115eea7b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Marlborough, MA Prealgebra Tutor Find a Marlborough, MA Prealgebra Tutor ...I do well on standardized tests (I scored a perfect 170 on the GRE general test quantitative section). However, I also understand what makes SAT questions difficult. I have ample experience tutoring math, both as a graduate instructor of basic math courses, and as a high school tutor. I have tutored a dozen students in SAT math specifically. 29 Subjects: including prealgebra, reading, calculus, English ...My work won a $1000 MIT Biomedical Engineering Society's Award for Research Excellence, awarded to only five MIT undergraduate and master's students each year; I'm also second author on the peer-reviewed article we published. I studied literature as an undergraduate at MIT and Harvard and took m... 47 Subjects: including prealgebra, English, chemistry, reading ...I enjoy working with students who are motivated but need a little help to understand the subject at hand. I'm very good at explaining hard concepts or problems using easy to understand and every day examples. I'm patient with my students and experienced in helping them improve their grades in s... 11 Subjects: including prealgebra, calculus, geometry, algebra 1 ...I graduated from MIT and am currently working on a start-up part time and at MIT as an instructor. I miss the one-on-one academic environment and am keen to share some of my knowledge. I would like to teach math (through BC calculus), science (physics, chemistry, biology, environmental), engine... 63 Subjects: including prealgebra, chemistry, reading, calculus ...I really enjoy history and I bring that enthusiasm to my tutoring. Portuguese While living in Europe, I had the opportunity to study Portuguese, and had exposure to both European and Brazilian Portuguese. I am more familiar with Brazilian Portuguese, but I am familiar with European Portuguese too. 
14 Subjects: including prealgebra, Spanish, Italian, grammar
{"url":"http://www.purplemath.com/Marlborough_MA_prealgebra_tutors.php","timestamp":"2014-04-17T04:06:12Z","content_type":null,"content_length":"24327","record_id":"<urn:uuid:fdb70b77-d7c5-4bf3-a869-3f62c4ce9f0e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
The Herd of Camels Puzzle - Solution The Puzzle: An Arab Sheik, finding himself about to die, called his sons about him and said: "Divide my camels among you in the proportion of one-half of the herd to the eldest son, the second son one-third, and to the youngest son one-ninth." Thereupon the oldest son cried: "O, my father, one-half, one-third, and one-ninth do not constitute a whole. To whom, therefore, shall the remainder of the herd be given?" "To any poor man who may be standing by when the division is made," replied the Sheik, who thereupon died. When the herd was collected a new difficulty arose. The number of the camels could not be divided either by two or three or nine. While the brothers were disputing, a poor but crafty Bedouin, standing by with his camel, exclaimed, "Behold, I will sell you my beast for ten pieces of silver, so that you may then divide the herd." Seeing that the addition of one camel would solve the difficulty, the brothers jumped at the offer, and proceeded to divide the herd, but when each had received his allotted portion there yet remained one camel. "I am the poor man standing by." Said the crafty Bedouin, and gaily mounting the camel, he rode away, with the ten pieces of silver in his turban. Now, how many camels were in the Sheik's herd? Our Solution: The camels could be divided exactly according to the Sheik's will only if it were a multiple of 2,3 and 9 i.e. 18,36,54,72,... But according to the brothers, they needed one more camel to divide it according to the Sheik's will; so the Sheik had 17,35,52,71,... camels. But "only 18" (18/2=9, 18/3=6, 18/9=2 so that 9+6+2=17) leaves remainder one (18-17=1). So there were 17 camels in the Sheik's herd. But, the problem is that the number of camels 9,6,2 that the brothers received are not yet in the proportion of one-half, one-third and one-ninth. This is the paradoxical situation. 
The camels could have been divided exactly according to the Sheik's will only if the herd had numbered 18, 36, 54, 72, ... camels. Probably the Sheik was not good at mathematics.
Puzzle Author: Loyd, Sam
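The fractions involved are easy to check mechanically (a quick illustration added here, not part of the original puzzle page):

```python
from fractions import Fraction

shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 9)]
print(sum(shares))            # 17/18 -- the will leaves 1/18 of the herd unassigned

herd = 18                     # the 17 camels plus the Bedouin's own camel
parts = [int(herd * s) for s in shares]
print(parts, sum(parts))      # [9, 6, 2] 17 -- whole shares, one camel left over
```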
{"url":"http://www.mathsisfun.com/puzzles/the-herd-of-camels-solution.html","timestamp":"2014-04-18T15:40:21Z","content_type":null,"content_length":"7845","record_id":"<urn:uuid:f8f8990b-414b-44d7-9f4c-eb7a7f400a49>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
[Python-Dev] Comparing heterogeneous types Guido van Rossum guido at python.org Wed Jun 2 13:08:46 EDT 2004 > Unfortunately, long/float comparison doesn't work quite correctly right now: > >>> n = 1 > >>> for i in range(2000): n += n > >>> n == 0.0 > OverflowError: long int too large to convert to float > One strategy for solving the problem is to observe that for every > floating-point implementation, there is a number N with the property that if > x >= N, converting x from float to long preserves information, and if x <= > N, converting x from long to float preserves information. Therefore, > instead of unconditionally converting to float, the conversion's direction > should be based on the value of one of the comparands. > Of course, such comparisons can be made faster by doing a rough range check > first, and doing the actual conversion only if the number of bits in the > long is commensurate with the exponent of the float. Do you think you can come up with a patch, or at least a description of an algorithm that someone without a wizard level understanding of the issues could implement? --Guido van Rossum (home page: http://www.python.org/~guido/)
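For reference, the lossless-direction idea can be sketched with today's standard library: every finite float is an exact dyadic rational, so converting the float (never the long) loses no information when done via `fractions.Fraction`. This is a didactic sketch, not the comparison routine CPython eventually adopted, which avoids the conversion cost with the rough range check described above:

```python
import math
from fractions import Fraction

def exact_eq(n: int, x: float) -> bool:
    """Exact int/float equality: convert the float, not the int,
    since every finite float is exactly representable as a Fraction."""
    if math.isnan(x) or math.isinf(x):
        return False
    return Fraction(n) == Fraction(x)

n = 1
for _ in range(2000):
    n += n                      # n = 2**2000, far beyond float overflow
assert exact_eq(n, 0.0) is False            # no OverflowError here
assert exact_eq(2 ** 53, float(2 ** 53)) is True
```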
{"url":"https://mail.python.org/pipermail/python-dev/2004-June/045141.html","timestamp":"2014-04-16T18:04:12Z","content_type":null,"content_length":"3883","record_id":"<urn:uuid:49688863-cb12-44c1-9da3-0c9b4505756f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Flaw in commonly used bash random seed method
Matthijs wrote:
> I hope nobody generates passwords with ANY kind of pseudo-RNG.
This is the main point, anyway.
> By the way, if the random function can only generate numbers between 0
> and 32767, won't 2 bytes be enough then? The algorithm will perform a
> modulo calculation anyway, so 4 bytes won't really add anything. Of
> course, it is much better than only one byte.
You have made the assumption that the size of the seed matches the size of the output values. In fact, this is highly unlikely to be correct. In the standard C library (on which this implementation is almost certainly based), the seed is a full 32 bits even though the output is 15. That's because the seed is the internal state of the generator, and if it only had the same number of bits as the output, then the next output from the generator could be wholly determined by knowing the current output, and the generator would only be able to output 32768 numbers before the sequence repeated. Think of the extra bits as selecting one of 2^17 different permutations of the 2^15 possible output values; if the generator didn't have more internal state than it puts in its output, there would only ever be one constant permutation, the seed would choose your starting point in that permutation, and each output number you see generated would always be followed by the exact same next one every time.
Can't think of a witty .sigline today....
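The hidden-state point can be demonstrated with a toy generator. The constants below are the classic C-library LCG values, used purely for illustration (this is not bash's actual `$RANDOM` implementation): because the 15-bit output exposes only part of the 31-bit state, the same output value can be followed by different successors, which would be impossible if the state were no wider than the output.

```python
def lcg(seed):
    """Toy LCG with 31 bits of state but only 15 bits of output."""
    state = seed & 0x7FFFFFFF
    while True:
        state = (state * 1103515245 + 12345) & 0x7FFFFFFF
        yield (state >> 16) & 0x7FFF    # output exposes part of the state

# Pigeonhole: among 40001 seeds, at least two must produce the same first
# output; because their hidden states differ, their next outputs diverge.
successor = {}
diverged = False
for s in range(40001):
    g = lcg(s)
    o1, o2 = next(g), next(g)
    if o1 in successor and successor[o1] != o2:
        diverged = True             # equal outputs, different futures
        break
    successor.setdefault(o1, o2)
print(diverged)
```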
{"url":"http://archive.cert.uni-stuttgart.de/bugtraq/2006/04/msg00091.html","timestamp":"2014-04-18T00:32:40Z","content_type":null,"content_length":"6382","record_id":"<urn:uuid:b8984ed1-0eeb-4833-b98c-406d2aa95293>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
BMC Syst Biol. 2007; 1: 39.
Modeling gene expression regulatory networks with the sparse vector autoregressive model
To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, some statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) originating from a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in Bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques; therefore, the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive model as a solution to these problems. We have applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, by applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes.
Moreover, it is possible to control for false positives, a significant advantage when compared to other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well known transcription factor targets. The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models. In order to understand cell functioning as a whole, it is necessary to describe, at the molecular level, how gene products interact with each other. This could help to identify new target genes and to design new drugs for treatment of several diseases [1-3]. Due to the high number of genes involved in these networks, activating or suppressing feedback loops, the dynamics of their interactions is very complex and difficult to infer. With the development of high-throughput technologies, such as DNA microarrays, it is possible to simultaneously analyze the expression of up to thousands of genes and to construct gene networks based on inferences over gene expression data. Several methods to model genetic networks were proposed in the last few years, such as the Bayesian networks [4-8], Structural Equation Models [9], Probabilistic Boolean Networks [10-12], Graphical Gaussian Models [13], Fuzzy controls [14], and Differential Equations [15]. 
Although these methods allow modeling several regulatory networks for which biological information is available, it is difficult to determine the flow of information when there is no a priori information. In addition, all of these methods face the same problem, i.e., the number of samples (microarrays) is very small when compared to the high number of variables (genes) (ill-posed problems, related to the "curse of dimensionality") [16]. Therefore, it is difficult to infer large-scale networks using traditional statistical methods, limiting this inference to only a few genes. As a consequence, modeling and simulating large networks becomes a field of intensive and challenging research. Some methods have been developed to overcome this problem. For example, Barrera et al. use mutual information for dimension reduction [17], with mutual information between genes being computed and then the highest mutual informations selected. However, this approach is not founded on a statistical test, rendering it very difficult to interpret and identify the actual edges of the network. Therefore, the choice of the threshold parameter to determine whether or not there is a connection becomes quite subjective. An alternative for modeling the large number of genes is to construct modules (clusters), where each module is composed of several genes, and then to construct the module-module networks [18-20]. A limitation of these methods is that the result is still not a gene-gene network; therefore, interpretation of the meaning of each module is difficult, varying with each cluster. Here we present the Sparse Vector Autoregressive model to approach these problems.
This method was first applied, with success, in neurosciences, to estimate functional connectivity between several brain areas [21]. Here, we present the Sparse Vector Autoregressive model based on LASSO penalized regression for variable selection, to reduce the dimensionality of large gene networks. In cases of multiple time series, a first approach to infer connectivity would be to apply techniques such as multivariate autoregressive modeling (VAR), which allows identification of connectivity by combining graphical modeling methods with the concept of Granger causality [22]. This is an attractive approach since it does not require a priori network information. Unfortunately, the current time-series methods can be applied only in cases in which the length of the time series T is much larger than n, the number of genes, which is exactly the reverse of the situation commonly found in microarray experiments, for which relatively short time series are measured over tens of thousands of genes. The Sparse Vector Autoregressive model (SVAR), on the other hand, estimates the network in a two-stage process involving (i) penalized regression with the LASSO [23] and (ii) pruning of unlikely connections by means of the False Discovery Rate (FDR) developed by [24]. Extensive simulations were performed with artificial gene networks having scale-free-like topologies [25] and stable dynamics. These simulations show that the detection efficiency of connections of the proposed procedure is quite high. An application of the method to actual HeLa cell line data was illustrated by the identification of well-known transcription factor targets and circuitries involving important genes in cancer development. Results and discussion In order to measure the performance of SVAR, intensive simulations were carried out. For this purpose, we simulated hundreds of networks with scale-free-like topology, since metabolic networks were described as scale-free graphs by [25].
In our case, the graph nodes represent the genes whereas the edges represent the Granger-causal relationships. For details of these artificial regulatory networks, see the Methods section. The number of genes was kept at n = 100 and we varied the sample size, i.e., the time-series length (T = 25, 50, 75, 100, 125, 150, 175 and 200 for SVAR, and T = 110, 125, 150, 175 and 200 for VAR). Notice that, for VAR of order one, m = T - 1 must be larger than n. For each time-series length, we performed 100 simulations, i.e., 100 different scale-free-like graphs were generated. The starting condition of each scale-free-like graph was two fully connected genes (z_0 = 2, z_edges = 2, where z_0 is the initial number of genes and z_edges is the initial number of edges), in other words, two nodes with two edges, one pointing to the other. The number of edges added at each iteration is z = 1; therefore, each network is composed of 100 genes and 100 edges out of 10,000 possible edges (the maximum number of possible edges is n^2). Notice that, since the goal is to construct a network with n = 100 genes, we set the number of iterations T_step = n - z_0 = 98. In Figure 1, an example of the artificially generated gene expression regulatory network is illustrated.
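The growth procedure just described (start from z_0 = 2 mutually connected genes, then add one gene with z = 1 edge per iteration) can be sketched as follows. The preferential-attachment rule and the orientation of the new edge are assumptions made for illustration; the paper's exact construction is in its Methods section:

```python
import random

def scale_free_like(n=100, z0=2, z=1, seed=0):
    """Grow a directed scale-free-like graph: z0 = 2 mutually connected
    starting genes, then one new gene with z = 1 edge per iteration,
    giving 2 + (n - z0) = 100 edges for n = 100."""
    rng = random.Random(seed)
    edges = [(0, 1), (1, 0)]            # two fully connected starting genes
    degree = [2, 2]
    for new in range(z0, n):
        for _ in range(z):
            # Assumed rule: attach with probability proportional to degree.
            total = sum(degree)
            r = rng.uniform(0, total)
            acc, target = 0.0, 0
            for node, d in enumerate(degree):
                acc += d
                if r <= acc:
                    target = node
                    break
            edges.append((new, target))  # assumed orientation: new -> target
            degree[target] += 1
        degree.append(z)
    return edges
```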
Since the estimated β's standard error decreases as the time-series length grows (the longer the time series, the lower the standard error of β), we varied only the time-series length. (Captions of Figures 2, 3 and 4: comparison between SVAR and VAR; the simulations were performed on a scale-free-like network composed of 100 nodes and 100 edges; VAR was performed only for experiments with time-series lengths of at least 110; TP: true positives.) Analyzing Figures 2, 3 and 4, we obtained the following results:

1. The capacity of SVAR to identify true positives even when the number of samples is lower than the number of genes is satisfactory. This was found by comparing the performance of SVAR, with time-series length equal to 50, against VAR, with time-series length equal to 110. In this case, SVAR also identified more true positive edges than VAR (the number of true positives inferred by SVAR is about 75% higher than that inferred by VAR).

2. When the number of samples exceeds the number of genes, SVAR is, in general, slightly more powerful than VAR, since the number of parameters to be estimated remains large relative to the number of samples.

3. When m ≫ n, where m = T - 1 and n is the number of genes, there is no statistical difference between SVAR and VAR. This could be explained, in this context, by the fact that the best λ, which minimizes the GCV (Generalized Cross-Validation) criterion, is near zero.
When λ = 0, the SVAR model reduces to the traditional VAR model. We also analyzed the expression profile of a set of 94 cell cycle-regulated genes represented by 48 microarrays, i.e., the number of genes n is approximately two times larger than the time-series length T. Figure 5 shows the genes that display any connectivity under a false discovery rate (FDR) of 5% (q-value < 0.05); genes with no connectivity were excluded. (Caption of Figure 5: HeLa gene expression regulatory network inferred from HeLa cell cycle gene expression data; the arrows represent Granger-causal associations with q-value < 0.05; genes with no Granger-causal links identified by SVAR were omitted.) The SVAR method reveals at least three gene regulatory networks related to cell transformation and tumor progression, namely the NFκB, p53, and STAT3 transcriptional modules [26-28], in agreement with already well-known cell cycle-regulated pathways in several cellular models and in HeLa cells themselves. It is important to highlight that the out-degree (number of edges with the gene as their initial vertex) of genes encoding proteins that act as well-known transcription factors (p53, NFκB and STAT3) or of important genes for cell proliferation control (p21, bai1, tsp1, a20) is higher than that of other genes. Similarly, the in-degree (number of edges with the gene as their terminal vertex) of the FGFs (fgf18, fgf20, fgfr4) and of genes involved in cell cycle regulation and apoptosis (cyclin d1, c-myc, bcl-2, noxa, fas) is also higher, demonstrating the association between these genes' key role in cell homeostasis and their in-degree and/or out-degree values [29]. NFκB is an inducible transcription factor complex formed by heterodimeric association between the relA and c-rel gene products, whose transcriptional activity is regulated by interaction with the inhibitory IκBα protein.
It has already been demonstrated that activation of NFκB controls cell-cycle progression in HeLa cells by several mechanisms [30]. The SVAR method was not able to identify the relationship between NFκB and its natural targets, such as the A20, iap, bclx and iκBα genes. However, SVAR shows that NFκB directly regulates several fibroblast growth factors (FGFs) and the c-Myc protein, which are key regulators of cell proliferation. Indeed, the majority of NFκB transcriptional activity appears to be mediated by interaction with FGF-related proteins, at the upstream and/or downstream levels. These results support the hypothesis that some of the multiple aspects of tumorigenesis in HeLa cells may be related to NFκB-mediated transcription of FGF-related genes. As discussed above, the positive NFκB regulation of several well-known natural targets was not detected by SVAR. However, these regulatory processes appear to be present, even in the absence of an evident direct link with NFκB, since all of these transcriptionally regulated genes form a highly related network (Figure 5). A20, a zinc finger protein which is transcriptionally regulated by NFκB in several cell types [31], appears to orchestrate the gene relationships in this network, activating the transcription of well-known anti-apoptotic genes such as iap, bclx and junB (NFκB target genes themselves [32-34]) towards transduction of the proliferative transcriptional activity of NFκB. The A20 protein is also involved in NFκB regulation, blocking its activity in a negative feedback mechanism [35]. Although this control is operated at the post-transcriptional level, results obtained using the SVAR method suggest that this process could also be controlled by A20-mediated positive regulation of iκBα (Figure 5).
These results confirm the reliability of SVAR for predicting gene relationships, since iκBα, the natural NFκB inhibitor, has a key role in controlling the NFκB-regulated cell cycle events in HeLa cells, as reported in the literature [30]. Moreover, SVAR showed that this role of iκBα in HeLa cell cycle progression also appears to be regulated through p53-mediated activation of iκBα (Figure 5), in agreement with data reported in the literature [36]. In summary, these data support the hypothesis that iκBα may be involved in attenuation of tumor progression and may be responsible for the mildly invasive phenotype displayed by HeLa cells. The p53 protein is a transcription factor that binds to the enhancer/promoter elements of downstream target genes, regulating their transcription and initiating the cellular programs that account for most of its tumor-suppressor functions, namely cell cycle arrest, inhibition of angiogenesis and metastasis, apoptosis induction and DNA repair [37]. The SVAR method was capable of identifying the interactions of several members of the p53 network. IGF-BP3 (IGF-binding protein 3), an inhibitor of insulin-like growth factor, and NOXA, a BCL-2 homology domain 3-only (BH3-only) protein, are transcriptionally activated by p53 during activation of apoptosis in several cell types [38,39]. Our in silico results showed that this regulation is also present in HeLa cells. Although the fas gene is not a known target of p53, its activation could be mediated by other p53 targets, leading to an increased apoptosis rate and to cell proliferation control. On the other hand, SVAR showed that the bai-1 and tsp-1 genes are induced by the p53 gene product in HeLa cells. It is known that the bai-1 gene, which contains at least one functional p53-binding site within an intron, codes for a member of the secretin receptor family, and its product is postulated to be an inhibitor of angiogenesis and a tumor growth suppressor [40].
Similarly, the tsp-1 gene codes for an adhesive glycoprotein that mediates cell-to-cell and cell-to-matrix interactions and has been shown to play a role in platelet aggregation, angiogenesis, and tumorigenesis [41]. Taken together, the p53-mediated upregulation of the bai-1 and tsp-1 genes may be a mechanism to evade cell migration and angiogenesis, features which are commonly absent in HeLa cells. We noticed that classical p53 targets, such as gadd45 and p21, do not appear to be directly regulated by p53 in the SVAR analysis (Figure 5). This may be explained by the fact that the time-series length is not large enough. It is important to note that in our previous study applying DVAR (Dynamic Vector AutoRegressive modeling) [42], it was possible to identify these connectivities. The observed p53-independent transcriptional regulation of the p21 gene (Figure 5) appears to be unrelated to cell cycle arrest, as discussed below. The STAT3 protein is a member of the STAT protein family. In response to cytokines and growth factors, it forms homo- or heterodimers with other STAT proteins, and the complex translocates to the nucleus, where it acts as a transcriptional activator. STATs mediate the cell response to different stimuli, playing a key role in several cellular processes, such as cell growth and apoptosis [43]. As shown using the SVAR method (Figure 5), STAT3 regulates the expression of the cell cycle positive regulator Cyclin D1 and of the anti-apoptotic protein Bcl-2. It has already been reported that constitutive activation of STAT3 correlates with cyclin d1 and bcl-2 gene overexpression, thus providing a novel prognostic marker for head and neck squamous cell carcinoma [44]. Moreover, repression of p53 gene expression by STAT3 is likely to have an important role in the development of tumors [45]. This evidence points to an involvement of STAT3 in cell cycle progression and transformation of HeLa cells.
Our in silico analysis also highlighted an unexpected behavior of the p21 gene, independent of p53 regulation. This alternative regulation has already been described for other cell types [46], but remains unclear in the case of HeLa cells. Although p21 is not a transcription factor, it is conceivable that indirect effects of p21 on the cellular expression of well-known cell cycle progression promoters, such as Cyclin D1, and apoptosis inhibitors, such as Bcl-2, may mediate some unexpected functions in HeLa cells. These functions appear to be unrelated to growth inhibition and cell cycle arrest, supporting the hypothesis that p53-independent regulation of p21 could be one of the signaling pathways activated during tumorigenesis and/or tumor progression in HeLa cells as well as in other cancer types [47,48]. Future efforts directed at evaluating this hypothesis include gene transfection of p21 mutants lacking the p53- and STAT3-binding sites, with subsequent analysis of the expression of the newly identified p21 target genes and of changes in HeLa cell phenotype and tumorigenicity. It is interesting that, even using a small dataset, the SVAR method allowed identification of actual regulations, as detailed above, illustrating the power of this technique. In general, the methods reported in the literature are not based on a statistical test, owing to difficulties generated by the fact that the number of samples is lower than the number of parameters to be estimated; consequently, they do not provide an objective control for false positives. The main advantage of the sparse vector autoregressive model (SVAR), compared with other connectivity models, is that it models a Granger-causal network with a number of genes that is larger than the number of samples; in other words, it is useful for modeling "large" networks with a statistical test for each one of the edges.
To the best of our knowledge, the approach taken here is the only one that combines these two advantages, since other methods which model "large" networks usually do not present statistical tests for the edges. Moreover, "large" gene-gene networks are commonly dealt with by pairwise comparisons. Using SVAR, it is possible to infer partial Granger causalities, resulting in a lower number of spurious edges than pairwise comparisons. Since SVAR deals with the multivariate case, the definition of Granger causality becomes complex, because of the existence of multi-step connectivities. In the present report, identification of Granger causality using the SVAR model is related to the definition of partial Granger causality given by [49]. By the definition of Granger causality [49], the SVAR model allows analysis of networks containing cycles. Therefore, there is no a priori assumption that the network must be a DAG (Directed Acyclic Graph), as assumed by other methods [5,9]. As a consequence, the SVAR method can be used to model networks with cycles. This is of extreme importance, since it is well known that genetic regulatory networks maintain their control and balance by a number of positive/negative feedback loops. There is a class of Bayesian networks with MCMC algorithms which may integrate expression data with multiple sources of information [8]. The advantage of integrating multiple sources of information, i.e., adding a priori knowledge, is speculative. Integration of a priori knowledge may be interesting for recovering more realistic connections and for increasing the power of the test. However, it may also lead to a bias, depending on the kind of information assumed in the model. At the current stage of development of SVAR, integration of different sources of information is not possible, since only gene expression levels are used to estimate Granger causality. Further studies may focus on integrating biological information to improve the power of SVAR.
The experimental comparison between SVAR and other methods is difficult, since SVAR is the only one which provides a statistical test for gene-gene networks comprising a notion of Granger causality. The Graphical Gaussian Model reported by Schäfer and Strimmer, which applies partial correlations in the (n > m) context, is the closest to SVAR in presenting a statistical test; however, the edges obtained by this approach represent instantaneous associations (correlations), failing to provide a notion of Granger causality, i.e., the edges have no direction. Differently from score functions, which pose difficult interpretations or require subjective choices of the threshold used to determine whether there is an edge, a statistical test is an objective way to determine whether there is an edge and what the rate of type I error is. In this work, we considered only lags of first order, but it is relatively straightforward to generalize this method to analyze SVAR models of order higher than one. However, this issue depends on the number of parameters to be estimated and on the time-series length. The complexity of the proposed inference is linear in the number of genes, since only one regression is performed for each gene. There are other approaches for variable selection based on stepwise methods. Unfortunately, these methods are not consistent when n > m [50], i.e., even with increasing sample size (T → ∞), there is no guarantee that the set of non-zero coefficients is the correct one. This result does not change even if all subsets of variables are explored. In contrast to LASSO, one may choose to use other penalized regressions, such as the more popular Ridge [51] or the non-negative Garrote [52]. Ridge does not set variables to zero, resulting in models with difficult interpretations. Comparing LASSO to the non-negative Garrote, the latter is worse than LASSO when multicollinearity is present in the data [23].
Therefore, LASSO seems to be the most appropriate for identifying gene regulatory networks. Another advantage of SVAR is the fact that it does not require model pre-specification; therefore, the method is unbiased and makes it possible to infer new connections, not just to quantify the dependence level measured along already known edges. Furthermore, it is not necessary to discretize gene expression values into Boolean variables, as in Boolean network models [17]; therefore, there is no loss of information. In the SVAR approach, to render the application of statistical tests feasible when n > m, we exploit the fact that metabolic networks are sparsely connected as part of the solution. Therefore, the number of variables to be analyzed decreases significantly, leaving only variables whose estimated coefficients are large enough to be tested and rejected as being different from zero. In summary, here we introduce the SVAR method to model gene regulatory networks in the present context, where the number of samples is often lower than the number of genes. With this method, it is possible to naturally model networks with feedback loops and to infer partial Granger causalities without any a priori information, which minimizes the number of spurious causalities. Moreover, we present a statistical test to control the false discovery rate, a task which was not previously possible in several other proposed gene regulatory network models.

Methods

Firstly, we describe the classical vector autoregressive model (VAR) and then explore the feasibility of using LASSO regression as part of a technique for variable selection, by introducing the sparse vector autoregressive model (SVAR). The statistical test for the edges is also presented, followed by the control of false positives. To simplify the description of these methods, we describe both the SVAR and the VAR of order one, but they can easily be generalized to higher orders.
After this description, we present the algorithm used to construct artificial regulatory networks based on a scale-free topology, since metabolic networks have been described as having power-law distributions of the nodes' degrees [25]. We use these artificial networks to evaluate the performance of our proposed model. Finally, the SVAR model is applied to actual biological data.

Statistical background

Granger (1969) [53] defined a concept of causality which is easy to deal with in the context of VAR models; therefore, it has become quite popular in recent years [54]. The idea is that a cause cannot come after the effect. Thus, in the case of VAR(1) (VAR of order one) [54], if a gene i at time (t - 1) affects another gene j at time t, the former should help to predict the target gene. A first-order VAR model is described as

y_t = A_1 y_{t-1} + ε_t,  t = 2,..., T,     (1)

where T is the time-series length (number of microarrays), y_t is an n × 1 vector of gene expression (where n is the number of genes), the normally distributed disturbance ε_t is an n × 1 vector with mean zero and covariance matrix Ω, and A_1 is an n × n matrix of parameters (connectivities). The disturbances ε_t are serially uncorrelated, but may be contemporaneously correlated; thus E(ε_t ε_t') = Ω, where Ω is an n × n matrix. It is important to highlight that, in this multivariate model, each gene may depend not only on its own past values but also on the past values of the other genes.
Thus, if y_it denotes the i-th element of y_t, the i-th row yields

y_it = a_i1 y_1,t-1 + a_i2 y_2,t-1 + ... + a_in y_n,t-1 + ε_it,  i = 1,..., n.     (2)

This model can be estimated by Ordinary Least Squares (OLS), simply by regressing each variable on the lags of itself and of the other variables. Therefore, we can re-write it as

Z = Xβ + E,  E_i ~ N(0, Ω),  i = 1,..., n,

where E_i follows a multivariate Gaussian distribution N(0, Ω), with zero mean 0_(n×1) and covariance matrix Ω. We define m = T - 1 and introduce the notation

Z_(m×n) = [y_2,..., y_t,..., y_T]' = [z_1,..., z_i,..., z_n],
β_(n×n) = A_1' = [β_1,..., β_n],
X_(m×n) = [y_1,..., y_t,..., y_{T-1}]',
E_(m×n) = [ε_2,..., ε_t,..., ε_T]'.

Given the explicit solution of the OLS estimator, one can carry out a separate regression analysis for each gene. In other words, it is possible to separately estimate each column β_i of β as

β̂_i = (X'X)^{-1} X' z_i,  i = 1,..., n,

where z_i is the i-th column of Z. In order to specify the distribution of the j-th element of β̂_i, let us denote the j-th diagonal element of (X'X)^{-1} by w_jj.
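The per-gene OLS step above is short enough to sketch directly. The following is a minimal NumPy illustration (not the authors' R implementation): it simulates a small, stable VAR(1) process with an assumed 3-gene connectivity matrix and recovers A_1 by regressing each gene on the lagged expression of all genes; the sizes, coefficients and seed are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: n = 3 genes, T = 200 time points, stable VAR(1) dynamics.
n, T = 3, 200
A = np.array([[0.5, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.8, 0.0]])          # true connectivity matrix A_1

y = np.zeros((T, n))
y[0] = rng.normal(size=n)                # y_1 drawn from N(0, I)
for t in range(1, T):                    # y_t = A_1 y_{t-1} + eps_t
    y[t] = A @ y[t - 1] + rng.normal(size=n)

# Stack the regression Z = X beta + E: Z holds y_2..y_T, X holds y_1..y_{T-1}.
Z, X = y[1:], y[:-1]                     # both are m x n with m = T - 1

# One OLS regression per gene: beta_i = (X'X)^{-1} X' z_i for each column z_i
# (solved here for all columns at once); beta estimates A_1'.
beta = np.linalg.solve(X.T @ X, X.T @ Z)
A_hat = beta.T                           # estimate of A_1
```

With m = 199 samples and only 3 genes, A_hat lands close to A; the point of SVAR is precisely that this plain OLS step breaks down once n exceeds m.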
Then, under the null hypothesis, the statistical test is

β̂_ij / sqrt(σ̂² w_jj) ~ t(m - n),  i = 1,..., n,

where t(m - n) denotes a t distribution with (m - n) degrees of freedom and

σ̂² = (Z - Xβ̂)'(Z - Xβ̂) / (m - n) = E'E / (m - n).

It is important to point out that these definitions work only if m > n. Additionally, it is also well known that OLS does not ensure sparse connectivity patterns for A. To overcome these problems, in the next section we introduce the sparse vector autoregressive model.

Sparse Vector AutoRegressive (SVAR)

Consider Z, β, X and E as described above. According to [55-58], the LASSO (Least Absolute Shrinkage and Selection Operator) regression [23] can be carried out by iterative application of

β̂_i^{k+1} = (X'X + λ² D(β̂_i^k))^{-1} X' z_i,  i = 1,..., n and k = 1,..., N_it,

where N_it is the number of iterations (we set N_it = 30 in our analysis), λ is the regularization parameter, which determines the amount of penalization enforced, and D(β̂_i^k) is a diagonal matrix defined by

D(θ) = diag(p'_λ(θ_k)/θ_k),  k = 1,..., n,  with  p'_λ(θ) = λ sign(θ).

At each iteration, the regression coefficients of each gene
with all others are weighted according to their current size, and several coefficients are successively down-weighted and set to zero. The covariance matrix of the estimators may then be approximated by

(X'X + λ² D(β̂))^{-1} X'X (X'X + λ² D(β̂))^{-1} σ̂²,

where σ̂² is an estimate of the error variance,

σ̂² = (Z - Xβ̂)'(Z - Xβ̂) / (m - n - c) = E'E / (m - n - c),

and c is the number of variables β set to zero by the LASSO regression. When this σ̂² replaces the OLS estimate, the statistical test under the null hypothesis becomes

β̂_ij / sqrt(σ̂² w_jj) ~ t(m - n - c),  i = 1,..., n,

where t(m - n - c) denotes a t distribution with (m - n - c) degrees of freedom and w_jj is the j-th diagonal element of (X'X + λ² D(β̂))^{-1} X'X (X'X + λ² D(β̂))^{-1}. It is important to emphasize that the number of variables set to zero in this method depends on the value of the regularization parameter λ, with higher values implying the selection of fewer variables. In our work, the value of the tuning parameter λ was selected as the value that minimizes the generalized cross-validation criterion (GCV).
Let q(λ) = tr{X(X'X + λ² D(β̂))^{-1} X'} and let rss(λ) be the residual sum of squares for the constrained fit with constraint λ. The generalized cross-validation statistic can then be written as

GCV(λ) = (1/m) rss(λ) / {1 - q(λ)/m}².

The minimum value of the GCV was found by the L-BFGS-B algorithm [59], as implemented in the function optim of the R statistical environment. For more details on the statistical properties of LASSO in autoregressive models, see [60].

Controlling the number of false positives

To control the type I error in cases of multiple tests over hundreds of edges, we applied the FDR method [24]. Firstly, assume that n hypotheses {H_1^0, H_2^0,..., H_n^0} are tested, where H_j^0 is the null hypothesis of the j-th test, with corresponding p-values {p(1), p(2),..., p(n)}; n_0 is the number of true null hypotheses and the other (n - n_0) hypotheses are false. Let p(1) ≤ p(2) ≤ ... ≤ p(n) be the ordered observed p-values of the tests.
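The iterative-ridge update above can be sketched for a single response vector z_i. This is an illustrative NumPy re-implementation, not the authors' R code: it applies the stated rule β^{k+1} = (X'X + λ² D(β^k))^{-1} X'z with D(β) = diag(λ sign(β_j)/β_j) = diag(λ/|β_j|), clamping coefficients whose magnitude falls below a small tolerance to exactly zero. The data, the fixed λ and the tolerance are arbitrary example choices; in the paper λ is instead chosen by minimizing the GCV, which a grid search over λ could emulate here.

```python
import numpy as np

def lasso_iterative_ridge(X, z, lam, n_iter=30, tol=1e-6):
    """LASSO via repeated ridge solves on the active set:
    beta^{k+1} = (X'X + lam^2 * D(beta^k))^{-1} X' z,
    D(beta) = diag(lam * sign(beta_j) / beta_j) = diag(lam / |beta_j|).
    Coefficients whose magnitude drops below `tol` are fixed at zero."""
    p = X.shape[1]
    # Lightly regularized start, close to the OLS solution.
    beta = np.linalg.solve(X.T @ X + 1e-3 * np.eye(p), X.T @ z)
    for _ in range(n_iter):
        active = np.abs(beta) > tol
        beta[~active] = 0.0
        if not active.any():
            break
        Xa = X[:, active]
        D = np.diag(lam / np.abs(beta[active]))
        beta[active] = np.linalg.solve(Xa.T @ Xa + lam**2 * D, Xa.T @ z)
    beta[np.abs(beta) <= tol] = 0.0      # final clamp of stragglers
    return beta

# Example: 8 candidate predictors, only two truly non-zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
true_beta = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
z = X @ true_beta + 0.1 * rng.normal(size=60)
beta_hat = lasso_iterative_ridge(X, z, lam=2.0)
```

As the down-weighting compounds across iterations, the six null coefficients collapse to exactly zero, while the two true coefficients survive with a mild shrinkage toward zero, which is the behavior the SVAR pruning stage relies on.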
Define

l = max{ i : p(i) ≤ (i/n) q }

and reject H_(1)^0,..., H_(l)^0. If no such i exists, no null hypothesis is rejected. The FDR is defined as the expected proportion (q) of incorrectly rejected null hypotheses (type I errors) in the list of all rejected hypotheses.

Artificial regulatory networks

The observation that many networks in nature have a power-law degree distribution was first addressed by [61]. Their random graph model, called the scale-free graph, describes how these networks grow and expand, based on two generic mechanisms common to several networks in the real world. Several networks in the real world start from a small number of nodes and grow by continuous addition of new nodes; therefore, the number of nodes increases throughout the lifetime of the network. When a new node is added to the network, its attachment is preferential, i.e., the probability that the new node connects to the existing nodes is not uniform as in a random graph [62]. There is a higher probability that it will be linked to a node that already has a large number of connections, resulting in a power-law degree distribution. In other words, the probability P(v) that a node in the network is connected to v other nodes decays as a power law. Therefore, the degree distribution has a power-law tail P(v) ~ v^{-γ}, where γ is a scalar representing the rate of decay of the degree distribution.
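The step-up procedure above is a few lines of code. The sketch below is a plain NumPy implementation of the Benjamini-Hochberg rule as stated: sort the p-values, find the largest i with p(i) ≤ (i/n)q, and reject the first i hypotheses in sorted order (none if no such i exists); the example p-values are invented for illustration.

```python
import numpy as np

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR procedure.
    Returns a boolean array marking which hypotheses are rejected."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # indices of sorted p-values
    thresh = q * np.arange(1, n + 1) / n         # (i/n) * q for i = 1..n
    below = p[order] <= thresh
    reject = np.zeros(n, dtype=bool)
    if below.any():
        l = np.max(np.nonzero(below)[0])         # largest i meeting the bound
        reject[order[: l + 1]] = True            # reject H_(1)..H_(l)
    return reject

flags = bh_reject([0.001, 0.008, 0.04, 0.2, 0.9], q=0.05)
```

Note that p(3) = 0.04 exceeds its threshold (3/5)·0.05 = 0.03, but the step-up rule still rejects every hypothesis ranked at or below the largest i that meets its bound, here the first two.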
In our case, the nodes represent the genes and the connections are the Granger-causal relationships. This scale-free graph can be constructed as follows:

1. Growth: Starting with a small number z_0 of genes, at each iteration a new gene with z (≤ z_0) edges is added. This new gene is connected to the genes already present in the network by preferential attachment.

2. Preferential attachment: The gene to which the new gene will connect is selected in a non-deterministic fashion. Assume that the probability π that the new gene will be connected to gene i depends on the degree d_i of that gene, which is already in the network; therefore,

π(d_i) = d_i / Σ_j d_j.

Since we are interested in causal relationships, we need to define a direction for each edge; therefore, there is a third step in our graph construction. In our simulations, the probability of adding an edge from i to j is the same as from j to i, i.e., 0.5. After T_step iterations, the constructed random scale-free-like network is composed of n = T_step + z_0 genes and z·T_step + z_edges Granger-causal relationships, where z_edges is the initial number of edges. The graph constructed using the algorithm described above may be represented by its adjacency matrix A: in our simulations, where there is an edge from gene i to gene j we set A[i, j] = 0.8, and 0 otherwise. This adjacency matrix A corresponds to the matrix A_1 described in equation 1. The time-series lag was set to one in our simulations; therefore, m = T - 1. To construct the corresponding time series, first generate normally distributed random numbers with zero mean and unit variance for each gene i = 1,..., n at the time step t = 1: y_i,1 = ε_i.
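The construction above can be sketched end to end. The following NumPy illustration (again, not the authors' R program) grows a scale-free-like directed graph by preferential attachment, using the stated starting condition of z_0 = 2 genes joined by z_edges = 2 edges (one in each direction), adding z = 1 edge per new gene with direction chosen with probability 0.5 each way and edge weight 0.8, and then simulates a VAR(1) time series from the resulting adjacency matrix; the seeds and series length are arbitrary example choices.

```python
import numpy as np

def scale_free_adjacency(n=100, z0=2, z=1, weight=0.8, seed=0):
    """Grow a directed scale-free-like graph by preferential attachment
    and return its weighted adjacency matrix A (A[i, j] != 0 means an
    edge from gene i to gene j)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    A[0, 1] = A[1, 0] = weight       # initial pair: one edge each way
    degree = np.zeros(n)
    degree[:z0] = 2                  # each starting gene has in+out degree 2
    for new in range(z0, n):
        for _ in range(z):
            probs = degree[:new] / degree[:new].sum()
            target = rng.choice(new, p=probs)   # preferential attachment
            if rng.random() < 0.5:              # direction chosen with prob 0.5
                A[new, target] = weight
            else:
                A[target, new] = weight
            degree[new] += 1
            degree[target] += 1
    return A

def simulate(A, T=50, seed=1):
    """Generate a length-T time series from A via y_t = A y_{t-1} + eps_t,
    starting from y_1 ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    y = np.zeros((T, n))
    y[0] = rng.normal(size=n)
    for t in range(1, T):
        y[t] = A @ y[t - 1] + rng.normal(size=n)
    return y

A = scale_free_adjacency()
y = simulate(A, T=50)
```

With z_0 = 2 initial edges plus one edge per each of the 98 added genes, the graph has exactly 100 edges; since every new edge touches a brand-new node, the only directed cycle is the initial pair, so the spectral radius is 0.8 and the simulated dynamics stay stable.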
Then, use equation 2 to generate the time series for each gene i = 1,..., n and time step t = 2,..., T. We implemented our program using R [63], a statistical computing environment. Computation was conducted on a Pentium IV CPU at 3.06 GHz with 2.5 GB of RAM.

Application to real data

We applied the SVAR approach to the HeLa cell cycle gene expression data collected by Whitfield et al. (2002) [64]. Gene expression was measured using microarrays manufactured in the Stanford Microarray Facility. The data used contain 48 time points distributed at one-hour intervals, with one reading at each time point, synchronized by double thymidine block (described as Experiment 3 on the web page [65]). The 94 genes were selected from the actual biological microarray data on the basis of their association with cell cycle regulation and tumor development. The HeLa cell cycle lasts 16 hours. These data were downloaded from [65].

Authors' contributions

AF made substantial contributions to the conception and design of the study and to the analysis and interpretation of data. JRS made substantial contributions to the analysis and interpretation of the mathematical results. HMGM made substantial contributions to the analysis and interpretation of the biological data. AF, JRS and HMGM were involved in drafting the manuscript. RY and SM discussed the mathematical results. MCS discussed the biological results. CEF directed the work. RY, SM, MCS and CEF critically revised the manuscript for important intellectual content. All authors read and approved the final manuscript. This research was supported by JICA, FAPESP, CAPES, CNPq, FINEP and PRP-USP.

References

• Gardner T, di Bernardo D, Lorenz D, Collins J. Inferring genetic networks and identifying compound mode of action via expression profiling. Science. 2003;301:102–105. doi:10.1126/science.1081900.
• di Bernardo D, Thompson M, Gardner T, Chobot S, Eastwood E, Wojtovich A, Elliott S, Schaus S, Collins J.
Chemogenomic profiling on a genome-wide scale using reverse-engineered gene networks. Nature Biotechnology. 2005;23:377–383. doi: 10.1038/nbt1075. [PubMed] [Cross Ref] • Faith J, Hayete B, Thaden J, Mogno I, Wierzbowski J, Cottarel G, Kasif S, Collins J, Gardner T. Large-scale mapping and validation of Escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biology. 2007;5:e8. doi: 10.1371/journal.pbio.0050008. [PMC free article] [PubMed] [Cross Ref] • Imoto S, Goto T, Miyano S. Estimation of genetic networks and functional structures between genes by using Bayesian networks and nonparametric regression. Pac Symp Biocomput. 2002:175–186. • Tamada Y, Kim S, Bannai H, Imoto S, Tashiro K, Kuhara S, Miyano S. Estimating gene networks from gene expression data by combining Bayesian network model with promoter element detection. Bioinformatics. 2003;19:227–236. doi: 10.1093/bioinformatics/btg1082. [PubMed] [Cross Ref] • Friedman N. Inferring cellular networks using probabilistic graphical models. Science. 2004;303:799–805. doi: 10.1126/science.1094068. [PubMed] [Cross Ref] • Dojer N, Gambin A, Mizera A, Wilczynski B, Tiuryn J. Applying dynamic Bayesian networks to perturbed gene expression data. BMC Bioinformatics. 2006;7:249. doi: 10.1186/1471-2105-7-249. [PMC free article] [PubMed] [Cross Ref] • Werhli A, Husmeier D. Reconstructing gene regulatory networks with Bayesian networks by combining expression data with multiple sources of prior knowledge. Stat Appl Genet Mol Biol. 2007;6:15. • Xiong M, Li J, Fang X. Identification of genetic networks. Genetics. 2004;166:1037–1052. doi: 10.1534/genetics.166.2.1037. [PMC free article] [PubMed] [Cross Ref] • Akutsu T, Miyano S, Kuhara S. Algorithms for identifying Boolean networks and related biological networks based on matrix multiplication and fingerprint function. J Comput Biol. 2000;7:331–343. doi: 10.1089/106652700750050817. [PubMed] [Cross Ref] • Shmulevich I, Dougherty E, Zhang W.
Gene perturbation and intervention in probabilistic Boolean networks. Bioinformatics. 2002;18:1319–1331. doi: 10.1093/bioinformatics/18.10.1319. [PubMed] [Cross Ref] • Pal R, Datta A, Bittner M, Dougherty E. Intervention in context-sensitive probabilistic Boolean networks. Bioinformatics. 2005;21:1211–1218. doi: 10.1093/bioinformatics/bti131. [PubMed] [Cross Ref] • Schäfer J, Strimmer K. An empirical Bayes approach to inferring large-scale gene association networks. Bioinformatics. 2005;21:754–764. doi: 10.1093/bioinformatics/bti062. [PubMed] [Cross Ref] • Woolf P, Wang Y. A fuzzy logic approach to analyzing gene expression data. Physiol Genomics. 2000;3:9–15. [PubMed] • Mestl T, Plahte E, Omholt S. A mathematical framework for describing and analyzing gene regulatory networks. J Theor Biol. 1995;176:291–300. doi: 10.1006/jtbi.1995.0199. [PubMed] [Cross Ref] • Vapnik V. The nature of statistical learning theory. New York: Springer; 1995. • Barrera J, Cesar RJ, Martins DJ, Merino E, Vêncio R, Leonardi F, Yamamoto M, Pereira C, del Portillo H. A new annotation tool for malaria based on inference of probabilistic genetic networks. Critical Assessment of Microarray Data Analysis: 10–12 November 2004; Durham. 2004. pp. 36–40. • Segal E, Shapira M, Regev A, Pe'er D, Botstein D, Koller D, Friedman N. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nat Genet. 2003;34:166–176. [PubMed] • Xu X, Wang L, Ding D. Learning module networks from genome-wide location and expression data. FEBS Lett. 2004;578:297–304. doi: 10.1016/j.febslet.2004.11.019. [PubMed] [Cross Ref] • Yamaguchi R, Yoshida R, Imoto S, Higuchi T, Miyano S. Finding module-based gene networks in time-course gene expression data with state space models. IEEE Signal Processing Magazine. 2007. • Valdes-Sosa P, Sanchez-Bornot J, Lage-Castellanos A, Vega-Hernandez M, Bosch-Bayard J, Melie-Garcia L, Canales-Rodriguez E.
Estimating brain functional connectivity with sparse multivariate autoregression. Phil Trans R Soc B. 2005;360:969–981. doi: 10.1098/rstb.2005.1654. [PMC free article] [PubMed] [Cross Ref] • Eichler M. A graphical approach for evaluating effective connectivity in neural systems. Philos Trans R Soc Lond B Biol Sci. 2005;360:953–967. doi: 10.1098/rstb.2005.1641. [PMC free article] [PubMed] [Cross Ref] • Tibshirani R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society Series B. 1996;58:267–288. • Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Statist Soc Ser B. 1995;57:289–300. • Jeong H, Tombor B, Albert R, Oltvai Z, Barabasi A. The large-scale organization of metabolic networks. Nature. 2000;407:651–654. [PubMed] • Inoue J, Gohda J, Akiyama T, Semba K. NF-kappaB activation in development and progression of cancer. Cancer Sci. 2007;98:268–274. doi: 10.1111/j.1349-7006.2007.00389.x. [PubMed] [Cross Ref] • Soussi T. p53 alterations in human cancer: more questions than answers. Oncogene. 2007;26:2145–2156. doi: 10.1038/sj.onc.1210280. [PubMed] [Cross Ref] • Yu H, Kortylewski M, Pardoll D. Crosstalk between cancer and immune cells: role of STAT3 in the tumour microenvironment. Nat Rev Immunol. 2007;7:41–51. doi: 10.1038/nri1995. [PubMed] [Cross Ref] • Albert R, Jeong H, Barabasi A. Error and attack tolerance of complex networks. Nature. 2000;406:378–385. doi: 10.1038/35019019. [PubMed] [Cross Ref] • Chen F, Castranova V, Shi X. New insights into the role of nuclear factor-kappaB in cell growth regulation. Am J Pathol. 2001;159:387–397. [PMC free article] [PubMed] • Krikos A, Laherty C, Dixit V. Transcriptional activation of the tumor necrosis factor alpha-inducible zinc finger protein, A20, is mediated by kappa B elements. J Biol Chem. 1992;267:17971–17976. • You M, Ku P, Hrdlickova R, Bose HJ.
ch-IAP1, a member of the inhibitor-of-apoptosis protein family, is a mediator of the antiapoptotic activity of the v-Rel oncoprotein. Mol Cell Biol. 1997;17:7328–7341. [PMC free article] [PubMed] • Chen M, Ghosh G. Regulation of DNA binding by Rel/NF-kappaB transcription factors: structural views. Oncogene. 1999;18:6845–6852. doi: 10.1038/sj.onc.1203224. [PubMed] [Cross Ref] • Brown R, Ades I, Nordan R. An acute phase response factor/NF-kappa B site downstream of the junB gene that mediates responsiveness to interleukin-6 in a murine plasmacytoma. J Biol Chem. 1995;270:31129–31135. doi: 10.1074/jbc.270.52.31129. [PubMed] [Cross Ref] • Storz P, Doppler H, Ferran C, Grey S, Toker A. Functional dichotomy of A20 in apoptotic and necrotic cell death. Biochem J. 2005;387:47–55. doi: 10.1042/BJ20041443. [PMC free article] [PubMed] [Cross Ref] • Dreyfus D, Nagasawa M, Gelfand E, Ghoda L. Modulation of p53 activity by IkappaBalpha: evidence suggesting a common phylogeny between NF-kappaB and p53 transcription factors. BMC Immunol. 2005;6:12. doi: 10.1186/1471-2172-6-12. [PMC free article] [PubMed] [Cross Ref] • Jin S, Levine A. The p53 functional circuit. J Cell Sci. 2001;114:4139–4140. [PubMed] • Buckbinder L, Talbott R, Velasco-Miguel S, Takenaka I, Faha B, Seizinger B, Kley N. Induction of the growth inhibitor IGF-binding protein 3 by p53. Nature. 1995;377:646–649. doi: 10.1038/377646a0. [PubMed] [Cross Ref] • Yakovlev A, Di Giovanni S, Wang G, Liu W, Stoica B, Faden A. BOK and NOXA are essential mediators of p53-dependent apoptosis. J Biol Chem. 2004;279:28367–28374. doi: 10.1074/jbc.M313526200. [PubMed] [Cross Ref] • Fukushima Y, Oshika Y, Tsuchida T, Tokunaga T, Hatanaka H, Kijima H, Yamazaki H, Ueyama Y, Tamaoki N, Nakamura M. Brain-specific angiogenesis inhibitor 1 expression is inversely correlated with vascularity and distant metastasis of colorectal cancer. Int J Oncol. 1998;13:967–970. [PubMed] • Dameron K, Volpert O, Tainsky M, Bouck N.
Control of angiogenesis in fibroblasts by p53 regulation of thrombospondin-1. Science. 1994;265:1582–1584. doi: 10.1126/science.7521539. [PubMed] [Cross Ref] • Fujita A, Sato J, Garay-Malpartida H, Morettin P, Sogayar M, Ferreira C. Time-varying modeling of gene expression regulatory networks using the wavelet dynamic vector autoregressive method. Bioinformatics. 2007;23:1623–1630. doi: 10.1093/bioinformatics/btm151. [PubMed] [Cross Ref] • Jing N, Tweardy D. Targeting Stat3 in cancer therapy. Anticancer Drugs. 2005;16:601–607. doi: 10.1097/00001813-200507000-00002. [PubMed] [Cross Ref] • Masuda M, Suzui M, Yasumatu R, Nakashima T, Kuratomi Y, Azuma K, Tomita K, Komiyama S, Weinstein I. Constitutive activation of signal transducers and activators of transcription 3 correlates with cyclin D1 overexpression and may provide a novel prognostic marker in head and neck squamous cell carcinoma. Cancer Res. 2002;62:3351–3355. [PubMed] • Niu G, Wright K, Ma Y, Wright G, Huang M, Irby R, Briggs J, Karras J, Cress W, Pardoll D, Jove R, Chen J, Yu H. Role of Stat3 in regulating p53 expression and function. Mol Cell Biol. 2005;25:7432–7440. doi: 10.1128/MCB.25.17.7432-7440.2005. [PMC free article] [PubMed] [Cross Ref] • Roninson I. Oncogenic functions of tumour suppressor p21(Waf1/Cip1/Sdi1): association with cell senescence and tumour-promoting activities of stromal fibroblasts. Cancer Lett. 2002;179:1–14. doi: 10.1016/S0304-3835(01)00847-3. [PubMed] [Cross Ref] • Gartel A. Is p21 an oncogene? Mol Cancer Ther. 2006;5:1385–1386. doi: 10.1158/1535-7163.MCT-06-0163. [PubMed] [Cross Ref] • De la Cueva E, Garcia-Cao I, Herranz M, Lopez P, Garcia-Palencia P, Flores J, Serrano M, Fernandez-Piqueras J, Martin-Caballero J. Tumorigenic activity of p21Waf1/Cip1 in thymic lymphoma. Oncogene. 2006;25:4128–4132. doi: 10.1038/sj.onc.1209432. [PubMed] [Cross Ref] • Hosoya Y. Elimination of third-series effect and defining partial measures of causality. Journal of Time Series Analysis.
2001;22:537–554. doi: 10.1111/1467-9892.00240. [Cross Ref] • Hastie T, Tibshirani R, Friedman J. The elements of statistical learning: data mining, inference, and prediction. New York: Springer; 2001. • Hoerl A, Kennard R. Ridge regression: biased estimation for non-orthogonal problems. Technometrics. 1970;12:55–67. doi: 10.2307/1267351. [Cross Ref] • Breiman L. Better subset regression using the nonnegative garrote. Technometrics. 1995;37:373–384. doi: 10.2307/1269730. [Cross Ref] • Granger C. Investigating causal relations by econometric models and cross-spectral methods. Econometrica. 1969;37:424–438. doi: 10.2307/1912791. [Cross Ref] • Mukhopadhyay N, Chatterjee S. Causality and pathway search in microarray time series experiment. Bioinformatics. 2007;23:442–449. doi: 10.1093/bioinformatics/btl598. [PubMed] [Cross Ref] • Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc. 2001;96:1348–1360. doi: 10.1198/016214501753382273. [Cross Ref] • Fan J, Peng H. Nonconcave penalized likelihood with a diverging number of parameters. Ann Stat. 2004;32:928–961. doi: 10.1214/009053604000000256. [Cross Ref] • Hunter D. MM algorithms for generalized Bradley-Terry models. Ann Stat. 2004;32:384–406. doi: 10.1214/aos/1079120141. [Cross Ref] • Hunter D, Lange K. A tutorial on MM algorithms. Am Stat. 2004;58:30–37. • Byrd R, Lu P, Nocedal J, Zhu C. A limited memory algorithm for bound constrained optimization. SIAM J Scientific Computing. 1995;16:1190–1208. doi: 10.1137/0916069. [Cross Ref] • Wang H, Li G, Tsai C. Regression coefficient and autoregressive order shrinkage and selection via the lasso. J R Statist Soc B. 2007;69:63–78. • Barabási A, Albert R. Emergence of scaling in random networks. Science. 1999;286:509–512. [PubMed] • Erdös P, Rényi A. On random graphs. Publicationes Mathematicae. 1959;6:290–297.
• The R project for statistical computing http://www.r-project.org • Whitfield M, Sherlock G, Saldanha A, Murray J, Ball C, Alexander K, Matese J, Perou C, Hurt M, Brown P, Botstein D. Identification of genes periodically expressed in the human cell cycle and their expression in tumors. Molecular Biology of the Cell. 2002;13:1977–2000. doi: 10.1091/mbc.02-02-0030. [PMC free article] [PubMed] [Cross Ref] • Human cell cycle: HeLa cells http://genome-www.stanford.edu/Human-CellCycle/HeLa/ Articles from BMC Systems Biology are provided here courtesy of BioMed Central